(Replying to PARENT post)
So someone who invests $10 million has their investment "capped" at $1 billion. Lol. Basically unlimited unless the company grew to a FAANG-scale market value.
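(For the arithmetic, a minimal sketch, assuming the 100x multiple simply caps total returns on the original stake - the function and figures here are illustrative, not OpenAI's actual terms:)

    def capped_payout(investment, gross_return, cap_multiple=100):
        # Hypothetical model: the investor keeps returns only up to
        # cap_multiple times the original investment; anything beyond
        # that reverts to the nonprofit.
        return min(gross_return, investment * cap_multiple)

    # A $10M stake capped at 100x tops out at $1B, so the cap only
    # binds once returns exceed a billion dollars.
    print(capped_payout(10_000_000, 5_000_000_000))  # -> 1000000000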
(Replying to PARENT post)
I wonder if the profit cap multiple is going to end up being a significant signalling risk for them. A down-round is such a negative event in the valley that I can imagine an "increasing profit multiple" would have to be treated the same way.
One other question for the folks at OpenAI: How would equity grants work here? You get X fraction of an LP that gets capped at Y dollars of profit? Are the fractional partnerships transferable if earned into?
Would you folks think about publishing your docs?
(Replying to PARENT post)
Elon was irritated that he was behind in the AI intellectual property race, and this narrative created a perfect opportunity. Not surprised in the end. Tesla effectively did the same thing - "come help me save the planet" with overpriced cars. [Edit: Apparently Elon has left OpenAI, but I don't believe for a second that he will not participate in this LP.]
(Replying to PARENT post)
But without a more concrete and specific definition, "benefit of all" is meaningless. For most projects, one can construct a claim that it has the potential to benefit most or all of a large group of people at some point.
So, what does that commitment mean?
If an application benefits some people and harms others, is it unacceptable? What if it harms some people now in exchange for the promise of a larger benefit at some point in the future?
Must it benefit everyone it touches and harm no one? What if it harms no one but the vast majority of its benefits accrue to only the top 1% of humanity?
What is the line?
(Replying to PARENT post)
This change seems to be about ease of raising money and retaining talent. My question is: are you having difficulty doing those things today, and do you project having difficulty doing that in the foreseeable future?
I'll admit I'm skeptical of these changes. Creating a 100x profit cap significantly (I might even say categorically) changes the mission and value of what you folks are doing. Basically, this seems like a pretty drastic change and I'm wondering if the situation is dire enough to warrant it. There's no question it will be helpful in raising money and retaining talent, I'm just wondering if it's worth it.
(Replying to PARENT post)
I think all of us here are tired of "altruistic" tech companies which are really profit mongers in disguise. The burden is on you all to prove this is not the case (and this doesn't really help your case).
(Replying to PARENT post)
More like: If we make enough money to own the whole world, we'll give you some food not to starve.
(Replying to PARENT post)
And they are unironically talking about creating AGI. AGI is awesome, of course, but maybe that is a tiny little bit overconfident?
(Replying to PARENT post)
Now that OpenAI is leaving the non-profit path, the Charter's content, fuzzy as it is, is 100% up for interpretation. It does not specify what "benefit of all" or "undue concentration of power" mean concretely.
So at this point the trust that I can put into this Charter is about the same that I can put into Google's "Don't be evil"...
(Replying to PARENT post)
Was this announced before or is this the first time they've mentioned it?
(Replying to PARENT post)
But there are precedents for investing billions of dollars into blue-sky technologies and still being able to spread the wealth and knowledge gathered - it's called government investment in science. It has built silicon chips and battery technologies and ... well, quite a lot.
Is this company planning on "fundamental" research (anti-adversarial, "explainable" outcomes?) - and why do we think government investment is not good enough?
Or, worryingly, are the major tech leaders now so rich that they can honestly take on previously governmental roles (with only the barest of nods to accountability and legal obligation to return value to the commons)?
I am a bit scared that it's the latter - and even then this is too expensive for any one firm alone.
(Replying to PARENT post)
If no IP was sold to the new OpenAI LP because some or all of the IP created under the original nonprofit was open sourced, will the new OpenAI LP continue that open-sourcing practice?
(Replying to PARENT post)
Since I'm not a lawyer, can you help me understand the theoretical limits of the LP's "lock-in" to the Charter? In a cynical scenario, what would it take to completely capture OpenAI's work for profit?
If the Nonprofit's board were 60% people who want to break the Charter, would they be capable of voting to do so?
(Replying to PARENT post)
You guys raised free money in the form of grants, acquired the best talent in the name of a non-profit whose purpose is saving humanity, always pulled publicity stunts that are actually hurting science and the AI community, and took the first steps against reproducibility by not releasing GPT-2 so you can further commercialize your future models.
Also, you guys claim that the non-profit board retains full control, but it seems like the same 7 white men on that board are also on the board of your for-profit company and have a strong influence there.
Call it what you want, but I think this was planned out from day one. Now you guys have won the game. It's just a matter of time before you dominate the AI game, keep manipulating us, and appear on the Forbes list.
Also, I expect that you guys will dislike this comment instead of having an actual dialogue and discussion.
(Replying to PARENT post)
Grammar: would change to "as broad an impact".
(Replying to PARENT post)
This "cooperative" ostensibly elects its board. In reality, nomination by existing members of the REI board is the only way to stand for election by the REI membership, and when you vote you only by marking "For" the nominated candidates (there's no information on how to vote against, though at another time they indicated that the alternative was "Withold vote"). While the board members don't earn much, there is a nice path from board member to REI executive ... which can pay as much as $2M/year for the CEO position.
(Replying to PARENT post)
For those (including myself) who wonder whether a 100x cap will really change an organization from being profit-driven to being positive-impact-driven:
How could we improve on this?
One idea is to not allow investors on the board. Investors are profit-driven. If they're on the board, you'll likely get pressure to do things that optimize for profit rather than for positive impact.
Another idea is to make monetary compensation based on some measure of positive impact. That's one explicit way to optimize for positive impact rather than money.
(Replying to PARENT post)
Since you seem to be answering questions in this thread, here's one:
How does OpenAI LP's structure differ from that of an L3C (low-profit limited liability company)?
(Replying to PARENT post)
How will OpenAI do that?
(Replying to PARENT post)
I'd like to offer up an alternate opinion: non-profit operating models are generally ineffective compared to for-profit operating models.
There are many examples.
* Bill Gates is the easy one: make squillions being a merciless capitalist, then turn that into a very productive program of disease elimination and, apparently, energy security nowadays.
* Open source is another good one in my opinion - even when they literally give the software away, many of the projects leading their fields (e.g., Google Chrome, Android, PostgreSQL, the Linux kernel) draw heavily on sponsorship by for-profit companies using them to further their profits - even if the steering committee is nominally non-profit.
* I have examples outside software, but they are all a bit complicated to type up. Things like China's rise.
It isn't that there isn't a place for researchers who are personally motivated to do things; there is just a high correlation between something making a profit and it getting done to a high standard.
(Replying to PARENT post)
Between the market pressures from investors, employees, and competitors, to what extent can a company really stay true to its mission and deny potential profit that conflicts with it?
Also, it's hard to root for specific for-profit companies (although I'm rooting for capitalism per se).
(Replying to PARENT post)
Let's not forget that Khosla himself does not exactly care about public interest or existing laws https://www.google.com/amp/s/www.nytimes.com/2018/10/01/tech...
(Replying to PARENT post)
Sorry guys, but before this you were probably able to get talent that is not (primarily) motivated by money. Now you are just another AI startup. If the cap were 2x, it could still make sense. But 100x? That's laughable! And the split board, made up of friends and closely connected people, smells like "greenwashing" as well. Don't get me wrong, it's totally ok to be an AI startup. You just shouldn't pretend to be a non-profit then...
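(Quick math on why the multiple matters - a hypothetical sketch, assuming the cap simply multiplies the original stake; the figures are illustrative, not OpenAI's actual terms:)

    investment = 10_000_000  # hypothetical $10M stake
    for cap_multiple in (2, 100):
        # Maximum an investor could ever take out under each cap.
        print(f"{cap_multiple}x cap: max payout ${investment * cap_multiple:,}")
    # 2x cap: max payout $20,000,000 - a plausible venture outcome
    # 100x cap: max payout $1,000,000,000 - needs a FAANG-scale exit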