> The two companies reportedly signed an agreement [in 2023] stating OpenAI has only achieved AGI when it develops AI systems that can generate at least $100 billion in profits.
They didn't have a better definition of AGI to draw from. The old Turing test proved not to be a particularly good one. So, lacking a definition, money was used as a proxy, which seems fair to me. Unless you've got a better definition of AGI that's solid enough to put in a high-dollar-value contract?
That's true, but the $100 billion requirement is the only hard qualification defined in earlier agreements. The rest of the condition was left to the "reasonable discretion" of the board of OpenAI. (https://archive.is/tMJoG)
It's kind of sad, but I've found myself becoming more and more this guy whenever someone "serious" brings up AI in conversation: https://www.instagram.com/p/DOELpzRDR-4/
I quit Google last year because I was just done with the incessant push for "AI" in everything ("AI" exclusively meaning LLMs, of course). I still believe in the company as a whole; the work culture just took a hard right towards Kafkaville. Nowadays when my relatives say "AI will replace X" or whatever, I just nod along. People are incredibly naive and unbelievably ignorant, but that's about as new as eating wheat.
HN has a big problem with reading comprehension. First of all, $100B is likely what Microsoft demanded on top of how OpenAI defines AGI, which is "highly autonomous systems that outperform humans at most economically valuable work" [0]. Secondly, that clause is no longer part of the revised agreement; it has been replaced with a review by a panel of experts.
This is the sickest implementation of Goodhart's Law I've ever seen.
>"When a measure becomes a target, it ceases to be a good measure"
What appalls me is that companies are doing this stuff in plain sight. In the 1920s before the crash, were companies this brazen or did they try to hide it better?
That's very different from OpenAI's previous definition (which was "autonomous systems that surpass humans in most economically valuable tasks") for at least one big reason:
This new definition likely only triggers if OpenAI's AI is substantially different from or better than other companies' AI. In a world where two or more companies have similar AGI, all of them would have huge revenue, but competition would mean their profit margins might not be as large. The only reason profit would soar past $100B would be a lack of competition, right?
It doesn't seem to say $100B a year, so presumably a business selling spoons will also eventually achieve AGI. Also good to know that the US could achieve AGI at any time by just printing money until hyperinflation lets OpenAI hit its target.
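To make the cumulative-vs-annual point concrete, here's a quick sketch (the spoon business and its $5M annual profit are hypothetical numbers, not from any agreement):

```python
# Cumulative profit eventually crosses any fixed threshold,
# no matter how modest the annual profit (hypothetical numbers).
THRESHOLD = 100e9          # the $100B "AGI" trigger
annual_profit = 5e6        # a modest spoon business: $5M/year

years = 0
cumulative = 0.0
while cumulative < THRESHOLD:
    cumulative += annual_profit
    years += 1

print(years)  # 20000 years of selling spoons hits the threshold
```

Without a per-year qualifier (or an inflation adjustment), the clause is satisfied by patience alone.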
Nice unlock to hyperinflate their way to $100B. I'd buy an AGI spoon but preferably before hyperinflation hits. I'd expect forks to outcompete the spoons though.
No. When you're thinking about questions like these, it is useful to remember that multiple (probably dozens) professional A-grade lawyers have been paid considerable sums of actual money, by both sides, to think about possible loopholes and fix them.
No. "Pro" subscriptions have nothing to do with AGI, my pet GPS tracker sells those.
We're talking about things that would make AGI recognizable as AGI, in the "I know it when I see it" sense.
So, things we think of when the word AGI comes up: an AI-driven commercial entity selling AI-designed services or products, an AI-driven portfolio manager trading AI-selected stocks, an AI-made movie doing well at the box office, an AI-made videogame selling loads, AI-won tournament prizes at computationally difficult games that the AI somehow autonomously chose to enter, etc.
Don't worry, it'll be relevant ads, just like Google. You're going to love it when code output is for proprietary libraries and databases, and getting things the way you want involves annoying levels of "clarification" that become harder and harder to work with.
I kind of meant this as a joke as I typed it, but by the end I almost wanted to quit the tech industry altogether.
Just download a few SOTA (free) open-weights models well ahead of that moment and either run them in your living room now, or store them on a (cheap) 2TB external hard drive until consumer compute makes it affordable to run them there.
>This is an important detail because Microsoft loses access to OpenAI’s technology when the startup reaches AGI, a nebulous term that means different things to everyone.
I think some of this is just the typical bluster of company press releases / earnings reports. Can't ever show weakness or the shareholders will leave. Can't ever show doubt or the stock price will drop.
Nevertheless, I've been wondering of late. How will we know when AGI is accomplished? In the books or movies, it's always been handwaved or described in a way that made it seem like it was obvious to all. For example, in The Matrix there's the line "We marveled at our own magnificence as we gave birth to AI." It was a very obvious event that nobody could question in that story. In reality though? I'm starting to think it's just going to be more of a gradual thing, like increasing the resolution of our TVs until you can't tell it's not a window any longer.
It's certainly not a specific thing that can be accomplished. AGI is a useful name for a badly defined concept, but any objective application of it (like in a contract) is just a stupid thing done by people who could barely be described as having the natural variety of GI.
If I remember correctly, Microsoft was previously promised ownership of every pre-AGI asset created by OpenAI. Now they are being promised ownership of things post-AGI as well:
> Microsoft’s IP rights for both models and products are extended through 2032 and now includes models post-AGI...
To me, this suggests a further dilution of the term "AGI."
To be honest, I think this is somewhat asymmetric, and kind of implies that OpenAI are truer "Believers" than Microsoft.
If you believe in a hard takeoff, then ownership of assets post-AGI is pretty much meaningless; however, it protects Microsoft from an early declaration of AGI by OpenAI.
OpenAI wants to be free from MS. The cost is 27% of ownership, which is about $135B currently, plus IP access until 2032. Considering MS invested about $10B initially, that's a big concession on the part of OpenAI.
OpenAI’s Jakob Pachocki said on a call today that he expects that AI is “less than a decade away from superintelligence”
I think the more interesting question is who will be on the panel?
A group of ex-frontier-lab employees? You could declare AGI today. A more diverse group across academia and industry might actually have some backbone and be able to stand up to OpenAI.
It's quite possible that GI, and thus AGI, does not actually exist. Though the paper the other day by all those heavy hitters in the industry now makes more sense in this context.
I wonder what criteria that panel will use to define/resolve this.