
> Once AGI is declared by OpenAI, that declaration will now be verified by an independent expert panel.

I wonder what criteria that panel will use to define/resolve this.



> The two companies reportedly signed an agreement [in 2023] stating OpenAI has only achieved AGI when it develops AI systems that can generate at least $100 billion in profits.

https://techcrunch.com/2024/12/26/microsoft-and-openai-have-...


A sufficiently large profit margin is what constitutes AGI? What a fucking joke.


The real AGI was the money we siphoned along the way.


This is pretty good!


Absolute Grift Industry


They didn't have a better definition of AGI to draw from. The old Turing test proved not to be a particularly good test. So, lacking a definition, money was used as a proxy. Which seems fair to me. Unless you've got a better definition of AGI that is solid enough to put in a high-dollar-value contract?


“Only” means that it is a necessary condition, not a sufficient one.


That's true, but the $100 billion requirement is the only hard qualification defined in earlier agreements. The rest of the condition was left to the "reasonable discretion" of the board of OpenAI. (https://archive.is/tMJoG)


The “reasonableness” is something they could go to court over if necessary, whereas the $100 billion is a hard requirement.


It's all so unfathomably stupid. And it's going to bring down an economy.


Hey, don't forget the climate effects too!


> It's all so unfathomably stupid. And it's going to bring down an economy.

Dot-com bubble all over again


Way bigger and deeper than that, there was some slack in the energy situation remaining at that point. Not any more.


with extra stoopid


It's kind of sad, but I've found myself becoming more and more this guy whenever someone "serious" brings up AI in conversation: https://www.instagram.com/p/DOELpzRDR-4/


I'm honestly starting to feel embarrassed to even be employed in the software industry now.


I quit Google last year because I was just done with the incessant push for "AI" in everything (AI exclusively means LLMs of course). I still believe in the company as a whole, the work culture just took a hard right towards kafkaville. Nowadays when my relatives say "AI will replace X" or whatever I just nod along. People are incredibly naive and unbelievably ignorant, but that's about as new as eating wheat.


I've been telling people I do "computer stuff" since the NFT days.


Five straight years of having to tell everyone who asks about your job that the hottest thing in your industry is a scam sure does wear on a person.


HN has a big problem with reading comprehension. First of all, the $100B is likely what Microsoft demanded on top of OpenAI's own definition of AGI, which is "highly autonomous systems that outperform humans at most economically valuable work" [0]. Secondly, that is no longer part of this revised agreement; it has been replaced with a review by a panel of experts.

[0] - https://openai.com/charter/


This is the most sick implementation of Goodhart's Law I've ever seen.

>"When a measure becomes a target, it ceases to be a good measure"

What appalls me is that companies are doing this stuff in plain sight. In the 1920s before the crash, were companies this brazen or did they try to hide it better?


That's very different from OpenAI's previous definition (which was "autonomous systems that surpass humans in most economically valuable tasks") for at least one big reason: this new definition likely only triggers if OpenAI's AI is substantially different from or better than other companies' AI. In a world where two or more companies have similar AGI, both would have huge income, but the competition would mean their profit margins might not be as large. The only reason their profit would soar past $100B would be a lack of competition, right?


It doesn't seem to say $100B a year. So presumably a business selling spoons will also eventually achieve AGI. Also good to know that the US could achieve AGI at any time by just printing more money until hyperinflation lets OpenAI hit their target.


Nice unlock to hyperinflate their way to $100B. I'd buy an AGI spoon but preferably before hyperinflation hits. I'd expect forks to outcompete the spoons though.


So they can just introduce ads in ChatGPT responses, make $100 billion, and call that AGI?


No. When you're thinking about questions like these, it is useful to remember that multiple professional A-grade lawyers (probably dozens) have been paid considerable sums of actual money, by both sides, to think about possible loopholes and fix them.


What would you consider valid methods of generating $100 billion? Enough Max/Pro subscribers?


No. "Pro" subscriptions have nothing to do with AGI, my pet GPS tracker sells those.

We're talking about things that would make AGI recognizable as AGI, in the "I know it when I see it" sense.

So, things we think about when the word AGI comes up: an AI-driven commercial entity selling AI-designed services or products, an AI-driven portfolio manager trading AI-selected stocks, an AI-made movie doing well at the box office, an AI-made videogame selling loads, AI-won tournament prizes at computationally difficult games that the AI somehow autonomously chose to take part in, etc.

Most probably a combination of these and more.


Don't worry, it'll be relevant ads, just like Google. You're going to love it when code output favors proprietary libraries and databases, and getting things the way you want involves annoying levels of "clarification" that make it harder and harder to use.

I kind of meant this as a joke as I typed it, but by the end I almost wanted to quit the tech industry altogether.


Just download a few SOTA (free) open-weights models well ahead of that moment, and either run them from your living room or store them on a (cheap) 2TB external hard drive until consumer compute makes it affordable to run them from your living room.


So Nvidia, Microsoft, Apple, Alphabet, and Saudi Aramco are AGI


Wow, that is so dumb. Can these addicts think about anything other than profits?


That's a pretty blatant public admission that corporations fundamentally regard intelligent entities as profit sources.


So what, there just won't be a word for general intelligence anymore, you know, in the philosophical sense?


Well this is why it's framed that way:

>This is an important detail because Microsoft loses access to OpenAI’s technology when the startup reaches AGI, a nebulous term that means different things to everyone.

Not sure how OpenAI feels about that.


lol, this is "autopilot" and "full self driving" all over again.

Just redefine the terms into something that's easy to accomplish but far from the definition of the terms/words/promises.


I know, they could get a big banner that says MISSION ACCOMPLISHED.


Apparently the US military is for sale, so they probably could hang it up on a battleship even.


So if their erotic bot reaches $100b in profit, they will declare AGI? lol


Given the money involved, they may be contractually obliged to?


Wait until they announce that they’ve been powering OnlyFans accounts this whole time.


This. This sentence reached off the page and hit me in the face.

It only just then became obvious to me that to them it's a question of when, in large part because of the MS deal.

Their next big move in the chess game will be to "declare" AGI.


I think some of this is just the typical bluster of company press releases / earnings reports. Can't ever show weakness or the shareholders will leave. Can't ever show doubt or the stock price will drop.

Nevertheless, I've been wondering of late. How will we know when AGI is accomplished? In the books or movies, it's always been handwaved or described in a way that made it seem like it was obvious to all. For example, in The Matrix there's the line "We marveled at our own magnificence as we gave birth to AI." It was a very obvious event that nobody could question in that story. In reality though? I'm starting to think it's just going to be more of a gradual thing, like increasing the resolution of our TVs until you can't tell it's not a window any longer.


> How will we know when AGI is accomplished?

It's certainly not a specific thing that can be accomplished. AGI is a useful name for a badly defined concept, but any objective application of it (like in a contract) is just stupid things done by people who could barely be described as having the natural variety of GI.


"We are now confident we know how to build AGI as we have traditionally understood it." - Sam Altman, Jan 2025

'as we have traditionally understood it' is doing a lot of heavy lifting there

https://blog.samaltman.com/reflections#:~:text=We%20believe%...


This is phenomenally conceited on both companies’ parts. Wow.


Don't worry, I'm sure we can just keep handing out subprime mortgages like candy forever. Infinite growth, here we come!


If I remember correctly, Microsoft was previously promised ownership of every pre-AGI asset created by OpenAI. Now they are being promised ownership of things post-AGI as well:

> Microsoft’s IP rights for both models and products are extended through 2032 and now includes models post-AGI...

To me, this suggests a further dilution of the term "AGI."


To be honest, I think this is somewhat asymmetric, and it kind of implies that OpenAI are truer "believers" than Microsoft.

If you believe in a hard takeoff, then ownership of assets post-AGI is pretty much meaningless; however, it protects Microsoft from an early declaration of AGI by OpenAI.


This makes me feel that the extremely short AGI timelines might be less likely.

To sign this deal today, presumably you wouldn’t bother if AGI is just around the corner?

Maybe I’m reading too much into it.


OpenAI wants to be free from MS. The cost is 27% of ownership, which is about $135B currently, plus IP access until 2032. Considering MS invested about $10B initially, that's a big concession on the part of OpenAI.

OpenAI’s Jakob Pachocki said on a call today that he expects that AI is “less than a decade away from superintelligence”


Or if one party has a different timeline than the other...


Obligatory The Office line:

"I just wanted you to know that you can't just say the word 'AGI' and expect anything to happen."

- Michael Scott: "I didn't say it. I declared it."


I think the more interesting question is who will be on the panel?

A group of ex-frontier-lab employees? You could declare AGI today. A more diverse group across academia and industry might actually have some backbone and be able to stand up to OpenAI.


The criteria change more often than the weather forecast, since they depend on the definition of "AGI".


It's quite possible that GI, and thus AGI, does not actually exist. Though the paper the other day by all those industry heavy hitters makes more sense in this context now.


>It's quite possible that GI and thus AGI does not actually exist.

Aren't we humans supposed to have GI? Maybe you're conflating AGI and ASI.


> Aren't we humans supposed to have GI?

Supposed by humans, who might not be aware of their own limitations.


> Aren't we humans supposed to have GI

Show me where GI is and how to measure it in a way that isn't just "it's however humans think"


what paper?



