It's not a binary thing, it's a spectrum. There are many elements of uncertainty in every action imaginable. I'm inclined to agree with the other commenter though, the LLM slot machine is absolutely closer on that spectrum to gambling than your example is.
Anthropic's optimization target is getting you to spend tokens, not produce the right answer. It's to produce an answer plausible enough but incomplete enough that you'll continue to spend as many tokens as possible for as long as possible. That's about as close to a slot machine as I can imagine. Slot rewards are designed to keep you interested as long as possible, on the premise that you _might_ get what you want, the jackpot, if you play long enough.
Anthropic's game isn't limited to a single spin either. The small wins (small prompts with well defined answers) are support for the big losses (trying to one shot a whole production grade program).
> Anthropic's optimization target is getting you to spend tokens, not produce the right answer.
The majority of us are using their subscription plans with flat rate fees.
Their incentive is the precise opposite of what you say. The less we use the product, the more they benefit. It's like a gym membership.
I think all of the gambling addiction analogies in this thread are just so strained that I can't take them seriously. Not even the basic facts are consistent with the real situation.
That's a bit naive. Anthropic makes way more money if they get you to use past your plan's limit and wonder if you should get the next tier or switch to tokens.
The price jump between subscription tiers is so high that relatively few people will upgrade instead of waiting a few more hours, and even if somebody does upgrade to the next subscription level, Anthropic still has an incentive to provide satisfactory answers as quickly as possible, to minimize tokens used per subscription, and because there is plenty of competition so any frustrated users are potential lost customers.
I swear this whole conversation is motivated reasoning from AI holdouts who so desperately want to believe everybody else is getting scammed by a gambling scheme that they don't stop and think about the situation rationally. Insofar as Claude is dominant, it's only because Claude works the best. There is meaningful competition in this market; as soon as Anthropic drops the ball they'll be replaced.
It's not going to get worse than now though. Open models like GLM 5 are very good. Even if companies decide to crank up the costs, the current open models will still be available. They will likely get cheaper to run over time as well (better hardware).
To run GLM-5 you need access to many, many consumer grade GPUs, or multiple data center level GPUs.
>They will likely get cheaper to run over time as well (better hardware).
Unless they magically solve the problem of chip scarcity, I don't see this happening. VRAM is king, and to have more of it you have to pay a lot more. Let's use the RTX 3090 as an example. This card is ~6 years old now, yet it still runs you around $1.3k. If you wanted to run GLM-5 I4 quantization (the lowest listed in the link above) with a 32k context window, you would need *32 RTX 3090s*. That's $42k you'd be spending on obsolete silicon. If you wanted to run this on newer hardware, you could reasonably expect to multiply that number by 2.
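As a sanity check on those numbers (a back-of-the-envelope sketch using the commenter's own figures, not measured requirements of GLM-5):

```python
# Rough cost/VRAM math for the 32x RTX 3090 claim above.
# All inputs are the comment's assumptions, not benchmarked values.
VRAM_PER_CARD_GB = 24      # RTX 3090 memory
PRICE_PER_CARD_USD = 1300  # used-market price cited above
NUM_CARDS = 32             # card count the comment claims for GLM-5 I4 @ 32k ctx

total_vram_gb = NUM_CARDS * VRAM_PER_CARD_GB
total_cost_usd = NUM_CARDS * PRICE_PER_CARD_USD

print(f"aggregate VRAM: {total_vram_gb} GB")   # 768 GB
print(f"hardware cost: ${total_cost_usd}")     # $41600, i.e. the ~$42k quoted
```

So the ~$42k figure is just 32 cards at $1.3k each, implying roughly 768 GB of aggregate VRAM for that configuration.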
I mean it would make sense to see this as a hardware investment in a virtual employee that you actually control (or rent from someone who makes this possible for you), not as a private assistant. Ballparking your numbers, we would need at least an order-of-magnitude price-performance improvement for that, I think.
Also, how much bang for the buck do those 3090s actually give you compared to enterprise-grade products?
Wouldn't that apply only to a truly unlimited subscription? Last I looked all of their subs have a usage limit.
If you're on anything but their highest tier, it's not altogether unreasonable for them to optimize for the greatest number of plan upgrades (people who decide they need more tokens) while minimizing cancellations (people frustrated by the number of tokens they need). On the highest tier, this sort of falls apart but it's a problem easily solved by just adding more tiers :)
Of course, I don't think this is actually what's going on, but it's not irrational.
For subscription users, Anthropic makes more money if you hit your usage limit and wonder whether the next plan, or switching to tokens, would be better. Especially given the FOMO you probably have from all these posts talking about people's productivity.
> you'll continue to spend as many tokens as possible for as long as possible.
I mean this only works if Anthropic is the only game in town. In your analogy if anyone else builds a casino with a higher payout then they lose the game. With the rate of LLM improvement over the years, this doesn't seem like a stable means of business.
While I don't know if this applies to AI usage, actual gambling addicts most certainly do not shop around for the best possible rewards: they stick more or less to the place they got addicted at initially. Not to mention, there's plenty of people addicted to "casinos" that give 0 monetary rewards, such as Candy Crush or Farmville back in the day and Genshin Impact or other gacha games today.
So, if there's a way to get people addicted to AI conversations, that's an excellent way to make money even if you are way behind your competitors, as addicted buyers are much more loyal than other clients.
You're taking the gambling analogy too seriously. People do in fact compare different LLMs and shop around. How gamblers choose casinos is irrelevant, because this whole analogy is nothing more than an excuse for AI holdouts to feel smug.