Hacker News

Their goal is to monopolize labor for anything that has to do with I/O on a computer, which is way more than SWE. It's simple: this technology literally cannot create new jobs; it simply lets one engineer (or any worker whose job involves computer I/O) do the work of three, allowing you to replace workers (and overwork the ones you keep). Companies don't need "more work"; half the "features"/"products" that companies produce are already just extra. They can get rid of a third to two thirds of their labor and make the same amount of money, so why wouldn't they?

ZeroHedge on Twitter said the following:

"According to the market, AI will disrupt everything... except labor, which magically will be just fine after millions are laid off."

It's also worth noting that if you can create a business with an LLM, so can everyone else. And sadly everyone has the same ideas; everyone ends up working on the same things, and competition pushes margins to nothing. There's nothing special about building with LLMs, since anyone with access to the same models and basic thought processes can just copy you.

This is basic economics. If everyone had an oil well on their property that was affordable to operate, the price of oil would be more akin to the price of water.

EDIT: Since people are focusing on my water analogy, I mean:

If everyone has easy access to the same powerful LLMs, that just drives the value you can contribute to the economy down to next to nothing. For this reason I don't even think powerful and efficient open source models, which is usually the next counterargument people make, are necessarily a good thing. It strips people of the opportunity for social mobility through meritocratic systems. Just like how your water well isn't going to make you rich or let you climb a social ladder, because everyone already has water.




> It's also worth noting that if you can create a business with an LLM, so can everyone else. And sadly everyone has the same ideas

Yeah, this is quite thought provoking. If computer code written by LLMs is a commodity, what new businesses does that enable? What can we do cheaply we couldn't do before?

One obvious answer is we can make a lot more custom stuff. Like, why buy Windows and Office when I can just ask claude to write me my own versions instead? Why run a commodity operating system on kiosks? We can make so many more one-off pieces of software.

The fact software has been so expensive to write over the last few decades has forced software developers to think a lot about how to collaborate. We reuse code as much as we can - in shared libraries, common operating systems & APIs, cloud services (eg AWS) and so on. And these solutions all come with downsides - like supply chain attacks, subscription fees and service outages. LLMs can let every project invent its own tree of dependencies. Which is equal parts great and terrifying.

There's that old line that businesses should "commoditise their complement". If you're Amazon, you want package delivery services to be cheap and competitive. If software is the commodity, what is the bespoke value-added service that can sit on top of all that?


We said the same thing when 3D printing came out. With any sort of cool tech, we think everybody's going to do it. Most people are not capable of doing it. In college everybody was going to be an engineer, and then they drop out after the first intro physics or calculus class. A bunch of my non-tech friends were vibe coding some tools with Replit and Lovable, and I looked at their stuff; yeah, it was neat, but it wasn't gonna go anywhere, and if it did go somewhere, they would need to find somebody who actually knows what they're doing. To actually execute on these things takes a different kind of thinking. Unless we get to the stage where it's just like a magic genie, lol. Maybe then everybody's going to vibe their own software.

I don't think claude code is like 3d printing.

The difference is that 3D printing still requires someone, somewhere to do the mechanical design work. It democratises printing but it doesn't democratise invention. I can't use words to ask a 3d printer to make something. You can't really do that with claude code yet either. But every few months it gets better at this.

The question is: How good will claude get at turning open-ended problem statements into useful software? Right now a skilled human + computer combo is the most efficient way to write a lot of software. Left on its own, claude will make mistakes and suffer from a slow accumulation of bad architectural decisions. But, will that remain the case indefinitely? I'm not convinced.

This pattern has already played out in chess and Go. For a few years, a skilled Go player working in collaboration with a Go AI could outcompete both computers and humans at Go. But that era didn't last. Now computers can play Go at superhuman levels. Our skills are no longer required. I predict programming will follow the same trajectory.

There are already some companies using fine-tuned AI models for "red team" infosec audits. Apparently they're already pretty good at finding a lot of creative bugs that humans miss. (And apparently they find an extraordinary number of security bugs in code written by AI models.) It seems like a pretty obvious leap to imagine claude code implementing something similar before long. Then claude will be able to do security audits on its own output. Throw that in a reinforcement learning loop, and claude will probably become better at producing secure code than I am.
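The generate/audit/patch loop being imagined here has a simple shape, which can be sketched in a few lines. To be clear, everything below is hypothetical: `generate`, `audit`, and `patch` are stand-ins for model calls, not any real Anthropic API.

```python
def self_audit_loop(generate, audit, patch, spec, max_rounds=3):
    """Hypothetical sketch of a model auditing its own output.

    generate(spec) -> code, audit(code) -> list of findings,
    patch(code, findings) -> revised code. In a real RL setup the
    audit result would also feed back as a training signal.
    """
    code = generate(spec)
    for _ in range(max_rounds):
        findings = audit(code)
        if not findings:
            break  # audit came back clean: stop iterating
        code = patch(code, findings)
    return code
```

With stub functions standing in for the model, you can see the loop converge once the auditor stops flagging issues.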


> I can't use words to ask a 3d printer to make something

Setting aside any implications for your analogy: this is now possible.


Meshy?

That's one. You can also do it just with Gemini: https://www.youtube.com/watch?v=9dMCEUuAVbM

Workflow can be text-to-model, image-to-model, or text-to-image to model.


> This pattern has already played out in chess and Go. For a few years, a skilled Go player working in collaboration with a Go AI could outcompete both computers and humans at Go. But that era didn't last. Now computers can play Go at superhuman levels. Our skills are no longer required. I predict programming will follow the same trajectory.

Both of those are fixed, unchanging, closed, full information games. The real world is very much not that.

Though geeks absolutely like raving about Go and especially chess.


> Both of those are fixed, unchanging, closed, full information games. The real world is very much not that.

Yeah but, does that actually matter? Is that actually a reason to think LLMs won't be able to outpace humans at software development?

LLMs already deal with imperfect information in a stochastic world. They seem to keep getting better every year anyway.


This is like timing the stock market. Sure, share prices seem to go up over time, but we don't really know when they go up, down, and how long they stay at certain levels.

I don't buy the whole "LLMs will be magic in 6 months, look at how much they've progressed in the past 6 months". Maybe they will progress as fast, maybe they won't.


I’m not claiming I know the exact timing. I’m just seeing a trend line: GPT-3 to 3.5 to 4 to 5, Codex and now Claude. The models are getting better at programming much faster than I am. Their skill at programming doesn’t seem to be levelling out yet, at least not as far as I can see.

If this trend continues, the models will be better than me in less than a decade. Unless progress stops, but I don’t see any reason to think that would happen.


There is verification and validation.

The first is making sure you built to your specification; the second is making sure the specification you built to was correct.

The second part is going to be the hard part for complex software and systems.


I think validation is already much easier using LLMs. Arguably this is one of the best use cases for coding LLMs right now: you can get claude to throw together a working demo of whatever wild idea you have without needing to write any code or write a spec. You don't even need to be a developer.

I don't know about you, but I'd much rather be shown a demo made by our end users (with claude) than get sent a 100 page spec. Especially since most specs - if you build to them - don't solve anyone's real problems.

Demo, don't memo.


Hm, how much real life experience do you have in delivering production SW systems?

Demo for the main flow is easy. The hard part is thinking through all the corner cases and their interactions, so your system robustly works in real world, interacting with the everyday chaos in a non-brittle fashion.


Hard disagree, clients/users often don't know what the best/right solution is, simply because they don't know what's possible or they haven't seen any prior art.

I'd much rather have a conversation with them to discuss their current problems and workflow, then offer my ideas and solutions.


I don’t think you are using validation in the same sense as PC

> The second part is going to be the hard part for complex software and systems.

Not going to. Is. Actually, always has been; it isn't that coding solutions wasn't hard before, but verification and validation cannot be made arbitrarily cheap. This is the new moat: if your solutions require time-consuming and expensive QA (in the widest sense), that becomes the single barrier to entry.


Amazon Kiro starts with making the detailed specification based on human input in natural language.

The design work remains.

I’m not a fan of analogies, but here goes: Apple don’t make iPhones. But they employ an enormous number of people working on iPhone hardware, which they do not make.

If you think AI can replace everyone at Apple, then I think you’re arguing for AGI/superintelligence, and that’s the end of capitalism. So far we don’t have that.


> I can't use words to ask a 3d printer to make something.

You can: the words are in the G-code language.

I mean: you learned foreign languages in school, so you are already used to formulating your request in a different language to make yourself understood. In this case, that language is G-code.


This is a strange take; no one is hand-writing the G-code for their 3D print. There are ways to model objects using code (e.g. OpenSCAD), but that still doesn't replace the actual mechanical design work involved in studying a problem and figuring out what sort of part is required to solve it.

Funny you should mention that.

I spent years writing a geometry and G-code generator in Grasshopper. I wasn't generating every line of G-code by hand (my typical programs are about 500k lines), but I wrote the entire generator to go from curves to movements and extrusions.

I used opus to rewrite the entire thing, more cleanly, with fewer bugs and more features, in an afternoon. Admittedly it would have taken a lot longer without the domain expertise from years of staring at geometry and gcode side by side.
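For anyone wondering what "curves to movements and extrusions" looks like, here is a minimal sketch of the core idea: walk a polyline and emit G1 moves, accumulating extrusion in proportion to segment length. The extrusion-per-mm value and feedrate below are made-up placeholders; a real generator would derive them from layer height, nozzle width, and filament diameter, and this is not the parent poster's actual code.

```python
import math

def polyline_to_gcode(points, extrusion_per_mm=0.05, feedrate=1800):
    """Turn a 2D polyline (list of (x, y) in mm) into naive G-code.

    Uses absolute positioning and absolute extrusion: the E value
    grows with the total path length traversed so far.
    """
    lines = ["G21", "G90", "M82"]  # mm units, absolute XY, absolute E
    x0, y0 = points[0]
    lines.append(f"G0 X{x0:.3f} Y{y0:.3f} F{feedrate}")  # travel to start
    e = 0.0  # accumulated extrusion
    for x1, y1 in points[1:]:
        seg = math.hypot(x1 - x0, y1 - y0)
        e += seg * extrusion_per_mm  # extrusion proportional to length
        lines.append(f"G1 X{x1:.3f} Y{y1:.3f} E{e:.4f} F{feedrate}")
        x0, y0 = x1, y1
    return "\n".join(lines)

# One perimeter of a 10 mm square:
square = [(0, 0), (10, 0), (10, 10), (0, 10), (0, 0)]
print(polyline_to_gcode(square))
```

The real work in such generators is everything around this loop: retraction, travel moves, layer changes, and the geometry processing that produces the polylines in the first place.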


Produce the G-code needed to 3D print the object shown in the attached illustrations from various angles.

Produce the 3D images of xxx from various angles. xxx should be able to do yyy.


Re: Produce the 3D images of xxx from various angles. xxx should be able to do yyy.

This is the tricky part. Do you know anything about mechanical engineering?


You can basically hand it a design, one that might take an FE engineer anywhere from a day to a week to complete, and Codex/Claude will have it coded up in 30 seconds. It might need some tweaks, but it's 80% complete with that first try. I remember stumbling over graphing and charting libraries; it could take weeks to become familiar with all the different components and APIs, but now you can just tell Codex to use this data and this charting library and it'll make it. All you have to do is look at the code. Things have certainly changed.

It might be 80-95% complete but the last 5% is either going to take twice the time or be downright impossible.

This is like Tesla's self-driving: 95% complete very early on, still unsuitable for real life many years later.

Not saying adding few novel ideas (perhaps working world models) to the current AI toolbox won't make a breakthrough, but LLMs have their limits.


That was the same thing with human products though.

https://en.wikipedia.org/wiki/Ninety%E2%80%93ninety_rule

Except that either side of it is immensely cheaper now.


I figure it takes me a week to turn the output of AI into acceptable code. Sure, there is a lot of code in 30 seconds, but it shouldn't pass code review (even the AI's own review).

For now. Claude is worse than we are at programming. But it's improving much faster than I am. Opus 4.6 is incredible compared to previous models.

How long before those lines cross? Intuitively it feels like we have about 2-3 years before claude is better at writing code than most - or all - humans.


It is certainly already better than most humans, even better than most humans who occasionally code. The bar is already quite high, I'd say. You have to be decent in your niche to outcompete frontier LLM Agents in a meaningful way.

I'm only allowed 4.5 at work where I do this (likely to change soon but bureaucracy...). Still the resulting code is not at a level I expect.

I told my boss (not fully serious) we should ban anyone with less than 5 years' experience from using the AI so they learn to write and recognize good code.


The key difference here is that humans can progress. They can learn reasoning skills, and can develop novel methods.

The LLM is a stochastic parrot. It will never be anything else unless we develop entirely new theories.


I keep seeing this. The "for now" comments, and how much better it's getting with each model.

I don't see it in practice though.

The fundamental problem hasn't changed: these things are not reasoning. They aren't problem solving.

They're pattern matching. That gives the illusion of usefulness for coding when your problem is very similar to others, but falls apart as soon as you need any sort of depth or novelty.

I haven't seen any research or theories on how to address this fundamental limitation.

The pattern matching thing turns out to be very useful for many classes of problems, such as translating speech to a structured JSON format, or OCR, etc... but isn't particularly useful for reasoning problems like math or coding (non-trivial problems, of course).

I'm pretty excited about the applications for AI overall and it's potential to reduce human drudgery across many fields, I just think generating code in response to prompts is a poor choice of a LLM application.


Not really. What the FE engineer will produce in a week will be vastly different from what the AI will produce. That's like saying restaurants are dead because it takes a minute to heat up a microwave meal.

It does make the lowest common denominator easier to reach though. By which I mean your local takeaway shop can have a professional looking website for next to nothing, where before they just wouldn't have had one at all.

I think exceptional work, AI tools or not, still takes exceptional people with experience and skill. But I do feel like a certain level of access to technology has been unlocked for people smart enough, but without the time or tools to dive into the real industry's tools (figma, code, data tools etc).


The local takeaway shop could have had a professional looking website for years with Wix, Squarespace, etc. There are restaurant specific solutions as well. Any of these would be better than vibe coding for a non-tech person. No-code has existed for years and there hasn't been a flood of bespoke software coming from end users. I find it hard to believe that vibe-coding is easier or more intuitive than GUI tooling designed for non-experts...

I think the idea that LLMs will usher in some new era where everyone and their mom are building software is a fantasy.


I more or less agree, specifically on the angle that no-code has existed, yet non-technical people still aren't executing on technical products. But I don't think vibe-coding is where we see this happening; it will be in chat interfaces or GUIs, as the "scaffolding" or "harnesses" mature and someone can just type what they want, then get a deployed product within the day after some back and forth.

I am usually a bit of an AI skeptic, but I can already see that this is within the realm of possibility, even if models stopped improving today. I think we underestimate how technical things like Wix or Squarespace are, to a non-technical person, but many are skilled business people who could probably work with an LLM agent to get a simple product together.

People keep saying code was never the real skill of an engineer, but rather solving business logic issues and codifying them. Well people running a business can probably do that too, and it would be interesting to see them work with an LLM to produce a product.


> I think we underestimate how technical things like Wix or Squarespace are, to a non-technical person, but many are skilled business people who could probably work with an LLM agent to get a simple product together.

In the same vein, I think you underestimate how much "hidden" technical knowledge must be there to actually build software that works most of the time (not even asking for a bug-free program). To design such a program with current LLM coding agents you need to be at the very least a power user, probably a very powerful one, in the domain of the program you want to build and also in the domain of general software. Maybe things will improve with LLMs and agents, and "make it work" will be enough for the agent to create tests, try the program extensively, find and squash bugs, and do all the extra work needed; who knows. But we are definitely not there today.


Yeah I've thought for a while that the ideal interface for non-tech users would be these no-code tools but with an AI interface. Kinda dumb to generate code that they can't make sense of, with no guard rails etc.

There were some good and some pretty terrible FE devs though, and it's not clear which ones prevailed.

Wouldn’t we have more restaurants if there were no microwave ovens? But the microwave oven also gave rise to the frozen food industry. Overall, more industrialization.

> You can basically hand it a design

And, pray tell, how people are going to come up with such design?


Honestly, you could just come up with a basic wireframe in any design software (MS Paint would work) and a screenshot of a website with a design you like, tell it "apply the aesthetic from the website in this screenshot to the wireframe", and it would probably get 80% (probably more) of the way there. Something that would have taken me more than a day in the past.

I've been in web design since images were first introduced to browsers and modern designs for the majority of sites are more templated than ever. AI can already generate inspiration, prototypes and designs that go a long way to matching these, then juice them with transitions/animations or whatever else you might want.

The other day I tested an AI by giving it a folder of images, each named to describe the content/use/proportions (e.g., drone-overview-hero-landscape.jpg), told it the site it was redesigning, and it did a very serviceable job that would match at least a cheap designer. On the first run, in a few seconds and with a very basic prompt. Obviously with a different AI, it could understand the image contents and skip that step easily enough.


I have never once seen this actually work in a way that produces a product I would use. People keep claiming these one-shot (or nearly one-shot) successes, but in the meantime I ask it to modify a simple CSS rule and it rewrites the entire file, breaks the site, and then can't seem to figure out what it did wrong.

It's kind of telling that the number of apps on Apple's app store has been decreasing in recent years. Same thing on the Android store too. Where are the successful insta-apps? I really don't believe it's happening.

https://www.appbrain.com/stats/number-of-android-apps

I've recently tried using all of the popular LLMs to generate DSP code in C++ and it's utterly terrible at it, to the point that it almost never even makes it through compilation and linking.

Can you show me the library of apps you've launched in the last few years? Surely you've made at least a few million in revenue with the ease with which you are able to launch products.


AI is typically better at working with AI-generated code than human-authored. AI on AI tends to work great.

This, of course, is the problem.

There's a really painful Dunning-Kruger process with LLMs, coupled with brutal confirmation bias that seems to have the industry and many intelligent developers totally hoodwinked.

I went through it too. I'm pretty embarrassed at the AI slop I dumped on my team, thinking the whole time how amazingly productive I was being.

I'm back to writing code by hand now. Of course I use tools to accelerate development, but it's classic stuff like macros and good code completion.

Sure, an LLM can vomit up a form faster than I can type (well, sometimes; the devil is always in the details), but it completely falls apart when trying to do something the least bit interesting or novel.


The number of non-technical people in my orbit that could successfully pull up Claude code and one shot a basic todo app is zero. They couldn’t do it before and won’t be able to now.

They wouldn’t even know where to begin!


You don't need to draw the line between tech experts and the tech-naive. Plenty of people have the capability but not the time or discipline to execute such a thing by hand.

You go to chatGPT and say "produce a detailed prompt that will create a functioning todo app" and then put that output into Claude Code and you now have a TODO app.

This is still a stumbling block for a lot of people. Plenty of people could've found an answer to a problem they had if they had just googled it, but they never did. Or they did, but they googled something weird and gave up. AI use is absolutely going to be similar to that.

Maybe I’m biased working in insurance software, but I don’t get the feeling much programming happens where the code can be completely stochastically generated, never have its code reviewed, and that will be okay with users/customers/governments/etc.

Even if all sandboxing is done right, programs will be depended on to store data correctly and to show correct outputs.


Insurance is complicated, not frequently discussed online, and all code depends on a ton of domain knowledge and proprietary information.

I'm in a similar domain; the AI is like a very energetic intern. For me to get a good result requires a prompt clear and detailed enough that I could probably write an expression to turn it into code. Even so, after a little back and forth it loses the plot and starts producing gibberish.

But in simpler domains or ones with lots of examples online (for instance, I had an image recognition problem that looked a lot like a typical machine learning contest) it really can rattle stuff off in seconds that would take weeks/months for a mid level engineer to do and often be higher quality.

Right in the chat, from a vague prompt.


Step one: you have to know to ask that. Nobody in that orbit knows how to do that. And these aren’t dumb people. They just aren’t devs.

It's not our current location but our trajectory that is scary.

The walls and plateaus that "comments of reassurance" have consistently pointed to have not materialized. If this pace holds for another year and a half, things are going to be very different. And the pipeline is absolutely overflowing with specialized compute coming online by the gigawatt for the foreseeable future.

So far the most accurate predictions in the AI space have been from the most optimistic forecasters.


There is a distribution of optimism, some people in 2023 were predicting AGI by 2025.

No such thing as trajectory when it comes to mass behavior, because it can turn on a dime if people find reason to. That's what makes civilization so fun.



Thank you for posting this.

I'm really tired and exhausted of reading simplistic takes.

Grok is a very capable LLM that can produce decent videos. Why are most garbage? Because NOT EVERYONE HAS THE SKILL OR THE WILL TO DO IT WELL!


The answer is taste.

I don't know if they will ever get there, but LLMs are a long ways away from having decent creative taste.

Which means they are just another tool in the artist's toolbox, not a tool that will replace the artist. Same as every other tool before it: amazing in capable hands, boring in the hands of the average person.


Also, if you are a human who does taste, it's very difficult to get an AI to create exactly what you want. You can nudge it, and little by little get closer to what you're imagining, but you're never really in control.

This matters less for text (including code) because you can always directly edit what the AI outputs. I think it's a lot harder for video.


> Also, if you are a human who does taste, it's very difficult to get an AI to create exactly what you want.

I wonder if it would be possible to fine-tune an AI model on my own code. I've probably got about 100k lines of code on GitHub. If I fed all that code into a model, it would probably get much better at programming like me, including matching my commenting style and all of my little obsessions.

Talking about a "taste gap" sounds good. But LLMs seem like they'd be spectacularly good at learning to mimic someone's "taste" in a fine-tune.


Taste is both driven by tools and independent of it.

It's driven by it in the sense that better tools and the democratization of them changes people's baseline expectations.

It's independent of it in that doing the baseline will not stand out. Jurassic Park's VFX stood out in 1993. They wouldn't have in 2003. They largely would've looked amateurish and derivative in 2013 (though many aspects of shot framing/tracking and such held up, the effects themselves are noticeably primitive).

Art will survive AI tools for that reason.

But commerce and "productivity" could be quite different because those are rarely about taste.


100% correct. Taste is the correct term; I avoid using it as I'm not sure many people here actually get what it truly means.

How can I proclaim what I said in the comment above? Because I've spent the past week producing something very high quality with Grok. Has it been easy? Hell no. Could anyone just pick up and do what I've done? Hell no. It requires things like patience, artistry, taste, etc.

The current tech is soulless in most people's hands and it should remain used in a narrow range in this context. The last thing I want to see is low quality slop infesting the web. But hey, that is not what the model producers want; they want to maximize tokens.


The job of a coder is far from obsolete, as you're saying. It has definitely changed to almost entirely code review though.

With Opus 4.6 I'm seeing that it copies my code style, which makes code review incredibly easy, too.

At this point, I've come around to seeing that writing code is really just for education so that you can learn the gotchas of architecture and support. And maybe just to set up the beginnings of an app, so that the LLM can mimic something that makes sense to you, for easy reading.

And all that does mean fewer jobs, to me. Two guys instead of six or more.

All that said, there's still plenty to do in infrastructure and distributed systems, optimizations, network engineering, etc. For now, anyway.


> To actually execute on these things takes a different kind of thinking

Agreed. Honestly, and I hate to use the tired phrase, but some people are literally just built different. Those who'd be entrepreneurs would have been so in any time period with any technology.


This goes well with all my non-tech and even tech co-workers. Honestly, the value generation leverage I have now is 10x or more than it was before, compared to other people.

HN is an echo chamber of a very small subgroup. The majority of people can't utilize it and need to have this further dumbed down and specialized.

That’s why marketing and conversion rate optimization work; it's not all about the technical stuff, it's about knowing what people need.

For funded VC companies the game was often not much different; software was just part of the expenses, sometimes a large part, sometimes a smaller one. But eventually you could just buy the software you need, and that didn't guarantee success. There were dramatic failures and outstanding successes, and I wish it weren't so, but most of the time the codebase was not the deciding factor. (Sometimes it was: Airtable, Twitch, etc., bless the engineers, but I don't believe AI would have solved those problems.)


> The majority of people can’t utilize it

Tbh, depending on the field, even this crowd will need further dumbing down. Just look at the blog illustration slops - 99% of them are just terrible, even when the text is actually valuable. That's because people's judgement of value, outside their field of expertise, is typically really bad. A trained cook can look at some chatgpt recipe and go "this is stupid and it will taste horrible", whereas the average HN techbro/nerd (like yours truly) will think it's great -- until they actually taste it, that is.


The example is bad imo because chatgpt can be really great for cooking if you utilize it correctly. Like in coding you already need some skill and shouldn't believe everything it says.

Agreed. This place amazes in regards to how overly confident some people feel stepping outside of their domains.. the mistakes I see here in relation to talking about subject areas associated with corporate finance, valuation etc is hilarious. Truly hilarious.

> whereas the average HN techbro/nerd (like yours truly) will think it's great -- until they actually taste it, that is.

This is the schtick though, most people wouldn't even be able to tell when they taste it. This is typically how it works, the average person simply lacks the knowledge so they don't even know what is possible.


3 things

1) I don’t disagree with the spirit of your argument

2) 3D printing has higher startup costs than code (you need to buy the damn printer)

3) YOU are making a distinction when it comes to vibe coding from non-tech people. The way these tools are being sold, the way investments are being made, is based on non-domain people developing domain specific taste.

This last, "reasonable" argument ends up serving as a bait and switch, shielding these investments. I might be wrong, but your comment doesn't indicate that you believe the hype.


This whole comment thread is really echoing and adding to some thoughts I've had lately: the shift from considering LLMs as replacing the engineering of software (much of which is about integration, longevity and customization of a general system) to LLMs replacing the buying of software.

If most software is just used by me to do a specific task, then making software for me to do that task will become the norm. Following that thought, we are going to see a drastic reduction in SaaS solutions, as many people who were buying a flexible toolbox for one occasional use case just get an LLM to make them the script/software to do that task as and when they need it, without any concern for things like security, longevity, or ease of use by others (for better or for worse).

I guess what I'm circling around is this: if we define engineering as building the complex tools that have to interact with many other systems, persist, and be generally useful and understandable to many people, then many people don't actually need that complexity for their use of the system; the complexity arises from the tool needing to serve its purpose at huge scale over time. So maybe there will be less need for engineers, first and foremost because the problems engineering is required to solve shrink once much more focused and bespoke solutions to people's problems are available on demand.

As an engineer I have often felt threatened by LLMs and agents of late, but I find that if I reframe it from agents replacing me to agents shifting the kinds of problems that are even valuable to solve, it feels less threatening for some reason. I'll have to mull it over more.


Taking it further, imagine a traditional desktop OS but it generates your programs on the fly.

Google's weird AI browser project is kind of a step in this direction. Instead of starting with a list of programs and services and customizing your work to that workflow, you start with the task you need accomplished and the operating system creates an optimized UI flow specifically for that task.


but bringing it back, you first need to pitch this idea to investors to free up the money to cover the Sahara desert with a huge server farm to satisfy these sci-fi needs /s

Even if code gets cheaper, running your own versions of things comes with significant downsides.

Software exists as part of an ecosystem of related software, human communities, companies etc. Software benefits from network effects both at development time and at runtime.

With fully custom software, your users / customers won't be experienced with it. AI won't automatically know all about it, or be able to diagnose errors without detailed inspection. You can't name-drop it. You don't benefit from shared effort by the community / vendors. Support is more difficult.

We are also likely to see "the bar" for what constitutes good software rise over time.

All the big software companies are in a position to direct enormous token flows into their flagship products, and they have every incentive to get really good at scaling that.


The logical endgame (which I do not think we will necessarily reach) would be the end of software development as a career in itself.

Instead software development would just become a tool anybody could use in their own specific domain. For instance if a manager needs some employee scheduling software, they would simply describe their exact needs and have software customized exactly to their needs, with a UI that fits their preference, ready to go in no time, instead of finding some SaaS that probably doesn't fit exactly what they want, learning how to use it, jumping through a million hoops, dealing with updates you don't like, and then paying a perpetual rent on top of all of this.


Writing the code has never been the hard part for the vast majority of businesses. It's become an order of magnitude cheaper, and that WILL have effects. Businesses that are selling crud apps will falter.

But your hypothetical manager who needs employee scheduling software isn't paying for the coding; they're paying for someone to _figure out_ their exact needs, with a UI that fits their preferences, ready to go in no time.

I've thought a lot about this and I don't think it'll be the death of SaaS. I don't think it's the death of the software engineer either — but a major transformation of the role, and the death of your career _if you do not adapt_, and fast.

Agentic coding makes software cheap, and will commoditize a large swath of SaaS that exists primarily because software used to be expensive to build and maintain. Low-value SaaS dies. High-value SaaS survives based on domain expertise, integrations, and distribution. Regulations adapt. Internal tools proliferate.


This reminds me of the old idea of the Lisp curse. The claim was that Lisp, with the power of homoiconic macros, would magnify the effectiveness of one strong engineer so much that they could build everything custom, ignoring prior art.

They would get amazing amounts done, but no one else could understand the internals because they were so uniquely shaped by the inner nuances of one mind.


> If software is the commodity, what is the bespoke value-added service that can sit on top of all that?

Troubleshooting and fixing the big mess that nobody fully understands when it eventually falls over?


> Troubleshooting and fixing the big mess that nobody fully understands

If that's actually the future of humans in software engineering then that sounds like a nightmare career that I want no part of. Just the same as I don't want anything to do with the gigantic mess of COBOL and Java powering legacy systems today.

And I also push back on the idea that LLMs can't troubleshoot and fix things, and therefore will eventually require humans again. My experience has been the opposite: I've found that LLMs are even better at troubleshooting and fixing an existing code base than they are at writing greenfield code from scratch.


My experience so far has been that they are somewhat good at troubleshooting code, patterns, etc. that exist in the publicly viewable sphere of stuff they're trained on, where common error messages and pitfalls are "google-able".

They are much worse at code/patterns/apis that were locally created, including things created by the same LLM that's trying to fix a problem.

I think LLMs are also creating a decline in the amount of good troubleshooting information being published on the internet. So less future content to scrape.


> One obvious answer is we can make a lot more custom stuff. Like, why buy Windows and Office when I can just ask claude to write me my own versions instead? Why run a commodity operating system on kiosks? We can make so many more one-off pieces of software

yes, it will enable a lot of custom one-off software but I think people are forgetting the advantages of multiple copied instances, which is what enabled software to be so successful in the first place.

Mass production of the same piece of software creates standards, every word processor uses the same format and displays it the same way.

Every date library you import will calculate two months from now the same way, therefore this is code you don't have to constantly double check in your debug sessions.


Software isn't just the code, it's also the stability that can only be gained after years of successful operation and ironing out bugs, the understanding of who your customers truly are, what are their actual needs (and not perceived needs), which features will drive growth. etc. I think there's still a "there" there.

I think the kind of software that everybody needs (think Slack or Jira) is at the greatest risk, as everybody will want to compete in those fields, which will drive margins to 0 (and that's a good thing for customers)! However, I think small businesses catering to specific user groups will still be viable.


> why buy Windows and Office when I can just ask claude to write me my own versions instead? Why run a commodity operating system on kiosks?

Linux costs $0. Creating a Linux clone compatible with your hardware from the hardware spec sheets with an AI would, for complicated hardware, cost thousands to millions of dollars in tokens, and you'd end up with something that works worse than Linux (or more likely something that doesn't even boot).

Even if the price falls by a thousand fold, why would you spend thousands of dollars on tokens to develop an OS when there's already one you can use?

Even if software becomes cheaper to write, it's not free, and there's a lot of software (especially libraries) out there which is free.


> cost thousands to millions of dollars in tokens

> Even if the price falls by a thousand fold, why would you spend thousands of dollars on tokens to develop an OS when there's already one you can use?

Why do you assume token price will only fall a thousand fold? I'm pretty sure tokens have fallen by more than that in the last few years already - at least if we're speaking about like-for-like intelligence.

I suspect AI token costs will fall exponentially over the next decade or two. Like Dennard scaling / Moore's law has for CPUs over the last 40 years. Especially given the amount of investment being poured into LLMs at the moment. Essentially the entire computing hardware industry is retooling to manufacture AI clusters.

If it costs you $1-$10 in tokens to get the AI to make a bespoke operating system for your embedded hardware, people will absolutely do it. Especially if it frees them up from supply chain attacks. Linux is free, but linux isn't well optimized for embedded systems. I think my electric piano runs linux internally. It takes 10 seconds to boot. Boo to that.


> If software is the commodity, what is the bespoke value-added service that can sit on top of all that?

It would be cool if I can brew hardware at home by getting AI to design and 3D print circuit boards with bespoke software. Alas, we are constrained by physics. At the moment.


> If software is the commodity, what is the bespoke value-added service that can sit on top of all that?

Aggregation. Platforms that provide visibility, influence, reach.


> Yeah, this is quite thought provoking. If computer code written by LLMs is a commodity, what new businesses does that enable? What can we do cheaply we couldn't do before?

The model owner can just withhold access and build all the businesses themselves.

Financial capital used to need labor capital. It doesn't anymore.

We're entering into scary territory. I would feel much better if this were all open source, but of course it isn't.


I think this risk is much lower in a world where there are lots of different model owners competing with each other, which is how it appears to be playing out.

New fields are always competitive. Eventually, if left to its own devices, a capitalist market will inevitably consolidate into cartels and monopolies. Governments better pay attention and possibly act before it's too late.

> Governments better pay attention and possibly act before it's too late.

Before it's too late for what? For OpenAI and Anthropic to privatise their models and restrict (or massively jack up the prices of) their APIs?

The genie is already out of the bottle. The transformers paper was public. The US has OpenAI, Anthropic, Grok, Google and Meta all making foundation models. China has Deepseek. And Huggingface is awash with smaller models you can run at home. Training and running your own models is really easy.

Monopolistic rent seeking over this technology is - for now - more or less impossible. It would simply be too difficult & expensive for one player to gobble up all their competitors, across multiple continents. And if they tried, I'm sure investors will happily back a new company to fight back.


Why would the model owner do that? You still need some human input to operate the business, so it would be terribly impractical to try to run all the businesses. Better to sell the model to everyone else, since everyone will need it.

The only existential threat to the model owner is everyone being a model owner, and I suspect that's the main reason why all the world's memory supply is sitting in a warehouse, unused.


I have never been in an organization where everyone was sitting around wondering what to do next. If the economy were actually as good as certain government officials claim it to be, we would be hiring people left and right to do three times as much work, not firing.

That's the thing, profits and equities are at all time highs, but these companies have laid off 400k SWEs in the last 16 months in the US, which should tell you what their plans are for this technology and augmenting their businesses.

The last 16 months of layoffs are almost certainly not because of LLMs. All the cheap money went away, and suddenly tech companies have to be profitable. That means a lot of them are shedding anything not nailed down to make their quarter look better.

The point is there’s no close positive correlation at that scale between labor and profits — hence the layoffs while these companies are doing better than ever. There’s zero reason to think increased productivity would lead to vastly more output from the company with the same amount of workers rather than far fewer workers and about the same amount of output, which is probably driven more by the market than a supply bottleneck.

Last I checked, the tractor and plow are doing a lot more work than 3 farmers, yet we've got more jobs and grow more food.

People will find work to do, whether that means there's tens of thousands of independent contractors, whether that means people migrate into new fields, or whether that means there's tens of multi-trillion dollar companies that would've had 200k engineers each that now only have 50k each and it's basically a net nothing.

People will be fine. There might be big bumps in the road.

Doom is definitely not certain.


America has lost over 50% of its farms and farmers since 1900. Farming used to be a significant employer, and now it's not. Farming used to be a significant part of the GDP, and now it's not. Farming used to be politically significant... and now it's complicated.

If you go to the many small towns in farm country across the United States, I think the last 100 years will look a lot closer to "doom" than "bumps in the road". Same thing with Detroit when we got foreign cars. Same thing with coal country across Appalachia as we moved away from coal.

A huge source of American political tension comes from the dead industries of yester-year combined with the inability of people to transition and find new respectable work near home within a generation or two. Yes, as we get new technology the world moves on, but it's actually been extremely traumatic for many families and entire towns, for literally multiple generations.


Same thing with Walmart and local shops.

On the one hand, it brings a greater selection, at cheaper prices, delivered faster, to communities.

On the other hand, it steamrolls any competing businesses and extracts money that previously circulated locally (to shareholders instead).


> it brings a greater selection,

Greater selection in one store perhaps, but over a continent you now have one garden shovel model.


Farming GDP has grown 2-3x since the 1900s. It's just everything else has grown even more. That doesn't make farming somehow irrelevant work. There's just more stuff to do now. This seems pretty consistent with OPs point.

What does it matter that a lot of people were farming? If anything, that's a good argument for not worrying, because we don't have 50%+ unemployment, so clearly all those farming jobs were reallocated.

That transformation back then took many decades, a few generations. People had time to adapt. It worked like this: as a kid you saw the family business doing worse, the writing was on the wall, and as a teenager you pursued a different profession. This time you won't have time to pivot to a different profession; most likely you will have no clue where to pivot to.

> Last I checked, the tractor and plow are doing a lot more work than 3 farmers, yet we've got more jobs and grow more food.

Not sure when you checked.

In the US more food is grown for sure. For example just since 2007 it has grown from $342B to $417B, adjusted for inflation[1].

But employment has shrunk massively, from 14M in 1910 to around 3M now[2] - and 1910 was well after the introduction of tractors (plows not so much... they have been around since antiquity and are mentioned extensively in the Old Testament, for example).

[1] https://fred.stlouisfed.org/series/A2000X1A020NBEA

[2] https://www.nass.usda.gov/Charts_and_Maps/Farm_Labor/fl_frmw...


That's his point. Drastically reducing agricultural employment didn't keep us from getting fed (and led to a significantly richer population overall -- there's a reason people left the villages for the industrial cities)

I'm not sure that's what they meant. Read like this:

> the tractor and plow are doing a lot more work than 3 farmers, yet we've got more jobs and grow more food.

it sounds to me like they mean "more jobs and grow more food" in the same context as "the tractor and plow [that] are doing a lot more work than 3 farmers"

But you could be right in which case I agree with them.


But where will the office workers displaced by AI go? Industrialization brought demand for factory work (and later grew the service sector), but I can't see what new opportunities AI is creating. There are only so many service people AI billionaires need to employ.

You realize this was the exact argument with the tractor / steam engine, electricity, and the computer?

No, you cannot ignore every argument by claiming someone else made it before. Make an actual response.

What new opportunities does the LLM create for the workers it may displace? What new opportunities did neural machine translation create for the workers it displaced?

In what way is a text-generation machine that dominates all computer use like the steam engine?

The steam engine powered new factories workers could slave away in, demanded coal that created mining towns. The LLM gives you a data centre. How many people does a data centre employ?


there's no reason to believe this trend will continue forever, simply because it has held for the past hundred years or so

More jobs where? In farming? Is that why farming in the US is dying, being destroyed by corporations, while farmers are now prisoners to John Deere? It's hilarious that you chose possibly the worst counterexample here…

More output, not more farmers. The stratification of labor in civilization is built on this concept, because if not for more food, we'd have more "farmer jobs" of course, because everyone would be subsistence farming...

That's not the statement made by the grandparent comment though. That comment reads as stating an increase in farming jobs.

Wow, you are making the point that everything will be OK using farming! Farming is struggling, consolidated to big players, and subsidies keep it going.

You get laid off and spend 2-3 years migrating to another job type; what do you think that will do to your life or family? Those starting out will have their lives put on pause, and those 10 years from retirement are stuffed.


I always find these "anti-AI" AI believer takes fascinating. If true AGI (which you are describing) comes to pass, there will certainly be massive societal consequences, and I'm not saying there won't be any dangers. But the economics in the resulting post-scarcity regime will be so far removed from our current world that I doubt any of this economic analysis will be even close to the mark.

I think the disconnect is that you are imagining a world where somehow LLMs are able to one-shot web businesses, but robotics and real-world tech is left untouched. Once LLMs can publish in top math/physics journals with little human assistance, it's a small step to dominating NeurIPS and getting us out of our mini-winter in robotics/RL. We're going to have Skynet or Star Trek, not the current weird situation where poor people can't afford healthy food, but can afford a smartphone.


> We're going to have Skynet or Star Trek

Star Trek only got a good society after an awful war, so neither of these options are good.


Star Trek only got a good society after discovering FTL and the existence of all manner of alien societies. And even after that, Star Trek's story motivations for why we turned good sound quite implausible given what we know about human nature and history. No effing way it will ever happen, even if we discover aliens. It's just a wishful fever dream.

I'm definitely not a Star Trek connoisseur, but I thought a big part of the lore is the "never again"-ish response to the wars up through WW3?

But anyway, I share your lack of optimism.


Well, they didn't necessarily stop waging war in Star Trek either. They also spent most of their time trying not to get defeated by parasitic artificial intelligence.

It isn't even just the aliens (although my headcanon is that the human belief that they "evolved beyond their base instincts" is part a trauma response to first contact and World War 3, and part Vulcan propaganda/psyop.) Star Trek's post scarcity society depends on replicators and transporters and free energy all of which defy the laws of physics in our universe (on top of FTL.)

We'll never have Star Trek. We'll also never have SkyNet, because SkyNet was too rational. It seems obvious that any AGI that emerges from LLMs - assuming that's possible - will not behave according to the old "cold and logical machine" template of AI common in sci-fi media. Whatever the future holds will be more stupid and ridiculous than we can imagine, because the present already is.


> Their goal is to monopolize labor for anything that has to do with i/o on a computer, which is way more than SWE. Its simple, this technology literally cannot create new jobs it simply can cause one engineer (or any worker whos job has to do with computer i/o) to do the work of 3, therefore allowing you to replace workers (and overwork the ones you keep). Companies don't need "more work" half the "features"/"products" that companies produce is already just extra. They can get rid of 1/3-2/3s of their labor and make the same amount of money, why wouldn't they.

Yes, that's how technology works in general. It's good and intended.

You can't have baristas (for all but the extremely rich), when 90%+ of people are farmers.

> ZeroHedge on twitter said the following:

Oh, ZeroHedge. I guess we can stop any discussion now..


The baristas example only makes me think that, with growing wealth disparity and no obvious exit path for white-collar workers, we might see a big return of servant-like jobs serving the top 1%. Who wouldn't want to wake up and assist daily with the life of some remaining upper-middle-class Anthropic employee?

What growing wealth disparity?

Btw, globally equality hasn't looked better in probably more than a century by now. Especially in terms of real consumption.


Sorry, I don't see your point. While lifting up the masses out of extreme poverty globally is obviously good, it doesn't transfer to your situation unless you happen to live in one of these upstart countries. The society you live in is not global, even if we share more of popculture and technology now.

The price of oil at the price of water (ecology aside) should be a good thing.

Automation should be, obviously, a good thing, because more is produced with less labor. What it says of ourselves and our politics that so many people (me included) are afraid of it?

In a sane world, we would realize that, in a post-work world, the owners of the robots have all the power, so the robots should be owned in common. The solution is political.


Throughout history, empires have bet their entire futures on the predictions of seers and magicians, and done so with enthusiasm. When political leaders think their court magicians can give them an edge, they'll throw caution to the wind to take advantage of it. It seems to me that the machine learning engineers and AI companies are the court magicians of our time.

I certainly don't have much faith in the current political structures, they're uneducated on most subjects they're in charge of and taking the magicians at their word, the magicians have just gotten smarter and don't call it magic anymore.

I would actually call it magic, though, just real this time. Imagine explaining to political strategists from 100 years ago the ability to influence politicians remotely while they sit in a room by themselves, by dictating what target politicians see on their phones and feeding them content to steer them in certain directions. It's almost like a synthetic remote viewing. And if that doesn't work, you also have buckets of cash :|


What do we “need” more of? Here in France we need more doctors, more nurseries, more teachers… I don’t see AI helping much there in short to middle term (with teachers all research points to AI making it massively worse even)

Globally I think we need better access to quality nutrition and more affordable medicine. Generally cheaper energy.


Isn’t the end game that all the displaced SWEs give up their cushy, flexible job and get retrained as nurses?

Wait, my job is not cushy. I think hard all day long; I endure levels of frustration that would cripple most, and I do it because I have no choice; I must build the thing I see or be tormented by its possibility. Cushy? Right.

This is the most "1st world problems" comment I've read today.

How is that 1st world? There are plenty of people who "think hard" and deal with really hard problems in the "3rd world".

Give compiler engineering for medical devices a whirl for 14 hours a day for a month or so and let me know if you think it's "cushy". Not everything is making apps and games, sometimes your mistakes can mean life or death. Lots of SWE isn't cushy at all, or necessarily well paid.

Go get a bachelor's and master's in EE while eating just two bowls of rice and lentils every day for 5 years and let me know if that's cushy.


As compared to risking life and limb every day in a mine, breathing in carcinogenic dust, finding yourself with most of your joints fucked at 45, likely carrying PTSD from accidents that happened to you or your colleagues... Yes, "hard thinking" looks pretty cushy in comparison.

Have you any idea how many people die every day at their workplace in manufacturing, construction, or mining, or how many develop chronic issues from agriculture...? And all for salaries that are a tenth of the average developer's (in the developed world; elsewhere, more like a hundredth). Come on now.

Everyone has problems and everyone is entitled to feel aggrieved by their condition, but one should maintain a reasonable degree of perspective at all times.


That sounds and is incredibly cushy lmao

While I agree, I am not hopeful. The incentive alignment has us careening towards Elysium rather than Star Trek.

There is nothing that you can always keep adding more of and have it automatically be effective.

I tend to automate too much because it's fun, but if I'm being objective, in many cases it has been more work than doing the stuff manually. Because of laziness I tend to way overestimate how much time and effort it would take to do something manually if I just rolled up my sleeves and simply did it.

Whether automating something actually produces more with less labor depends on the nuances of each specific case; it's definitely not a given. People tend to be very biased when judging actual productivity. E.g., is someone who quickly closes tickets but causes a disproportionate amount of production issues, money-losing bugs, or review work for others really that productive in the end?


> And sadly everyone has the same ideas, everyone ends up working on the same things

This is someone telling you they have never had an idea that surprised them. Or more charitably, they've never been around people whose ideas surprised them. Their entire model of "what gets built" is "the obvious thing that anyone would build given the tools." No concept of taste, aesthetic judgment, problem selection, weird domain collisions, or the simple fact that most genuinely valuable things were built by people whose friends said "why would you do that?"


I'm speaking about the vast majority of people, who yes, build the same things. Look at any HN post over the last 6 months and you'll see everyone sharing clones of the same product.

Yes, some ideas are novel. I would argue that LLMs destroy or atrophy the creative muscle in people, much like how GPS-powered apps destroyed people's mental navigation "muscles".

I would also argue that very few unique, valuable "things" built by people ever had people saying "Why would you build that?". Unless we're talking about paradigm-shifting products that are hard for people to imagine, like a vacuum cleaner in the 1800s. But guess what: LLMs aren't going to help you build those things. They can create shitty images and clones of SaaS products that have been built 50x over, and all around encourage people to be mediocre and destroy their creativity as their brains atrophy from use.


> They can get rid of 1/3-2/3s of their labor and make the same amount of money, why wouldn't they.

Because companies want to make MORE money.

Your hypothetical company is now competing with another company that did the opposite, and now they get to market faster, fix bugs faster, add features faster, and respond to changes in the industry faster. Which results in them making more, while your employ-less company is just status quo.

Also, with regards to oil: the consumption of oil increased as it became cheaper. With AI we now have a chance to do projects that simply would have cost way too much to do 10 years ago.


> Which results in them making more

Not necessarily.

You are assuming that the people can consume whatever is put in front of them. Markets get saturated fast. The "changes in the industry" mean nothing.


A) People are so used to infinite growth that it’s hard to imagine a market where that doesn’t exist. The industry can have enough developers and there’s a good chance we’re going to crash right the fuck into that pretty quickly. America’s industrial labor pool seemed like it provided an ever-expanding supply of jobs right up until it didn’t. Then, in the 80s, it started going backwards preeeetttty dramatically.

B) No amount of money will make people buy something that doesn’t add value to or enrich their lives. You still need ideas, for things in markets that have room for those ideas. This is where product design comes in. Despite what many developers think, there are many kinds of designers in this industry and most of them are not the software equivalent of interior decorators. Designing good products is hard, and image generators don’t make that easier.


It's really wild how much good UI stands out to me now that the internet has been flooded with generically produced slop. I created a bookmarks folder for beautiful sites that clearly weren't created by LLMs and required a ton of sweat to design the UI/UX.

I think we will transition to a world where handmade software/design will come at a huge premium (especially as the average person gets more distanced from the actual work required, and the skills become rarer). Just like the wealthy pay for handmade shoes, as opposed to something off the shelf from Foot Locker, I think companies will revert back to hand-crafted UX. These identical center-column layouts with a 3x3 feature-card grid at the bottom of your landing page are going to get really old fast in a sea of identical design patterns.

To be fair, component libraries were already contributing to this degradation in design quality, but LLMs are making it much worse.


Yeah. For a few years, I’ve been predicting that human-made and designed digital goods will be desirable luxury items in the same exact way the Arts and Crafts movement, in the late 19th/early 20th century, made artisan furniture, buildings, etc. to push back against the megatons of chintzy shit produced during the Industrial Revolution.

Component libraries can be used to great effect if they are used thoughtfully in the design process, rather than in lieu of a design process.


Paying a premium for "luxury" makes sense for people looking for status signaling or a unique experience. Software is (most of the time) a utility. People are willing to pay a premium when there is a tangible performance improvement. No one is going to pay more for a run-of-the-mill SaaS offering because the website was handcrafted.

> With AI we now have a chance to do projects that simply would have cost way too much to do 10 years ago.

Not sure about that, at least if we're talking about software. Software is limited by complexity, not the ability to write code. Not sure LLMs manage complexity in software any better than humans do.


It's not as easy to build a business as just copying someone (otherwise we'd have all been doing that long before LLMs).

I expect the software market will change from lots of big kitchen sink included systems and services to many smaller more specialized solutions with small agile teams behind them.

Some engineers that lose their jobs are going to create new businesses and new jobs.

The question in my mind: is there enough feature and software demand out there to keep all of the engineers employed at 3x the productivity? Maybe. Software has been limited on the supply side by how expensive it was to produce. Now it may bump into limits on the demand side instead.

Meanwhile LLMs are better than junior devs, so nobody wants to hire a junior dev. No idea how we get senior devs then. How many people will be scared away from entering this career path?

The job has changed. How many software engineers will leave the career now that the job is more of a technically minded product person and code reviewer?

I can't predict how it all plays out, but I'm along for the ride. Grieving the loss of programming and trying to get used to this new world.


So like... every business having electricity? I am not an economist, so I would love someone smarter than me to explain how this is any different from the advent of electricity and how that affected labor.

The difference is that electricity wasn't being controlled by oligarchs that want to shape society so they become more rich while pillaging the planet and hurting/killing real human beings.

I'd be more trusting of LLM companies if they were all workplace democracies, not really a big fan of the centrally planned monarchies that seem to be most US corporations.


Heard of Carnegie? He controlled coal when it was the main fuel used for heating and electricity.

A reference to one of the hall of fame Robber Barons does seem pretty apt right now..

At least they built libraries, cultural centers and the occasional university.

Give the current crop a chance to realise their mortality and want to secure a better legacy than 'took all the money'.

Bill Gates did... has anyone else followed in those footsteps?

Nowadays they just try to put more whiteys on the moon, or sabotage liberal democracy.

Did Carnegie try to overthrow a democracy and believe in monarchism?

> The difference is that electricity wasn't being controlled by oligarchs that want to shape society so they become more rich while pillaging the planet and hurting/killing real human beings.

Yes it was. Those industrialists were called "robber barons" for a reason.


Its main distinction from previous forms of automation is its ability to apply reasoning to processes and its potential to operate almost entirely without supervision, and also to be retasked with trivial effort. Conventional automation requires huge investments in a very specific process. Widespread automation will allow highly automated organizations to pivot or repurpose overnight.

While I’m on your side electricity was (is?) controlled by oligarchs whose only goal was to become richer. It’s the same type of people that now build AI companies

Control over the fuels that create electricity has defined global politics, and global conflict, for generations. Oligarchs built an entire global order backed up by the largest and most powerful military in human history to control those resource flows, and have sacrificed entire ecosystems and ways of life to gain or maintain access.

So in that sense, yes, it’s the same


I mean your description sounds a lot like the early history of large industrialization of electricity. Lots of questionable safety and labor practices, proprietary systems, misinformation, doing absolutely terrible things to the environment to fuel this demand, massive monopolies, etc.

An obvious counterargument to this is that electricity is becoming a lot more expensive (because of LLMs), so how is that going to affect labour?

> Its also worth noting that if you can create a business with an LLM, so can everyone else.

False. Anyone can learn about index ETFs and still yolo into 3DTE options and promptly get variation margined out of existence.

Discipline and contextual reasoning in humans is not dependent on the tools they are using, and I think the take is completely and definitively wrong.


*Checks Bio* Owns AI company and.... the whole family tree's portfolio :eyes:

This is just a theory of mine, but the fact that people don't see LLMs as something that will grow the pie and increase their output leading to prosperity for all just means that real economic growth has stagnated.

From all my interactions with C-level people as an engineer, what I learned from their mindset is their primary focus is growing their business - market entry, bringing out new products, new revenue streams.

As an engineer I really love optimizing out current infra, bringing out tools and improved workflows, which many of my colleagues have considered a godsend, but it seems from a C-level perspective, it's just a minor nice-to-have.

While I don't necessarily agree with their world-view, some part of it is undeniable: you can easily build an IT company with very high margins (say a 3x revenue/expense ratio), and in that case growing revenue is a much more lucrative way of growing the company than optimizing costs.


I don’t think we are running out of work to do… there seems to be an endless amount of work to be done. And most of it comes from human needs and desires.

This worldview has, IMO, one omission. It implicitly assumes that everything will stay the same except for LLMs getting better and better, but in reality there are many interconnected factors in play.

Will it fundamentally change or eliminate some jobs? I think yes.

But at the same time, no one knows how this will play out in the long run. We certainly shouldn't extrapolate what will happen in the job market or society by treating AI performance as an independent variable.


Here is a very real example of how an LLM can at least save, if not create jobs, and also not take a programmers job:

I work for a cash-strapped nonprofit. We have a business idea that can scale up a service we already offer. The new product is going to need coding, possibly a full-scale app. We don't have any capacity to do it in-house and don't have an easy way to find or afford a vendor that can work on this somewhat niche product.

I don't have the time to help develop this product but I'm VERY confident an LLM will be able to deliver what we need faster and at a lower cost than a contractor. This will save money we couldn't afford to gamble on an untested product AND potentially create several positions that don't currently exist in our org to support the new product.


There are tons of underprivileged college grads, or soon-to-be grads, that could really use the experience, and pro bono work for a nonprofit would look really good on their CVs. Have you considered contacting a local university's CS department? This seems more valuable to society from a nonprofit's perspective, imo, than giving that money/work to an AI company. It's not like the students don't have access to these tools, and they will be able to leverage them more effectively while getting the same outcome for you.

Do you have someone who can babysit and review what the LLM does? Otherwise, I'm not sure we're at the point where you can just tell an agent to go off and build something and it does it _correctly_.

IME, you'll just get demoware if you don't have the time and attention to detail to really manage the process.


But if you could afford to hire a worker for this job, that an LLM would be able to do for a fraction of the cost (by your estimation), then why on earth would you ever waste money on a worker? By extension if you pay a worker and an AI or robot comes along that can do the work for cheaper, then why would you not fire the worker and replace them with the cheaper alternative?

Its kind of funny to see capitalists brains all over this thread desperately try to make it make sense. It's almost like the system is broken, but that can't possibly be right everybody believes in capitalism, everybody can't be wrong. Wake the fuck up.


New people hired for this project would not be coders. They would be an expert in the service we offer, and would be doing work an LLM is not capable of.

I don't know if LLMs would be capable of also doing that job in the future, but my org (a mission-driven non profit) can get very real value from LLMs right now, and it's not a zero-sum value that takes someone's job away.


I was talking about the project that needs coding, the part you would hire a contractor for, but can't afford to. I said hypothetically, if you COULD afford it. Now read what I said again.

I am interested. I might help you with that.

> They can get rid of 1/3-2/3s of their labor and make the same amount of money, why wouldn't they.

Competition may encourage companies to keep their labor. For example, in the video game industry, if the competitors of a company start shipping their games to all consoles at once, the company might want to do the same. Or if independent studios start shipping triple A games, a big studio may want to keep their labor to create quintuple A games.

On the other hand, even in an optimistic scenario where labor is still required, the skills required for the jobs might change. And since the AI tools are not mature yet, it is difficult to know which new skills will be useful in ten years from now, and it is even more difficult to start training for those new skills now.

With the help of AI tools, what would a quintuple A game look like? Maybe once we see some companies shipping quintuple A games that have commercial success, we might have some ideas on what new skills could be useful in the video game industry for example.


Yeah, but there’s no reason to assume this is even a possibility. Software companies that are making more money than ever are slashing their workforces. Those garbage Coke and McDonald’s commercials clearly show big industry is trying to normalize bad quality rather than elevate their output. In theory, cheap overseas tweening shops should have allowed the midcentury American cartoon industry to make incredible quality at the same price, but instead, there was a race straight to the bottom. I’d love to have even a shred of hope that the future you describe is possible, but I see zero empirical evidence that anyone is even considering it.

> Its also worth noting that if you can create a business with an LLM

If that were true, LLM companies would just use it themselves to make money rather than sell and give away access to the models at a loss.


> Their goal is to monopolize labor for anything that has to do with i/o on a computer, which is way more than SWE. Its simple, this technology literally cannot create new jobs it simply can cause one engineer (or any worker whos job has to do with computer i/o) to do the work of 3, therefore allowing you to replace workers (and overwork the ones you keep). Companies don't need "more work" half the "features"/"products" that companies produce is already just extra. They can get rid of 1/3-2/3s of their labor and make the same amount of money, why wouldn't they.

Most companies have "want to do" lists much longer than what actually gets done.

I think the question for many will be whether it is actually useful to do that. For instance, there's only so much feature-rollout/user-interface churn that users will tolerate for software products. Or, for a non-software company with a backlog full of things like "investigate and find a new ERP system", how long will that backlog be able to keep being populated?


Edit: This ended up being such a big text. Sorry.

I guess I agree but I want to add to your point is that, this tech is inexpensive.

And unfortunately, not in the sense where it is related to the real value of a product or need for it, but as a market condition.

But, to me, it seems that it will be more expensive anyway.

I see these possibilities:

1. A few companies own all the technology. They cut out the middlemen, build all kinds of super apps, and try to force everyone into that ecosystem.

2. Or they succeed in the substitution: they keep the middleman, but they control who has access and how much is charged. The goal in this case will be to make it more expensive to kickstart an engineering team than to use the product, and of course their aim will be to reach that threshold.

3. They completely fail: these businesses plateau, and they can't create good enough conditions to subvert the current balance and take the market. This could happen if a big financial risk materializes, or if they get stuck without big advancements for a long time and investors start to demand their money back.

I think we are going down this 3rd route. We are seeing early signals: nonsense marketing strategies selling things that are not there yet. We see all of them silencing ethics and transparency teams. The truth is that they started stacking models together and selling them as one thing, which is much different from what they sold just a year and a half ago. I am not saying this couldn't be because it is really the best model, but rather because they couldn't scale it up even more, even 18 months after the previous generation of giant model releases.

The truth is that they probably need to start capitalising now, because the crisis they are causing themselves might hurt them badly.

We saw this decline with every bubble popping. They need to oversell so they can shift the risk from their own money onto someone else's, and this potential gets resold multiple times as investors realise the improvements are not coming, until only the speculators are left dealing with this sort of business, which will ultimately drive those companies to take unpopular, stupid decisions, like what happened with bitcoin, superhero movies, NFTs, and maybe much more if I could think about it.


> everyone has access to the same models and basic thought processes

Why didn't Warner acquire Netflix, then, rather than the other way around? Even though they had access to the same labor market, a human LLM replacement?

I think real economics is a little more complex than the "basic economics" referenced in your reply.

This does not negate the possibility that enterprises will double down on replacing everyone with AI, though. But it does negate the reasoning behind the claim and the predictions made.


> Its also worth noting that if you can create a business with an LLM, so can everyone else. And sadly everyone has the same ideas, everyone ends up working on the same things causing competition to push margins to nothing.

This was true before LLMs. For example, anyone can open a restaurant (or a food truck). That doesn't mean that all restaurants are good or consistent or match what people want. Heck, you could do all of those things but if your prices are too low then you go out of business.

A more specific example with regards to coding:

We had books, courses, YouTube videos, coding boot camps etc but it's estimated that even at the PEAK of developer pay less than 5% of the US adult working population could write even a basic "Hello World" program in any language.

In other words, I'm skeptical of "everyone will be making the same thing" (emphasis on the "everyone").


> Its also worth noting that if you can create a business with an LLM, so can everyone else. And sadly everyone has the same ideas

Yeah, people are going to have to come to terms with the "idea" equivalent of "there are no unique experiences". We're already seeing the bulk move toward the meta SaaS (Shovels as a Service).


> And sadly everyone has the same ideas

I'm not sure that's true. If LLMs can help researchers implement (not find) new ideas faster, they effectively accelerate the progress of research.

Like many other technologies, LLMs will fail in areas and succeed in others. I agree with your take regarding business ideas, but the story could be different for scientific discovery.


One thing that's clear: LLMs cannot come up with novel ideas.

> Its also worth noting that if you can create a business with an LLM, so can everyone else.

One possibility may be that we normalize making bigger, more complex things.

In pre-LLM days, if I whipped up an application in something like 8 hours, it would be a pretty safe assumption that someone else could easily copy it. If it took me more like 40 hours, I still have no serious moat, but fewer people would bother spending 40 hours to copy an existing application. If it took me 100 hours, or 200 hours, fewer and fewer people would bother trying to copy it.

Now, with LLMs... what still takes 40+ hours to build?


The arrow of time leads towards complexity. There is no reason to assume anything otherwise.

I don't disagree with everything you are saying. But you seem to be assuming that contributing to technology is a zero sum game when it concretely grows the wealth of the world.

> If everyone had an oil well on their property that was affordable to operate the price of oil would be more akin to the price of water.

This is not necessarily even true https://en.wikipedia.org/wiki/Jevons_paradox


Jevons paradox is known as a paradox for a reason. It's not "Jevons' law that totally makes sense and always happens".

There's an older article that gets reposted to HN occasionally, titled something like "I hate almost all software". I'm probably more cynical than the average tech user, and I relate strongly to the sentiment. So, so much software is inexcusably bad from a UX perspective. So I have to ask: if code really becomes this dirt-cheap, unlimited commodity, will we actually have good software?

Depends on whether you think good software comes from good initial design (then yes, via the monkeys with typewriters path) or intentional feature evolution (then no, because that's a more artistic, skilled endeavor).

Anyone who lived through 90s OSS UX and MySpace would likely agree that design taste is unevenly distributed throughout the population.


Which leads to the uncomfortable but difficult to avoid conclusion that having some friction in the production of code was actually helping because it was keeping people from implementing bad ideas.

If one person can do the job of three, then you can keep output the same and reduce headcount, or maintain headcount and improve output etc.

Anecdotally it seems demand for software >> supply of software. So in engineering, I think we’ll see way more software. That’s what happened in the Industrial Revolution. Far more products, multiple orders of magnitude more, were produced.

The Industrial Revolution was deeply disruptive to labour, even whilst creating huge wealth and jobs. Retraining is the real problem. That’s what we will see in software. If you can’t architect and think well, you’ll struggle. Being able to write boiler plate and repetitive low level code is a thing of the past. But there are jobs - you’re going to have to work hard to land them.

Now, if AGI or superintelligence somehow renders all humans obsolete, that is a very different problem but that is also the end of capitalism so will be down to governments to address.


Retail water[1] costs $881/bbl, which is 13x the price of Brent crude.

[1] https://www.walmart.com/ip/Aquafina-Purified-Drinking-Water-...
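The per-barrel math behind a figure like that can be sketched as follows. The pack price, bottle size, and Brent price below are my own assumptions (the comment doesn't quote the listing price), chosen only to show how the conversion works:

```python
# Rough conversion of a retail bottled-water listing to a per-barrel price.
# PACK_PRICE_USD and BRENT_USD_PER_BBL are hypothetical stand-ins.
FL_OZ_PER_BOTTLE = 16.9       # assumed bottle size
BOTTLES_PER_PACK = 8
PACK_PRICE_USD = 22.16        # assumed listing price
LITERS_PER_FL_OZ = 0.0295735
LITERS_PER_BARREL = 158.987   # 42 US gallons
BRENT_USD_PER_BBL = 68.0      # assumed spot price

liters_per_pack = BOTTLES_PER_PACK * FL_OZ_PER_BOTTLE * LITERS_PER_FL_OZ
usd_per_bbl = PACK_PRICE_USD / liters_per_pack * LITERS_PER_BARREL

print(round(usd_per_bbl))                      # ~881
print(round(usd_per_bbl / BRENT_USD_PER_BBL))  # ~13x Brent
```

Under those assumed prices, an 8-pack works out to roughly $881 per barrel, in line with the figure quoted above.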


What a good faith reply. If you sincerely believe this, that's a good insight into how dumb the masses are. Although I would expect a higher quality of reply on HN.

You found the most expensive 8-pack of water on Walmart. Anyone can put a listing on Walmart; it's the same model as Amazon. There's also a listing right below for bottles twice the size, and a 32-pack for a dollar less.

It costs $0.001 per gallon out of your tap, and you know this.


I'm in South Australia, the driest state on the driest continent. We have a backup desalination plant, and water security is common on the political agenda; water is probably as expensive here as anywhere in the world.

"The 2025-26 water use price for commercial customers is now $3.365/kL (or $0.003365 per litre)"

https://www.sawater.com.au/my-account/water-and-sewerage-pri...
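For comparison with the per-barrel prices quoted elsewhere in the thread, that quoted rate converts as follows (a rough sketch; it ignores AUD/USD conversion and any fixed supply charges):

```python
# Convert the quoted SA Water commercial rate (AUD per kilolitre)
# to per-litre and per-barrel prices.
AUD_PER_KILOLITRE = 3.365     # quoted 2025-26 commercial water use price
LITERS_PER_BARREL = 158.987   # 42 US gallons

aud_per_liter = AUD_PER_KILOLITRE / 1000
aud_per_barrel = aud_per_liter * LITERS_PER_BARREL

print(f"{aud_per_liter:.6f}")   # 0.003365
print(f"{aud_per_barrel:.2f}")  # 0.53
```

So even this comparatively expensive tap water is around half an Australian dollar per barrel, orders of magnitude below the crude-oil and retail bottled-water figures above.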


Water just comes out of a tap?

My household water comes from a 500 ft well on my property, requiring a submersible pump costing $5000 that gets replaced every 10-15 years or so, with a rig and service that cost another $10k. Call it $1000/year... but it also requires a giant water softener, in my case a commercial one that amortizes out to $1000/year, plus a monthly expenditure of $70 for salt (admittedly I have exceptionally hard water).

And of course, I, and your municipality too, don't (usually) pay any royalties to "owners" of water that we extract.

Water is, rightly, expensive, and not even expensive enough.


You have a great source of water, which unfortunately for you costs you more money than average, but because everyone else also has water, that precious resource of yours isn't really worth anything if you were to try to go sell it. It makes sense why you'd want it to be more expensive, and that dangerous attitude can also be extrapolated to AI compute access. I think there are going to be a lot of people who won't want everyone to have plentiful access to the highest-quality LLMs for next to nothing, for this reason.

If everyone has easy access to the same powerful LLMs, that would drive down the value you can contribute to the economy to next to nothing. For this reason I don't even think powerful and efficient open source models, which is usually the next counterargument people make, are necessarily a good thing. It strips people of the opportunity for social mobility through meritocratic systems. Just like how your water well isn't going to make you rich or allow you to climb a social ladder, because everyone already has water.

I think the technology of LLMs/AI is probably a bad thing for society in general. Even in a full post-scarcity AGI world where machines do everything for us, I don't know if that's all that good, outside of maybe some beneficial medical advances. But can't we get those advances without making everyone's existence obsolete?


I agree water should probably be priced more in general, and it's certainly more expensive in some places than others, but neither of your examples is particularly representative of the sourcing relevant for data centers (scale and potability being different, for starters).

Just for completeness, it's about $0.023/gal in Pittsburgh (1): still perfectly affordable, but 23x more than $0.001, and still 50x less than Brent crude.

(1) Combined water+ sewer fees. Sewer charges are based on your water consumption so it rolls into the per-gallon effective price. https://www.pgh2o.com/residential-commercial-customers/rates


Decreasing COGS creates wealth and consumer surplus, though.

If we can flatten the social hierarchy to reduce the need for social mobility then that kills two birds with one stone.


Do you really think the ruling class has any plans to allow that to happen... There's a reason so much surveillance tech is being rolled out across the world.

If the world needs 1/3 of the labor to sustain the ruling class's desires, they will try to reduce the amount of extra humans. I'm certain of this.

My guess is during this "2nd industrial revolution" they will make young men so poor through the alienation of their labor that they beg to fight in a war. In that process they will get young men (and women) to secure resources for the ruling class and purge themselves in the process.


In a simplified economic model though.

Reply to your edit: what if we wanted to do with the water was simply to drink it?

"Meritocratic climbing on the social ladder", I'm sorry but what are you on about?? As if that was the meaning in life? As if that was even a goal in itself?

If it's one thing we need to learn in the age of AI, it's not to confuse the means to an end and the end itself!


Yeah, but a Stratocaster guitar is available to everybody too, but not everybody’s an Eric Clapton

I can buy the CD From the Cradle for pennies, but it would cost me hundreds of dollars to see Eric Clapton live

This is correct. An LLM is a tool. Having a better guitar doesn’t make you sound good if you don’t know how to play. If you were a low-skill software/systems architect before LLMs, you’re gonna be a bad one after as well. Someone at some point is deciding what the agent should be doing. LLMs compete more with entry-level / junior roles.

I have a few app ideas that I've been sitting on for years and they would all be things that would help me, things that I would actually use.. But they're also things that I think others would find useful. I had Claude Code create two of them so far, and yeah the code isn't what I would write, but the apps generally work and are useful to me. The idea of trying to monetize these apps that I didn't even write is strange to me, especially considering anyone else can just tell their Claude Code to "create an app that's a clone of appwebsite.com" and within an hour they will probably have a virtually identical clone of my app that I'm trying to charge money for.

In this way, AI coding is a bummer. I also sincerely miss writing code. Merely reading it (or being a QA and telling Claude about bugs I find) is a shell of what software engineering used to be.

I know with apps especially, all that really matters is how large your user base is, but to spend all that time and money getting the user base, only for them to jump ship next month for an even better vibe-coded solution... eh. I don't have any answers, I just agree that everyone has the same ideas and it's just going to be another form of enshittification. "My AI slop is better than your AI slop".



