We'll try everything, it seems, other than holding parents accountable for what their children consume.
In the United States, you can get in trouble if you recklessly leave around or provide alcohol/guns/cigarettes for a minor to start using, yet somehow, the same social responsibility seems thrown out the window for parents and the web.
Yes, children are clever - I was one once. But if you want to actually protect children, rather than build the surveillance-state nightmare we all see coming (with "protecting children" as the pretext, which is ironic, because these systems are often completely ineffective at it anyway), then give parents strong monitoring and restriction tools and empower them to protect their own children. They are in a far better and more informed position to do so than a creepy surveillance nanny state.
That is, after all, the primary responsibility of a parent to begin with.
> It is not ideal, but it is necessary when the higher-desirability options are not working.
What has worried me for years is that Americans would not resort to this level. That things are just too comfortable at home to take that brave step into the firing lines of being on the right side of justice but the wrong side of the law.
I'm relieved to see more and more Americans causing necessary trouble. I still think that overall, Americans are deeply underreacting to the times. But that only goes as far as to be my opinion. I can't speak for them and I'm not their current king.
Apple is tied to Chinese manufacturing in a way that is hard to replicate in the US.
They will agree to make some high-margin, simple-to-assemble thing in the US to appease the government, but if it goes as well as last time, they will stop as soon as they can.
In China they were often able to iterate on designs and have custom screws and other parts made and ramped up on very short notice - something about having the whole supply chain in one place and highly motivated. It all fell apart when they tried to move to the US: things that took weeks in China became hard on any timeline here, per the Apple in China book.
According to the EU Identity Wallet's documentation, the EU's planned system requires highly invasive age verification to obtain 30 single-use, easily trackable tokens that expire after 3 months. It also bans jailbreaking/rooting your device and requires Google Play Services (or the iOS equivalent) to be installed to "prevent tampering". You have to blindly trust that the tokens will not be tracked, which is a total no-go for privacy.
These massive privacy issues have all been raised on their GitHub, and the team behind the wallet has been ignoring them.
> OpenAI is projecting that its total revenue for 2030 will be more than $280 billion
For context, that is more than the annual revenue of all but 3 tech companies in the world (Nvidia, Apple, Google), and about the same as Microsoft.
OpenAI, meanwhile, is projected to make $20 billion in 2026. So a casual 14x (1,300%) revenue growth in under 4 years, for a company that is already valued in the hundreds of billions.
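For anyone checking the math, a quick sanity check on those two figures:

```python
projected_2026 = 20e9   # OpenAI's projected 2026 revenue
projected_2030 = 280e9  # OpenAI's projected 2030 revenue

multiple = projected_2030 / projected_2026  # how many times larger
growth_pct = (multiple - 1) * 100           # growth expressed as a percentage

print(f"{multiple:.0f}x, i.e. {growth_pct:.0f}% growth in 4 years")  # 14x, 1300%
```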
Must be nice to pull numbers out of one's ass with zero consequence.
The byte-for-byte identical output requirement is the smartest part of this whole thing. You basically get to run the old and new pipelines side by side and diff them, which means any bug in the translation is immediately caught. Way too many rewrites fail because people try to "improve" things during the port and end up chasing phantom bugs that might be in the old code, the new code, or just behavioral differences.
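A rough sketch of what that side-by-side diffing can look like (the `old_pipeline`/`new_pipeline` binary names are hypothetical, not from the project itself):

```python
import subprocess

def outputs_match(old_cmd: list, new_cmd: list, input_bytes: bytes) -> bool:
    """Feed identical input to both pipelines and compare raw stdout bytes.

    Any divergence means either a translation bug or an undocumented
    behavioral difference - and it is caught immediately, not months later.
    """
    old = subprocess.run(old_cmd, input=input_bytes, capture_output=True)
    new = subprocess.run(new_cmd, input=input_bytes, capture_output=True)
    return old.stdout == new.stdout

# e.g. run every test case through both:
#   assert outputs_match(["./old_pipeline"], ["./new_pipeline"], case)
```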
Also worth noting that "translated from C++" Rust is totally fine as a starting point. You can incrementally make it more idiomatic later once the C++ side is retired. The Rust compiler will still catch whole classes of memory bugs even if the code reads a bit weird. That's the whole point.
I work on a European identity-wallet system that uses zero-knowledge proofs for age verification. It derives an age attribute such as "over 18" from a passport or ID without disclosing any other information, such as the date of birth. As long as you trust the government that issued the ID, you can trust the attribute and anonymously verify somebody's age.
There are many pros and cons to age verification, but I think this method solves most of the problems this article raises, especially when combined with other common EU practices such as deleting inactive accounts. The limitations are real but tractable: IDs can be issued to younger teenagers, wallet infrastructure matures over time, and countries without strong identity systems primarily undermine their own age bans. Jurisdictions that accept facial estimation as sufficient verification are not taking enforcement seriously in the first place. The trap described in this article is a product of the current paradigm, not an inevitability.
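Not the actual wallet protocol, but a toy sketch of the selective-disclosure idea: the issuer derives only an "over 18" boolean from the ID and signs that, so the verifier never sees a date of birth. (Real systems use zero-knowledge proofs or similar cryptographic credentials; the HMAC below is a stand-in for the issuer's signature scheme, and all names here are made up.)

```python
import hmac, hashlib, json
from datetime import date

ISSUER_KEY = b"demo-issuer-key"  # stand-in for the issuer's real signing key

def issue_age_attribute(date_of_birth: date, today: date) -> dict:
    """Issuer derives 'over_18' from the DOB, then discards the DOB."""
    # (ignores the Feb 29 edge case for brevity)
    eighteenth = date_of_birth.replace(year=date_of_birth.year + 18)
    claim = {"over_18": today >= eighteenth}
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}  # note: no DOB anywhere in here

def verify_age_attribute(token: dict) -> bool:
    """Verifier trusts the issuer's key and learns only the boolean."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"]) and token["claim"]["over_18"]
```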
Running a small project on Hetzner from Germany. Got the email this morning. Honestly, even after the increase their dedicated boxes are still absurdly cheap compared to what you'd pay at AWS or GCP for equivalent specs.
The real story here isn't Hetzner being greedy. It's that AI companies are vacuuming up every DRAM chip on the planet and the rest of us get to pay the tax. I priced out a RAM upgrade for my home server last week. Same kit I bought 8 months ago for 90 EUR is now 400+. That's not normal market dynamics.
What worries me more is the second-order effects. Startups that would normally spin up cheap VPS instances to prototype and iterate now face meaningfully higher costs at the exact stage where every euro matters. The "just deploy it" culture that made the European indie dev scene so productive was built on sub-10 EUR/month boxes. Those days might be over for a while.
A) These models are trained by ignoring IP. It is hypocritical and absurd to then try to assert IP over them. And I am for the destruction of IP on all ends.
B) What this essentially means is that the Chinese labs are taking the work of these megacorporations and making it freely accessible to other labs and businesses - to serve inference, fine-tune, and host privately on-prem. That's clearly a good thing for competition in the market as a whole.
C) I don't see why we should have to duplicate the massive energy and infrastructure investment of building foundation models over and over, forever, just to preserve the IP rights of a few companies. That seems a shame. It seems better for everything to learn from everything else, so the whole ecosystem improves by topping and building off each other. That's also why publishing research on the architecture and training of these models is so much better than what the proprietary labs do (keeping everything secret) - though, to be fair, Anthropic's interpretability research is cool.
D) These Chinese models give 90% of the performance of frontier proprietary models at a 10th or 20th of the cost. That seems like a win for everyone. Not to mention that this distilling also lets them make much smaller local models that everyone can run. This is a win for actual democratization, decentralization, and accessibility for the little guy.
Everyone wants an untrackable unblockable currency that is out of government control until the day it is used for things they don't like, then suddenly "government please control this!"
This breakdown in rule of law is unfortunate. Ideally, this would be handled by, in order of desirability:
- Flock decision-makers and customers holding ethics as a priority, and declining to take these actions out of a sense of duty, community, morals, etc.
- Peer pressure resulting in ostracization of Flock execs and decision makers until they stop the unethical behavior
- Governments using legislation and law enforcement to prevent the cameras being used in the way they are
Below this, is citizens breaking the law to address the situation, e.g. through this destruction. It is not ideal, but it is necessary when the higher-desirability options are not working.
I really don't understand why sane developers, who for decades have advocated best practices for security and privacy, seem to be abandoning all of them simply because it's AI. Why would you ever give a non-deterministic program god-level access to everything? What could possibly go wrong?
I know this is weird, but I'm in some ways not really sure who is on the side of freedom here. I get your position, but like. The whole idea of the promise of the internet has been destroyed by newsfeeds and mega-corps.
There are almost literally documented examples of Facebook executives twirling their mustaches, wondering how they can get kids more addicted. This isn't a few bands with swear words. In fact, I think the damage these social media companies are doing is reducing the independence of teens and kids - the very thing parents originally feared losing.
I dunno - are you uncertain about your case at all? I just, like, can't help but start with "fuck these companies." All other arguments are downstream of that. Better the nanny state than Nanny Zuck.
I worked at a company that had effectively no physical security during work hours until the second time someone came in during lunch and stole an armload of laptops.
Then we got card readers and a staffed front desk, and discovered our snack budget was too high because people from other companies on other floors were coming to ours for snacks too.
I never felt the office was insecure, except in retrospect once it was actually secure.
Older HN users may recall when busy discussions had comments split across several pages. This is because the Arc [1] language that HN runs on was originally hosted on top of Racket [2] and the implementation was too slow to handle giant discussions at HN scale. Around September 2024 Dang et al finished porting Arc to SBCL, and performance increased so much that even the largest discussions no longer need splitting. The server is unresponsive/restarting a lot less frequently since these changes, too, despite continued growth in traffic and comments:
MiniMax, DeepSeek, and Moonshot are all releasing models for the public to use for free.
Anthropic, OpenAI, Google, etc. have been scraping information to train their models that they had no right to scrape - yet when these companies pay them to scrape data, we're supposed to be worried?
Labs like Anthropic always preach that they are "building AI for everyone" while releasing expensive models that are closed source.
The only reason AI is affordable at all is because of these Chinese AI labs.
> Something about having the whole supply chain in one place
I can't find the source but I thought I read somewhere that the major manufacturing cities in China are all geographically laid out like giant assembly lines. The companies that process the raw materials are located mostly inland, then the companies that form those raw materials into metal and plastic stock are next door, and then the companies that take that stock and make components are next door to them, and the companies that input those components and output subassemblies are next door to them, and so on all the way down to the harbor where the companies that produce finished products output directly onto the loading docks where the ships await.
The US can't even zone a residential neighborhood without lawyers and special interests jamming things up for decades through endless impact studies and litigation. How is it going to compete with a country that can lay out entire cities, organizing the value chain geographically towards the ocean?
Code generation is cheap in the same way talk is cheap.
Every human can string words together, but there's a world of difference between words that raise $100M and words that get you slapped in the face.
The raw material was always cheap. The skill is turning it into something useful. Agentic engineering is just the latest version of that. The new skill is mastering the craft of directing cheap inputs toward valuable outcomes.
Even a dog can vibe-code! And the apps kinda, sorta work most of the time, like most apps vibe-coded by people!
I'm reminded of the old cartoon: "On the Internet, nobody knows you're a dog."[a]
Maybe the updated version should be: "AI doesn't know or care if you're a dog, as long as you can bang the keys on a computer keyboard, even if you only do it to get some delicious treats."
> The agency has lost more than a quarter of its staff, withdrawn directives to auditors to crack down on aggressive tax shelters and permitted other auditing efforts to falter.
When you see a government doing this, you know it's not interested in collecting tax from its rich buddies.