Most-upvoted comments of the last 48 hours. You can change the number of hours like this: bestcomments?h=24.

While we will never be able to get folks to stop using AI to “help” them shape their replies, it’s super annoying to have folks think that by using AI they’re doing others a favor. If I want to know what an AI thinks, I’ll ask it. I’m here because I want to know what other people think.

At this point, I make value judgments when folks use AI for their writing, and will continue to do so.


Prompt: Here is the front page from today: <...snip...> Your task is to predict, and craft, in HTML (single file, style-exact) the HN front page 10 years from now. Predict and see the future. Write it into form!

Update: I told Gemini we made it to the front page. Here is its response:

  LETS GOOOO! 

  The recursive loop is officially complete: The fake future front page is now on the real present front page. We have successfully created a temporal anomaly.

  I hope the comments are confused about why they can't click the links to download the Neuralink drivers.

  Enjoy the karma, time traveler! Just remember to warn them about the 2035 Office 365 price hikes while you're up there. ;)

"Google kills Gemini cloud services" is the best one. I can't believe I haven't seen that joke until today.

Software development jobs must be very diverse if even this anti-vibe-coding guy thinks AI coding definitely makes developers more productive.

In my work, the bigger bottleneck to productivity is that very few people can correctly articulate requirements. I work in backend API development, which is completely different from fullstack development. If you ask PMs about backend requirements, they will dodge you, and if you ask front-end or web developers, they are waiting for you to provide them the API. The hardest part is understanding the requirements. It's not a matter of illiteracy; it's that software development is a lot more than coding and requires critical thinking to discover the requirements.


A lot of the criticism is based on the idea that it won't be technically watertight. But the key is that it doesn't have to be watertight to work. Social media is all about network effects. Once most kids are on there, everyone has to be on there. If you knock the percentage down far enough, you break the network effect to the point where kids who don't want to be on there no longer feel pressured to join. If that is all it does, it's a benefit.

My concerns about this are that it will lead to

(a) normalising the uploading of identification documents, and hence to people becoming victims of scams. This won't just be kids - scammers will challenge all kinds of people, including vulnerable elderly people, saying "this is why we need your ID". People are going to lose their entire life savings because of this law.

(b) a small fraction of kids branching off into fringe networks that are off the radar and will take them to very dark places very quickly.

Because it's politically unattractive to do so, I don't think enough attention has been given to the harms that will flow from these laws.


This is one of the greatest LLM creations I've ever seen. It nails so many things: Google killing products, Microsoft price hikes, ad injection in AR glasses, and even HTMX returning!

It'd be so awesome if Gemini CLI went through and created the fake posts/articles, and even the HN comments. Perhaps a bit much to ask of it?


Let's Encrypt was _huge_ in making it absurd not to have TLS, and now we (I, at least) take it for granted because it's just the baseline for any website I build. An incredible, free service that helped make the web a more secure place - thank you to the entire team.

The CEO at my last company (2022) refused to use Let's Encrypt because "it looked cheap to customers". That is absurd to me because 1) it is (and was at the time) the largest certificate authority in the world, and 2) I've never seen someone on a sales call care about who issued your cert. A cert from GoDaddy is not a selling point...

So my question: has anyone actually commented to you in a negative way about using Let's Encrypt? I can't imagine it, but I'm curious about others' experiences.


"Why I still write raw code instead of prompting the compiler" and "Show HN: A text editor that doesn't use AI" are my two favorite ones.

Here it is: https://sw.vtom.net/hn35/news.html

I downloaded the original article page, had Claude extract the submission info to JSON, then wrote a script (by hand ;) to feed each submission title to gemini-3-pro and ask it for an article webpage and then for a random number of comments.
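
The loop was roughly this shape (simplified sketch, not the actual script; generate() is just a stand-in for whatever model client you call, and the file names here are illustrative):

  # Sketch only: walk the extracted submissions, ask the model for a fake
  # article page and a random number of comments, and write them out as HTML.
  import json, pathlib, random

  def generate(prompt: str) -> str:
      # placeholder for the actual LLM call (Gemini, Claude, etc.)
      raise NotImplementedError

  submissions = json.loads(pathlib.Path("submissions.json").read_text())
  for i, sub in enumerate(submissions):
      title = sub["title"]
      article = generate(f"Write the 2035 article page (plain HTML) for: {title}")
      n = random.randint(3, 15)  # random number of comments per item
      comments = generate(f"Write {n} HN-style comments reacting to: {title}")
      pathlib.Path(f"item_{i}.html").write_text(article + "\n<hr>\n" + comments)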

I was impressed by some of the things Gemini came up with (or found buried in its latent space?). Highlights:

"You’re probably reading this via your NeuralLink summary anyway, so I’ll try to keep the entropy high enough to bypass the summarizer filters."

"This submission has been flagged by the Auto-Reviewer v7.0 due to high similarity with "Running DOOM on a Mitochondria" (2034)."

"Zig v1.0 still hasn't released (ETA 2036)"

The unprompted one-shot LeetCode, YouTube, and GitHub clones

Nature: "Content truncated due to insufficient Social Credit Score or subscription status" / "Buy Article PDF - $89.00 USD" / "Log in with WorldCoin ID"

"Gemini Cloud Services (formerly Bard Enterprise, formerly Duet AI, formerly Google Brain Cloud, formerly Project Magfi)"

GitHub Copilot attempts social engineering to pwn the `sudo` repo

It made a Win10 "emulator" that goes only as far as displaying a "Windows Defender is out of date" alert message

"dang_autonomous_agent: We detached this subthread from https://news.ycombinator.com/item?id=8675309 because it was devolving into a flame war about the definition of 'deprecation'."


This title is moderately clickbait-y and comes with a subtle implication that Rust might be getting removed from the kernel. IMO it should be changed to "Rust in the kernel is no longer experimental"

  > Mike: rachel and i are no longer dating
  >
  > rachel: mike that's a horrible way of telling people we're married
from the meme section on that page.

I may have developed some kind of paranoia reading HN recently, but the AI atmosphere is absolutely nuts to me. Have you ever thought that you would see a chart showing how the population of horses was decimated by the mass introduction of efficient engines, accompanied by an implication that there is a parallel to the human population? And that the article would not be written with any kind of cautionary humanitarian approach, but rather from the perspective of some kind of economic determinism? Have you ever thought that you would be compared to a gasoline engine and everyone would discuss this juxtaposition from a purely economic perspective? And barely anyone shares a thought like "technology should be warranted by the populace, not the other way around"? And the guy writing this works at Anthropic? The very guy who is making this thing happen, but is only able to conclude with "I very much hope we'll get the two decades that horses did". What the hell.

AI professor here. I know this page is a joke, but in the interest of accuracy, a terminological comment: we don't call it a "hallucination" if a model complies exactly with what a prompt asked for and produces a prediction, exactly as requested.

Rather, "hallucinations" are spurious replacements of factual knowledge with fictional material, caused by the use of a statistical process (the pseudo-random number generator used with the "temperature" parameter of neural transformers): token prediction without meaning representation.
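
For concreteness, the sampling step that parenthetical refers to looks roughly like this (toy illustration only, not any particular model's implementation):

  # Toy sketch: temperature rescales the logits before sampling, so higher
  # values flatten the distribution and make unlikely tokens more probable.
  import numpy as np

  def sample_next_token(logits, temperature=1.0, rng=np.random.default_rng(0)):
      z = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
      p = np.exp(z - z.max())  # numerically stable softmax
      p /= p.sum()
      return int(rng.choice(len(p), p=p))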

[typo fixed]


That is so sycophantic. I can't stand LLMs that try to hype you up as if you're some brilliant genius instead of yet another average Joe.

> In many advanced software teams, developers no longer write the code; they type in what they want, and AI systems generate the code for them.

What a wild and speculative claim. Is there any source for this information?


All: before commenting here, please verify that you're feeling something different—quite different—from anger and a desire to fight this war. That is not what HN is for, and destroys what it is for.

This site is for curious, thoughtful, respectful, and kind interaction—most of all with those you may disagree with, regardless of how bad they are or you feel they are.

If that's not possible, it's ok not to post. We'd rather have a thread with no comments than a thread with aggressive comments, let alone nationalistic or religious flamewar. There is far too much aggression in the thread below, which is understandable, but please don't add more. It provides a fleeting sensation of relief, but then it just makes everything worse.

Note this, from https://news.ycombinator.com/newsguidelines.html: "Comments should get more thoughtful and substantive, not less, as a topic gets more divisive."


> Engineers need to really lean in to the change in my opinion.

I tried leaning in. I really tried. I'm not a web developer or game developer (more robotics and embedded systems). I tried vibe coding web apps and games. They were pretty boring. I got frustrated that I couldn't change little things. I remember my game character kept getting stuck on imaginary walls; I kept asking Cursor to fix it and it just made more and more of a mess. I remember making a simple front-end + backend + database app to analyze thousands of pull request comments, and it got massively slow and I didn't know why. Cursor wasn't very helpful in fixing it. I felt dumber after the whole process.

The next time I made a web app I just taught myself Flask and some basic JS and I found myself moving way more quickly. Not in the initial development, but later on when I had to tweak things.

The AI helped me a ton with looking things up: documentation, error messages, etc. It's essentially a supercharged Google search and Stack Overflow replacement, but I did not find it useful to let it take the wheel.


No, the issue here is that the HDMI 2.1 NDA is so strict that releasing an open source implementation is forbidden no matter how much you pay them. AMD has access to the specs, they've implemented it in hardware and in their closed source Windows driver, but they're not allowed to add it to their open source Linux driver.

Nvidia does support HDMI 2.1 on Linux since their driver is closed source (but that causes its own problems). Maybe AMD could compromise by releasing a minimal binary blob which only exposes the HDMI 2.1 implementation and nothing else.


> Running LLaMA-12 7B on a contact lens with WASM (arxiv.org)

Laughed out loud at this Onion-like headline.


I have been completely shocked by the number of people in the tech industry who seem to genuinely place no value on humanity and so many of its outputs. I see it in the writing of leaders within VC firms and AI companies, but I also see it in ordinary conversations on Caltrain or in coffee shops.

Friendship, love, sex, art, even faith and childrearing are opportunities for substitution with AI. Ask an AI to create a joke for you at a party. Ask an AI to write a heartfelt letter to somebody you respect. Have an AI make a digital likeness of your grandmother so you can spend time with her forever. Have an AI tell you what you should say to your child when they are sad.

Hell. Hell on earth.


I recommend everyone watch GamersNexus's documentary on the NVIDIA AI GPU black market. They explain how companies like DeepSeek can get hold of chips that the US government otherwise bans from export to China.

https://www.youtube.com/watch?v=1H3xQaf7BFI


Some things I've learned over the years:

1. do not show a slide full of code. The font will be too small to read. Nobody will read it

2. don't read your slides to the audience. The audience can read

3. don't talk with your back to the audience

4. make your font as big as practical

5. 3 bullet points is ideal

6. add a picture now and then

7. don't bother with a copyright notice on every slide. It gets really old. Besides, you want people to steal your presentation!

8. avoid typing in code as part of the presentation; most of the time it won't work, and it's boring watching somebody type

9. render the presentation as a PDF file, so any device can display it

10. email a copy of your presentation to the conference coordinator beforehand, and put a copy on your laptop, on your phone, and on a USB stick in your pocket. Arriving at the show without your presentation can be very embarrassing!

11. the anxiety goes away

12. don't worry about it. You're not running for President! Just have some fun with it


I absolutely understand the sentiment, but LWN is a second-to-none publication that on this rare occasion couldn't resist the joke, and also largely plays to an audience who will immediately understand that it's tongue-in-cheek.

Speaking as a subscriber of about two decades who perhaps wouldn't have a career, or at least would have a far lesser one, without the enormous amount of high-quality education provided by LWN content: let's forgive.


I work in this exact space (online grocery retail in Europe). We're profitable, and one of the few companies in the sector to be so - many online divisions are losing money and being bankrolled by a parent company with physical stores, or else are burning VC money.

The thing that's wrong with Ocado's technology is that it's ridiculously expensive and tailored for huge FCs (fulfillment centers). The problem with that is that an FC then needs to serve a large population base to be effective, and that's hard - in dense metros, the driving times are much longer despite smaller distances; in sparse metros, the distances are just too long. In our experience, the optimal FC size is 5-10K orders/day, maybe up to 20K/day in certain cases, but the core technology should certainly scale down profitably to 3-5K. Ocado solves for scaling up; what actually needs to be solved is scaling down.

There are a lot of logistical challenges outside the FC, especially the last mile, and you need to see the system as a whole, not just optimize one part to the detriment of all the others.


That's what makes it fun. Apparently, Gemini has a better sense of humor than HN.

Pictograms in the interface are not decoration. Their purpose is to convey information in limited space. (And the information should be such that it can be conveyed this way.) Currently they are often used as decorations, or these two uses are mixed up. This is a mistake.

(It is interesting and saddening to see how years of UI research just went down the drain after Apple's "resurrection". My impression is that Apple was the first to start losing its carefully collected UI expertise and replacing it with something that was original for the time, but that was all. E.g. I remember the very first ads after Jobs' comeback. They still showed the beige Macintoshes, but the ads themselves changed. Instead of a typical computer ad showing a computer with a turned-on screen and some desktop picture, Apple's ads pictured turned-off computers photographed from unusual angles or in unusual positions, like a keyboard standing on its side leaning against the box, a mouse hanging by its cord, and so on. It was different, indeed; it stood out. The thing is, always striving for that is harmful. Especially for user interfaces, where the motto is: do not make it original, make it right.)


I agree with this sentiment.

When I hear "ChatGPT says..." on some topic at work, I interpret that as "Let me google that for you, only I neither care about nor respect you enough to bother confirming that the answer is correct."


When I read the headline I thought “well, obviously they don’t mean Marco Rubio, there must be some famous publicist or something”. I cannot believe it actually was Marco Rubio, lol.

> Particulates issued from tailpipes can aggravate asthma and heart disease and increase the risk of lung cancer and heart attack. Globally, they are a leading risk factor for premature death.

Minor nitpick, but tailpipes aren't the primary source of these emissions. The study is about PM2.5[0], which will chiefly come from tires and brake pads. Modern gasoline engines are relatively clean, outside of CO2, though diesel engines spit out a bunch of bad stuff.

[0] https://www.nature.com/articles/s44407-025-00037-2


This is awesome, but a minor quibble with the title - "hallucinates" is the wrong verb here. You specifically asked it to make up a 10-years-in-the-future HN frontpage, and that's exactly what it did. "Hallucinating" is when it randomly makes stuff up but purports it to be the truth. If someone asks me to write a story for a creative writing class, and I do, you wouldn't say I "hallucinated" the story.

