
I can forgive AMD for not seeing how important CUDA was ten years ago. Nvidia was both smart and lucky.

But failing to see it five years ago is inexcusable. Missing it two years ago is insane. And still failing to treat ML as an existential threat... I've got no words.



That's beside the point. They are offering ML solutions. I believe PyTorch and most other stuff works decently well on their datacenter/HPC GPUs these days. They just haven't managed to offer something attractive to small-scale enterprises and hobbyists, which costs them a lot of mindshare in discussions like these.
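
For what it's worth, the ROCm builds of PyTorch expose the same torch.cuda API surface (backed by HIP under the hood), so on a supported AMD GPU a basic sanity check looks identical to the Nvidia path. A minimal sketch, assuming a ROCm wheel of PyTorch is installed:

    import torch

    # On ROCm builds, torch.cuda is backed by HIP, so this prints True on a
    # supported AMD GPU even though the name says "cuda".
    print(torch.cuda.is_available())

    # torch.version.hip is a version string on ROCm builds, None on CUDA ones.
    print(torch.version.hip)

    # "cuda" maps to the AMD device under ROCm; tensors and matmuls run there.
    x = torch.randn(1024, 1024, device="cuda")
    print((x @ x).sum().item())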

But they're definitely aware of AI/ML stuff, pitching it to their investors, acquiring other companies in the field and so on.


Meanwhile, the complete lack of enthusiast ML software for their consumer-grade cards means they can put gobs of memory on those GPUs without eating into their HPC business line.

I feel like that's something they would be explaining to their investors if it were intentional, though.


Not sure which complete lack you're talking about. You can run the SotA open-source image and text generation models on the 7900 XTX. They might be one or two iterations behind their Nvidia counterparts, and you will run into more issues, but there is a community.
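
As a rough illustration (a sketch, not a definitive recipe; the model name and wheel index URL are just examples): with a ROCm build of PyTorch plus Hugging Face transformers, a small text generation model runs the same way it would on an Nvidia card, since the GPU is still addressed as "cuda":

    # Assumes something like:
    #   pip install torch --index-url https://download.pytorch.org/whl/rocm6.0
    #   pip install transformers
    import torch
    from transformers import pipeline

    # The same "cuda" device works under ROCm; fall back to CPU otherwise.
    # (pipeline takes an int: 0 = first GPU, -1 = CPU.)
    device = 0 if torch.cuda.is_available() else -1
    generator = pipeline("text-generation", model="gpt2", device=device)
    print(generator("Consumer AMD cards can", max_new_tokens=20)[0]["generated_text"])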



