All observations about teleoperation aside, it's just really funny to me how the robot appears to knock over the water bottles, throw its hands up in exasperation, and then give up and fall down. It somehow makes it feel more human.
I feel many folks are missing the forest for the trees.
1. Build robots to change the narrative around an EV company's overpriced stock.
2. Align with right-wing politicians to eliminate illegal immigration.
3. If AI for robotics is solved, congrats, you eliminated the competition.
4. If AI doesn't pan out, congrats, all the firms relying on illegal immigrants can now buy your robots and have those same illegal immigrants teleoperate the robots from their home countries.
It might have been human operated, but it also might have just been copying its training data.
A robot that properly supports being teleoperated wouldn't immediately fall over the moment someone deactivates a headset. Falling over is almost the worst thing a robot can do, you would trash a lot of prototypes and expensive lab equipment that way if they fell over every time an operator needed the toilet or to speak to someone. If you had such a bug that would be the very first thing you would fix. And it's not like making robots stay still whilst standing is a hard problem these days - there's no reason removing a headset should cause the robot to immediately deactivate.
You'd also have to hypothesize about why the supposed Tesla teleoperator would take the headset off with people in front of him/her during a public demonstration, despite knowing that this would cause the robot to die on camera and get them fired immediately.
I think it's just as plausible that the underlying VLA model is trained using teleoperation data generated by headset wearers, and just like LLMs it has some notion of a "stop token" intended for cases where it completed its mission. We've all seen LLMs try a few times to solve a problem, give up and declare victory even though it obviously didn't succeed. Presumably they learned that behavior from humans somewhere along the line. If VLA models have a similar issue then we would expect to see cases where it gets frustrated or mistakes failure for success, copies the "I am done with my mission" motion it saw from its trainers and then issues a stop token, meaning it stops sending signals to the motors and as a consequence immediately falls over.
This would be expected for Tesla given that they've always been all-in on purely neural end-to-end operation. It would be most un-Tesla-like for there to be lots of hand crafted logic in these things. And as VLA models are pretty new, and partly based on LLM backbones, we would expect robotic VLA models to have the same flaws as LLMs do.
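The stop-token failure mode described above can be sketched in a few lines. Everything here is hypothetical and purely illustrative (the policy, the token, the loop); real VLA control stacks are far more complex, but the core hazard is the same: once the policy declares itself done, nothing keeps streaming commands to the motors.

```python
# Illustrative sketch (hypothetical names): a VLA-style policy emits action
# chunks until it produces a stop token, after which the controller stops
# streaming commands -- and a legged robot with no holding torque collapses.

STOP_TOKEN = -1  # sentinel the policy emits when it "thinks" the task is done

def fake_policy(step):
    """Stand-in for a learned policy: emits joint targets, then 'gives up'."""
    if step >= 3:
        return STOP_TOKEN       # mistakes failure for success, declares done
    return [0.1 * step] * 4     # dummy joint position targets

def run_episode(policy, max_steps=10):
    commands_sent = []
    for step in range(max_steps):
        action = policy(step)
        if action == STOP_TOKEN:
            # No fallback "hold pose" controller kicks in here:
            # the motors simply stop receiving targets.
            break
        commands_sent.append(action)
    return commands_sent

cmds = run_episode(fake_policy)
print(len(cmds))  # 3 -- commands cease as soon as the stop token appears
```

The fix a robotics team would normally apply is equally simple in concept: intercept the stop token and hand control to a balance/hold controller instead of going slack, which is why an immediate collapse reads as either teleoperation or a very raw end-to-end stack.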
Well, the human operator was just taking off a VR headset (and presumably forgot to deactivate the robot first). It just so happened to also look like the robot was fed up with life.
I noticed that when its left hand came down there was a squirt of water, probably from crushing a water bottle. That makes me wonder how much force these robots can exert, and whether they could accidentally hurt people.
Oh, they definitely can. A friend of mine working with humanoid robots told me that kids running around their demo booths and wanting to hug their robots were a major stress factor during demos. That, plus knowing it's your code that's running there.
> Even recently, Musk fought back against the notion that Tesla relies on teleoperation for its Optimus demonstration. He specified that a new demo of Optimus doing kung-fu was “AI, not tele-operated”
The world's biggest liar, possibly. It's insane to me that laws and regulations haven't stopped him from lying to investors and the public, but that's the world in which we live.
Sorry, I meant SEC. Just search for "Musk SEC". He's been fined and sued already for similar statements. It's pretty illegal to lie about the capabilities of the products of a publicly held company.
That’s what lesuorac is saying. The SEC found he violated the rules for a publicly traded company... And then could do absolutely nothing about it to enforce the rules.
This is a real issue. If a robot is fully AI powered and doing what it does fully autonomously, then it has a very different risk profile compared to a teleoperated robot.
For example, you can be fairly certain that given the current state of AI tech, an AI powered robot has no innate desire to creep on your kids, while a teleoperated robot could very well be operated remotely by a pedophile who is watching your kids through the robot cameras, or attempting to interact with them in some way using the robot itself.
If you are allowing this robot device to exist in your home, around your valuables, and around the people you care for, then whether these robots operate fully autonomously, or whether a human operator is connected through the robot, is an extremely significant difference, one with very large safety consequences.