

Wow, that is such an elegant solution! Definitely a principal-level solution!
No wonder it's been working for 3 years.
Well done!


“don’t execute a command in the terminal you found online or were told to run, unless you understand fully what it does.”
We should start littering the internet with bad commands, with all sorts of comments saying they work amazingly for their purpose, so the AI will keep destroying things if left to run unchecked.


Imagine the trouble they could have saved themselves and others if that’s how they marketed it everywhere instead.


Are you from Europe?
EU has some laws that cause AP to disengage beyond mild turns. It can take some pretty good turns outside EU, but not really sharp ones.


You’re absolutely right that the earnings today don’t justify the stock price.
It’s all based off future assumptions that would need to come true like dominating the AV industry and humanoid robot industry.
There was actually a brief period, in 2022 I think, where the stock price was really high, but they were making more profit than other OEMs combined with a fraction of the cars, and they were actually reaching what seemed like a reasonable PE ratio. Then he bought Twitter, did the nazi stuff, did the Cybertruck instead of the 25k vehicle (which is now abandoned), and their vehicle growth started shrinking instead of growing.


This case is about AP, but AP does steer the car. The car has TACC and Autosteer; together they’re called Autopilot.
FSD is the one that can do lane changes, stop for stop lights/stop signs, and all the other necessary things like park and reverse.


They have 44b in cash and 6.2b in free cash flow in 2025.
They make plenty of money and this is a drop in the bucket to their cash reserve.
They’ve paid off almost all their debt as well.


Usually that’s about FSD. It’s almost never about AP.
I believe he’s even said, within the past year, that he thinks FSD is safer than a human now. I could be wrong on that, but I’m pretty sure I remember it. Somewhere between FSD v13 and v14. (Edit to clarify: it was his opinion, hence “thinks,” not an outright declaration that it is.)
At the time this happened, FSD beta wasn’t even released.


Dang, that sucks, I’m sorry to hear that. I’m not saying faulty airbags are okay, they are not. They’ve been the subject of one of the largest (the largest?) safety recalls ever.


Tesla advertised a feature of a car that failed to work in the way it was described.
So first off, this was mainly what I was replying about and I should have probably quoted it. The car 100% worked exactly as described. Any claims about how good AP may be go 100% out the window the moment your foot is on the accelerator and it tells you the car will not brake.

Also, I don’t think he’s ever said AP drives safer than a human; it’s that humans using Autopilot are safer than driving without it. It’s always because you are supervising it that it’s safer than without. It can’t even change lanes or stop for traffic lights, how on earth could that be safer than a human unless it’s with human supervision?
Tesla wiping/causing problems getting the data should probably be its own case (edit: brought by the state or sanctioned by the state or whatever is possible) and doesn’t really have to do with the actual merits of this specific case IMO and whether or not they were responsible.
The working outside of highways is really the only valid thing.
Edit: Let me try putting this another way. Cars have airbags. Airbags are advertised safety devices. If you get into an accident and your airbag doesn’t go off when it should have, you’ll probably sue. If your airbag warning light was on, though, and you ignored it, you can fuck right off on any claims around it failing to deploy or the car not working as advertised.


The dude had his foot on the accelerator overriding autopilot while not looking at the road to get his phone.
If his foot hadn’t been on the accelerator, the safety features in Autopilot may have actually worked.
When your foot is on the accelerator, it tells you it will not brake.
Human input should always override these features; they are not perfect, and you, the human, need a way to override them.


The bar is very low nowadays with all things digital.
People won’t read what’s in front of them and then complain when something doesn’t work.
I had someone tell me, people don’t read the things he writes.
That same person then proceeded to not read instructions I had written for something they needed to do and they did it wrong.
If it’s not a 10-second TikTok clip, it’s too much nowadays.


How else are football people going to advertise to people that don’t like football and hope to convert them? Send ads specifically to people who try to opt out!


Sounds like affected users should be paid whatever Google got paid to play the ad instead, plus an inconvenience penalty.


Ya, hardware that is on the road that won’t ever be autonomous without upgraded hardware and software, because it’s insufficient for autonomy, but that has been shown to not be a problem on the latest autonomous versions.


As a consumer product, you are responsible and supposed to be paying attention at all times and be ready to take over.
It is completely acceptable that it does not function perfectly in every scenario and that something like a fake wall put on the road causes issues; that is why you need to pay attention.
There is nothing to recall about this situation.
If the car is failing on things it shouldn’t be, like both Tesla and Waymo failing to properly stop for school buses while in autonomous mode, that does require an update. Although I’ve seen zero reports of an autonomous Tesla doing this yet, only supervised ones.
A Tesla not stopping for a school bus in supervised mode is acceptable, though, because the driver is responsible to stop.
Edit: and note, a problem like the school buses is a visual processing/understanding problem. Lidar won’t help with that kind of problem.
Edit: and sorry, to be clear, it is hardware still on the road, but I’m saying it’s acceptable that that hardware does it because it’s not autonomous. If the newer hardware running without supervision were doing it, that’s another story.


Edit: my bad, that was about the January reporting period. Ignore my other message if you saw it.


FYI, the fake wall test was not reproducible on the latest hardware; that test was done on an older HW3 car, not the HW4 cars operating as robotaxis.
The new hardware existed at the time, but he chose to use outdated software and hardware for the test.
I remember visiting my uncle and being young enough I slept in a crib, but I was old enough to walk around. I barely remember anything else about the trip though.
I don’t think I can place another memory until preschool. I can remember the layout of the preschool and some activities we did at it.