The company left out some key details regarding the incident involving one of its robotaxis and a pedestrian.
On October 2, 2023, a woman was run over and pinned to the ground by a Cruise robotaxi. Given the recent string of very public malfunctions the robotaxis have been experiencing in San Francisco, it was only a matter of time until a pedestrian was hurt by the self-driving cars. New reports, though, suggest that Cruise held back one of the most horrifying pieces of information: that the woman was dragged 20 feet by the robotaxi after being pushed into its path.
The LA Times reports:
A car with a human behind the wheel hit a woman who was crossing the street against a red light at the intersection of 5th and Market Streets. The pedestrian slid over the hood and into the path of a Cruise robotaxi, with no human driver. She was pinned under the car, and was taken to a hospital.
But this is what Cruise left out:
What Cruise did not say, and what the DMV revealed Tuesday, is that after sitting still for an unspecified period of time, the robotaxi began moving forward at about 7 mph, dragging the woman with it for 20 feet.
read more: https://jalopnik.com/woman-hit-by-cruise-robotaxi-was-dragged-20-feet-1850963884
archive link: https://archive.ph/8ENHu
I think a lot of people forget that machines and AI are not infallible gods. A few stray high-energy particles from space can flip a bit somewhere in memory, and suddenly the machine is running on something other than what it was supposed to.
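As a toy illustration of that point (nothing to do with Cruise's actual software, just a generic sketch of what a single flipped bit can do to a stored value):

```python
# Toy example: a single-event upset (bit flip) corrupting a stored integer.
# Purely illustrative -- not related to any real vehicle software.

speed_mph = 7                      # the value the software meant to store

# Simulate a cosmic-ray strike flipping one bit of the binary representation.
flipped_bit = 5                    # bit 5 has place value 32
corrupted = speed_mph ^ (1 << flipped_bit)

print(f"intended:       {speed_mph} mph")   # intended:       7 mph
print(f"after bit flip: {corrupted} mph")   # after bit flip: 39 mph
```

One bit, and 7 becomes 39. Real systems mitigate this with ECC memory and redundancy, but the failure mode exists.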
Tell that to the self-driving car “experts.” Because there's a silicon chip doing the controlling, they assume it's automatically better than a human. Said experts have apparently never heard of random hardware failures (and that's besides bugs).
The term you're looking for is “special pleading”: ignoring inconvenient facts, or carving out an unjustified exception, to make your claim seem more legitimate.
“Yeah it’s a computer and computers can have glitches but it’s still better because it’s AI and it’s smarter than humans.”
It’s true computers are more efficient than humans at some tasks, but we are nowhere near the point of having one of these things think like us.
The question isn’t whether they’re infallible, just whether they’re less fallible than humans, which is a far lower bar when it comes to driving.