Uber’s fatal autonomous car crash is a direct result of how it has been running its self-driving project. That’s according to Alex Roy, founder of The Human Driving Association, who says the ride-hailing company’s approach stands in stark contrast to that of one of its rivals.

“There’s a lot of indication that Waymo’s doing this correctly. They’re taking their time, it’s a slow and steady approach,” Roy explained, citing the fact that Waymo has so far racked up about 4 million miles of testing while Uber has half of that in the books. Meanwhile, “there’s every indication...that Uber has an existential need to get self-driving cars on the road because drivers cost so much.”

Uber faced major backlash this week after one of its driverless cars struck and killed a pedestrian in Tempe, Ariz., on Sunday. It is the first known fatality caused by an autonomous vehicle and raises questions about the future of the self-driving auto industry overall.

On Thursday, Tempe police released video footage from the car’s cameras to show exactly what happened. The local police and the National Transportation Safety Board are still investigating who is to blame, but earlier in the week Tempe police said Uber would “likely not be at fault.”

“There are methods of testing self-driving vehicles, they are just going to be more tedious...that’s going to take a lot more time,” said Roy, noting that companies can test in closed environments and use computer simulation instead. “It’s not a coincidence that it’s Uber who had this crash.”

On Friday, a [New York Times report](https://www.nytimes.com/2018/03/23/technology/uber-self-driving-cars-arizona.html) revealed that even before the crash, Uber’s autonomous vehicle unit was struggling to meet internal expectations and required more human intervention than its rivals.

For the full interview, [click here](https://cheddar.com/videos/the-dangers-of-self-driving-tech).
