Tesla Is Designing a Dangerous Future of Driving
While cruising toward a changing stoplight at 50 miles per hour, nestled in the shimmering interior of your Tesla and all its accompanying gadgets, your drive suddenly becomes high stakes. Do you make the safe and lawful choice to slam on your brakes and stop at the fast-approaching red light? Or, because you know that “hard braking” will damage your Safety Score and raise your insurance costs, do you blow right through the light?
Several lessons can be learned from the recent and abrupt changes Elon Musk has made at Twitter since acquiring the platform. The slew of recent Twitter crises offers a stark warning about how grandiose design concepts can turn into failures when put into practice. But failure isn’t an option when it comes to the safety of the hundreds of millions of people who partake every day in one of the most dangerous activities there is: driving a car.
Tesla’s reputation as a company that produces “cars of the future” relies on designing each vehicle with fittingly futuristic instruments: touchscreens, autopilot mechanisms, advanced diagnostics, and all the rest. But designing cars to make them “smarter,” more interconnected, and more automated brings added dangers. It may also create new and perverse incentives. Take Tesla’s Safety Score, which raises your insurance premiums if you commit any of several blacklisted driving behaviors. Although the Safety Score promises to assess and incentivize “safer driving,” it doesn’t judiciously account for situational variables that arise on the road. Hard braking or sharp turns can negatively impact a driver’s score and, as a result, their auto insurance premiums. Telematics programs similar to Tesla’s have become an increasingly popular way for insurance companies to monitor driving behavior. Unlike other insurers’ programs, however, Tesla’s uses sensors built into its vehicles to generate driver assessments.
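To make that contextual blindness concrete, consider a deliberately simplified sketch of how a telematics-style score might be computed. The `Trip` structure, weights, and thresholds here are hypothetical illustrations, not Tesla’s actual formula; the point is that a scorer of this kind only ever sees event counts from the car’s sensors, never the reason behind them.

```python
from dataclasses import dataclass

# Hypothetical telematics scorer -- illustrative only, not Tesla's formula.
# It counts flagged events per 1,000 miles and maps them to a 0-100 score.

@dataclass
class Trip:
    miles: float
    hard_braking_events: int  # decelerations past some fixed g-force threshold
    aggressive_turns: int     # lateral accelerations past some fixed threshold

def safety_score(trips: list[Trip]) -> float:
    total_miles = sum(t.miles for t in trips)
    if total_miles == 0:
        return 100.0
    events = sum(t.hard_braking_events + t.aggressive_turns for t in trips)
    events_per_1k = events / total_miles * 1000
    # Each event per 1,000 miles costs 5 points (an arbitrary weight).
    return max(0.0, 100.0 - 5.0 * events_per_1k)

# The scorer sees only accelerometer-style readings. Braking hard to stop
# legally at a light that turns red and braking hard while tailgating look
# identical to it, so the safe stop and the reckless maneuver cost the same.
print(safety_score([Trip(miles=1000, hard_braking_events=2, aggressive_turns=1)]))  # 85.0
```

Because nothing in the inputs distinguishes a legally required hard stop from reckless driving, the only way for a driver to protect the score is to avoid the maneuver itself, whatever the road demands.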
If you have driven in a metropolitan area, you know that situations like getting caught at a changing stoplight, or being forced to turn sharply to avoid a careless driver, happen all the time. You also probably know that preserving your Safety Score does not always mean doing the safest thing on the road. As users of the Safety Score lament, they are often financially incentivized to make dangerous driving decisions just to keep their scores intact. Features like Tesla’s Safety Score are obviously tech-forward, but they are also contextually ignorant. Producing and marketing tech-savvy designs that are willfully unconcerned with the particularities of local traffic patterns, traffic laws, or driving cultures carries significant repercussions. As cars move ever closer to full self-driving, tech-forward features that overlook the real-world experience of drivers may eventually harm driver safety more than benefit it.
Though methods of driver surveillance like Safety Scores are not widely adopted today, they may eventually become standard in all vehicles. If companies continue to push features like this, opting out of the new “normal” will become harder and harder. Driving behaviors will join the wider sea of personal data being collected and sold by companies. That data could eventually be used not only to determine your insurance rates but also to build profiles that infer things about your trustworthiness or employability. In this “surveillance economy,” new and previously unexplored data points become increasingly valuable when leveraged to make inferences about individuals. If companies keep ingraining safety scores and other sensing technologies into their cars, the information those tools collect is likely to be bought, sold, and exploited as inferences about consumers.
When it comes to designing cars, there are several other factors to consider. While Safety Scores might incentivize bad behavior among drivers, other design decisions can make the cars themselves more dangerous. These include fancy automatic doors that can trap passengers inside a burning car, or automatic windows that can’t sense fingers or other crushable extremities, a design decision that saw Tesla recall over one million vehicles. Such choices may bring shiny new features, but they carry steep tradeoffs when they aren’t evaluated ethically and practically before release. Ethical tools and methods that push developers and manufacturers to think through the practical implications of their latest technologies before deployment are vital. Zuckerberg’s now-infamous ethos of “moving fast and breaking things” may have worked at the dawn of social media platforms, but it should not apply to technologies like cars, where the very “things” at stake are human lives.