The headline says "How Tesla hid accidents to test its Autopilot", but the article never explains (1) how Tesla hid anything or, for that matter, (2) from whom Tesla hid this information.
It mashes together a Tesla data leak from 2022 and an unconnected lawsuit from 2026 without ever explaining how the two are connected.
Tesla has a pattern of making deceptive promises and deceptive disclosures but this article doesn't make that case at all.
It's not like they sold us leaded gasoline or "healthy tobacco" for decades.
This terrible statistic can't just be explained by aggressive owners or some other factor like that. Dodge has plenty of aggressive drivers buying its 700 hp V8 rear-wheel-drive vehicles, yet it has better fatal-accident rates than Tesla.
I’m convinced that Tesla makes unsafe cars and covers it up wherever they can.
The crash test safety awards their vehicles have won are clearly not representative of reality.
The self-driving system Tesla offers is only “ahead” of the competition because the competition is unwilling to sell an unsafe system.
- The documentary is from RTS, the main publicly owned media outlet in Switzerland. It is not the typical European public broadcaster: it is generally pretty well funded (unlike most), tends to produce high-quality content, and tends to be independent and fairly neutral (leaning slightly left, politically speaking).
- The video is in French because, in Switzerland, the media are divided into three groups associated with the regional languages: RTS for French, SSR for German and RSI for Italian. That's why you get a German translation.
- They are generally pretty cooperative and open-minded. If any of you want to submit English subtitles, just contact them; they might accept (I'm not promising anything).
I'll get downvoted but just giving you the facts. I'm glad the Autopilot name has been retired. Such a bad name, but maybe a good name because autopilot in planes can't see and avoid obstacles either.
The Fools Self Driving (FSD) contraption has once again been revealed as a scam, and it continues to be pushed onto its fans as a "self-driving" capability.
If they (Tesla) can hide fatal accidents, what else is Tesla not telling us?
FSD has built this generation's newest children of the magenta line.
> If FSD (Supervised) was active at any point within five seconds leading up to a collision event, Tesla considers the collision to have occurred with FSD (Supervised) engaged for purposes of calculating collision rates for the Vehicle Safety Report. This approach accounts for the time required for drivers to recognize potential hazards and take manual control of the vehicle. This calculation ensures that our reported collision rates for FSD (Supervised) capture not only collisions that occur while the system is actively controlling the vehicle, but also scenarios where a driver may disengage the system or where the system aborts on its own shortly before impact.[0]
In theory, that should more than cover the common perception-response times of roughly 1 to 1.5 seconds used as a rule of thumb for most car accidents. But I'm quite curious what research has been done on the disengagement process, as driver-assistance systems return control to the driver, and its impact on driver response times and overall alertness.
If drivers trust the car to handle braking and steering for them, are we really going to see perception-response times that low, or have we changed the behavior being measured? Instead of timing a direct response to a stimulus, we're now including the time required for drivers to re-engage their attention (even if they're nominally "paying attention"), transition to full control of the vehicle, and then react to the hazard they're now barreling down on.
For that matter, this approach makes the implicit assumption that pressing the brake pedal or turning the steering wheel is a sign of now-active control and awareness. Is it? Or could it just be a sort of instinctual reaction? I've been in the passenger seat when a driver slammed on the brakes, only to find myself moving my right foot as if to hit an imaginary brake pedal, even knowing I obviously wasn't the one driving. Hell, I remember my mom doing that back when I was learning to drive, during normal braking.
0. https://www.tesla.com/fsd/safety#:~:text=within five seconds
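To make the quoted rule concrete, here's a minimal Python sketch of that attribution window. The data layout and function name are my own illustration; only the 5-second look-back comes from Tesla's page.

    ATTRIBUTION_WINDOW_S = 5.0  # look-back window from the quoted methodology

    def attributed_to_fsd(collision_time_s, fsd_intervals):
        """True if FSD was active at any point in the 5 s before impact.

        fsd_intervals: (engage, disengage) timestamp pairs, in seconds,
        on the same clock as collision_time_s.
        """
        window_start = collision_time_s - ATTRIBUTION_WINDOW_S
        return any(engage < collision_time_s and disengage > window_start
                   for engage, disengage in fsd_intervals)

    # System aborts 1 s before a collision at t=100: still counted as FSD.
    print(attributed_to_fsd(100.0, [(40.0, 99.0)]))  # True
    # System disengaged 7 s before impact: outside the window, not counted.
    print(attributed_to_fsd(100.0, [(40.0, 93.0)]))  # False

Under that rule, the "aborted control less than one second before impact" cases discussed elsewhere in this thread would still count against FSD in the Vehicle Safety Report.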
> the self-driving feature had “aborted vehicle control less than one second prior to the first impact”
It seems right to me that the self-driving feature aborts vehicle control as soon as it is in a situation it can’t resolve. If there’s evidence that Tesla is actively using this to “prove” that FSD is not behind a crash, I’m happy to change my mind. For me, probably 5s prior is a reasonable limit.
Also, Tesla routinely claims that "FSD was not active at the time of the crash" in such cases, and they own and control the data, so it's the driver's word against theirs. They most recently used this claim for the person who almost flew off an overpass in Houston because FSD deactivated itself 4 seconds before impact[1]. They used it unironically as an excuse why FSD is not at fault, despite the fact that FSD created the situation in the first place.
[1] https://electrek.co/2026/03/18/tesla-cybertruck-fsd-crash-vi...
For normal incidents, 2 seconds is taken as a response time to be added for corrective action to take effect (avoidance, braking). I’d expand this for FSD because it implies a lower level of engagement, so you need more time to reengage with the car.
That's still not a good look.
And it does mean that FSD isn't to be as trusted as it is because if the car is putting itself in unresolvable situations, that's still a problem with FSD even if it isn't in direct control at the moment of impact.
If you don't pay constant attention, you will never notice when it slips in a bug or security issue
In the BEST CASE, this is a confluence of coincidences: engineering knows about it and leaves it "low prio, won't fix" because it's advantageous for metrics.
In the worst case, this is intentional.
In any case, the "right thing to do" is NOT turn off the cameras just before a collision, and yet it happens.
This is also Safety Critical Engineering 101. Like.... this would be one of the first scenarios covered in the safety analysis. Someone approved this behavior, either intentionally, or through an intentional omission.
I'm asking because I feel I must be missing something, inasmuch as to have my hands on the wheel while not controlling the car is an experience with which I'm familiar from skids and crashes, and thinking about it as an aspect of normal operation makes the hair stand up on the back of my neck. (Especially with no obviously described "deadman switch" or vigilance control!)
I’m in left lane on highway. Tesla ahead of me but quite a ways away.
I realize as I’m driving that the Tesla is moving quite slow for the left lane driving. And before you say it, yes there are lots of people speeding in highway left lanes too.
So - I passed on the right rather than tailgate. Look over and see a guy leaning back in his seat. No hands on wheel. Could’ve been asleep. And driving 10-15 mph slower than you’d expect in that lane.
To your point about using FSD the way you do: makes total sense to me. Which implies you would also cruise at the right speed for the lane you're in, unlike my example.
When I'm driving I know what I'm doing, what I'm planning to do and can scan the road and controls with that context.
Making me have to try and guess what the car is going to do at any given time is adding complexity to the process: am I changing lanes now, oh I guess I am because the autonomy thinks we should etc.
This is something I find frequently as well, more so with Musk-related things than Tesla. Lord knows there are plenty of things to be critical of.
If investigative journalism wants to regain the respect it once had, a smaller number of allegations backed by concrete evidence serves both the public and faith in the media better than large quantities of vague claims.
I admit if you want to sway public opinion, the latter is more effective, but is also a mechanism that doesn't require alignment with the truth. When that approach is normalised, it opens the door for anyone to shove popular opinion around.
Source for autopilot being disabled “seconds before a crash” also disabling cameras? (Sorry if I missed it above.)
It's just nice having a 'second set of eyes' in a sense. It's also very useful when driving in unfamiliar cities, where much of my attention would be spent on navigation and trying to recognize markings/signs/light positions that are atypical. FSD handles the minutiae of basic vehicle operation so I can focus on higher-level decisions. Generally, at inner-city speeds, safety and time-to-act are less of an issue, and it just becomes a matter of splitting attention between pedestrians, obstacles, navigation, etc. FSD is very helpful in these situations.
Too bad that project failed.
Airline pilots aren't supposed to take a nap, and there are occasionally articles about the various things that have gone wrong because the pilots weren't paying attention.
How is a car supposed to pre-empt when it is in a situation that is too challenging for it to navigate? Isn't it the driver who should see a situation that looks dicey for FSD and take control?
The former is to be expected. The latter seems likely to potentially make an already dangerous situation worse by suddenly throwing the controls to an inattentive driver at a critical moment. It seems like it would be much safer for the autopilot to continue doing its best while sounding a loud alarm to make it clear that something dangerous is happening.
If there was a significant problem, my liability only insurance premiums would be higher for the Tesla compared to a non Tesla. But they are not.
Individual tragic anecdotal incidents aside, the vagueness of the article really dilutes the merit of the claims.
Have you been in a Waymo? SAE Level 4 is here, and it’s safer than humans [1].
Or LLM users.
[1] https://electrek.co/2026/03/18/tesla-cybertruck-fsd-crash-vi...
Tesla makes unsubstantiated, exaggerated claims about capabilities of their system and directly encourages unsafe behavior. How many other manufacturers encourage test subjects to drive full speed ahead into a concrete divider "to see what happens"?
I don't ever want to be inside an AI driven vehicle that might decide to sacrifice me to minimize other damage
Basically the same as Kia. Why are Kias so bad?
I know you probably don't know off the top of your head, I'm hoping someone can chime in.
It's not an apples-to-oranges comparison.
If Autopilot was misleading, isn't Full Self-Driving too?
Tesla (or probably mostly Elon) was not selling "adaptive cruise control". It's selling "Autopilot" for $8k (now with a subscription AFAIK), with a pinky promise that "soon" or "next year" or "after two weeks" (jk) you essentially will set a destination, go to sleep and wake up at destination[1].
It's the same as saying "LLM != AI" and arguing that "ChatGPT is not AI; it's a glorified statistical model that is good at producing human-sounding text". Yeah, you and I understand this, but the average guy most likely does not and will get burned by it, because a dozen tech bros are burning billions of dollars trying to convince everyone it's a panacea for every problem you can think of.
[1] It's a slight exaggeration, and I won't spend time digging for quotes, but my main point is that this is what Tesla is selling to the average guy, not to nerds who can distinguish what's possible, what's working, and what levels of driver assistance exist.
Would you go to a driver's funeral and tell their family that um, ackshully it's sparkling autopilot?
What do you think you're adding to the conversation? You're trying to distract from the fact that real, actual people have been actually killed by this.
I started out writing a list of European countries with high quality public broadcasters, but the comment started looking silly since the list quickly grew very long.
It seems to me FSD for Tesla is not ready to go into Prod as it is now.
This is essentially what FSD does, today. When the system determines the driver needs to take over, it will sound an alert and display a take-over message without relinquishing control.
You’re correct inasmuch as we have no evidence there is “a significant problem.” But if Tesla is hiding evidence, as this article suggests, that might just be because lawsuits are still gaining steam.
Not all companies do illegal things.
IMO it’s also a distraction to blame it on “capitalism” or some “larger trend” rather than just pointing directly at the company and people responsible.
“The system is broken” line hasn’t worked for years now. Maybe if we stop blaming the system and start blaming the people?
"Prioritizing my life over every other concern" looks like plowing over pedestrians to get me to the hospital. I dont think you can legally sell a product that promises that.
This would affect both driver selection and performance during impact
Slap a ridiculously powerful drivetrain on it and a premium price tag and you have a Tesla
The main take-away for me from that page is that very few manufacturers seem to design for actual safety (only Volvo had good results), and Tesla was angry that a new test had been introduced which feels indicative of a bad safety culture.
I don’t like Elon but I also don’t think fiction and misleading stats serve anyone.
Tesla sells too many vehicles for it to be a “self selecting driver population” thing anymore. They sell almost as many Model Ys as Honda CRVs.
I have a hard time believing that driver profile has anything to do with it, and I especially dislike the temptation to explain away the data by making unsubstantiated excuses for the company.
Dodge has better statistics than Tesla and they almost exclusively sell muscle cars.
The lengths people will go to defend Tesla continue to astound me. Can’t we just say that they suck without making excuses for them?
For some reason you could turn this on when you're not driving on the highway. It doesn't do anything for traffic lights, stop signs, obstacles, etc. because it's just cruise control. It's also included with every vehicle (unlike FSD).
You mean deaths to multiple other people, do you not? Let's just call a spade a spade here and point out the genuine ethical dilemma.
What's the ratio between "bodies of your own kids" and "other human bodies you have no other connection with" in terms of what a "proper" AI that is controlling a car YOU purchased, should be willing to make in trade in terms of injury or death?
I think most people would argue that it's greater than 1* (unless you are a pure rationalist, in which case, I tip my hat to you), but what "SHOULD" it be?
*meaning, in the case of a ratio of 2 for example, you would require 2 nonfamiliar deaths to justify losing one of your own kids
This is a fair concern. I’m unconvinced it’s even remotely a real market or political pressure.
On the market side, Waymo is constrained by some combination of production and auxiliaries. (Tesla, by technology.) On the political side, the salient debate is around jobs, in large part because Waymo has put to bed many of the practical safety questions from a best-in-class perspective.
Kia has much smaller, cheaper cars with fewer safety features to market. Tesla made front-page news at one point claiming to have built the safest car ever produced.
Tesla is giving people driving their cars a false sense of security.
Tesla stans tell us they're the most luxurious and best-built cars on the road; in reality they're as poorly built as an economy car brand with a reputation for low quality, for people who don't want to pay for a Toyota.
The quality of European publicly owned media is highly country-specific and varies quite a lot:
- Some outlets are critically underfunded, and it shows (a tendency toward cheap sensationalism, superficial investigation or recycled content).
- Some are politically rooted (left or right) or controlled through direct/indirect government involvement.
But all considered, I would say the average one is still an order of magnitude better in terms of content quality and independence than the average private media outlet.
I can say the same about the foreign bureaus of state-owned media outfits like Deutsche Welle and Radio France Internationale, both of which actively rooted for the Romanian presidential candidate seen as closer to German and French interests (I'm talking about the last couple of rounds of Romanian presidential elections).
The Koch brothers stopped breaking the law because it was too expensive. Instead they started lobbying to get the laws changed. This is where the idea that the system is rotten comes from.
I don’t know what’s so hard to believe about the study. Tesla’s numbers are pretty similar to other low-performing brands.
I would suggest that all but the most narcissistic would have some limit to how many pedestrians they would be willing to run over to save their own lives. The demand that the AI have no such limit—“that the AI will prioritize my life and safety over literally any other concern”—is grotesque.
I mean deaths the AI predicts for other people, yes
And I'm not saying I would never choose to kill myself over killing a schoolbus full of children, but I'll be damned if a computer will make that choice for me.
As for me I actually like driving and I'm good at it. I'm not afraid of operating my own vehicle like so many people seem to be
> The study's authors make clear that the results do not indicate Tesla vehicles are inherently unsafe or have design flaws. In fact, Tesla vehicles are loaded with safety technology; the Insurance Institute for Highway Safety (IIHS) named the 2024 Model Y as a Top Safety Pick+ award winner, for example. Many of the other cars that ranked highly on the list have also been given high ratings for safety by the likes of IIHS and the National Highway Transportation Safety Administration, as well.
Sorry, I don't understand this. I'm just asking a question. Do you reply to every question with that?
“That’s it? If you’re gonna die, die with us?”
https://en.wikipedia.org/wiki/ISeeCars.com#Partnerships
https://x.com/larsmoravy/status/1860100416819855492
Looking for more. tl;dr is that NHTSA publishes accident rates but not mileage. ISeeCars has access to legacy-auto mileage from dealership data but guessed at the mileage for Teslas in the period in question. Their methodology was not released, and their mileage estimate was a fraction of the total mileage Tesla recorded over that period.
I do agree that Tesla could do a much better job with data transparency. But the claims of the ISC report are pretty difficult to reconcile with the crash test ratings they've gotten from many regulators across the world.
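To see why the guessed denominator matters so much, here's a toy Python calculation; every number below is invented for illustration, and none comes from the ISeeCars report or NHTSA.

    def fatal_rate_per_billion_miles(fatal_crashes, miles):
        return fatal_crashes / miles * 1e9

    fatal_crashes = 100        # hypothetical crash count from federal records
    estimated_miles = 2.0e9    # hypothetical third-party mileage guess
    recorded_miles = 6.0e9     # hypothetical fleet mileage actually driven

    print(fatal_rate_per_billion_miles(fatal_crashes, estimated_miles))  # 50.0
    print(fatal_rate_per_billion_miles(fatal_crashes, recorded_miles))   # ~16.7

If the mileage guess undercounts real mileage by 3x, the per-mile fatality rate comes out inflated by the same 3x; that denominator is exactly what's in dispute here.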
Also, they don't tout a single party line.
I'm not really thinking about when self driving is State of the Art Research. I'm talking about when it becomes table stakes.
Honestly the real truth is I just do not trust tech companies to make decisions that are remotely in my best interest anymore.
I can't even trust tech companies to build software that respects a "do not send me marketing emails" checkbox, why would I ever trust a car driven by software built by the same sort of asshole?
Why? They only pay out if you’re at fault. And if there aren’t final judgements in a deep pipeline of cases, premiums wouldn’t have a reason to adjust yet.
> Look, there is no way corporations would lie for their own interest. Especially when they spent tens of billions to develop something.
> It's not like they sold us leaded gasoline or "healthy tobacco" for decades.
You can't get into a trolley situation without driving unsafely for the conditions first, so companies focus on preventing that earlier issue.
What is the lowest likelihood of your own death you'd find acceptable in this situation?
Isn’t this entirely hypothetical? In reality, are any systems doing this calculus? Or are they mimicking humans, avoiding obstacles and reducing energies in a series of rapid-fire calls?
Replacing bad other drivers with good autonomous systems is likely a great trade off for you, even if you are in an autonomous vehicle that is eager to sacrifice you if there is an unavoidable incident.
You just said that you do not care how many people you kill - regardless of whether they are pedestrians, whether they are driving cars or whether they are on the bus. That is what people react to.
Idk, we solve it then. Motor vehicles kill 40,000 Americans a year [1]. I'm willing to cautiously align with Google, and maybe even Tesla, if they can take a bite out of those numbers.
The point isn't to demonize all corporations, it's to say specifically that a pathology of some megacorporations is broadscale lying to the public about the safety of their products for personal gain.
I am also assuming that a collision involving a Tesla gets at-fault determinations that are more accurate than for other brands, given the 6 or 7 cameras that are recording and should make determining fault easier.
Basically, if the Tesla was more dangerous to drive than a Toyota, because it was a Tesla, then insurance companies would be paying out more for insuring Teslas, and hence insurance companies would be charging higher liability only insurance premiums.
There's plenty we could talk about: e.g. the failure scenarios of shallow reasoning systems, the serious limitations on the resolution and capability of the actual Tesla cameras used for navigation, the failure modes of LIDAR, etc.
Instead we got "what if the car calculates the trolley problem against you?"
And, observationally, it's proof that a staggering number of people don't know their road rules (since every variant of it consists of concocting some scenario where slamming on the brakes happens far too late, yet you somehow know perfectly well there's not a preschool behind the nearest brick wall or something).
I remember running some basic numbers on this in an argument, and you basically wind up at: assuming an AI is fast enough to detect the situation at all, it is sufficiently fast that it can essentially always stop the car with the brakes, or else no amount of aggressive manoeuvring would have avoided the collision anyway.
Which is of course what the road rules are: you slam on the brakes. Every other option is worse and gets even worse when an AI can brake quicker and faster if its smart enough to even consider other options.
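Roughly, the numbers look like this (basic kinematics; the 0.8 g deceleration is a common dry-pavement rule of thumb, not a figure from any manufacturer):

    G = 9.81  # m/s^2

    def stopping_distance_m(speed_kmh, reaction_s, decel_g=0.8):
        v = speed_kmh / 3.6                        # km/h -> m/s
        reaction_dist = v * reaction_s             # distance covered before braking
        braking_dist = v ** 2 / (2 * decel_g * G)  # v^2 / (2a)
        return reaction_dist + braking_dist

    print(stopping_distance_m(100, 0.2))  # ~55 m, machine-like reaction time
    print(stopping_distance_m(100, 1.5))  # ~91 m, typical human reaction time

The braking distance itself (~49 m at 100 km/h) dominates either way; cutting the reaction time is the main lever, and straight-line braking already uses all the tyre's grip, which is why swerving rarely does better.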
> if the Tesla was more dangerous to drive than a Toyota, because it was a Tesla, then insurance companies would be paying out more for insuring Teslas
You may be over-indexing on how much work liability insurers do. I have an umbrella policy. It absolutely doesn't take into account the fact that I ski and fly a plane, for example. At the end of the day, their liability is capped, and it's usually easier to weed out risk by claims history than by running models on small premiums.
And my entire point is I trust the incentives of the insurer to accurately price risk and determine at fault more than a publication that needs clicks.
> And given Tesla is potentially mucking with the data, the exculpatory value of having all those cameras is diminished.
Does the data from Tesla even come into play for an insurer? They need to pay the damaged parties regardless of whether or not Tesla and its software are at fault. For premium pricing purposes, what Tesla does is irrelevant until after Tesla is found liable.
In the meantime, a collision with a Tesla is the same as any other auto brand’s. I don’t think Ford/Toyota/anyone else’s software comes into play. No auto brand picks up the liability for the driver (except Mercedes in some circumstances, I think), so no automaker is in the picture for payment in the event of an individual collision.
A data leak reveals that Tesla concealed thousands of incidents linked to its autonomous driving. Some accidents were fatal. A first verdict has ordered the manufacturer to pay 243 million dollars to the victims. An investigation broadcast on Temps Présent lifts the veil on these practices.
The autonomous car promised a dream; for some road users it has turned into a nightmare. An investigation reveals how Elon Musk and Tesla used the roads as a testing ground by rushing an AI-based autonomous driving system to market.
The automaker kept quiet about thousands of serious incidents. Some cost drivers and passengers their lives. Other road users found themselves involved without knowing it.
The investigation draws on a massive leak of internal Tesla data. These documents reveal the scale of the problem. The manufacturer had known for years about the failures of its systems.
>> Read also: Tesla forced to recall two million vehicles
The files show thousands of customer complaints. More than 2,400 concern spontaneous acceleration, and the number of accidents exceeds 1,000. In many cases the listed status was "unresolved".
Aerial view of an accident involving a Tesla. [RTS]
Some Tesla cars accelerated or braked sharply for no reason. In artificial intelligence, these malfunctions are called "hallucinations", as when ChatGPT gives a completely wrong answer.
On the road, the consequences are disastrous. The autonomous driving system can misread its environment. At high speed, these errors become deadly.
I didn't know the autopilot existed. When I discovered it, I felt like a guinea pig
The problem affects all road users. Many never agreed to be Tesla's guinea pigs, yet they find themselves exposed to the failures of the "Autopilot" system in spite of themselves.
>> Read on this topic: Drivers still "guinea pigs" for driver-assistance systems
Naibel Benavides was 22 years old. A pedestrian, she was killed in an accident involving a Tesla in "Autopilot" mode. Her companion Dillon Angulo survived with serious injuries.
"I didn't know the autopilot existed. When I discovered it, I felt like a guinea pig," says Dillon Angulo, who still suffers today from the consequences of the accident.
Naibel's family decided to take Tesla to court. They accuse the manufacturer of hiding crucial information. Tesla has always put the blame on the driver.
Investigators ran into unusual obstacles. The crash data should have been available in the vehicle's "black box". Yet Tesla claimed the data was corrupted.
The victims' lawyers brought in experts, who managed to recover the deleted data. That information proves Tesla knew about the failure from the very evening of the accident.
Tesla's Autopilot spots obstacles on the road. [RTS]
The car in "Autopilot" mode had detected the obstacles. Yet it did nothing to avoid the collision. Only an alert sounded just before impact.
A jury ordered Tesla to pay more than 243 million dollars in damages. The penalty marks a first among cases involving "Autopilot". The jurors found both Tesla and the driver responsible.
"This is a historic day for justice," declared the victims' lawyer. The verdict shows that manufacturers cannot use public roads as a laboratory.
Tesla tried to have the verdict overturned. In late February, a federal judge upheld the penalty against the manufacturer. The company can still appeal.
Tesla is the subject of several investigations in the United States. The Department of Justice is examining whether the manufacturer deceived consumers. The National Highway Traffic Safety Administration is also investigating.
>> Read also: Tesla avoids a lengthy trial over its driver-assistance technology
Whistleblowers have testified to the authorities. They describe a company that prioritizes speed at the expense of safety. The test version of the autonomous driving software was rushed to market even though several employees had warned management about the dangers of "Autopilot".
Experts expect further lawsuits to follow. The first verdict opens the way to new trials against Tesla.
François Roulet