There are obviously other reasons, but I would much rather buy a car from a company that either doesn’t collect my data or doesn’t feel the need to bury me in the media if I screw up.
So the solution to all of this is to lock up more executives who commit fraud or lie to the public.
Recently (after a 10-year battle), two former Volkswagen executives got prison time for the Dieselgate scandal.
Wish it had come faster, but that's a good start.
https://www.lbc.co.uk/crime/volkswagen-execs-jailed-fraud-de...
Another interesting thing is that this post does not appear on the HN front page. Surely it should be relevant here that one of the top AI / robotics / tech (whatever you want to call it) companies in the world is doing this kind of stuff in the open. But that might get people curious enough to actually look at what they are doing and how, and realize it's all smoke and mirrors, and that maybe there is much more shady business going on there.
I just can't wait for this house of cards to collapse.
> You should put yourself in the family’s shoes. If your daughter died in a car crash, you’d want to know exactly what happened, identify all contributing factors, and try to eliminate them to give some meaning to this tragic loss and prevent it from happening to someone else.
> It’s an entirely normal human reaction. And to make this happen in the US, you must go through the courts.
This (especially the very last point) is crucial. Whenever there is any kind of error or mistake by a big corporation, more often than not it's immediately covered up, and nothing is admitted publicly. But when a lawsuit is involved, the discovery process will lead to the facts being uncovered, including what the company knows.
I am glad that they were able to uncover this. Someone I know lived in an apartment complex that was made uninhabitable due to an obvious fault of the owner's, but they didn't get a straight answer about what happened until they sued the owner and got the details in discovery, a couple of years after the incident. This is the only way to get to the facts.
Tesla is comparatively a bull in a china shop. Raise your hand if you would trust Tesla over Waymo to autonomously drive your young children for 1,000 miles around a busy metro. That's what I thought.
Tesla deserves regulating.
"Tesla awards boss Elon Musk $29bn in shares" - https://www.bbc.com/news/articles/cz71vn1v3n4oTesla must pay portion of $329M damages after fatal Autopilot crash, jury says
For context though, note that this crash occurred because the driver was speeding, using 2019 autopilot (not FSD) on a city street (where it wasn't designed to be used), bending down to pick up a phone he dropped on the floor, and had his foot on the gas overriding the automatic braking: https://electrek.co/2025/08/01/tesla-tsla-is-found-liable-in... The crash itself was certainly not Tesla's fault, so I'm not sure why they were stonewalling. I think there's a good chance this was just plain old incompetence, not malice.
Autopilot is cruise control. When you understand this, claiming that Tesla is partially at fault here does not match the existing expectations of other driver assistance tech. Just because Tesla has the capability of disabling it doesn't mean they have to.
This all comes down to an interpretation of marketing speak. If you believe "autopilot" is misleading, you'd agree with the jury here; if you don't, you wouldn't. I'm no lawyer and don't know the full scope of requirements for autopilot-like features, but it seems that Tesla is subject to unfair treatment here given the number of warnings you have to completely ignore and take no responsibility for. I've never seen such clear warnings on any other car with similar capabilities. I can't help but think there's maybe some politically driven bias here, and I say that as a liberal.
Happy to be convinced otherwise. I do drive a Tesla, so there's that.
> Within ~3 minutes of the crash, the Model S packaged sensor video, CAN‑bus, EDR, and other streams into a single “snapshot_collision_airbag-deployment.tar” file and pushed it to Tesla’s server, then deleted its local copy.
Putting aside the legal implications wrt evidence, etc — what is the ostensible justification for this functionality? To a layperson, it's as bizarre as designing a plane's black box to go ahead and delete data if it somehow makes a successful upload to the FAA cloud server. Why add complexity that reduces redundancy in this case?
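Concretely, the contrast is between something like the following two flows. This is a made-up Python sketch, not Tesla's actual firmware; the paths, archive name handling, and the upload function are placeholders.

```python
# Hypothetical sketch of the two designs being contrasted; not Tesla's code.
# SNAPSHOT_DIR and upload_to_server are made up for illustration.
import os
import tarfile

SNAPSHOT_DIR = "/var/crash"  # hypothetical local staging area
ARCHIVE = os.path.join(SNAPSHOT_DIR, "snapshot_collision_airbag-deployment.tar")

def package_snapshot(stream_paths):
    """Bundle sensor/CAN-bus/EDR streams into a single tar archive."""
    with tarfile.open(ARCHIVE, "w") as tar:
        for path in stream_paths:
            tar.add(path)
    return ARCHIVE

def upload_and_delete(upload_to_server):
    """The design described in the article: push, then remove the local copy."""
    upload_to_server(ARCHIVE)
    os.remove(ARCHIVE)  # no local redundancy after this point

def upload_and_retain(upload_to_server):
    """The alternative being asked about: push, keep the copy until space is needed."""
    upload_to_server(ARCHIVE)
    # keep the archive; let a separate retention policy reclaim space later
```

Deleting right after a successful push frees local storage, but it also means the only surviving copy of the evidence sits on the manufacturer's servers, which is exactly the redundancy question being asked above.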
My autosteer will gladly drive through red lights, stop signs, etc.
And the fact that we have telemetry at all is pretty amazing. In most car crashes there's zero telemetry. Tesla is the exception, even though they did the wrong thing here.
So this is also a failure of the investigator.
That is - gamble that GOP alignment leads to regulatory capture such that the bar is lowered enough that they can declare the cars safe.
Send the corporation to jail. That means it cannot conduct business for the same amount of time that we would put a person in jail.
It just looks stupid to me in a way that makes me more likely to discount your post.
The fact that Tesla purposely misled the investigators and hid evidence was why the jury awarded such a large sum.
If you or I did this, do you think a judge would care? No. We would be sitting in jail with a significant fine to boot.
The point is that these businesses consider this a "cost of doing business" until someone is actually put in jail.
Perhaps hiding the data like this _is_ their process.
Cruise had to shut down after less than this but, because Elon has political power over regulation now, a Tesla could drive right through a farmers market and they wouldn't have to pause operations even for an afternoon.
Buying SPY, my mistake. Being incentivized to put money in my 401k... That is a bit harder to solve.
The meme of Hanlon's Razor needs to die. Incompetence from a position of power is malice, period.
Letting people use autopilot in unsafe conditions is contributory negligence. Given their marketing, that's more than worth 33% of the fault.
That they hid this data tells me everything I need to know about their approach to safety. Although nothing really new considering how publicly deceitful Musk is about his fancy cruise-control.
Normal consumers don't understand the difference between "Autopilot" and "FSD".
FSD will stop at intersections/lights etc - Autopilot is basically just cruise control and should generally only be used on highways.
They're activated in the same manner (FSD replaces Autopilot if you pay for the upgrade or $99/month subscription), and again for "normal" consumers it's not always entirely clear.
A friend of mine rented a Tesla recently and was in for a surprise when the vehicle did not automatically stop at intersections on Autopilot. He said the previous one he rented had FSD enabled, and he didn't understand the difference.
IMO Tesla just needs to phase out 2019 AP entirely and give everyone some version of FSD (even if it's limited), or geofence AP to highways only.
> Update: Tesla’s lawyers sent us the following comment about the verdict:
> Today’s verdict is wrong and only works to set back automotive safety and jeopardize Tesla’s and the entire industry’s efforts to develop and implement life-saving technology. We plan to appeal given the substantial errors of law and irregularities at trial. Even though this jury found that the driver was overwhelmingly responsible for this tragic accident in 2019, the evidence has always shown that this driver was solely at fault because he was speeding, with his foot on the accelerator – which overrode Autopilot – as he rummaged for his dropped phone without his eyes on the road. To be clear, no car in 2019, and none today, would have prevented this crash. This was never about Autopilot; it was a fiction concocted by plaintiffs’ lawyers blaming the car when the driver – from day one – admitted and accepted responsibility.
---
Personally, I don't understand how people can possibly be happy with such verdicts.
Recently, in 2025, DJI got rid of their geofences as well, because it's the operator's responsibility to control their equipment. IIRC, DJI had the support of the FAA in removing the geofencing limitations, with the FAA expressly confirming that geofencing is not mandated.
These sorts of verdicts, which blame the manufacturer for operator errors, are exactly why we can't have nice things.
It's why we get WiFi and 5G radios, and boot loaders, that are binary-locked, with no source code availability, and which cannot be used with BSD or Linux easily, and why it's not possible to override anything anywhere anymore.
Even as a pedestrian, I'm glad that Tesla is fighting the good fight here. Because next thing I know, these courts will cause the phone manufacturers to disable your phone if you're walking next to a highway.
Does he still? I wouldn't be so sure.
If you're big on privacy, things like logging incorrect password attempts is a big no-no. We have to "thank" the privacy advocates for that.
How do you think the owner of the car would feel if the file was visible in plain sight to the next owner of the vehicle?
Further, it's one payout for a single accident. Since then there have been 756 more Tesla Autopilot crashes. If each of those got a similar payout, that would be 80% of Tesla's current market cap. Obviously they won't all pay out that much, but if on average an Autopilot crash cost Tesla $56 million to handle (settlement + legal expenses + lost sales), that would wipe out the company's profit entirely. There's no possible justification for leaving such a massive liability in place.
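Taking the figures in that comment at face value, the back-of-the-envelope arithmetic looks like this (the per-crash average is the commenter's assumption, not an audited number):

```python
# Back-of-the-envelope arithmetic using only the figures quoted above;
# the $56M average is the commenter's assumption, not an audited figure.
crashes = 756                 # subsequent Autopilot crashes cited in the comment
this_verdict = 329e6          # total damages in this case (USD)
avg_cost_per_crash = 56e6     # assumed average cost (settlement + legal + lost sales)

print(f"If every crash paid like this one: ${crashes * this_verdict / 1e9:.0f}B")
print(f"At the assumed $56M average:       ${crashes * avg_cost_per_crash / 1e9:.1f}B")
```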
The problem is for several years they actively targeted a customer base incapable of understanding the limitations of the mis-named system they advertised. (Those customers capable of understanding it were more likely to buy vehicles from brands who advertised more honestly.) While the current approach of targeting Nazi and friend-of-Nazi customers might eventually change the story (with its own risks and downsides, one imagines), for the time being it seems reasonable that Tesla bear some responsibility for the unsafe customer confusion they actively courted.
And that's exactly why the law is supposed to have a Reasonable Person Standard.
https://en.wikipedia.org/wiki/Reasonable_person
When the majority of Tesla's owners are completely unaware of the viability of Autopilot even in 2025, how exactly does it make any sense to blame the marketing when someone was so trusting of the unproven technology back in 2019? Especially given so many reports of people being saved by said technology in other circumstances?
I imagine these things will get better when courts are no longer able to find jurors who are unfamiliar with the attention-monitoring nags that Teslas are famous for.
A bit more nuanced version is that incompetence from a position of power is a choice.
In other words, if you bought the car because you kept hearing the company say "this thing drives itself", you're probably going to believe that over the same company putting a "keep your eyes on the road" popup on the screen.
Of course other companies have warnings that people ignore, but they don't have extremely successful marketing campaigns that encourage people to ignore those warnings. That's the difference here.
the Center for Science in the Public Interest filed a class-action lawsuit
The suit alleges that the marketing of the drink as a "healthful alternative" to soda is deceptive and in violation of Food and Drug Administration guidelines.
Coca-Cola dismissed the allegations as "ridiculous," on the grounds that "no consumer could reasonably be misled into thinking Vitaminwater was a healthy beverage"
"Auto Pilot: a device for keeping an aircraft or other vehicle on a set course without the intervention of the pilot."
"Cruise Control: an electronic device in a motor vehicle that can be switched on to maintain a selected constant speed without the use of the accelerator."
The article says no warnings were issued before the crash.
So which warning did the driver miss?
Lol is this for real? No amount of warnings can waive away their gross negligence. Also, the warnings are clearly completely meaningless because they result in nothing changing if they are ignored.
> Autopilot is cruise control
You're pointing to "warnings" while simultaneously saying this? Seems a bit lacking in self-awareness to think that a warning should carry the day, but calling cruise control "autopilot" is somehow irrelevant?
> I can't help but think there's maybe some politically driven bias here
Look only to yourself, Tesla driver.
I've owned two Teslas (now a Rivian/Porsche EV owner). Hands down, Tesla has the best cruise control technology on the market. Therein lies the problem. Musk constantly markets this as self-driving. It is NOT. Not yet at least. His mouth is way way way ahead of his tech.
Heck, stopping for a red light is a "feature", even though the car is perfectly capable of recognizing one and doing so. This alone should warrant an investigation, and it's something that I, as a highly technical user, completely fell for when I first got my Model 7 delivered... ran through a red light trying out Autopilot for the first time.
I'm honestly surprised there are not more of these lawsuits. I think there's a misinterpretation of the law by those defending Tesla. The system has a lot of legalese safeguards and warnings. But the MARKETING is off. WAY OFF. And yes, users listen to marketing first.
And that ABSOLUTELY counts in a court of law. You folks would also complain about an obtuse EULA, and while this isn't completely apples to apples here, Tesla absolutely engages in dangerous marketing speak around "auto pilot", eliciting a level of trust from drivers that isn't there and that they should not be encouraging.
So sorry, this isn't a political thing (and yes, disclaimer, also a liberal).
Signed... former Tesla owner waiting for "right around the corner" self driving since 2019...
And it didn't warn users about this lack of capabilities until it was forced to do so. Those warnings you're talking about were added after this accident occurred as part of a mandated recall during the Biden administration.
1) Embedded systems typically do not allow data to grow without bound. If they were going to keep debugging data, they'd have to limit it to the last N instances or so. In this case N=0. It seems like the goal here was to send troubleshooting data, not keep it around (a sketch of what bounded retention could look like follows below).
2) Persisting the data may expose the driver to additional risks. Beyond the immediate risks, someone could grab the module from the junkyard and extract the data. I can appreciate devices that take steps to prevent sensitive data from falling into the hands of third parties.
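Point 1 is essentially a bounded-retention policy. A minimal sketch of what "keep the last N instances" could look like, with every name and the value of N made up for illustration:

```python
# Illustrative only: a "keep the last N snapshots" policy. Paths, naming, and N
# are assumptions; per the article, Tesla effectively chose N = 0.
import os
from pathlib import Path

SNAPSHOT_DIR = Path("/var/crash")  # hypothetical staging directory
MAX_SNAPSHOTS = 3                  # N most recent snapshot archives to keep

def prune_old_snapshots() -> None:
    """Remove the oldest snapshot archives, keeping only the newest MAX_SNAPSHOTS."""
    snapshots = sorted(SNAPSHOT_DIR.glob("snapshot_*.tar"), key=os.path.getmtime)
    stale = snapshots[:-MAX_SNAPSHOTS] if MAX_SNAPSHOTS else snapshots
    for path in stale:
        path.unlink()
```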
Even California's system is lax enough that you can drive a Tesla semi through it.
The MCI Worldcom fraud, which broke shortly after Enron, might also have doomed AA (they were the auditor for both major frauds of 2002). MCI Worldcom filed for bankruptcy before it could be hit with criminal charges, and the SEC ended up operating MCI-W in the bankruptcy, because the fines were so large and are senior to all other debts, so they outmuscled all of the other creditors in the bankruptcy filings. Which was why they weren't hit with criminal charges- they already belonged to the Government. There hasn't been much stomach for criminal charges against a corporation ever since.
The fact that the Supreme Court has spent the past few decades making white collar crimes much harder to prosecute (including with Arthur Andersen, where they unanimously reversed the conviction in 2005) is another major factor. The Supreme Court has always been terrible, and gets far more respect than it deserves.
Make negligence unprofitable and the profit-optimizers will take care of the rest. Then 80 years later, when people get too used to "trustworthy" corporations, we can deregulate everything and repeat the cycle.
Negative, cheesy, clickbait, rage inducing etc headlines do seem to get more clicks. There is a reason why politicians spend more time trash talking opponents than talking positively about themselves. Same goes with attack ads.
I have no doubt a majority of people will say they despise these kinds of pictures, like YouTube thumbnails, yet the cold numbers tell the opposite.
This is both because such incompetence costs people's lives, and because they have enough money that they could definitely hire more or better people to re-check and add redundant safety features into their products.
The problem is, they do not want any accountability for claiming that their cars are "self-driving", or for any of their other errors or willful endangerment of the public.
And in both cases they should be held accountable.
Well, I didn't know about this. I didn't consider it. I also would then need to spend a non-tax deductible amount on this. And IIRC Tesla entered the Fortune 500 before 2022 when that TSLS started. Then I'd have to actually do the calculation, and continue doing the calculation as SPY/my 401k adjusted each month or so.
'Easy'
Nevertheless people sometimes still manage to use it in an incorrect way that injures their hands such that it will take a year of treatment and physical therapy before they can use their hands again.
Some people view the point of product liability laws as making those who are to blame for the injury pay. Under that view I'd not be on the hook.
Another point of view is that it should be about who can most efficiently handle dealing with these injuries. Either someone is going to have to pay for the treatment and therapy to enable these people to use their hands again or they are going to probably end up on disability for the rest of their lives which will be paid for by government (and so indirectly by most of the rest of us).
Who should that someone be?
One candidate is the user's health insurance company.
One problem with that is that in the US there are plenty of people without health insurance. They would be able to get free treatment right after the injury at any emergency room if they can't afford to pay for that treatment, but that would only get them to the point they aren't in any more danger. It would not include the follow ups needed to actually restore function, so there is still a good chance they will end up on disability. Also that "free" emergency room treatment will actually be paid for by higher costs for the rest of us.
Even if the user does have insurance that pays for their treatment and therapy, ultimately that money is coming from premiums of the people that use that insurance company.
This health insurance approach then ultimately comes down to socializing the cost among some combination of two broad groups: (1) those who have health insurance, and (2) taxpayers in general.
Another candidate is me, the gadget maker. Make it so I am liable for these injuries regardless of who was at fault. I know exactly how many of these gadgets are out there. If all injury claims from people using them go through me I'll have full data on injury rates and severity.
That puts me in a good position to figure out how much to raise the price of my gadgets to establish and maintain a fund to pay out for the injuries.
This still socializes the costs, but now instead of socializing it across those two broad groups (everyone with health insurance and taxpayers) it is getting socialized across all purchasers of my gadgets.
The people who favor this strict liability approach also argue that I'm the best candidate for this because I'm in the best position to try to reduce injuries. If health insurance companies were the ones dealing with it and they noticed injury rates are going up there isn't really anything they can do about that other than raise premiums to cover it.
If I'm the one dealing with it and notice injury rates are going up I can deal with it the same way--raise prices so my injury fund can cope with the rising injuries. But I also have the option to make changes to the gadgets such as adding extra safety features to reduce injuries. I might come out ahead going that route and then everybody wins.
It also didn't really seem to impact Tesla's decision to keep pushing Full Self Driving and Robotaxi despite it having obvious severe flaws (because Tesla sees this rollout as something holding up its stock price).
When you get your Tesla and attempt to turn on the features described, it has a dialog with a warning and you have to agree to understanding before proceeding. If you choose to ignore that, and all the other clearly marked and accessible warnings in the manual and not learn how to operate it, is that not on you the licensed driver? Isn't that what regulations and licensure are for?
I'm very much in favor of consumer protections around marketing, but in this case there were none that were clearly defined to my knowledge.
One, you don't need a license to buy a non alcoholic beverage. Two, while the FDA has clear guidelines around marketing and labeling, I'm not aware of any regulatory body having clear guidelines around driver assistance marketing. If they did it wouldn't be controversial.
All an auto pilot on an aircraft does is keep the plane flying in a straight line at a constant speed. It mostly doesn't do obstacle avoidance, or really anything else. Yes, you don't need intervention of the pilot, because it turns out going in a straight line in an airplane is pretty hard to screw up.
From that standard at least, modern cruise controls are more capable than airplane auto pilots. There is a widespread belief on HN, however, that people are generally very dumb and will mistake autopilot for something more like FSD.
That is not how it’s marketed at all.
That is definitely what auto pilot means in the aeronautical and maritime sphere.
But a lot of the general public has a murky understanding of how an autopilot on a ship or a plane works. So a lot of them, probably the majority, will look at the meaning of those two words and land on "autopilot means automatic pilot", which basically ends up being self-driving.
Sure, in a perfect world they would look up what the term means in the sphere they do not know and use it correctly, but that is not the world we live in. We do not get the general public we want; we have to live with the one we got.
Would Boeing or John Deere be responsible for marketing language, or just the instruction manual? We know the latter is true. Is there any evidence of the former? Intuitively I would say it's unlikely we'd blame Boeing if a pilot were misled by marketing materials. Maybe that has happened, but I haven't found anything of that sort (please share if aware).
This is the responsibility of a licensed driver. I don't know how a Mercedes works, but if I crash one because I misused a feature clearly outlined in their user manual, Mercedes is not at fault for my negligence.
That’s not true
> Do I still need to pay attention while using Autopilot?
> … Before enabling Autopilot, you must agree to “keep your hands on the steering wheel at all times” and to always “maintain control and responsibility for your vehicle.” Once engaged, Autopilot will also deliver an escalating series of visual and audio warnings, reminding you to place your hands on the wheel if insufficient torque is applied. If you repeatedly ignore these warnings, you will be locked out from using Autopilot during that trip.
> If you repeatedly ignore the inattentive driver warnings, Autosteer will be disengaged for that trip. If you receive several ‘Forced Autopilot Disengagements’ (three times for vehicles without a cabin camera and five times for vehicles with a cabin camera), Autosteer and all features that use Autosteer will be temporarily removed for approximately one week.
What part of how autopilot is marketed do you find to be gross negligence?
I would ask, what is the existing definition of autopilot as defined by the FAA? Who is responsible when autopilot fails? That's the prior art here.
Additionally, if the NTSB failed to clearly define such definitions and allowances for marketing, is that the fault of Tesla or the governing body?
I'm pretty neurotic about vehicle safety and I still don't think this clearly points to Tesla as being in the wrong with how they market these features. At best it's subjective.
Are there clear guidelines set for labeling and marketing of these features? If not, I'm not sure how you can argue such. If it was so clearly wrong it should have been outlined by regulation, no?
If that's the case, this is certainly a stronger argument. I thought autosteer and FSD always had this dialog. As far as I know these dialogs go back 10 years and this was April 2019.
Even still, I find retroactive punishment of this to be dubious. If Tesla is liable to some degree, then so should the NHTSA be, to the extent that anyone who makes the rules can be, for not defining this well enough to protect drivers.
If the data can expose the driver to additional risks, then the driver can be exposed by someone stealing the vehicle and harvesting that data. Again, that can be trivially protected against using encryption, which would also protect it in the instance that communication is disrupted and the tar never gets uploaded and deleted.
If I were implementing such a system (and I have), I could see myself deleting the temporary file without much thought. I would still have built a way to recreate the contents of the tarball after the fact (it's been a requirement from legal every time I've scoped such a system). Tesla not only failed to do that, but avoided disclosing that any such file was transferred in the first place, so the plaintiffs wouldn't know to request it.
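As a rough illustration of both points (encrypt what stays on the vehicle, and keep enough metadata to reproduce what was sent), something like this would do. It is a hypothetical design, not a claim about Tesla's system, and it uses the third-party cryptography package; key management is deliberately left out.

```python
# Hypothetical sketch: retain an encrypted local copy and a manifest so the
# upload's contents can be reproduced and verified later. Not Tesla's design.
import hashlib
import json
from cryptography.fernet import Fernet  # pip install cryptography

def retain_encrypted_copy(tar_path: str, key: bytes) -> str:
    """Keep a local copy that a junkyard salvager can't read."""
    with open(tar_path, "rb") as f:
        ciphertext = Fernet(key).encrypt(f.read())
    enc_path = tar_path + ".enc"
    with open(enc_path, "wb") as f:
        f.write(ciphertext)
    return enc_path

def write_manifest(stream_paths: list[str], manifest_path: str) -> None:
    """Record exactly what went into the archive so it can be recreated later."""
    manifest = {}
    for path in stream_paths:
        with open(path, "rb") as f:
            manifest[path] = hashlib.sha256(f.read()).hexdigest()
    with open(manifest_path, "w") as f:
        json.dump(manifest, f, indent=2)
```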
Mixing up who is responsible for driving the car is very much Tesla's fault, starting with their dishonest marketing.
Are you also arguing that Tesla didn’t withhold data, lie, and misdirect the police in this case, as the article claims? Seems to me that Tesla tried to look as guilty as possible here.
I guess you could go even more nuanced and say sometimes incompetence from a position of power is a choice, and I would agree with that, but now the statement seems so watered down as to be almost meaningless.
A corporation can hire people and put processes in place to arbitrarily minimize (or not) the chance of a mistake in areas that matter to them. In this case, they did just that; only the thing being optimized for was “not giving data to the authorities”.
The evidence of this trial does not support an “oopsie poopsie we messed up so sowwy” interpretation of events. Tesla’s paid representatives went out of their way—repeatedly—to lie, mislead, and withhold evidence in order to avoid scrutiny. Fuck them and everyone involved with that.
However, even though Autopilot doesn't obey traffic control devices, it still DOES issue warnings if taking over may be required.
Most Tesla owners I've talked with, are actually completely unaware of the v12 and v13 improvements to FSD, and generally have the car for other reasons than FSD. So, if anything, Tesla is actually quite behind on marketing FSD to the regular folk, even those who are already Tesla owners.
I do like the idea of incentivizing companies to take all reasonable steps to protect people from shooting themselves in the foot, but what counts as "reasonable" is also pretty subjective, and liability for having a different opinion about what's "reasonable" seems to me to be a little capricious.
For example, the system did have a mechanism for reacting to potential collisions. The vehicle operator overrode it by pushing the gas pedal. But the jury still thinks Tesla is still to blame because they didn't also program an obnoxious alarm to go off in that situation? I suppose that might have been helpful in this particular situation. But exactly how far should they legally have to go in order to not be liable for someone else's stupidity?
I agree with you that doesn’t matter when it comes to covering up/lying about evidence.
They could have been 0.5% at fault. Doesn’t mean that was ok.
If those executives valued not being incompetent in any specific given way (especially in the ways that harm the many), they have the power to change that. They can say "no, we need to make sure this never happens again."
The fact that they choose not to do that, in so, so many cases, has a variety of causes, but in the end what it fundamentally boils down to is that they choose not to do it.
Friends are unlikely to be randomly malicious, while assuming malice for every mistake is quickly going to ruin the friendship. So Hanlon's razor makes sense there.
Corporations, on the other hand, cannot be assumed to have morals or to care about you. You are already fungible to them, so assuming malice until proven otherwise is not going to make things worse for you. Meanwhile, giving corporations the benefit of the doubt allows the truly malicious ones to take advantage of it, and unlike your friends, they don't really have any other feedback loops that keep them honest.
* There was no record of a “Take Over Immediately” alert, despite approaching a T-intersection with a stationary vehicle in its path.
* Moore found logs showing Tesla systems were capable of issuing such warnings, but did not in this case.
The article says that some government agency demanded that Tesla actually geofence out the areas its software is incapable of handling. I am not a Tesla owner and did not read the fine-print manual: does Tesla reserve the right to also not sound the alarm when the car is going at speed straight into another car while the driver does not have their hands on the wheel? That sounds bad: the driver is not steering, the car is driving in an area it is incapable of handling, it is heading into an obstacle, and the alarm is not sounding. (Still, from the article it seemed like this was a glitch that they were trying to hide, and that it was not supposed to happen.)
Anyway, Tesla was forced to show the data, and they did try to hide it, so even if fanboys attempt to put the blame 100% on the driver, the jury and Tesla's own actions tell us that the software did not function as advertised.
Perhaps, but does it hurt more or less than getting life-changing injuries and your partner killed by a Tesla?
> Tesla infamously doesn't have a marketing team
Come on, obviously they do marketing. "We don't have a marketing team" is itself marketing. Here's their most popular YouTube ad, for example: https://www.youtube.com/watch?v=tlThdr3O5Qo
That video, which is called "Full Self-Driving" even though it came out when Autopilot was the only capability the cars had, was coincidentally released 3 days before the crash we're discussing. Do you think an ad like that, which simply shows the car driving itself and links to a page about Autopilot in the description, might lead someone to believe that Autopilot will do what the video shows? Again, remember that there was no separate FSD product at the time.
When I worked on unmanned vehicles, you could have one operator control multiple speedboats because you typically had minutes to avoid collisions. Splitting attention would not be feasible with a car on cruise control, because you are never more than a few seconds away from crashing into something solid.
Actually, the former is true. Courts and juries have repeatedly held that companies can be held responsible for marketing language. They are also responsible for the contents of their instruction manual. If there are inconsistencies with the marketing language it will be held against the company because users aren't expected to be able to reconcile the inconsistencies; that's the company's job. Thus, it's irrelevant that the small print in the instruction manual says something completely different from what all the marketing (and the CEO himself) says.
The "autopilot is limited" argument would have worked 20 years ago. It doesn't today. Modern autopilots are capable of maintaining speed, heading, takeoff, and landing so they're not just pilot assistance. They're literally fully capable of handling the flight from start to finish. Thus, the constant refrain that "autopilot in cars is just like autopilot in planes" actually supports the case against Tesla.
> There is a widespread belief on HN, however, that people are generally very dumb and will mistake autopilot for something more like FSD.
I think the error here is that you're underestimating just how rare accidents are. Let's imagine there's some monstrously dangerous feature X that results in fatal collisions 10% of the time when misused. If we assume a person either uses it correctly or consistently misuses it, how many people (1 in N) need to be misusers to double the US fatality numbers? You only need about 1 misuser in every 500-2,000 drivers, depending on how you do the numbers. Now obviously autopilot isn't as dangerous as our hypothetical feature X here, but do you think it's reasonable to argue that a small fraction of a percent of autopilot users might be misled about its capabilities by the name? I think that's a long way from saying "people are generally very dumb".
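For anyone who wants to check the arithmetic, here is the rough calculation. The 10% figure is the hypothetical from above; the fatality and driver counts are approximate public figures (about 40,000 US road deaths per year, roughly 230 million licensed drivers) used here as assumptions.

```python
# Rough reconstruction of the 1-in-N arithmetic above. The 10% comes from the
# hypothetical feature X; the other inputs are approximate public figures and
# are assumptions for illustration only.
annual_fatalities = 40_000            # approx. US road deaths per year
licensed_drivers = 230_000_000        # approx. US licensed drivers
p_fatal_if_misused = 0.10             # hypothetical feature X, misused

extra_fatalities = annual_fatalities                  # "double the US fatality numbers"
misusers_needed = extra_fatalities / p_fatal_if_misused
print(f"1 misuser in every ~{licensed_drivers / misusers_needed:.0f} drivers")
# -> roughly 1 in 575; varying the inputs gives the 1-in-500 to 1-in-2,000 range
```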
> ...we have to live with the [world] we got.
There was nothing inevitable in how we reached this situation, and no reason to let it continue.
>> If you repeatedly ignore the inattentive driver warnings, Autosteer will be disengaged for that trip. If you receive several ‘Forced Autopilot Disengagements’ (three times for vehicles without a cabin camera and five times for vehicles with a cabin camera), Autosteer and all features that use Autosteer will be temporarily removed for approximately one week.
There are videos of people on autopilot without their hands on the wheel...
Drivers need to be paying attention, but is it not possible that Tesla could also do more to make things clear?
That's ridiculous. Tesla chose to make dangerous claims that resulted in the loss of dozens of lives. Tesla alone should be liable for this, not the regulator that eventually forced them to add the disclaimers so that consumers would have at least some modicum of notice that Tesla's advertising was actually just a package of lies.
The fact that it's not an autopilot is a great start.
>I would ask, what is the existing definition of autopilot as defined by the FAA? Who is responsible when autopilot fails? That's the prior art here.
I don't think the FAA defines terms, and prior art is something specific to patents that has no relevance to the worlds of marketing and product safety.
> Additionally, if the NTSB failed to clearly define such definitions and allowances for marketing, is that the fault of Tesla or the governing body?
The NTSB does not approve marketing, nor does it provide such definitions. On what basis do you suggest they did anything of the sort, or that Tesla needed their approval?
>> Additionally, if the NTSB failed to clearly define such definitions and allowances for marketing, is that the fault of Tesla or the governing body?
It's Tesla's. They marketed a product that does not do what they claim it does. The fact that when it does not do those things it can cause (deadly) harm to others, is why they received such a steep adverse judgment.
>I'm pretty neurotic about vehicle safety and I still don't think this clearly points to Tesla as being in the wrong with how they market these features. At best it's subjective.
Who cares how neurotic you think you are? You haven't come across reasonable in this conversation at all.
> At best it's subjective.
It's objectively not autopilot.
Given that storage is a finite resource, keeping the tar around after it was confirmed in the bucket would be pure waste.
For an article that is supposed to at least smell like journalism, it looks so trashy.
Perhaps, but perhaps there is a bigger set of constraints not visible to an outsider which the "bureaucrats" are trying to satisfy.
Hsu v. Tesla, Inc. (Los Angeles Superior Court, Case No. 20STCV18473). Autopilot allegedly swerved a Model S into a median on city streets; plaintiff also claimed an unsafe airbag deployment and misrepresentation. Final result: Tesla won (defense verdict). Jury awarded zero damages and found no failure to warn; verdict entered April 21, 2023.
Molander v. Tesla, Inc. (Riverside County Superior Court). Fatal 2019 crash (driver Micah Lee) where plaintiffs said Autopilot suddenly veered off the highway into a palm tree; suit alleged defective Autopilot and failure to warn. Final result: Tesla won (defense verdict). Jury sided with Tesla on October 31, 2023; no plaintiff recovery.
Huang v. Tesla, Inc. (Santa Clara County Superior Court). 2018 Mountain View fatal crash (Walter Huang) while Autopilot was engaged; wrongful death, defect, and failure-to-warn theories. Final result: Settled confidentially in April 2024 before trial.
Estate of Jeremy Banner v. Tesla, Inc. (Palm Beach County; related 4th DCA appeal). 2019 Delray Beach fatal crash where a Model 3 on Autopilot underrode a crossing tractor-trailer; plaintiffs alleged Autopilot was defective and oversold. Appellate development: In Feb. 2025, Florida’s Fourth DCA limited the case by blocking punitive damages that a trial court had allowed. Final result: Settled in July 2025 (confidential) before a compensatory-only trial could proceed.
Journalism is a thing of its own; blogs aren't it.
[0] https://www.thedrive.com/news/24025/electreks-editor-in-chie... ("Electrek’s Editor-in-Chief, Publisher Both Scoring $250,000 Tesla Roadsters for Free by Gaming Referral Program": "What happens to objective coverage when a free six-figure car is at play? Nothing good." (2018))
Based on (almost?) all prior cases though.
It happens right away and has nothing to do with any other warnings. If you own a Tesla you have seen this warning over and over.
This is especially the case for something that was in its infancy back in 2019 when this crash happened.
And you know what we have in 2025 because of those restrictions being enforced since then?
In 2025, Teslas nag drivers so much for not paying attention to the road that drivers no longer keep the much safer versions of Autopilot engaged at all when looking for their phones.
Instead, now, because the issue is "fixed", Tesla drivers simply do the same thing that drivers of any other car do in that situation.
They disable autopilot first, and only then stop paying attention to the road, looking for their phone.
How's that safer?
We're precisely less safe because of these regulatory requirements.
(And, to add insult to injury, this court is now using the 20/20 hindsight of these warnings subsequently being implemented as evidence of Tesla's wrongdoing in 2019, at a time before anything like that was thought to be possible? Even though, now that these warnings have been implemented, we already have evidence that the nags themselves make everybody less safe, since Autopilot is simply turned off when you need to stop paying attention to the road?)
Like, if they have a chance, they are going to keep $300M.
That is obvious.
OP's point is - it is not having any impact on Tesla's reckless decisions.
Stock price is #1 on the list of their concerns. FCF is somewhere much lower in the list.
Maybe when this accident happened it was different, but as far as I know it's always been behind a confirmation dialog.
To operate a motor vehicle in the US, you must be licensed. That surely holds some weight here.
Just like, holds a lot of weight. I'm saying autopilot has a meaning in the world of aircraft and the FAA has some guidance on how it's used. They still place all responsibility on the pilot. So in that sense they are similar.
It's not that I think automakers shouldn't be liable for misleading marketing, it's that in this case I don't think the argument is strong.
> Thus, it's irrelevant that the small print
The driver has to agree to understanding how it works before using the feature. In the manual it's called out in the same way my Subaru calls out warnings for EyeSight. In the 2019 Model S manual (the car in this accident), it's clearly labeled with multiple warnings and symbols. Saying it's small print is disingenuous. Half of the manual in that section is warnings about the limitations of the driver assistance tech.
I find this to be a bit of a rosy take on things.
Autopilots don't take off (which is why Airbus' ATTOL project was a notable thing when an A350 took off "autonomously" [1]). They don't handle ATC (with the tenuously arguable exception of things like Garmin's Autoland), or handle TCAS on what I'd say is a majority of airliners.
Autopilot on planes is still quite "dumb".
1- https://www.airbus.com/en/newsroom/press-releases/2020-01-ai...
Of course they do. You questioned time and money spent compared to warning clarity and the manual. Having no team for marketing implies little time and money spent. I would say they've spent far more money trying to make it safe and clear in the user manual than trying to mislead customers in a few tweets and a video.
> might lead someone to believe that Autopilot will do what the video shows?
It does do what it shows? The driver is attentive not placing their foot on the pedal and not taking their eyes off the road. I believe that is exactly how FSD operated until forced to implement the nag 4? years after this video was made.
AFAIK this is the video Congress sent to the FTC, under the awesome Lina Khan, and they chose to do nothing about it. I agree Tesla should do better, but that's different than finding them liable for wrongful death in this specific case.
Germany did something about it. California DMV did something about it. If anything it seems the NHTSA and FTC dropped the ball. That seems to be in Tesla's favor legally.
I don’t follow what you mean here? Are you confusing me with someone else?
> There are videos of people on autopilot without their hands on the wheel...
You can definitely remove your hands momentarily. I’ve seen people apply a weight to the steering wheel to fool it too. Not sure how people defeating the safety features would be Tesla’s fault.
Even in that case though, you would still have a way to produce the data because it would have been specced in the requirements when you were thinking about the broader organizational context.
I do think that needs to happen. That's pretty central to my point. Holding Tesla liable when the regulators haven't done so seems like an unfair judgment, even if Tesla and so many other automakers can and should do better. But liability should be clearer than a disagreement over the word autopilot in marketing materials.
> You haven't come across reasonable in this conversation at all.
This is a discussion. We can disagree. No need to attack me.
Fundamentally, flash memory is a bunch of pages. Each page can be read an infinite number of times but there are quite relevant limits on how many times you can write it.
In a simplistic system, let's say you have 1000 pages: 999 hold static data and the last one keeps getting a temporary file that is then erased. All wear occurs on page 1000, and it doesn't last very long.
In the better system, the controller notices that page 1000 is accumulating a lot of writes, picks whatever page has the fewest writes, copies that page's data over to page 1000, and now uses the freed page for all those writes. Repeat until everything's worn down. Note the extra write incurred copying the page over.
In the real world a drive with more space on it is less likely to have to resort to copying pages.
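A toy model of the difference described above; the endurance and page counts are made-up round numbers, not real part specs.

```python
# Toy model of the wear patterns described above: naive writes hammer one page,
# wear-leveled writes go to whichever page has the fewest erase cycles.
# ENDURANCE and PAGES are illustrative assumptions, not real flash specs.
PAGES = 100
ENDURANCE = 1_000   # erase cycles a page survives in this toy model

def writes_until_first_page_dies(wear_leveled: bool) -> int:
    wear = [0] * PAGES
    writes = 0
    while max(wear) < ENDURANCE:
        # Naive: the temporary file always lands on the last page.
        # Leveled: the controller redirects to the least-worn page.
        target = wear.index(min(wear)) if wear_leveled else PAGES - 1
        wear[target] += 1
        writes += 1
    return writes

print(writes_until_first_page_dies(False))   # 1,000 writes and one page is dead
print(writes_until_first_page_dies(True))    # ~100,000 writes, spread evenly
```

(The toy ignores the extra copy writes mentioned above, which is why real controllers don't quite reach the ideal case.)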
Should companies be intentionally misleading to law enforcement about what data they have on fatal accidents like Tesla?
I've made no contention, but if I had, it would be that whoever signed off on this design had better not have a PE license that they would like to keep, and we as an industry would be wise not to keep counting on our grandfather-clause "fun harmless dorks" cultural exemption now that we manufacture machines which obviously kill people. If that by you is conspiracy theory, you're welcome.
Let's not be naive, or deceptive, about a malpractice from a multi billion company owned by a multi billionaire.
They definitely care about the FCF impact of this fine and what it says about future liability.
But, I don't think it's unreasonable that some of the 5% of US adults who have never been on a plane might not understand what autopilot is in aviation. I don't think it's likely that the 8.4% of US adults who score below the lowest measurable level of PIAAC literacy have a good understanding of the warning messages when you enable Tesla's L2 features, or are digging through the owner's manual to understand them. It seems unlikely that the 3% of adults with <70% IQs are reasoning out the limitations of the system from technical definitions. Hopefully the idea is obvious here. You only need one person out of thousands to make a massively dangerous system. I don't think it's an obviously ridiculous argument that one person out of thousands doesn't fully understand and consider the complicated limitations of such a system.
This comment takes as fact a claim made by the police, which might be wrong, either by error or purpose.
One thing is for certain though: the police's initial investigation was a criminal one. That is to say, it was to establish the facts of who was the driver, etc. It was totally separate from the civil litigation that later established Tesla to be at 30% fault for the wreck, using computer records establishing that ADAS was engaged.
Leaving Tesla out of it, suppose there was some other automaker with some other problem. A cop comes to them and says “what do I need to ask you to establish the facts about an accident?” Since when do police just accept “oh, the potentially adversarial lawyer gave me advice on what they can tell me without getting a subpoena. Guess that’s all I can do?” That’s absurd.
Don’t talk to the police: https://youtu.be/d-7o9xYp7eE
The real question is why don’t you feel the same?
The point is the EXPECTATION set by decades of basically flat-out lying about the capabilities of "Autopilot" and "Full Self Driving".
Hell, just the names alone entirely point to no-limitations full self driving. Plus selling $10k upgrades to use your car as a driverless taxi with the next software upgrade, which years later, has never happened. The fine print is BS; the message screams nothing but capable driving.
Add to that the instant data collection and real-time deleting it at the scene, and Tesla is 100% wrong. (and I used to admire Tesla and aspire to own one)
And this is 100% Tesla's own fault. They did NOT have to market it that way. Had they marketed it as "Advanced Driving Assistant" or "Super Lane-Keeping", or something, and left the data available to the driver, they likely could have won this case easily. "The guy wasn't even looking, we never even implied it could take over all driving, what the hell was he thinking?".
You can't build a large dynamite factory in a residential neighborhood either even if you don't intend for it to blow up.
Or are you talking about self-published numbers by the company that is proven to withhold, lie, and misdirect in even official police investigations, subpoenas, and trials where it is actively illegal to do so?
Are we talking numbers with a degree of scientific rigor unfit for publication in a middle school science fair, let alone the minimum standard of scientifically rigorous that members of their team had to achieve to get their degrees, yet somehow fail to do when detailing systems that are literally responsible for the life and death of humans?
A consumer is not expected, nor required, to resolve this conflict that the company created for itself through its own choice of conflicting language. Tesla was able to get away with it longer than expected, but now the floodgates are open to reason again.
I'm not attacking you, it's a direct response to your frequent appeals to yourself as some sort of authority for reason and sensibility in this discussion, when your responses clearly indicate that you are being neither reasonable nor sensible.
> The FAA does define how autopilot can and should be used,
Yeah... in airplanes.
ETA: Restate your conspiracy theory in the hypothetical case that they had used `tar | curl` instead of the intermediate archive file. Does it still seem problematic?
This is partly because a large portion of the Tesla buyers were more intentional in their purchase (doubly so if they bought FSD). Accidents happen, and FSD surely reduces the frequency, but a change in the marketing (or the NAME lol, the pearl clutching) would not make a gram of difference.
Zoning laws are a complete non sequitur. The issue with building a large dynamite factory in a residential neighborhood is the threat to the people of the neighborhood who not only didn't consent to live near it but specifically chose to live in an area zoned so that such things could not be built. Building a dynamite factory wherever you want is not something you have the innate right to do. That said, you probably can get a permit (assuming you have the proper licenses and insurance) to build a dynamite factory in an appropriately zoned area.
*: I can't work out from the article whether this file was erased, or just unlinked from the filesystem: they quote someone as saying the latter, but it looks like it was actually the former.
Most people are actually very dismissive of autopilot, and are completely misinformed of the benefits / drawbacks / differences of "FSD" versus "Autopilot".
Most are completely unaware of the improvements of v12 or v13, or differences between HW3 or HW4, or which one they have, or that "autopilot" is free, or circumstances under which autopilot can be used etc.
I talked to some guy last year (mid 2024) who was actually paying $199/mo for FSD v12, before the price drop to $99/mo, and swearing how great it was, yet he has never tried the parking feature, even though it's been released several months prior. He's a software engineer. That's just one example.
So, if anything, Tesla's marketing is nowhere near as successful as these naysayers would make you believe. Because the vast majority of Tesla's own customers are actually far behind on autopilot or FSD buy-in, and are NOT aware of the progress.
I'm not going to argue with someone who throws gratuitous insults. Rejoin me outside the gutter and we'll continue, if you like. But the answer to your question is trivially yes, that is as professionally indictable, as might by now have been clarified had you sought conversation rather than - well, I suppose, rather than whatever this slander by you was meant to be. One hopes we'll see no more of it.
Indeed, and perhaps this is part of the problem. A reasonable person would find that, through licensure, there is an expectation that you know how to operate and take responsibility for the operation of the death machine you step into.
If Tesla's marketing is so dangerous why hasn't the FTC acted even once? FTC has been pinged by the NHTSA and Congress. At least the first time was before this accident. It took years for NHTSA to implement the nag. NHTSA could have recalled autopilot before this accident happened.
Tesla did not get away with anything. The agencies failed to address it in a self certification model set by them. It's their job to ensure drivers are safe. Meanwhile Germany and the California DMV did do something. If Tesla is to blame, so are the FTC and NHTSA.
We're having different conversations.
How does calling a feature "autopilot" then give consumers the impression that they can completely hand over operation of the car? Driving a car is a serious task, and this driver was extremely negligent in executing that task.
Where's your data that these nags make everyone safer, when it's widely known that they simply result in people turning off the entire autopilot/FSD when the operator needs to stop paying attention to the road, to avoid the nags and the penalty strikes?
Where's all the news reports about the crashes without the autopilot engaged? If they were as rare as the autopilot ones, surely we'd have seen some of them covered by the media, right? Or are they so rare that not a single one has happened yet, hence, the lack of any reports being available?
If you've ever worked at any company where security and privacy are taken seriously, you'd be fully aware that things like logging incorrect password attempts are a straight-up CVE waiting to happen, even though it's something a legitimate user might well want, to find out who's trying to break into their system. Thank the privacy advocates.
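For anyone unfamiliar with why that's a problem, the failure mode is roughly this: a failed attempt is very often the real password with a typo, or a password typed into the wrong field, so logging the submitted value puts live credentials into plaintext logs. A hypothetical handler, not any particular product's code:

```python
# Hypothetical illustration of the anti-pattern: logging the submitted secret
# on a failed login puts near-miss (or real) passwords into plaintext logs.
import logging

log = logging.getLogger("auth")

def handle_login_unsafe(username: str, password: str, check) -> bool:
    if not check(username, password):
        # BAD: the failed attempt is often one typo away from the real password.
        log.warning("failed login for %s with password %r", username, password)
        return False
    return True

def handle_login_safer(username: str, password: str, check) -> bool:
    if not check(username, password):
        # Log only who and when, never the submitted credential itself.
        log.warning("failed login attempt for %s", username)
        return False
    return True
```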
But I fail to see who exactly is misled by the marketing, ESPECIALLY given the non-stop negative media attention Tesla has always had.
I mean, it's literally already called FSD Supervised, and previously it's been FSD Beta. How exactly is that not self-explanatory?
But if you've already reached a conclusion, the name is irrelevant. I mean, did you ever look at the Autopilot article on Wikipedia? It's about the aircraft system. Why don't we see the FAA complaining to Boeing and Airbus that their airplanes have this super-misleading "autopilot", even though the pilots must still be supervising the tech?
My problem with all of this is that the regulatory agencies that cover safety in this country did jack shit to prevent this. Blaming Tesla is a scapegoat. The NHTSA and the FTC could have prevented this, but they're just pointing fingers at Tesla. They dropped the ball.
You are the one claiming “We're precisely less safe because of these regulatory requirements.”
Support your assertion with scientifically rigorous, statistically sound evidence.
And no, your ignorance of safety problems is not evidence of safety despite your attempts to argue as such. That was not a valid argument when the cigarette companies made it and it is not valid now.
Ok, and what is your point? Do you have a point, or was that just a random observation related to nothing and implying nothing? Because it seems like your implication, given the comment you responded to, is that Tesla's incompetence is not malicious; that it's just one of those things, that the powerful are also sometimes incompetent.
But Tesla decides where to put funding and resources, so if they put funding into covering up and hiding data, and they don't put funding into safety systems and testing that their "autopilot" engages and disengages properly, that is malice.
And again, if that is not the implication of your comment, please just let me know what your intent was, and I will correct myself.
He first started talking publicly about "Autopilot" or "Full Self Driving" in 2013, so 1.2 would be referred to as plural decades. (I didn't have the exact number on hand, but knew it was 1+, and used the proper form; you prompted me to look up the proper number.)
Autopilot on a plane does actually drive the plane. You can go hours without any human input being required. Pilots can eat, go to the bathroom, what have you.
Of course we have two pilots, just in case, but this isn't necessary - some countries are pushing for one pilot because the vast majority of flying is done by the plane.
That doesn't mean that autopilot systems on planes are more sophisticated. It just means that automating a plane is much, much easier than automating a car.
We also have fully autonomous trains.
You entirely miss the distinction between trained professional and ignorant consumer, "industrial use only" equipment and materials vs everyday consumer goods, things that require technical training and even certification to use properly, vs goods sellable to any consumer, prescription drugs vs otc.
The technical goods carry a much higher risk and must be used with specific training and context. Using them outside those contexts is likely to harm or kill people, and creates legal liability.
In contrast, consumer goods must be engineered to be safe in ORDINARY circumstances with UNTRAINED people using them.
Tesla is trying to paper over those differences and use technical terms in a common environment and unleash tools requiring skilled supervision to prevent death into a consumer environment, for profit. You are either failing to make the distinction or consciously going along with it.
Tesla's "Full Self Driving" leaves no such room. The only exception would be to consider Tesla to be lying.
"Full", adjective Containing all that is normal or possible. "a full pail." Complete in every particular. "a full account."
That's just the first DDG search. Go to ANY other dictionary and show me where "Full" in ordinary American English usage means anything other than complete, no exceptions, etc.
"Full Self Driving" literally means "It FULLY drives itself".
It does NOT mean "You and the automobile co-drive the car", or "Mostly self-driving", or "Sometimes self-driving", or "Self driving until it can't figure it out or fcks up", the latter of which seems the most accurate.
If Tesla had called it ANY of those other things, they would be fine. And would be honest.
But instead, Musk and Tesla decided to lie in plain language. And for that, they lost the case, and will likely lose many others.
For some people, honesty matters. And honesty is not splitting hairs over differences between detailed technical meaning vs colloquial understanding ("autopilot"), or using the end-goal a decade+ away as the official name and description of the feature set ("Full Self Driving"). That is dishonest.
Regardless, this car did not have FSD, it had "autopilot".
You can't just keep ignoring the "Supervised" bit as if it's not there. Just because you think it's a stupid name doesn't make it a lie. Have you even tried it yourself? I've tried v12, and it's amazing. It does fully self-drive. Why would they call it "mostly" if the idea has always been that it'd be "full" when it's done and out of beta? And as robotaxi shows, it's literally almost there.
I've just tried searching "FSD site:tesla.com" in Google Search, and basically every single result is "Full Self-Driving (Supervised)", with very few exceptions.
Because "Mostly..." is the truth, and then when it is actually "Full..." they can come out and announce that fact with great fanfare. and they would have been honest.
Hell, if they simply called it "Supervised Self Driving", it would be honest, and actually match even your glowing description.
But they do not. Your and Tesla's idea of using the added tagline "(Supervised)" as a legal weasel word does not work either. "Full Self-Driving (Supervised)" is literally an oxymoron. A thing either drives itself fully, or it requires supervision. One contradicts the other.
IIRC, the "(Supervised)" bit was added well after the first fanfare with only "Full Self Driving" alone, when problems started to appear. And the common initials are "FSD".
Even if the reality of the feature set meets your glowing description, the problem is the small percentage of cases where it fails. I'm sure the guy in Florida who was decapitated when his Tesla failed to notice a semi-trailer turning across in front of him was similarly confident, and the same for the guy in California who was impaled on a construction traffic barrier. The problem is that it is NOT FULL, it is only full self driving until it fails.
>>And as robotaxi shows, it's literally almost there.
NO, it shows the exact opposite.
Nearly two months after the much-heralded rollout of (fully) self-driving taxis, Tesla still cannot put a single car on the road for a single meter without a supervising safety driver. Moreover, there have been numerous reported instances of the cars making dangerous errors such as left turns into traffic, etc.
>>basically every single result is "Full Self-Driving (Supervised)", with very few exceptions.
Again, that wording is a literally meaningless oxymoron, containing two mutually contradictory statements ("Full" vs "Supervised"), thus open to whatever interpretation the listener latches onto. Moreover, the emphasis is on the first word — "Full" — which is the lie.
Piloting an aircraft with an autopilot requires a MINIMUM of 1500 hours of instruction and experience as well as multiple levels of certification (VFR, IFR, multi-engine, and specific type certification).
You are seriously trying to claim that these are even remotely similar activities?
Yes, drivers SHOULD split hairs over the technical operation of the vehicle.
Should and Is/Are/Do are NOT the same thing. Particularly when the company founder and prime spokesperson brays about how it will do everything for you and constantly overpromises stuff that won't be available for a decade (if ever) as if it were here already.
sheesh
Your implicit claim is that there exists a resale market for wrecked Teslas, meriting such product and engineering interest from the company that the car's data storage, in the immediate wake of a disabling collision, preemptively and correctly destroys information of use to crash investigators, so that the company can cannibalize the wrecked hulk for items which Tesla then goes on to sell as new OEM repair parts.
Isn't it embarrassing to go to all this trouble? It certainly seems like it should feel that way, given how degrading it looks from the outside. If you can explain it, I would like to understand what makes it seem worth your while.
What? A full private pilot license only requires 35 or 40 hours of flight time (depending on school type); a sport license only requires 20 hours. Airplanes flyable by new pilots of either type very often have autopilots today.
Did you use v12 or v13 FSD? I used v12 last year (which is far behind v13, and yet further behind robotaxi). I'd enable it as soon as I was out of the garage, and it'd safely drive me right to the destination.
How exactly is that not "Full"? Why would they call it anything else when it can drive itself from point A to point B without any interventions most of the time?
And exactly ZERO of those licenses allow you to get anywhere near an autopilot system. Those licenses are for Visual Flight Rules only. You are not even allowed to fly in conditions that require you to use instruments.
To get to use an autopilot, at MINIMUM you need an instrument rating, which is 10+ hours past Private Pilot, AND a Commercial Pilot License, which is 250 hours minimum, assuming you can find a plane with an autopilot at that minimum and include it in your training program (so the hours count towards both commercial and type certification). Then, you need to be rated on the particular autopilot system.
On top of that, you need to follow RULES about when the autopilot can be engaged, e.g., not below certain altitudes, conditions, etc.
The point stands that the training required just to be able to understand an aircraft autopilot sufficiently to be allowed to use it is a level of serious specialized expertise not encountered in ordinary civilian life.
Taking technical terms of art from areas of specialized expertise, abusing common misunderstandings to apply them to broad marketing, and then relying on legal technicalities to try to say "we told you so" when you did no such thing is just an advanced form of lying. Musk is fooling you too.
Why?
Because the FACT is that you must put in the caveat MOST OF THE TIME, or someone is likely to die.
If they were honest they would call it "Supervised Mostly Self Driving". Even "Supervised Self Driving — you mostly supervise, not drive!" would be accurate.
Again, go to any dictionary and find the definition of "Full". Websters:
>>1 containing as much or as many as is possible or normal
>>2a complete especially in detail, number, or duration
>>2b lacking restraint, check, or qualification
>>2c having all distinguishing characteristics : enjoying all authorized rights and privileges
>>2d not lacking in any essential : perfect
The problem is that Tesla and you are attempting to literally change the definition of "Full" to mean "Not Full".
This is lying or deceiving yourself, and deceiving others.
Recognize facts and that language actually has meaning, and stop being part of the problem. Or continue watching your hero lose lawsuits.
Okay, then your contention appears to be that Tesla is content with not bothering at all to refurbish the junkyard pulls it sells through OEM channels as OEM repair parts, thus denying themselves any opportunity to manually wipe sensitive data - potentially including video of the prior owner's violent death! - as part of any remanufacturing or even basic QC process. But - as has now been made a matter of public record, in consequence of their fighting and losing this wrongful death case - Tesla do make sure to keep a copy for themselves, one of an apparently large collection of the same which they went far out of their way for years to keep anyone else from discovering even exists.
Is there anything you care to add to that? Feel free to take your time. You've said quite a lot already.
Oh, obviously.
Like, I'm not making it up. So no need to pretend to be daft about it. Each time, you referred to yourself as someone particularly capable of rendering a reasoned opinion on the matter.
> Blaming Tesla is a scapegoat.
Not really.
>They dropped the ball.
Tesla's actions are incredibly reckless. No other company is behaving like them, so it's an absurd proposition to say it's the fault of the regulators. It's not a market problem. It's a Tesla problem.
This is childish and does nothing for the discussion. It's not about me. I'm not claiming to be an authority, nor am I using myself as one.
> Tesla's actions are incredibly reckless. No other company is behaving like them, so it's an absurd proposition to say it's the fault of the regulators. It's not a market problem. It's a Tesla problem.
The FTC hasn't once acted on Tesla's marketing. That's tacit approval at this point.
The NHTSA has issued two? recalls on Tesla's ADAS and that wasn't until late 2023.
Relying on lawsuits from crash victims, based entirely on unclear or absent regulations, is far from an ideal situation. It's only a Tesla problem insofar as they make up the vast majority of miles driven with this kind of ADAS. The NHTSA just isn't keeping up.
Is Tesla pushing the envelope? Yeah, of course they are. That's by design of the self certifying approach that the NHTSA landed on. Tesla could do a lot more safety testing, but that doesn't mean the regulators haven't dropped the ball.
https://www.washingtonpost.com/transportation/2024/05/04/tes...
If it's not about you, why did you reference yourself as an authority twice? Calling me childish is projecting.
>The FTC hasn't once acted on Tesla's marketing. That's tacit approval at this point.
Asinine logic.
>Relying on lawsuits from crash victims, based entirely on unclear or absent regulations, is far from an ideal situation. It's only a Tesla problem insofar as they make up the vast majority of miles driven with this kind of ADAS. The NHTSA just isn't keeping up.
Who is relying? Tesla is doing what they want. No one is forcing them to sell this dreck.
> Asinine logic.
Do you want to make an argument, or is your goal just to dunk with these low effort responses and waste my time?
Multiple automakers are making similar marketing claims as Tesla as their level 2 classified technology improves. Tesla is not even remotely alone in any of this. They're just ahead. Meanwhile FTC has done nothing on this front for a decade.
Is a decade not long enough to take action? What's acceptable in your opinion?
> Who is relying?
Everyone that cares about autonomous vehicle safety and thinks Tesla is not doing enough to protect drivers. You'll find plenty of lawyers stating that tort law is the only way to settle ADAS cases right now.
In this particular 2019 crash, the jury found that Tesla had a defect for not disabling autopilot in this area. This is something the NTSB specifically called on the NHTSA to define better in 2017, prior to this fatality. In 2020, after this traffic fatality, the NHTSA responded for the first time, saying this would be impractical to impossible to do and doubling down that the driver is responsible, after which the NTSB called out the NHTSA for not doing enough to prevent these types of fatalities with Tesla.
https://data.ntsb.gov/carol-main-public/sr-details/H-17-038
So we have a situation where Tesla and every other automaker are operating under the ill-defined autonomous safety regulations of the NHTSA, a silent FTC, and a safety review board claiming that the NHTSA wasn't doing enough, both prior to and after this accident.
Dunk on Tesla all you want, but that's not going to do anything to further road safety. I suspect that Tesla will lean heavily on FTC inaction and the NHTSA's 2020 response in their appeal. Rightly so, given the jury is contradicting the NHTSA's own statement on liability for level 2 systems. Hence the use of tort law.
Lol, it's not my position. It's the law. They were held liable. You kept making yourself the focus, bringing yourself up for no reason, and then, even more perplexingly, denying that you did it.
>Multiple automakers are making similar marketing claims as Tesla as their level 2 classified technology improves. Tesla is not even remotely alone in any of this. They're just ahead. Meanwhile FTC has done nothing on this front for a decade.
I haven't seen that and no one seems to be close to what Tesla outright implies with "autopilot".
>Is a decade not long enough to take action? What's acceptable in your opinion?
The fact that regulators should take action is not incongruous with the fact that Tesla is reckless and negligent. Both can be true. No one forced Tesla to do anything so no one else is responsible for their behavior.
>Everyone that cares about autonomous vehicle safety and thinks Tesla is not doing enough to protect drivers. You'll find plenty of lawyers stating that tort law is the only way to settle ADAS cases right now.
Jeez. I guess it's everyone else's fault but Tesla's when it's clear that everyone else thinks Tesla is behaving poorly.
>Dunk on Tesla all you want, but that's not going do anything to further road safety.
Yawn. This comment thread isn't doing anything to "further road safety" either. How obnoxious of you.