https://www.cnbc.com/2026/01/22/musk-tesla-robotaxis-us-expa...
Tesla CEO Elon Musk said at the World Economic Forum in Davos that the company’s robotaxis will be “widespread” in the U.S. by the end of 2026.
Though maybe the safety drivers are good enough for the major stuff, and the software is just bad enough at low-speed, short-distance collisions that the drivers don't notice as easily that the car is about to do something wrong.
We are still a long, long, long way from someone feeling comfortable jumping into an FSD cab on a rainy night in New York.
In some spaces we still have rule of law - when xAI started doing the deepfake nude thing we kind of knew no one in the US would do anything but jurisdictions like the EU would. And they are now. It's happening slowly but it is happening. Here though, I just don't know if there's any institution in the US that is going to look at this for what it is - an unsafe system not ready for the road - and take action.
Yet it is quite odd how Tesla also reports that untrained customers using old versions of FSD with outdated hardware average 1,500,000 miles per minor collision [1], a literal 3000% difference, when there are no penalties for incorrect reporting.
For those complaining about Tesla's redactions - fair and good. That said, Tesla formed its media strategy at a time when gas car companies and shorts bought ENTIRE MEDIA ORGs just to trash them to back their short. Their hopefulness about a good showing on the media side died with Clarkson and co faking dead batteries in a roadster test -- so, yes, they're paranoid, but also, they spent years with everyone out to get them.
I'm curious how crashes are reported for humans, because it sounds like 3 of the 5 examples listed happened at like 1-4 mph, and the fourth probably wasn't Tesla's fault (it was stationary at the time). The most damning one was a collision with a fixed object at a whopping 17 mph.
Tesla sucks, but this feels like clickbait.
Tesla needs their FSD system to be driving hundreds of thousands of miles without incident. Not the 5,000 miles Michael FSD-is-awesome-I-use-it-daily Smith posts incessantly on X about.
There is this mismatch where overly represented people who champion FSD say it's great and has no issues, and the reality is none of them are remotely close to putting in enough miles to cross the "it's safe to deploy" threshold.
A fleet of robotaxis will do more FSD miles in an afternoon than your average Tesla fanatic will do in a decade. I can promise you that Elon was sweating hard during each of the few unsupervised rides they have offered.
There's no real discussion to be had on any of this. Just people coming in to confirm their biases.
As for me, I'm happy to make and take bets on Tesla beating Waymo. I've heard all these arguments a million times. Bet some money.
Given the way Musk has lied and lied about Tesla's autonomous driving capabilities, that can't be much of a surprise to anyone.
If Tesla drops the ego, they could obtain Waymo's software, and its track record, for future Tesla hardware.
>The new crashes include [...] a crash with a bus while the Tesla was stationary
Doesn't this imply that the bus driver hit the stationary Tesla, which would make the human bus driver at fault and the party responsible for causing the accident? Why should a human driver hitting a Tesla be counted against Tesla's safety record?
It's possible that the Tesla could've been stopped in a place where it shouldn't have, like in the middle of an intersection (like all the Waymos did during the SF power outage), but there aren't details being shared about each of these incidents by Electrek.
>The new crashes include [...] a collision with a heavy truck at 4 mph
The chart shows only that the Tesla was driving straight at 4mph when this happened, not whether the Tesla hit the truck or the truck hit the Tesla.
Again, it's entirely possible that the Tesla hit the truck, but why aren't these details being shared? This seems like important data to consider when evaluating the safety of autonomous systems - whether the autonomous system or human error was to blame for the accident.
I appreciate that Electrek at least gives a mention of this dynamic:
>Tesla fans and shareholders hold on to the thought that the company’s robotaxis are not responsible for some of these crashes, which is true, even though that’s much harder to determine with Tesla redacting the crash narrative on every crash. But the problem is that even Tesla’s own benchmark shows humans have fewer crashes.
Aren't these crash details / "crash narrative" a matter of public record and investigations? By e.g. either NHTSA, or by local law enforcement? If not, shouldn't it be? Why should we, as a society, rely on the automaker as the sole source of information about what caused accidents with experimental new driverless vehicles? That seems like a poor public policy choice.
No idea how these things are being allowed on the road. Oh wait, yes I do. $$$$
Saying it's 4x worse than humans is misleading; I bet it's better than humans, by a good margin.
In before, 'but it is a regulation nightmare...'
> The incidents included a collision with a fixed object at 17 miles per hour, a crash with a bus while the Tesla vehicle was stopped, a crash with a truck at four miles per hour, and two cases where Tesla vehicles backed into fixed objects at low speeds.
So in reality there's one crash with a fixed object; the rest is... questionable, and not a crash as you portray it. Such statistics wouldn't even make it into human crash reports; they get filed as non-driving incidents, parking lot bumps, etc.
I'm probably not the average consumer in this situation but I was in Austin recently and took both Waymo and Robotaxi. I significantly preferred the Waymo experience. It felt far more integrated and... complete? It also felt very safe (it avoided getting into an accident in a circumstance where I certainly would have crashed).
I hope Tesla gets their act together so that the autonomous taxi market can engage in real price discovery instead of "same price as an Uber but you don't have to tip." Surely it's lower than that especially as more and more of these vehicles get onto the road.
Unrelated to driving ability but related to the brand discussion: that graffiti font Tesla uses for Cybertruck and Robotaxi is SO ugly and cringey. That alone gives me a slight aversion.
I don't know what a clear/direct way of explaining the difference would be.
That ain't true [1].
Teslas are really cheaply made, inadequate cars by modern standards. The interiors are terrible and are barebones even compared to mainstream cars like a Toyota Corolla. And they lack parking sensors depending on the version you bought. I believe current models don’t come with a surround view camera either, which is almost standard on all cars at this point, and very useful in practice. I guess I am not surprised the Robotaxis are also barebones.
So this number is plausible.
Electrek is just summarizing/commenting.
I think they do. That's the whole point of brand value.
Even my non-tech friends seem to know that with self-driving, Waymo is safe and Tesla is not.
A small number of humans bring a bad name to the entire field of regular driving.
> The average consumer isn't going to make a distinction between Tesla vs. Waymo.
What's actually "distinct?" The secret sauce of their code? It always amazed me that corporate giants were willing to compete over cab rides. It sort of makes me feel, tongue in cheek, that they have fully run out of ideas.
> they will assume all robotic driving is crash prone
The difference in failure modes between regular driving and autonomous driving is stark. Many consumers feel the overall compromise is unviable even if the error rates between providers are different.
Watching a Waymo drive into oncoming traffic, pull over, and hear a tech support voice talk to you over the nav system is quite the experience. You can have zero crashes, but if your users end up in this scenario, they're not going to appreciate the difference.
They're not investors. They're just people who have somewhere to go. They don't _care_ about "the field". Nor should they.
> dangerous and irresponsible.
These are, in fact, pilot programs. Why this lede always gets buried is beyond me. Instead of accepting the data and incorporating it into the world view here, people just want to wave their hands and dissemble over how difficult this problem _actually_ is.
Hacker News has always assumed this problem is easy. It is not.
the issue is that these tools are widely accessible, and at the federal level, the legal liability is on the person who posts it, not who hosts the tool. this was a mistake that will likely be corrected over the next six years
due to the current regulatory environment (trump admin), there is no political will to tackle new laws.
> I just don't know if there's any institution in the US that is going to look at this for what it is - an unsafe system not ready for the road - and take action.
unlike deepfakes, there are extensive road safety laws and civil liability precedent. texas may be pushing tesla forward (maybe partially for ideological reasons), but it will be an extremely hard sell to get any of the major US cities to get on board with this.
so, no, i don't think you will see robotaxis on the roads in blue states (or even most red states) any time soon.
Getting this to a place where it is better than humans continuously is not equivalent to fixing bugs in the context of the production of software used on phones etc.
When you are dealing with a dynamic uncontained environment it is much more difficult.
Consumer supervision is having all the controls of the car right there in front of you. And if you are doing it right, you have your hands on the wheel and foot on the pedals, ready to jump in.
Also, as a disclaimer, I need to know if you were long the stock at the time. Too much distortion caused by both shorts and longs. I wasn't on either side, but I learned after many hard years that so much on /r/teslamotors and /r/RealTesla was just pure nonsense.
Are you being sarcastic due to Elon buying Twitter to own/control the conversation? He would be a poster child for the bad actions you are describing.
“13781-13644 Street, Heavy truck, No injuries, Proceeding Straight (Heavy truck: parked), 4mph, contact area: left”
Your context sucks, and it's as good as a lie.
>Waymo reports 51 incidents in Austin alone in this same NHTSA database, but its fleet has driven orders of magnitude more miles in the city than Tesla’s supervised “robotaxis.”
Seems like there's zero benefit to this, then. Being required to pay attention, but actually having nothing (i.e., driving) to keep me engaged, seems like the worst of both worlds. Your attention would constantly be drifting.
Robotaxis market is much broader than the submersibles one, so the effect of consumers' irrationality would be much bigger there. I'd expect an average customer of the submarines market to do quite a bit more research on what they're getting into.
Once Elon put himself at the epicenter of American political life, Tesla stopped being treated as a brand, and more a placeholder for Elon himself.
Waymo has excellent branding and first to market advantage in defining how self-driving is perceived by users. But, the alternative being Elon's Tesla further widens the perception gap.
In the specific case of grok posting deepfake nudes on X. Doesn't X both create and post the deepfake?
My understanding was, Bob replies in Alice's thread, "@grok make a nude photo of Alice" then grok replies in the thread with the fake photo.
[citation needed]
Historically hosts have always absolutely been responsible for the materials they host, see DMCA law, CSAM case law...
Truly baffled by this genre of comment. "I don't think you will see <thing that is already verifiably happening> any time soon" is a pattern I'm seeing way more lately.
Is this just denying reality to shape perception or is there something else going on? Are the current driverless operations after your knowledge cutoff?
That’s the problem right there.
It’s EXTREMELY hard.
Waymo has very carefully increased its abilities, tip-toeing forward little by little until after all this time they’ve achieved the abilities they have with great safety numbers.
Tesla appears to continuously make big jumps they seem totally unprepared for yelling “YOLO” and then expect to be treated the same when it doesn’t work out by saying “but it’s hard.”
I have zero respect for how they’ve approached this since day 1 of autopilot and think what they’re doing is flat out dangerous.
So yeah. Some of us call them out. A lot. And they seem to keep providing evidence we may be right.
Accident rates under traditional cruise control are also far below average.
Why?
Because people use cruise control (and FSD) under specific conditions. Namely: good ones! Ones where accidents already happen at a way below-average rate!
Tesla has always been able to publish the data required to really understand performance, which would be normalized by age of vehicle and driving conditions. But they have not, for reasons that have always been obvious but are absolutely undeniable now.
That was the case when they first started the trial in Austin. The employee in the car was a safety monitor sitting in the front passenger seat with an emergency brake button.
Later, when they started expanding the service area to include highways they moved them to the driver seat on those trips so that they can completely take over if something unsafe is happening.

Tesla has reported five new crashes involving its “Robotaxi” fleet in Austin, Texas, bringing the total to 14 incidents since the service launched in June 2025. The newly filed NHTSA data also reveals that Tesla quietly upgraded one earlier crash to include a hospitalization injury, something the company never disclosed publicly.
The new data comes from the latest update to NHTSA’s Standing General Order (SGO) incident report database for automated driving systems (ADS). We have been tracking Tesla’s Robotaxi crash data closely, and the trend is not improving.
Tesla submitted five new crash reports in January 2026, covering incidents from December 2025 and January 2026. All five involved Model Y vehicles operating with the autonomous driving system “verified engaged” in Austin.
The new crashes include a collision with a fixed object at 17 mph while the vehicle was driving straight, a crash with a bus while the Tesla was stationary, a collision with a heavy truck at 4 mph, and two separate incidents where the Tesla backed into objects, one into a pole or tree at 1 mph and another into a fixed object at 2 mph.
As with every previous Tesla crash in the database, all five new incident narratives are fully redacted as “confidential business information.” Tesla remains the only ADS operator to systematically hide crash details from the public through NHTSA’s confidentiality provisions. Waymo, Zoox, and every other company in the database provide full narrative descriptions of their incidents.
Buried in the updated data is a revised report for a July 2025 crash (Report ID 13781-11375) that Tesla originally filed as “property damage only.” In December 2025, Tesla submitted a third version of that report upgrading the injury severity to “Minor W/ Hospitalization.”
This means someone involved in a Tesla “Robotaxi” crash required hospital treatment. The original crash involved a right turn collision with an SUV at 2 mph. Tesla’s delayed admission of hospitalization, five months after the incident, raises more questions about its crash reporting, which is already heavily redacted.
With 14 crashes now on the books, Tesla’s “Robotaxi” crash rate in Austin continues to deteriorate. Extrapolating from Tesla’s Q4 2025 earnings mileage data, which showed roughly 700,000 cumulative paid miles through November, the fleet likely reached around 800,000 miles by mid-January 2026. That works out to one crash every 57,000 miles.
The irony is that Tesla’s own numbers condemn it. Tesla’s Vehicle Safety Report claims the average American driver experiences a minor collision every 229,000 miles and a major collision every 699,000 miles. By Tesla’s own benchmark, its “Robotaxi” fleet is crashing nearly 4 times more often than the company’s own minor-collision rate for a regular human driver. And virtually every single one of these miles was driven with a trained safety monitor in the vehicle who could intervene at any moment, which means the monitors likely prevented additional crashes that Tesla’s system would not have avoided on its own.
Using NHTSA’s broader police-reported crash average of roughly one per 500,000 miles, the picture is even worse: Tesla’s fleet is crashing at approximately 8 times the human rate.
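For anyone who wants to check the arithmetic, here is a minimal back-of-the-envelope sketch in Python reproducing the comparison above. It uses only figures already cited in this article; the 800,000-mile total is our extrapolation from Tesla’s Q4 2025 earnings data, not an official number.

```python
# Rough reproduction of the crash-rate comparison above.
# Inputs are the figures cited in this article; 800,000 miles is an extrapolation.
crashes = 14
robotaxi_miles = 800_000

miles_per_crash = robotaxi_miles / crashes  # ~57,000 miles per crash

tesla_minor_benchmark = 229_000  # Tesla's claimed miles per minor human collision
nhtsa_benchmark = 500_000        # NHTSA's rough miles per police-reported crash

print(f"Robotaxi: one crash every {miles_per_crash:,.0f} miles")
print(f"vs. Tesla's minor-collision benchmark: {tesla_minor_benchmark / miles_per_crash:.1f}x as often")
print(f"vs. NHTSA's police-reported benchmark: {nhtsa_benchmark / miles_per_crash:.1f}x as often")
```

That works out to roughly 4.0 and 8.8 times the respective human rates, which is where the “nearly 4 times” and “approximately 8 times” figures come from.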
Meanwhile, Waymo has logged over 127 million fully driverless miles, with no safety driver, no monitor, no chase car, and independent research shows Waymo reduces injury-causing crashes by 80% and serious-injury crashes by 91% compared to human drivers. Waymo reports 51 incidents in Austin alone in this same NHTSA database, but its fleet has driven orders of magnitude more miles in the city than Tesla’s supervised “robotaxis.”
Here’s a full list of Tesla’s ADS crashes related to the Austin Robotaxi service:
| # | Date | Speed | Crash With | Movement | Injury Severity | Submitted | New? |
|---|---|---|---|---|---|---|---|
| 1 | Jul 2025 | 2 mph | SUV | Right Turn | Minor W/ Hospitalization* | Aug 2025 | |
| 2 | Jul 2025 | 0 mph | SUV | Stopped | Property Damage | Aug 2025 | |
| 3 | Jul 2025 | 8 mph | Fixed Object | Other | Minor W/O Hospitalization | Aug 2025 | |
| 4 | Sep 2025 | 6 mph | Fixed Object | Left Turn | Property Damage | Sep 2025 | |
| 5 | Sep 2025 | 6 mph | Passenger Car | Straight | Property Damage | Sep 2025 | |
| 6 | Sep 2025 | 0 mph | Cyclist | Stopped | Property Damage | Sep 2025 | |
| 7 | Sep 2025 | 27 mph | Animal | Stopped | No Injury Reported | Oct 2025 | |
| 8 | Oct 2025 | 18 mph | Other | Straight | Property Damage | Dec 2025 | |
| 9 | Nov 2025 | 0 mph | Other | Stopped | No Injury Reported | Nov 2025 | |
| 10 | Dec 2025 | 17 mph | Fixed Object | Straight | Property Damage | Jan 2026 | Yes |
| 11 | Jan 2026 | 4 mph | Heavy Truck | Straight | Property Damage | Jan 2026 | Yes |
| 12 | Jan 2026 | 0 mph | Bus | Stopped | Property Damage | Jan 2026 | Yes |
| 13 | Jan 2026 | 2 mph | Fixed Object | Backing | Property Damage | Jan 2026 | Yes |
| 14 | Jan 2026 | 1 mph | Pole / Tree | Backing | Property Damage | Jan 2026 | Yes |
We keep updating this story because the data keeps getting worse. Five more crashes, a quietly upgraded hospitalization, and total narrative redaction across the board, all from a company that claims its autonomous driving system is safer than humans.
Tesla fans and shareholders hold on to the thought that the company’s robotaxis are not responsible for some of these crashes, which is true, even though that’s much harder to determine with Tesla redacting the crash narrative on every crash. But the problem is that even Tesla’s own benchmark shows humans have fewer crashes.
The 14 crashes over roughly 800,000 miles yield a crash rate of one crash every 57,000 miles. Tesla’s own safety data indicate that a typical human driver has a minor collision every 229,000 miles, whether or not they are at fault.
By the company’s own numbers, its “Robotaxi” fleet crashes nearly 4 times more often than a normal driver, and every single one of those miles had a safety monitor who could hit the kill switch. That is not a rounding error or an early-program hiccup. It is a fundamental performance gap.
What makes this especially frustrating is the lack of transparency. Every other ADS company in the NHTSA database, Waymo, Zoox, Aurora, Nuro, provides detailed narratives explaining what happened in each crash. Tesla redacts everything. We cannot independently assess whether Tesla’s system was at fault, whether the safety monitor failed to intervene in time, or whether these were unavoidable situations caused by other road users. Tesla wants us to trust its safety record while making it impossible to verify.
The craziest part is that Tesla began offering rides without a safety monitor in Austin in late January 2026, just after it experienced 4 crashes in the first half of the month.
As we reported in our status check on the program yesterday, the service currently has roughly 42 active cars in Austin with below 20% availability, and the rides without a safety monitor are extremely limited and not running most of the time. But it’s still worrisome that Tesla would even attempt that, knowing its crash rate is still higher than human drivers’ even with a safety monitor in the front passenger seat.
The fact that regulators are not getting involved tells you everything you need to know about the state of the US/Texas government right now.
My suspicion is that these kinds of minor crashes are simply harder to catch for safety drivers, or maybe the safety drivers did intervene here and slow down the car before the crashes. I don't know if that would show in this data.
> What makes this especially frustrating is the lack of transparency. Every other ADS company in the NHTSA database, Waymo, Zoox, Aurora, Nuro, provides detailed narratives explaining what happened in each crash. Tesla redacts everything. We cannot independently assess whether Tesla’s system was at fault, whether the safety monitor failed to intervene in time, or *whether these were unavoidable situations caused by other road users*. Tesla wants us to trust its safety record while making it impossible to verify.
[1] https://www.fastcompany.com/91491273/waymo-vehicle-hit-a-chi....
I would also love to see every car brand have full autonomous driving. It seems like you think you must be in one camp or another, and that one has to "beat" the other - but that's not true. Both can be successful - wouldn't that be a great world?
While I was living in NYC I saw collisions of that nature all the time. People put a "bumper buddy" on their car because the street parallel parking is so tight and folks "bump" the car behind them while trying to get out.
My guess is that at least 3 of those "collisions" are things that would never be reported with a human driver.
Totally rational.
They advertise and market a safety claim of 986,000 non-highway miles per minor collision. They are claiming, risking the lives of their customers and the public, that their objectively inferior product with objectively worse deployment controls is 1,700% better than their most advanced product under careful controls and scrutiny when there are no penalties for incorrect reporting.
https://www.rubensteinandrynecki.com/brooklyn/taxi-accident-...
Generally about 1 accident per 217k miles. Which still means that Tesla is having accidents at a 4x rate. However, there may be underreporting and that could be the source of the difference. Also, the safety drivers may have prevented a lot of accidents too.
[1] https://www.businessinsider.com/musks-claim-teslas-appreciat...
Almost there. Humans kill one person every 100 million miles driven. To reach mass adoption, self-driving cars need to kill one every, say, billion miles. Which means dozens or hundreds of billions of miles driven to reach statistical significance.
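To put rough numbers on that last point, here is a minimal sketch, assuming a simple Poisson model for fatal crashes and the one-per-billion-miles target above; the 1/sqrt(k) rule of thumb is only an approximation:

```python
import math

TARGET_RATE = 1 / 1e9  # hypothetical goal: one fatality per billion miles

# Poisson rule of thumb: with k observed fatalities, the relative uncertainty
# on the estimated rate is roughly 1/sqrt(k), so tighter error bars require
# accumulating many events at the (very low) target rate.
for k in (1, 10, 100):
    miles_needed = k / TARGET_RATE
    rel_err = 1 / math.sqrt(k)
    print(f"~{k:>3} fatalities observed -> ~{miles_needed:.0e} miles, rate known to about ±{rel_err:.0%}")
```

Roughly: a billion miles barely constrains the rate at all, ten billion gets you to about ±30%, and a hundred billion to about ±10%.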
Heard this for a decade now, but I’m sure this year will be different!
Genuine question though: has Waymo gotten better at their reporting? A couple years back they seemingly inflated their safety numbers by sanitizing the classifications with subjective “a human would have crashed too so we don’t count it as an accident”. That is measuring something quite different than how safety numbers are colloquially interpreted.
It seems like there is a need for more standardized testing and reporting, but I may be out of the loop.
My comment was aimed at the implication that the data might be untrustworthy because they were the ones reporting it.
So I pointed out it wasn’t their data.
As for “spin“ Elon has been telling us for a long time that FSD is safer than humans and will save lives. We appear to have objective data that counters that narrative.
That seems worth reporting on to me.
if you think i said otherwise, please quote me, thank you.
> Historically hosts have always absolutely been responsible for the materials they host,
[citation needed] :) go read up on section 230.
for example with dmca, liability arises if the host acts in bad faith, generates the infringing content itself, or fails to act on a takedown notice
that is quite some distance from "always absolutely". in fact, it's the whole point of 230
Where grok is at risk is in not responding after they are notified of the issue. It’s trivial for grok to ban some keywords here, and they aren’t; that’s a legal issue.
Driving around in good weather and never on freeways is not much of an achievement. Having vehicles that continually interfere in active medical and police cordons isn't particularly safe, even though there haven't been terrible consequences from it, yet.
If all you're doing is observing a single number, you're drastically underprepared for what happens when they expand this program beyond these paltry self-imposed limits.
> Some of us call them out.
You should be working to get their certificate pulled at the government level. If this program is so dangerous then why wouldn't you do that?
> And they seem to keep providing evidence we may be right.
It's tragic you can't apply the same logic in isolation to Waymo.
Any engineering student can understand why LIDAR+Radar+RGB is better than just a single camera; and any person moderately aware of tech can realize that digital cameras are nowhere as good as the human eye.
But yeah, he's a genius or something.
for the rest of us aligned to a single reality, robotaxis are currently only operating as robotaxis (unsupervised) in texas (and even that's dubious, considering the chase car sleight of hand).
of course, if you want to continue to take a weasely and uncharitable interpretation of my post because i wasn't completely "on brand", you are free to. in which case, i will let you have the last word, because i have no interest in engaging in such by-omission dishonesty.
Technology is just not there yet, and Elon is impatient.
The only problem is, it doesn't work.
At least once every few days, it would do something extremely dangerous, like try to drive straight into a concrete median at 40mph.
The way I describe it is: yeah, it’s self-driving and doesn’t quite require the full attention of normal driving, but it still requires the same amount of attention as supervising a teenager in the first week of their learning permit.
If Tesla were serious about FSD safety claims, they would release data on driver interventions per mile.
Also, the language when turning on FSD in vehicle is just insulting—the whole thing about how if it were an iPhone app but shucks the lawyers are just so silly and conservative we have to call it beta.
I wonder if these newly-reported crashes happened with the employee positioned in e-brake or in co-pilot mode.
Important correction: “kill one or fewer per billion miles.” Before someone reluctantly engineers an intentional sacrifice to meet their quota.
You can prove Tesla's system is a joke with a multitude of metrics.
So the average driver is also likely a bad driver by your standard. Your standard seems reasonable.
The data is inconclusive on whether Tesla robotaxi is worse than the average driver.
Unlike humans, Waymo does report 1-4 mph collisions. The data is very conclusive that Robotaxi is significantly worse than Waymo.
People have an expectation that self-driving cars will be magical in ability. Look at the flak Waymo has received despite its most egregious violations being fender-bender equivalents.
Externalized risks and costs are essential for many businesses to operate. It isn't great, but it's true. Our lives are possible because of externalized costs.
I think Tesla's egg is cooked. They need a full suite of sensors ASAP. Get rid of Elon and you'll see an announcement in weeks.
They need to be around parity. So a death every 100mm miles or so. The number of folks who want radically more safety are about balanced by those who want a product in market quicker.
It's basically a few light bumps at a snail's pace, probably caused by other cars. The article's headline reads as if it mowed down a group of school children.
Note that I'm not asking for perfection. However if someone does manage to create child porn (or any of a number of currently unspecified things - the list is likely to grow over the next few years), you need to show that you have a lot of protections in place and they did something hard to bypass them.
the same is true if the webapp has a blank "type what you want I'll make it for you" field and the user types "CP" and the webapp makes it.
LIDAR gives Waymo a fundamental advantage.
> What this really reflects is that Tesla has painted itself into a corner. They've shipped vehicles with a weak sensor suite that's claimed to be sufficient to support self-driving, leaving the software for later. Tesla, unlike everybody else who's serious, doesn't have a LIDAR.
> Now, it's "later", their software demos are about where Google was in 2010, and Tesla has a big problem. This is a really hard problem to do with cameras alone. Deep learning is useful, but it's not magic, and it's not strong AI. No wonder their head of automatic driving quit. Karpathy may bail in a few months, once he realizes he's joined a death march.
> ...
https://news.ycombinator.com/item?id=14600924
Karpathy left in 2022. Turns out that the commenter, Animats, is John Nagle!
Waymo could be working on camera only. I don’t know. But it’s not controlling the car. And until such a time they can prove with their data that it is just as safe, that seems like a very smart decision.
Tesla is not taking such a cautious approach. And they’re doing it on public roads. That’s the problem.
This is awkward for any technology where we've made it boring but not safe, so humans must still supervise, but we've made their job harder. Waymo understood that this is not a place worth getting to.
OSHA also has regulations to mitigate risk ... lockout/tagout.
Both mitigate external risks. Good regulation mitigates known risk factors ... unknown take time to learn about.
The Apollo program learned this when the hatch was bolted shut and the pure-oxygen environment burned everyone inside alive. Safety first became the basis of decision making.
If you have a large fleet, say getting in 5-10 accidents a year, you can't buy a policy that's going to consistently pay out more than the premium, at least not one that the insurance company will be willing to renew. So economically it makes sense to set that money aside and pay out directly, perhaps covering disastrous losses with some kind of policy.
A self-driving car that merely achieves parity would be worse than 98% of the population.
Gotta do twice the accident-free mileage to achieve parity with the sober 98%.
The difference is that accidents on a freeway are far more likely to be fatal than accidents on a city street.
Waymo didn't avoid freeways because they were hard, they avoided them because they were dangerous.
Beyond even the cameras themselves, humans can move their head around, use sun visors, put on sunglasses, etc to deal with driving into the sun, but AVs don't have these capabilities yet.
For me it looks like they will reach parity at about the same time, so camera only is not totally stupid. What's stupid is forcing robotaxi on the road before the technology is ready.
“robotaxi” is a generic term for (when the term was coined, hypothetical) self-driving taxicabs, that predates Tesla existing. “Tesla Robotaxi” is the brand-name of a (slightly more than merely hypothetical, today) Tesla service (for which a trademark was denied by the US PTO because of genericness). Tesla Robotaxi, where it operates, provides robotaxis, but most robotaxis operating today are not provided by Tesla Robotaxi.
No reason to assume that. A toddler that is increasing in walk speed every month will never be able to outrun a cheetah.
Legal things are amoral, amoral things are legal. We have a duty to live morally, legal is only words in books.
Yikes! I’d be a nervous wreck after just a couple of days.
I don't think so.
The deaths from self-driving accidents will look _strange_ and _inhuman_ to most people. The negative PR from self-driving accidents will be much worse for every single fatal collision than a human driven fatality.
I think these things genuinely need to be significantly safer for society to be willing to tolerate the accidents that do happen. Maybe not a full order of magnitude safer, but I think it will need to be clearly safer than human drivers and not just at parity.
1 in a billion might be a conservative target. I can appreciate that statistically, reaching parity should be a net improvement over the status quo, but that only works if we somehow force 100% adoption. In the meantime, my choice to use a self-driving car has to assess its risk compared to my driving, not the drunk's.
Maybe. We don’t know for sure.
You seem to frame that a bit like Waymo is cheating or padding their numbers.
But I see that as them taking appropriate care and avoiding stupid risks.
Anyway as someone else pointed out they recently started doing freeways in Austin so we’ll know soon.
You can solve this by having multiple cameras for each vantage point, with different sensors and lenses optimized for different light levels. Tesla isn't doing this, mind you, but with multiple cameras it should be easy enough to exceed the dynamic range of the human eye, so long as you are auto-selecting whichever camera is getting the correct exposure at any given point.
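As a rough illustration of that camera-selection idea (this is just a sketch; the function, thresholds, and setup are invented here, not anything Tesla or anyone else is known to use):

```python
import numpy as np

def best_exposed(frames, low=0.05, high=0.95):
    """Pick the frame with the fewest clipped (under- or over-exposed) pixels.

    frames: grayscale images as float arrays scaled to [0, 1], one per
            co-located camera (e.g. short-, mid-, and long-exposure optimized).
    """
    def clipped_fraction(img):
        return float(np.mean((img < low) | (img > high)))
    return min(frames, key=clipped_fraction)
```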
Sure, in this context the person who mails the item is the one instigating the harassment but it's the postal network that's facilitating it and actually performing the "last mile" of harassment.
Photon counting is a real thing [1] but that's not what Tesla claims to be doing.
I cannot tell if what they are doing is something actually effective that they should have called something other than "photon counting" or just the usual Musk exaggerations. Anyone here familiar with the relevant fields who can say which it is?
Here's what they claim, as summarized by whatever it is Google uses for their "AI Overview".
> Tesla photon counting is an advanced, raw-data approach to camera imaging for Autopilot and Full Self-Driving (FSD), where sensors detect and count individual light particles (photons) rather than processing aggregate image intensity. By removing traditional image processing filters and directly passing raw pixel data to neural networks, Tesla improves dynamic range, enabling better vision in low light and high-contrast scenarios.
It says these are the key aspects:
> Direct Data Processing: Instead of relying on image signal processors (ISPs) to create a human-friendly picture, Tesla feeds raw sensor data directly into the neural network, allowing the system to detect subtle light variations and near-IR (infrared) light.
> Improved Dynamic Range: This approach allows the system to see in the dark exceptionally well by not losing information to standard image compression or exposure adjustments.
> Increased Sensitivity: By operating at the single-photon level, the system achieves a higher signal-to-noise ratio, effectively "seeing in the dark".
> Elimination of Exposure Limitations: The technique helps mitigate issues like sun glare, allowing for better visibility in extreme lighting conditions.
> Neural Network Training: The raw, unfiltered data is used to train Tesla's neural networks, allowing for more robust, high-fidelity perception in complex, real-world driving environments.
The IMX490 has a dynamic range of 140dB when spitting out actual images. The neural net could easily be trained on multiexposure to account for both extremely low and extremely high light. They are not trying to create SDR images.
Please let's stop with the dynamic range bullshit. Point your phone at the sun when you're blinded in your car next time. Or use night mode. Both see better than you.
It's far from clear that the current HW4 + sensor suite will ever be sufficient for L4.
hm yes i can see where the confusion lies
Tesla FSD is crap. But I also think we wouldn't see quite so much praise of Waymo unless Tesla also had aspirations in this domain. Genuinely, what is so great about a robo taxi even if it works well? Do people really hate immigrants this much?
Nah, Waymo is much safer than Tesla today, while Tesla has way-mo* data to train on and much more compute capacity in their hands. They're in a dead end.
Camera-only was a massive mistake. They'll never admit to that because there's now millions of cars out there that will be perceived as defective if they do. This is the decision that will sink Tesla to the ground, you'll see. But hail Karpathy, yeah.
* Sorry, I couldn't resist.
I kept it for a couple months after the trial, but canceled because the situations it’s good at aren’t the situations I usually face when driving.
We're speaking in hypotheticals about stuff that has already happened.
> I think these things genuinely need to be significantly safer for society to be willing to tolerate the accidents that do happen
I used to as well. And no doubt, some populations will take this view.
They won't have a stake in how self-driving cars are built and regulated. There is too much competition between U.S. states and China. Waymo was born in Arizona and is now growing up in California and Florida. Tesla is being shaped by Texas. The moment Tesla or BYD get their shit together, we'll probably see federal preëmption.
(Contrast this with AI, where local concerns around e.g. power and water demand attention. Highways, on the other hand, are federally owned. And D.C. exerting local pressure with one hand while holding highway funds in the other is long precedented.)
I like to quip that error-rate is not the same as error-shape. A lower rate isn't actually better if it means problems that "escape" our usual guardrails and backup plans and remedies.
You're right that some of it may just be a perception-issue, but IMO any "alien" pattern of failures indicates that there's a meta-problem we need to fix, either in the weird system or in the matrix of other systems around it. Predictability is a feature in and of itself.
It seems reasonable that the deaths and major injuries come highly disproportionally from excessively high speed, slow reaction times at such speeds, going much too fast for conditions even at lower absolute speeds. What if even the not very good self-driving cars are much better at avoiding the base conditions that result in accidents leading to deaths, even if they aren't so good at avoiding lower-speed fender-benders?
If that were true, what would that mean to our adoption of them? Maybe even the less-great ones are better overall. Especially if the cars are owned by the company, so the costs of any such minor fender-benders are all on them.
If that's the case, maybe Tesla's camera-only system is fairly good actually, especially if it saves enough money to make them more widespread. Or maybe Waymo will get the costs of their more advanced sensors down faster and they'll end up more economical overall first. They certainly seem to be doing better at getting bigger faster in any case.
To be clear, I'm not arguing for what it should be. I'm arguing for what it is.
I tend to drive the speed limit. I think more people should. I also recognise there is no public support for ticketing folks going 5 over.
> my choice to use a self-driving car has to assess its risk compared to my driving, not the drunk's
All of these services are supply constrained. That's why I've revised my hypothesis. There are enough folks who will take that car before you get comfortable who will make it lucrative to fill streets with them.
(And to be clear, I'll ride in a Waymo or a Cybercab. I won't book a ride with a friend or my pets in the latter.)
However notification plays a role here, there’s a bunch of things the post office does if someone tries to use them to do this regularly and you ask the post office to do something. The issue therefore is if people complain and then X does absolutely nothing while having a plethora of reasonable options to stop this harassment.
https://faq.usps.com/s/article/What-Options-Do-I-Have-Regard...
You may file PS Form 1500 at a local Post Office to prevent receipt of unwanted obscene materials in the mail or to stop receipt of "obscene" materials in the mail. The Post Office offers two programs to help you protect yourself (and your eligible minor children).
What’s so great about a robotaxi even if it works well? It’s neat. As a technology person I like it exists. I don’t know past that. I’ve never used one they’re not deployed where I live.
Or did he "resign" because Elon insists on camera-only and Karpathy said "I can't do it"?
The only way that ion thruster might save the toddler is if it was used to blast the cheetah in the face. It would take a pretty long time to actually cause enough damage to force the cheetah to stop, but it might be annoying enough and/or unusual enough to get it to decide to leave.
agreed. this also provides an explanation for the otherwise surprising fact that prey animals in the savannah have never been observed to naturally evolve ion thrusters.