The cops cannot be held accountable because the laws basically give them immunity. The politicians cannot be held accountable beyond being tossed out at the next election, because the laws otherwise give them immunity. The people operating the system cannot be held accountable, because the systems are marketed as authoritative despite being black boxes and lacking in transparency; they trusted the system just as they were told to, and thus cannot be held accountable.
And so when every human in the chain cannot be held accountable for these things, and the law prevents victims from receiving apologies, let alone recourse, then the tool and its maker are the only things we can hold accountable. By deflecting blame away from the tools ("it wasn't AI, it was facial recognition"; "the human had to sign off on it"; "humans made the arrest, not machines"), you're protecting quite literally the only possible entity that could still potentially be held accountable: the dipshits making these stupid things and marketing them as superior and authoritative compared to humans.
You want accountability? Start holding capital to account, and this shit falls away real fucking fast. Don't get lost in technical nuance over very real human issues.
If I am a cop in another jurisdiction and I see that, in this case of error, the facial recognition company was held to account but not the police or municipality, I will be more likely to blindly trust the software, assuming the company either patched it or will take responsibility.
We should demand accountability for both.
Absolutely ridiculous, I hope she wins her civil case.
Of course this would have been bad enough if it had happened where she lived, but the five months of holding adds a whole 'nother level of insight into the brokenness of the legal system. I'd be interested in hearing more about why that happened. Was it just a matter of "that happens sometimes if you have a public defender"?
Yeah, there is a whole lot more to this story.
End qualified immunity and see how fast cops start to do their jobs with care.
Winning a lawsuit literally ends in your own community members (not the cops) paying the bill.
That doesn't mean we should accept it from AI. We should fight the blind yielding to the facade of authority regardless of whether the decision was made by an AI or an insect landing on a teleprinter at the wrong time.
Lets see the pig that called for her arrest and wasted 4 months of her life spend 4 months in jail.
The real problem isn't that the AI made a mistake—it's that everyone in the chain deferred to it. The technology became an excuse to stop thinking critically. "The algorithm said so" is the new "I was just following orders."
We need less faith in these tools, not more.
This is a bank fraud case, for god’s sake, not an armed robbery. I don’t know the scale of it, but still, no one said she was a danger to anyone. She was a suspect, not a convict, and she was held at gunpoint while babysitting young children. What in the fucking world?
The US is so fucked up lately. People should chill the fuck out.
We have literally caught up to the world depicted in the movie "Brazil"
Hang on -- ignoring AI completely, how is that possible / legal / anything? Surely, if she was misidentified, there was an interview and arrest and due process?
There are also some cases of law enforcement successes caused by ML true positives, e.g. a RAF (Red Army Faction - https://en.wikipedia.org/wiki/Red_Army_Faction) terrorist gone into hiding was identified by a social media photo: https://web.archive.org/web/20240305044603/https://www.nytim... (although the success in law enforcement was actually not carried out by the police, but by investigative journalists/podcasters.)
The question we need to answer as a society is whether we are willing to tolerate innocent people going to jail as the "price" of catching a few more criminals.
"The pace of oppression outstrips our ability to understand it. And that is the real trick of the Imperial thought machine." -- Andor.
Some tech company illegally scanned people's photos on social media and is now using them, with our complicit legal system, to randomly put people behind bars. Now I need to worry that any day, due to a dice roll, I will be sent away to the middle of f'ing nowhere for months or years. And now the government wants to use these same dumb systems to make automated killing machines. FML!
I see a lot of comments trying to attribute blame to the cops, the lawyers, the police chief, the marshals, the tech bros, etc, but it is all of them and all of us that are guilty. We are so complicit in this sick system we live in. We are stuck in a collective action deadlock.
That fear you have in the back of your mind that says next time it might be you is counteracted by the thought "well thank goodness it wasn't me or a loved one," so you don't act. We are all doing this, that is why nothing changes.
The only people able to act these days are the most insane. The narcissistic corrupt power-hungry politician, the psychopathic tech bro billionaire, and the jacobins are the only ones with the energy to wade through this cesspool, and that is why everything is so dystopian.
https://thecivilrightslawyer.com/2026/03/11/ai-software-tell...
In the video, it shows a police officer blindly trusting a casino's AI software, even when a cursory investigation should have given any reasonable person enough of a reason to question whether the man he arrested was the same man accused of a crime. (And then even after it was confirmed he was not, the prosecutor continued to charge him for trespassing!)
> Her bank records showed she was more than 1,200 miles away, at home in Tennessee at the same time police claimed she was in Fargo committing fraud.
> Unable to pay her bills from jail, she lost her home, her car and even her dog
> Once they were in hand, Fargo police met with him and Lipps at the Cass County jail on Dec. 19. She had already been in jail for more than five months. It was the first time police interviewed her.
How is this the fault of AI? It flagged a possible match. A live human detective confirmed it. And the criminal justice system, for reasons that have nothing to do with AI, let this woman sit in jail for 5 months before even interviewing her or doing any due diligence.
There's a reason why we don't let AI autonomously jail people. Instead of scapegoating an AI bogeyman, maybe we should look instead at the professional human-in-the-loop who shirked all responsibility, and a criminal justice system that thinks it is okay to jail people for 5 months before even starting to assess their guilt.
Also me, reading further: Uh-oh.
The chief of police also resigned today; wouldn't be shocked if this was part of the reasoning.
Unfortunately we'll probably see a trend of people using AI and then blaming AI in cases where they misused it in roles it's not good for, or failed to review or monitor its output.
https://nob.cs.ucdavis.edu/classes/ecs153-2019-04/readings/c...
There are also a few questions that remain unanswered:
- Did she have previous arrests, and did they use booking photos to identify her? I found someone named Angela Lipps who was arrested in 2001, 2003, 2017, and 2019. The 2017 arrest was for a probation violation: https://archive.ph/CpmXu The 2019 arrest was for public intoxication: https://archive.ph/yjFL9
- Another confusing detail is that she was in jail for four months without being extradited. That is quite unusual, unless the local authorities were holding her on unrelated charges.
So this news story seems to have nothing to do with AI. It is also very light on details about what actually happened in the actual criminal case here.
This type of incident isn't new and is only going to get worse. The problem is our governments are doing absolutely nothing about it. I'll give two examples:
1. Hertz implemented a system where they falsely reported cars as being stolen. People were arrested and went to jail for rental cars that were sitting in the Hertz lot. Hertz ultimately had to pay $168 million in a settlement [1]. That's insufficient. If I, as an ordinary citizen, make a false police report that somebody stole my car I can be criminally charged. And rightly so. People should go to jail for this and it will continue until they do. These fines and settlements are just the cost of doing business; and
2. The UK government contracted Fujitsu to produce a new system for their post offices. That system was allowed to produce criminal charges for fraud that were completely false. People committed suicide over this. This went on for what? A decade or more? But it resulted in nothing more than a parliamentary inquiry and settlements. It's known as the British Post Office scandal [2]. Again, people should go to jail for this.
The choice we as a society face is whether to have automation improve all of our lives by raising everyone's standard of living and allowing us to do less work and less menial work or do we allow automation to further suppress wages so the Epstein class can be slightly more wealthy.
[1]: https://www.npr.org/2022/12/06/1140998674/hertz-false-accusa...
[2]: https://en.wikipedia.org/wiki/British_Post_Office_scandal
I smell a lawsuit
https://www.ndcourts.gov/lawyers/06020 https://www.linkedin.com/in/jay-greenwood-57360b86/
https://www.theguardian.com/us-news/2026/mar/12/tennessee-gr... - Another article on this without a paywall.
It's annoying that both articles are calling this AI error. This was human error, the police did the wrong thing and the people of Fargo will end up paying for this fuckup.
Pretend the tool is 99.999999% specific. If it searches every face in the USA, you're still getting about 3 false positives PER SEARCH.
You will never have a criminal AI tool safe enough to apply at a national scale.
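The arithmetic behind this claim can be sketched in a few lines. The numbers below are the comment's own hypotheticals (a 99.999999% specificity, i.e. a 1e-8 false-positive rate per comparison, and a gallery of roughly 330 million faces), not figures from any real system:

```python
# Back-of-envelope: expected false positives from one nationwide search.
# Assumed, illustrative numbers matching the comment above.
gallery_size = 330_000_000          # ~US population
false_positive_rate = 1e-8          # 1 - 0.99999999 specificity

expected_false_positives = gallery_size * false_positive_rate
print(expected_false_positives)     # ~3.3 innocent "matches" per search
```

Even an implausibly good matcher flags several innocent people every time it is run against a national-scale gallery, which is the point being made.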
You forgot one: capital cannot be held accountable for making a tool used in a crime. It is a simple generalization of the Protection of Lawful Commerce in Arms Act (PLCAA), passed in 2005, which largely bars civil lawsuits against gun makers and sellers when their products are later used in crime.
https://www.youtube.com/watch?v=lPUBXN2Fd_E
as an aside how small the world is: I know-a-guy who knows-that-guy.
Every time I see something like this I can never quite believe this sort of stuff happens. Complete, life ruining incompetence, with no consequences for the idiots that caused this to happen. Ignoring the AI input, which to me has nothing to do with this (it was used as a tool to identify a potential suspect), this woman went to jail for 5 months on the opinion of someone with no other evidence. Only in America.
Though, the question remains: are the tools built in such a way as to deceive the user into a false sense of trust or certainty?
_Some_ of the blame lies on the UX here. It must.
Sales will sell the dream, who cares if the real world outcomes don’t align?
Yes, it's critical to remember that multiple parties can be at fault. In a case like this, it is true that
a) law enforcement misused a tool and demonstrated extreme negligence
b) the judiciary didn't catch this, which suggests systemic negligence there too when it comes to their oversight responsibilities
c) the company selling/providing this AI tool should have known it was likely to be misused and is responsible for damages caused by such predictable usage
We cannot have a just world until our laws and norms result in loss of jobs and legitimacy as punishment for this sort of normalized failure, from all three parties. Immunity is a failed experiment.
The software identified the person as Angela Lipps. According to the court documents, the Fargo detective working the case then looked at Lipps' social media accounts and Tennessee driver's license photo.
In his charging document, the detective wrote that Lipps appeared to be the suspect based on facial features, body type and hairstyle and color.
The software worked exactly as intended. It's a filtering tool that sifts through data for common patterns to provide leads, not matches. It raises a flag on persons of interest. You can be a "match" anywhere between 0 and 100%, and only relative to some specific input (like that picture taken from above the woman at the teller). In that sense, mismatches are within acceptable parameters and have been known to happen.
A "match" is a pronouncement ultimately made by the humans that use the tool, after they've checked out the leads. Someone fell asleep at the wheel here.
Not a fan of DA’s offices in general (they are the “evil twin” to my particular line of work after all), but realistically this one isn’t on them.
https://www.inforum.com/news/north-dakota/no-jail-time-for-m...
https://www.kvrr.com/2025/11/03/no-jail-time-for-man-accused...
https://apnews.com/general-news-fff59b609215476a9251edb91923...
https://www.valleynewslive.com/2021/05/21/moorhead-man-sente...
https://en.wikipedia.org/wiki/Ray_Holmberg
https://www.valleynewslive.com/2025/11/18/former-clay-county...
It is going to get bad in every skilled area of human managed bureaucracy.
The number of legal filings found to include AI confabulations is just the obvious surface.
Huh. I thought they were actively accelerating the process. Hoping you are right and I am wrong.
This is from the Sixth Amendment. Where the rubber hits the road is what “speedy” means.
Happens with a lot of topics of interest.
Are AI code assist tools built in such a way as to deceive the user into a false sense of trust or certainty? Very much so (even if that isn't a primary objective).
Does any part of the blame lie on the UX if a dev submits a bad change? No, none.
You are ultimately, solely responsible for your work output, regardless of which tool you choose to use. If using your tool wrong means you make someone homeless, car-less, and also you kill their dog, then you should be a lot more cautious and perform a lot more verification than the average senior engineer.
No, the tools work perfectly as they were designed to work. The problem is that the tools are flawed.
Ultimately, every single one of these decisions should be approved by a human, who should be responsible for the fuck-up no matter what the consequences are.
> _Some_ of the blame lies on the UX here. It must.
No, the blame lies with the person or the group who approve the usage of these tools, without understanding their shortcomings.
Who stole her dog?!
which the citizens end up footing the bill for. yay.
Because we're seeing the first instances of what reality looks like with AI in the hands of the average bear. Just like the excuse was "but the computer said it was correct," now we're just shifting to "but the AI said it was correct."
Don't underestimate how much authority and thinking people will delegate to machines. Not to mention the lengths they'll go to weasel out of taking responsibility for a screw up like this (saw another comment in this thread about the Chief of Police stepping down but it being framed as "retirement").
This keeps happening, and the reason it keeps happening is that shithead cops have these tools and are using them. Until we can find a reliable way to prevent this from happening, which may or may not be possible, cops who may or may not be shitheads should not have access to these tools.
Man gets pulled over on an expired plate. They search based on this fact, find a pill bottle (for Irritable Bowel Syndrome) and magically find he’s trafficking cocaine and fentanyl.
Months later a lab test exonerates the poor guy.
https://www.wyff4.com/article/deputies-falsely-identify-ibs-...
Most humans cannot distinguish AI from actual intelligence. When you combine that with bureaucrats' innate tendency to say, "Computer said so," you end up with bizarre situations like this. If a person had made this facial match, another human would have relentlessly jeered him. Since a computer running AI did it, no one even cared to think about it.
Computers are wildly dangerous, not because of anything innate but because of how humans act around them.
AI is being used by bureaucrats and enforcers to justify lazy, harmful conclusions. You don't live in the real world if you think "just punish the bureaucrats, don't make it about AI" is going to remotely rectify this toxic feedback loop and ecosystem.
It could be the fault of the company that's selling this service. They often make wildly inaccurate claims about the utility and accuracy of their systems. [0]
> There's a reason why we don't let AI autonomously jail people.
Yes we do. [1]
> and a criminal justice system that thinks it is okay to jail people for 5 months before even starting to assess their guilt.
Her guilt was assessed. That's why she had no bail. It assessed it incorrectly, but the error is more complicated than your reaction implies.
[0]: https://thisisreno.com/2026/03/lawsuit-reno-police-ai-polici...
[1]: https://projects.tampabay.com/projects/2020/investigations/p...
The false positive rate combined with scanning millions of pictures might make the chance of arresting the wrong person really high.
The reason everyone rushes to defend the tool's use is because holding humans accountable would mean throwing these tools out entirely in most cases, due to internal human biases and a decline in basic critical and cognitive thinking skills. The marketing has been the same since the 80s: the tool is superior (until it isn't), the tool shall be trusted completely (until it fails), the tool cannot make mistakes (until it does).
If folks actually listened to the victims of this shit, companies like Flock and Palantir would be gutted and their founders barred from any sort of office of responsibility, at minimum. The fact so many deflect blame from the tool like the marketing manual demands shows they don't actually give a shit about the humans wrapped up in the harms, or the misuse and misappropriation of these tools by persons wholly unaccountable under the law, but only about defending a shiny thing they personally like.
There is far too little skepticism around the magic box that solves all problems which is causing issues like this. It's not the fault of the AI (as if it could be assigned liability) for being misused, but this kind of misuse is far too common right now so scare stories like this are helpful and we should highlight the use of AI in mistakes like this.
“A computer can never be held accountable, therefore a computer must never make a management decision.”
(https://www.ibm.com/think/insights/ai-decision-making-where-...)
Modern AI seems incapable of any respectable amount of accuracy or precision. Trusting that to destroy somebody's life is even more farcical than the oppressive police in "Brazil".
> In Tennessee, she was given a court appointed lawyer for the extradition process. To fight the charges, she was told she would have to go to North Dakota.
> Officers from North Dakota did not pick up Lipps from her jail cell in Tennessee until Oct. 30 — 108 days after her arrest. The next day she made her first appearance in a North Dakota courtroom to fight the charges.
> "If the only thing you have is facial recognition, I might want to dig a little deeper," said Jay Greenwood, the lawyer representing Lipps in North Dakota.
But imo this is why local police departments should not have access to this kind of tool. It is too powerful, and the statistical interpretation is too complicated for random North Dakota cops to use responsibly. Neither the company nor the PD have an incentive to be careful.
The cops need to be held accountable.
But it’s glaringly obvious that if you build tools like this and give them to the US police this is the outcome you will get. The toolmakers deserve blame too.
> No, the blame lies with the person or the group who approve the usage of these tools, without understanding their shortcomings.
The person who approved the tools might've understood, but that doesn't mean the user understands. _Some_ of the reason why the user doesn't understand the shortcomings of the tool might be because of misleading UX.
This was 2023 https://www.youtube.com/watch?v=lPUBXN2Fd_E&t=19s
A dude in the USA was arrested in a casino by police because the casino's facial recognition software said he had been trespassed before. He hadn't. I think there were height and eye colour differences. The police still arrested him and booked him. I think the prosecutors took it to trial.
But what else can (identification via) face recognition be (safely) used for? Absolutely nothing. It's tech that's just made for surveillance.
Even if AI facial recognition gets really really good, and is 99.999% accurate, if you use it in this way you are going to arrest more innocent people than guilty people.
If you find a suspect, who has a lot of evidence pointing to them being the criminal and you run a test that is 99.999% accurate and it tells you they are guilty, they are probably guilty.
But if you take that same test and run it against the entire population of the country, it is going to find 3,500 people that match with "99.999% certainty". That gives you roughly a 0.03% chance of any one flagged person actually being guilty.
People don't think like this, though, so they think the person must be guilty.
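The base-rate reasoning above can be made concrete. The numbers are the commenter's hypotheticals (a 99.999% "accurate" matcher, a population of 350 million, exactly one guilty person), and we generously assume the matcher always flags the real culprit:

```python
# Base-rate sketch with assumed, illustrative numbers.
population = 350_000_000
false_positive_rate = 1e-5          # 1 - 0.99999 "accuracy"
true_positive_rate = 1.0            # assume it always flags the culprit

# Everyone except the one guilty person can only be a false match.
expected_false_matches = (population - 1) * false_positive_rate   # ~3,500

# P(guilty | flagged) = true matches / all expected matches
posterior = true_positive_rate / (true_positive_rate + expected_false_matches)
print(f"{posterior:.4%}")           # ~0.03%: a flagged person is almost
                                    # certainly innocent
```

This is the classic base-rate fallacy: the impressive per-comparison accuracy says almost nothing about the probability that a given flagged person is the culprit.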
Also, her guilt was not assessed in any common meaning of the term. The requirement for holding a person in custody, with or without bail, is probable cause. The only thing assessed was did law enforcement present a statement to a Judge that was possible to be believed in the light most favorable to the prosecution.
This is literally the plot of most of those books and the way they differ is in how everything falls apart. In some of them the AI supplants us entirely and kills us all. In others it gets taught to kill us all. In others it gets really good at giving us what we ask for until everything falls apart. But it’s taken as a given that unless we change something innate in our culture AI will be our downfall.
The glaringly obvious problem here is that our justice system should not be constructed in such a way so as to be reliant on someone's coworker shaming him. That is not a sensible check against a systemic failure. We're supposed to have due process. If someone skips or otherwise subverts due process the justifications don't matter. The root issue is that due process was skipped. Why was that even possible to begin with?
If an engineer signs off on an obviously faulty building plan and people die as a result we hold him accountable. This is no different.
The magical past where people had critical thinking skills never existed. We put a lot of trust in tools because people are fucking unreliable. Hence why, in most cases, actual physical evidence does a far better job than witness testimony.
This said, people are lazy. It is one of our greatest and worst traits. When we are allowed to be lazy, especially with tools bad things happen.
New LLM-related AIs are all supremely confident in every assertion, no matter how wrong.
Danish police had to redo 20,000 DNA tests with a larger set of markers being tested, because they had jailed someone based solely on a DNA test and did not consider that they might have gotten the wrong person despite the DNA match. It's essentially a human hash collision.
Identification by AI is going to be the same, except worse, because it's frankly less scientific. Law enforcement, the judicial system, and especially the public are simply too uninterested in learning the limitations of these types of systems. Even in the more civilized parts of the world, police would love to just have the computer tell them who to pick up and where.
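The "human hash collision" framing fits a birthday-problem style estimate: how many coincidental matches should you expect just from comparing everyone in a database against everyone else? All numbers below are hypothetical, for illustration only:

```python
# Expected coincidental matches across a database, birthday-problem style.
# Assumed, illustrative numbers: 100,000 profiles, and a one-in-a-billion
# chance that two unrelated people match on the marker set.
import math

profiles = 100_000
p_random_match = 1e-9

pairs = math.comb(profiles, 2)      # ~5 billion pairwise comparisons
expected_collisions = pairs * p_random_match
print(expected_collisions)          # ~5 coincidental matches in the database
```

The per-pair probability looks vanishingly small, but the number of pairs grows quadratically with database size, so collisions become expected rather than exceptional — which is why a bare "match" is not proof.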
Why are cops not treated the same way? OP is right, AI is totally irrelevant in this story.
If the point is "cops can't be trusted". Why do they have GUNS?! AI is the least of your problems.
I feel like I'm going crazy with this narrative.
So it wasn't just the pill bottle, it was multiple other drug tests. I think you could make a reasonable argument that drug use shouldn't constitute a crime in and of itself (although it probably should if you're driving a car, for legitimate traffic safety reasons; I don't find DUI laws objectionable).

Or you could make an argument that the criminal justice system shouldn't interfere with people's decision to use and sell drugs. I'm sympathetic to this myself, but I think especially in the case of opioids like fentanyl, the government paternalism that makes it illegal to sell opioids probably discourages enough destructive use of these drugs by unwise or already-addicted people that it's still net-positive in terms of human welfare. I suspect a society where it was simply legal to use and sell opioids would have a lot more human suffering in it than our own (possibly because, in the absence of laws banning open opioid dealing, people who are close to severe opioid addicts might simply commit vigilante murders of suspected opioid dealers and be left unconvicted by sympathetic juries).

And once you hold the position that it's legitimate for the government to legally restrict the sale and use of these drugs, then you necessarily have to have something like police and something like a criminal justice system that investigates whether a person might actually be using and selling opioids and then lying about it.
The fact that the guy was in fact once addicted to some drug and "was working at rehab and addiction centers in Florida at the time of his arrest." is additional evidence that he might have returned to drug use, and there's no way to make cops who investigate opioid-related crimes not think this.
That said, it's portrayed as a retirement, and doesn't seem to give any hints that it's connected.
It isn’t much of a salve, but the particulars do matter when trying to assess fault to the proper parties (who are still clearly the Fargo cops in this particular tragedy).
>...Abby Tiscareno, a licensed daycare provider in Utah, was wrongfully convicted of felony child abuse when a child under her care suffered brain hemorrhaging. After calling emergency services, subsequent medical tests supported these findings. However, during her trial, requested medical records from the Utah Division of Child and Family Services (DCFS) were not provided. It wasn’t until a civil suit that Ms. Tiscareno saw pathology reports suggesting the injury could have occurred outside of her care. She was granted a new trial and acquitted. Her subsequent lawsuit for due process violations, alleging that DCFS failed to provide exculpatory evidence, was dismissed due to lack of precedent indicating DCFS’s obligation to produce such evidence.
https://innocenceproject.org/news/what-you-need-to-know-abou...
It was a literal bug in the computer. Metaphor as humor!
There is a history of wrongly arrested individuals with mental disabilities being physically abused in jail, potentially to the point of death.
> Unable to pay her bills from jail, she lost her home, her car and even her dog.
If this is the system "working", then the system is broken.
Edit: wording, formatting
https://abovethelaw.com/2016/02/criminally-yours-indicting-a...
You can be arrested, indicted, and held in jail on pretrial, and there is literally no recourse. There are many other ways jail can happen without due process. Where I live:
* Civil contempt. Absolute immunity. No due process. Record is about 16 years. Having a bad day? Judge can toss you in jail.
* "Dangerous." Half a year. No due process. He-said she-said.
* "Insane." Psychiatric hold. Three days. Due process on paper, not in practice. Police in my town can and do use this if they don't like you.
Absolutely no recourse. You come out with a gap in income, employment, and, if you missed rent/mortgage, no home. Landlords will simply throw your stuff away too.
You're also basically damned if things do move forward, since from jail, you have no access to evidence, to internet (for legal research), and no reasonable way to recruit a lawyer (and, for most people, pay for one).
Can happen to anyone. Less common if you're rich and can afford a good lawyer, but far from uncommon.
In the end, the detective compared the booking photo with the camera footage and concluded they were the same person, then presented that to the judge.
I also wonder what her “probation” was for. Maybe she once wrote a bad check and got into trouble, which might have made the detective more inclined to believe it was her.
Anyway, this does not appear to be an AI issue at all.
But it is nice scary story to remind us not to be lazy and trust it unconditionally.
When I load this URL I get "One more step Please complete the security check to access" and I cannot get past the archive.is computational paywall.
But the guardian article actually has text! Thanks.
That's setting aside the tendency for police to hire from the left side of the bell curve to avoid independent thinkers that might question authority, refuse to do bad shit, etc.
Even if it also output a score, that score depends on how the model was trained. And the cops might ignore it anyways.
We're only getting warmed up. There are programmers on HN that will take the output of their favorite AI, paste it and run it. And we're supposed to be the ones that know better.
What do you think an ordinary person is going to do in the presence of something they can relate to nothing else except an oracle, assuming they even know the term? You put anything in there and out pops this extremely polished-looking document, something that looks better than whatever you would put together yourself, with a bunch of information on it that contains all kinds of juicy language geared up to make you believe the payload. And it does that in a split second. It's absolutely magical even to those in the know, let alone to those who are not.
They're going to fall for it, without a second thought.
And they're going to draw consequences from it that you thought could use a little skepticism. Too late now.
Cops are already susceptible to confirmation bias, and for "efficiencies" they are delegating part of their job to apparently magical tools that will only increase their confirmation bias. And because it is for efficiency you can bet they won't be given extra time to validate the results.
What or who is at fault isn't either/or, it's a bunch of compounding factors.
The purpose of using AI to identify suspects in criminal cases is to ease the burden of manual searching for a suspect (or insert whatever the purpose of statement you want). Ok, but we're getting false positives that are damaging people's lives already in the early stages. And I don't want to hear "trust me bro, it will get more accurate" as an excuse to not regulate it.
At a minimum, we should enshrine the right to appeal AI and have limits on how it can be used for probable cause.
This isn't even the only recent case of this happening. There was another case of mistaken identity due to AI. [0] Sure 4 hours isn't the same as 5 months, but still this guy wanted to show multiple forms of ID to prove who he was! The bodycam footage was posted a few months back but never got traction here.
Like if the police officer can't read numbers, they can't do breathalyzer tests on people. If the AI can't be used responsibly, then it can't be used at all.
You might even argue that's the purpose of the inscrutable black box.
But, then what good is facial recognition for? Would it have been okay for this woman’s life to have been merely invaded because she matched a facial recognition system? Maybe they can just secretly watch you so you’re not consciously aware of being investigated? Should that be our new standard, if a computer thinks you look like a suspect you can be harassed by police in a state you’ve never even been in?
I just don’t see a legitimate way for AI to empower officers here without risking these new harms. That’s why I lean towards blaming the AI tech, rather than historically intractable problems like the reality of law enforcement.
Unfortunately, a lot of people are certain it won't happen to them, and it has been practically impossible to establish any kind of accountability. It has only gotten worse since 2020.
Now that they can blame "AI" no specific officer(s) will take the blame, ever. If no one is responsible there will be many more false positives.
And false positives destroy lives
This woman lost most of her material possessions and was terrorised by "goons"... The police do this stuff regularly, as black people, immigrants, "white trash", etcetera know well. Another opportunity, presented by AI models, for more routine police oppression.
As the wise singer said: "Fuck the police!"
As a concrete example:
> And the criminal justice system, for reasons that have nothing to do with AI, let this woman sit in jail for 5 months before even interviewing her or doing any due diligence.
Let me state what should be obvious: without AI (as in, the facial recognition systems involved in this case), this woman would not have sat in jail for 5 months, or indeed for any length of time at all. So saying that it has "nothing to do with AI" is totally ridiculous.
>> How is this the fault of AI?
> This particular "AI bogeyman" isn't just AI; it's cops with AI
You can’t separate the thing from how it will be used. It’s like arguing that cars on their own aren’t particularly dangerous, but the point of buying a car is to use it thus risking the general public.
This is on us as voters. If we didn’t piss our pants every time a police union sneezed, we’d realize wholesale restarting police departments has precedent in even our largest cities.
It's absolutely absurd. The argument that AI is the problem is literally the people arguing against AI shedding responsibility to the machines. The people arguing that AI is the problem are essentially (philosophically) the same people who will say it was the AI's fault.
The thing it most reminds me of is people trying to stop the deaths and injuries that result from "swatting" by being really angry at people who "swat" and proposing the harshest punishment for it that they can come up with (or that outdoes anyone else's in the thread).
The problem with swatting is that police were showing up to the houses of harmless people based on anonymous phone tips and murdering them. You guarantee swatting will work indefinitely when you indemnify the cops.
You don't need AI for injustice in the US justice system. There is literally no part of the US justice system that makes sense at all, and even in the best case scenario when the guilty are caught, tried, and punished, it is tremendously wasteful, cruel, and ass-backwards. Juries are basically the AI of the US justice system, allowing the prosecutorial and enforcement apparatus to be infinitely cruel, illogical, self-serving and incompetent. 12xFull AGI. AI couldn't do any worse.
> I feel like I'm going crazy with this narrative.
You're not alone.
A friend of mine was committed for longer than 3 days without counsel or the ability to represent themselves in the hearing. Apparently the whole process of being committed is ex parte in practice in some states.
And there are definitely insane people who are a threat to themselves and others who wander around, making the streets and public transit systems unsafe and unpleasant, who need to be put into something like a psychiatric hold by something like the police.
And if you don't have police and a criminal justice system that are willing and able to impose psychiatric holds, you wind up with a bunch of incidents where a crazy mentally-ill vagrant kills someone in a public (the Iryna Zarutska murder, or any of the various cases where a homeless person randomly shoves someone into the path of an oncoming train at a public transit station); or incidents where someone else gets railroaded by the criminal justice system for intervening in a crazy mentally-ill person threatening people around them (the Daniel Penny incident - many people, even nominal anti-carceralists, are upset that he was not successfully convicted and incarcerated for murder). Not to mention all the less-newsworthy incidents where insane people walking the streets and public transit systems systematically ruin them for everyone else, either through vandalism or theft or simply screaming incoherently at people as they try to use the public commons.
It's certainly possible for the police to abuse psychiatric holds if they don't like you; on the other hand, the existence of large numbers of people who should be in some kind of psychiatric hold but aren't disrupting and vandalizing the public commons is one of the biggest quality of life and physical safety problems in my region and in many other American urban areas.
Of course, that depends on sane, non-politicized courts, which you may rightfully doubt exist right now - but assuming the system works anywhere near as designed, outsourcing a decision to AI wouldn't change liability.
For DC fans, Harvey Dent would similarly not be free from liability for actions taken after a coin flip, even if that coin could be viewed in a certain light to have the power to potentially force or prevent certain actions. An AI box that tells Harvey whether to shoot or spare would be similarly irrelevant to his liability - and a scenario in which Harvey points the gun at someone and then walks away, giving the AI control over the trigger, is essentially no different. Harvey in all cases is responsible for constructing the scenario that (potentially) leads to someone's death and, moreover, even if the gun wasn't fired because the AI decided to spare the person, Harvey would be on the hook for attempted murder.
I'm not dismissing the rest of what you are saying, but I don't think you should dismiss appeals to authority being a factor, either.
What if no one wanted to work as a police officer and you ended up alone against the local gang?
As long as you keep electing clowns that let the police do whatever they want, the police will... Do whatever they want.
Qualified immunity protects individuals, not departments, from liability.
The particular thread (in this thread) that I was responding to:
>> I hope she wrings at least several million dollars out of the government.
> With all the lovely qualified immunity doctrine? That's wishful thinking.
I was responding to the claim that qualified immunity protected the government, it does not.
Just because you disagree with the outcome doesn't mean that due process wasn't given.
By that logic the “I” in Siri is 2x more intelligent.
It's an unfortunate story because it sounds like he was having relapse trouble, and the cops were predisposed to do the worst to him that they could (mis)justify, when he needed to cool off and then get back to the professionals helping him with recovery.
The issue is that facial recognition is just not very reliable. Not for humans and not for machines. If you look at millions of people, some of them just look incredibly similar. Yet police apparently thought that was all the evidence they would ever need. A case so watertight there's no point in even talking to the suspect.
Also, their whole job is dealing with people who constantly lie to them.
There was a human doing that in this case; AI doesn’t initiate charges. “In his charging document, the detective wrote that Lipps appeared to be the suspect based on facial features, body type and hairstyle and color.”
How do you arrive at that conclusion? Because it happened, and it wasn't an AI overseeing (the lack of) due process. The police identifying suspects is part of their job. So are arrest warrants and all the rest of it. I honestly don't see what AI had to do with anything here. All I see is a gaping systemic issue that could have happened regardless of AI if the wrong person got the wrong idea or had a personal vendetta.
Suppose ICE busts down someone's door, drags them off, holds them in an internment camp for months, and then finally goes "oh, oops, guess you were a citizen all along sorry about that" and releases them. We don't blame the source of their faulty hit list. We blame the systemic practices and legal apparatus that permitted it all to happen in the first place.
You might as well blame the SUV manufacturer because without vehicles the police wouldn't have been able to drive over to make the arrest, right?
Of course there's a balance that has to be struck so that police are empowered enough to act. So perhaps something like settlements against the police being 30% borne by the police pension fund and 70% by taxpayers is sufficient. I think this will also make police very enthusiastic about bodycams and holding each other accountable.
This is by far the worst argument. What if we held doctors accountable for malpractice and no one wanted to be a doctor? What if we held engineers liable for faulty designs that break and kill people and no one wanted to be an engineer? What if we held OCCUPATION accountable for DOING JOB BADLY / BREAKING THE LAW? It's a nonsense argument.
What would happen is that only the people who intended to be bad police would not want the job, and/or the people who were bad police (intentionally or otherwise) would get kicked out of the police force. Same as with every other profession. This is a fantastic outcome and we should do it immediately.
I'm not sure what the solution is here. Forbid police from unionizing? That would probably have some bad consequences too.
it sucks.
I think there's probably one major exception: civil rights violation investigations. But even then, the people doing the investigating seem to be biased toward the LEOs.
The GP's linked article doesn't seem to even talk about this, so not sure why that's there.
A judge should have to recuse themselves if they are acting as witness to the supposed infraction.
You'd think, but watching how many millions my local police department and city paid out every single year leads me to believe they just don't care.
https://www.washingtonpost.com/investigations/interactive/20...
To define the line between the two, calculate the percentage of cases when mainstream CPUs return anything but integer 4 after addition of integer 2 and integer 2, and use that as the threshold to define "reliable".
What you're stating is your wishful thinking. Don't get me wrong. I'd also like what you say to be true. It very much is not. Quite the opposite, which is why salespeople "work".
The amount of AI bullshit Senior+ level developers just paste to me as truth is astonishing.
"Among his accomplishments has been establishing the department’s Real Time Crime Center that leverages technology and data to support officers in responding more effectively to incidents," the city's release said. "Zibolski also prioritized officer wellness initiatives to strengthen mental health resources and resilience within the department. He reinstituted the Traffic Safety Team to focus on roadway safety and proactive enforcement, and ... played an active role in statewide discussions on various issues affecting law enforcement."
From the same article... He spearheaded a push to "leverage technology and data to support officers in responding more effectively to incidents", then that same technology mistakenly ruins a woman's life by passing along a hit to an officer who compared it with her FB photos and said "sure, seems right". The technology seems highly relevant here. Plus, as we've seen in the software world, when a mandate comes from the top to use the shiny new magic AI tools as much as possible, the officer may have felt pressured to make arrests using the new system they paid a bunch of money for instead of second-guessing whatever it spits out.
If the results were swapped and this had said “13 dead after being misidentified” vs “1 jailed for 5 months in post office scandal” I’m supposed to believe you’d be all “well, at least they’re doing something about the 13 dead”?
I think we both know you’re just talking a load of crap. Admit you were wrong to downplay a large tragedy and move on.
Police get raises and recognition for closing cases. In general they don't care if you're guilty or not, that's someone else's problem. Same with the detective, same with the DA. The more cases they close they 'tougher they are on crime'.
The next thing occurring is:
https://abcnews.com/US/court-oks-barring-high-iqs-cops/story...
Who is this "someone"? OP's article and the discussion here are absolutely not neglecting the human factors and general institutional failure that made this possible. But it's also true that without these "AI" tools, it would never have happened.
The point that you're missing is that, in a system where such abuses are possible, many of us really don't want one more tool in their box for them to fuck us with.
Like, they already prove themselves incompetent- giving the power to track anyone in the US via a distributed ALPR system just makes them more dangerous. Giving them all these "AI" based tools does the same.
Same deal here: if something “becomes a problem” because of the introduction of AI, it’s AI that is the root cause of the resulting issues. Many people are tempted to argue that flawed humans can’t implement the perfect system that is Anarchy, Communism, Recycling programs, or whatever, but treating systems as needing to operate in the real world is productive where complaining about humans isn’t.
Because it's beyond obvious? How would this woman have ended up in jail if she hadn't been misidentified by the facial recognition software in use by the Fargo police? She lives 3 states over; would be a hell of a coincidence if some other avenue of investigation led them to her.
> I honestly don't see what AI had to do with anything here.
You honestly don't see what facial recognition software had to do with a woman being misidentified by facial recognition software?
> Suppose ICE busts down someone's door, drags them off, holds them in an internment camp for months, and then finally goes "oh, oops, guess you were a citizen all along sorry about that" and releases them. We don't blame the source of their faulty hit list.
I actually am completely willing to blame any entity that supplies ICE with the names of people it can reasonably assume will be targeted for "enforcement action" due to said entity representing said names as being legitimate targets for said enforcement action, without taking reasonable care to ensure said representation is correct in each and every case.
What you don't seem to understand is that these abuses of law enforcement authority are predicated on at least an appearance of legitimacy, which can be provided by (e.g.) an app with (presumably) a very official looking logo that agents can point at somebody to get a 'CITIZEN' or 'NOT CITIZEN' classification. It is upon this kind of basis that they perform illegal arrests. All parties—the app vendor and ICE, as well as the people who are meant to be overseeing ICE and providing accountability—are complicit enablers in these crimes. To absolve the vendors who provide the software knowing full well what it will be used for, what its limitations are, and how unlikely it is that ICE personnel will understand those limitations and work around them to keep everything legal, is totally absurd.
I don't know if I'd go so far to say she won't find any relief, but it probably still could be a pretty tough Monell claim against the department (although it's hard to tell from the sparse details in the article):
"[A] local government may not be sued under [42 U.S.C.] § 1983 for an injury inflicted solely by its employees or agents. Instead, it is when execution of a government's policy or custom, whether made by its lawmakers or by those whose edicts or acts may fairly be said to represent official policy, inflicts the injury that the government, as an entity, is responsible under § 1983." [1]
I could see a problem if there was a policy/custom of relying on AI facial recognition alone without any other corroborating evidence (would be a really stupid practice, but I'm sure stupider things have become part of a police department's systemic practices). Or if there was a failure to sufficiently train detectives about the erroneous tendencies of this technology. Maybe the needlessly prolonged detention without bail could be an issue if there was a lack of adequate protocols to expedite in a reasonable amount of time.
Either way, it still seems hard to say this is a slam dunk case for her, unfortunately. But it also seems too risky for the city of Fargo to not settle, at least nominally.
[1] Monell v. Department of Soc. Svcs., 436 U.S. 658 (1978), https://supreme.justia.com/cases/federal/us/436/658/
Now to your nonsense that I somehow downplayed the post office scandal by pointing out your attempt to compare apples to oranges (aka false equivalence, as in you're engaging in a logical fallacy): utterly ridiculous nonsense, not even a good attempt at a straw man.
Either way, you're not worth engaging with further. You've nothing to add, and engaging further would risk breaking the rules of this site...
This statement should make you uncomfortable. It makes me uncomfortable because it is a pure expression of the power of the state. But it's still due process.
If a few cities/states were to default due to debts coming from such cases, the others would start to take notice...
No, everybody does not want police accountability. Half the population will fall on a grenade to prevent that. They know that the purpose of the police is to keep the undesirables in line, and they never envision that they will ever fall in that category.
The brutality is the point for them.
(Nor does the omission in the article of other names and procedural details change the fact that for there to be actual criminal charges, an arrest warrant, extradition, and incarceration, a number of other people had to sign their names to official acts, including, among others, at least one public prosecutor, and more than one judge.)
* Nobody can find a police department that administers any kind of general cognitive test.
* There are large states with statewide written police aptitude tests that are imperfect but correlated to general cognitive ability, and maximizing scores on that test is the universal correct strategy.
* It's a luridly stupid policy and most municipalities aren't luridly stupid.
I think this happened like, once or twice, in one or two of the 20,000 police departments across the United States, many of which are like one goober and his sidekick (no offense to them; just, you live in gooberville, you're a goober), and now it's an Internet meme that police departments specifically hire for midwittery. Nah.
No. That's gaslighting, and totally misplaced political activation.
To your example, technology changes and that necessitates infrastructure changing. That doesn't mean that fault for mishaps in the meantime can be attributed to the new technology. A user operating the new technology in an obviously unsafe manner is solely at fault for his own negligence.
Death by adverse horse encounter was very common before the 1920s. Not sure how many of those deaths can be blamed on poor quality road engineering. But putting a bunch of humans, carts, and excitable half-ton animals in the same crowded streets seems like poor engineering practice.
It doesn't matter in the slightest by what means she was selected to "win" this particular lottery. The tool rolling the dice isn't to blame. Tools (and people!) will occasionally return spurious results. Any system needs to be set up to deal with that.
So no, I honestly don't see what facial recognition software has to do with gross negligence and process failure on the part of multiple government agencies.
> without taking reasonable care to ensure said representation is correct in each and every case.
Only if that was part of the contract. Was the product delivered according to specification or not?
What if ICE used FOSS tools to put together the list themselves? Are the tools still to blame? That would obviously be absurd.
The only way the provider (never the tool) could be at fault would be something such as willful negligence or knowingly and intentionally attempting to manipulate the user's actions to some end.
What you don't seem to understand is that human negligence can't be foisted off on tools. Of course an abuser will try to play his actions off as legitimate. That isn't the fault of the tool, it's the fault of the abuser. It isn't up to an app to determine the legitimacy of LEO agent actions. Neither is it the responsibility of an arbitrary, fungible government contractor to oversee ICE.
I think you're confusing the morality of participating in a broader ecosystem with moral culpability for the process failure associated with a specific event. You can advance a reasonable argument that AI companies that choose to do business with ICE are making an at least moderately immoral decision. However that doesn't place them at fault for the specific process failures of any particular event that happens.
1 - 38 million between 2017 and 2022.
2 - 29 million in 2023.
3 - 12 million in settlements in 2025.
Dare I keep going?
[1] https://www.wdrb.com/in-depth/louisville-payouts-for-police-...
[2] https://www.aol.com/louisville-paid-least-29m-settle-1030450...
[3] https://www.courier-journal.com/story/news/local/2026/02/04/...
Waymo hitting a cat is obviously less tragic, but if it can hit a cat, what else can it hit? A toddler? A human? The wall of your kitchen? This is a problem that has no known solution; furthermore, it's a problem that the engineers at Waymo don't seem overly keen on solving quickly.
Failing to acknowledge cars as the root cause may be comforting, but it blinds you to viable solutions.
Indoor shopping malls, for example, solve many of the issues with cars by forcing people to move around on foot in a little island surrounded by a sea of very low density parking. They aren’t perfect solutions, but they still saved a lot of lives and time.
Saying people are misusing a new technology is just another way of saying that technology is flawed. This doesn’t mean you can’t utilize it, but pretending flaws don’t exist has no value.
They certainly don't seem to use any of that technology well, as you yourself have admitted.
I suppose what I don't understand is why giving them access to more and easier-to-abuse technology would be a "good" thing.
To be clear, I understand that it's the people who kill folks, not guns, and that at the end of the day it's people who need to be held accountable, not the technology. Personally, I do a lot of shooting with a bunch of other queer and trans anarchist folks lately...
But giving more tech to the folks who are already misbehaving without mechanisms to enforce good behavior seems dumb to me.
After vast improvements in safety, ~1.3% of American deaths still come from automobile accidents. Horses were never close to that; meanwhile, back in 1970 cars were around twice as likely to kill you.
And the first article you link proves that people are already worried about it. You think they can safely 10x that?
The problem is that the mass media sets the framing of acceptable discourse, and that mass media is in large part an ideological monoculture. And even when it's not, it is happy to present absolutely insane batshit lunacy as 'one of the two sides' of an issue.
The LST isn't; it's a domain-specific occupational exam.
If you find a place that (1) uses the Wonderlic and (2) has recently (like, not all the way back in 2000) claimed there was a high-end cut-off for applicants, you'll have disproven my claim. I don't think giving general cognitive tests to prospective police officers is common; this is why there are things like the LST, the PELLETB, and the POST.
Great, let's just apply that logic to Waymo as well and call it a day (see how silly that sounds?). Waymo has engineers..so does the Department of transportation.
If you'd like, you can replace the term "suspect" in my post with "person of interest", which colloquially implies a lot less suspicion but isn't practically any different in terms of how the police interacts with you.
I see. It's clear that you're ignoring the whole reason police exist which is to prevent crime. Of course a handicapped police force would prevent less crime than a well-resourced one. That's why it would be a good thing to give them more and easier to abuse technology.
The question is where the right balance is. Maybe having cars is OK because it helps them prevent more crime than what they cause by, say, running people over. Whereas having guns could be a net negative because more people are shot by police than protected by them with their guns. But without data, it's just opinions, probably formed from whatever bias the news has. The fact that you named an individual case suggests your opinion is based on biased news instead of data.
https://horse-canada.com/horses-and-history/the-poo-conferen...
The facial recognition tool didn't arrest her. It holds no authority, has no will of its own, and does not possess a corporeal form with which to enact change in the world. The only parties that could possibly be at fault here are various government agents who clearly acted with negligence, failing to uphold their duty to the law and the people.
If you're unable to rebut my point then perhaps you should consider that you might be in the wrong? If you're unwilling to entertain such a possibility then I wonder why you're posting here to begin with. What is your goal?
It's a fair point, and easy to handwave away as "it's only $100 per resident." But it's still a lot of money. And yet that city is shutting down schools and selling off school properties to make budget this year. I bet they'd love to have those wasted millions.
> You think they can safely 10x that?
I have no idea the reason for this question. The OP said cities learn after a couple million dollar suits. I'm showing that no, they do not. If anything suits are increasing.
But that level of logic follows from what you're writing here, so it's not surprising you think that...
You, yesterday:
> I honestly don't see what AI had to do with anything here.
???
> You seem to be intentionally ignoring the point I made.
I completely understand your point. You are saying that if a mentally ill high schooler manages to acquire a gun and kills 20 people at their school, we should a) punish the shooter, and b) understand the gun as a neutral object that simply popped into existence and was misused, rather than a machine whose design purpose is to kill humans, and whose manufacturer(s) (and other organizations who profit from the easy availability of guns) are actively engaged in a broad effort to preserve the status quo which allowed a mentally ill high schooler to acquire a gun and massacre 20 of their classmates/teachers.
I think it's a terrible opinion, and I vehemently disagree with it. But if you are willing to engage in the sort of rhetorical contortions highlighted at the top of this comment, there is no point in expressing my disagreements to you, because you will evidently say literally anything in response. I may as well have a debate about toilet tank design with `cat /dev/urandom`.
> If you're unable to rebut my point then perhaps you should consider that you might be in the wrong?
Try looking in the mirror, buddy. Sheesh.
Well it does make sense, in the full context of the thread. I'll let future readers decide.