> Omnilert later admitted the incident was a “false positive” but claimed the system “functioned as intended,” saying its purpose is to “prioritize safety and awareness through rapid human verification.”
It just created a situation in which a bunch of people with guns were told that some teen had a gun. That's a very unsafe situation that the system created, out of nothing.
And some teen may be traumatized. Again, unsafe.
Incidentally, the article's quotes make this teen sound more adult than anyone who sold or purchased this technology product.
Another false positive by one of these leading content filters schools use - the kid said something stupid in a group chat and an AI reported it to the school, and the school contacted the police. The kid was arrested, strip searched, and held for 24 hours without access to their parents or counsel. They ultimately had to spend time on probation, undergo a full mental health evaluation, and attend an alternative school for a period of time. They are suing Gaggle, who claims they never intended their system to be used that way.
These kinds of false positives are incredibly common. I interviewed at one of their competitors (Lightspeed), and they actually provide a paid service where humans review all the alerts before they're forwarded to the school or authorities. This is a paid addon, though.
> Gaggle’s CEO, Jeff Patterson, said in an interview that the school system did not use Gaggle the way it is intended. The purpose is to find early warning signs and intervene before problems escalate to law enforcement, he said.
“I wish that was treated as a teachable moment, not a law enforcement moment,” said Patterson.
It's entirely predictable that schools will call law enforcement for many of these detections. You can't sell to schools that have "zero tolerance" policies and pretend that your product won't trigger those policies.
Exactly. In a saner world, we could use fallible AI to call attention to possible concerns that a human could then analyze and make an appropriate judgment call on.
But alas, we don't live in that world. We live in a world where there will be firings, civil liability, and even criminal liability for those who make wrong judgment calls. If the AI says "possible gun", the human running things who alerts a SWAT team faces all upside and no downside.
Hmm, maybe this generation's version of "nobody ever got fired for buying IBM" will become "nobody ever got fired for doing what the AI told them to do." Maybe humanity is doomed after all.
I can't say that I think it would be a saner world to have the equivalent of a teacher or hall monitor sitting in on every conversation, even if that computer chaperone isn't going to automatically involve the cops. I don't think you can build a better society where everyone is expected to speak and behave defensively in every circumstance as if their words could be taken out of context by a snitch - computer or otherwise.
There is still liability there, and it should be even higher when the decision is to implement such callously bad processes. Doubly so since this has demonstrably happened once.
At least in the current moment, the increasing turn to using autonomous weaponry against one's own citizens - I don't think it says so much about humanity as about the US. I think US foreign policy is a disaster, but turning the AI-powered military against the citizenry does look like it's going to be quite successful, presumably because the US leadership is fighting an enemy incapable of defending itself. I think it's unsustainable, though, economically speaking. AI won't actually create value once it's a commodity itself (since a true commodity has its value baked into its price). Rates of profit will continue to fall. The ruling class will become increasingly desperate in its search for growth. Eventually an economy that resorts to techno-fascism implodes. (Not before things turn quite ugly, of course.)
Holy shitballs. In my experience such paid addons have very cheap labor attached to them, certainly not what you would expect based on the sales pitch.
> The kid was arrested, strip searched, and held for 24 hours without access to their parents or counsel. They ultimately had to spend time on probation, undergo a full mental health evaluation, and attend an alternative school for a period of time.
All he wanted was a Pepsi. Just one Pepsi. And they wouldn't give it to him.
They actually don't have all the rights of a person and they do have those same responsibilities.
If this company was a sole proprietorship, the only recourse this kid would have is to sue the owner, up to bankruptcy.
Since it's a corporation, his recourse is to sue the company, up to bankruptcy.
As for corporations having rights, I can explain it further if necessary but the key understanding is that the singular of "corporations are people" is "a corporation is people" not "a corporation is a person".
You can't put a corporation in prison. But a person you can. This is one of the big problems. The people making the decisions at corporations are shielded from personal consequences by the corporation. A corporation can be shut down but it rarely happens.
Even when Boeing knowingly caused the deaths of hundreds (especially the second crash, which was entirely preventable if they had been honest after the first one), all they got were some fines. Those just end up being charged back to their customers, a big one being the government that fined them in the first place.
I'm sure the top leadership was well aware of what happened after the first crash yes. They should have immediately gone public and would have prevented the second crash.
Don't forget that hiding MCAS from pilots and the FAA was a conscious decision. It wasn't something that 'just happened'. So was the decision not to make it depend on redundant AoA sensors by default.
My point is, I can imagine that the MCAS suicidal side-effect was something unexpected (it was a technical failure edge-case in a specific and rare scenario) and I get that not anticipating it could have been a mistake, not a conscious decision. But after the first crash they should have owned up to it and not waited for a second crash.
You need a judge and jury for prison sentences for criminal convictions.
If the government decides to prosecute the matter as a civil infraction, or doesn't even bother prosecuting but just has an executive agency hand out a fine, that's not a matter of the corporation shielding people, that's a matter of the government failing to prosecute or secure a conviction.
If the company is a sole proprietorship, you can sue the person who controls it up to bankruptcy, which will affect their personal life significantly. If the company is a corporation/LLC, you can sue the corporate entity up to the bankruptcy of the corporate entity, while the people controlling the company remain unaffected.
This gets even more perverse. If you're an individual, you actually can't just set up an LLC to limit your own liability. There's no mechanism for an individual to say "I'm putting on a hat and acting solely as the LLC" - rather, as the owner you need to find and employ enough judgment-proof patsies that the whole thing becomes a "group project" and you can say you personally weren't aware of whatever problem gave rise to liability. In other words, the very design of corporations/LLCs encourages avoiding responsibility.
You're correct with the nitpick about the Supreme Court's justification, but that justification is still poor reasoning. Corporations are government-created liability shields. How they can direct their employees should be limited, to avoid trampling on those individuals' own natural rights. A person or group of people who want to exercise their personal natural rights through hired employees can always forgo the government-created liability shield and go sole proprietorship / general partnership.
> it is much harder to hold a corporation responsible
In some ways, yes. In most ways, no. In most cases, a massive fine aligns interests. Our problem is we've become weak kneed at levying massive fines on corporations.
Unlike a person, you don't have to house a corporation to punish it. Your fine simply wipes out the owners. If the enterprise is a going concern, it's born under new ownership. If it's not, its assets are redistributed.
> Jail is a great deterrent for natural persons
Jail works for executives who defraud. We just, again, don't do it. This AI could have been sold by a billionaire sole proprietor, I doubt that would suddenly make the rules more enforceable.
Engineer: hey I made this cool thing that can help people in public safety roles process information and make decisions more efficiently! It gives false positives, but weeding through them takes less time than it saves.
Someone nearby: well what if they use it to replace human thinking instead of augment it?
Engineer: well they would be ridiculous. Nobody would ever think that’s a good idea.
Marketing Team: it seems like this lands best when positioning it as a decision-making tool. Let’s get some metrics on how much faster it is at making decisions than people are.
Sales Rep: ok, Captain, let’s dive into our flagship product, DecisionMaker Pro, the totally automated security monitoring agent…
::6 months later—some kid is being held at gunpoint over snacks.::
Lack of Accountability as-a-Service! Very attractive proposition to negligent and self-serving organizations. The people in charge don't even have to pay for it themselves, they can just funnel the organization's money to the vendor. Encouraging widespread adoption helps normalize the practice. If anyone objects, shut them down as not thinking-of-the-children and something-must-be-done (and every other option is surely too complicated/expensive).
Nice fantasy, but the reality is that the "people in public safety roles" love using flimsy pretenses to harass and abuse vulnerable populations. I wish it was just overeager sales and marketing, but your view of humanity is way too naive, especially as masked thugs are disappearing people in the street as we type.
Delegating the decision to AI and excluding the human from the "human in the loop" is kind of unexpected as a first step, since in general it was expected that the exclusion would start from the other end. As an aside, I wonder how that is going to happen on the battlefield.
For this civilian use case, the next step is AR goggles worn by police, with that AI projecting onto the goggles where the teenager supposedly has his gun (kind of Black Mirror style), and the step after that is obviously excluding the humans even from the execution step.
In any system, there are false positives and false negatives. In some situations (like high-recall disease screening) false negatives are much worse than false positives, because the cost of a false positive is just a more rigorous follow-up screening.
But in this case both are bad. If it was a false negative students might need therapy for a more tragic reason.
Aside from improving the quality of the detection model, we should try to reduce the “cost” of both failure modes as much as possible. Putting a human in the loop or having secondary checks are ways to do that.
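To make that concrete, here's a toy back-of-the-envelope sketch of the tradeoff. Every rate and cost below is a made-up placeholder for illustration, not a measurement of any real system; the only point is that a cheap secondary check mostly attacks the false-positive term.

```python
# Toy model of "reduce the cost of both failure modes".
# All rates and costs are hypothetical placeholders, not data about any product.

def expected_cost(p_gun, tpr, fpr, cost_missed_gun, cost_false_alarm):
    """Expected cost per monitored event from the two failure modes."""
    p_false_negative = p_gun * (1 - tpr)   # real gun present, not flagged
    p_false_positive = (1 - p_gun) * fpr   # no gun, flagged anyway
    return p_false_negative * cost_missed_gun + p_false_positive * cost_false_alarm

# Raw pipeline: every false alarm escalates straight to an armed response.
raw = expected_cost(p_gun=1e-5, tpr=0.95, fpr=1e-3,
                    cost_missed_gun=1_000_000, cost_false_alarm=50_000)

# Human-in-the-loop: a reviewer catches ~90% of false alarms cheaply,
# at the (assumed) price of a slightly lower effective detection rate.
reviewed = expected_cost(p_gun=1e-5, tpr=0.93, fpr=1e-4,
                         cost_missed_gun=1_000_000, cost_false_alarm=50_000)

print(f"raw pipeline expected cost:      {raw:.1f}")
print(f"human-reviewed expected cost:    {reviewed:.1f}")
```

With these invented numbers the false-positive term dominates the raw pipeline, which is exactly why the secondary check is worth its overhead.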
In this case false positives are far, far worse than false negatives. A false negative in this system does not mean a tragedy will occur, because there are many other preventative measures in place. And never mind the fact that this country refuses to even address the primary cause of gun violence in the first place: the ubiquity of guns in our society. Systems like this are what we end up with when we refuse to address the problem of guns and choose to deal with the downstream effects instead.
> the primary cause of gun violence in the first place: the ubiquity of guns in our society
I would have gone with “a normalized sense of hopelessness and indignity which causes people to feel like violence is the only way they can have any agency in life” considering “gun” is the adjective and “violence” is the actual thing you're talking about.
Both are true. The underlying oppressive, lonely, pro-bullying culture creates the tension. The proliferation of high lethality weapons makes it more likely that tension will eventually release in the form of a mass tragedy.
Improvement in either area would be a net positive for society. Improvement in both areas is ideal but solving proliferation seems a lot more straightforward than fixing the generally miserable society problem.
To be clear, the false negative here would be a student who has brought a gun to a school and the computer ignores it. That is a situation where potentially multiple people can be killed in a short amount of time. It is not far, far worse to send cops.
Depends on the false positive rate doesn't it. If police are being sent to storm a school every week due to a false positive, that is quite bad. And people will become conditioned to not care about reports of a gun at a school because of all the false positives.
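Rough numbers show why the false positive rate dominates when real guns are rare. Here's a toy base-rate calculation; all of the rates are invented for illustration, not statistics about this product:

```python
# Toy base-rate calculation: what fraction of alerts are false alarms?
# Every number here is a made-up assumption for illustration only.

prevalence = 1e-5            # assumed chance a given monitored frame really shows a gun
sensitivity = 0.95           # assumed chance a real gun gets flagged (true positive rate)
false_positive_rate = 0.001  # assumed chance a harmless frame gets flagged anyway

p_alert = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
p_gun_given_alert = sensitivity * prevalence / p_alert

print(f"Share of alerts that are real guns: {p_gun_given_alert:.2%}")
# With these assumptions, roughly 99% of alerts are false alarms, which is
# exactly the "cried wolf" conditioning described above.
```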
For what I’m saying, no it doesn’t because I’m just comparing a single instance of false positive to a single instance of false negative. Neither is desirable.
> But in this case both are bad. If it was a false negative students might need therapy for a more tragic reason.
Given the probability of police officers in the USA treating any action as hostile and then ending up shooting him, a false positive here is the same as swatting someone.
The system here sent the police off to kill someone.
Yep. Think of it as the new exciting version of swatting. Naturally, one will still need to figure out common ways to force a specific misattribution, but, sadly, I think there will be people working on it ( if there aren't already ).
Sure. But school shootings are also common in the US. A student who has brought a gun to a school is very likely not harmless. So false negatives aren’t free either.
I was swatted once. Girlfriend's house. Someone called 911 and said they'd seen me kill a neighbor, drag their body into the house, and was now holding my gf's family hostage.
We answered the screams at the door to guns pointed at our faces, and countless cops.
It was explained to us that this was the restrained version. We got a knock.
Unfortunately, I understand why these responses can't be neutered too much. You just never know.
In this case, though, you COULD know, COULD verify with a human before pointing guns at people, or COULD not deploy a half finished product in a way that prioritizes your profit over public safety.
Happened to a friend of mine: an ex-GF said he was on psych meds (true, though he is nonviolent with no history) and that he was threatening to kill his parents. NYPD SWAT no-knock kicked down the door to his apartment, terrorizing his elderly parents as the officers pointed guns at their son (in his words, "machine guns"). BUT because he has psych issues and is on meds, he was forced into a cop car in front of the whole neighborhood to get a psych evaluation. He only received an apology from the cops, who said they have no choice but to follow procedure.
Do the cops not ever get tired of being fooled like this? Or do they just enjoy the chance to go out in their army-surplus armored cars and pretend to be special forces?
I've had convos with cops about swatting. The good ones aren't happy to kick down the door of someone who isn't about to harm anyone, but they feel they can't chance making a fatally wrong call in case it isn't swatting. They also have procedures to follow, and if they don't, the outcome is on them personally and potentially legally.
As for bad cops they look for any reason to go act like aggro billy badasses.
This is a really good question. Sadly the answer is that they think it's how the system is meant to work. Well, that seems to be the answer I see coming from police spokespeople.
It's likely procedure that they have to follow (see my other post in this thread).
I hate to say this but I get it. Imagine a scenario happens where they decide "sounds phony. stand down." only for it to be real and people are hurt/killed because the "cops ignored our pleas for help and did nothing." which would be a horrible mistake they could be liable for, never mind the media circus and PR damage. So they treat all scenarios as real and figure it out after they knock/kick in the door.
To that end, we should all have a cop assigned to us. One cop per citizen, with a gun pointed at our head at all times. Imagine a scenario happens where someone does something and that cop wasn't there? Better to be safe.
I don't think you know how policing works in America. To cops, there are sheep, sheepdogs, and wolves; they are sheepdogs protecting us sheep from the criminals. Nobody needs to watch the sheepdogs!
But let's think about their analogy a little more: sheepdogs and wolves are both canines. Hmm.
Also "funny" how quickly they can reclassify any person as a "wolf", like this student. Hmm.
Maybe we should move beyond binary thinking here. Yes, it's worth sending someone to investigate, but it's also worth making some effort to verify who the call is coming from - to get their identity, and to ask them something simple like describing the house (in this example) so the arriving cops will know they're going to the right address. Of course you can get a description of the house from Google Street View, but 911 dispatchers can solicit information like what color car is currently parked outside or suchlike. They could also look up who occupies the house and make a phone call while cops are on the way.
Everyone knows swatting is a real thing that happens and that it's problematic, so why don't police departments have procedures in place which include that possibility? Who benefits from hyped-up police responses to false claims of criminal activity?
Cops don't have a duty to protect people, so "cops ignored our pleas for help and did nothing" is a-ok, no liability (thank you, qualified immunity). They very much do not treat all scenarios as real; they go gung-ho when they want to and hang back for a few hours "assessing the situation" when they don't.
I'm a paramedic who personally attended a swatting call where every single detail was egregiously wrong, but police still went in, no-knock, causing thousands of dollars in damage (which, to be clear, they have absolutely zero liability for), thankfully with no injuries.
"I can see them in the upstairs window" - of a single story home.
"The house is red brick" - it was dark grey wood.
"No cars in the driveway" - there was two.
Cops still said "hmm, still could be legit" and battered down the front door, deployed flashbangs.
There are more options here than "do nothing" and "go in guns blazing".
Establishing the probable trustworthiness of the report isn't black magic. Ask the caller for details, question the neighbours, look in through the windows, or just send two plainclothes officers pretending to be salesmen to knock on the door first. Continuously adjust the approach as new information comes in. This isn't rocket science, ffs.
It doesn't make sense. If you were holding people hostage, you'd have demands for their release. Windows could be peeked into. If you dragged a dead body into a house, there'd be evidence of that.
False positives can effectively lead to false negatives too. If too many alarms end in teens getting swatted (or worse) for eating chips, people might ignore the alarm if an actual school shooter triggers it. Might assume the AI is just screaming about a bag of chips again.
I think a “true positive” is an issue as well if the protocol to manage it isn’t appropriate. If the kid was armed with something other than nacho cheese, the provocative reaction could have easily set off a tragic chain of events.
Reality is there are guns in schools every day. “Solutions” like this aren’t making anyone safer. School shooters don’t fit this profile - they are planners, not impulsive people hanging out at the social event.
More disturbing is the meh attitude of both the company and the school administration. They almost engineered a tragedy through incompetence, and learned nothing.
The danger is that it's as clear as day that in the future someone is gonna be killed. That's not just a possibility. It's a certainty given the way the system is set up.
This tech is not supposed to be used in this fashion. It's not ready.
Did you want to emphasize or clarify the first danger I mentioned?
My read of the "Um" and the quoting, was that you thought I missed that first danger, and so were disagreeing in a dismissive way.
When actually we're largely in agreement about the first danger. But people trying to follow a flood of online dialogue might miss that.
I mentioned the second danger because it's also significant. Many people don't understand how safety works, and will think "nobody got shot, so the system must have worked, nothing to be concerned about". But it's harder for even those people to dismiss the situation entirely, when the second danger is pointed out.
I’d argue the second danger is worse, because shooting might be incidental (and up to human judgement) but being traumatized is guaranteed and likely to be much more frequent.
Americans are killed by police all the time, and by other Americans. We've already decided as a society that we don't care enough to take the problem seriously. Gun violence, both public and from the state, is accepted as unavoidable and defended as a necessary price to pay to live in a free society[0]. Having a computer call the shots wouldn't actually make much of a difference.
Hell, it wouldn't even move the needle on racial bias much because LLMs have already shown themselves to be prejudiced against minorities due to the stereotypes in their training data.
[0]Even though no other free society has to pay that price but whatever.
Guns are actually easier to control, and controlling them significantly reduces the ability to target multiple people at once. There are a lot of countries successfully controlling guns.
To the argument that then only criminals have guns - in India at least, criminals have very limited access to guns. They have to resort to unreliable handmade guns which are difficult to procure. Usually criminals use knives and swords due to that.
> The danger is that it's as clear as day that in the future someone is gonna be killed.
This argument can be made about almost every technology, including fire, electricity, and even the base "technology" of just letting a person watch a camera feed.
I'm not downplaying the risks. I'm saying that we should remember that almost everything has risks and benefits, and as a society we decide for or against using/doing them based mostly on the ratio between those two things.
So we need some data on the rates of false vs. true detections here. (A value judgment is also required.)
> This argument can be made about almost every technology, including fire, electricity, and even the base "technology" of just letting a person watch a camera feed.
huh, i can think of at least one recent example of a popular figure using this kind of argument to extreme self-detriment.
Is HN really this ready to dive into obvious logical fallacies?
My original comment, currently sitting at -4, has so far attracted one guilt-by-association plus implied-threat combo, and no other replies. To remind readers: My horrifying proposal was to measure both the risks and the benefits of things.
If anyone genuinely thinks measuring the risks and benefits of things is a bad idea, or that it is in general a good idea but not in this specific case, please come forward.
sorry for being glib; it was low hanging fruit. my actual point should have been more clearly stated: measuring risk/benefit is really complicated because there's almost never a direct comparison to be made when balancing profit, operational excellence and safety.
Stuff like this feels like some company has managed to monetize an open source object detection model like YOLO [1], creating something that could be cobbled together relatively easily, and then sold it as advanced AI capabilities. (You'd hope they'd have at least fine-tuned it / have a good training dataset.)
We've got a model out there now that we've just seen has put someone's life at risk... Does anyone apart from that company actually know how accurate it is? What it's been trained on? Its false positive rate? If we are going to start rolling out stuff like this, should it not be mandatory for stats / figures to be published? For us to know more about the model, and what it was trained on?
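For what it's worth, the barrier to entry really is low. A minimal sketch of what such a pipeline could look like, assuming the off-the-shelf ultralytics package and a pretrained YOLO checkpoint; the "gun" class, thresholds, and camera source are all hypothetical, and none of this reflects Omnilert's actual system:

```python
# Hypothetical sketch of a "weapon alert" pipeline built on an off-the-shelf
# detector. Not any vendor's real system; purely to show how little is needed.
from ultralytics import YOLO  # pip install ultralytics

model = YOLO("yolov8n.pt")  # small pretrained checkpoint (COCO classes)

ALERT_CLASSES = {"gun"}      # hypothetical label; not in the stock COCO set,
                             # a real deployment would need a fine-tuned model
CONFIDENCE_THRESHOLD = 0.5   # arbitrary; tuning this trades FPs against FNs

# "school_camera_feed.mp4" is a placeholder source for illustration.
for result in model("school_camera_feed.mp4", stream=True):
    for box in result.boxes:
        label = result.names[int(box.cls)]
        conf = float(box.conf)
        if label in ALERT_CLASSES and conf >= CONFIDENCE_THRESHOLD:
            # In a sane design this goes to a human reviewer with the frame
            # attached, not straight to an armed response.
            print(f"ALERT: {label} detected at confidence {conf:.2f}")
```

Which is exactly why published false-positive rates and training-data disclosures should be table stakes before anything like this touches a dispatch system.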
"“We understand how upsetting this was for the individual that was searched as well as the other students who witnessed the incident,” the principal wrote. “Our counselors will provide direct support to the students who were involved.”"
Make them pay money for false positives instead of direct support and counselling. This technology is not ready for production, it should be in a lab not in public buildings such as schools.
This assumes no human verification of the flagged video. Maybe the bag DID look like a gun. We'll never know, because modern journalism has no interest in such things. They obtained the required emotional quotes and moved on.
Human verified the video -> human was the decision-maker. No human verified the video -> Human who gave a blank check to the AI system was the decision-maker. It's not really about the quality of journalism, here.
We're talking about who should be charged with a crime. I sincerely hope we're going to do more discovery than "ask Dexerto to summarize what WBAL-TV 11 News said".
Superintendent approved a system that they 100% knew would hallucinate guns on students. You assert that if the superintendent required human-in-the-loop before calling the police that the superintendent is absolved from deploying that system on students.
You are wrong. The superintendent is the person who decided to deploy a system that would lead to swatting kids and they knew it before they spent taxpayer dollars on that system.
The superintendent also knew that there is no way a school administrator is going to reliably NOT dial SWAT when the AI hallucinates a gun. No administrator is going to err on the side of "I did not see an actual firearm so everything is great even though the AI warned me that it exists." Human-in-the-loop is completely useless in this scenario. And the superintendent knows that too.
In your lifetime, there will be more dead kids from bad AI/police combinations than <some cause of death we all think is too high>. We are not close to safely betting lives on it, but people will do it immediately anyway.
So, are you implying that if humans surveil kids at random and call the SWAT team if a frame in a video seems to imply one kid has a gun, that then it's all OK?
Those journalists, just trying to get (unjustified, dude, unjustified!!) emotes from kids being mistakenly held at gun point, boy they are terrible.... They're just covering up how necessary those mistakes are in our pursuit of teh crime...
If security sees someone carrying a gun in surveillance video, on a gun-free campus, and police verify it, then yes, that's justified, by all aspects of the law. There are countless examples of surveillance of illegal activity resulting in police action.
Nobody saw a gun in a video. Nobody even saw something that looked like a gun. A chip bag, at most, is going to produce a bulge. No reasonable human is going to look at a kid with a random bulge in their pocket and assume gun. Otherwise we might as well start sending our kids to school naked; this is the kind of paranoia that brought us the McMartin Preschool nonsense.
They didn't see that, though. They saw a kid with a bulge over their pants pocket, suggesting that something was in the pocket. The idea that any kind of algorithm can accurately predict that an amorphous pocket bulge is a gun is just bonkers stupid.
(Ok, ok, with thin, skin-tight, light-colored pants, maybe -- maybe -- it could work. But if it mistook a crumpled-up Doritos bag as a gun, clearly that was not the case here.)
The taxpayers collectively pay the money, the officers involved don't (except for that small fraction of their income they pay in taxes that increase as a result).
The question is whether that Doritos-carrying kid is still alive only because he is white, instead of being shot at by the violent cops based on a false positive about a gun (and the cops must have figured it was likely a false positive, because the info came from AI surveillance). These are the same cops who typically do nothing when an actual shooter is roaming a school on a killing spree; recall the Uvalde school shooting, when hundreds of cops milled around the school in full body armor, refusing to engage the shooter inside and even preventing the parents from going in to rescue their kids.
Before clicking on the article, I kinda assumed the student was black. I wouldn't be surprised if the AI model they're using has race-related biases. If anything, I would be surprised if it didn't.
> Make them pay money for false positives instead of direct support and counselling.
Agreed.
> This technology is not ready for production
No one wants stuff like this to happen, but nearly all technologies have risks. I don't consider that a single false positive outweighs all of its benefits; it would depend on the rates of false and true positives, and the (subjective) value of each (both high in this case, though I'd say preventing 1 shooting is unequivocally of more value than preventing 1 innocent person being unnecessarily held at gunpoint).
I think I’ve said this too many times already, but the core problem here and with the “AI craze” is that nobody really wants to solve problems; what they want is a marketable product. AI seems to be the magic wrench that fits all the nuts, and since most people don’t really know how it works or what its limitations are, they happily buy the “magic dust”.
> nobody really wants to solve problems, what they want is a marketable product
I agree, but this isn't specific to AI -- it's what makes capitalism work to the extent that it does. Nobody wants to clean a toilet or harvest wheat or do most things that society needs someone to do, they want to get paid.
Absolutely, but I don’t believe the responsibility falls in the hands of those looking to make a profit, but rather in the hands of those in charge of regulating how those profits may be made. After all, thieves want to make a profit too, but we don’t allow them to, at least not unless it’s a couple of million.
I get that people are uncomfortable with explicit quantification of stuff like this, but removing the explicitness doesn't remove the quantification, it just makes it implicit. If, say, we allow people to drive cars even though car accidents kill n people each year, then we are implicitly quantifying that the value of the extra productivity society gets by being able to get places quickly in a car is worth the deaths of those people.
In your example, if terrorists were the only people killing people in the US, and police (a) were the only means of stopping them, and (b) did not benefit society in any other way, the equation would be simple: get rid of the police. There wouldn't need to be any value judgements, because everything cancels out. But in practice it's not that easy, since the vast majority of killings in the US occur at the hands of people who are neither police nor terrorists, and police play a role in reducing those killings too.
I expect that John Bryan -- who produces content as The Civil Rights Lawyer https://thecivilrightslawyer.com -- will have something to say about it.
He frequently criticizes the legality of police holding people at gunpoint based on flimsy information, because the law considers it a use of force, which needs to be justified. For instance, he's done several videos about "high risk" vehicle stops, where a car is mistakenly flagged as stolen, and the occupants are forced to the ground at gunpoint.
My take is that if both the AI automated system and a "human in the loop" police officer looked at the picture and reasonably believed that the object in the image was a gun, then the stop might have been justified.
But if the automated system just sent the officers out without having them review the image beforehand, that's much less reasonable justification.
The article says the police later showed the student the photo that triggered the alert. He had a crumpled-up Doritos bag in his pocket. So there was no gun in the photo, just a pocket bulge that the AI thought was a gun... which sounds like a hallucination, not any actual reasonable pattern-matching going on.
But the fact that the police showed the photo does suggest that maybe they did manually review the photo before going out. If that's the case, I do wonder how much the AI influenced their own judgment, though. That is, if there was no AI involved, and police were just looking at real-time surveillance footage, would they have made the same call on their own? Possibly not: it feels reasonable to assume that they let the fact of the AI flagging it override their own judgment to some degree.
Or they'll tell us police have started shooting because an acorn falls, so they shouldn't be expected to be held to higher standards and are possibly an improvement.
Ah, the coming age of Palantir's all seeing platform; and Peter Thiel becoming the shadow Emperor. Too bad non-deterministic ml systems are prone to errors that risk lives when applied wrongly to crucial parts of society. But in an authoritarian state, those will be hidden away anyway, so there's nothing to see here: move along folks. Yes, surveillance and authoritarianism go hand in hand, ask China. It's important to protest these methods and push lawmakers to act against them; now, before it's too late.
It's still not helpful to wander into threads to talk about your favorite topic without making an effort to provide some context on why your comments are relevant. When random crazy people come up to you spouting their theories in public places, the problem is not that their concerns are necessarily incoherent or invalid; the problem is that they're broadcasting their thoughts randomly with no context, and their audience has no way of telling whether they just need to verbalize what's bothering them or have mistaken a passer-by for one of the villains in their psychodrama.
tl;dr if you want to make a broad point, make the effort to put it in context so people can appreciate it properly.
That may be the case, but only one of them is actually responsible for armed police swarming this student and it wasn't Palantir. It seems very strange that you're so eager to give a free pass to the firm who actually was at fault here.
I'm pretty sure that some people will continue to apply the term "soapbox ranting" to all opposition against the technofascism even when victims of its false positives will be in need of coroners, not psychologists.
So you just live a reactionary life? Nothing matters until it affects you personally? Should we get rid of free speech if jason-phillips doesn't have anything to say?
> Omnilert later admitted the incident was a “false positive” but claimed the system “functioned as intended,” saying its purpose is to “prioritize safety and awareness through rapid human verification.”
prioritize your own safety by not attending any location fitted with such a system, or deemed to be such a dangerous environment that such a system is desired.
Calling it today. This company is going to get innocent kids killed.
How many packs of Twizzlers, or Doritos, or Snickers bars are out there in our schools?
First time it happens, there will be an explosion of protests. Especially now that the public knows that the system isn't working but the authorities kept using it anyway.
This is a really bad idea right now. The technology is just not there yet.
And then there's plenty of bullies who might put a sticker of a picture of a gun on someone's back, knowing it will set off the image recognition. It's only a matter of time until they figure that out.
That's a great and terrifying idea. When that inevitably happens, you'll then have a couple of 13-year-olds: one dead, and one shell-shocked kid in disbelief that a stupid prank idea he cooked up in 60 seconds is now claimed as the root cause why someone was killed. That one may be charged with a crime or sued, though the district who installed this idiotic thing is really to blame.
The technology literally can NEVER be there. It is completely impossible to positively identify a bulge in clothing as a handgun. But that doesn’t stop irresponsible salesmen from making the claim anyway.
>First time it happens, there will be an explosion of protests.
Why do you believe this? In the US, cops will cower outside of a school while an armed gunman actively murders children, forcibly detain parents who wish to go in if the cops won't, and the public will then re-elect everyone involved.
In the US, an entire segment of the population will send you death threats claiming you are part of some grand (democrat of course) conspiracy for the crime of being a victim of a school shooting. After every school shooting, republican lawmakers wear an AR-15 pin to work the next day to ensure you know who they care about.
Over 50% of the country blamed the protesting students at Kent State for daring to be murdered by the National Guard.
Cops can shoot people in broad daylight, in the back, with no justification, or reasonable cause, or can even barge into entirely the wrong house and open fire on homeowners exercising their basic right to protect themselves from strangers invading their homes, and as long as the people who die are mostly black, half the country will spout crap like "They died from drugs" or "they once sold a cigarette" or "he stole skittles" or "they looked at my wife wrong" while the cops take selfies reenacting the murder for laughs and talk about how terrified they are by BS "training" that makes them treat every stranger as a wanted murderer armed to the teeth. The leading cause of death for cops is still like heart disease of course.
Trump sent unmarked forces to Portland and abducted people off the street into unmarked vans and I think we still don't really know what happened there? He was re-elected.
Why did they waste time verifying? The police should have eliminated the threat before any harm could be done. Seconds count when you're keeping people safe.
I get that you're being sarcastic and find the police response appalling, but the sad reality of Poe's Law is that there are a lot of people who would unironically say this and would have cheered if the cops had shot this kid, either because they hate black people or because they get off on violence and police shootings are a socially sanctioned way to indulge that taste.
* Even hundreds of cops in full body armor and armed with automatic weapons will not dare to engage a single "lone wolf" shooter on a killing spree in a school; the heartless cowards may even prevent the parents from going inside to rescue their kids: the Uvalde school shooting incident
* A cop on an ego trip will shoot down a clearly harmless kid calmly eating a burger in his own car (not a stolen car): the Erik Cantu incident
All of your examples are well known not because they are normal and accepted but because they are exceptions. For every bad example there are a thousand good ones; that's humans for you.
That doesn't mean they are perfect or shouldn't be criticised, but claiming that's all they are doing isn't reasonable either.
If you look at actual per capita statistics you will easily see this.
The dispatch relayer and responding officers should at least have ready access to a screen where they can see a video/image of the raw footage that triggered the AI alert. If it is a false alarm, they will better see it and react accordingly, and if it is a real threat they will better understand the initial context and who may have been involved.
According to a news article[1], a human did review the video/image and flagged it as a false positive. It was the principal who told the school cop, who then called other cops:
> The Department of School Safety and Security quickly reviewed and canceled the initial alert after confirming there was no weapon. I contacted our school resource officer (SRO) and reported the matter to him, and he contacted the local precinct for additional support. Police officers responded to the school, searched the individual and quickly confirmed that they were not in possession of any weapons.
What's unclear to me is the information flow. If the Department of School Safety and Security recognized it as a false positive, why did the principal alert the school resource officer? And what sort of telephone game happened to cause local police to believe the student was likely armed?
Good lord, what an idiot principal. If the principal saw how un-gun-like it looked, he could have been brave enough to walk his lazy ass down to where the student was and said "Hey (Name), check this out. (show AI detection picture) The AI camera thought this was a gun in your pocket. I think it's wrong, but they like to have a staff member sign off on these since keeping everyone safe from violence is a huge deal. Can I take a picture of what it actually is in your pocket?"
Sounds like a "better safe than sorry" approach. If you ignore the alert on the basis that it's a false positive, then it turns out it really was a gun and the person shoots somebody, you're going to get sued into the ground, fired, name plastered all over the media, etc. On the other hand, if you call in the cops and there wasn't a gun, you're fine.
> "On the other hand, if you call in the cops and there wasn't a gun, you're fine."
Yeah, cause cops have never shot somebody unarmed. And you can bet your ass that the possible follow-up lawsuit to such a debacle's got "your" name on it.
It might be, depending on the integrity of "the system".
I can make a system that flags stuff, too. That doesn't mean it's any good. If they can show there was no reasonable cause then they've got a leg to stand on.
With reports on child welfare, it is often illegal to release the name of the tipster. That is commonly taken advantage of by disgruntled exes or in custody disputes.
This may be mean, but we should really be careful about just handing AI over to technically illiterate people. They're far more likely to blindly trust the LLM/AI output than someone who may be more experienced and take a beat. AI in an agentic-state society (what we have in America at least) is an absolute ticking time bomb. Honestly, this is what AI safety teams should be concentrating on: making sure people who think the computer is infallible understand that, no, it isn't, and you shouldn't just assume what it tells you is correct.
Walking through TSA scanners, I always get that unnerving feeling I will get pulled aside. 50% of the time they flag my cargo pants because of the zipper pockets - There is nothing in them but the scanner doesn't like them.
Now we get the privilege of walking by AI security cameras placed in random locations, hoping they don't flag us.
There's a ton of money to be made with this kind of global frisking, so lots of pressure to roll out more and more systems.
To be fair, at least you can choose not to wear the cargo pants.
A friend of mine once got pulled aside for extra checks and questioning after he had already gone through the scanners, because he was waiting for me on the other side to walk to the gates together and the agent didn't like that he was "loitering" – guess his ethnicity...
I have shoes that I know always beep on the airport scanners, so if I choose to wear them I know it might take longer or I might have to embarrassingly take them off and put them on the tray. Or I can not wear them.
Yes, in an ideal world we should all travel without all the security theatre, but that's not the world we live in, I can't change the way airports work, but I can wear clothes that make it faster, I can put my liquids in a clear bag, I can have bags that make it easy to take out electronics, etc. Those things I can control.
But people can't change their skin color, name, or passport (well, not easily), and those are also all things you can get held up in airports for.
Not sure, but I bet they were looking at their feet kinda dusting off the bottoms, making awkward eye contact with the security guard on the other side of the one way door.
Get Precheck or global entry. I only do a scanner every 5 years or so when I get pulled at random for it. Otherwise it's metal detector only. Unless your zippers have such chunky metal that they set that off you'll be fine. My belt and watch don't.
Note: Precheck is incredibly quick and easy to get; GE is time consuming and annoying, but has its benefits if you travel internationally. Both give the same benefits at TSA.
Second note: let's pretend someone replied "I shouldn't have to do that just to be treated...blah blah" and that I replied, "maybe not, but a few bucks could still solve this problem, if it bothers you enough that's worth it to you."
> ...maybe not, but a few bucks could still solve this problem
Sure, can't argue with that. But doesn't it bug you just a little that (paying a fee to avoid harassment) doesn't look all that dissimilar from a protection racket? As to whether it's a few bucks or many, now you're just a mark negotiating the price.
I already adjust my clothing choices when flying to account for TSA's security theater make-work bullshit. Wonder how long before I'm doing that when preparing to go to other public places.
(I suppose if I attended pro sports games or large concerts, I'd be doing it for those, too)
I was getting pulled out of line in the 90’s for having long hair. I don’t dress in shitty clothes or fancy ones, I didn’t look funny, just the hair, which got regular compliments from women.
I started looking at people trying to decide who looked juicy to the security folks and getting in line behind them. They can’t harass two people in rapid succession. Or at least not back then.
The one I felt most guilty about, much later, was a filipino woman with a Philippine passport. Traveling alone. Flying to Asia (super suspect!). I don’t know why I thought they would tag her, but they did. I don’t fly well and more stress just escalates things, so anything that makes my day tiny bit less shitty and isn’t rude I’m going to do. But probably her day would have been better for not getting searched than mine was.
Email the state congressman and tell them what you think.
Since (pretty much) nobody does this, if a few hundred people do it, they will sit up and take notice. It takes less people than you might think.
Since coordinating this with a bunch of strangers (I.e. the public) is difficult, the most effective way is to normalise speaking up in our culture. Of course normalising it will increase the incoming comm rate, which will slowly decrease the effectiveness but even post that state, it’s better than where we are, which is silent public apathy
If that's the case, why do people in Congress keep voting for things their constituents don't like? When they get booed at town halls they just dismiss it as being the work of paid activists.
I got pulled aside because I absentmindedly showed them my concealed carry permit, not my driver's license. I told them I was a consultant working for their local government and was going back to Austin. No harm no foul.
If the system used any kind of logic whatsoever a CCW permit would not only allow you to bypass airport security but also carry in the airport (Speaking as both a pilot and a permit holder)
Would probably eliminate the need for the TSA security theater so that will probably never happen.
You can carry in the airport in AZ without a permit, in the unsecured areas. I think there was only one brouhaha, because some particularly bold guy did it openly with a rifle (can't remember if there's more to the story).
The point of the security theater is to assuage the 95th percentile scared-of-everything crowd, they're the same people who want no guns signs in public parks.
You're right not a lot of people objected to TSA ending the no shoes safety rule, and it's a shame. I certainly objected and tried to make my objections known, but apparently 23 or 24 years of the iconic custom of taking shoes off went to waste because the TSA decided to slack off
Right from the beginning it was a handout to groups who built the scanning equipment, who were basically personal friends with people in the admin. We paid absurd prices for niche equipment, a lot of which was never even deployed and just sat in storage.
Several of the hijackers were literally given extended searches by security that day.
A reminder that what actually stopped hijackings (like, nearly entirely) was locking the cockpit door, which was always doable and has never been breached. Not only did this stop terrorist hijackings, it stopped the more casual hijackings that used to be normal, and it could also stop "inside man" style hijackings like that one with a disgruntled FedEx pilot. It was nearly free to implement, always available, harms no one's rights, doesn't turn airport security into a juicy bombing target, doesn't slow down an important part of the economy, and doesn't invent a massive bureaucracy and LEO force within a new American agency whose goal is suppressing domestic problems and which has never done anything useful. Keep in mind, shutting the cockpit door is literally how the terrorists themselves protected themselves from being stopped, and is the reason Flight 93 couldn't be recovered.
TSA is utterly ineffective. They have never stopped an attack, regularly fail their internal audits, the jobs suck, and they pay poorly and provide minimal training.
Getting pulled aside by TSA for secondary screening is nowhere near the same ballpark as being rushed at gunpoint as a teenager and told to lie down on the ground, where one false move will get you shot by a trigger-happy cop who probably won't face any consequences, especially if the innocent victim is a Black male.
In fact, they will probably demonize the victim to find an excuse why he deserved to get shot.
I wasn't implying TSA-cargo-pant-groping is comparable. My point is to show escalation in public facing systems. We have been dealing with TSA. Now we get AI Scanners. What's next?
You have no evidence to suggest this, just bias. Unless you are aware of the AI algorithm, then it's a pointless discussion that only causes strife and conjecturing.
How many audit the police videos have you seen on Youtube? There are an insufferable amount of "white" people getting destroyed by the cops. If you replace the "white" people in these videos with "black" then 99% of viewers would assume the cops are hardcore racist, when in fact, they are just bad cops - very bad cops, that have some deep psychological issues - probably rooted from a traumatic childhood.
I'm sure CLEAR is already having giddy discussions on how they can charge you for pre-verified access to walk around in public. We can all wear CLEAR certified dog tags so the cops can hassle the non-dog-tagged people.
He could have been easily murdered. It's not the first time, by a wide margin, that a bunch of overzealous cops murder a kid. I would never ever in my life set foot in a place that sends me armed cops so easily. That school is extremely dangerous.
I think the reason the school bought this silly software is because it's a dangerous school, and they're grasping at straws to try and fix the problem. The day after this false positive, a student was robbed.[1] Last month, a softball coach was charged with rape and possession of child pornography.[2] Last summer, one student was stabbed while getting off the bus.[3] Last year, there were two incidents where classmates stabbed each other.[4][5]
That certainly sounds bad, but it's all relative; keep in mind this school is in Baltimore County, which is distinct from the City of Baltimore and has a much different crime profile. This school is in the exact same town as Eastern Tech, literally the top high school in Maryland.
I skimmed through all the articles linked in the GP and found them pretty relevant to whatever decision might have been made to utilize the AI system (not at all to comment on how badly the bad tip was acted on).
Hailing from and still living in N. California, you could tell me that this school is located in Beverly Hills or Melrose Place, and it would still strike me as a piece of trivia. If anything, it'd just be ironic?
For context, Baltimore (City) is one of the most dangerous large cities in the US. Between the article calling the school "Kenwood High School in Baltimore" and the GP's crime links, a casual reader could mistakenly picture a dangerous inner-city school. But in reality it's located in a low-rise suburb in the County. Granted, it's an inner-ring blue collar suburb, but it's still a night-and-day difference from the worst neighborhoods in the city. And the schools in those bad neighborhoods tend to have far worse crimes than what was listed above.
So my point was that while the list of incidents is definitely not great, it's still way less severe than many inner-city schools in Baltimore. And honestly these same types of incidents happen at many "safe" large suburban high schools in "nice" areas throughout the US... generally less often than at this particular school, but not an order-of-magnitude difference.
Basically, I'm saying that GP's assertion of it being a "dangerous school" is entirely relative to what you're comparing to. There are much worse schools in that metro area.
I doubt that. I moved around a lot as a kid, so I went to at least eight different public schools from Alabama to Washington. One school was structurally condemned while I attended it. Some places had bullying, and sometimes a couple of people fought, but never with weapons, and there was never an injury severe enough to require medical attention.
I also know several high school teachers and the worst things they've complained about are disruptive/stupid students, not violence. And my friends who are parents would never send their kids to a school that had incidents like the ones I linked to. I think this sort of violence is limited to a small fraction of schools/districts.
> I think this sort of violence is limited to a small fraction of schools/districts.
No, definitely not. I went to a decently-well-ranked suburban school district, and still witnessed violent incidents... no weapon used, but still multiple cases where the victim got a concussion. And there were arrests, a gun found in a kid's locker, etc. This stuff was unfortunately relatively normal, at least in the 90s. Not quite as often as at the school in the article, but still.
Based on your reporting, that's one violent crime per year, and one alleged child rapist. [0]
The crime stats seem fine to me. In a city like Baltimore, the numbers you've presented are shockingly low. When I was going through school, it was quite common for bullies to rob kids... even on campus. Teachers pretty much never did anything about it.
[0] Maybe the guy is a rapist, and maybe he isn't. If he is, that's godawful and I hope he goes to jail and gets his shit straight.
If false positives are known to happen, then you design a system where the image is vetted before telling the cops the perpetrator is armed. The company is basically swatting, but I'm sure they'll never be held liable.
Actually, if a system has too many false positives or false negatives, it's basically useless. There will eventually be doubts amongst the operators of it and the whole thing will implode, which is the best possible outcome.
We already went through this years ago with all those terrorism databases, and we (humanity) have learned nothing: any database will have a percentage of erroneous data; it is impossible to eliminate erroneous data completely. Therefore, any database used to identify <fill in the blank> will produce erroneous conclusions. It's been observed over and over again, and governments can't help telling themselves that "this time it will be different because <fill in the blank>", e.g. AI.
> Omnilert later admitted the incident was a “false positive” but claimed the system “functioned as intended,” saying its purpose is to “prioritize safety and awareness through rapid human verification.”
> Baltimore County Public Schools echoed the company’s statement in a letter to parents, offering counseling services to students impacted by the incident.
We blame AI here, but what's up with law enforcement that comes in with loaded guns in hand and sends someone to the ground and cuffs him before actually doing any check?
That is the real issue.
A police force anywhere else in the world that knows how to behave would have approached the student, had a small chat with him, found out all he had in his hands was a bag of Doritos, maybe asked politely to see the contents of his bag while explaining that the search was triggered by an auto-detection system that occasionally errs, and wished him a good day.
A bunch of companies and people invested unimaginable amounts of money in these technologies in the hope that they will multiply that money. They will shove it down our throats no matter what. This isn't about security, making the world a better place, saving lives, or preventing bad things from happening; it's strictly about those people and companies making as much money as possible, or at least, for now, not losing the money they invested.
Not sure nothing will happen. Some trial lawyers would love to sue a city, a school system, and an AI surveillance company over "trauma, anxiety, and mental pain and suffering" caused by the incident. There will probably be a settlement that nobody ever hears about.
The world is doing fairly ok, thank you. The US, however, I'm not so sure about, as people here are apparently more concerned with the AI malfunction than with the idea that it's somehow sensible to live-monitor high schools for gun threats.
It's not just the US. China runs the same level of surveillance, and it's being implemented throughout Europe, Africa, and Asia. This is becoming the norm.
Because if the "gun threat" system isn't accurate, then it's a system for false positives and false negatives and it's actually worse than having no such system. Maybe that's what you meant?
No, I think it’s crazy that people somehow think it’s rational to video monitor kids and be worried they have actual fire arms.
I think that’s a level of f-ed up which is so far removed from questioning AI that I wonder why people even tolerate it and somehow seem to view the premise as normal.
It's a system that was sold to a legally risk-averse school district or city or whatever. It's a sales job, and the non-technical people buy it because they aren't equipped to even ask the right questions about it. They created even more problems for themselves than the problems they purportedly attempted to solve! This is modern life in a nutshell.
Law enforcement officers, judicial officials, social workers, and the like generally maintain qualified immunity from liability in the course of their work. Take, for example, a case in which judges and social workers allegedly failed to properly assess a mother's fitness for child custody despite repeated indicators suggesting she was unfit. The child was ultimately placed in the mother's care and was later killed execution-style (an intentional killing, not mere negligence).
This case happened in the county I reside in and my sister-in-law is an attorney for the county in CP, although this was not her case directly. I can tell you what led to this: The COVID lockdowns! They stopped doing all the usual home visits and follow ups because everyone was too scared to do their jobs.
This case was a horrifying failure of the entire system that up until that point had fairly decent results for children who end up having to be taken away from their parents and later returned once the Mom/Dad clean up their act.
Not applicable - as a society we've countless times chosen to favour the mother's right to keep children above the rights of other humans. Most children are killed in the home of the mother (i.e. either by the mother, or in situations a different partner choice would have avoided while the father was available), or even worse, as in the Anders Breivik situation (father available, with a stable job and prospects in life, but custody refused; the child grew up to become a mass murderer).
That was my first thought as well. A worry is that police officers make mistakes, which leads to hapless people getting terrorized, harmed, or killed. The bad thing about AI is that it'll allow police to escape responsibility. Also, where a human would realize they made a mistake, they can admit it and everything is okay; AI won't walk that back. The AI said he had a gun. But when we checked, he didn't have it anymore.
In the Menezes case the cops were playing a game of telephone that ended up with him being shot in the head.
Q: What was the name of the Google AI ethicist who was fired for raising the concern that AI overwhelmingly frames non-white humans as threats? A: Timnit Gebru.
We, as technologists, ARE NOT DOING BETTER. We must do better, and we are not on the "DOING BETTER" trajectory.
We talk about these "incidents" with a breathless "Wwwwellll, if we just train our AI better..." and the tragedies keep rolling.
Q2: Which of you has had a half dozen Squad Cars with Armed Police roll up on you, and treat you like you were a School Shooter? Not me, and I may reasonably assume it's because I am white, however I do eat Doritos.
The school admin has no understanding of the tech and only the dimmest comprehension of what happened. Asking them to do anything besides what the tech company told them to do is asking wayyy too much.
I wonder if the AI correctly identified it as a bag of Doritos, but was also trained on the commercial[0] where the bag appears to beat up a human (his fault for holding on too tight) and then it destroys an alien spacecraft.
The guidance counselor does not have the training or time to "fix" the trauma you just gave this kid and his friends. Insane to put minors through this.
Having worked extensively with computer vision models for our interview analysis system, this incident highlights a critical challenge in AI deployment: the trade-off between false positive rates and detection confidence thresholds. We initially set our confidence threshold at 0.85 for detecting inappropriate objects during remote interviews, but found this led to ~3% false positives (mostly mundane objects like water bottles being flagged as concerning).
We solved this by implementing a two-stage verification system: initial detection runs at 0.7 threshold for recall, but any flagged objects trigger a secondary model with different architecture (EfficientNet vs ResNet) and viewpoint analysis. This reduced false positives to 0.1% while maintaining 98% true positive detection rate. For high-stakes deployments like security systems, I'm curious if others have found success with ensemble approaches or if they're using human-in-the-loop verification? The latency impact of multi-stage detection could be problematic for real-time scenarios.
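To make the cascade concrete, here is a minimal sketch; the class names, thresholds, and detector interfaces are illustrative assumptions, not an actual production implementation:

    # Minimal sketch of a two-stage detection cascade. The `primary` and
    # `verifier` objects, thresholds, and Detection fields are illustrative.
    from dataclasses import dataclass

    @dataclass
    class Detection:
        label: str
        confidence: float
        box: tuple  # (x1, y1, x2, y2) in pixel coordinates

    def two_stage_flag(frame, primary, verifier,
                       recall_threshold=0.70, confirm_threshold=0.90):
        # Stage 1: run a high-recall detector with a deliberately low threshold.
        candidates = [d for d in primary.detect(frame)
                      if d.confidence >= recall_threshold]
        confirmed = []
        for det in candidates:
            # Stage 2: re-check just the flagged region with a second model
            # (ideally a different architecture) at a stricter threshold.
            x1, y1, x2, y2 = det.box
            crop = frame[y1:y2, x1:x2]
            second = verifier.classify(crop)
            if second.label == det.label and second.confidence >= confirm_threshold:
                confirmed.append(det)
        return confirmed  # only confirmed detections go on to a human reviewer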
I don't have kids yet, but I may someday. I went to public school myself, and would prefer to send any kid of mine to public school as well. (I'm not hard against private schools, but I'd prefer my kid gets to make friends from all walks of life, not just people who have parents who can afford private school.)
But I really wouldn't want to send my kid to a school that surveils students all the time, and uses garbage software like this that directly puts kids into dangerous situations. I feel like with a private school, I'd have more choice and ability to influence that sort of thing.
> Omnilert later admitted the incident was a “false positive” but claimed the system “functioned as intended,” saying its purpose is to “prioritize safety and awareness through rapid human verification.”
This exact scenario is discussed in [1]. The "human in the loop" failed, but we're supposed to blame the human, not the AI (or the way it was implemented). The humans serve as "moral crumple zones".
"""
The emphasis on human oversight as a protective mechanism allows governments and vendors to have it both ways: they can promote an algorithm by proclaiming how its capabilities exceed those of humans, while simultaneously defending the algorithm and those responsible for it from scrutiny by pointing to the security (supposedly) provided by human oversight.
"""
The article doesn't confirm that there was definitely a human in the loop, but it sorta suggests that police got a chance to manually verify the photo before going out to harass this poor kid.
I suspect, though, that the AI flagging that image heavily influenced the cop doing the manual review. Without the AI, I'd expect that a cop manually watching a surveillance feed would have found nothing out of the ordinary, and this wouldn't have happened.
So I agree that it's weird to just blame the human in the loop here. Certainly they share blame, but the fact of an AI model flagging this sort of thing (and doing an objectively terrible job of it) in the first place should take most of the blame here.
An alert by one of these AI tools, which from what I understand have a terrible track record, should not be reasonable suspicion or probable cause to swarm a teenager with guns drawn. I wish more people in local communities would understand how much harm this type of surveillance and response causes. Our communities should not be using these tools.
When people wonder how AI can mistake a bag of snacks for a weapon, simply answer "42".
It's about the question: the answer becomes very clear once you understand what question was presented to the inference model, and of course what data and context it was fed.
>> Omnilert later admitted the incident was a “false positive” but claimed the system “functioned as intended,” saying its purpose is to “prioritize safety and awareness through rapid human verification.”
No. If you're investigating someone and have existing reason to believe they are armed then this kind of false positive might be prioritizing safety. But in a general surveillance of a public place, IMHO you need to prioritize accuracy since false positives are very bad. This kid was one itchy trigger-pull away from death over nothing - that's not erring on the side of safety. You don't have to catch every criminal by putting everyone under a microscope, you should be catching the blatantly obvious ones at scale though.
The perceived threat of government forces assaulting and potentially killing me for reasons I have no control over is the kind of thing that terminates the social contract. I'd want a new state that protects me from that.
Even better, share the frame(s) that the guess was drawn from with a human for verification before triggering ANYTHING. How much trouble could that possibly be? How many "guns" is this thing detecting in a day across all sites? I doubt more than a couple or we'd have heard about tons of incidents, false positives or not.
I don't find that especially good as a sole remedy, because lots of people are stupid. If they see a green outline box overlaid on a video image with the label 'gun', many many people will just respond to the label instead of looking at the underlying image and trying to make a decision. Probability and validation history need to be built into the product so that there are audit logs that can be pored over and challenged. Bad human decision-making, which is rampant, is always smoothed over with justifications like 'I was concerned for everyone's safety', and usually treated in isolation rather than assessed longitudinally.
> “They showed me the picture, said that looks like a gun, I said, ‘no, it’s chips.’”
So AI did the initial detection, but police looked at it and agreed. We don't see the image, but it probably did look like a gun because of a weird shadow or something.
Fundamentally, this isn't really any different from a person seeing someone with what looks like a gun and calling the cops, only it turns out the person didn't see it clearly.
The main issue is just that with increased numbers of images, there will be an increase in false positives. Can this be fixed by including multiple images, e.g. from motion of the object, so police (and the AI) can better eliminate false positives before traumatizing some poor teen?
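One rough way to do that, as a sketch (the detector interface here is hypothetical): only escalate when the detection persists across most of a short window of consecutive frames, so a single-frame shadow or odd pose doesn't trigger anything.

    # Illustrative only: `detector` and its detect() interface are assumptions.
    from collections import deque

    def persistent_alert(frames, detector, label="gun",
                         window=10, min_hits=8, threshold=0.8):
        recent = deque(maxlen=window)            # rolling record of per-frame hits
        for frame in frames:
            hit = any(d.label == label and d.confidence >= threshold
                      for d in detector.detect(frame))
            recent.append(hit)
            # Escalate only if the object shows up in most of the last N frames.
            if len(recent) == window and sum(recent) >= min_hits:
                return True
        return False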
> So AI did the initial detection, but police looked at it and agreed. We don't see the image, but it probably did look like a gun because of a weird shadow or something.
Not sure I agree. The AI flagging it certainly biased the person doing the manual review toward agreeing with the AI's assessment. I can imagine a scenario where there was no AI involved, just a human watching that same surveillance feed, and (correctly) not seeing anything alarming in it.
Also I expect the AI completely failed at context. I wouldn't be surprised if the full video feed, a few minutes (or even seconds) before the flagged frame, shows the kid crumpling up the empty Doritos bag and stuffing it in his pocket. The AI probably doesn't keep all that context around to use when making a later decision, and giving just the flagged frame of video to the human may have caused them to miss out on important context.
Picture? Images? But those are just frames of footage the cameras have captured! Why would one purposefully use less information to make a decision rather than more?
Just put the full footage in front of an unbiased third party for a multi-stage verification first. The problem space isn't "is that weird shadow in the picture a gun or not?" it's "does the kid in the video have a gun?". It's not hard to figure out the difference between a bag of chips and a gun based on body language. Presumably the kid ate chips out of the bag? Using certain motions that one makes when doing that? Presumably the kids around him all saw the object in his hands and somehow did not react as if it was a gun? Jeez.
Those are a lot of "presumablies". Maybe you're right. Or maybe it was mostly obscured so you really couldn't tell. How do you know it was open and he was eating? How do you know there were other kids around and he wasn't solo? Why do you think the body language would be so different? Nobody is claiming he was using a gun or threatening anyone with it. If you're just carrying something in your hand, I don't know how you could tell what the object is or isn't from body language.
It wasn't open and he wasn't eating. The AI flagged a bulge in his pants pocket, which was the empty, crumpled up bag that he put in his pocket after finishing eating all the chips.
This is quite frankly absurd. The fact that the AI flagged it is bonkers, and the fact that a human doing manual review still believed it was a gun... I mean, just, wow. The level of dangerous incompetence here is staggering.
And I wouldn't be surprised if, minutes (or even seconds) before the video frame the AI flagged, the full video showed the kid finishing the bag and stuffing it in his pocket. AIs suck at context; a human watching the full video would not have made the same mistake. But in mostly taking the human out of the loop, all they had for verification was a single frame of video, captured as a context-free still image.
It is frankly mind-boggling that you or anyone else can defend this crap.
I can understand the outrage in this thread, but literally none of what you are all calling for will be done. No one from the justice or legal system reads HN to see what should be done. I wish folks here would keep a cooler head rather than posting lengthy rants and vents that call for punishing school staff. It's really unprofessional and immature for a community that prides itself on level-headed discussion to fall constantly into a cycle of vitriol.
Can someone outline a more pragmatic, if not likely, course of what happens next after this? Is it swept under the rug and we move on?
It’s unsurprising, since this kind of classification is only as good as the training data.
And police do this kind of stuff all the time (or in the very least you hear about it a lot if you grew up in a major city).
So if you’re gonna automate broken systems, you’re going to see a lot more of the same.
I'm not sure what the answer is, but I definitely feel that "security" systems like this that are purchased and rolled out need to be highly regulated and coupled with extreme accountability and consequences for false positives.
At least there is a check done by humans, in a human way. What if this human check is removed in the future, once AI decisions are deemed to no longer require human inspection?
The model seems pretty shitty. Does it only look on a frame-by-frame basis? Literally one second of video context and it would never make that mistake.
AI is a false (political) wish. It can never work; it is the desperation of an overextended power structure trying to hold on and permanently consolidate control over the world's population, and nothing else.
The proofs are there. Philosophers mulled this over long ago and made clear statements as to why AI can't work.
Not that I misunderstand for a second that we are "all in" on AI, and that we all get to go along for the 100-trillion-dollar ride to hell.
Can we have truly awesome automation for manufacturing and mundane bureaucratic tasks? Fuck ya we can!
But anything that requires understanding is forever out of reach, and understanding, unfortunately, is also lacking in the people pushing this thing now.
With hallucination rates this high, cops are going to need to reach for tranquilizers more often. If the student had reached for his bag just before the cops arrived, BLM 2.0 would have started.
Companies, particularly in the US, very much want to go with (2) and part of the reason they can is because there are zero consequences for incidents like this.
A couple of examples spring to mind:
1. The UK Post Office scandal (the Horizon system), where faulty software accused subpostmasters of theft, some of whom committed suicide over the allegations. Those allegations were later proven false; it was the system's fault. IMHO the people who signed off on and deployed this should be charged with negligent homicide; and
2. The Hertz case, where people who had returned cars were erroneously flagged as car thieves and reports were made to police. This created hell for people who would often end up with warrants they had no idea about and would be detained at random traffic stops over a car that was never stolen.
Now these aren't AI but just like the Doritos case here, the principle is the same: companies are trying to replace people with computers. In all cases, a human should be responsible for reviewing any such complaint. In the Hertz case, a human should check to see if the car is actually stolen.
In the Post Office situation, the system needs to show its work. Deployment should be run against the existing accounting system, and discrepancies between the two need to be investigated for bugs until the system is proven correct. Particularly in the early stages, a forensic accountant (if necessary) should verify that funds were actually stolen before a criminal complaint is filed.
And if "false positive" criminal complaints are filed, the people who allowed that to happen, if negligent (and we all know they are), should themselves be criminally charged.
We are way too tolerant of black box systems that can result in significant harm or even death to people. Show your work. And make a human put their name and reputation to any output of such systems.
All right they’ve gotta have a plain clothes bro go up there make sure the kid is chill. You know the difference between a murder and not can be as little as somebody being nice
Exactly. I wonder if this a purpose-built image-recognition system, or is it a lowest-possible effort generic image model trained on the internet? Classifying a Black high school student holding Doritos as an imminent shooting threat certainly suggests the latter.
You're free to (attempt to) amend the Second Amendment, but the Supreme Court of the United States has already affirmed and reaffirmed that individual ownership of firearms in common use is a right.
What do you propose that is "reasonable" given the frameworks established by Heller, McDonald, Caetano, and Bruen?
I can 3D print or mill basically every item you can imagine prohibiting at home: what exactly do laws do in this case?
> the Supreme Court of the United States has already affirmed and reaffirmed that individual ownership of firearms in common use is a right.
The current interpretation of 2A is actually a fairly recent invention; in the past, it's been interpreted much more narrowly. And if SCOTUS can overturn Roe v. Wade's precedent, they can do the same with their interpretation of 2A. They won't of course, at least not until some of its members age out and get -- hopefully -- replaced with people who aren't idiots.
But I'd be fine if 2A was amended away. Let the states make whatever gun laws they want, and we can see whether blue or red states end up with lower levels of gun violence as a result.
The core of the issue is that many Americans do carry weapons which means that whatever the security system, it needs to keep in mind that the suspect might be armed and about to start shooting. This makes the police biased towards escalation because the only way against a shooter is to shoot first.
This problem doesn't exist in Europe or Japan because guns aren't that ubiquitous, which means that the police have the time to think before they act, which makes them less likely to escalate and start shooting. Obviously, for Americans, the only solution is to get rid of the gun culture, but this will never happen, so suck it up that AI gets you swatted.
Everything around us: political tumult and weaponization of the justice system, ICE and other capricious projections of federal authority, the failure of drug prohibition, and on and on and on, points to a very simple solution:
Abolish SWAT teams. Do away with the idea that the state employees can be permitted to be more armed than anyone else.
Blaming the so-called 'swatter' (whether it's a human or AI) is really not getting at the root of the problem.
If it's analyzing video at 30 frames per second, it's processing 86,400 × 30 ≈ 2.6 million frames per day per camera. So even when it causes enormous, unnecessary trauma to one student per week (out of roughly 18 million frames), the company can rightfully claim a false positive rate of less than 1 in 10 million.
Inflicting trauma on a harmless human in the name of the "safety of others" is never ok. The victim here was not unharmed, but is likely to end up with PTSD and all the mental health issues that come with it.
the best part of the technocracy is that they're not actually all that good at anything. the second best part is that when their mistakes end in someone dead there will be some way that they're not responsible.
Imagine the head-scratching going on among execs who are surprised when probabilistic software used for deterministic purposes doesn't work as expected, without realizing there's a gap between the two by its very nature.
I'm sure there will be no head scratching. They already know that this can happen, and don't care, because they know that if someone gets killed because of it, they won't be held responsible. And may not even lose any customers.
I was unduly surprised and disappointed when I saw the photo of the kid and he turned out to be black. I would love to believe that this had no impact on how the whole thing played out, but I don't.
If these AI video based gun detectors are not a massive fraud I will eat one.
How on Earth does a person walk with a concealed gun? What does a woman in a skirt with one taped to her thigh walk like?
What does a man in a bulky sweatshirt with a pistol on his back walk like?
What does a teenager in wide-legged cargo jeans with two pistols and extra magazines walk like?
The brochure linked from TFA has a screenshot of a combination of segmentation and object recognition models which are fairly standard in NVRs. Quick skim of the vendor website seems to confirm this[1] and states a claim that they are not analyzing the gait.
The whole idea, even accepting that the core premise is OK to begin with, needs the same analysis applied to it that medical tests get: will there be enough false positives, with enough harm caused by them, that this is actually worse than doing nothing? Weighed against the likelihood of improving an outcome and how bad a failure to intervene is on average, of course.
Given that there's no relevant screening step here and it's just being applied to everyone who happens to be at a place it's truly incredible that such an analysis would shake out in favor of this tech. The false positive rate would have to be vanishingly tiny, and it's simply not plausible that's true. And that would have to be coupled with a pretty low false negative rate, or you'd need an even lower false positive rate to make up for how little good it's doing even when it's not false-positiving.
So I'm sure that analysis was either deliberately never performed, or was and then was ignored and not publicized. So, yes, it's a fraud.
(There's also the fact that as soon as these are known to be present, they'll have little or no effect on the very worst events involving firearms at schools—shooters would just avoid any scheme that involved loitering around with a firearm where the cameras can see them, and count on starting things very soon after arriving—like, once you factor in second order effects, too, there's just no hope for these standing up to real scrutiny)
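To make the screening-test arithmetic concrete, a back-of-the-envelope sketch with made-up but plausible numbers:

    # Base-rate arithmetic with illustrative numbers (not real vendor stats).
    frames_per_day = 30 * 86_400        # one camera analyzed at 30 fps
    false_positive_rate = 1e-6          # generously assume 1-in-a-million per frame

    false_alarms_per_day = frames_per_day * false_positive_rate
    print(false_alarms_per_day)         # ~2.6 false alarms per camera, per day
    # Real guns on camera are vanishingly rare, so nearly every alert a
    # reviewer sees is a false positive - the classic screening-test problem.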
To be fair, most commercials for Doritos, Skittles, Mentos, etc., if occurring in real life, would result in a strong police response just after they cut away.
>Omnilert later admitted the incident was a “false positive” but claimed the system “functioned as intended,” saying its purpose is to “prioritize safety and awareness through rapid human verification.”
It's ok everyone, you're safer now that police are pointing a gun at you, because of a bag of chips ... just to be safe.
/s
Absolutely ridiculous. We're living "computer said you did it, prove otherwise, at gunpoint".
I’ve got great news for you: there are more girls with colored hair than ever before, and we got the Synthwave revival, just try to find the right crowd and put on Timecop1983 in your headphones
Cyberpunk always sucked for all but the corrupt corporate, political and military elites, so it’s all tracking
Well the "hackers" jacking in to the "Hacker News" discussion board, where we talk about the oppression brought in by the corrupt AI-peddling corporations employed by the even more corrupt government, probably aren't all looking like Zero Cool, Snake Plissken, Officer K, or the like, though a bunch may be.
The AI singularity will happen, but with the mother brain as a complete moron. It will extinguish humans not as part of a grand plan for machines to take over, but by making horrible mistakes while trying to make things better.
If any of you had actually paid attention to the source media, you would have noticed that they were explicitly dystopias. They were always clearly and explicitly hell for normal people trying to live life.
Meanwhile, tons of you watched star trek and apparently learned(?) that the "bright future" it promised us was.... talking computers? And not, you know, post scarcity and enlightenment that allowed people to focus on things that brought them joy or they were good at, and an entire elimination of the concept of "capitalism" or personal profit or resource disparity that could allow people to not be able to afford something while some asshole in the right place at the right time gets to take a percentage cut of the entire economy for their personal use.
The primary "technology" of star trek was socialism lol.
Oh of course they were dystopias. But at least they were cool and there was a fair amount of competence floating around.
My point is exactly that we got the dystopia but also it's not even a little cool, and it's very stupid. We could have at least gotten the cool dystopia where bad things happen but at least they're part of some kind of sensible longer-term plan. What we got practically breaks suspension of disbelief, it's so damn goofy.
> The primary "technology" of star trek was socialism lol.
Yep. Socialism, and automatic brainwashing chairs. And sending all the oddball non-conformists off to probably die on alien planets, I guess. (The "Original Series" is pretty weird)
The safest thing to do is to pull all Frito Lay products off shelves until the packaging can be redesigned to ensure that AI never confuses them for guns. It's a liability issue. THINK OF THE CHILDREN.
I think it's almost guaranteed that this model has race-related biases, so no, I don't think you're kidding at all. I think it's entirely likely that an Asian (or white) kid of the same build, wearing the same clothes, with a crumpled-up bag of Doritos in his pocket, would not get flagged as having a gun.
> Omnilert later admitted the incident was a “false positive” but claimed the system “functioned as intended,” saying its purpose is to “prioritize safety and awareness through rapid human verification.”
It just created a situation in which a bunch of people with guns were told that some teen had a gun. That's a very unsafe situation that the system created, out of nothing.
And some teen may be traumatized. Again, unsafe.
Incidentally, the article's quotes make this teen sound more adult than anyone who sold or purchased this technology product.
https://www2.ljworld.com/news/schools/2025/aug/07/lawrence-s...
Another false positive by one of these leading content filters schools use - the kid said something stupid in a group chat and an AI reported it to the school, and the school contacted the police. The kid was arrested, strip-searched, and held for 24 hours without access to their parents or counsel. They ultimately had to spend time on probation, undergo a full mental health evaluation, and go to an alternative school for a period of time. They are suing Gaggle, who claims they never intended their system to be used that way.
These kinds of false positives are incredibly common. I interviewed at one of their competitors (Lightspeed), and they actually provide a paid service where they have humans review all the alerts before being forwarded to the school or authorities. This is a paid addon, though.
https://archive.is/DYPBL
> Gaggle’s CEO, Jeff Patterson, said in an interview that the school system did not use Gaggle the way it is intended. The purpose is to find early warning signs and intervene before problems escalate to law enforcement, he said. “I wish that was treated as a teachable moment, not a law enforcement moment,” said Patterson.
It's entirely predictable that schools will call law enforcement for many of these detections. You can't sell to schools that have "zero tolerance" policies and pretend that your product won't trigger those policies.
Exactly. In a saner world, we could use fallible AI to call attention to possible concerns that a human could then analyze and make an appropriate judgment call on.
But alas, we don't live in that world. We live in a world where there will be firings, civil, and even criminal liability for those who make wrong judgments. If the AI says "possible gun", the human running things who alerts a SWAT team faces all upside and no downside.
Hmm, maybe this generation's version of "nobody ever got fired for buying IBM" will become "nobody ever got fired for doing what the AI told them to do." Maybe humanity is doomed after all.
I can't say that I think it would be a saner world to have the equivalent of a teacher or hall monitor sitting in on every conversation, even if that computer chaperone isn't going to automatically involve the cops. I don't think you can build a better society where everyone is expected to speak and behave defensively in every circumstance as if their words could be taken out of context by a snitch - computer or otherwise.
There is still liability there and it should be even higher when the decisions to implement so callously bad processes. Doubly so since this has demonstrably happened once.
At least the current moment, the increasing turn to using autonomous weaponry against one’s citizens - I don’t think it says so much about humanity so much as the US. I think US foreign policy is a disaster but turning the AI-powered military against the citizenry does look like it’s going to be quite successful, presumably because the US leadership is fighting an enemy incapable of defending itself. I think it’s unsustainable though economically speaking. AI won’t actually create value once it’s a commodity itself (since a true commodity has its value baked into its price). Rates of profit will continue to fall. The ruling class will become increasingly desperate in its search for growth. Eventually an economy that resorts to techno-fascism implodes. (Not before things turning quite ugly of course.)
"It wasn't used as directed", says man selling Big Boom Fireworks to children.
> They are suing Gaggle, who claims they never intended their system to be used that way.
Yeah, there's a shop near me that sells bongs "intended" for use with tobacco only.
> They are suing Gaggle, who claims they never intended their system to be used that way.
Is there some legal way to sue a pair of actors (Gaggle and school) then let them sue each other over who has to pay what percentage?
You separately sue everyone that might be liable. Some of the parties you sue might end up suing each other.
> This is a paid addon, though
Holy shitballs. In my experience such paid addons have very cheap labor attached to them, certainly not what you would expect based on the sales pitch.
The kid was arrested, stripped searched, and held for 24 hours without access to their parents or counsel. They ultimately had to spend time in probation, a full mental health evaluation, and go to an alternative school for a period of time.
All he wanted was a Pepsi. Just one Pepsi. And they wouldn't give it to him.
> ...its purpose is to “prioritize safety and awareness through rapid human verification.”
Oh look, a corporation refusing to take responsibility for literally anything. How passe.
The invention of the corporation is virtually to eliminate responsibility/culpability from any individual.
Human car crash? Human punishment. Corporate-owned car crash? A fine which reduces salaries some negligible percent.
Yes, corporations have all of the rights of a person, abilities beyond a person, yet few of the responsibilities of a person.
Our failure at "corporate alignment" makes it pretty clear that we're also going to fail at any version of "AI alignment"...
The two will likely facilitate each other :(
Don't forget paying their way out of crimes and no applicability to three strikes laws.
They actually don't have all the rights of a person and they do have those same responsibilities.
If this company was a sole proprietorship, the only recourse this kid would have is to sue the owner, up to bankruptcy.
Since it's a corporation, his recourse is to sue the company, up to bankruptcy.
As for corporations having rights, I can explain it further if necessary but the key understanding is that the singular of "corporations are people" is "a corporation is people" not "a corporation is a person".
You can't put a corporation in prison. But a person you can. This is one of the big problems. The people making the decisions at corporations are shielded from personal consequences by the corporation. A corporation can be shut down but it rarely happens.
Even when Boeing knowingly caused the deaths of hundreds (especially the second crash was entirely preventable if they would have been honest after the first one), all they got were some fines. Those just end up being charged back to their customers, a big one being the government who fined them in the first place.
> Even when Boeing knowingly caused the deaths
Since corporations aren't people, Boeing didn't know anything.
Did someone at Boeing have all of that knowledge?
I'm sure the top leadership was well aware of what happened after the first crash yes. They should have immediately gone public and would have prevented the second crash.
Don't forget that hiding MCAS from pilots and the FAA was a conscious decision. It wasn't something that 'just happened'. The decision to not make it depend on redundant AoA sensors by default too.
My point is, I can imagine that the MCAS suicidal side-effect was something unexpected (it was a technical failure edge-case in a specific and rare scenario) and I get that not anticipating it could have been a mistake, not a conscious decision. But after the first crash they should have owned up to it and not waited for a second crash.
And who even cares if they knew?
Extenuating circumstances, at best.
A drunk driver doesn't get to claim that they didn't know someone was in front of their car.
You need a judge and jury for prison sentences for criminal convictions.
If the government decides to prosecute the matter as a civil infraction, or doesn't even bother prosecuting but just has an executive agency hand out a fine, that's not a matter of the corporation shielding people, that's a matter of the government failing to prosecute or secure a conviction.
If the company is a sole proprietorship, you can sue the person who controls it up to bankruptcy, which will affect their personal life significantly. If the company is a corporation/LLC, you can sue the corporate entity up to the bankruptcy of the corporate entity, while the people controlling the company remain unaffected.
This gets even more perverse. If you're an individual you actually can't just set up an LLC to limit your own liability. There's no manner for an individual to say "I'm putting on a hat and acting solely as the LLC" - rather as the owner you need to find and employ enough judgement-proof patsies that the whole thing becomes a "group project" and you can say you personally weren't aware of whatever problem gave rise to liability. In other words, the very design of corporations/LLCs encourages avoiding responsibility.
You're correct with the nitpick about the Supreme Court's justification, but that justification is still poor reasoning. Corporations are government-created liability shields. How they can direct their employees should be limited, to avoid trampling on those individuals' own natural rights. A person or group of people who want to exercise their personal natural rights through hired employees can always forgo the government-created liability shield and go sole proprietorship / general partnership.
> a corporation refusing to take responsibility for literally anything. How passe
Versus all the natural people at the highest echelons of our political economic system valiantly taking responsibility for fuckall?
> Versus all the natural people
We can at least hold them responsible.
> We can at least hold them responsible
We don’t. (We can also hold corporations responsible. We seldom do.)
The problem isn’t in the form of legal entity fraud and corruption wears.
Fair enough, but it is much harder to hold a corporation responsible.
Jail is a great deterrent for natural persons.
> it is much harder to hold a corporation responsible
In some ways, yes. In most ways, no. In most cases, a massive fine aligns interests. Our problem is we've become weak kneed at levying massive fines on corporations.
Unlike a person, you don't have to house a corporation to punish it. Your fine simply wipes out the owners. If the enterprise is a going concern, it's born under new ownership. If it's not, its assets are redistributed.
> Jail is a great deterrent for natural persons
Jail works for executives who defraud. We just, again, don't do it. This AI could have been sold by a billionaire sole proprietor, I doubt that would suddenly make the rules more enforceable.
It's probably just US culture of "if you aren't cheating you aren't trying to win hard enough".
I certainly didn't imply that to be the case and I'm not sure how you could draw that conclusion from 2 whole sentences.
Engineer: hey, I made this cool thing that can help people in public safety roles process information and make decisions more efficiently! It gives false positives, but you save more time than it takes to weed through them.
Someone nearby: well what if they use it to replace human thinking instead of augment it?
Engineer: well they would be ridiculous. Nobody would ever think that’s a good idea.
Marketing Team: it seems like this lands best when positioning it as a decision-making tool. Let’s get some metrics on how much faster it is at making decisions than people are.
Sales Rep: ok, Captain, let’s dive into our flagship product, DecisionMaker Pro, the totally automated security monitoring agent…
::6 months later—some kid is being held at gunpoint over snacks.::
Refer to the post office scandal in Britain and the robodebt debacle in Australia.
The authorities are just itching to have their brains replaced by dumb computer logic, without regard for community safety and wellbeing.
Lack of Accountability as-a-Service! Very attractive proposition to negligent and self-serving organizations. The people in charge don't even have to pay for it themselves; they can just funnel the organization's money to the vendor. Encouraging widespread adoption helps normalize the practice. If anyone objects, shut them down as not thinking-of-the-children and something-must-be-done (and every other option is surely too complicated/expensive).
Nice fantasy, but the reality is that the "people in public safety roles" love using flimsy pretenses to harass and abuse vulnerable populations. I wish it were just overeager sales and marketing, but your view of humanity is way too naive, especially as masked thugs are disappearing people in the street as we type.
It’s actually “AI swarmed,” since no human reasoning, only execution, was exerted - basically having an AI direct resources.
Reverse Centaur. MANNA.
3-in-1. Lack.
when attacked by bees am I hive swarmed?
Delegating the decision to AI, excluding the human from the "human in the loop," is kind of unexpected as a first step, as in general it was expected that exclusion would start from the other end. As an aside, I wonder how that is going to play out on the battlefield.
For this civilian use case, the next step is AR goggles worn by police, with that AI projecting onto the goggles where that teenager has his gun (kind of Black Mirror style), and the next step after that is obviously excluding the humans even from the execution step.
In any system, there are false positives and false negatives. In some situations (like a high recall disease detection) false negatives are much worse than false positives, because the cost of a false positive is a more rigorous screening.
But in this case both are bad. If it was a false negative students might need therapy for a more tragic reason.
Aside from improving the quality of the detection model, we should try to reduce the “cost” of both failure modes as much as possible. Putting a human in the loop or having secondary checks are ways to do that.
In this case false positives are far, far worse than false negatives. A false negative in this system does not mean a tragedy will occur, because there are many other preventative measures in place. And never mind the fact that this country refuses to even address the primary cause of gun violence in the first place: the ubiquity of guns in our society. Systems like this are what we end up with when we decline to address the problem of guns and choose to deal with the downstream effects instead.
I tend to categorize these under a Dutch idiom which I can't quite translate, but which is abundantly clear in pictorial form:
https://klimapedia.nl/wp-content/uploads/2020/01/Dweilen_met...
"Treating the symptoms not the cause" would be the english equivalent.
(for others: the Dutch expression is "Dweilen met de kraan open", "Mopping with the tap open")
> the primary cause of gun violence in the first place: the ubiquity of guns in our society
I would have gone with “a normalized sense of hopelessness and indignity which causes people to feel like violence is the only way they can have any agency in life” considering “gun” is the adjective and “violence” is the actual thing you're talking about.
Both are true. The underlying oppressive, lonely, pro-bullying culture creates the tension. The proliferation of high lethality weapons makes it more likely that tension will eventually release in the form of a mass tragedy.
Improvement in either area would be a net positive for society. Improvement in both areas is ideal but solving proliferation seems a lot more straightforward than fixing the generally miserable society problem.
I think there’s probably some correlation between ‘generally miserable society’ and ‘we think it’s ok to have children surveiled by AI’
To be clear, the false negative here would be a student who has brought a gun to a school and the computer ignores it. That is a situation where potentially multiple people can be killed in a short amount of time. It is not far, far worse to send cops.
Depends on the false positive rate doesn't it. If police are being sent to storm a school every week due to a false positive, that is quite bad. And people will become conditioned to not care about reports of a gun at a school because of all the false positives.
For what I’m saying, no it doesn’t because I’m just comparing a single instance of false positive to a single instance of false negative. Neither is desirable.
> But in this case both are bad. If it was a false negative students might need therapy for a more tragic reason.
Given the probability of police officers in the USA taking any action as hostile and then ending up shooting him a false positive here is the same as swatting someone.
The system here sent the police off to kill someone.
Yep. Think of it as the new exciting version of swatting. Naturally, one will still need to figure out common ways to force a specific misattribution, but, sadly, I think there will be people working on it ( if there aren't already ).
Sure. But school shootings are also common in the US. A student who has brought a gun to a school is very likely not harmless. So false negatives aren’t free either.
What's the proportion of gun-carrying to shooting in schools?
I was swatted once. Girlfriend's house. Someone called 911 and said they'd seen me kill a neighbor, drag their body into the house, and was now holding my gf's family hostage.
We answered the screams at the door to guns pointed at our faces, and countless cops.
It was explained to us that this was the restrained version. We got a knock.
Unfortunately, I understand why these responses can't be neutered too much. You just never know.
In this case, though, you COULD know, COULD verify with a human before pointing guns at people, or COULD not deploy a half finished product in a way that prioritizes your profit over public safety.
s/COULD/SHOULD/g
Happened to a friend of mine: an ex-girlfriend said he was on psych meds (true, though he is nonviolent with no history) and that he was threatening to kill his parents. NYPD SWAT kicked down the door to his apartment, no-knock, terrorizing his elderly parents as officers pointed guns ("machine guns," in his words) at their son. And because he has psych issues and is on meds, he was forced into a cop car in front of the whole neighborhood to get a psych evaluation. He only received an apology from the cops, who said they have no choice but to follow procedure.
edit should add sorry to hear that.
> who said he was on psych meds (true though he is nonviolent with no history)
I don't understand the connection here
Do the cops not ever get tired of being fooled like this? Or do they just enjoy the chance to go out in their army-surplus armored cars and pretend to be special forces?
I had convos with cops about swatting, the good ones aren't happy to go kick down someone's door who isn't about to harm someone but feel they can't chance making a fatally wrong call when it isn't swatting, also they have procedures to follow and if they don't the outcome is on them personally and potentially legally.
As for bad cops they look for any reason to go act like aggro billy badasses.
> the good ones ...
uh-huh
> if they don't the outcome is on them personally and potentially legally.
Bullshit, they're rarely held accountable when they straight up murder people, and even then "accountable" is "have to go get a different job". https://en.wikipedia.org/wiki/Killing_of_John_T._Williams
ACAB
It seems entirely in line to not be held accountable for terrorizing/murdering people when you are held accountable for doing the opposite?
It just means the police force is an instrument of terror.
This is a really good question. Sadly the answer is that they think it's how the system is meant to work. Well that seems to be the answer that I see coming from police spokespeople
It's likely procedure that they have to follow (see my other post in this thread).
I hate to say this but I get it. Imagine a scenario happens where they decide "sounds phony. stand down." only for it to be real and people are hurt/killed because the "cops ignored our pleas for help and did nothing." which would be a horrible mistake they could be liable for, never mind the media circus and PR damage. So they treat all scenarios as real and figure it out after they knock/kick in the door.
To that end, we should all have a cop assigned to us. One cop per citizen, with a gun pointed at our head at all times. Imagine a scenario happens where someone does something and that cop wasn't there? Better to be safe.
Why stop at one? Imagine how much safer we’d be with TWO cops per citizen! And all those extra jobs that would be created!
And then cops for the cops!
I don't think you know how policing works in America. To cops, there are sheep, sheepdogs, and wolves; they are sheepdogs protecting us sheep from the criminals. Nobody needs to watch the sheepdogs!
But lets think about their analogy a little more: sheepdogs and wolves are both canines. Hmm.
Also "funny" how quickly they can reclassify any person as a "wolf", like this student. Hmm.
> Nobody needs to watch the sheepdogs!
A sheepdog that bites a sheep for any reason is killed.
Maybe we should move beyond binary thinking here. Yeah, it's worth sending someone to investigate, but also worth making some effort to verify who the call is coming from - to get their identity, and to ask them something simple, like describing the house (in this example), so the arriving cops will know they're going to the right address. Now of course you can get a description of the house from Google Street View, but 911 dispatchers can solicit information like what color car is currently parked outside, or suchlike. They could also look up who occupies the house and make a phone call while cops are on the way.
Everyone knows swatting is a real thing that happens and that it's problematic, so why don't police departments have procedures in place which include that possibility? Who benefits from hyped-up police responses to false claims of criminal activity?
Cops don't have a duty to protect people, so "cops ignored our pleas for help and did nothing" is a-ok, no liability (thank you, qualified immunity). They very much do not treat all scenarios as real; they go gung-ho when they want to and hang back for a few hours "assessing the situation" when they don't.
> they go gung-ho when they want to and hang back for a few hours "assessing the situation" when they don't.
Yeah. They were happy to take their sweet time assessing everything safely outside the buildings at Uvalde.
I'm a paramedic who has personally attended a swatting call where every single detail was egregiously wrong, but police still went in, no-knock, causing thousands of dollars of damage (which, to be clear, they have absolutely zero liability for), though thankfully no injuries.
"I can see them in the upstairs window" - of a single story home.
"The house is red brick" - it was dark grey wood.
"No cars in the driveway" - there was two.
Cops still said "hmm, still could be legit" and battered down the front door, deployed flashbangs.
There are more options here than "do nothing" and "go in guns blazing".
Establishing the probable trustworthiness of the report isn't black magic. Ask the caller for details, question the neighbours, look in through the windows, or just send two plainclothes officers pretending to be salesmen to knock on the door first. Continuously adjust the approach as new information comes in. This isn't rocket science, ffs.
See my other comment in this thread. I've personally witnessed trying to ask the caller verifying details because dispatchers were suspicious.
Even with multiple major discrepancies, police still decided they should go in, no-knock.
It doesn't make sense. If you were holding people hostage, you'd have demands for their release. Windows could be peeked into. If you dragged a dead body into a house, there'd be evidence of that.
False positives can effectively lead to false negatives too. If too many alarms end in teens getting swatted (or worse) for eating chips, people might ignore the alarm if an actual school shooter triggers it. Might assume the AI is just screaming about a bag of chips again.
I think a “true positive” is an issue as well if the protocol to manage it isn’t appropriate. If the kid was armed with something other than nacho cheese, the provocative reaction could have easily set off a tragic chain of events.
Reality is there are guns in schools every day. “Solutions” like this aren’t making anyone safer. School shooters don’t fit this profile - they are planners, not impulsive people hanging out at the social event.
More disturbing is the meh attitude of both the company and the school administration. They almost engineered a tragedy through incompetence, and learned nothing.
>And some teen may be traumatized.
Um. That's not really the danger here.
The danger is that it's as clear as day that in the future someone is gonna be killed. That's not just a possibility. It's a certainty given the way the system is set up.
This tech is not supposed to be used in this fashion. It's not ready.
Did you want to emphasize or clarify the first danger I mentioned?
My read of the "Um" and the quoting, was that you thought I missed that first danger, and so were disagreeing in a dismissive way.
When actually we're largely in agreement about the first danger. But people trying to follow a flood of online dialogue might miss that.
I mentioned the second danger because it's also significant. Many people don't understand how safety works, and will think "nobody got shot, so the system must have worked, nothing to be concerned about". But it's harder for even those people to dismiss the situation entirely, when the second danger is pointed out.
I’d argue the second danger is worse, because shooting might be incidental (and up to human judgement) but being traumatized is guaranteed and likely to be much more frequent.
I fully agree, but we also really need to get to a place where drawing the attention of police isn't an axiomatically life-threatening situation.
If the US wasn't psychotic, not all police would have to be armed, and not every police response would be an armed response.
Even if not all police were armed, the response to "AI said someone has a gun" would always be the armed police
Why would it not be "human reviews the image that the AI said was a gun"?
The entire selling point of AI is to not have humans in the loop.
Even despite the massive protests in the past few years, we're moving further in that direction.
Americans are killed by police all the time, and by other Americans. We've already decided as a society that we don't care enough to take the problem seriously. Gun violence, both public and from the state, is accepted as unavoidable and defended as a necessary price to pay to live in a free society[0]. Having a computer call the shots wouldn't actually make much of a difference.
Hell, it wouldn't even move the needle on racial bias much because LLMs have already shown themselves to be prejudiced against minorities due to the stereotypes in their training data.
[0]Even though no other free society has to pay that price but whatever.
Far more deaths by automobile than homicides by guns.
In the US, guns and automobiles kill roughly the same number of people each year.
Guns are actually easier to control and significantly reduce ability to target multiple people at once. There are a lot of countries successfully controlling guns.
To the argument that then only criminals have guns - in India at least, criminals have very limited access to guns. They have to resort to unreliable handmade guns which are difficult to procure. Usually criminals use knives and swords due to that.
> The danger is that it's as clear as day that in the future someone is gonna be killed.
This argument can be made about almost every technology, including fire, electricity, and even the base "technology" of just letting a person watch a camera feed.
I'm not downplaying the risks. I'm saying that we should remember that almost everything has risks and benefits, and as a society we decide for or against using/doing them based mostly on the ratio between those two things.
So we need some data on the rates of false vs. true detections here. (A value judgment is also required.)
> This argument can be made about almost every technology, including fire, electricity, and even the base "technology" of just letting a person watch a camera feed.
huh, i can think of at least one recent example of a popular figure using this kind of argument to extreme self-detriment.
Is HN really this ready to dive into obvious logical fallacies?
My original comment, currently sitting at -4, has so far attracted one guilt-by-association plus implied-threat combo, and no other replies. To remind readers: My horrifying proposal was to measure both the risks and the benefits of things.
If anyone genuinely thinks measuring the risks and benefits of things is a bad idea, or that it is in general a good idea but not in this specific case, please come forward.
> Is HN really this ready to dive into obvious logical fallacies?
No, which is why your comment was downvoted - the following is a fallacy:
> This argument can be made about almost every technology,
That's the continuum fallacy.
sorry for being glib; it was low hanging fruit. my actual point should have been more clearly stated: measuring risk/benefit is really complicated because there's almost never a direct comparison to be made when balancing profit, operational excellence and safety.
Stuff like this feels like some company has managed to monetize an open-source object detection model like YOLO [1], creating something that could be cobbled together relatively easily, and then sold it as advanced AI capability. (You'd hope they'd have at least fine-tuned it / have a good training dataset.)
We've got a model out there now that we've just seen has put someone's life at risk... Does anyone apart from that company actually know how accurate it is? What it's been trained on? Its false positive rate? If we are going to start rolling out stuff like this, should it not be mandatory for stats / figures to be published? For us to know more about the model, and what it was trained on?
[1] https://arxiv.org/abs/1506.02640
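For a sense of how low the barrier to entry is, here's a minimal sketch of the kind of pipeline being suspected. It assumes the open-source ultralytics package and a hypothetical weights file fine-tuned with a "gun" class; nothing about the vendor's actual stack is known.

    # Hypothetical sketch only: assumes the ultralytics package and a model
    # fine-tuned with a custom "gun" class; the vendor's real pipeline is unknown.
    from ultralytics import YOLO

    model = YOLO("gun_detector.pt")  # placeholder for assumed fine-tuned weights

    def scan_frame(frame, conf_threshold=0.5):
        """Return any 'gun' detections in a frame (path or numpy array) above the threshold."""
        results = model.predict(source=frame, conf=conf_threshold, verbose=False)
        alerts = []
        for box in results[0].boxes:
            label = model.names[int(box.cls)]
            if label == "gun":
                alerts.append({"label": label, "confidence": float(box.conf)})
        return alerts

The wiring is trivial; what matters is the quality of the weights behind it and the false-positive rate on real footage, which is exactly the number nobody publishes.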
And it feels like they missed the "human in the loop" bit. One day this company is likely to find itself on the end of a wrongful death lawsuit.
They’ll likely still be profitable after accounting for those. This is why sociopaths are so successful at business
[flagged]
"“We understand how upsetting this was for the individual that was searched as well as the other students who witnessed the incident,” the principal wrote. “Our counselors will provide direct support to the students who were involved.”"
Make them pay money for false positives instead of direct support and counselling. This technology is not ready for production, it should be in a lab not in public buildings such as schools.
Charge the superintendent with swatting.
Decision-maker accountability is the only thing that halts bad decision-making.
> Charge the superintendent with swatting.
This assumes no human verification of the flagged video. Maybe the bag DID look like a gun. We'll never know, because modern journalism has no interest in such things. They obtained the required emotional quotes and moved on.
Human verified the video -> human was the decision-maker. No human verified the video -> Human who gave a blank check to the AI system was the decision-maker. It's not really about the quality of journalism, here.
Please provide the quote from the story that says which of those is the case.
We're talking about who should be charged with a crime. I sincerely hope we're going to do more discovery than "ask Dexerto to summarize what WBAL-TV 11 News said".
> Police later showed him the AI-captured image that triggered the alert. The crumpled Doritos bag in his pocket had been mistaken for a gun.
That quote sorta suggests that the police got the alert, looked at the photo, and was like "yeah, that could be a gun, let's go".
Still dumb.
Not at all.
Superintendent approved a system that they 100% knew would hallucinate guns on students. You assert that if the superintendent required human-in-the-loop before calling the police that the superintendent is absolved from deploying that system on students.
You are wrong. The superintendent is the person who decided to deploy a system that would lead to swatting kids and they knew it before they spent taxpayer dollars on that system.
The superintendent also knew that there is no way a school administrator is going to reliably NOT dial SWAT when the AI hallucinates a gun. No administrator is going to err on the side of "I did not see an actual firearm so everything is great even though the AI warned me that it exists." Human-in-the-loop is completely useless in this scenario. And the superintendent knows that too.
In your lifetime, there will be more dead kids from bad AI/police combinations than <some cause of death we all think is too high>. We are not close to safely betting lives on it, but people will do it immediately anyway.
"In your lifetime, there will be more dead kids from bad AI/police combinations than <some cause of death we all think is too high>."
Maybe, but that won't stop the kind of people that watch cable news from saying "if it stops one crime" or "if it saves one life".
No amount of telling people that AI hallucinates will get some people to believe that AI hallucinates.
So, are you implying that if humans surveil kids at random and call the SWAT team if a frame in a video seems to imply one kid has a gun, that then it's all OK?
Those journalists, just trying to get (unjustified, dude, unjustified!!) emotes from kids being mistakenly held at gun point, boy they are terrible.... They're just covering up how necessary those mistakes are in our pursuit of teh crime...
If security sees someone carrying a gun in surveillance video, on a gun free campus, and policy verify it, then yes, that's justified, by all aspects of the law. There are countless examples of surveillance of illegal activity resulting in police action.
Are you suggesting it's not?
Nobody saw a gun in a video. Nobody even saw something that looked like a gun. A chip bag, at most, is going to produce a bulge. No reasonable human is going to look at a kid with a random bulge in their pocket and assume gun. Otherwise we might as well start sending our kids to school naked; this is the kind of paranoia that brought us the McMartin Preschool nonsense.
They didn't see that, though. They saw a kid with a bulge over their pants pocket, suggesting that something was in the pocket. The idea that any kind of algorithm can accurately predict that an amorphous pocket bulge is a gun is just bonkers stupid.
(Ok, ok, with thin, skin-tight, light-colored pants, maybe -- maybe -- it could work. But if it mistook a crumpled-up Doritos bag as a gun, clearly that was not the case here.)
> Make them pay money
It already cost money paying for the time and resources to be misappropriated.
There needs to be resignations, or jail time.
The taxpayers collectively pay the money, the officers involved don't (except for that small fraction of their income they pay in taxes that increase as a result).
I wonder how much more likely it is to get a false positive from a black student.
The question is whether that Doritos-carrying kid is still alive only because he is white, instead of being shot by violent cops acting on a false positive about a gun (and the cops must have figured it was likely a false positive, since the tip came from AI surveillance). These are the same cops who typically do nothing when an actual shooter is roaming a school on a killing spree; at the Uvalde school shooting, hundreds of cops milled around the school in full body armor, refusing to engage the shooter inside, and even prevented parents from going in to rescue their kids.
Before clicking on the article, I kinda assumed the student was black. I wouldn't be surprised if the AI model they're using has race-related biases. On the contrary, I would be surprised if it didn't.
I assume they were provide gift cards good for psychotherapy sessions.
> Make them pay money for false positives instead of direct support and counselling.
Agreed.
> This technology is not ready for production
No one wants stuff like this to happen, but nearly all technologies have risks. I don't consider that a single false positive outweighs all of its benefits; it would depend on the rates of false and true positives, and the (subjective) value of each (both high in this case, though I'd say preventing 1 shooting is unequivocally of more value than preventing 1 innocent person being unnecessarily held at gunpoint).
I think I’ve said this too many times already, but the core problem here, and with the “AI craze” generally, is that nobody really wants to solve problems. What they want is a marketable product, and AI seems to be the magic wrench that fits all the nuts. Since most people don't really know how it works or what its limitations are, they happily buy the “magic dust”.
> nobody really wants to solve problems, what they want is a marketable product
I agree, but this isn't specific to AI -- it's what makes capitalism work to the extent that it does. Nobody wants to clean a toilet or harvest wheat or do most things that society needs someone to do, they want to get paid.
Absolutely, but I don’t believe the responsibility falls on those looking to make a profit, but rather on those in charge of regulating how those profits can be made. After all, thieves want to make a profit too, but we don't allow them to, at least not unless they're stealing a couple of million.
In the US, cops kill more people than terrorists do. As long as your quantified values take that into account.
I get that people are uncomfortable with explicit quantification of stuff like this, but removing the explicitness doesn't remove the quantification, it just makes it implicit. If, say, we allow people to drive cars even though car accidents kill n people each year, then we are implicitly quantifying that the value of the extra productivity society gets by being able to get places quickly in a car is worth the deaths of those people.
In your example, if terrorists were the only people killing people in the US, and police (a) were the only means of stopping them, and (b) did not benefit society in any other way, the equation would be simple: get rid of the police. There wouldn't need to be any value judgements, because everything cancels out. But in practice it's not that easy, since the vast majority of killings in the US occur at the hands of people who are neither police nor terrorists, and police play a role in reducing those killings too.
I expect that John Bryan -- who produces content as The Civil Rights Lawyer https://thecivilrightslawyer.com -- will have something to say about it.
He frequently criticizes the legality of police holding people at gunpoint based on flimsy information, because the law considers it a use of force, which needs to be justified. For instance, he's done several videos about "high risk" vehicle stops, where a car is mistakenly flagged as stolen, and the occupants are forced to the ground at gunpoint.
My take is that if both the AI automated system and a "human in the loop" police officer looked at the picture and reasonably believed that the object in the image was a gun, then the stop might have been justified.
But if the automated system just sent the officers out without having them review the image beforehand, that's much less reasonable justification.
The article says the police later showed the student the photo that triggered the alert. He had a crumpled-up Doritos bag in his pocket. So there was no gun in the photo, just a pocket bulge that the AI thought was a gun... which sounds like a hallucination, not any actual reasonable pattern-matching going on.
But the fact that the police showed the photo does suggest that maybe they did manually review it before going out. If that's the case, I do wonder how much the AI influenced their own judgment, though. That is, if there were no AI involved and police were just looking at real-time surveillance footage, would they have made the same call on their own? Possibly not: it feels reasonable to assume they let the fact of the AI flagging it override their own judgment to some degree.
Someday there'll be a lawyer in court telling us how strong the AI evidence was because companies are spending billions of dollars on it.
Or they'll tell us police have started shooting because an acorn falls, so they shouldn't be expected to be held to higher standards and are possibly an improvement.
And there needs to be an opposing lawyer ready to tear that argument to pieces.
You mean in the same fallacious sense of "you can tell cigarettes are good because so many people buy them"?
That sort of rhetoric works very well unfortunately.
Is use of force without justification automatically excessive force or is there a gray area?
See https://en.wikipedia.org/wiki/Graham_v._Connor
Ah, the coming age of Palantir's all-seeing platform, and Peter Thiel becoming the shadow Emperor. Too bad non-deterministic ML systems are prone to errors that risk lives when applied wrongly to crucial parts of society. But in an authoritarian state, those will be hidden away anyway, so there's nothing to see here: move along, folks. Yes, surveillance and authoritarianism go hand in hand; ask China. It's important to protest these methods and push lawmakers to act against them; now, before it's too late.
I might be missing something but I don't think this article isn't about palantir or any of their products
Palantir is but one head of the hydra which has hundreds of them, and all concerns about a single one apply to the whole beast hundredfold.
It's still not helpful to wander into threads to talk about your favorite topic without making an effort to provide some context on why your comments are relevant. When random crazy people come up to you spouting their theories in public places, the problem is not that their concerns are necessarily incoherent or invalid; the problem is that they are broadcasting their thoughts randomly with no context, and their audience has no way of telling whether they just need to verbalize what's bothering them or have mistaken a passer-by for one of the villains in their psychodrama.
tl;dr if you want to make a broad point, make the effort to put it in context so people can appreciate it properly.
You're absolutely right, Palantir just needs a different name and then they'd have no issues.
This comment has a double negative, which makes it a false positive.
The article is about omnialert, not palantir, but don’t let the facts get in the way of your soapbox rant.
Same fallible systems, same end goal of mass surveillance.
That may be the case, but only one of them is actually responsible for armed police swarming this student and it wasn't Palantir. It seems very strange that you're so eager to give a free pass to the firm who actually was at fault here.
I'm pretty sure that some people will continue to apply the term "soapbox ranting" to all opposition against the technofascism even when victims of its false positives will be in need of coroners, not psychologists.
I dont think a guy who knows so much about the anti christ could be wrong.
[flagged]
Pre-emptive compliance out of fear - then my boy, the war is already lost.
"competition is for losers"... right? ;-)
[flagged]
So you just live a reactionary life? Nothing matters until it affects you personally? Should we get rid of free speech if jason-phillips doesn't have anything to say?
No, I just don't live around any of you people. I have copious amounts of freedom, including free speech.
You get the society/government/neighbors that you deserve.
> it just doesn’t impact me
It didn’t impact these people before the Dorito incident either.
Omnilert later admitted the incident was a “false positive” but claimed the system “functioned as intended,” saying its purpose is to “prioritize safety and awareness through rapid human verification.”
Prioritize your own safety by not attending any location fitted with such a system, or deemed so dangerous an environment that such a system is desired.
the AI "swatted" someone.
The corporate version of "It's a feature, not a bug."
Calling it today. This company is going to get innocent kids killed.
How many packs of Twizzlers, or Doritos, or Snickers bars are out there in our schools?
First time it happens, there will be an explosion of protests. Especially now that the public knows that the system isn't working but the authorities kept using it anyway.
This is a really bad idea right now. The technology is just not there yet.
And then there's plenty of bullies who might put a sticker of a picture of a gun on someone's back, knowing it will set off the image recognition. It's only a matter of time until they figure that out.
That's a great and terrifying idea. When that inevitably happens, you'll have a couple of 13-year-olds: one dead, and one shell-shocked kid in disbelief that a stupid prank idea he cooked up in 60 seconds is now claimed as the root cause of someone being killed. That one may be charged with a crime or sued, though the district that installed this idiotic thing is really to blame.
When I was a kid, we made rubber-band guns all the time. I’m sure that would set it off too.
> The technology is just not there yet.
The technology literally can NEVER be there. It is completely impossible to positively identify a bulge in clothing as a handgun. But that doesn’t stop irresponsible salesmen from making the claim anyway.
>First time it happens, there will be an explosion of protests.
Why do you believe this? In the US, cops will cower outside of a school while an armed gunman actively murders children, forcibly detain parents who wish to go in when the cops won't, and then voters will re-elect everyone involved.
In the US, an entire segment of the population will send you death threats claiming you are part of some grand (democrat of course) conspiracy for the crime of being a victim of a school shooting. After every school shooting, republican lawmakers wear an AR-15 pin to work the next day to ensure you know who they care about.
Over 50% of the country blamed the protesting students at Kent state for daring to be murdered by the national guard.
Cops can shoot people in broad daylight, in the back, with no justification, or reasonable cause, or can even barge into entirely the wrong house and open fire on homeowners exercising their basic right to protect themselves from strangers invading their homes, and as long as the people who die are mostly black, half the country will spout crap like "They died from drugs" or "they once sold a cigarette" or "he stole skittles" or "they looked at my wife wrong" while the cops take selfies reenacting the murder for laughs and talk about how terrified they are by BS "training" that makes them treat every stranger as a wanted murderer armed to the teeth. The leading cause of death for cops is still like heart disease of course.
Trump sent unmarked forces to Portland and abducted people off the street into unmarked vans and I think we still don't really know what happened there? He was re-elected.
Clearly it did not prioritize human safety.
"rapid human verification." at gunpoint. The Torment Nexus has nothing on these AI startups.
Why did they waste time verifying? The police should have eliminated the threat before any harm could be done. Seconds count when you're keeping people safe.
I get that you're being sarcastic and find the police response appalling, but the sad reality of Poe's Law is that there are a lot of people who would unironically say this and would have cheered if the cops had shot this kid, either because they hate black people or because they get off on violence and police shootings are a socially sanctioned way to indulge that taste.
We all know the cops will go for the easy prey:
* Even hundreds of cops in full body armor and armed with automatic weapons will not dare to engage a single "lone wolf" shooter on a killing spree in a school; the heartless cowards may even prevent the parents from going inside to rescue their kids: the Uvalde school shooting.
* A cop on an ego trip will shoot a clearly harmless kid calmly eating a burger in his own (not stolen) car: the Erik Cantu incident.
* Cops are not there to serve society; they are not there to ensure safety and peace for the neighborhood; they are merely an armed militia protecting the rich and powerful elites: https://www.alternet.org/2022/06/supreme-court-cops-protect-...
All of your examples are well known not because they are normal and accepted but because they are exceptions. For every bad example there are a thousand good ones; that's humans for you.
That doesn't mean cops are perfect or shouldn't be criticised, but claiming that's all they do isn't reasonable either.
If you look at actual per capita statistics you will easily see this.
The dispatch relayer and responding officers should at least have ready access to a screen where they can see a video/image of the raw footage that triggered the AI alert. If it is a false alarm, they will better see it and react accordingly, and if it is a real threat they will better understand the initial context and who may have been involved.
According to a news article[1], a human did review the video/image and flagged it as a false positive. It was the principal who told the school cop, who then called other cops:
> The Department of School Safety and Security quickly reviewed and canceled the initial alert after confirming there was no weapon. I contacted our school resource officer (SRO) and reported the matter to him, and he contacted the local precinct for additional support. Police officers responded to the school, searched the individual and quickly confirmed that they were not in possession of any weapons.
What's unclear to me is the information flow. If the Department of School Safety and Security recognized it as a false positive, why did the principal alert the school resource officer? And what sort of telephone game happened to cause local police to believe the student was likely armed?
1. https://www.wbaltv.com/article/student-handcuffed-ai-system-...
Good lord, what an idiot principal. If the principal saw how un-gun-like it looked, he could have been brave enough to walk his lazy ass down to where the student was and say, "Hey (Name), check this out. (show AI detection picture) The AI camera thought this was a gun in your pocket. I think it's wrong, but they like to have a staff member sign off on these since keeping everyone safe from violence is a huge deal. Can I take a picture of what's actually in your pocket?"
Sounds like a "better safe than sorry" approach. If you ignore the alert on the basis that it's a false positive, then it turns out it really was a gun and the person shoots somebody, you're going to get sued into the ground, fired, name plastered all over the media, etc. On the other hand, if you call in the cops and there wasn't a gun, you're fine.
> "On the other hand, if you call in the cops and there wasn't a gun, you're fine."
Yeah, cause cops have never shot somebody unarmed. And you can bet your ass that the possible follow-up lawsuit to such a debacle's got "your" name on it.
Good luck suing somebody for calling the police.
In Texas filing a false report is a crime and can result in fines and/or imprisonment. Details:
https://legalclarity.org/false-report-under-the-texas-penal-...
Furthermore, anyone who files a false report can be sued in civil court.
“The system flagged a gun, please check it out” is not a false report.
It might be, depending on the integrity of "the system".
I can make a system that flags stuff, too. That doesn't mean it's any good. If they can show there was no reasonable cause then they've got a leg to stand on.
With reports on child welfare, it is often illegal to release the name of the tipster. This is commonly taken advantage of by disgruntled exes or in custody disputes.
Ask a black teenager about being fine.
Well, you could ask this kid, he is black and wasn't harmed. It's not the cops' fault someone told them he had a gun.
Next up, a captcha that verifies you're not a robot by swatting you and checking at gunpoint.
This may be mean, but we should really be careful about just handing AI over to technically illiterate people. They're far more likely to blindly trust the LLM/AI output than someone more experienced who might take a beat. AI in an agentic-state society (what we have in America, at least) is an absolute ticking time bomb. Honestly, this is what AI safety teams should be concentrating on: making sure people who think the computer is infallible understand that, no, it isn't, and you shouldn't just assume what it tells you is correct.
We already handed the Internet over to technically illiterate people a long time ago.
Walking through TSA scanners, I always get that unnerving feeling I will get pulled aside. 50% of the time they flag my cargo pants because of the zipper pockets - There is nothing in them but the scanner doesn't like them.
Now we get the privilege of walking by AI security cameras placed in random locations, hoping they don't flag us.
There's a ton of money to be made with this kind of global frisking, so lots of pressure to roll out more and more systems.
How does this not spiral out of control?
To be fair, at least you can choose not to wear the cargo pants.
A friend of mine once got pulled aside for extra checks and questioning after he had already gone through the scanners, because he was waiting for me on the other side to walk to the gates together and the agent didn't like that he was "loitering" – guess his ethnicity...
How is it fair to say that? That's some "why did you make me hurt you"-level justification.
No, it's not.
I have shoes that I know always beep on the airport scanners, so if I choose to wear them I know it might take longer or I might have to embarrassingly take them off and put them on the tray. Or I can not wear them.
Yes, in an ideal world we should all travel without all the security theatre, but that's not the world we live in, I can't change the way airports work, but I can wear clothes that make it faster, I can put my liquids in a clear bag, I can have bags that make it easy to take out electronics, etc. Those things I can control.
But people can't change their skin color, name, or passport (well, not easily), and those are also all things you can get held up in airports for.
That's true, if you're saying "I can at least avoid being assaulted by the shitty system", I just want to point out that it is a shitty system.
I fully agree with you on that, it is a shitty system :)
> guess his ethnicity...
Not sure, but I bet they were looking at their feet kinda dusting off the bottoms, making awkward eye contact with the security guard on the other side of the one way door.
Get Precheck or global entry. I only do a scanner every 5 years or so when I get pulled at random for it. Otherwise it's metal detector only. Unless your zippers have such chunky metal that they set that off you'll be fine. My belt and watch don't.
Note: Precheck is incredibly quick and easy to get; GE is time-consuming and annoying, but has its benefits if you travel internationally. Both give the same benefits at TSA.
Second note: let's pretend someone replied "I shouldn't have to do that just to be treated...blah blah" and that I replied, "maybe not, but a few bucks could still solve this problem, if it bothers you enough that's worth it to you."
...maybe not, but a few bucks could still solve this problem
Sure, can't argue with that. But doesn't it bug you just a little that paying a fee to avoid harassment doesn't look all that dissimilar from a protection racket? As to whether it's a few bucks or many, now you're just a mark negotiating the price.
"Just pay to not be harrassed or have your rights/dignity stepped on" a typical take to find on the orange site.
I already adjust my clothing choices when flying to account for TSA's security theater make-work bullshit. Wonder how long before I'm doing that when preparing to go to other public places.
(I suppose if I attended pro sports games or large concerts, I'd be doing it for those, too)
I was getting pulled out of line in the 90’s for having long hair. I don’t dress in shitty clothes or fancy ones, I didn’t look funny, just the hair, which got regular compliments from women.
I started looking at people trying to decide who looked juicy to the security folks and getting in line behind them. They can’t harass two people in rapid succession. Or at least not back then.
The one I felt most guilty about, much later, was a filipino woman with a Philippine passport. Traveling alone. Flying to Asia (super suspect!). I don’t know why I thought they would tag her, but they did. I don’t fly well and more stress just escalates things, so anything that makes my day tiny bit less shitty and isn’t rude I’m going to do. But probably her day would have been better for not getting searched than mine was.
Speak up citizens!
Email the state congressman and tell them what you think.
Since (pretty much) nobody does this, if a few hundred people do it, they will sit up and take notice. It takes fewer people than you might think.
Since coordinating this with a bunch of strangers (i.e. the public) is difficult, the most effective way is to normalise speaking up in our culture. Of course, normalising it will increase the incoming comm rate, which will slowly decrease the effectiveness, but even past that point it's better than where we are, which is silent public apathy.
If that's the case, why do people in Congress keep voting for things their constituents don't like? When they get booed at town halls they just dismiss it as being the work of paid activists.
Yeah, Republicans hide from townhalls. Most of them have one constituent, Trump.
I got pulled aside because I absentmindedly showed them my concealed carry permit, not my driver's license. I told them I was a consultant working for their local government and was going back to Austin. No harm no foul.
If the system used any kind of logic whatsoever a CCW permit would not only allow you to bypass airport security but also carry in the airport (Speaking as both a pilot and a permit holder)
Would probably eliminate the need for the TSA security theater so that will probably never happen.
You can carry in the airport in AZ without a permit, in the unsecured areas. I think there was only one brouhaha, because some particularly bold guy did it openly with a rifle (can't remember if there's more to the story).
The point of the security theater is to assuage the 95th percentile scared-of-everything crowd, they're the same people who want no guns signs in public parks.
That may have been true 25 years ago. All the rules are now mostly an annoyance and don't reassure anyone.
There weren't a lot of people voicing opposition to TSA's ending of the shoes off policy earlier this year.
You're right not a lot of people objected to TSA ending the no shoes safety rule, and it's a shame. I certainly objected and tried to make my objections known, but apparently 23 or 24 years of the iconic custom of taking shoes off went to waste because the TSA decided to slack off
No.
Right from the beginning it was a handout to groups who built the scanning equipment, who were basically personal friends with people in the admin. We paid absurd prices for niche equipment, a lot of which was never even deployed and just sat in storage.
Several of the hijackers were literally given extended searches by security that day.
A reminder that what actually stopped hijackings (like, nearly entirely) was locking the cockpit door, which was always doable and has never been breached since. Not only did this stop terrorist hijackings, it stopped the more casual hijackings that used to be routine, and it could also stop "inside man" hijackings like the one with a disgruntled FedEx pilot. It was nearly free to implement, is always available, harms no one's rights, doesn't turn the airport security line into a juicy bombing target, doesn't slow down an important part of the economy, and doesn't invent a massive bureaucracy and law-enforcement arm inside a new American agency whose goal is suppressing domestic problems and which has never done anything useful. Keep in mind, shutting the cockpit door is literally how the terrorists themselves protected themselves from being stopped, and is the reason Flight 93 couldn't be recovered.
TSA is utterly ineffective. They have never stopped an attack, regularly fail their internal audits, the jobs suck, and they pay poorly and provide minimal training.
> regularly fail their internal audits
Not even. It's that they rarely pass the audits. Many of the audits have a 90-95% "missed suspect item/s" result.
[dead]
Getting pulled aside by TSA for secondary screening is nowhere in the ballpark of being rushed at gunpoint as a teenager and told to lie down on the ground, where one false move will get you shot by a trigger-happy cop who probably won't face any consequences, especially if the innocent victim is a Black male.
In fact, they will probably demonize the victim to find an excuse for why he deserved to get shot.
I wasn't implying TSA-cargo-pant-groping is comparable. My point is to show escalation in public facing systems. We have been dealing with TSA. Now we get AI Scanners. What's next?
Also, no need to escalate this into a race issue.
But it was a black man they harassed.
Yes because I’m sure if a White female had been detected by AI of carrying a gun, it would have been treated the same way.
You have no evidence to suggest this, just bias. Unless you are aware of the AI algorithm, then it's a pointless discussion that only causes strife and conjecturing.
It’s not the AI algorithm, it’s the police response I’m questioning would be different.
How many audit-the-police videos have you seen on YouTube? There is an insufferable number of "white" people getting destroyed by the cops. If you replaced the "white" people in these videos with "black" people, then 99% of viewers would assume the cops are hardcore racists, when in fact they are just bad cops - very bad cops with some deep psychological issues, probably rooted in a traumatic childhood.
https://www.sentencingproject.org/reports/one-in-five-dispar...
https://scholar.harvard.edu/files/fryer/files/empirical_anal...
https://www.prisonpolicy.org/blog/2022/12/22/policing_survey...
Why don’t you pay the bribe and skip the security theater scanner? It’s cheap. Most travel cards reimburse for it too.
I'm sure CLEAR is already having giddy discussions about how they can charge you for pre-verified access to walk around in public. We can all wear CLEAR-certified dog tags so the cops can hassle the non-dog-tagged people.
The TSA scanners also trigger easily on crotch sweat.
I enjoy a good grope, so I’ll keep that in mind the next time I’m heading into the us.
>the system “functioned as intended,”
Behold - a real life example of a "Not a hotdog" system, except this one is gun / not-a-gun.
Except the fictional one from the series was more accurate...
He could easily have been murdered. It's far from the first time that a bunch of overzealous cops have murdered a kid. I would never, ever in my life set foot in a place that sends armed cops after me so easily. That school is extremely dangerous.
I think the reason the school bought this silly software is because it's a dangerous school, and they're grasping at straws to try and fix the problem. The day after this false positive, a student was robbed.[1] Last month, a softball coach was charged with rape and possession of child pornography.[2] Last summer, one student was stabbed while getting off the bus.[3] Last year, there were two incidents where classmates stabbed each other.[4][5]
1. https://www.nottinghammd.com/2025/10/22/student-robbed-outsi...
2. https://www.si.com/high-school/maryland/baltimore-county-hig...
3. https://www.wbaltv.com/article/knife-assault-rossville-juven...
4. https://www.wbal.com/stabbing-incident-near-kenwood-high-sch...
5. https://www.cbsnews.com/baltimore/news/teen-injured-after-re...
That certainly sounds bad, but it's all relative; keep in mind this school is in Baltimore County, which is distinct from the City of Baltimore and has a much different crime profile. This school is in the exact same town as Eastern Tech, literally the top high school in Maryland.
Hi, I'm not following the point being made.
I skimmed through all the articles linked in the GP and found them pretty relevant to whatever decision might have been made to deploy the AI system (not at all a comment on how badly the bad tip was acted on).
Hailing from and still living in N. California, you could tell me that this school is located in Beverly Hills or Melrose Place, and it would still strike me as a piece of trivia. If anything, it'd just be ironic?
For context, Baltimore (City) is one of the most dangerous large cities in the US. Between the article calling the school "Kenwood High School in Baltimore" and the GP's crime links, a casual reader could mistakenly picture a dangerous inner-city school. But in reality it's located in a low-rise suburb in the County. Granted, it's an inner-ring blue collar suburb, but it's still a night-and-day difference from the worst neighborhoods in the city. And the schools in those bad neighborhoods tend to have far worse crimes than what was listed above.
So my point was that while the list of incidents is definitely not great, it's still way less severe than many inner-city schools in Baltimore. And honestly these same types of incidents happen at many "safe" large suburban high schools in "nice" areas throughout the US... generally less often than at this particular school, but not an order-of-magnitude difference.
Basically, I'm saying that GP's assertion of it being a "dangerous school" is entirely relative to what you're comparing to. There are much worse schools in that metro area.
That sounds to me like it's pretty close to the middle of the curve a large High School in the US.
I doubt that. I moved around a lot as a kid, so I went to at least eight different public schools from Alabama to Washington. One school was structurally condemned while I attended it. Some places had bullying, and sometimes a couple of people fought, but never with weapons, and there was never an injury severe enough to require medical attention.
I also know several high school teachers and the worst things they've complained about are disruptive/stupid students, not violence. And my friends who are parents would never send their kids to a school that had incidents like the ones I linked to. I think this sort of violence is limited to a small fraction of schools/districts.
> I think this sort of violence is limited to a small fraction of schools/districts.
No, definitely not. I went to a decently-well-ranked suburban school district, and still witnessed violent incidents... no weapon used, but still multiple cases where the victim got a concussion. And there were arrests, a gun found in a kid's locker, etc. This stuff was unfortunately relatively normal, at least in the 90s. Not quite as often as at the school in the article, but still.
Based on your reporting, that's one violent crime per year, and one alleged child rapist. [0]
The crime stats seem fine to me. In a city like Baltimore, the numbers you've presented are shockingly low. When I was going through school, it was quite common for bullies to rob kids... even on campus. Teachers pretty much never did anything about it.
[0] Maybe the guy is a rapist, and maybe he isn't. If he is, that's godawful and I hope he goes to jail and gets his shit straight.
If false positives are known to happen, then you design a system where the image is vetted before telling the cops the perpetrator is armed. The company is basically swatting, but I'm sure they'll never be held liable.
Actually, if a system has too many false positives or false negatives, it's basically useless. There will eventually be doubts amongst the operators of it and the whole thing will implode, which is the best possible outcome.
We already went through this years ago with all those terrorism databases, and we (humanity) have learned nothing: any database will have a percentage of erroneous data; it is impossible to eliminate erroneous data completely. Therefore, any database used to identify <fill in the blank> will produce erroneous conclusions. It's been observed over and over again, and governments can't help telling themselves "this time it will be different because <fill in the blank>", e.g. AI.
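To put rough numbers on why that matters, here's a toy base-rate calculation. The figures below are invented purely to show the shape of the problem: when actual guns on camera are vanishingly rare, even a seemingly accurate detector produces almost nothing but false alerts.

    # Invented numbers for illustration: 1 frame in a million actually shows a gun,
    # the detector catches 99% of real guns, and false-alarms on 0.1% of innocent frames.
    prevalence = 1e-6
    sensitivity = 0.99
    false_positive_rate = 1e-3

    p_alert = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
    p_gun_given_alert = (sensitivity * prevalence) / p_alert
    print(f"P(gun | alert) = {p_gun_given_alert:.4%}")  # roughly 0.1%: ~999 of every 1000 alerts are false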
I think the most amazing part is that the school doubled down on the mistake by parroting the corporate line.
I expect a school to be smart enough to say “Yes, this is a terrible situation, and we’re taking a closer look at the risks involved here.”
Lawyer's advice?
I would think "no comment" would be safer/smarter than "yeah, your kids are at risk of being shot by police by attending our school, deal with it".
Good point.
Except they never say that.
> Omnilert later admitted the incident was a “false positive” but claimed the system “functioned as intended,” saying its purpose is to “prioritize safety and awareness through rapid human verification.”
> Baltimore County Public Schools echoed the company’s statement in a letter to parents, offering counseling services to students impacted by the incident.
(Emphasis mine)
We blame AI here, but what's up with law enforcement that comes in with loaded guns drawn and puts someone on the ground and cuffs him before actually doing any check?
That is the real issue.
A police force anywhere else in the world that knows how to behave would have approached the student, had a small chat with him, found out all he had was a bag of Doritos, maybe asked politely to see the contents of his bag, explained that the search was triggered by an autodetection system that can make occasional errors, and wished him a good day.
Or they'd have looked at whatever images the system had used for its decision and see it was a false positive without having to send anyone over.
What is happening in the world. There should be some liability for this but nothing will happen.
A bunch of companies and people invested unimaginable amounts of money in these technologies in the hope that they will multiply that money. They will shove it down our throats no matter what. This isn't about security, making the world a better place, saving lives, or preventing bad things from happening; it is strictly about those people and companies making as much money as possible, or at least, for now, not losing the money they invested.
Not sure nothing will happen. Some trial lawyers would love to sue a city, a school system, and an AI surveillance company over "trauma, anxiety, and mental pain and suffering" caused by the incident. There will probably be a settlement that nobody ever hears about.
A settlement paid by the taxpayers with no impact at all on anyone actually responsible.
The world is doing fairly ok, thank you. The US however I’m not so sure as people here are apparently more concerned by the AI malfunction than with the idea it’s somehow sensible to live monitor high schools for gun threat.
It's not just the US. China runs the same level of surveillance, and it's being implemented throughout Europe, Africa, and Asia. This is becoming the norm.
Because if the "gun threat" system isn't accurate, then it's a system for false positives and false negatives and it's actually worse than having no such system. Maybe that's what you meant?
So you’re okay with trigger happy cops forcing a teenager to the ground because he had a bag of Doritos?
No, I think it’s crazy that people somehow think it’s rational to video monitor kids and be worried they have actual fire arms.
I think that’s a level of f-ed up which is so far removed from questioning AI that I wonder why people even tolerate it and somehow seem to view the premise as normal.
The cop thing is just icing on the cake.
It's a system that was sold to a legally risk-averse school district or city or whatever. It's a sales job, and the non-technical people buy it because they aren't equipped to even ask the right questions about it. They created even more problems for themselves than the problems they purportedly set out to solve! This is modern life in a nutshell.
Law enforcement officers, judicial officials, social workers, and the like generally maintain qualified immunity from liability in the course of their work. Take this case, for example, in which judges and social workers allegedly failed to properly assess a mother's fitness for child custody despite repeated indicators suggesting otherwise. The child was ultimately placed in the mother's care and was later killed execution-style (deliberately, not through negligence).
https://www.youtube.com/watch?v=wzybp0G1hFE
This case happened in the county I reside in and my sister-in-law is an attorney for the county in CP, although this was not her case directly. I can tell you what led to this: The COVID lockdowns! They stopped doing all the usual home visits and follow ups because everyone was too scared to do their jobs.
This case was a horrifying failure of the entire system that up until that point had fairly decent results for children who end up having to be taken away from their parents and later returned once the Mom/Dad clean up their act.
Not applicable - as a society we've countless times chosen to favour the right of the mother to keep children over the rights of other humans. Most children are killed in the home of the mother (i.e. either by the mother, or, while the father was available, in situations a different partner choice would have avoided), or, even worse, in the Anders Breivik situation (father available with a stable job and perspectives in life, but custody refused, child grew up a mass murderer as always).
> custody refused, child grew up a mass murderer as always).
What?
[flagged]
[flagged]
The people building these things are good friends with the bullies and scammers now.
Memories of Jean Charles de Menezes come to mind: https://en.wikipedia.org/wiki/Killing_of_Jean_Charles_de_Men...
And Amadou Diallo:
<https://en.wikipedia.org/wiki/Killing_of_Amadou_Diallo>
That was my first thought as well. The worry is that police officers make mistakes, which leads to hapless people getting terrorized, harmed, or killed. The bad thing about AI is it'll allow police to escape responsibility. And where a human who realizes they made a mistake can admit it and everything is okay, if the AI says you had a gun, it won't walk that back. AI said he had a gun. But when we checked, he didn't have it anymore.
In the Menezes case the cops were playing a game of telephone that ended up with him being shot in the head.
Sincere, and snarky summary:
"Omnilert" .. "You Have 10 Seconds To Comply"
-now targeting Black children!
Q: What was the name of the Google AI Ethicist who was fired by Google for raising the concern that AI overwhelmingly negatively framed non-white humans as threats .. Timnit Gebru
https://en.wikipedia.org/wiki/Timnit_Gebru#Exit_from_Google
We, as technologists, ARE NOT DOING BETTER. We must do better, and we are not on the "DOING BETTER" trajectory.
We talk about these "incidents" with breathless, "Wwwwellll if we just train our AI better ..." and the tragedies keep rolling.
Q2: Which of you has had a half dozen Squad Cars with Armed Police roll up on you, and treat you like you were a School Shooter? Not me, and I may reasonably assume it's because I am white, however I do eat Doritos.
> “They didn’t apologize. They just told me it was protocol. I was expecting at least somebody to talk to me about it.”
I wonder how effective an apology and explanation would have been? Just some respect.
The school admin has no understanding of the tech and only the dimmest comprehension of what happened. Asking them to do anything besides what the tech company told them to do is asking wayyy too much.
Effective at what? No one is facing any consequences anyway.
More's the pity. The school district could use some consequences.
Except for the kids who experienced the “rapid human verification” firsthand.
Not a bad point, but a fake apology is worse than none.
Maybe an apology from the AI?
I wonder if the AI correctly identified it as a bag of Doritos, but was also trained on the commercial[0] where the bag appears to beat up a human (his fault for holding on too tight) and then it destroys an alien spacecraft.
[0] https://www.youtube.com/watch?v=sIAnQwiCpRc
The guidance counselor does not have the training or time to "fix" the trauma you just gave this kid and his friends. Insane to put minors through this.
Having worked extensively with computer vision models for our interview analysis system, this incident highlights a critical challenge in AI deployment: the trade-off between false positive rates and detection confidence thresholds. We initially set our confidence threshold at 0.85 for detecting inappropriate objects during remote interviews, but found this led to ~3% false positives (mostly mundane objects like water bottles being flagged as concerning).
We solved this by implementing a two-stage verification system: initial detection runs at 0.7 threshold for recall, but any flagged objects trigger a secondary model with different architecture (EfficientNet vs ResNet) and viewpoint analysis. This reduced false positives to 0.1% while maintaining 98% true positive detection rate. For high-stakes deployments like security systems, I'm curious if others have found success with ensemble approaches or if they're using human-in-the-loop verification? The latency impact of multi-stage detection could be problematic for real-time scenarios.
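For reference, the cascade described above looks roughly like the sketch below. The model interfaces, thresholds, and types are illustrative placeholders, not the production code.

    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class Detection:
        label: str
        score: float
        bbox: tuple  # (x1, y1, x2, y2)

    STAGE1_THRESHOLD = 0.70  # tuned for recall: let borderline detections through
    STAGE2_THRESHOLD = 0.90  # second, differently-architected model must strongly agree

    def verify_detection(
        frame,
        stage1_detect: Callable[[object], Optional[Detection]],
        stage2_classify: Callable[[object, tuple], Optional[Detection]],
    ) -> Optional[Detection]:
        """Escalate a detection only if two independent models agree on the flagged region."""
        candidate = stage1_detect(frame)
        if candidate is None or candidate.score < STAGE1_THRESHOLD:
            return None
        second_opinion = stage2_classify(frame, candidate.bbox)  # re-check just the crop
        if (second_opinion is None
                or second_opinion.label != candidate.label
                or second_opinion.score < STAGE2_THRESHOLD):
            return None  # disagreement or low confidence: suppress the alert
        return candidate  # only now hand off to human review / alerting

The second stage adds one extra forward pass on a cropped region, which is the latency concern mentioned above; for camera feeds that are reviewed on the order of seconds rather than milliseconds it may be tolerable.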
I don't have kids yet, but I may someday. I went to public school myself, and would prefer to send any kid of mine to public school as well. (I'm not hard against private schools, but I'd prefer my kid gets to make friends from all walks of life, not just people who have parents who can afford private school.)
But I really wouldn't want to send my kid to a school that surveils students all the time, and uses garbage software like this that directly puts kids into dangerous situations. I feel like with a private school, I'd have more choice and ability to influence that sort of thing.
How likely is it that the AI system would have classified the bag of Doritos as a weapon had the person carrying it been white instead of black?
> Omnilert later admitted the incident was a “false positive” but claimed the system “functioned as intended,” saying its purpose is to “prioritize safety and awareness through rapid human verification.”
This exact scenario is discussed in [1]. The "human in the loop" failed, but we're supposed to blame the human, not the AI (or the way it was implemented). The humans serve as "moral crumple zones".
""" The emphasis on human oversight as a protective mechanism allows governments and vendors to have it both ways: they can promote an algorithm by proclaiming how its capabilities exceed those of humans, while simultaneously defending the algorithm and those responsible for it from scrutiny by pointing to the security (supposedly) provided by human oversight. """
[1]: https://pluralistic.net/2024/10/30/a-neck-in-a-noose/
The article doesn't confirm that there was definitely a human in the loop, but it sorta suggests that police got a chance to manually verify the photo before going out to harass this poor kid.
I suspect, though, that the AI flagging that image heavily influenced the cop doing the manual review. Without the AI, I'd expect that a cop manually watching a surveillance feed would have found nothing out of the ordinary, and this wouldn't have happened.
So I agree that it's weird to just blame the human in the loop here. Certainly they share blame, but the fact of an AI model flagging this sort of thing (and doing an objectively terrible job of it) in the first place should take most of the blame here.
An alert by one of these AI tools, which from what I understand have a terrible track record, should not be reasonable suspicion or probable cause to swarm a teenager with guns drawn. I wish more people in local communities would understand how much harm this type of surveillance and response causes. Our communities should not be using these tools.
The regular types of school shootings weren't enough, so they added AI-powered police school shootings to the mix.
Up next: gun kitted drones patrolling the school playground.
When people wonder how can AI mistake a bag of snacks as a weapon, simply answer "42"
It is about the question: the answer will become very clear once you understand what question was presented to the inference model, and of course what data and context it was fed.
>> Omnilert later admitted the incident was a “false positive” but claimed the system “functioned as intended,” saying its purpose is to “prioritize safety and awareness through rapid human verification.”
No. If you're investigating someone and have existing reason to believe they are armed then this kind of false positive might be prioritizing safety. But in a general surveillance of a public place, IMHO you need to prioritize accuracy since false positives are very bad. This kid was one itchy trigger-pull away from death over nothing - that's not erring on the side of safety. You don't have to catch every criminal by putting everyone under a microscope, you should be catching the blatantly obvious ones at scale though.
The perceived threat of government forces assaulting and potentially killing me for reasons I have no control over: this is the kind of stuff that terminates the social contract. I'd want a new state that protects me from such things.
Systems like this need to report confidence in their assertions.
e.g. Not "this student has a gun" but "this model says the student has a gun with a probability of 60%".
If an AI can't quantify its degree of confidence, it shouldn't be used for this sort of thing.
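As a toy illustration of the difference, an alert could carry the model's score and pick a response tier from it instead of asserting a fact; the field names and thresholds here are made up.

    # Illustrative only: field names and routing thresholds are invented for this sketch.
    def build_alert(frame_id: str, label: str, confidence: float) -> dict:
        """Phrase the alert as a probabilistic claim and choose a response tier from it."""
        if confidence >= 0.95:
            action = "notify_security_with_footage_attached"
        elif confidence >= 0.60:
            action = "queue_for_human_review"
        else:
            action = "log_only"
        return {
            "frame_id": frame_id,
            "claim": f"model estimates P({label}) = {confidence:.2f}",  # not "student has a gun"
            "action": action,
        }

Whether the downstream humans actually read the number instead of reacting to the red box is another matter, as the replies below point out.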
Even better, share the frame(s) that the guess was drawn from with a human for verification before triggering ANYTHING. How much trouble could that possibly be? How many "guns" is this thing detecting in a day across all sites? I doubt more than a couple or we'd have heard about tons of incidents, false positives or not.
I wanna see the frames too.
I don't find that especially good as a sole remedy, because lots of people are stupid. If they see a green outline box overlaid on a video image with the label 'gun', many many people will just respond to the label instead of looking at the underlying image and trying to make a decision. Probability and validation history need to be built into the product so that there are audit logs that can be pored over and challenged. Bad human decision-making, which is rampant, is always smoothed over with justifications like 'I was concerned for everyone's safety', and usually treated in isolation rather than assessed longitudinally.
Doesn’t work. Competition will make them report higher accuracy to make the product look better.
It sounds like the police mistook it as well:
> “They showed me the picture, said that looks like a gun, I said, ‘no, it’s chips.’”
So AI did the initial detection, but police looked at it and agreed. We don't see the image, but it probably did look like a gun because of a weird shadow or something.
Fundamentally, this isn't really any different from a person seeing someone with what looks like a gun and calling the cops, only it turns out the person didn't see it clearly.
The main issue is just that with increased numbers of images, there will be an increase in false positives. Can this be fixed by including multiple images, e.g. from motion of the object, so police (and the AI) can better eliminate false positives before traumatizing some poor teen?
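One cheap way to use motion is to require the detection to persist across a window of consecutive frames before anything is raised; a minimal sketch, with the window size and vote count chosen arbitrarily:

    from collections import deque

    # Illustrative sketch: only escalate when the detection persists across most
    # of a sliding window of recent frames; 15 and 10 are arbitrary choices.
    class PersistenceFilter:
        def __init__(self, window: int = 15, min_hits: int = 10):
            self.history = deque(maxlen=window)
            self.min_hits = min_hits

        def update(self, detected_this_frame: bool) -> bool:
            """Record one frame's result; return True only when detections persist."""
            self.history.append(detected_this_frame)
            return sum(self.history) >= self.min_hits

A weird shadow that reads as "gun" in one frame but not the surrounding ones would never clear a filter like this; a bulge that persists in every frame still might, so it reduces rather than eliminates false positives.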
> So AI did the initial detection, but police looked at it and agreed. We don't see the image, but it probably did look like a gun because of a weird shadow or something.
Not sure I agree. The AI flagging it certainly biased the person doing the manual review toward agreeing with the AI's assessment. I can imagine a scenario where there was no AI involved, just a human watching that same surveillance feed, and (correctly) not seeing anything alarming in it.
Also I expect the AI completely failed at context. I wouldn't be surprised if the full video feed, a few minutes (or even seconds) before the flagged frame, shows the kid crumpling up the empty Doritos bag and stuffing it in his pocket. The AI probably doesn't keep all that context around to use when making a later decision, and giving just the flagged frame of video to the human may have caused them to miss out on important context.
Picture? Images? But those are just frames of footage the cameras have captured! Why would one purposefully use less information to make a decision rather than more?
Just put the full footage in front of an unbiased third party for a multi-stage verification first. The problem space isn't "is that weird shadow in the picture a gun or not?" it's "does the kid in the video have a gun?". It's not hard to figure out the difference between a bag of chips and a gun based on body language. Presumably the kid ate chips out of the bag? Using certain motions that one makes when doing that? Presumably the kids around him all saw the object in his hands and somehow did not react as if it was a gun? Jeez.
Those are a lot of "presumablies". Maybe you're right. Or maybe it was mostly obscured so you really couldn't tell. How do you know it was open and he was eating? How do you know there were other kids around and he wasn't solo? Why do you think the body language would be so different? Nobody is claiming he was using a gun or threatening anyone with it. If you're just carrying something in your hand, I don't know how you could tell what the object is or isn't from body language.
It wasn't open and he wasn't eating. The AI flagged a bulge in his pants pocket, which was the empty, crumpled up bag that he put in his pocket after finishing eating all the chips.
This is quite frankly absurd. The fact that the AI flagged it is bonkers, and the fact that a human doing manual review still believed it was a gun... I mean, just, wow. The level of dangerous incompetence here is staggering.
And I wouldn't be surprised if, minutes (or even seconds) before the video frame the AI flagged, the full video showed the kid finishing the bag and stuffing it in his pocket. AIs suck at context; a human watching the full video would not have made the same mistake. But in mostly taking the human out of the loop, all they had for verification was a single frame of video, captured as a context-free still image.
It is frankly mind-boggling that you or anyone else can defend this crap.
Computer says it looks like a gun.
https://en.wikipedia.org/wiki/Computer_says_no
I can understand the outrage in this thread, but literally none of what you are all calling for will be done. No one from the justice system or law enforcement reads HN to see what should be done. I wish folks here would keep a cooler head rather than posting lengthy rants and vents that call for punishing school staff. It's unprofessional and immature for a community that prides itself on level-headedness to fall constantly into a cycle of vitriol.
Can someone outline a more pragmatic, if not likely, course of what happens next after this? Is it swept under the rug and we move on?
It’s unsurprising, since this kind of classification is only as good as the training data.
And police do this kind of stuff all the time (or in the very least you hear about it a lot if you grew up in a major city).
So if you’re gonna automate broken systems, you’re going to see a lot more of the same.
I’m not sure what the answer is, but I definitely feel that “security” systems like this that are purchased and rolled out need to be highly regulated and coupled with extreme accountability and consequences for false positives.
> “They showed me the picture, said that looks like a gun, I said, ‘no, it’s chips.’”
"Sorry, that's Nacho gun"
Armed and dangerous until proven chips.
The solution is easy. Gun control. We don't feel the need to have AI surveillance on people to detect guns in the rest of the world.
At least there is a check done by humans, in a human way. What if this human check is removed in the future, once AI decisions are deemed to no longer require human inspection?
Isn't that what happened here?
Wouldn’t have thought an AI assessment of a security image would be enough for probable cause.
Very ripe for a lawsuit. I would expect lawyers to be calling daily.
The model seems pretty shitty. Does it only look on a frame-by-frame basis? Literally one second of video context and it would never make that mistake.
The only way we could have foreseen this was immediately.
I thought on first glance the source was from doritos.com
That would have been bold
The "AI mistake" part is a red herring.
The real question is: would this have happened in an upper/middle class school?
The student has dark skin. And is attending a school in a crime ridden neighborhood.
Were it a white student in a low crime neighborhood, would they have approached him with guns drawn?
The AI failure is masking the real problem - bad police behavior.
In 1987, Paul Verhoeven predicted exactly this in the original Robocop.
ED-209 mistakenly identified a young man as armed and blew him away in the corporate boardroom.
The article even included an homage:
“Dick, I’m very disappointed in you.”
“It’s just a small glitch.”
And so begins the ending of the "unfinished fable of the sparrows"
AI is a false (political) wish. It can never work; it is the desperation of an over-extended power structure trying to hold on and permanently consolidate control of the world's population, and nothing else.
The proofs are there.
Philosophers mulled this over long ago and made clear statements as to why AI can't work.
Though not for a second do I misunderstand that it is "all in" for AI, and we all get to go on the 100-trillion-dollar ride to hell.
Can we have truly awesome automation for manufacturing and mundane bureaucratic tasks? Fuck yeah we can!
But anything that requires understanding is forever out of reach, which unfortunately is also lacking in the people pushing this thing now.
I would get my GED at that point. Screw that school.
Hallucinate much?
Robocop.
Edit: And racism. Just watched the video.
With this level of hallucination, cops are going to need tranquilizers more often. If the student had reached for his bag just before the cops arrived, BLM 2.0 would have started.
There are two basic ways AI can be used:
1. To enhance human productivity; or
2. To replace humans.
Companies, particularly in the US, very much want to go with (2) and part of the reason they can is because there are zero consequences for incidents like this.
A couple of examples spring to mind:
1. The UK Post Office (Horizon) scandal, where a faulty accounting system accused subpostmasters of theft, some of whom committed suicide over the allegations. Those allegations were later proven false and it was the system's fault. IMHO the people who signed off on and deployed this should be charged with negligent homicide; and
2. The Hertz case, where people who had returned cars were erroneously flagged as car thieves and reports were made to police. This created hell for people, who would often end up with warrants they had no idea about and would be detained on random traffic stops over a car that was never stolen.
Now these aren't AI but just like the Doritos case here, the principle is the same: companies are trying to replace people with computers. In all cases, a human should be responsible for reviewing any such complaint. In the Hertz case, a human should check to see if the car is actually stolen.
In the Post Office situation, the system needs to show its work. Deployment should be run against the existing accounting system, and discrepancies between the two need to be investigated for bugs until the system is proven correct. Particularly in the early stages, a forensic accountant (if necessary) should verify that funds were actually stolen before filing a criminal complaint.
And if "false positive" criminal complaints are filed, the people who allowed that to happen, if negligent (and we all know they are), should themselves be criminally charged.
We are way too tolerant of black box systems that can result in significant harm or even death to people. Show your work. And make a human put their name and reputation to any output of such systems.
Who knew eating Doritos could make you a millionaire?
I hope this kid gets what he deserves.
What a tragedy. I'm sure racial profiling on behalf of the AI and the police had absolutely nothing to do with it.
All right they’ve gotta have a plain clothes bro go up there make sure the kid is chill. You know the difference between a murder and not can be as little as somebody being nice
Can someone write the novel
“Computer says die”
Wait… AI hallucinated and the police overreacted to a black kid who actually posed no threat?
I thought those two things were impossible?
I would be certainly curious to test ethnicity with this system. Will white students with a bag of Doritos be flagged, or only if they’re black?
Exactly. I wonder if this a purpose-built image-recognition system, or is it a lowest-possible effort generic image model trained on the internet? Classifying a Black high school student holding Doritos as an imminent shooting threat certainly suggests the latter.
It's not the gun detection; the AI is racist, just like its white creators.
we need personal liability for the owners of companies that make things like this
This is what we get instead of reasonable gun control laws.
You're free to (attempt to) amend the Second Amendment, but the Supreme Court of the United States has already affirmed and reaffirmed that individual ownership of firearms in common use is a right.
What do you propose that is "reasonable" given the frameworks established by Heller, McDonald, Caetano, and Bruen?
I can 3D print or mill basically every item you can imagine prohibiting at home: what exactly do laws do in this case?
> the Supreme Court of the United States has already affirmed and reaffirmed that individual ownership of firearms in common use is a right.
The current interpretation of 2A is actually a fairly recent invention; in the past, it's been interpreted much more narrowly. And if SCOTUS can overturn Roe v. Wade's precedent, they can do the same with their interpretation of 2A. They won't of course, at least not until some of its members age out and get -- hopefully -- replaced with people who aren't idiots.
But I'd be fine if 2A was amended away. Let the states make whatever gun laws they want, and we can see whether blue or red states end up with lower levels of gun violence as a result.
> “false positive” but claimed the system “functioned as intended,”
Fuck you.
The core of the issue is that many Americans do carry weapons which means that whatever the security system, it needs to keep in mind that the suspect might be armed and about to start shooting. This makes the police biased towards escalation because the only way against a shooter is to shoot first.
This problem doesn't exist in Europe or Japan because guns aren't that ubiquitous, which means that the police have the time to think before they act, which makes them less likely to escalate and start shooting. Obviously, for Americans, the only solution is to get rid of the gun culture, but this will never happen, so suck it up that AI gets you swatted.
You had me up until the last sentence...
> Obviously, for Americans, the only solution is to get rid of the gun culture, but this will never happen, so suck it up that AI gets you swatted.
...and you are correct.
Everything around us: political tumult and weaponization of the justice system, ICE and other capricious projections of federal authority, the failure of drug prohibition, and on and on and on, points to a very simple solution:
Abolish SWAT teams. Do away with the idea that state employees can be permitted to be more armed than anyone else.
Blaming the so-called 'swatter' (whether it's a human or AI) is really not getting at the root of the problem.
"Omnilert Gun Detect delivers instant gun detection, near-zero false positives".
If it's analyzing 30 frames per second, it's processing 86,400 x 30 ≈ 2.6 million frames per day per camera. So when it causes enormous, unnecessary trauma to one student per week, the company can rightfully claim it has less than a 1 in 10 million false positive rate.
(* see also "how to lie with statistics").
Inflicting trauma on a harmless human in the name of the "safety of others" is never ok. The victim here was not unharmed, but is likely to end up with PTSD and all the mental health issues that come with it.
I hope they sue the police department over this.
America does American things.
the best part of the technocracy is that they're not actually all that good at anything. the second best part is that when their mistakes end in someone dead there will be some way that they're not responsible.
Sad for the student.
Imagine the head scratching going on with execs who are surprised when probabilistic software, used for deterministic purposes, doesn't work as expected, without realizing there's an inherent gap between the two.
I'm sure there will be no head scratching. They already know that this can happen, and don't care, because they know that if someone gets killed because of it, they won't be held responsible. And may not even lose any customers.
> Imagine the head scratching that's going on with execs
I can't. The execs won't care and, in their sadistic way, will probably cheer.
Fair. Only a matter of time until it's big enough that it can't be avoided.
This is only the beginning of AI-hallucinated policing. Not a good start, and I don't think it's going to end well for citizens.
"end well for citizens."
That ship has long sailed buddy.
Yeah ask all those citizens getting “detained” by ICE how it worked out for them.
I was unduly surprised and disappointed when I saw the photo of the kid and he turned out to be black. I would love to believe that this had no impact on how the whole thing played out, but I don't.
If these AI video based gun detectors are not a massive fraud I will eat one.
How on Earth does a person walk with a concealed gun? What does a woman in a skirt with one taped to her thigh walk like? What does a man in a bulky sweatshirt with a pistol on his back walk like? What does a teenager in wide-legged cargo jeans with two pistols and extra magazines walk like?
The brochure linked from TFA has a screenshot of a combination of segmentation and object-recognition models, which are fairly standard in NVRs. A quick skim of the vendor website seems to confirm this[1] and states that they are not analyzing gait.
[1]: https://www.omnilert.com/blog/what-is-visual-gun-detection-t...
The whole idea, even accepting that the core premise is OK to begin with, needs the same kind of analysis applied to it that medical screening tests get: will there be enough false positives, with enough harm caused by them, that this is actually worse than doing nothing? Compared with the likelihood of improving an outcome and how bad a failure to intervene is on average, of course.
Given that there's no relevant screening step here and it's just being applied to everyone who happens to be at a place it's truly incredible that such an analysis would shake out in favor of this tech. The false positive rate would have to be vanishingly tiny, and it's simply not plausible that's true. And that would have to be coupled with a pretty low false negative rate, or you'd need an even lower false positive rate to make up for how little good it's doing even when it's not false-positiving.
So I'm sure that analysis was either deliberately never performed, or was and then was ignored and not publicized. So, yes, it's a fraud.
(There's also the fact that as soon as these are known to be present, they'll have little or no effect on the very worst events involving firearms at schools—shooters would just avoid any scheme that involved loitering around with a firearm where the cameras can see them, and count on starting things very soon after arriving—like, once you factor in second order effects, too, there's just no hope for these standing up to real scrutiny)
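For the screening analogy above, a quick Bayes sketch with made-up but generous numbers shows why: even with very high per-frame specificity, the base rate of an actual gun in frame is so low that essentially every alert is a false positive.

    # Illustrative base-rate arithmetic; these numbers are assumptions chosen
    # to be generous to the vendor, not measured rates.
    sensitivity = 0.95      # P(alert | gun actually present)
    specificity = 0.9999    # P(no alert | no gun present)
    base_rate = 1e-7        # P(gun present) for any given flagged moment

    p_alert = sensitivity * base_rate + (1 - specificity) * (1 - base_rate)
    ppv = sensitivity * base_rate / p_alert
    print(f"P(gun | alert) = {ppv:.4%}")  # well under 1%: almost every alert is false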
The real issue is that they obviously can't detect what's in a backpack or similar large vessel.
Gait analysis is really good these days, but normal, small objects in a bag don't impact your gait.
To be fair, most commercials for Doritos, Skittles, Mentos, etc., if occurring in real life, would result in a strong police response just after they cut away.
How is this not slander? I would absolutely sue the fuck out of this system where it puts people's lives in danger.
> How is this not slander?
Because that's not what slander is.
> Omnilert later admitted the incident was a “false positive” but claimed the system “functioned as intended,” saying its purpose is to “prioritize safety and awareness through rapid human verification.”
It's ok everyone, you're safer now that police are pointing a gun at you, because of a bag of chips ... just to be safe.
/s
Absolutely ridiculous. We're living "computer said you did it, prove otherwise, at gunpoint".
We got our cyberpunk future, except none of it's cool and everything's extremely stupid.
They could at least have thrown in some good music and cute girls with colored hair to make us feel better :(
You get the Grok lady for the latter.
I’ve got great news for you: there are more girls with colored hair than ever before, and we got the Synthwave revival, just try to find the right crowd and put on Timecop1983 in your headphones
Cyberpunk always sucked for all but the corrupt corporate, political and military elites, so it’s all tracking
> there are more girls with colored hair than ever before
The ones I see don't tend to lean cute.
Well the "hackers" jacking in to the "Hacker News" discussion board, where we talk about the oppression brought in by the corrupt AI-peddling corporations employed by the even more corrupt government, probably aren't all looking like Zero Cool, Snake Plissken, Officer K, or the like, though a bunch may be.
Good point. Gets my upvote for mentioning Snake Plissken. Escape from LA is such a masterpiece.
I recommend rewatching the trilogy of Brazil, 12 Monkeys and Zero Theorem.
It's sadly the exact future that we are already starting to live in.
Pretty sure cyberpunk was always this dark.
Dark, yes, but also cool and with a fair amount of competence in play, including among powerful actors, and often lots of competence.
We got dark, but also lame and stupid.
The AI singularity will happen, but with the Mother Brain as a complete moron. It will extinguish humans not through a grand plan for machines to take over, but by making horrible mistakes while trying to make things better.
If any of you had actually paid attention to the source media, you would have noticed that they were explicitly dystopias. They were always clearly and explicitly hell for normal people trying to live life.
Meanwhile, tons of you watched Star Trek and apparently learned(?) that the "bright future" it promised us was... talking computers? And not, you know, post-scarcity and enlightenment that let people focus on things that brought them joy or that they were good at, and the complete elimination of the concept of "capitalism", of personal profit, and of the resource disparity that lets some asshole in the right place at the right time take a percentage cut of the entire economy for personal use while others can't afford anything.
The primary "technology" of star trek was socialism lol.
Oh of course they were dystopias. But at least they were cool and there was a fair amount of competence floating around.
My point is exactly that we got the dystopia but also it's not even a little cool, and it's very stupid. We could have at least gotten the cool dystopia where bad things happen but at least they're part of some kind of sensible longer-term plan. What we got practically breaks suspension of disbelief, it's so damn goofy.
> The primary "technology" of star trek was socialism lol.
Yep. Socialism, and automatic brainwashing chairs. And sending all the oddball non-conformists off to probably die on alien planets, I guess. (The "Original Series" is pretty weird)
Gestapo
The safest thing to do is to pull all Frito Lay products off shelves until the packaging can be redesigned to ensure that AI never confuses them for guns. It's a liability issue. THINK OF THE CHILDREN.
The only thing that can stop a bag guy with Doritos is ...
Let's hope that, thanks to AI, the young man will now have a healthier diet! /s
Feed the same system an image of an Asian kid and it will think the bag of chips is a calculator /s
Or am I kidding? AI is only as good as its training and humans are...not bastions of integrity...
I think it's almost guaranteed that this model has race-related biases, so no, I don't think you're kidding at all. I think it's entirely likely that an Asian (or white) kid of the same build, wearing the same clothes, with a crumpled-up bag of Doritos in his pocket, would not get flagged as having a gun.
Using humans for training guarantees bad outcomes, because humans cannot demonstrate sociality at the same scale as antisociality.
Poor kid, and what an incompetent police department not to use their own judgement ……
But ……
Doritos should definitely use this as an advertisement, Doritos - The only weapon of mass deliciousness, or something like that
And of course pay the kid, so something positive can come out of the experience for him.
The snack you'll only SWAT from my cold dead hands
"Stop resisting...the flavor"
Man. People did not like this bit of thread.