AI, a Watchful Eye: The Less Than Stellar Performance of AI Security and the Consequences Thereof
James Hotham
The use and abuse of widespread camera surveillance is not a novel fear; media has explored the concept for decades. The latest threat, however, has arrived in an unexpected form. It is not an oppressive government, a terrorist group, or a supreme artificial intelligence, but private security providers. Several of these providers have begun to work AI into their security cameras for threat detection.[1] The success of these threat detection models, however, is dubious. In late October of this year, one such system, installed in a Baltimore County school, detected an individual carrying a firearm.[2] Police arrived and identified the suspect as sixteen-year-old Taki Allen.[3] But after the police drew their weapons and handcuffed young Allen, they discovered the “firearm” was actually just an empty bag of Doritos.[4]
Although AI technology of this sophistication is relatively new, it has grown into a multimillion-dollar industry in just a few years. Yet despite that development, mishaps like these still occur. This article will explore how these systems work, why they malfunction, how consumers can avoid these malfunctions, and who may be liable when they occur.
I. How do AI Security Systems Work?
AI security systems use convolutional neural networks (CNNs), an AI learning algorithm that uses “three-dimensional data for image classification and object recognition.”[5] A CNN breaks an image down into its key features and then reforms it, regrouping and recontextualizing each item in reference to everything else in the image.[6] For example, if an image of a house were fed into a CNN, the network would break the image into smaller pieces such as windows, doors, and siding.[7] It could then take each of these individual pieces and reform the image, combining them to reclassify the image as a whole as a house.[8] CNNs trained to detect a certain kind of data become better at identifying it over time, drawing on the images they have already processed to classify a wider set of images faster and more accurately.[9]
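To make that mechanic concrete, the following is a minimal sketch of a CNN image classifier written with the PyTorch library. The layer sizes and the two-class output (“threat” / “no threat”) are illustrative assumptions, not any vendor’s actual architecture.

```python
# A minimal CNN sketch using PyTorch. Layer sizes and the two-class
# output are illustrative assumptions, not a real product's design.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Convolutional layers detect small local features (edges,
        # corners), then progressively recombine them into larger
        # patterns -- the "decompose and reform" process described above.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low-level features
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # higher-level parts
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # The classifier head recombines the detected parts into a
        # single whole-image label.
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)        # extract and pool local features
        x = torch.flatten(x, 1)     # flatten the feature maps
        return self.classifier(x)   # score each class

# One 224x224 RGB frame in, raw scores for each class out.
frame = torch.randn(1, 3, 224, 224)
scores = TinyCNN()(frame)
```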
CNNs detect threats by examining environments in real time through camera feeds. These images are noisier than static images because people are constantly moving and lighting shifts throughout the day, making a smaller object, like a gun or a knife, harder to identify.[10] This noise, the quality of the system’s training data, and the quality of the camera itself can cause false positives or result in a failure to recognize threats entirely.[11] Because these systems can flag false positives, human verification is often a final step to ensure that false alarms are not reported to police.[12] In some systems, the AI forwards the image to the client so that the client can verify the threat’s validity; in others, it routes the image to a security team, where an agent verifies the threat.[13]
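The pipeline just described (score each frame, then queue high-confidence detections for a human rather than dispatching police automatically) can be sketched in a few lines. The threshold value and function names below are hypothetical, not drawn from any provider’s real system.

```python
# Schematic sketch of a detect-then-verify loop. The threshold and
# the score function are hypothetical placeholders, not a real API.
from queue import Queue

CONFIDENCE_THRESHOLD = 0.80  # illustrative value

def process_feed(frames, score_fn, review_queue: Queue) -> None:
    """Queue high-confidence detections for human review instead of
    alerting police directly, so false positives can be caught."""
    for frame in frames:
        confidence = score_fn(frame)      # model's estimated P(weapon)
        if confidence >= CONFIDENCE_THRESHOLD:
            review_queue.put(frame)       # a human verifies before escalation

# Toy usage: three "frames" stand in for real images; the score
# function just echoes a fake confidence value for each.
q = Queue()
process_feed([0.95, 0.30, 0.85], score_fn=lambda f: f, review_queue=q)
print(q.qsize())  # 2 frames queued for human verification
```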
II. The Faults and Risks of AI Security
Although all AI security networks rely on CNNs to detect threats, not all CNNs are created equal.[14] A CNN’s quality is largely a product of its training data; if inadequately trained, a system can produce inconsistent results, leading to missed threats or false alarms.[15] In 2024, one such security provider, Evolv Technologies Holdings, Inc. (Evolv), got into hot water when the FTC filed an action claiming that the company had misrepresented what its AI was capable of.[16]
In its complaint, the FTC alleged that Evolv “made false or unsupported claims about its security screening system.”[17] At the core of the company’s misrepresentations was the AI’s ability to differentiate ordinary items from threats.[18] Evolv’s system is designed to detect threats in a manner similar to a metal detector.[19] It relies on large sensors that use electromagnetic waves to produce an image of metal objects passing by.[20] In theory, when a metallic object similar in shape to a firearm or knife passes through, the system will detect it, identify it, and notify the client of the threat.[21]
However, there are several reports of this system failing to detect clear threats.[22] In one such report, cited in the FTC’s complaint,[23] Evolv’s system permitted a seven-inch knife to pass through security, which was subsequently used to stab a student.[24] In response, school officials increased the sensitivity of the device. But instead of alerting the school to more weapons, the device simply delivered more false alarms on clearly dissimilar items like phones and laptops.[25]
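This is the classic detection trade-off: turning up sensitivity lowers the alert threshold, which catches more genuine weapons but also sweeps in ordinary metal items. A toy illustration, with entirely made-up signal values, shows why more sensitivity can simply mean more false alarms.

```python
# Toy illustration of the sensitivity trade-off. The signal values
# are invented for demonstration, not measured from any device.
items = {"7-inch knife": 0.55, "phone": 0.40, "laptop": 0.45, "keys": 0.20}

for threshold in (0.60, 0.35):  # "normal" vs. "increased" sensitivity
    flagged = [name for name, signal in items.items() if signal >= threshold]
    print(f"threshold={threshold}: flagged {flagged}")

# threshold=0.6:  flagged []                                 -> knife missed
# threshold=0.35: flagged ['7-inch knife', 'phone', 'laptop'] -> false alarms
```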
Evolv’s system is not the only one making mistakes on this scale. In January of this year, Omnilert’s system made an error of the opposite kind: rather than raising a false flag, it failed to identify a firearm that was used in a Nashville school shooting.[26] The reasons given for this failure are mixed.[27] Representatives of Omnilert claim there was no failure at all because the gun was never brandished, making it impossible for the system to detect.[28] School officials, however, claim that the gun was brandished but too far from the camera to be identified.[29]
These systems are volatile and, in some ways, deeply flawed, failing to identify real threats or conjuring them out of thin air. Because the systems are so new, there are no government-imposed quality requirements for marketing them beyond the FTC’s general prohibition on deceptive advertising.[30] So when these systems malfunction, how are we to assess damages or assign blame? On the one hand, a property owner in contract with a security provider will likely have a cause of action for indemnity, assuming they did not contract out of it. But what about when these systems cause harm through a false alarm?
III. Liability as a Result of Security System Malfunctions
Many of these security systems have implemented human verification, so if a principal calls in a report of a potential threat and someone is injured as a result, would the principal be liable for negligence? Let’s return to the case of Taki Allen. Suppose that instead of just aiming firearms at Allen and handcuffing him, the officers got nervous and used what they believed to be appropriate force, ultimately injuring Allen. Who would, or could, be held responsible for the injury?[31]
Because the officers acted in good faith with what they believed to be appropriate force, they would be shielded from liability by qualified immunity.[32] The security provider likely would not be on the hook for negligence, given the lack of a special relationship. That leaves the school, which could be held responsible under a negligence theory. Liability under this theory, however, might be tricky.
To be negligent, the school must have failed to act as a careful, reasonable person would have in the same circumstances. In retrospect, it seems clear that officials should have verified the threat before calling the police, but that conclusion fails to account for the paranoia surrounding gun violence in American schools.[33] When “[e]very second counts when lives are at stake,”[34] would a reasonable person take what may end up being crucial minutes to review an AI screening result before calling the police? I think it would be difficult to convince anyone of that, given the times we live in.
[1] See generally Industries We Serve, Omnilert, https://www.omnilert.com/industries (last visited Nov. 16, 2025); AI Security Cameras and Systems: What to Know, Avigilon, https://www.avigilon.com/blog/ai-security-cameras (last visited Nov. 16, 2025).
[2] Janay Reece, A.I. Gun Detection False Alarm at School has Baltimore County Leaders Calling for Review, CBS News (Oct. 23, 2025), https://www.cbsnews.com/baltimore/news/false-alarm-gun-detection-kenwood-maryland-artificial-intelligence-review/.
[3] Liv McMahon & Imran Rahman-Jones, Armed Police Handcuff Teen After AI Mistakes Crisp Packet for Gun in US, BBC (Oct. 24, 2025), https://www.bbc.com/news/articles/cgjdlx92lylo.
[4] Id.
[5] What are Convolutional Neural Networks?, IBM, https://www.ibm.com/think/topics/convolutional-neural-networks (last visited Nov. 16, 2025); see also Afshine Amidi & Shervine Amidi, Convolutional Neural Networks Cheatsheet, Stanford, https://stanford.edu/~shervine/teaching/cs-230/cheatsheet-convolutional-neural-networks (last visited Nov. 16, 2025).
[6] Amidi, supra note 5.
[7] Id.
[8] Id.
[9] Cole Stryker, What is Training Data?, IBM, https://www.ibm.com/think/topics/training-data (last visited Nov. 16, 2025).
[10] See generally Data-Centric AI v. Model-Centric AI in Gun Detection, Omnilert, https://www.omnilert.com/why-omnilert/data-centric-ai-vs-model-centric-ai (last visited Nov. 16, 2025).
[11] Id.
[12] Real-Time Human Verification, Omnilert, https://www.omnilert.com/solutions/professional-monitoring (last visited Nov. 16, 2025).
[13] Id.; see also Reece, supra note 2.
[14] Amidi, supra note 5.
[15] Id.
[16] Complaint for Injunction and Other Relief at 1, FTC v. Evolv Technologies Holdings Inc., No. 1:24-CV-12940 (D. Mass. 2024), https://www.ftc.gov/system/files/ftc_gov/pdf/EVOLVCOMPLAINTFILED.pdf.
[17] Id.
[18] Id. at 3–4.
[19] Id.
[20] Id.
[21] Id.
[22] Id. at 5.
[23] Id.
[24] Sarah Al-Arshani, A Nearly $4 Million AI-Powered Weapons Scanner Sold to a New York School System Failed to Detect Knives, Business Insider (May 25, 2023), https://www.businessinsider.com/ai-powered-weapons-scanner-new-york-school-failed-detect-knives-2023-5.
[25] Complaint for Injunction and Other Relief, supra note 16, at 1.
[26] Amanda Musa, This AI Technology was Supposed to Detect Guns in School. Here’s What Happened Outside Nashville, CNN (Feb. 1, 2025), https://www.cnn.com/2025/02/01/us/ai-gun-detection-software-antioch-school.
[27] Id.
[28] Id.
[29] Id.
[30] See generally Federal Trade Commission Act, 15 U.S.C. § 52.
[31] See generally McMahon & Rahman-Jones, supra note 3.
[32] See generally Pearson v. Callahan, 555 U.S. 223, 244 (2009).
[33] Alex Leeds Matthews, Amy O’Kruk & Annette Choi, School Shootings in the US: Fast Facts, CNN (Nov. 14, 2025), https://www.cnn.com/us/school-shootings-fast-facts-dg.
[34] Omnilert’s Gun Detection System, Omnilert, https://www.omnilert.com/solutions/gun-detection-system (last visited Nov. 16, 2025).
