AI, a Watchful Eye: The Less-Than-Stellar Performance of AI Security and the Consequences Thereof
James Hotham
The use and abuse of widespread camera surveillance is not a novel fear; media has explored the concept for decades. Yet a new threat has arisen, and it has taken the form not of an oppressive government, a terrorist group, or a supreme artificial intelligence, but of private security providers. Several of these providers have begun to build AI into their security cameras for threat detection.[1] The success of these threat-detection models, however, is dubious. In late October of this year, one such system, installed in a Baltimore school, detected an individual carrying a firearm.[2] Police arrived and identified the suspect as sixteen-year-old Taki Allen.[3] But after the police drew their weapons and handcuffed young Allen, they discovered the “firearm” was actually just an empty bag of Doritos.[4]
Although AI technology of this sophistication is relatively new, it has grown into a multimillion-dollar industry in just a few years. Yet even after years of development, mishaps like this still occur. This article will explore how these systems work, why they malfunction, how consumers can avoid these failures, and where liability may fall when they happen.
