August 2, 2019
We’ve all seen it in the latest blockbuster – the security guard taken advantage of while sleeping on the job. The well-executed heist where masked bank robbers tap into the surveillance cameras to gain access to the vault. In these circumstances, the security guards don’t stand a chance – outmatched by more sophisticated masterminds armed with the latest technology. That is about to change. Artificial Intelligence (AI), whether it’s in the form of rules-based machine learning or behavioral analytics, will soon begin to revolutionize the security industry.
Case in point: Knightscope, a Mountain View, California, startup, is developing a fleet of crime-fighting robots called the K5. Resembling a taller (5 feet), more robust (300 lbs) version of R2-D2, the K5 is armed with highly sophisticated predictive surveillance analytics capable of detecting suspicious activity. Designed to assist security officers in corporate environments, malls, hospitals, and similar settings, the K5 uses an arsenal of sensors not only to capture criminal activity in real time but to report it to authorities instantly.
In Osaka, Japan, 46 security cameras armed with the latest AI software are programmed to “automatically search for signs of intoxication in passengers at the Kyobashi train station,” according to Business Insider. Whether it’s people stumbling dangerously close to the tracks or sleeping on benches, the cameras can seamlessly alert attendants. With intoxication a factor in close to 60% of the 221 people hit by trains in a West Japan Railway study, relying on surveillance analytics may be a welcome alternative to the possibility of human error.
With the human attention span during monotonous tasks limited to about five minutes and turnover running at 400%, the security industry is in desperate need of a sophisticated watchdog. Relying on a series of flow-chart-like algorithms, surveillance cameras powered by AI can compare objects against hundreds of thousands of stored images. The K5 security robot, for example, has automatic license plate recognition (ALPR) that can scan up to 1,500 license plates per minute, and according to the company’s website, “it notifies the authorities if it scans a license plate registered to a suspected criminal.”
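At its core, the watchlist check the K5’s ALPR performs can be pictured as a simple lookup: every scanned plate is compared against a stored list of flagged plates, and a match triggers an alert. The sketch below is an illustrative approximation only; the plate numbers, function names, and normalization step are invented, not Knightscope’s actual system.

```python
# Illustrative watchlist matching: scanned plates are normalized and
# checked against a set of flagged plates. All values are made up.

WATCHLIST = {"7ABC123", "4XYZ789"}  # hypothetical plates flagged by authorities

def process_scans(scanned_plates, watchlist):
    """Return the scanned plates that appear on the watchlist."""
    alerts = []
    for plate in scanned_plates:
        # Normalize whitespace and case before comparing, since OCR
        # output from a camera is rarely clean.
        normalized = plate.strip().upper()
        if normalized in watchlist:
            alerts.append(normalized)
    return alerts

hits = process_scans(["7abc123 ", "5DEF456"], WATCHLIST)
print(hits)  # ['7ABC123']
```

Using a set for the watchlist keeps each membership test constant-time, which matters when scanning on the order of 1,500 plates per minute.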
Knightscope co-founder and former law enforcement officer Stacey Dean Stephens and his fellow co-founder William Li designed the K5 “in response to the tragic events at Sandy Hook and the Boston Marathon,” according to the company’s website. The founders believed that with a “unique combination of hardware and software,” they could reduce crime by as much as 50%.
AI transforms video surveillance into a highly intuitive activity. From intoxication to trespassing, companies like Behavioral Recognition Systems (BRS Labs) are using machine learning to monitor for criminal behavior. The company’s AISight system (pronounced “eyesight”), for example, was installed in Boston after the 2013 Marathon bombings. Designed to alert authorities in real time, it uses the latest video analytics to recognize criminal behavior before it takes place. “We are recognizing a precursor pattern that may be associated with a crime that happens,” Wesley Cobb, chief science officer at BRS Labs, said in an interview with Bloomberg.
From privacy concerns to technology that is still far from perfected, AI-powered video surveillance will need to go through some growing pains. AI has yet to be perfected for three reasons: first, it requires enormous amounts of ever-changing, hard-to-get data; second, it is unable to multi-task; and third, there needs to be more insight into how these systems reach their conclusions.
According to Neil Lawrence, professor of machine learning at the University of Sheffield and a member of Amazon’s AI team, “these systems don’t just require more information than humans to understand concepts or recognize features, they require hundreds of thousands of times more.” Lawrence notes that tech giants like Google, Facebook, and Microsoft are ideally positioned for AI: “They have abundant data and so can afford to run efficient machine learning systems.”
The second major problem, according to Google DeepMind research scientist Raia Hadsell, is AI’s inability to multi-task. Comparing today’s systems to an idiot savant, Hadsell said, “there is no neural network in the world, and no method right now that can be trained to identify objects and images, play Space Invaders, and listen to music.”
Finally, there needs to be greater insight into how AI reaches its conclusions. In other words, instead of being spoon-fed the meaning of objects, systems need to interpret their surroundings on their own. According to Murray Shanahan, professor of cognitive robotics at Imperial College London, “what goes on in the mind can be reduced to basic logic, where the world is defined by a complex dictionary of symbols. By combining these symbols – which represent actions, events, objects, etc. – you basically synthesize thinking.”
Video surveillance allows authorities to detect the presence of a gun inside a school building or monitor suspicious onlookers before the start of a marathon. But there are some major “Big Brother” drawbacks to software designed to watch people. Stuart Russell, AI researcher at the University of California, Berkeley, and co-author of “Artificial Intelligence: A Modern Approach,” thinks intelligent “watching” programs will likely freak people out more than a human monitor does, even though most people would reasonably expect they are being watched when they encounter surveillance cameras.