We sat down with Ryan Fairclough, Herta’s APAC Sales Manager, to talk ethics within the physical security sector.
How do you see the security sector nowadays?
The Physical Security Industry is a fast-moving technical beast. The availability of new technology, coupled with the human urge to feel safe in our environment, has made it one of the fastest-growing industries in the world. Whether it be IP Video Surveillance, Access Control or any other range of products, most people have – knowingly or unknowingly – interacted with some part of this technology on any given day.
How do you expect the security sector will evolve?
Having been involved in this industry for a number of years, I have watched the evolution proceed at a steady but constant pace across all facets of the technology. There were of course early outliers, such as biometric fingerprint access, but they truly lacked any widespread adoption in the wider industry. For the most part, the evolution seemed to come more from the management side of the product. Consider the need for a more viable VMS, or [at one time] the never-ending race to launch an IP Video camera with more Megapixels than your competitor, only for real-world deployments to run at 1.3MP anyway!
In my opinion, no part of this technology evolution (revolution, maybe!) has been as rapid or as in demand as the Video Analytics space. It seems that almost overnight people moved from understanding AI as something out of an Arnold Schwarzenegger film to demanding its application in their own requirements.
What was once a famous scene in a Tom Cruise movie – our action hero, in some distant future, identified, welcomed and offered products for sale based on his eyes alone – is now a reality of sorts. Consider the time it took to move beyond Coax cable and discussions about TVL lines to the full adoption of Cat5 cables and IP addresses within cameras. By comparison, the adoption of and demand for Video Analytics has been almost Usain Bolt-like in its pace to market.
What about ethics in this sector?
Obviously, with this rapid approach to market there has been a range of ethical issues around the use and application of such technologies. The current geo-political state of the world, and the almost parallel rise of advanced AI, invite comparisons with George Orwell’s magnificent dystopian novel “1984”. And that was even before we entered the now historic year of 2020, when the whole world was turned on its head by an unseen force unlike anything in living memory: COVID-19.
But are these comparisons fair?
In other words, will the use of Analytics Technology, including Facial Recognition Technology, send us hurtling towards a point where George Orwell’s aforementioned musings [first published in 1949] become some sort of Nostradamus-like prophecy?
In my opinion, the answer is “No”, and ultimately my faith in our own industry, and in humanity at large, is at the core of my reasoning.
There have been some very high-profile cases recently of alleged misuse of Security technology, and one country, rightly or wrongly, regularly comes under pressure for some of its deployments and reasonings. Whilst I am not here to debate the actual reality of these allegations, they do prove that unethical uses of Surveillance equipment, and in particular smarter AI and Facial Recognition products, are a genuine concern.
But in noting these allegations and people’s distaste for such uses, most notably ethnic profiling and “social currency”, I think we reach the realisation that self-monitoring and industry concern will, in the long term, outweigh unethical applications.
The simple fact that these questions are being raised at such an embryonic phase of Technology deployment lends credence to the theory that ethical use is at the forefront for the vast majority of our industry.
I do however believe that it is incumbent on us as the Vendors and Integrators of these products to ensure that we are deploying them for the benefit of the future, and not for any nefarious purpose.
How do you deal with this in Australia?
In Australia we have a quaint conversational tool called “The Pub Test” for settling basic disagreements. At its simplest, it asks: if you were to find an average person in an average bar, stop to talk with them, and present all the facts of your application and use case, would that person agree with your use or not? Would they feel comfortable being part of such a use case?
What would the average person with little to no knowledge of our industry believe to be right in the circumstances?
We have an obligation to ensure we are passing “The Pub Test”, and that our applications of Security Technology and Analytics come from a place of future prosperity for all, as opposed to oppression or unsafe purposes. I believe our industry as a whole is more than capable of achieving such a utopian ideal, and that we can lay to rest the excellent writings of George Orwell as a work of pure fiction.