Facing up to our future?

Posted on 15th February 2021

We’ve written a lot in the past about facial recognition technology, which, I was surprised to discover, began back in 1964, before anyone currently working at Be-IT was born! On the one hand, as enthusiastic technophiles, we love the concept and how it is designed and delivered by its proponents. On the other hand, in a world where authoritarianism is visibly on the rise, the risks to our personal freedom are all too obvious. Consider this, from the website of a company that is “a global leader in the development of intelligent image processing systems”:

“As a division of our parent company, it’s in our DNA as solution providers to drive engineering standards and exceed customer expectations. 

“In combination with any IP camera, (our) system evaluates image data and not only recognizes people, but also reliably determines their age and gender. Thanks to state-of-the-art AI analysis, the system enables complex evaluations in real time and meets all data protection requirements.”

Now this particular company is, I am sure, an ethical and honest business that does indeed conform to all the current legal requirements around privacy and data protection. However, I then read about a study from MIT in the US that examined over 130 facial recognition data sets compiled over 43 years. Its disturbing finding is that, over this time, those compiling these data sets gradually stopped asking for people’s consent. Increasingly, this means that people’s personal photos are being used in surveillance systems without their knowledge or consent.

The study reported by MIT identifies four major eras of facial recognition, as shown in the chart from the MIT article. Much of the second period was driven by the US military’s interest in facial recognition, but a key date was 2007, when “the release of the Labeled Faces in the Wild (LFW) data set opened the floodgates to data collection through web search. Researchers began downloading images directly from Google, Flickr, and Yahoo without concern for consent. LFW also relaxed standards around the inclusion of minors, using photos found with search terms like ‘baby,’ ‘juvenile,’ and ‘teen’ to increase diversity.”

Four eras of Facial Recognition

Since then, Facebook has applied its technological expertise to this area and, according to the authors of the study, “This is when manual verification and labelling became nearly impossible as data sets grew to tens of millions of photos…”

This led to a situation in which AI started to do more harmful things with these stored images. For example, an AI “saw” a cropped head and shoulders photo of US politician Alexandria Ocasio-Cortez and “autocompleted” her in a bikini. Strangely, or perhaps not, when the same AI autocompletes a head and shoulders shot of a man, it tends to “dress” him in a business suit. There are many other, far worse, examples because “the enormous datasets compiled to feed these data-hungry algorithms capture everything on the internet. And the internet has an overrepresentation of scantily clad women and other often harmful stereotypes.”

Put simply, there are many benefits to the development of such technologies, but they come at a cost. And, at a time when pressure is being ramped up on tech companies, especially social media firms, it’s a cost that the IT world has to quantify openly and honestly if it is to continue to have the trust of its users and customers.

Scott Bentley, Be-IT

Posted in Opinion

