
AI is only too human at times

Posted on 7th March 2022

Did you know that robots can be racist? Not in the ways we usually think of as racist (they don’t sit around at night seeking out extreme-right chat rooms - or at least I hope not), but they do, sadly, reflect some of the biases, both conscious and unconscious, of those who program them. Ironically, given Silicon Valley’s well-publicised wokeness and right-on ‘progressive’ values, much of the problem stems from the tech industry itself. It’s some of the people who make and program the computers who are at fault.

We have written in the past on the Be-IT blog about the problems of FRT (facial recognition technology) and how it struggles to distinguish differently-coloured faces, with a bias against non-white ones. FRT has also wrongly identified members of the US Congress as criminals. Then there is what has correctly been described as "the worst miscarriage of justice in recent British legal history", in which the government-owned Post Office installed a new IT system in 1999, called Horizon, built by the Japanese company Fujitsu to perform functions like stock-taking, accounting and transactions. Very soon, sub-postmasters (who ran smaller post offices) identified faults in the system, but rather than investigate these, the Post Office prosecuted 736 sub-postmasters, some of whom were convicted of false accounting and theft and sent to prison. It took until 2019 before the Post Office admitted its mistakes and settled the claims made by those wrongly convicted or otherwise affected. For some, it was too late: they had taken their own lives.

Or how about Amazon, which prioritised areas with a high concentration of Prime members when it expanded its free same-day deliveries? These areas tended to be predominantly white, with predominantly black neighbourhoods consequently being excluded from the service.

Then there is Facebook, with its achingly right-on owner, which tells us that it has been working to improve the fairness of its AI, and that it requires advertisers to certify they understand its policies prohibiting discriminatory practices. A Facebook spokesman said: "We stand against discrimination in any form and we are committed to studying algorithmic fairness. We recently expanded limitations on targeting options for job, housing and credit ads to the UK."

To test this, a member of our marketing team went to boost a job posting on Facebook. Among the options for advertising is the ability to target only men or only women (shown here). As you can see, you don’t even need algorithms to discriminate…

The UK Competition and Markets Authority (CMA) is currently scrutinising the tech giants' algorithms to see if they are unfairly targeting certain groups. The CMA has been researching this for more than a year and is due to publish its findings in the next few months. It can’t come soon enough.

Matt Druce, Client Delivery Director, Be-IT Projects

Posted in AI, Opinion

