Michael Brennan | November 2, 2020

Let’s get it out there at the outset: there is nothing artificial about Artificial Intelligence. It is a human artefact and, as such, subject to the faults and failings (aka biases and prejudices) of its human developers. If we hold this single truth in mind when thinking about all things AI, we may well end up in a much better place.

This post won’t go into the technical aspects of machine learning, including the role of historic datasets and proxy data, but you can find out much more at Wednesday’s Profusion webinar. What we aim to do here is to set the scene for that discussion and stimulate some thoughts on the wider context. 

Diversity needed – demographic, social and cognitive 

Much has been said about the lack of diversity in Silicon Valley, the epicentre of (Western) digital hegemony. The numbers are stark in terms of gender, ethnicity, and impairments. Without going into the detail, there is a vast over-representation of white (middle class) males at the heart of the industry. To compound the issue, a very narrow philosophy (broadly libertarian, certainly individualistic and free-market) has dominated tech thinking in the Valley over the last 25 years.

Combining the demographics and the philosophy creates a very narrow, closed, self-reinforcing approach to data and digital innovation. This is reflected in the development and missteps of the digital platforms that now dominate so much of our lives.

All of which provides just a small flavour of the context behind a discussion of algorithmic fairness and data ethics today.  

Data is not neutral 

Arguably it was Cathy O’Neil who first brought this topic to a wider, non-technical audience with her 2016 book Weapons of Math Destruction, while 2018 saw the publication of Algorithms of Oppression by Safiya Umoja Noble – proving that the whistle-blowers have all the best titles!

As the sub-title – How Search Engines Reinforce Racism – of the latter suggests, the content is very much rooted in Silicon Valley, and it extends far beyond the individual experience of specific algorithms or the personal consequences of automated decision making.

The book effectively blows the lid off the idea of algorithms being neutral, unbiased and objective, while emphasising that technology and the internet don’t simply reflect or mirror our societies; they actively shape and change our understanding, according to their own internal logic.

And that logic, at least since the Dot Com Crash, has been driven by commercial interest, something the founders of Google were very aware of at the outset, publicly arguing against an advertiser-funded model because it would inevitably distort priorities and principles. Then the crash hit, and they needed to make a quick buck to stay afloat.

But how right they were! An obvious example is Facebook’s algorithms: designed to maximise user engagement, they have ended up fuelling a toxic mix of hate speech, fake news, conspiracy theories and much more. Unintended consequences? Almost certainly. Entirely unforeseeable? No.

Legislation may play a role 

It’s interesting to note that a bill – the Algorithmic Accountability Act – first published in the spring of 2019, is currently working its way through the United States legislature.

No one is completely satisfied with its recommendations and implications (of course) but the fact it exists at all is a major indicator of political concern – along with the many other enquiries and investigations into Silicon Valley from across the USA and the EU. 

Equally, we should note that the GDPR, and by extension the UK Data Protection Act (2018), includes the right to object to automated decision making (while the ICO is also progressing wider work on AI). In fact, the US legislation borrows significantly from the language of the GDPR, with its emphasis on DPIAs (Data Protection Impact Assessments).

Public services in the front line 

Explaining the rationale for the bill, its proposers cited two (then recent) cases of algorithmic unfairness: one saw Facebook accused of breaking the US Fair Housing Act by enabling discriminatory ad targeting; the second involved Amazon’s recruitment tooling and revelations of a sexist automated screening process.

But the greatest concerns about algorithmic bias and fairness have been expressed in relation to the use of AI and automated decision making in the public sector, with Cathy O’Neil providing a great selection of examples, from law enforcement and prisons to education and housing.

In many respects we can see public agencies and government departments as the guinea pigs in the evolution of AI systems. Across the world, and certainly in the UK, austerity-depleted public service teams are grappling with ever-increasing demands and scarce resources.

In such circumstances the promise of data, machine learning and AI to streamline and accelerate decision making processes can be hard to resist – and hard to evaluate effectively.  

Equally, politicians can find the idea of an outsourced, automated, objective decision-making function irresistible. After all, it’s a fantastic way to avoid accountability for contentious decisions – little wonder then that the UK Government is hoping to make house building policy algorithmically driven!

The algorithm says no 

One of the biggest frustrations with the way things are working today is when an individual or organisation is unable to explain why the algorithm has said no. 

This goes to the heart of the matter in terms of transparency, accountability, and ultimately redress for those affected – and again this is an aspect of the GDPR approach with its focus on the transparent and consensual use of personal data. 

Therefore, Explainable AI will become an ever more important feature of the landscape; it may even mean that our data scientists have to sacrifice a degree of accuracy in favour of the greater value attributed to transparency and openness.
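To make that trade-off concrete, here is a minimal sketch in Python, assuming scikit-learn and an entirely synthetic dataset (the "loan decision" framing, feature counts and model choices are illustrative stand-ins, not anything from the post). It compares a shallow decision tree, whose rules could be read out to the person affected, with a boosted ensemble that will typically score a little higher but is far harder to explain.

```python
# Illustrative sketch only: synthetic data, hypothetical "approve/decline" task.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score

# Synthetic binary decision data: 2,000 cases, 20 features.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An explainable model: a shallow tree whose decision rules can be inspected
# and communicated to the individual the algorithm says no to.
explainable = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

# A less transparent model: a boosted ensemble, usually more accurate
# but much harder to account for case by case.
black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

print("shallow tree accuracy:   ", accuracy_score(y_test, explainable.predict(X_test)))
print("boosted ensemble accuracy:", accuracy_score(y_test, black_box.predict(X_test)))
```

On most runs the ensemble edges ahead on accuracy, and that small gap is precisely what an organisation may choose to give up in exchange for being able to explain why the algorithm said no.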

Looking ahead 

As I have said (far too often), data is too important to be left to the data team alone. As such, it is vital that there are processes in place to evaluate the risks inherent in any project – one of the reasons that Profusion is launching its own Data Ethics Advisory Board next month. This is the ideal forum to address questions of bias, fairness and unintended outcomes in advance of any new project, and it is a great complement to Data Protection Impact Assessments, for example.

Within the data team itself there needs to be a real commitment to diversity across the parameters outlined above, together with a far greater appreciation of the tangible implications of its work. Data science does not exist in a vacuum; its outputs can have profound consequences for individuals and communities, and they need to be understood as such.

More broadly, we would hope that the discussions stimulated by the rise of AI, machine learning and automated decision making are extended across the full range of human and institutional decision making – after all, our biases and prejudices have been around a whole lot longer than our algorithms!

Michael Brennan  
