Algorithms underpin almost every aspect of modern life. They perform calculations faster than the human mind can, they process quantities of data so vast our eyes would glaze over, and increasingly they can automate almost anything. Whether or not you like it, algorithms are everywhere.
As computers and machines continue to infiltrate every sphere of life, we must step back and ask whether we are ready for this ongoing digital transformation. Are policymakers aware of how much more is possible today than even a decade ago, and have they adjusted legislation and expectations accordingly? Or are we embracing digitization too fast, without concern for the impact it may have on future generations?
A Brief History of Algorithms
To evaluate this, let's start at the beginning of the story. Algorithms, the key instructions that power most technology today, have in essence been around since the Babylonians around 300 BC. Although not known by that name, every civilization has needed strategies to control and organize its operations. Given the definition of an algorithm as 'a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer', the concept stretches back to the Babylonians, who devised marking schemes to keep track of cattle and grain stocks. From there, different societies developed numerical systems and the abacus, and the Persian mathematician Abū ʿAbdallāh Muḥammad ibn Mūsā al-Khwārizmī (c. 850 AD), from whose name the word 'algorithm' derives, brought algebra, variables, and decimals to the world. Now, in the 21st century, these systems continue to grow in complexity, forming the building blocks from which computers can be given rules to execute an ever-wider range of tasks.
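To make that definition concrete, here is a minimal Python sketch of one of the oldest algorithms still in everyday use, Euclid's method for finding the greatest common divisor. It isn't mentioned above, but it's a textbook instance of 'a set of rules to be followed in calculations':

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a finite set of rules that, followed
    mechanically, always yields the greatest common divisor."""
    while b != 0:
        a, b = b, a % b  # repeatedly replace (a, b) with (b, a mod b)
    return a

print(gcd(48, 36))  # -> 12
```

No judgement, no context, no intelligence: just rules applied until a stopping condition is met. Everything discussed below, however sophisticated, is ultimately built from steps like these.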
Algorithms Today, AI Tomorrow – Are We Ready?
At present, algorithmic use is being pushed to its limits as software developers and technologists attempt to achieve what we once thought impossible. With research and development (R&D) spending in areas like quantum computing and AI becoming increasingly important to the West and China, it's disappointing to see a lack of preparedness in laws and regulations for when these technologies are realised. The UK, for example, doesn't even have a statutory definition of AI, let alone any actionable public policy proposals for its future. Similarly, in the US there is no federal regulation of AI at all, only promises that it is 'on the horizon'. For two global powers to rely on existing tangential policies, such as data protection, consumer protection, and market competition laws, to keep AI developments in check seems naïve and shockingly unconcerned about the negative consequences unchecked AI use could have.
While still far from enough, the EU at least came together in April 2021 to draft policy proposals. Aiming to provide a framework that protects vulnerable individuals and industries, its rules are planned to take effect in 2024 at the earliest. For any nation to be considered a front-runner, thought leader, or superpower (call it what you want) where AI and technology are concerned, it will surely also need to set an example with accompanying legislation, especially as Big Tech perhaps lacks the motivation to do so itself (as we'll discuss).
AI and Inequality – A Very Real Consequence
If you think about it, technology increases inequality. Ever since the earliest industrial revolutions, specialization and the division of labour have meant that people hold narrow skillsets, which makes their jobs easier to automate. The richest and most able benefit from being able to invest in and own emerging assets, and society's inherently capitalist structure allows them to profit by exploiting the middle classes. Technology facilitates this. AI will cause millions of redundancies in the near future; it will completely redefine working norms and the jobs and skills we consider typical. Those with IT training or coding abilities will thrive, while those who might otherwise have made great accountants will soon be replaced. How are policymakers protecting these people? We can't expect Big Tech to, as they're the ones innovating these technologies and encouraging their adoption en masse.
Not only this, but the fundamental tools which facilitate AI could themselves be propagating other, less obvious issues. Just two years ago, Timnit Gebru, a prominent AI researcher at Google, was fired after refusing to retract a paper or remove her name from it. There was nothing especially controversial about the paper itself; it pointed out bias in language AI and potential issues with scaling such models ever larger.
Echoing this warning, a 2020 MIT Technology Review investigation into policing algorithms found that 'lack of transparency and biased data training mean these tools are not fit for purpose.'
But what do I mean by this?
AI, Algorithms and Racism
Racism, inequality, and abuses of power have plagued many societies for centuries. In the physical world, this plays out as unlawful stop-and-searches, or even arrests, conducted by police. Discrimination and harmful stereotypes about Black communities are something many of us are trying to fight against today, a recent example being the murder of George Floyd and the wave of Black Lives Matter protests that followed. But what good are all these efforts when computers, AI, and everything in between could be reinforcing the very thing we are trying to eradicate?
In the aforementioned MIT Technology Review piece, Yeshimabeit Milner speaks about her experiences as an African American in Miami. Having witnessed police abusing their power firsthand, she became involved in data-based activism, trying to get to the bottom of the flaws within the very justice system meant to protect her and her peers. She found that predictive policing tools and the misuse of data were a significant part of the problem, and that location-based and person-based algorithms in particular are biased against minority communities. It's likely they are biased in this way because unintentional human prejudices have found their way into the algorithms' rules.
Nonetheless, this isn’t a human error per se, not at its core at least (and not if we do something about it). The issue is that the algorithms underpinning the recommendations police receive, use data that continues to perpetuate racism, but also that we see algorithms as objective when they aren’t necessarily. Predictive algorithms are easily influenced by arrest rates, and according to US Department of Justice statistics, you are twice as likely to be arrested if you are Black, than White. Therefore, you can see how it becomes a self-fulfilling prophecy whereby prejudiced police make a disproportionately high number of black arrests, and that data feeds back into the algorithm, which suggests an even higher likelihood of another Black person in a certain community committing a crime. In this scenario, you may think that defunding the police is the solution – they have too many resources, right? Wrong. They have been defunded, and it’s because of this that algorithms and predictive policing tools are so widespread in America; they’re cheaper than human labour.
AI, Algorithms and Disability
While racism is one form of social exclusion, those with disabilities have similar yet distinct experiences. Did you know that people with a disability are more than twice as likely to be unemployed as the average person? There is also a 'qualifications gap' between disabled and non-disabled people, largely owing to the other disadvantages and stigmas they may have faced in educational and training environments. These outcomes are influenced by a wide range of factors, of course, but where AI and algorithms are concerned, the recruitment industry has a lot to answer for.
With candidate numbers increasing, and competitive firms having to sift through ever-growing volumes of applicant information, it's understandable that many turn to AI to help screen all the data. However, a 2020 research paper from the Institute for Ethical AI found that Applicant Tracking Systems (ATSs), which use sets of algorithms to narrow down applicants, 'interpret complexity as an abnormality, or outlier'. As a result, those who don't conform to what the algorithms deem 'normal' (e.g. having a disability, being of a minority ethnicity, or not being cisgender) are disregarded at the first hurdle.
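As a hypothetical illustration of that failure mode (a sketch with invented candidates and thresholds, not any real ATS), consider a screener that rejects whatever looks statistically unusual:

```python
from statistics import mean, stdev

# Feature being screened: months of employment gaps. Disabled candidates
# often have larger gaps for reasons unrelated to ability.
candidates = {
    "Asha": 2, "Ben": 0, "Chloe": 3, "Dev": 1, "Elena": 2,
    "Farid": 18,  # e.g. gaps from managing a long-term health condition
}

gaps = list(candidates.values())
mu, sigma = mean(gaps), stdev(gaps)

for name, gap in candidates.items():
    z = (gap - mu) / sigma
    # Rejecting anyone more than ~1.5 standard deviations from the mean
    # filters out atypical CVs regardless of actual qualification.
    status = "rejected as outlier" if abs(z) > 1.5 else "advanced"
    print(f"{name}: gap={gap} months, z={z:.1f} -> {status}")
```

The rule knows nothing about ability; it simply equates being atypical with being unsuitable, which is exactly the 'complexity as abnormality' problem the paper describes.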
Our own human biases can also be introduced into ATS screening rules unintentionally. For example, setting algorithms to favour those who went to a prestigious university unconsciously discriminates against those with disabilities (or from lower socioeconomic backgrounds) who may have worked equally hard and be equally qualified, but faced systemic barriers along the way.
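A minimal sketch of that proxy effect (all names, scores, and weights here are hypothetical):

```python
# A seemingly neutral scoring rule that rewards university prestige.
PRESTIGE_BONUS = {"prestigious": 10, "other": 0}

def score(candidate: dict) -> int:
    # Identical skills, different proxy feature, different ranking.
    return candidate["skills"] + PRESTIGE_BONUS[candidate["university"]]

a = {"skills": 80, "university": "prestigious"}
b = {"skills": 80, "university": "other"}  # faced barriers to entry

print(score(a), score(b))  # 90 vs 80: equally able, ranked unequally
```

No one wrote 'discriminate' anywhere, yet the outcome discriminates, because the prestige feature stands in for advantages that are unevenly distributed.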
Why You Need to Consider and Prioritize Equity in Technology
Those in charge, those with power, are meant to protect us. Tech leaders and policymakers are supposed to be forward-looking, anticipating how to mitigate the bad while maximizing the good that comes with technological improvements. The police as an institution are widely considered inherently racist (whether or not that is true), which has fuelled ACAB and similar protests. With AI, we have a chance to right these wrongs before they become widespread and irreversible, and it would be a huge failing if we knowingly let algorithms perpetuate biases and prejudices we don't accept in 2022.
It isn't all bad, either. AI has massive potential to be a force for good, especially where disability is concerned, and this is why it's so important to get it right. Technology can certainly help make the world more inclusive and accessible, but only if we actively ensure our own biases aren't written into its core commands. AI can help unlock the potential of people with disabilities in the workplace, but that's no good if they can't get the job in the first place because an AI-scored interview penalizes them for not maintaining eye contact, or fails to understand their voice or speech.
The Takeaway
As AI and algorithms become ever more present in our lives, it is crucial, as IBM says, that we teach them to 'uphold society's moral and legal obligations to treat all people fairly, especially with respect to protected groups that have historically experienced discrimination. Biased human attitudes and wrong assumptions can lead to unfair treatment for people with disabilities in the world today.' Despite the risks, these technologies offer a huge opportunity to improve these realities.
This article was contributed by Zoë Balroop. Zoë is a penultimate-year student at the University of Warwick, studying Philosophy, Politics, and Economics, and a current Tech and Innovation Researcher at Warwick Think Tank. Topics she has written on include AI, algorithms, and human gene editing in relation to inequality, as well as the international power dynamics of the quantum supremacy race. She has always had a keen interest in technology and is interning at three tech startups alongside her degree: Algomo, The HR TECH Partnership, and Mayday. She hopes to continue expanding her knowledge in these areas.