
Artificial Intelligence Is Dangerous For Disabled People At Work: 4 Takeaways For Developers And Buyers

By Dr Nancy Doyle
11th October 2022

Artificial Intelligence (AI)-powered HR technology threatens the life chances of hundreds of millions of people with disabilities worldwide, as well as those of us who will become disabled in time.

Eight of the 10 largest private U.S. employers use AI to track the productivity metrics of individual workers, many in real time, according to The New York Times. Yet even the NYT, in analysing the risks to employees from disadvantaged backgrounds in low-paid jobs, failed to reference the threat to the millions of people in work whose interaction with the screen and camera is affected by their disability and is therefore ‘non-standard’. While employers say “derelict workers” can be rooted out and industrious ones rewarded, no reference is made to how these productivity metrics should be adjusted to ensure disabled employees are assessed fairly. Susan Scott-Parker, a veteran disability inclusion campaigner and founder of the campaign Disability Ethical AI, asks:

“What if the reason I do not hit my key strike targets is my employer’s failure to give me a one-handed keyboard?”

Surely, we already have the legal statutes required to challenge these inequities? The UN Convention on the Rights of Persons with Disabilities, and most regulators, state that the obligation to make accommodations is triggered when the person makes the request. Yet time and time again, automated recruitment processes (including those that deploy AI) either do not permit the candidate to request adjustments, make it extremely difficult to do so, or make lodging the request stigmatising and embarrassing. Developers of AI HR technology are not incentivised to prove their products are safe for disabled job seekers or employees, or indeed for anyone. Many providers state unequivocally that it is employers (not the AI software developer) who are legally (and, they imply, ethically) liable should a disabled job seeker complain they have been discriminated against.

The burden remains on individual disabled job seekers to prove they have been treated badly by an algorithm, an algorithm they did not even know was there. We all know job seekers rarely have the time or motivation to litigate when turned down for a job: we are a long way from the threat of legal risk genuinely influencing HR. So, if not litigation, what could possibly persuade employers that the risk this technology poses to the life chances of hundreds of millions (never mind the risk of losing out in the war for talent) outweighs the cost benefits to their recruiters?

Biased and Flawed

HR’s problem is much more than just the bias in the data – the science that developers claim underpins these tools all too often does not stand up to scrutiny. Recent research from New York University revealed that two tools used to predict your personality by analysing the words in your resume and your LinkedIn profile were, quite simply, not valid. As it turns out, your measured personality changes noticeably depending on whether these tools rely on your resume or your LinkedIn profile.

Neither buyers nor AI developers understand the difference between the inevitable disability bias in the data and the discrimination triggered when the AI tool, which does not itself make adjustments, is dropped into a standardised, rigid e-process that by definition disadvantages candidates who require reasonable adjustments in order to compete on an equal basis. In other words, “AI is the new flight of stairs” for the disabled candidate.

All big predictive data sets rely on the notion of the statistical norm, the familiar bell curve, from which correlations are drawn around averages. Such predictions chiefly serve the roughly 68% of people who score within one standard deviation of the average on whichever measurement is taken; the further from any ‘norm’ you sit, the less likely the AI is to meet your needs. Who decides which norms are used, and the extent to which your ‘category’ is represented in the internet data that feeds the AI’s learning, will determine your inclusion in an AI-dominated world. This is dangerous for all minoritised groups.
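To make that arithmetic concrete, here is a minimal Python sketch, assuming a standard normal distribution, of how much of the ‘bell curve’ sits within one or two standard deviations of the average, and how little is left for the outliers whom norm-based predictions serve worst.

    # Illustrative only: share of a normal distribution near vs. far from the average
    from statistics import NormalDist

    dist = NormalDist()  # standard normal: mean 0, standard deviation 1

    within_1_sd = dist.cdf(1) - dist.cdf(-1)   # ~68%: the "near average" majority
    within_2_sd = dist.cdf(2) - dist.cdf(-2)   # ~95%
    beyond_2_sd = 1 - within_2_sd              # ~5%: the outliers such models fit worst

    print(f"Within 1 SD of the mean: {within_1_sd:.1%}")
    print(f"Within 2 SD of the mean: {within_2_sd:.1%}")
    print(f"Beyond 2 SD of the mean: {beyond_2_sd:.1%}")

Running it prints roughly 68.3%, 95.4% and 4.6%: a model built around the average is, by construction, least informative about the people furthest from it.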

Playing Catch Up

It is clear that we need to update our disability legislation to cover digital discrimination, which includes HR-based AI but also everyday digital interactions that rely on memory, impose restrictive controls on mistakes, and more. Our technological capabilities have overtaken our human ethics and reasoning capacity. Scott-Parker tells the story of a person with Tourette’s who tried more than 100 times, via email, LinkedIn and phone calls, to tell an employer that while he could not hold the game controls for the recruitment assessment, he could do the job. She reports:

“We need legislation which creates an obligation to make it easy for candidates to request adjustments, effectively and in a non-stigmatising fashion, very early in the recruitment process. And what about our candidate with a facial disfigurement – he doesn’t need the employer to make any adjustments, he just needs the facial/emotion recognition tech to not discard him when he looks into the camera. Some regulators are beginning to address the potential impact, positive and negative, of Artificial Intelligence on persons with disabilities. But their response is far from sufficient in a world where HR directors have not been taught that standardised automated recruitment processes discriminate against candidates with disabilities. And where existing disability discrimination legislation should and must apply but is proving so ineffectual. And where those leading the worldwide ‘Ethical and Responsible AI’ debate remain, for whatever reason, ‘disability oblivious’, or should that be ‘disability averse’? What will it take to capture their attention?”

Four Key Takeaways

So if you are a developer or a buyer of HR AI, here are four key takeaways on making hiring AI inclusive:

  1. Pause before implementing any rigid, inflexible system for monitoring or hiring.
  2. Get in touch with a wide range of disabled people to consider the potential flaws from multiple perspectives.
  3. Assume there will be flaws, and iron them out through pilot testing.
  4. Give everyone an easy way to contact you if their disability prevents them from using the system, so that you can upgrade and improve on the basis of feedback from the human beings trying (unsuccessfully) to navigate it.

The genie is out of the bottle as far as AI technology is concerned, but we must not give up on our moral and legal principles of disability equality and inclusion.