ARTificial Intelligence | The Black Box Problem and A.I. Hiring Software

Like it or not, machine learning models—or A.I.—are quickly seeping into every technological aspect of our lives. And along with that ubiquity comes a wave of hype and panic: if A.I. isn't being positioned to achieve sentience and take over the planet, it's in headlines everywhere for replacing human workers at every level of industry—and in creative fields in particular. But beyond the hype—the maybes and what-ifs and possible far futures—the fact remains that machine learning models are being used in a growing number of sectors, not least among them publishing.

This series peels back the layers of hype and fear to look not at how important A.I. may be in the far future, but at how it's being used right now in ways that directly impact the publishing industry. Inevitably, some of the talk about A.I. will turn out to be nothing more than hype, as we've seen from a cycle of tech bubbles in the 2020s. But in the meantime, its current capabilities are already impacting the publishing landscape in ways that are unlikely to go away anytime soon.

Modern A.I. has what's referred to as a "black box problem." Machine learning models are, in essence, learning, and that means they can grow past the point where even their developers know how they process data to reach their conclusions. This poses a serious problem not only because it makes it harder to trace where an error is coming from in order to fix it, but also because it makes transparency in A.I. decision-making nearly impossible.
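If you're curious what that opacity looks like in practice, here is a minimal sketch in Python, using the scikit-learn library and entirely fabricated data; it isn't any vendor's real screening system, just an illustration of the general technique. The trained model will happily hand back a score for a new applicant, but the only thing available to inspect is a pile of learned numbers that correspond to no human-readable reason.

```python
# A minimal sketch of the black box problem: scikit-learn plus fabricated
# data, standing in for (not reproducing) a commercial screening model.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((500, 20))                # 500 past applicants, 20 resume features
y = (rng.random(500) > 0.5).astype(int)  # 1 = hired, 0 = not hired

# A small neural network trained on the hiring history.
model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500).fit(X, y)

applicant = rng.random((1, 20))
print(model.predict_proba(applicant))    # a confident-looking score...

# ...but the only "reasoning" we can inspect is the raw learned weights:
print(sum(w.size for w in model.coefs_)) # thousands of numbers, no narrative
```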

Let's say, for example, that you want to hire a new sales representative. And let's also say that you're a large multinational corporation that's likely to get hundreds of applications for the position. Having hiring managers read through that many applications is an arduous task, so you enlist the help of a commercially available A.I. program that sifts through those resumes and shuffles the best ones to the top before handing them off to a real person; this lets you get through all those hundreds of applications in record time. Now, let's further say that you're a forward-thinking company that advertises your commitment to diversity in hiring on your website and in your job posting. You would expect this to encourage more diverse candidates to apply, but when you look through the applicants that made it past your fancy A.I. hiring program, you notice something. They're all white.
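The overall shape of such a tool is easy to sketch, even though the interesting part is hidden. Here's a rough outline in Python; the names are invented for illustration, and the scoring model itself—the score_resume stand-in below—is exactly the piece a company buying commercial software never gets to see inside.

```python
# A rough, hypothetical outline of an A.I. resume-screening pipeline.
# `score_resume` stands in for a vendor's proprietary scoring model.
def rank_applications(resumes, score_resume, top_n=25):
    """Score every resume and pass only the highest-scoring ones to a human."""
    scored = [(score_resume(resume), resume) for resume in resumes]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [resume for _, resume in scored[:top_n]]
```

Hundreds of applications go in, and a human only ever reads the top slice, so whatever biases shape the scores silently shape the candidate pool before any person lays eyes on it.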

Now, if you had real humans sifting through all of those applications, it would probably be pretty clear that one or more of your hiring staff were exhibiting racial bias—particularly if you knew that applicants of color had applied for the position. But the A.I. is just a program, and because of the black box problem, you can't really know why it chose those candidates over others. Maybe it's looking for names similar to those of current employees and reproducing existing inequities; maybe it's weighted to favor applicants from certain schools that happen to have higher percentages of white students; maybe some of the skills or extracurriculars it treats as desirable are only available to people with large amounts of disposable income, ignoring the historic wealth inequities between racial and ethnic groups in the U.S. The problem is, you can't know. The programmers might look at the results and come up with reasons why this could have happened—like the ones above—and adjust the program's parameters to account for them, but it's always possible that they miss something, or misidentify the problem.
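That last point—discrimination by proxy—is worth making concrete. Here's a toy Python example, again with fabricated data: race is never handed to the model as an input, but a correlated stand-in feature (here, a binary "school" flag) carries the bias from the historical hiring data all on its own.

```python
# A toy illustration (fabricated data) of discrimination by proxy:
# the protected attribute is never a model input, but a correlated
# feature reproduces the historical bias anyway.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
group = rng.integers(0, 2, n)                       # protected attribute, NOT a feature
school = (group + rng.random(n) > 0.9).astype(int)  # proxy correlated with group
skill = rng.random(n)                               # a genuinely job-relevant feature

# Historical hiring decisions that favored the proxy group:
hired = (rng.random(n) < 0.1 + 0.5 * school + 0.3 * skill).astype(int)

X = np.column_stack([school, skill])                # note: group is absent
model = LogisticRegression().fit(X, hired)

scores = model.predict_proba(X)[:, 1]
print("mean score, group 0:", scores[group == 0].mean())
print("mean score, group 1:", scores[group == 1].mean())
# The gap between groups survives even though "group" was never an input.
```

And the black box problem is what makes this pernicious: in a real system with hundreds of opaque features, nobody can simply read the model and point to which one is acting as the proxy.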

And that's assuming they even see or acknowledge the problem in the first place. Because nobody—from the applicants to the programmers to the companies using this software—knows exactly why the A.I. made a given decision, it can be far more difficult for a minority applicant to prove discrimination in hiring. The black box problem compounds this by foreclosing even the possibility of transparency: programmers can truthfully say that they couldn't determine why the program rejected a given applicant even if they wanted to.

In an age when the biggest trade publishers in North America market themselves to readers and potential employees as committed to diversity, it's important to understand how A.I. hiring software might conflict with that commitment. Gender, disability, and race affect too many aspects of our lives to trust that a machine learning model can judge a candidate's employability without picking up on any of them; the Department of Justice and the Equal Employment Opportunity Commission have already expressed serious concerns about the ability of A.I. to reinforce existing hiring biases. Right now, we're at a pivotal point where A.I. hiring software is just starting to become the standard in places like the publishing industry, and we have to decide whether the convenience of these programs outweighs a commitment to diversity, equity, and inclusion in the workplace.
