Hiring software could reinforce existing bias. Can the ethical AI movement change that?

As companies adopt AI tools, some researchers push for greater transparency.


Overview


You sit down for a job interview. As you answer questions, staring at your computer or cellphone, an artificial intelligence algorithm analyzes your words, behavior, intonations and even facial expressions to rank you against other candidates.

That scenario is increasingly common, as companies in the U.S. and around the world adopt AI hiring software to make finding new employees faster and easier. The covid pandemic has only accelerated the trend. One recent global survey of 500 human resources employees found that 24 percent of respondents’ companies are using AI in their recruitment processes — and 56 percent plan to do so in the next year. Algorithms are now doing everything from sorting through applicants’ resumes to analyzing taped interviews.

But while AI hiring software is often marketed as a more objective way to sift through piles of applicants, researchers and government officials are warning that it can reinforce existing biases in ways that are not always obvious. They point to growing evidence demonstrating that AI systems often unintentionally reflect the blind spots and frailties of their human creators. That’s due in part to how algorithms are constructed and which data they are trained on.

It’s part of a larger reckoning over the ethical use of a powerful and increasingly ubiquitous technology whose effects society is only beginning to understand. Scientists, politicians and regulators are all beginning to grapple with AI’s potential — and its pitfalls.


In 2019, the Electronic Privacy Information Center (EPIC) filed a complaint with the Federal Trade Commission against recruiting software maker HireVue, alleging that its use of technology to assess a candidate’s appearance was unfair and deceptive. The complaint argued HireVue’s facial analysis amounted to facial recognition, which the company denied, and noted that research has found facial recognition software to be less accurate in assessing women, people of color and people who are neurodivergent.

The FTC has not yet acted on the complaint, but in 2020 HireVue dropped the facial analysis component of its software. The company’s CEO, Kevin Parker, said this was because internal research demonstrated that advances in natural language processing made visual analysis less valuable. He added that an outside consulting firm HireVue hired to audit its software found no bias.

In the meantime, the Equal Employment Opportunity Commission has launched an initiative to examine potential discriminatory AI, and several large law firms have expanded their AI practice groups. State and local politicians are beginning to take action, too. Illinois passed a law in 2019 banning the use of AI to analyze applicants’ video interviews without their consent. And lawmakers in New York City approved a measure last year that will require companies that sell AI-powered hiring software to submit to audits designed to sniff out bias; it takes effect in 2023.

The challenge for regulators and employers alike is sussing out unforeseen effects of using AI to hire people, said Amanda Levendowski, the founding director of the Intellectual Property and Information Policy Clinic at the Georgetown University Law Center. Society understands the ways in which people are biased in the hiring process and many other areas, and there is plenty of related legal precedent for addressing such discrimination. But it’s not clear how that translates into an AI-equipped world.

“We should really be interrogating whether the biases that are ingrained in these systems are better or just different,” Levendowski said. “And I think they’re currently being shown to be qualitatively different, although not necessarily better.”

Thesis

AI is a powerful technology that is being used more and more in everyday life — but it’s only as good as the data it draws on. Rooting out human bias will take concerted effort from developers, users and policymakers.


Race/Gender/Identity Lens

A picture is worth a thousand words

    Last September, a group of Facebook users were asked whether they wanted to “keep seeing videos about Primates” after watching a video that featured Black men. The Facebook prompt had been driven by artificial intelligence — in this case, a flawed recommendation system built on earlier image recognition training that produced an unexpectedly racist result.

    “This was clearly an unacceptable error on Facebook and we disabled the entire topic recommendation feature as soon as we realized this was happening so we could investigate the cause and prevent this from happening again,” a spokesperson for Meta, Facebook’s parent company, told Grid. That investigation is ongoing.

    Such errors and apologies have been happening for years. Tech companies that leverage AI and algorithms for facial and object recognition, chatbots, and a wide array of other social media tools have been unable to prevent systemic racism, sexism or other biases from creeping in.

    Much of the problem stems from a fundamental truth about AI: A system is only as good as the data used to train it. And biases built into training data sets may be obvious to an AI algorithm but harder for human engineers to detect. That doesn’t mean it’s impossible, though.

    “Why does this keep happening? Because I really believe that these companies, especially Facebook, believe that they can get away with it,” said Mia Shah-Dand, the founder of Women in AI Ethics. “It’s what it would take for them to fix it that’s the question.”

    A 2020 U.N. report described the myriad other ways that AI and algorithms in the world have gone wrong — from exams in the U.K. being graded incorrectly to people in the U.S. being arrested on the basis of faulty facial recognition technology. The report also noted that nearly all the AI systems in existence have been developed by Western companies and come with inherent biases.

    “In fact,” the report says, “these developers are overwhelmingly white men, who also account for the vast majority of authors on AI topics.”

    And the situation has only become worse since then, said Tim Engelhardt, a human rights officer in the U.N.’s rule of law and democracy section, pointing to a follow-up report released late last year.

    The people programming these systems, and the data they draw on, matter. Amazon shelved an AI recruiting tool in 2018 after it was shown to discriminate against women. The Amazon system was basing its decisions, in part, on 10 years of résumé data collected from people who had applied for jobs at the company. The tech industry is largely dominated by men, and while a human recruiter might try to fight their inherent bias, any AI-driven system primed with data on previous hires — like Amazon’s — would be less likely to choose women.
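
The dynamic is simple enough to demonstrate. The sketch below is purely illustrative, using synthetic data and hypothetical features rather than anything from Amazon's actual system: a model fit to a decade of skewed hiring decisions learns to penalize a signal that correlates with gender, even though gender itself never appears among its inputs.

```python
# A minimal, hypothetical sketch: synthetic data standing in for historical
# hiring decisions. This is not Amazon's system; the feature names are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

experience = rng.normal(5, 2, n)
is_woman = rng.random(n) < 0.2  # a historically male-dominated applicant pool

# A resume signal that correlates with gender (say, a women's-organization
# keyword). The model is never shown gender itself.
keyword = np.where(rng.random(n) < 0.9, is_woman, ~is_woman).astype(float)

# Historical outcomes: equally experienced women were hired less often.
hired = (experience - 1.5 * is_woman + rng.normal(0, 1, n)) > 5

X = np.column_stack([experience, keyword])
model = LogisticRegression(max_iter=1000).fit(X, hired)

# The learned weight on the gender-correlated keyword is strongly negative,
# and the model's predicted hire rates mirror the historical gap.
pred = model.predict(X)
print("keyword weight:", round(model.coef_[0][1], 2))
print("predicted hire rate, men:  ", round(pred[~is_woman].mean(), 2))
print("predicted hire rate, women:", round(pred[is_woman].mean(), 2))
```

The point is not the particular model; any learner rewarded for matching past decisions will reproduce the pattern baked into them.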

    Similarly, an AI-powered facial recognition system trained on images of white men will have trouble identifying people of color.

    Research from the Algorithmic Justice League, whose mission is to raise awareness around the impacts of AI, has shown that facial recognition systems are less accurate when it comes to people of color, especially women of color. It’s one of the many reasons that politicians and activists have called for a moratorium on the use of facial recognition technology in the United States.

    AI also often struggles when it comes to people who are transgender or nonbinary, said Mutale Nkonde, the founding CEO of AI for the People, a nonprofit communications agency with the goal of eliminating the underrepresentation of Black professionals in the American technology sector.

    “With an algorithm, there’s no way to say to the algorithm, ‘Oh, by the way, the way that we let you know our labeling protocols, say, man, woman, well we’re now going to introduce the category of people who are trans and nonbinary,’” she said. “It’s hard enough to get the algorithm to recognize a man or a woman.”

    Bias can also creep in through data sets that are used as proxies for things like race and income — such as ZIP code or level of urban development.

    “It’s those proxies that really always go back to who has the most money, who has the most power,” said Nkonde. “And always, always, always, always go back to who is considered white, privileged in this context, and they are going to be benefited.”

    But such “predictive patterning” doesn’t reveal anything about the quality of an individual job candidate, Levendowski said.

    “If you were to try to predict the next Supreme Court justice, simply based on historical training data, you would get a white man because it’s only until fairly recently that we’ve had any diversity in terms of race or gender on the court,” she said. “And as far as we know, we’ve never had any diversity in terms of sexuality.”
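
The proxy problem can be illustrated with a few lines of made-up arithmetic. In the hypothetical sketch below, the protected attribute is never included as a feature, but residential segregation means a ZIP-code flag recovers it about 90 percent of the time, so any model given ZIP codes effectively has access to it anyway.

```python
# A minimal sketch with invented numbers: how a "neutral" feature like ZIP
# code can stand in for a protected attribute the model never sees.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

group = rng.integers(0, 2, n)  # protected attribute, excluded from the model
# Residential segregation: 90 percent of each group lives in "its" ZIP codes.
zip_flag = np.where(rng.random(n) < 0.9, group, 1 - group)

# Guessing the protected attribute from ZIP code alone is ~90 percent accurate,
# so the supposedly neutral feature carries most of the information.
print("accuracy of inferring group from ZIP:", (zip_flag == group).mean())
```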


    Ethics Lens

    The challenge of ethical AI

      A new approach to artificial intelligence, called “ethical AI,” has emerged in recent years to help right the wrongs of the past and prevent future harm.

      Its proponents want to ensure that AI-driven systems don’t perpetuate human biases. Their goal is to develop clear ethical guidelines that developers can follow, both in deciding what capabilities an AI system should have and what data to train it on. The effort involves bringing social science expertise to bear on a field that emerged squarely from the realm of computer science.

      Many companies in tech and other sectors have created teams focused on developing their own ethical AI guidelines and vetting systems, or have supported broader efforts in the field. The latter includes the Data & Trust Alliance, a group launched last month with support from CVS Health, General Motors and Walmart, among other corporate giants. The organization developed a 55-question evaluation and scoring system to detect bias in AI hiring systems, with a focus on the underlying data.

      But progress translating ethical AI principles into common practice has been slow, and tensions have emerged between some tech companies and experts in the field.

      In late 2020, engineer Timnit Gebru said Google fired her from its ethical AI research group after she published a study on the pitfalls of a type of AI software called a large language model. Gebru also alleged that Google did not fully support its ethical AI team, especially when the team’s work could affect the company’s bottom line — claims that Google denied. (The company also maintains that Gebru, who has since founded a group focused on the harms of technology on marginalized groups, resigned voluntarily.)

      And a majority of AI experts who responded to a Pew Research survey last year said they were concerned that by 2030, AI would be focused largely on profit optimization and social control. Sixty-eight percent of respondents agreed with the statement that “ethical principles focused primarily on the public good will not be employed in most AI systems by 2030.”

      That forecast reflects a general understanding that ethics are inherently messy and complicated (much like AI itself), and that arriving at a consensus about what an ethical framework for AI should look like will take time. That process is complicated by the fact that, as the U.N. report noted, most AI systems are developed in Western countries and don’t take into account global variations in what is considered ethical.

      And even if the world arrives at a broadly accepted definition of ethical AI, it won’t mean much unless companies agree to abide by its terms — even if doing so affects their bottom lines.

      Seth Dobrin, chief AI officer for IBM, said that the Data & Trust Alliance’s questionnaire is designed to vet and compare vendors of AI software and educate HR departments about the technology. But he noted that the group, whose members include IBM, does not have any mechanism to ensure its member companies use the questionnaire or follow the principles the alliance espouses.

      “If we say we’re going to do something and we don’t do it, we’re going to need to answer to people like you who call us out on it later,” Dobrin told Grid.

      HireVue CEO Parker argues that transparent and vetted AI technology can minimize the influence of bias — whether conscious or unconscious — on hiring decisions.

      But Shah-Dand is skeptical that firms will adopt ethical AI principles without strong incentives.

      “It’s mostly just easier for [tech companies] to pay a fine than actually fix the problem,” she said. “Because that’s their moneymaker that’s sitting on top of their AI model, and right now, there is not enough incentive for them to fix it. Because if there were, if they were forced to pay really significant punitive damages, it wouldn’t keep happening. Right now there is no carrot and there’s no stick.”

      Take Facebook, which received the largest FTC fine in history in 2019 — $5 billion for privacy violations. It’s a big number on paper, but Facebook made $15 billion in profit in the previous quarter alone.


      Science Lens

      The quest for clean data

        When it comes to bias in AI, the devil is often in the data.

        Developing clear guidelines for data set construction and use can help create more equitable systems. But many practitioners of ethical AI argue that developers also need to disclose the data they use to train and feed their algorithms, which are black boxes to outsiders.

        One approach is to adopt “data sheets for data sets” that would describe the provenance of information fed into an algorithm. Before Gebru left Google, she co-authored a study describing the strategy — inspired by the electronics industry. Electronic components are usually packaged with a data sheet that describes how the part operates, the results of performance tests and recommended uses.

        Gebru and colleagues argue that each AI algorithm be accompanied by a disclosure about the data set it analyzes that includes the data’s source, collection process and recommended use.

        “Datasheets for datasets have the potential to increase transparency and accountability within the machine learning community, mitigate unwanted societal biases in machine learning models, facilitate greater reproducibility of machine learning results, and help researchers and practitioners to select more appropriate datasets for their chosen tasks,” the researchers wrote.
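
To make the idea concrete, a datasheet can be thought of as a structured record that travels with the data set. The sketch below is only illustrative and far simpler than the questionnaire Gebru and her co-authors propose; it covers just the categories named above (source, collection process, recommended use), and the example values are hypothetical.

```python
# An illustrative, heavily simplified datasheet record. The real proposal is
# a detailed questionnaire; the data set described here is hypothetical.
from dataclasses import dataclass, field

@dataclass
class Datasheet:
    name: str
    source: str                   # where the data came from
    collection_process: str       # how, when and by whom it was gathered
    recommended_uses: list[str]   # tasks the data set is suited for
    known_limitations: list[str] = field(default_factory=list)

sheet = Datasheet(
    name="example-resume-corpus",
    source="Applications submitted to one employer over ten years",
    collection_process="Self-submitted resumes; no demographic balancing",
    recommended_uses=["research on resume parsing"],
    known_limitations=["Reflects a historically male-dominated applicant pool"],
)
print(sheet)
```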

        New York City’s AI hiring law, which takes effect in 2023, takes steps toward putting these ideas into action by requiring companies to divulge the job qualifications or characteristics that an AI uses to sift through candidates’ applications. The Illinois law, which took effect in 2020, similarly mandates that firms using AI to analyze videos of job candidates explain how the programs work and what characteristics they are evaluating.

        William Agnew, a machine learning researcher at the University of Washington, said there are standard practices that scientists use to “clean” data sets, by correcting errors and removing any duplicate data before analyzing them. But it is not always clear that large companies perform such maintenance on data that feeds into their AI systems.

        “There are ways to get lots of people to go through data sets, like [Amazon’s] Mechanical Turk, and find issues,” said Agnew. “There are also automated techniques you can use to try and find them. I’m really not aware of any large-scale efforts by one of these companies to do curated data stuff like this.”

        And that is telling, Agnew added. “If you know if the choice is between not deploying image recognition technology that’s going to have all sorts of racist and sexist biases built in, or maybe spending a couple million dollars to go through the data set to get enough of it out so that the models you train don’t have those issues, they choose to deploy it.”
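
The routine parts of that curation, correcting obvious errors and removing duplicate records, are not exotic. The sketch below runs those two steps on a tiny, hypothetical labeled table; curating the web-scale data behind a production AI system is, of course, far more involved and far more expensive, which is Agnew's point.

```python
# A minimal sketch of routine data cleaning on a hypothetical labeled table:
# drop exact duplicates and rows with missing text or labels.
import pandas as pd

df = pd.DataFrame({
    "text":  ["great candidate", "great candidate", "", "strong resume"],
    "label": ["hire", "hire", "hire", None],
})

cleaned = (
    df.drop_duplicates()                 # remove exact duplicate rows
      .replace("", pd.NA)                # treat empty strings as missing
      .dropna(subset=["text", "label"])  # drop rows missing text or label
      .reset_index(drop=True)
)
print(cleaned)  # one clean row survives from the four above
```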

        Because the public often assumes that all AI systems are inherently objective and fair, companies and scientists have a special responsibility to understand the strengths and weaknesses of the data sources they use, Nkonde said.

        Relying on haphazardly gathered data can give an unfair sheen of legitimacy to biased decision making, further entrenching existing biases.

        “AI is being deployed in more and more locations, and even if people don’t start with all this technical knowledge, they are interacting with these highly technical AIs that are making increasingly important decisions about their lives,” said Agnew. “Equipping them with the tools to understand that and understand when something that is not OK is happening is extremely important.”


        Governance Lens

        Legislation takes aim

          Lawmakers at all levels are starting to address concerns about bias within AI through regulation.

          The New York City and Illinois laws are focused on AI-powered hiring tools, but other places are considering even broader measures. Last month, the attorney general of Washington, D.C., unveiled a bill that would put businesses on the hook for preventing bias in automated decision-making algorithms — and force them to report when bias was detected. The measure would also require companies to reveal what kinds of data they are collecting from users.

          “This so-called artificial intelligence is the engine of algorithms that are, in fact, far less smart than they are portrayed, and more discriminatory and unfair than big data wants you to know,” D.C.’s attorney general, Karl Racine, said in a statement. “Our legislation would end the myth of the intrinsic egalitarian nature of AI.”

          There is no federal law aimed specifically at regulating AI or algorithms. Members of Congress have introduced bills, like Sen. Ron Wyden’s (D-Ore.) Algorithmic Accountability Act or Sens. Jacky Rosen (D-Nev.) and Rob Portman’s (R-Ohio) Advancing American AI Innovation Act, the latter of which aims to improve data sets available for AI developers.

          But even the existing laws can be fuzzy. The New York City law does not define “artificial intelligence,” for example. And some experts say it’s not clear what the best approach to regulating AI would be.

          “We do need regulatory processes, but what would be the right ones is the next question,” said Finale Doshi-Velez, a Gordon McKay professor of computer science at the Harvard John A. Paulson School of Engineering and Applied Sciences. “I don’t think there’s sufficient incentives within the private sector, even among very well-meaning companies. There are certain types of reporting and certain types of transparency that are just not in the company’s interest to do.”

          For now, federal agencies such as the FTC are taking the lead in AI oversight. Last year, the agency put out an advisory warning against the use of biased AI, including the sale and marketing of racially biased algorithms, where “an algorithm is used to deny people employment, housing, credit, insurance, or other benefits,” and “a biased algorithm that results in credit discrimination on the basis of race, color, religion, national origin, sex, marital status, age, or because a person receives public assistance.” All of these categories fall under existing laws that can be brought to bear on AI.

          FTC Chair Lina Khan has also appointed former Google AI researcher Meredith Whittaker as a senior adviser on AI. Bringing in Whittaker — a critic of bias in AI who co-founded an institute at New York University to explore the technology’s social implications — could also be a sign that the FTC will move more aggressively in cases of AI bias.

          “Hold yourself accountable — or be ready for the FTC to do it for you,” the agency said in its AI advisory last year.

          The Biden administration has also started work on an “AI bill of rights,” spearheaded by the White House Office of Science and Technology Policy. Officials there, who did not respond to Grid’s request for comment, released a request for public input in October.

          Across the pond, the EU has presented the initial draft of its Artificial Intelligence Act, one of the most ambitious attempts to regulate AI to date.

          AI is not going anywhere. While most people don’t realize it, the technology is already ingrained in many parts of our lives. The supply of data available to develop and train AI systems has exploded over the last decade, and the technology helps companies make money.

          The emerging “ethical AI” movement hopes to guide the technology as it develops, in the hopes of limiting its pitfalls. And a more sober Silicon Valley has, at least publicly, moved away from the old idea of “move fast and break things.”

          But what’s the cost when things do break? And did they need to break at all?

          Levendowski suggested that certain areas should be partitioned off from AI — even if that means pulling the technology back from sectors where it is already in use.

          “We have to ask whether there are domains in which AI and algorithms should be deployed at all,” she said. “I think of areas like the criminal legal system and risk assessment algorithms. Is this an area that we should ever allow algorithms to take the wheel?”

            Benjamin Powers

            Technology Reporter

            Benjamin Powers is a technology reporter for Grid, where he explores the intersection of technology and privacy in major stories.
