
Can We Build Machines that are Less Biased Than We Are? (Audio)

Think about some of the most important decisions people make – whom to hire for a job, which kind of treatment to give a cancer patient, how much jail time to give a criminal. Statistics and Data Sciences faculty member James Scott says we humans are pretty lousy at making them.


"I think there is room for machines to come into those realms and improve the state of our decisions," said Scott. "That's going to involve humans and machines working together, however, not simply treating these decisions the way you might treat a microwave oven just by punching in some numbers and walking away …"

Maybe machines can help us make better decisions. But ultimately, it boils down to the question: can we build machines that are less biased than we are?

What do you think?

Share your thoughts on today's topic by leaving a comment at the bottom of this month's post. Or, if you have more general thoughts you'd like to share about our show, you can take our survey here: https://utexas.qualtrics.com/SE/?SID=SV_eUTDsDlYdmBBPBb


TRANSCRIPT

MA: For Point of Discovery, I'm Marc Airhart. Technologies aren't inherently good or bad. It's how you use them. Think of that early technology, fire – for millennia now, it's been helping people prevent food poisoning, bake bread and survive cold nights. But fire can rage out of control, too. That's how some people are looking at the good and bad potential in artificial intelligence as it becomes more and more a part of our daily lives.

JS: Hi, I'm James Scott, and I'm a professor of statistics and data sciences here at UT Austin.

MA: Statistics is a key component of artificial intelligence. James Scott applies the tools of statistics and AI to solve problems in a wide range of fields, including business and healthcare. He recently co-authored the book AIQ: How People and Machines Are Smarter Together. He says that if we're careful about how we build them, AI systems can help us make much better decisions.

JS: Well, I see one area that has great potential for doing good for the world but also great potential downsides would be the use of AI in decision making, and here, by decision making I mean things like in the justice system, in college admissions, in human resource decisions within companies about whom to hire, whom to let go, whom to promote, how much to pay people. Those kinds of decisions today are almost exclusively the province of humans and frankly, we're not that good at them.

MA: People of color are disproportionately represented in the prison system, and research has shown that Americans' unconscious biases are a factor. Studies have also found that job applicants with stereotypically African American names are less likely to be called in for interviews.

JS: I don't know how you can look at that scenario, in criminal justice, in university admissions, in hiring, and not think that there's substantial room for improvement in the state of human decision-making today. … I think there is room for machines to come into those realms and improve the state of our decisions. That's going to involve humans and machines working together, however, not simply treating these decisions the way you might treat a microwave oven just by punching in some numbers and walking away …

MA: Work with machines in areas like this has already begun. Judges in Broward County, Florida, have used an AI tool to help with sentencing decisions and with deciding whether to release inmates on parole. The tool is meant to predict how likely a person is to commit more crimes after release. But journalists at ProPublica uncovered a troubling pattern in those predictions.

JS: It turns out that those patterns of errors fall in a very different way upon whites and African Americans. Those wrongly flagged as being high risk tended to be disproportionately African American. And those wrongly flagged as low risk, who did go on to commit subsequent crimes, were disproportionately white …
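
What ProPublica did, in essence, was compare error rates across groups. Here is a minimal sketch of that kind of audit in Python, with synthetic data and invented numbers standing in for the real court records:

```python
# A toy audit of prediction errors by group, in the spirit of the
# ProPublica analysis. All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, size=n)       # two demographic groups, 0 and 1
reoffended = rng.random(n) < 0.35        # synthetic "ground truth" outcome

# A synthetic risk tool, deliberately skewed so group 1 is over-flagged.
base = np.where(group == 1, 0.55, 0.30)
flagged = rng.random(n) < np.where(reoffended, base + 0.2, base)

for g in (0, 1):
    in_g = group == g
    # False positive rate: flagged as high risk among those who did NOT reoffend.
    fpr = flagged[in_g & ~reoffended].mean()
    # False negative rate: flagged as low risk among those who DID reoffend.
    fnr = (~flagged)[in_g & reoffended].mean()
    print(f"group {g}: false positive rate {fpr:.2f}, false negative rate {fnr:.2f}")
```

The same tool can split its mistakes very unevenly across two groups, even when nobody set out to encode race.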

MA: So there's a danger there, isn't there, in people thinking, well, it's a computer, it's artificial intelligence, there's an algorithm behind it. That kind of makes it sound more scientific or more accurate somehow than humans. But at the end of the day it was made by humans, right? And it can embody all the same biases and bad information that we use when we make decisions.

JS: Absolutely. I mean, when these algorithms are trained … maybe here's a good analogy, right? Imagine a bouncer at a nightclub. The bouncer is trying to decide who to let into the nightclub, right? You know, Idris Elba or somebody walks up to the nightclub …

MA: Idris Elba, by the way, is an actor you might have seen recently in Avengers: Infinity War.

JS: … and the bouncer says, "You're a super handsome guy. You come on in," right? Whereas I walk up to the nightclub and the bouncer says, "You look like a statistics professor. Get out of here," right? So imagine trying to get a machine to do the same thing. How is the machine going to work? It will look at a million different interactions between the bouncer and the patrons of the nightclub, and it will learn to reproduce the patterns that the bouncer applies. So if the bouncer is looking at somebody's shoes or the cut of their suit or the fancy jeans they're wearing, the algorithm will learn to make decisions on that basis. But if the bouncer is making racist or sexist or homophobic or other kinds of biased decisions, the algorithm will learn that, too. And that's going to be true whether you're talking about a nightclub bouncer or a college admissions committee or a person sentencing defendants in a criminal trial or an HR manager trying to decide whom to hire from a stack of 400 resumes for a single job.
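
To see Scott's analogy in code: below is a minimal sketch, with entirely synthetic data and made-up features, of a model trained to imitate a biased bouncer. The choice of a plain logistic regression is an assumption for illustration; the episode doesn't specify any particular algorithm.

```python
# A toy version of the bouncer analogy: fit a model to thousands of past
# door decisions and inspect what it learns. All data is synthetic and
# the features ("shoes", "suit") are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

shoes = rng.normal(size=n)       # how fancy the shoes look
suit = rng.normal(size=n)        # how sharp the outfit is
group = rng.integers(0, 2, n)    # a protected attribute, 0 or 1

# The (biased) bouncer: admits on style, but also penalizes group 1.
logit = shoes + suit - 2.0 * group
admitted = rng.random(n) < 1 / (1 + np.exp(-logit))

# Train a model to imitate the bouncer's decisions.
X = np.column_stack([shoes, suit, group])
model = LogisticRegression().fit(X, admitted)
print("learned coefficients (shoes, suit, group):", model.coef_.round(2))
# The coefficient on group comes out strongly negative: the model has
# learned the bouncer's bias right along with his taste in shoes.
```

Nothing in the training step asks for bias. The model simply finds whatever patterns predict the bouncer's past decisions, and the bias is one of those patterns.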

MA: Remember that old saying about computers—garbage in, garbage out? It's especially true for artificial intelligence. In 2016, a data analytics company held a beauty contest judged by AI bots. Entries came in from all over the world, but nearly all of the 44 winners were white. It turned out that the algorithm had been trained on far more images of light-skinned women than darker-skinned women. Scott says you have to choose your training data carefully. Another important factor for the field of artificial intelligence is addressing who is doing the programming. Computer scientists are overwhelmingly white and male. It's one reason that a lot of the tech industry is talking about the need for greater diversity. In fact, Scott says one of the biggest open research areas in AI right now is …

JS: … how to make algorithms that don't just replicate our biases but can actually give us feedback on them and help us learn to correct our biases and help us learn to build decision-making protocols that are potentially radically better than the bias-riddled ones we have right now, you know the ones that give an undeserved leg up to the people with the prettier face or whiter skin or the richer dad. There's no reason that we should settle for that, whether it's humans or machines at the steering wheel.
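
The beauty-contest failure mode is easy to reproduce in miniature. In this sketch (again synthetic data, with invented feature-label relationships), a classifier trained on 9,500 examples from one group and only 500 from another scores noticeably worse on the group it rarely saw:

```python
# A toy version of the beauty-contest failure: train mostly on group A,
# then test on both groups. All data and relationships are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def make_group(n, w):
    """Features are random; the label depends on group-specific weights w."""
    X = rng.normal(size=(n, 5))
    y = (X @ w + 0.3 * rng.normal(size=n)) > 0
    return X, y

w_a = np.array([1.0, 1.0, 0.0, 0.0, 0.0])   # group A's signal: features 0 and 1
w_b = np.array([0.0, 0.0, 1.0, 1.0, 0.0])   # group B's signal: features 2 and 3

# Training set: 9,500 examples from group A, only 500 from group B.
Xa, ya = make_group(9_500, w_a)
Xb, yb = make_group(500, w_b)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Test on fresh, equal-sized samples from each group.
for name, w in (("A", w_a), ("B", w_b)):
    Xt, yt = make_group(2_000, w)
    print(f"group {name} test accuracy: {model.score(Xt, yt):.2f}")
# Group B's accuracy is noticeably worse: the model barely saw group B
# during training, so it mostly learned group A's patterns.
```

Collecting more balanced training data is the obvious fix here, which is part of why Scott says you have to choose your training data carefully.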

MA: AI should embody our "better angels," right?

JS: Absolutely.
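
One concrete version of what Scott is describing already exists in the fairness literature: post-processing, where you keep a model's risk scores but choose a separate decision threshold for each group so that error rates come out equal. This is a standard idea from that literature, not a method from the episode; the sketch below uses synthetic scores and invented numbers, and it equalizes false positive rates:

```python
# Post-processing for fairness: keep the risk scores, but pick a separate
# threshold for each group so false positive rates match a common target.
# All scores and numbers here are synthetic.
import numpy as np

rng = np.random.default_rng(3)
n = 20_000
group = rng.integers(0, 2, n)
outcome = rng.random(n) < 0.35
# Informative risk scores that are systematically inflated for group 1.
score = rng.normal(loc=1.0 * outcome + 0.5 * group, scale=1.0)

def fpr_at(threshold, g):
    """False positive rate for group g when flagging scores above threshold."""
    mask = (group == g) & ~outcome
    return (score[mask] > threshold).mean()

target = 0.20
grid = np.linspace(score.min(), score.max(), 1_000)
for g in (0, 1):
    # Pick the threshold whose false positive rate is closest to the target.
    t = grid[np.argmin([abs(fpr_at(th, g) - target) for th in grid])]
    print(f"group {g}: threshold {t:.2f}, false positive rate {fpr_at(t, g):.2f}")
```

A single shared threshold would wrongly flag far more people in the group whose scores are inflated. Separate thresholds undo that, but deciding that this is the right notion of fairness is a value judgment for humans, not machines, which is Scott's larger point about working together.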

MA: Just a final note: as I was preparing this story, I talked to a friend of mine who is African American. He liked the idea of AI as a tool that could help identify biases and help us make better decisions, but he was skeptical that an AI made by humans could be completely unbiased. For him, the idea of relying entirely on a machine to make decisions in the criminal justice system is scary. He said much of the data available to an AI would be biased, even if race weren't explicitly included, given that the fingerprint of discrimination touches every aspect of a person's life, and it wouldn't include the hard-to-quantify factors that make each individual unique. At the end of the day, though, I think he and James Scott agree that combining human judgment with artificial intelligence DOES have the potential to help us make better decisions. What do you think? Visit us at pointofdiscovery.org and leave a comment at the bottom. We also have a link to a short survey if you'd like to give us more general feedback about our series.

MA: Point of Discovery is a production of the University of Texas at Austin's College of Natural Sciences. Our senior producer is Christine Sinatra. I'm your host and producer Marc Airhart. Thanks for listening!

About Point of Discovery

Point of Discovery is a production of the University of Texas at Austin's College of Natural Sciences. You can listen via iTunes, RSS, Stitcher or Google Play Music. Questions or comments about this episode or our series in general? Email Marc Airhart.
