
From the College of Natural Sciences

As AI Becomes Ubiquitous, There Are Risks, Says New AI100 Report

Artificial intelligence has reached a critical turning point in its evolution, according to a new report by an international panel of experts assessing the state of the field for the second time in five years.

Substantial advances in language processing, computer vision and pattern recognition mean that AI is touching people's lives on a daily basis — from helping people choose a movie to aiding in medical diagnoses. With that success, however, comes a renewed urgency to understand and mitigate the risks and downsides of AI-driven systems, such as algorithmic discrimination or the use of AI for deliberate deception. Computer scientists must work with experts in the social sciences and law to ensure that the pitfalls of AI are minimized.

Those conclusions are from a report titled "Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report," which was compiled by a panel of experts from computer science, public policy, psychology, sociology and other disciplines participating in AI100. The AI100 standing committee is chaired by Peter Stone, a professor of computer science at The University of Texas at Austin, executive director of Sony AI America and a key author and panel chair for an earlier report from the committee that assessed developments in AI five years ago.

"While many reports have been written about the impact of AI over the past several years, the AI100 reports are unique in that they are both written by AI insiders — experts who create AI algorithms or study their influence on society as their main professional activity — and that they are part of an ongoing, longitudinal, century-long study," Stone said. "The 2021 report is critical to this longitudinal aspect of AI100 in that it links closely with the 2016 report by commenting on what's changed in the intervening five years. It also provides a wonderful template for future study panels to emulate by answering a set of questions that we expect future study panels to reevaluate at five-year intervals."


AI100 is an ongoing project hosted by the Stanford University Institute for Human-Centered Artificial Intelligence that aims to monitor the progress of AI and guide its future development. Michael Littman, a professor of computer science at Brown University, chaired the panel for the new report.

"In the past five years, AI has made the leap from something that mostly happens in research labs or other highly controlled settings to something that's out in society affecting people's lives," Littman said. "That's really exciting, because this technology is doing some amazing things that we could only dream about five or 10 years ago. But at the same time, the field is coming to grips with the societal impact of this technology, and I think the next frontier is thinking about ways we can get the benefits from AI while minimizing the risks."

The report, released on Thursday, Sept. 16, is structured to answer a set of 14 questions probing critical areas of AI development. The questions were developed by the AI100 standing committee, a renowned group of AI leaders, which then assembled a panel of 17 researchers and experts to answer them. The questions include "What are the most important advances in AI?" and "What are the most inspiring open grand challenges?" Other questions address the major risks and dangers of AI, its effects on society, its public perception and the future of the field.

In terms of AI advances, the panel noted substantial progress across subfields of AI, including speech and language processing, computer vision and other areas. Much of this progress has been driven by advances in machine learning techniques, particularly deep learning systems, which have made the leap in recent years from the academic setting to everyday applications.

Some of the risks and dangers cited in the report stem from deliberate misuse of AI — deepfake images and video used to spread misinformation or harm people's reputations, or online bots used to manipulate public discourse and opinion. Other dangers stem from "an aura of neutrality and impartiality associated with AI decision-making in some corners of the public consciousness, resulting in systems being accepted as objective even though they may be the result of biased historical decisions or even blatant discrimination," the panel writes. This is a particular concern in areas like law enforcement, where crime prediction systems have been shown to adversely affect communities of color, or in health care, where embedded racial bias in insurance algorithms can affect people's access to appropriate care.

As the use of AI increases, these kinds of problems are likely to become more widespread. The good news, Littman said, is that the field is taking these dangers seriously and actively seeking input from experts in psychology, public policy and other fields to explore ways of mitigating them. At UT Austin, for example, a grand challenge research effort called Good Systems, with which Stone is involved, seeks to design AI technologies that benefit society. Affiliated researchers also recently won support from the National Science Foundation to integrate ethics into more educational offerings related to developing AI systems, particularly robotics.

Moving forward, the panel concludes that governments, academia and industry will need to play expanded roles in making sure AI evolves to serve the greater good.

Adapted with permission from Brown University.
