
News

From the College of Natural Sciences

Learning Machines

On a small practice field on the first floor of Taylor Hall, robot dogs playing soccer scuttle around like infants crawling across a playpen.

They survey the field with Cyclops-like camera eyes and nudge balls around with silver chins. And like little humans, these robots learn to walk, recognize colors, and hold a ball. They experiment and adapt, learning new tricks every day.

The robodogs—members of the UT Austin Villa Robot Soccer Team—serve as a test bed for Dr. Peter Stone’s research in machine learning and artificial intelligence (AI). Stone, an assistant professor of computer sciences and Alfred P. Sloan Research Fellow, ultimately seeks to create completely autonomous agents—independent AI machines that learn and interact like living animals, challenging our perceptions of intelligence and consciousness.

“We always push the boundary of what it means to be human,” says Stone. “The goal of AI is to do things that haven’t been possible before.”

In addition to soccer-playing robodogs, Stone is developing AI-powered cars that drive themselves, self-repairing computers, and software agents that manage supply chains. Regardless of the task these AI agents perform, for them to be independent and successful in our complex world, they must all have the power to learn.

In the living world, learning is a trait unique to animals. An animal’s ability to learn, or improve its behavior through experience, helps it survive in a changing environment. AI agents too need the ability to learn and the flexibility to adapt if they are to survive in a world that constantly provides new challenges. “You don’t want an agent that’s going to step into the same hole or drive into the same pothole every day,” says Stone.

Soccer fields don’t have potholes, per se, but the game offers plenty of challenges for robots. Field surfaces, lighting conditions, and colors of teammates, soccer balls, and goals are unique to each field. And the game itself is dynamic and interactive. Programming robots to respond to every feasible variable in a soccer game by hand (called hardcoding) would be next to impossible, which is why Stone loads his soccer-bots’ brains with machine learning code.

“We’re always looking for ways for the robots to learn themselves, rather than us hardcoding them,” says Stone. “Anything you hardcode is brittle. The more the agents can learn to adapt, the more robust they will be.”

Stone and his students didn’t build the Aibo robodogs—that credit goes to Sony—but they do program their brains from scratch. The robodogs’ brains, which are tiny, removable memory sticks, run programs based on machine learning algorithms.

An algorithm is a sequence of logic that helps the dogs take action based on sensory data they acquire. The human brain works much the same way, says Stone. “Your brain is running an algorithm,” he explains. “It takes in your senses, what you see and hear, and figures out what actions to take. One goal of AI is to figure out the algorithm of the human brain.”
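The sense-and-act cycle Stone describes can be sketched as a simple agent loop. This is a minimal illustration only; the class, method names, and sensor values below are hypothetical, not drawn from the Villa team's actual code:

```python
class Agent:
    """Minimal sense-decide-act loop, the skeleton of an autonomous agent."""

    def sense(self):
        # A real robot would read cameras and joint sensors;
        # here we return a stub observation.
        return {"ball_visible": True, "ball_angle": 0.3}

    def decide(self, observation):
        # The "algorithm of the brain": map sensory input to an action.
        if observation["ball_visible"]:
            return ("turn", observation["ball_angle"])
        return ("search", 0.0)

    def act(self, action):
        # A real robot would send motor commands; here we just
        # return the chosen action.
        return action


agent = Agent()
action = agent.act(agent.decide(agent.sense()))
```

In a running robot this loop repeats many times per second, with learning algorithms adjusting the `decide` step over time.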

The algorithms spinning through the robodogs’ memory sticks give them the ability to learn through reinforcement. They perform an action—poorly at first—and improve the action based on positive feedback, just like a real dog getting a treat every time she sits on command.

When training to walk, for example, the robodogs time themselves walking back and forth on the soccer field. They improve their walk by choosing gaits that increase their speed. They share information about what they are doing over wireless Ethernet, learning how to walk more effectively as a team. Initially, the dogs are clumsy and slow, but after three hours of training, they can trot around the field with the best of them. In fact, the robots on the UT Austin Villa team at one time had learned the fastest walk of any Aibo on record, a skill that comes in handy on the soccer field. (Other researchers have since improved on Stone and his students' learning methods and achieved slightly faster walks.)
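The trial-and-improve training described above can be illustrated with a simple hill-climbing search over gait parameters. This is a toy sketch, not the team's actual method (Stone's group used a more sophisticated policy-search algorithm); the two gait parameters and the speed function below are invented stand-ins for timing a real walk across the field:

```python
import random

random.seed(0)


def walk_speed(params):
    # Stand-in for timing a real walk: a made-up score that peaks
    # at step_length=0.6, step_rate=2.0 (the "best" gait).
    step_length, step_rate = params
    return -((step_length - 0.6) ** 2) - (step_rate - 2.0) ** 2


def hill_climb(params, trials=200, step=0.05):
    """Keep trying small random tweaks; keep any tweak that walks faster."""
    best, best_speed = params, walk_speed(params)
    for _ in range(trials):
        candidate = [p + random.uniform(-step, step) for p in best]
        speed = walk_speed(candidate)
        if speed > best_speed:  # positive feedback: keep the improvement
            best, best_speed = candidate, speed
    return best, best_speed


params, speed = hill_climb([0.3, 1.0])
```

Each "trial" here corresponds to one timed walk across the field; running many trials in parallel on several dogs and sharing results is what lets the team learn as a group.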

The UT Austin Villa team competes regularly in tournaments across the country and the world. The dogs placed third in the May 2005 U.S. Open and made it to the quarterfinals in the “legged league” at the World RoboCup 2005 competition in Osaka, Japan.

By 2050, Stone and other scientists involved in the RoboCup robot soccer project want to see fully autonomous humanoid robots beating flesh-and-bone world champion soccer players. It sounds like science fiction, but Stone is confident that his research will help realize that vision.

“It’s unimaginable how we’ll get there, and there are huge challenges along the way,” he says. “But when we solve those challenges, we will have solved a lot of other much more practical challenges.”

He likens the journey from robodogs to robohumans playing soccer to the journey from the Wright brothers' first flight to a man walking on the moon. Unimaginable progress can happen over fifty to seventy years, and the road leading to robohumans will be dotted with innumerable advances in AI technologies.

Another of Stone’s AI agents, TacTex-05, won the 2005 Trading Agent Competition in Edinburgh, Scotland. TacTex-05, a software agent, competed against other AI agents to manage a business supply chain for manufacturing PCs. TacTex finished the game with the most money in the virtual bank, which Stone credits to its ability to adapt its strategy from game to game using machine learning.

In other words, RoboCup competitions and soccer-playing robots are a means to an end. If teams of robots can play soccer, then they may also be able to perform risky tasks like searching for bombs and rescuing people from crumbling buildings. And machine learning algorithms developed in Stone's lab could lead to autonomous cars that improve travel efficiency and reduce automobile accidents; to self-healing computers; and to AI agents that manage supply chains more effectively than humans.

Stone won’t be surprised to look out the window from the backseat of his autonomous car one day to see a world populated by robots working, playing and learning side-by-side with humans. He will have helped make it that way.

Photos by Matt Lankes.