"There's a joke in the AI community that as soon as AI works, it is no longer called AI," says Sara Hedberg, a spokeswoman for the American Association for Artificial Intelligence. Hedberg, who has written about AI for the past 20 years or so, has done her share of trying to enlighten reporters who are ready to declare AI dead. "Once a technology leaves the research labs and gets proven, it becomes ubiquitous to the point where it is almost invisible," she says. "And so every day, people who use Google News are seeing a website with a bunch of AI behind it. When I use my dishwasher, it incorporates fuzzy logic, which is part of AI, in sensing temperatures and making adjustments. If you look over the 40-year span of AI research, it's an impressive history."
The American Association for Artificial Intelligence serves as a kind of crossroads for AI researchers. Ahead of its 2004 conference, the organization identified a slew of emerging fields where AI research is going strong, starting with counter-terrorism, crisis management and defense. One big project funder is DARPA, the Defense Advanced Research Projects Agency, the same U.S. agency that bankrolled much of the field's early research.
And what do all of these areas have in common? AI applications have grown so diverse that the term "artificial intelligence" may be the only thing they still share. If you declare that your research is AI-related, then, ipso facto, it is.
"AI has splintered into various isolated sub-fields," says Bill Havens, the chief technology officer for Actenum, a Vancouver-based startup tackling difficult scheduling problems. "Twenty years ago, AI researchers shared an underlying set of assumptions and methodology, but now, the vision people, for example, have nothing to say to the natural language people or the constraint programming people like us. None of us are fully aware of what others are doing in our respective sub-domains." Havens, who also runs the Intelligent Systems Lab at Vancouver's Simon Fraser University, says there have always been two primary views of AI. The first tries to get at the underlying mechanisms of human thought. The other tries to do intelligent things, but not necessarily in the way people go about it-the goal being intelligent behavior, not insight into the mind. "The practical people are in the second camp. We're interested in doing things that people find either difficult or impossible to do."
Another way to look at AI is to take the name literally: artificial, that is "simulated," intelligence. AI is the art of imparting human-like, or even super-human, learning and inference capabilities to a piece of code. In some cases, this means getting a robot to do what a five-year-old can already do: navigate a house without getting stuck in a corner. In other cases, AI means doing a seemingly routine task, like scheduling, better than any human could.
Talk to AI researchers and two ideas routinely come up. The first is the notion of a smart agent, roughly defined as a system that can sense, evaluate and react. Smart agents are most commonly mentioned in relation to the Web: entities that go onto the Internet to gather information on your behalf. The second idea is that of self-learning: a robot, game or agent, say, that works not so much from an internal knowledge base as by trial, error and example. In the AI world, to be capable of learning is to gain autonomy, and autonomy is what you need to deal with the unexpected. Combine the reasoning of a smart agent with the ability to learn from experience and you get something that resembles, well, artificial intelligence.
AI remains, for the most part, a set of ongoing research projects with some very clever deployments-all of which may seem crude in the years to come. To get a sense of what AI looks like in the year 2004, I spoke with researchers in a variety of fields. The obvious place to start is in that most quintessential of AI applications: robotics.
An approach to smarter robots
The one thing we know about developing truly intelligent robots is that it's much easier to simulate one than to create the real thing. "Programming a robot is hard-most of the time, I don't know how to do it," says Bill Smart, an assistant professor of computer science and engineering at Washington University in St. Louis. "I could ask you to get me a sandwich, and you could walk down the hall and do it. But the approach to getting a robot to do the same is not always clear." Smart says the process of step-by-step programming is so difficult that it is better to let robots figure things out for themselves through trial, error, and feedback. The technique for doing that is called "reinforcement learning," and while it too poses challenges, it opens up the possibility of robots that can make decisions on the fly in unstructured environments, including battlefields and, more benignly, the rubble of collapsed buildings in search and rescue efforts.
Conventional robotic programming uses control code that takes in sensory readings and gives an appropriate motor response. That works well enough on an assembly line, an ordered slice of the universe in which the possible scenarios are limited. But if you want to send a robot to search for the survivors of a crumbling building, the number of potential obstacles is too high to anticipate. "Search and rescue is a highly unstructured problem-you can model the world, but there are all sorts of random occurrences, as well," Smart says. Reinforcement learning deals with uncertain situations by running a robot through its paces and giving it feedback on how well it did. "The robot makes a move, gets feedback, and observes how the world looks. Over time, the robot takes this evaluation feedback and learns the best actions to take over the long term. It figures out how the world works."
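To make that loop concrete, here is a minimal sketch of reinforcement learning in the form of tabular Q-learning, applied to an invented toy problem: a robot learning to escape a dead-end corridor. The grid, rewards and parameters are illustrative assumptions, not Smart's actual system.

```python
import random

# Toy world: a robot in a dead-end corridor, cells 0..5; cell 5 is the exit.
# Action 0 moves left, action 1 moves right.
N_STATES, EXIT = 6, 5
MOVES = (-1, +1)

def step(state, action):
    nxt = min(max(state + MOVES[action], 0), EXIT)
    reward = 10.0 if nxt == EXIT else -1.0   # evaluative feedback
    return nxt, reward, nxt == EXIT

# Q[s][a] estimates the long-term value of taking action a in state s.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1        # learning rate, discount, exploration

for episode in range(500):
    s, done = 0, False
    while not done:
        # Explore occasionally; otherwise exploit the best-known action.
        a = random.randrange(2) if random.random() < epsilon else max((0, 1), key=lambda x: Q[s][x])
        s2, r, done = step(s, a)
        # Nudge the estimate toward reward plus discounted future value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# After training, the learned policy is "move right" in every cell.
print([max((0, 1), key=lambda a: Q[s][a]) for s in range(N_STATES)])
```

The robot is never told how the corridor works; the rule it ends up with falls out of the feedback alone.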
At least that's the theory. "We're still in the early days: there's a lot of learning going on, but it's at a very low level, on a small scale," Smart says. "We haven't learned how to teach a robot to rescue someone from a burning building. We're still getting it to extricate itself from a cul-de-sac." Smart says that the key will be in creating less power-hungry algorithms, an area he is researching, as well as higher onboard processing power. Both will help robots take better advantage of the limited experience they gain.
The sheer quantity of "experience" makes a big difference in reinforcement learning. Consider the success of a world-class backgammon program called TD-Gammon. Whereas programmers of earlier backgammon programs worked with expert players, TD-Gammon gained its expertise by playing against itself-some 1.5 million games in all.
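The core of TD-Gammon's method, temporal-difference learning, fits in a few lines: after each move, the value estimate for the previous position is pulled toward the estimate for the position that follows it. A tabular sketch (TD-Gammon itself used a neural network and the TD(λ) variant):

```python
# Tabular TD(0) update: pull the previous position's value estimate toward
# the reward received plus the estimate of the position that followed it.
def td_update(V, prev_pos, next_pos, reward, alpha=0.1):
    V[prev_pos] += alpha * (reward + V[next_pos] - V[prev_pos])
```

In self-play, the same program supplies both sides' moves, so every game it plays against itself becomes a fresh stream of training positions at essentially no cost.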
Adam Jacoff, a robotics research engineer at the Intelligent Systems Division of the U.S. National Institute of Standards and Technology (NIST), has watched autonomous robots get steadily better at tasks like navigating roads.
But roads are more or less predictable. For Jacoff, search and rescue is the next research frontier. "A collapsed building cannot be navigated by a robot in a systematic way. The state-of-the-art is currently remote operation." The goal, he says, is "'bounded autonomy,' meaning there are times when the robot can be fully functional on its own. For example, if a robot under remote guidance lost its radio connection, you'd want it to be smart enough to turn around and get reconnected." Or a pilot might monitor three robots working semi-autonomously, intervening only when one machine gets stuck and calls for help.
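A toy sketch of that supervisory logic, with invented mode names and a "breadcrumb" trail of waypoints standing in for whatever map the real robot keeps:

```python
from enum import Enum, auto

class Mode(Enum):
    TELEOP = auto()      # a human drives the robot over the radio link
    AUTONOMOUS = auto()  # the robot acts on its own to recover the link

# Bounded autonomy as a trivial supervisor: the robot stays under remote
# control while the radio is up, and retraces its recently visited
# waypoints ("breadcrumbs") on its own only when the link goes quiet.
def supervise(radio_ok, breadcrumbs):
    if radio_ok:
        return Mode.TELEOP, []
    return Mode.AUTONOMOUS, list(reversed(breadcrumbs))

mode, plan = supervise(False, [(0, 0), (4, 2), (9, 5)])
print(mode, plan)  # Mode.AUTONOMOUS [(9, 5), (4, 2), (0, 0)]
```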
To get a sense of how well various robot designs can work in unstructured environments, Jacoff's group designs obstacle courses-or rather, a single obstacle course that has been more or less replicated in several places, including Tokyo's National Museum of Emerging Science and Innovation. The courses include stairs and "collapsed" walls, closed spaces, and rubble. The idea of a "reference test arena" is that robots can take the same test over and over, in any of several locations, so as to "compare apples to apples," says Jacoff. "We advocate practice, practice, practice. We want teams to go at least every six months. If they think they have a breakthrough in their development, they should run the robot through 50 times to get some statistical bearing on what's working and what's not. Only when you can quantify the results will people pay attention."
Making sense of the Web
With the vast amount of information collected on the Web, some AI researchers are looking at ways to extract more meaningful information than the current crop of search engines can deliver. The overall term for this capability is the "Semantic Web," first proposed by Tim Berners-Lee, the inventor of the World Wide Web.
William Cohen, an Associate Research Professor at the Center for Automated Learning and Discovery, Carnegie Mellon University, is applying machine learning to the problem. Cohen works with programs that learn by example. If you want to, say, train a program to locate websites that contain shopping catalogs, you might give it a thousand websites and tell it which of those contain catalogs and which do not. On the basis of the examples, the computer itself establishes the selection rules.
"One approach is to have the program look at every word on the website and assign each a weight," says Cohen. "So now we have an optimization problem whose aim is to set the weights so that the most positive numbers are for the positive examples and the most negative numbers for the negative examples." Doing so will eventually hone in on key words-such as "cart" or "shopping"-that correlate with the presence of a catalog on the site.
The advantage of this approach is that the Web can be searched as is, without modification. Some of Google's services already demonstrate the advantages. Froogle, the company's shopping engine, differs from comparable services in that no human intervention is required: an online store does not have to enlist with Google before its information can be fully extracted.
But if the shopping engine problem has more or less been solved, many other search problems are begging for answers. "Biological research is one area that is crying out for better search tools," says Cohen. "Right now, the field is generating a lot more data than people can readily keep up with. There are hundreds of scientific journals where researchers can publish their results. But going through the literature to look for experiments, say, that investigate how one particular protein interacts with another under specific experimental conditions-that's a very labor-intensive process." Cohen says hyperlinks could also serve as search criteria. "Imagine looking at political bloggers and trying to determine whether they are Democrats or Republicans," Cohen says. By looking at the external sites each website points to, a program could start making intelligent guesses about political leanings. "You can use the evidence about one site to tell you something about how another site should be classified," Cohen says. "That's the way that Google's PageRank algorithm operates-if a good site points to you, that means you're a good site."
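The link-evidence idea can be sketched with the simplest version of PageRank: start every site with equal rank, then repeatedly let each site pass its rank along its outbound links. The four-site graph below is invented.

```python
# Minimal PageRank by power iteration over an invented four-site link graph.
links = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
sites = list(links)
rank = {s: 1.0 / len(sites) for s in sites}
damping = 0.85

for _ in range(50):
    new = {s: (1 - damping) / len(sites) for s in sites}
    for s, outs in links.items():
        for t in outs:
            # A site passes its rank, split evenly, to the sites it links to.
            new[t] += damping * rank[s] / len(outs)
    rank = new

print(max(rank, key=rank.get))  # "c": the site that good sites point to
```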
But not all AI researchers looking into Semantic Web techniques agree on this approach. Tim Finin, a professor of computer science and electrical engineering at the University of Maryland, Baltimore County, believes that some form of XML page annotation will provide a faster short-term solution. "A year ago, there might have been tens of thousands of documents on the Web that were marked up in these semantic Web languages," Finin says. "Today there are about two million. That's still just a small fraction of the six billion documents that are on the Web, but it shows that the technology is beginning to be used."
Finin is interested in smart agents, which go out onto the Web and gather information on your behalf. "Web agents never really made it out of the lab," Finin says. "One explanation is that it was just too hard to collect information from Web pages that are intended for people-too hard for the agents to process the content and infer meaning. So if the content were encoded in a way that makes it easier for the agents to understand, that will unleash many interesting applications, including, of course, the more intelligent agents.
"For example, in our academic department, we have a lot of Web pages describing what talks are scheduled and what courses are offered, as well as other events. I'd like it if my agent could watch those places where such schedules are posted and put them on my Outlook calendar if they seem relevant to my interests. By reading annotations that are invisible to the human viewer, the agent would find the same information as a human would, but access it in a way that is more easily understood."
The primary language in use for markup is the Resource Description Framework, or RDF, which is built on top of XML. A more advanced language, the Web Ontology Language (OWL), is built on top of that. Finin doesn't think the approach will necessarily require rooms full of people retroactively coding material. Most of it, he says, will be automatically generated from databases. "If you've got your information in a database, as opposed to a flat file, you already have a pretty good understanding of what that data means. You can generate HTML from it, but also RDF semantic Web content. So the website our research group maintains stores all the information about the people, their papers and research projects in a database. We then generate both an HTML version that people can look at, and also a semantic Web version that Web agents can understand."
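A sketch of what Finin describes: the same database record rendered twice, once as HTML for people and once as RDF for agents. The record, the URIs and the choice of FOAF (a real RDF vocabulary for describing people) are illustrative assumptions.

```python
# One database record, two renderings: HTML for people, RDF/XML for agents.
# (Invented record and URIs.)
person = {"uri": "http://example.edu/people/jsmith",
          "name": "J. Smith",
          "project": "Semantic Web Agents"}

html = (f"<li><a href=\"{person['uri']}\">{person['name']}</a>: "
        f"{person['project']}</li>")

rdf = f"""<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:foaf="http://xmlns.com/foaf/0.1/">
  <foaf:Person rdf:about="{person['uri']}">
    <foaf:name>{person['name']}</foaf:name>
    <foaf:currentProject>{person['project']}</foaf:currentProject>
  </foaf:Person>
</rdf:RDF>"""

print(html)
print(rdf)
```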
Scheduling off the white board
Another branch of AI-one that does not usually get as much attention-is the work of Actenum's Bill Havens. Known as "combinatorial optimization," it is the craft of using computers to work out the best possible schedule. "Combinatorial problems are simple but insidious to solve," Havens says. "Meeting scheduling is an obvious example: people have busy lives, so you have to find time on everyone's weekly calendar where they can all attend the same meeting. The set of possible choices doesn't need to be very large. Even 20 choices can make for a daunting problem." (If 20 meetings must each take a distinct slot, there are 20 factorial, about 2.4 quintillion, possible schedules.)
Scheduling problems are everywhere. Incoming planes need to be routed so that passenger convenience is maximized while air traffic bottlenecks are prevented. Ships entering a harbor must be allocated scarce moorage space (or the harbor pays a fee) while keeping operating overhead as low as possible. Havens' company is working out the scheduling for a group of orbiting imaging satellites and was the runner-up on a contract to create a scheduling program for the U.S. National Football League.
"You've got 256 games broadcast on four networks. At first, it looks easy because each network already knows which games it will broadcast-so all you have to do is assign each game to a particular television slot during the season." But there are additional constraints. Maybe no team can't play more than two home games in a row. And no team can play a Monday night game after playing the previous weekend. "You throw all these constraints into the mix and suddenly, that simple problem of assigning games to broadcast slots becomes inhuman. That's why you need AI-it provides the techniques and algorithms that can deal with complexity that humans find unfathomable." Havens says that the world is still running on white boards, with people drawing little scheduling diagrams by hand and then working out the problem in their heads. "And they do an absolutely awful job, but it's what they've been doing for the last 50 years."
Combinatorial problems can be solved by a set of techniques collectively called "constraint programming." There are two basic tasks. The first establishes the constraint model, that is, the range of possible configurations that should be considered. The second, called "constraint solving," focuses the search on the most promising areas of the search space-what combinations should be considered first, what should be looked at later, and what should be discarded. Both techniques are needed because in many problems the possibilities are so astronomical that they would tax the capabilities of even the largest computers. Havens rejects the idea that, with the gains in supercomputer speeds, combinatorial problems can be solved by sheer processing power alone. "People think that all you need is a bigger hammer, but it doesn't work that way. You get these problems where the possible configurations, the search space, is well beyond what any conceivable constellation of computers could do, even if they worked for as long as the age of the universe. These absurdities occur for even relatively small problems. It's mind-boggling."
With constraint programming, you define the scope of the search space, then narrow it. The two techniques are interwoven so that one informs the other. "The interspersing of search and constraint solving mixed together is extremely powerful," Havens says. "It's an iterative process, with backtracking." You reduce the space, consider the richness of the landscape, then reduce the space some more. Or you find that your reduced search space is now a parched desert, and you back up a level and try a different approach.
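A minimal sketch of those two interwoven pieces, constraint propagation (here, simple forward checking) and backtracking search, on an invented toy problem: three meetings that must not collide when they share attendees.

```python
# Toy constraint solver: give each meeting a time slot so that meetings
# sharing attendees never collide. Forward checking prunes the search
# space after each choice; backtracking recovers from barren branches.
domains = {"standup": {1, 2, 3}, "review": {2, 3}, "demo": {1, 3}}  # allowed slots
conflicts = [("standup", "review"), ("review", "demo")]  # shared attendees

def neighbors(m):
    return {b for a, b in conflicts if a == m} | {a for a, b in conflicts if b == m}

def solve(assignment, domains):
    if len(assignment) == len(domains):
        return assignment
    m = next(v for v in domains if v not in assignment)  # pick an unscheduled meeting
    for slot in sorted(domains[m]):
        pruned = dict(domains)
        pruned[m] = {slot}
        for o in neighbors(m):           # propagate: conflicting meetings
            if o not in assignment:      # can no longer use this slot
                pruned[o] = domains[o] - {slot}
        # If any unscheduled meeting has an empty domain, this branch is
        # a dead end; the loop moves on to the next slot (backtracking).
        if all(pruned[o] for o in domains if o not in assignment):
            result = solve({**assignment, m: slot}, pruned)
            if result:
                return result
    return None

print(solve({}, domains))  # {'standup': 1, 'review': 2, 'demo': 1}
```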
Scheduling problems of a different sort are being tackled by the AI lab at the University of Michigan, which is applying AI to the needs of older, sometimes forgetful, adults. "We're working on a reminder system for people who may forget to take their medicines or even forget to eat and drink," says Martha Pollack, the associate chair of the university's Computer Science and Engineering department. "But this is not just an alarm clock-we have a model of their daily plan, and provide alarms in flexible ways. For example, a reminder may come an hour after breakfast, whenever that is. Or suppose someone is diabetic and supposed to eat every three hours. The system would allow for that, even if they ate at different times. The current system works on a PDA, with print big enough that many people can read it without glasses."
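A sketch of the flexible-alarm idea, with invented events and rules: reminders hang off observed events, such as whenever breakfast actually happened, rather than off fixed clock times. Pollack's real system builds on a much fuller model of the daily plan.

```python
from datetime import datetime, timedelta

# Reminders anchored to observed events rather than fixed clock times.
# Invented rules: medication an hour after breakfast, whenever that was;
# for a diabetic, food every three hours after the last observed meal.
observed = {"breakfast": datetime(2004, 6, 1, 9, 40)}

def reminders(observed):
    due = [("take medication", observed["breakfast"] + timedelta(hours=1))]
    meals = [t for e, t in observed.items()
             if e in ("breakfast", "lunch", "dinner", "snack")]
    due.append(("eat something", max(meals) + timedelta(hours=3)))
    return due

for task, when in reminders(observed):
    print(f"{when:%H:%M}  {task}")  # 10:40 take medication / 12:40 eat something
```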
Pollack says that the application of AI to this area-known as "cognitive orthotics"-has gained momentum over the past five years, with a dozen or so research projects underway in the U.S.
Looking ahead
Neil Jacobstein, president and CEO of TechKnowledge Corporation, believes that, long term, AI applications "will need to use language the way people do- understanding context, learning nuances rapidly, making fine distinctions without being told explicitly, and integrating feedback without mediation." Doing so, he says, will be key to the next generation of AI development.
"We're already building large-scale ontologies [formal representations] that contain operational definitions of word meanings and how those meanings are linked to each other." That's the key to developing agents that don't just gather information, but act as that long-sought-after smart assistant. "Many people have given up on that goal, thinking it was too ambitious-and that's indeed been true for the kinds of hardware and software architectures available even today. But I believe we are at the beginning, not the end of designing intelligent systems." Jacobstein envisions systems that can not only use language, but understand how the past affects the present and make useful predictions about future events. They will be "assistants and associates-rather than just big fat task slaves." When? "It will take far more time than expected, on the order of decades-sometimes many decades," he says.
The missing ingredient, Jacobstein says, is a hardware/software architecture powerful and flexible enough to support that kind of intelligence.
But some long-term AI research projects are looking in a different direction: not trying to create smart assistants, but to replicate imperfect human beings. At the Institute for Creative Technologies, a unit of the University of Southern California, researchers are trying to create more realistic training environments for the U.S. Army.
"What's interesting to us is that if you are trying to model humans, the traditional vision of AI is to model them as extremely logical, rational beings. But humans don't work that way. We're more intuitive,' we respond differently under stress," says Randy Hill, deputy director of technology. Hill says that the whole point about developing intelligent agents is to sense, think, and take appropriate action-just as we humans would in a perfect world. "But we are not trying to model a perfect assistant, but an imperfect human being. We don't want the virtual human to be omniscient-to see through walls and know what's happening on other side of town. So we also model hearing and seeing with the limitations of humans to get more human-like behavior."
And when will such a plausible, humanly flawed, simulated human being come into being? "It's a moon shot," says Hill. Except, of course, we've already made it to the moon. Pieces of the AI puzzle may have been deployed everywhere, but the artificial intelligence of the popular imagination, as depicted by Arthur C. Clarke and Isaac Asimov, Stanley Kubrick and Steven Spielberg, is still years away.