Humans and Robots to the Rescue!

A new communication system could make it easier for humans and robots to work together in emergency response teams. (Image courtesy of Jose-Luis Olivares/MIT.)

Autonomous robots collaborate by continuously sending each other updates, but bombarding human brains with that much data at once would be intolerable.

To facilitate human-robot collaboration, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have devised a simplified system that requires 60 percent less communication.

Multi-agent systems involve collaborations among autonomous agents (human or robot) in which each participant must adjust its behavior based on its own model of the surrounding environment as well as its estimates of its fellows’ models. Collaborations between humans and robots therefore normally require a great deal of information processing.


The Cost of Robot Communication

Currently, the best way to model multi-agent systems is with a decentralized, partially observable Markov decision process (Dec-POMDP). A Dec-POMDP factors in uncertainty: it considers whether an agent’s view of the world is correct, whether its estimates of its fellows’ worldviews are correct, and whether their actions will be successful.
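Formally, a Dec-POMDP is a tuple of agents, environment states, per-agent actions, a stochastic transition function, a shared reward, and per-agent observations. The sketch below names those components in Python; the class and field names are our own illustration, not code from the MIT paper.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

# Illustrative container for the standard Dec-POMDP tuple
# <I, S, {A_i}, T, R, {Omega_i}, O>. Names are ours, not the paper's.
@dataclass
class DecPOMDP:
    agents: List[str]                 # I: the collaborating agents
    states: List[str]                 # S: possible states of the environment
    actions: Dict[str, List[str]]     # A_i: each agent's available actions
    # T(s, joint_action, s'): probability of reaching s' -- this is the
    # action-outcome uncertainty the MIT system chooses to ignore.
    transition: Callable[[str, Tuple[str, ...], str], float]
    reward: Callable[[str, Tuple[str, ...]], float]  # R: shared team reward
    observations: Dict[str, List[str]]  # Omega_i: each agent's observations
    # O(s', joint_action, joint_obs): probability that the agents see a
    # given (possibly differing) set of observations after acting.
    observation_fn: Callable[[str, Tuple[str, ...], Tuple[str, ...]], float]
```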

Example of a simple Markov decision process with three states and two actions.

Dec-POMDPs assume some prior knowledge about the environment in which the agents will be operating. This was problematic for the collaborative application the researchers had in mind: human-robot emergency response teams.

Emergency-response teams enter unfamiliar environments where prior information is often irrelevant to the situation. Moreover, recording the agents’ surroundings in real time is time-consuming and computationally intensive.

To address these issues, the MIT researchers designed a system that simply ignores the uncertainty associated with whether or not an action will be successful. Instead, it assumes that the agent will succeed in whatever it’s trying to do.
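Under that assumption, the stochastic transition function collapses to a deterministic one: instead of maintaining a probability distribution over possible outcomes, each agent keeps a single predicted next state. The contrast can be sketched as follows (our own illustration, not the researchers’ code):

```python
# With stochastic transitions, an agent must track a belief: a
# probability distribution over possible states of the world.
def stochastic_step(belief, transition, action):
    """belief: {state: prob}; transition(s, a) -> {next_state: prob}.
    Returns the updated distribution over next states."""
    next_belief = {}
    for s, p in belief.items():
        for s_next, p_t in transition(s, action).items():
            next_belief[s_next] = next_belief.get(s_next, 0.0) + p * p_t
    return next_belief

# Assuming every action succeeds, the belief collapses to a single
# predicted state, which is far cheaper to maintain and reason about.
def deterministic_step(state, effect, action):
    """effect(s, a) -> the single assumed outcome state."""
    return effect(state, action)
```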


The Human Factor in Robot Communication

Agents presented with new information have three choices: they can ignore it, use it without broadcasting it or use it and broadcast it.

Each choice has benefits as well as costs. The researchers took this into account by incorporating a cost-benefit analysis into their system based on the agent’s model of the world, its expectations of its fellows’ actions and the likelihood of accomplishing the joint goal more efficiently.
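That trade-off can be read as a simple expected-value test: broadcast only when the expected improvement in team performance outweighs the cost of the message. The decision rule below is our own simplified illustration of this idea, not the ConTaCT algorithm itself.

```python
def choose_communication(value_ignore, value_use_silent,
                         value_use_broadcast, message_cost):
    """Pick among the three options available to an agent with new
    information: ignore it, use it silently, or use it and broadcast it.
    Each value_* is the agent's estimate of expected team reward under
    that option; broadcasting additionally pays a communication cost."""
    options = {
        "ignore": value_ignore,
        "use_silently": value_use_silent,
        "use_and_broadcast": value_use_broadcast - message_cost,
    }
    return max(options, key=options.get)

# Broadcasting wins only when its benefit exceeds the message cost:
choose_communication(1.0, 2.0, 2.5, message_cost=1.0)  # -> "use_silently"
choose_communication(1.0, 2.0, 4.0, message_cost=1.0)  # -> "use_and_broadcast"
```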

The researchers tested their system with electronic agents in over 300 computer simulations of rescue tasks in unfamiliar environments. The version of their system that permitted extensive communication between agents completed the tasks at a rate only two to ten percent higher than the version that reduced communication by 60 percent.

“What I’d be willing to bet, although we have to wait until we do the human-subject experiments, is that the human-robot team will fail miserably if the system is just telling the person all sorts of spurious information all the time,” said Julie Shah, associate professor of aeronautics and astronautics at MIT.

In a separate project, Shah and her team conducted experiments with only human subjects completing similar virtual rescue missions. By studying the subjects' communication patterns using machine-learning algorithms, the team hopes to incorporate those patterns into their new model to further improve human-robot collaboration.

“We haven’t implemented it yet in human-robot teams,” said Shah. “But it’s very exciting, because you can imagine: You’ve just reduced the number of communications by 60 percent, and presumably those other communications weren’t really necessary toward the person achieving their part of the task in that team.”

The research was presented at the 30th Annual AAAI Conference and published under the title, “ConTaCT: Deciding to Communicate during Time-Critical Collaborative Tasks in Unknown, Deterministic Domains.”