New Security Method Prevents Spoofing in Multi-Robot Systems

Image caption: The new security technique was verified experimentally using AscTec quadrotor servers. (Image courtesy of M. Scott Brauer.)
A team of MIT researchers has developed a new technique to defend against hostile takeovers of multi-robot systems. The technique guards against a type of security threat known as a Sybil attack, in which an adversary spoofs many false robot identities in order to sway the coordination of the authentic robots in the network.

Spatial Fingerprints

Many multi-robot systems coordinate through algorithms in which each robot uses its local observations to make a recommendation for the system. By combining the recommendations from all the robots in the network, a unified decision can be reached.

In a Sybil attack, hackers can take advantage of this procedure by using an adversarial robot to spoof multiple false identities. By doing so, the adversarial robot can wield a disproportionate influence over the entire network.
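
To see why this pooling step is vulnerable, consider a minimal sketch in Python (with made-up numbers; this is our illustration, not the team's code) of robots combining recommendations by a simple average, and what a handful of spoofed identities does to the result:

import numpy as np

# Five authentic robots each recommend a value, e.g., the x-coordinate
# of a rendezvous point, based on their own local observations.
authentic = np.array([1.0, 1.2, 0.9, 1.1, 1.0])
print(np.mean(authentic))   # unified decision: about 1.04

# One adversarial robot spoofs ten false identities that all push
# toward 10.0, giving a single attacker eleven "votes."
spoofed = np.full(10, 10.0)
print(np.mean(np.concatenate([authentic, spoofed])))   # dragged to about 7.0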

The new technique addresses this line of attack by assigning each robot in a multi-robot system a unique “spatial fingerprint.” This identifier is obtained by analyzing the physical interactions of the robots’ wireless transmissions with the surrounding environment.

Example signal fingerprint: (a) A server (×) receives a client's (red circle) signal along two paths: a direct path at 40°, attenuated by an obstacle (shaded), and a wall reflection at 60°. (b) The corresponding fingerprint: the peak heights at 40° and 60° reflect the two paths' relative attenuations. (Image and caption courtesy of Gil et al.)
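
To make the caption's example concrete, a spatial fingerprint can be pictured as a profile of received signal power versus angle of arrival. The sketch below (a toy model of ours, not the researchers' signal-processing pipeline, which extracts these profiles from the robots' actual Wi-Fi transmissions) builds such a profile with peaks at 40° and 60°:

import numpy as np

def fingerprint(paths, angles=np.arange(0.0, 181.0)):
    # Toy spatial fingerprint: relative power received per angle of
    # arrival. Each propagation path (angle_deg, relative_power) is
    # smeared with a narrow Gaussian to mimic finite angular resolution.
    profile = np.zeros_like(angles)
    for angle, power in paths:
        profile += power * np.exp(-0.5 * ((angles - angle) / 3.0) ** 2)
    return profile / profile.max()   # normalize so profiles are comparable

# The caption's example: a direct path at 40 degrees, weakened by an
# obstacle, plus a stronger wall reflection at 60 degrees.
client_fp = fingerprint([(40.0, 0.4), (60.0, 1.0)])
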
Using these spatial fingerprints, the researchers developed an algorithm that assigns a confidence value to each client in the network. If multiple clients have the same or similar spatial fingerprints, they’ll receive a confidence value close to zero.
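
The weighting idea can be sketched as follows (hedged: the paper defines its own confidence measure; the cosine similarity used here is just a stand-in for comparing fingerprints):

import numpy as np

def confidence(fp, others):
    # Confidence in [0, 1]: near zero when this client's fingerprint
    # closely matches some other client's, the telltale of spoofing.
    if not others:
        return 1.0
    sims = [float(np.dot(fp, o) / (np.linalg.norm(fp) * np.linalg.norm(o)))
            for o in others]
    return 1.0 - max(sims)

# Toy profiles: a spoofed identity shares its creator's transmission
# paths, so its fingerprint nearly duplicates the adversary's.
adversary = np.array([0.10, 0.90, 1.00, 0.20])
spoof     = np.array([0.10, 0.88, 1.00, 0.22])   # near-copy of adversary
genuine   = np.array([1.00, 0.20, 0.10, 0.90])   # distinct multipath profile
print(confidence(spoof, [adversary, genuine]))   # close to 0
print(confidence(genuine, [adversary, spoof]))   # about 0.7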

Spoof-Resilient Networks

The researchers tested their algorithm on a multi-robot system solving the so-called coverage problem, in which each robot reports its own position so the group can update the positions of all the robots in the network. The team's theoretical analysis showed that even with 75 percent of the robots in the network spoofed by a Sybil attack, the authentic robots still managed to position themselves within three centimeters of where they should have been.
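
As a rough illustration of how those confidence values could enter such an update (this is our consensus-style stand-in, not the team's coverage controller):

import numpy as np

def position_update(my_pos, reported, weights, step=0.5):
    # Move toward the confidence-weighted centroid of the positions the
    # other clients report. Spoofed clients carry near-zero weight, so
    # they contribute almost nothing to the target.
    weights = np.asarray(weights, dtype=float)
    target = weights @ reported / weights.sum()
    return my_pos + step * (target - my_pos)

# Two authentic reports (high confidence) and three spoofed ones
# (confidence near zero) claiming positions far away.
reported = np.array([[1.0, 1.0], [1.2, 0.8],
                     [9.0, 9.0], [9.1, 9.2], [8.9, 9.0]])
weights = [0.95, 0.90, 0.01, 0.01, 0.01]
print(position_update(np.array([0.0, 0.0]), reported, weights))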

“This generalizes naturally to other types of algorithms beyond coverage,” said researcher Daniela Rus. These theoretical predictions were also verified experimentally.

“The work has important implications, as many systems of this type are on the horizon — networked autonomous driving cars, Amazon delivery drones, et cetera,” said computer science professor David Hsu, who was not involved in the research. “Security would be a major issue for such systems, even more so than today’s networked computers. This solution is creative and departs completely from traditional defense mechanisms.”

To learn more about the technique, you can read the team’s paper in Autonomous Robots.
