Spatial Fingerprints
Many multi-robot systems coordinate through algorithms in which each robot uses its local observations to make a recommendation for the system. By combining the recommendations from all the robots in the network, a unified decision can be reached.
In a Sybil attack, hackers can take advantage of this procedure by using an adversarial robot to spoof multiple false identities. By doing so, the adversarial robot can wield a disproportionate influence over the entire network.
The new technique addresses this line of attack by assigning each robot in a multi-robot system a unique “spatial fingerprint.” This identifier is obtained by analyzing the physical interactions of the robots’ wireless transmissions with the surrounding environment.
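The intuition is that two identities broadcast by the same physical transmitter share the same wireless paths through the environment, so their fingerprints come out nearly identical. The actual signal processing in the research is far more involved; the sketch below only illustrates that intuition, with the fingerprint representation (a signal-strength profile over arrival directions) and the similarity threshold chosen as assumptions for illustration:

```python
import numpy as np

def fingerprint_similarity(fp_a, fp_b):
    """Cosine similarity between two spatial fingerprints.

    Each fingerprint is modeled here as a vector of received signal
    strengths sampled over arrival directions. (This representation is
    an illustrative assumption, not the paper's construction.)
    """
    a = np.asarray(fp_a, dtype=float)
    b = np.asarray(fp_b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_sybils(fingerprints, threshold=0.95):
    """Flag identities whose fingerprints are suspiciously alike.

    Identities whose signals traverse the same physical paths produce
    near-identical profiles. For each near-duplicate pair, every member
    after the first is flagged as a likely Sybil identity.
    """
    flagged = set()
    n = len(fingerprints)
    for i in range(n):
        for j in range(i + 1, n):
            if fingerprint_similarity(fingerprints[i], fingerprints[j]) > threshold:
                flagged.add(j)
    return flagged
```

Here, two identities sharing one transmitter would be caught by the pairwise check, while robots in distinct locations produce dissimilar profiles and pass.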
Spoof-Resilient Networks
The researchers tested their algorithm on a multi-robot system working on the so-called coverage problem, in which each robot reports its own position so that the positions of all the robots in the network can be updated. The team's theoretical analysis showed that even with 75 percent of the robots infiltrated by a Sybil attack, the robots still managed to position themselves within three centimeters of where they should have been.
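One way such a defense can enter a coverage-style update is by fusing position reports with per-identity confidence weights, so that identities flagged by the fingerprint check contribute almost nothing. The weighting scheme and averaging rule below are illustrative assumptions, not the paper's algorithm:

```python
def fuse_positions(reports, weights):
    """Confidence-weighted average of reported (x, y) positions.

    `reports` is a list of (x, y) tuples, one per claimed identity;
    `weights` holds a spoof-confidence score per identity (near zero
    for identities flagged by a fingerprint check). Weighted averaging
    caps the influence any one physical transmitter can exert, no
    matter how many identities it claims.
    """
    total = sum(weights)
    x = sum(w * px for w, (px, _) in zip(weights, reports)) / total
    y = sum(w * py for w, (_, py) in zip(weights, reports)) / total
    return (x, y)
```

For example, if one adversarial transmitter spoofs three identities all reporting a false position, down-weighting those identities keeps the fused estimate close to the honest report, whereas an unweighted average would be dragged three-quarters of the way toward the spoofed one.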
“This generalizes naturally to other types of algorithms beyond coverage,” said researcher Daniela Rus. These theoretical predictions were also verified experimentally.
“The work has important implications, as many systems of this type are on the horizon — networked autonomous driving cars, Amazon delivery drones, et cetera,” said computer science professor David Hsu, who was not involved in the research. “Security would be a major issue for such systems, even more so than today’s networked computers. This solution is creative and departs completely from traditional defense mechanisms.”
To learn more about the technique, you can read the team’s paper in Autonomous Robots.