The Ubiquitous Smartphone Gets a New Job: Eyes for Robots

(Image courtesy of OpenBot)

One major hurdle to the advancement of robotics is the high cost of developing and manufacturing advanced machines. While the most cutting-edge robots can cost tens of thousands of dollars, their much cheaper counterparts lack impressive physical abilities, sensor suites, and computational power. This stark trade-off could become a thing of the past thanks to a common item many people carry in their pockets. An estimated 3.5 billion people globally own a smartphone, including about 96 percent of Americans. Two researchers at Intel’s Intelligent Systems Laboratory, Matthias Muller and Vladlen Koltun, are taking advantage of this ubiquity through their OpenBot project.

“We are inspired in part by projects such as Google Cardboard: by plugging standard smartphones into cheap physical enclosures, these designs enabled millions of people to experience virtual reality for the first time,” wrote Muller and Koltun in their white paper on OpenBot. “Can smartphones play a similar role in robotics?” they asked.

The goal of OpenBot is to overcome two key challenges to advancing robotic technology: accessibility and scalability. The crux of the project is to morph ordinary Android smartphones into robots via an open-source platform as a way to democratize robotics for educational and research purposes.

OpenBot pairs a basic robot body that costs about $50 with a smartphone. The body is built around a 3D-printed frame, with the remaining basic components assembled by hand using simple tools. The smartphone mounts on top of the body and runs the project’s software stack, lending the robot its capabilities for navigation, sensing, and computation. This is advantageous because smartphones are continually advancing, with ever more powerful processors, sensors, and communication interfaces. Many contain HD cameras, powerful CPUs, GPUs, IMUs, GPS, Wi-Fi, Bluetooth, and 3G, 4G, and 5G modems. Some even have dedicated AI accelerators for running neural networks.
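In this split, the phone does the thinking and the low-cost body does the moving: the phone computes a drive command and sends it to a microcontroller in the body over a USB-serial link. The sketch below illustrates the idea for a differential-drive base; the function names, value ranges, and the comma-separated wire format are assumptions for illustration, not OpenBot’s actual protocol.

```python
def wheel_speeds(linear, angular, max_pwm=255):
    """Map a desired forward speed and turn rate (both in [-1, 1])
    to left/right wheel PWM values for a differential-drive base."""
    left = max(-1.0, min(1.0, linear - angular))
    right = max(-1.0, min(1.0, linear + angular))
    return int(left * max_pwm), int(right * max_pwm)

def serial_frame(left_pwm, right_pwm):
    """Encode the wheel commands as a simple newline-terminated line --
    a hypothetical framing for the phone-to-microcontroller serial link."""
    return f"c{left_pwm},{right_pwm}\n".encode()

# Full speed ahead with a slight right turn:
frame = serial_frame(*wheel_speeds(1.0, 0.2))
```

Keeping the protocol this simple means the body’s firmware can stay tiny, while all perception and decision-making lives on the phone.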

A video depicts an OpenBot in action. First, the robot effectively follows a person walking through a neighborhood with winding streets and obstacles such as parked cars. It runs an SSD object detector with a MobileNet backbone, and the demonstration shows strong performance without needing the latest in smartphone technology. Next, the video shows the bot navigating an office environment, driving a policy that uses an order of magnitude fewer parameters than comparable navigation systems. Furthermore, the driving ability learned with one smartphone transferred to other phones, and the robot generalized to novel routes and avoided novel obstacles, both static and dynamic.

“Smartphones point to many possibilities for robotics that we have not yet exploited,” stated Muller and Koltun. “For example, smartphones also provide a microphone, speaker, and screen, which are not commonly found on existing navigation robots. These may enable research and applications at the confluence of human-robot interaction and natural language processing. We also expect the basic ideas presented in this work to extend to other forms of robot embodiment, such as manipulators, aerial vehicles, and watercraft.”

OpenBot is not the first attempt to combine smartphones with robotics. However, Muller and Koltun note that a comparable project, the Wheelphone, has GitHub repositories with only one and four stars and no recent contributions. They also said that the Wheelphone robot has fewer motors and fewer capabilities than OpenBot, despite being more expensive.