A.I. Creates 3-D Shapes from 2-D Images

A new technique that uses the artificial-intelligence methods of machine learning and deep learning can create 3-D shapes from 2-D images, such as photographs, and can even generate new, never-before-seen shapes.

Karthik Ramani, a professor of mechanical engineering at Purdue University, says that the "magical" capability of AI deep learning is that it is able to learn abstractly.

"If you show it hundreds of thousands of shapes of something such as a car, if you then show it a 2-D image of a car, it can reconstruct that model in 3-D," he said. "It can even take two 2-D images and create a 3-D shape between the two, which we call 'hallucination.'"

(Image courtesy of Purdue University.)

When fully developed, this method, called SurfNet, could have significant applications in 3-D searches on the Internet, as well as in helping robots and autonomous vehicles better understand their surroundings.

Perhaps most exciting, however, is that the technique could be used to create 3-D content for virtual reality and augmented reality by simply using standard 2-D photos.

"You can imagine a movie camera that is taking pictures in 2-D, but in the virtual reality world everything is appearing magically in 3-D," Ramani said. "Inch-by-inch we are going there, and in the next five years something like this is going to happen.

"Pretty soon we will be at a stage where humans will not be able to differentiate between reality and virtual reality."

The computer system learns 2-D images and their corresponding 3-D shapes in pairs, and is then able to predict other, similar 3-D shapes from just a 2-D image.
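To make that pair-based training idea concrete, here is a minimal sketch in PyTorch. It is illustrative only, not the published SurfNet code: the network name, layer sizes and 64 x 64 resolution are assumptions, and the 3-D shape is represented as a three-channel "geometry image" of XYZ coordinates so it can be regressed directly from an RGB input.

```python
import torch
import torch.nn as nn

# Minimal sketch of the pair-learning idea (illustrative only, not the
# published SurfNet architecture): the network sees (2-D image, 3-D shape)
# pairs and learns to regress the shape, represented here as a 3-channel
# "geometry image" of XYZ coordinates, directly from the RGB input.

class Image2Shape(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),  # XYZ channels out
        )

    def forward(self, rgb):
        return self.decoder(self.encoder(rgb))

model = Image2Shape()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One dummy training pair: a 64x64 "photo" and its 64x64 geometry image.
rgb = torch.rand(1, 3, 64, 64)          # the 2-D input
xyz_target = torch.rand(1, 3, 64, 64)   # the 3-D surface, encoded as XYZ channels

optimizer.zero_grad()
loss = loss_fn(model(rgb), xyz_target)
loss.backward()
optimizer.step()
```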

"This is very similar to how a camera or scanner uses just three colors, red, green and blue—known as RGB—to create a color image, except we use the XYZ coordinates," Ramani said.

(Image courtesy of Purdue University.)

Ramani says this technique also allows for greater accuracy and precision than current 3-D deep learning methods, which operate on volumetric pixels (or voxels).

"We use the surfaces instead since it fully defines the shape. It's kind of an interesting offshoot of this method. Because we are working in the 2-D domain to reconstruct the 3-D structure, instead of doing 1,000 data points like you would otherwise with other emerging methods, we can do 10,000 points. We are more efficient and compact."

One significant future outcome of the research would be for robotics, object recognition and even self-driving cars: fitted with nothing more than standard 2-D cameras, they could still understand the 3-D environment around them.

Ramani says that developing this technique further will require more basic research in AI.

"There's not a box of machine learning algorithms where we can take those and apply them and things work magically," he said. "To move from the flatland to the 3-D world we will need much more basic research. We are pushing, but the mathematics and computational techniques of deep learning are still being invented and largely an unknown area in 3-D."

Read the research paper here.


Source: Purdue University