New 4D camera could supercharge autonomous vehicles

Technologies such as autonomous vehicles rely heavily on the imaging systems they use to capture their surroundings. A recent paper from engineers at Stanford University and the University of California San Diego highlights the progress being made in this area.

They’ve developed a camera that can generate 4D images across a 138-degree field of view. The team believe the device will provide a huge boost to the capabilities of a range of autonomous technologies that need to understand the environment within which they operate.

“We want to consider what would be the right camera for a robot that drives or delivers packages by air. We’re great at making cameras for humans but do robots need to see the way humans do? Probably not,” the authors say.

A camera fit for a machine

The researchers built their camera around a spherical lens to give it a very wide field of view. The design builds on earlier work on a video camera, developed for a DARPA program, that captured high-resolution, 360-degree images. That camera used fiber optic bundles to couple the spherical images onto flat focal planes. Whilst it was extremely effective, it was also extremely expensive.

The new device uses a version of this design, albeit one that eliminates the fiber bundles, which enabled the team to build a camera capable of capturing extra-wide images. The camera relies on a technology called light field photography, which is what enables it to capture images in four dimensions. Because it records the position and direction of the light as the image is taken, it also allows the user to refocus images after they’ve been captured. The team believe this could allow autonomous systems to see through things like rain that would otherwise obscure their vision.

“One of the things you realize when you work with an omnidirectional camera is that it’s impossible to focus in every direction at once—something is always close to the camera, while other things are far away,” they say. “Light field imaging allows the captured video to be refocused during replay, as well as single-aperture depth mapping of the scene. These capabilities open up all kinds of applications in VR and robotics.”
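Refocusing after capture is a standard light field technique, and the core idea can be sketched in a few lines. The snippet below is a minimal shift-and-sum illustration, not the team’s actual pipeline: the refocus function, the (U, V, S, T) array layout, and the integer-pixel shifts are all simplifying assumptions.

    import numpy as np

    def refocus(light_field: np.ndarray, alpha: float) -> np.ndarray:
        """Shift-and-sum refocusing of a 4D light field.

        light_field has shape (U, V, S, T): (u, v) index the angular
        (aperture) dimensions, (s, t) the spatial ones. alpha selects
        the virtual focal plane; alpha = 0 keeps the capture focus.
        """
        U, V, S, T = light_field.shape
        out = np.zeros((S, T), dtype=np.float64)
        for u in range(U):
            for v in range(V):
                # Shift each sub-aperture view in proportion to its
                # offset from the aperture centre, then average in.
                du = int(round(alpha * (u - U // 2)))
                dv = int(round(alpha * (v - V // 2)))
                out += np.roll(light_field[u, v], shift=(du, dv), axis=(0, 1))
        return out / (U * V)

    # Toy usage: a hypothetical 9x9 grid of 64x64 sub-aperture views.
    lf = np.random.rand(9, 9, 64, 64)
    near = refocus(lf, alpha=1.0)
    far = refocus(lf, alpha=-1.0)

Calling refocus twice on the same capture, with different alpha values, yields images focused at two different virtual depths: exactly the refocus-during-replay capability the quote describes.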

The hope is that this will allow various AI-driven technologies to better understand how far away objects are, the direction in which they’re moving, and even what they’re made of. That would give the computer a much richer understanding of the world around it.
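To make the depth claim concrete: neighbouring sub-aperture views of a light field see the scene from slightly shifted viewpoints, so the per-pixel parallax between them encodes distance. The block-matching sketch below is only illustrative, not the method from the paper; the disparity_map function, its parameters, and the two-view setup are hypothetical.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def disparity_map(left: np.ndarray, right: np.ndarray,
                      max_disp: int = 8, patch: int = 7) -> np.ndarray:
        """Block-matching disparity between two sub-aperture views.

        left and right are 2D grayscale views from opposite ends of
        the aperture; disparity is inversely proportional to depth.
        """
        costs = np.empty((max_disp + 1,) + left.shape)
        for d in range(max_disp + 1):
            # Score candidate shift d with a patch-averaged squared
            # difference (np.roll wraps at the border; fine for a toy).
            shifted = np.roll(right, d, axis=1)
            costs[d] = uniform_filter((left - shifted) ** 2, size=patch)
        # Winner-take-all: pick the cheapest shift at every pixel.
        return np.argmin(costs, axis=0).astype(np.float32)

Larger disparities correspond to nearer objects, so the resulting map doubles as a coarse depth estimate from a single exposure, along the lines of the “single-aperture depth mapping” mentioned in the quote above.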

What’s more, the camera is just as useful for close-up work as it is for longer-distance imaging, making it suitable for applications such as industrial robotics or landing drones. It could even work with AR/VR systems to ensure a more seamless rendering of real scenes.

Suffice to say, the device is only at the proof-of-concept stage to date, so a lot of work is required to continue its development, but it’s an exciting device and a good indication of the progress being made in this crucial aspect of autonomous systems.
