One thousand-fold more accurate 3D imaging

MIT researchers have claimed that by exploiting the polarisation of light they can increase the resolution of conventional 3D imaging devices by as much as 1,000 times. The technique could lead to high-quality 3D cameras built into mobile phones, and perhaps to the ability to take a photo of an object and then use a 3D printer to produce a replica.

“Today, miniature 3D cameras fit on cellphones,” says Achuta Kadambi, a PhD student in the MIT Media Lab and one of the system’s developers. “But they make compromises to the 3D sensing, leading to very coarse recovery of geometry. That’s a natural application for polarisation, because you can still use a low-quality sensor, and adding a polarising filter gives you something that’s better than many machine-shop laser scanners.”

The researchers’ experimental setup, called Polarised 3D, consists of a Microsoft Kinect — which gauges depth by timing how long light takes to reflect back from the scene — with an ordinary polarising photographic lens placed in front of its camera. In each experiment, the researchers took three photos of an object, rotating the polarising filter each time, and their algorithms compared the light intensities of the resulting images.
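Three measurements suffice because the intensity transmitted through the filter varies sinusoidally with the filter angle, leaving only three unknowns per pixel. Here is a minimal Python sketch, assuming the standard sinusoidal model used in shape-from-polarisation work and filter angles of 0, 45 and 90 degrees (the function name and the choice of angles are illustrative, not the authors' published code):

```python
import numpy as np

# Assumed model (standard in shape-from-polarisation, not necessarily the
# authors' exact formulation): intensity through a polariser at angle phi is
#   I(phi) = A * (1 + rho * cos(2*phi - 2*theta)),
# where A is half the unpolarised intensity, rho the degree of polarisation
# and theta the azimuth of the polarised component.
def polarisation_from_three_shots(i0, i45, i90):
    """Recover polarisation cues from shots at 0, 45 and 90 degrees.

    i0, i45, i90: intensity images taken with the filter at those angles.
    Returns (theta, rho, a): azimuth, degree of polarisation, mean term.
    """
    a = 0.5 * (i0 + i90)   # mean term A, since the cosines cancel
    b = 0.5 * (i0 - i90)   # A * rho * cos(2*theta)
    c = i45 - a            # A * rho * sin(2*theta)
    theta = 0.5 * np.arctan2(c, b)
    rho = np.sqrt(b**2 + c**2) / np.maximum(a, 1e-9)
    return theta, rho, a
```

The recovered azimuth constrains the orientation of the surface at each pixel, which is what the depth-refinement step below can exploit.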

On its own, at a distance of several metres, the Kinect can resolve physical features as small as 1 cm across. But with the addition of the polarisation information, the researchers’ system could resolve features on the order of tens of micrometres: one-thousandth that size.
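The article does not spell out the fusion algorithm, but a common way to use such polarisation cues is to convert them into surface-normal estimates and then refine the coarse depth map so that its gradients agree with those normals. A hypothetical least-squares sketch, assuming an orthographic camera and normals whose ambiguities have already been resolved (this is one plausible approach, not necessarily the authors' method):

```python
import numpy as np

def refine_depth(z_coarse, normals, lam=0.1, iters=500, step=0.1):
    """Refine a coarse depth map using polarisation-derived normals.

    Hypothetical fusion step: gradient descent on a least-squares cost
    that keeps the depth close to the coarse measurement while matching
    the surface gradients implied by the normals.
    z_coarse: (H, W) coarse depth; normals: (H, W, 3) unit normals.
    """
    nx, ny, nz = normals[..., 0], normals[..., 1], normals[..., 2]
    # Depth gradients implied by the normals; clip nz to avoid division
    # by zero at grazing angles.
    p = -nx / np.clip(nz, 1e-6, None)
    q = -ny / np.clip(nz, 1e-6, None)

    z = z_coarse.astype(float)
    for _ in range(iters):
        gx = np.gradient(z, axis=1)           # dz/dx
        gy = np.gradient(z, axis=0)           # dz/dy
        rx, ry = gx - p, gy - q               # gradient-mismatch residuals
        grad = lam * (z - z_coarse)           # data term: stay near Kinect
        grad -= np.gradient(rx, axis=1) + np.gradient(ry, axis=0)
        z -= step * grad
    return z
```

The data term keeps the result anchored to the metrically correct but coarse Kinect depth, while the gradient term lets the polarisation normals supply the fine surface detail.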

For comparison, the researchers also imaged several of their test objects with a high-precision laser scanner, which requires that the object be inserted into the scanner bed. Polarised 3D still offered higher resolution.

A mechanically rotated polarisation filter would be impractical in a mobile phone camera, but grids of tiny polarisation filters that can overlay individual pixels in a light sensor are commercially available.
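A hedged sketch of how the output of such a per-pixel filter grid might be turned into polarisation cues, assuming the common division-of-focal-plane layout in which each 2×2 block carries 0°, 45°, 90° and 135° micro-polarisers (actual layouts vary by manufacturer):

```python
import numpy as np

def stokes_from_mosaic(raw):
    """Demosaic a 2x2 micro-polariser sensor into per-superpixel cues.

    Assumes (hypothetically) 0- and 45-degree filters on the top row of
    each 2x2 block and 90 and 135 on the bottom. `raw` is the
    full-resolution sensor image; output is half resolution per axis.
    """
    i0   = raw[0::2, 0::2].astype(float)
    i45  = raw[0::2, 1::2].astype(float)
    i90  = raw[1::2, 0::2].astype(float)
    i135 = raw[1::2, 1::2].astype(float)

    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal/vertical balance
    s2 = i45 - i135                      # diagonal balance

    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)  # degree of linear pol.
    aolp = 0.5 * np.arctan2(s2, s1)                       # angle of linear pol.
    return dolp, aolp
```

With four filter angles per superpixel the sinusoid is overdetermined, so no mechanical rotation and no multiple exposures are needed.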

The work could also aid the development of self-driving cars. Today’s experimental self-driving cars are reliable under normal illumination conditions, but their vision algorithms are less effective in rain, snow, or fog, where water particles in the air scatter light in unpredictable ways and make the scene harder to interpret. The MIT researchers say that their system can exploit information carried by interfering light waves to cope with such scattering.