Neural network and simple ToF sensor can generate 3D images

July 31, 2020 // By Jean-Pierre Joosting
A new method of imaging which harnesses artificial intelligence and a time-of-flight (ToF) sensor to create 3D images could help cars, mobile devices and health monitors develop 360-degree awareness.

Photos and videos are usually produced by capturing photons with digital sensors. Digital cameras consist of millions of pixels that form images by detecting the intensity and colour of the light at every point in space. 3D images can then be generated either by positioning two or more cameras around the subject to photograph it from multiple angles, or by using streams of photons to scan the scene and reconstruct it in three dimensions. Either way, an image can only be built by gathering spatial information about the scene. Researchers have now found another way, using a simple single-point ToF sensor and a neural network.

In a paper published in the journal Optica, researchers based in the UK, Italy and the Netherlands describe how they can make animated 3D images by capturing temporal information about photons instead of their spatial coordinates. Their process begins with a simple, inexpensive single-point ToF detector that measures the time photons from a split-second pulse of laser light take to bounce off each object in a given scene and reach the sensor. The time taken for each reflected photon to reach the sensor is proportional to the distance between the sensor and the object.
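Concretely, a round-trip travel time maps to a distance through the speed of light. The short sketch below illustrates that relationship; it is not the authors' code, and the constant, function name and example timing are illustrative assumptions.

```python
# Round-trip time-of-flight to distance: a minimal illustration.
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def distance_from_round_trip_time(t_seconds: float) -> float:
    """Distance to an object given the round-trip photon travel time."""
    return SPEED_OF_LIGHT * t_seconds / 2.0

# A photon returning after ~20 ns corresponds to an object roughly 3 m away.
print(distance_from_round_trip_time(20e-9))  # ≈ 2.998 m
```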

The information about the timings of each photon reflected in the scene (or temporal data) is collected in a very simple graph.
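That "very simple graph" is essentially a histogram of photon arrival times. The following hedged sketch shows how such a temporal histogram might be built from raw arrival timestamps; the bin count, time range and variable names are assumptions for illustration only.

```python
# Hypothetical sketch: turn raw photon arrival times into a fixed-length
# temporal histogram of the kind described above.
import numpy as np

def temporal_histogram(arrival_times_ns, n_bins=256, max_time_ns=100.0):
    """Bin photon arrival times (in nanoseconds) into a 1-D histogram."""
    counts, _ = np.histogram(arrival_times_ns,
                             bins=n_bins,
                             range=(0.0, max_time_ns))
    return counts.astype(np.float32)
```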

These graphs are then turned into a 3D image with the help of a sophisticated neural network algorithm. The researchers ‘trained’ the algorithm by showing it thousands of different conventional photos of the team moving and carrying objects around the lab, alongside temporal data captured by the single-point detector at the same time.
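The article does not specify the network architecture, but the training idea can be sketched as a supervised mapping from a 1-D temporal histogram to a 2-D image, with conventional camera frames as the targets. The minimal PyTorch sketch below is an assumption-laden illustration of that idea, not the researchers' model; the layer sizes, loss function and dummy data are all placeholders.

```python
# Minimal, hypothetical sketch: learn to map a 1-D temporal histogram
# to a 2-D image, supervised by conventional camera frames.
import torch
import torch.nn as nn

class HistogramToImage(nn.Module):
    def __init__(self, n_bins=256, img_size=64):
        super().__init__()
        self.img_size = img_size
        self.decoder = nn.Sequential(
            nn.Linear(n_bins, 1024),
            nn.ReLU(),
            nn.Linear(1024, img_size * img_size),
        )

    def forward(self, histogram):
        out = self.decoder(histogram)
        return out.view(-1, 1, self.img_size, self.img_size)

model = HistogramToImage()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One training step on a dummy batch: histograms paired with camera frames.
histograms = torch.rand(8, 256)           # stand-in temporal data
target_images = torch.rand(8, 1, 64, 64)  # stand-in ground-truth photos
loss = loss_fn(model(histograms), target_images)
loss.backward()
optimiser.step()
```

In practice the network would be trained over many such pairs until the pixel-wise error stops improving; once trained, only the histogram input is needed to produce an image.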

The neural network ‘learns’ how the temporal data corresponds to the photos and is then capable of creating highly accurate images from the temporal data alone. In the proof-of-principle experiments, the team managed to construct moving images at about 10 frames per second from the temporal data, although the hardware and algorithm used have the potential to produce thousands of images per second.

