Currently, the neural network's ability to create images is limited to the kinds of scenes it has been trained to pick out from the temporal data of scenes created by the researchers. However, with further training, and by using more advanced algorithms, the neural network could learn to visualise a much wider range of scenes, widening its potential applications in real-world situations.
Dr. Turpin added: “The single-point detectors which collect the temporal data are small, light and inexpensive, which means they could be easily added to existing systems like the cameras in autonomous vehicles to increase the accuracy and speed of their pathfinding.”
“Alternatively, they could augment existing sensors in mobile devices like the Google Pixel 4, which already has a simple gesture-recognition system based on radar technology. Future generations of our technology might even be used to monitor the rise and fall of a patient’s chest in hospital to alert staff to changes in their breathing, or to keep track of their movements to ensure their safety in a data-compliant way.”
“We’re very excited about the potential of the system we’ve developed, and we’re looking forward to continuing to explore its potential. Our next step is to work on a self-contained, portable system-in-a-box and we’re keen to start examining our options for furthering our research with input from commercial partners.”
The team’s paper, titled ‘Spatial images from temporal data’, is published in Optica. The research was funded by the Royal Academy of Engineering, the Alexander von Humboldt Stiftung, the Engineering and Physical Sciences Research Council (EPSRC) and Amazon.