Indian-led MIT team develops 'nano-camera' that operates at speed of light





Researchers in the MIT Media Lab have developed a $500 (Rs 31,100 approx) “nano-camera” that can operate at the speed of light, according to a report by MIT News. The three-dimensional camera has potential applications in medical imaging and collision-avoidance detectors in cars. The innovation could also improve the accuracy of motion tracking and of gesture-recognition devices used in interactive gaming. The team behind the camera includes Ramesh Raskar, Achuta Kadambi, Ayush Bhandari, Refael Whyte and Christopher Barsi of MIT, as well as Adrian Dorrington and Lee Streeter from the University of Waikato in New Zealand. Based on “Time of Flight” technology, the camera works on similar principles as Microsoft’s recently launched second-generation Kinect for the Xbox One.


Camera that can operate at the speed of light! (Image credit: MIT News)




The logic behind this is simple: the distance of objects is calculated by how long it takes for a light signal to reflect off a surface and return to the sensor. Since the speed of light is known, it becomes relatively simple for the camera to calculate the distance the signal has traveled and therefore the depth of the object it has been reflected from. Unlike existing devices based on this technology, though, the new camera is not fooled by rain, fog or even translucent objects, according to co-author Kadambi.
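The calculation described above can be sketched in a few lines. This is a minimal illustration of the general time-of-flight principle, not code from the MIT system; the function name is invented for the example.

```python
# Time-of-flight depth estimation: the signal travels to the object
# and back, so the one-way distance is half of (speed of light x time).

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def distance_from_round_trip(round_trip_seconds: float) -> float:
    """Depth of a reflecting surface, given the round-trip time of light."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A reflection that returns after 10 nanoseconds implies the surface
# is roughly 1.5 metres away.
depth = distance_from_round_trip(10e-9)
print(f"{depth:.2f} m")  # -> 1.50 m
```

Because light covers about 30 cm per nanosecond, even coarse timing yields useful depth at room scale.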


“Using the current state of the art, such as the new Kinect, you cannot capture translucent objects in 3-D. That is because the light that bounces off the transparent object and the background smear into one pixel on the camera. Using our technique, you can generate 3-D models of translucent or near-transparent objects,” Kadambi said. This essentially means that the camera can look past the multiple reflections created by rain, fog, semi-transparent surfaces or an object in motion, which smear the original signal’s reflection before it is collected by the sensor.

The solution came in the form of an encoding technique that is currently used in the telecommunications industry. Explaining the new method, Raskar, an associate professor of media arts and science and leader of the Camera Culture group at the Media Lab, said, “We use a new method that allows us to encode information in time. So when the data comes back, we can do calculations that are very common in the telecommunications world, to estimate different distances from the single signal.”

The new model, which the team has dubbed “nanophotography”, essentially unsmears individual optical paths. Validating the team’s findings, Kadambi said, “By solving the multipath problems, essentially just by changing the code, we are able to unmix the light paths and therefore visualise light moving across the scene.”
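The idea of unmixing overlapping reflections can be illustrated with a toy model. This is a hedged sketch, not MIT's actual algorithm: it models the measurement as a sum of time-shifted copies of a known illumination code (two light paths smeared into one signal) and recovers the amplitude at each delay with a standard least-squares solve, the kind of linear estimation common in telecommunications.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
code = rng.standard_normal(n)  # pseudorandom time code on the illumination

# Dictionary of every circular time-shift of the code: column d is the
# signal a reflection arriving with delay d would contribute.
shifts = np.stack([np.roll(code, d) for d in range(n)], axis=1)

# Two light paths: a translucent foreground surface (delay 5) and the
# background behind it (delay 12), smeared into one measured signal.
true_paths = np.zeros(n)
true_paths[5] = 1.0
true_paths[12] = 0.6
measured = shifts @ true_paths

# Least-squares "unmixing": estimate the per-delay amplitudes from the
# single measured signal, separating the two paths again.
recovered, *_ = np.linalg.lstsq(shifts, measured, rcond=None)
top_two_delays = sorted(np.argsort(recovered)[-2:])
print(top_two_delays)
```

With a random code the shift dictionary is well conditioned, so the two delays pop out cleanly; a plain unmodulated pulse would make the columns indistinguishable and the paths impossible to separate, which is the intuition behind encoding information in time.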


Prior to the nano-camera, in 2011, Raskar’s group unveiled a trillion-frame-per-second camera capable of capturing a single pulse of light as it traveled through a scene. The camera was able to achieve this by probing the scene with a femtosecond impulse of light, then using fast laboratory-grade optical equipment to take an image each time. The main drawback of this “femto-camera” was the price tag, with the build cost standing at around $500,000. In contrast, the new “nano-camera” probes the scene with a continuous-wave signal that oscillates at nanosecond periods. This allowed the team to use inexpensive hardware, while achieving a time resolution within one order of magnitude of femtophotography.
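A back-of-the-envelope comparison shows the scale gap between the two probing signals. This compares only how far light travels during each signal's characteristic time, not the cameras' final depth resolution.

```python
# Distance light covers during one femtosecond (femto-camera pulse)
# versus one nanosecond (nano-camera oscillation period).
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

femtosecond = 1e-15
nanosecond = 1e-9

print(f"1 fs -> {SPEED_OF_LIGHT * femtosecond * 1e6:.1f} micrometres")
print(f"1 ns -> {SPEED_OF_LIGHT * nanosecond * 100:.1f} centimetres")
```

Light moves a fraction of a micrometre in a femtosecond but roughly 30 cm in a nanosecond, which is why the cheaper hardware's ability to stay within one order of magnitude of femtophotography's time resolution is notable.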


