Thermal imaging, the basis of many night-vision systems, works by detecting heat. It also tends to provide few details. So imaged objects often appear fuzzy or even ghostly. But such blurry, two-tone imagery may soon become a thing of the past. Researchers have now paired artificial intelligence (AI) with thermal vision. Their new system can deliver crisp, detailed images — even in pitch darkness.
The new tech is costly and still a bit clunky. But one day, it just might help self-driving vehicles navigate better.
The infrared, or heat, images typical of night-vision goggles appear blurry. That's due to "ghosting," where an object's heat overwhelms any details of its texture. (It's like how turning on a light might make it difficult to decipher any printing on a glass bulb.)
Fanglin Bao is a theoretical physicist at Purdue University in West Lafayette, Ind. He was part of a team that used a type of thermal camera that can tell the difference between various infrared hues, or wavelengths. The researchers paired that camera with a computer program. It uses AI to untangle information from the camera’s data. This reveals the temperature, texture and makeup of imaged objects.
The technique can turn dark, nighttime scenes into bright, detailed images. The AI system can even match something it perceives in an image — such as water, sand or a tree — to the color that material would have in daylight. Then it paints those images on the screen in somewhat realistic daytime hues.
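The idea behind that camera-plus-AI pairing can be illustrated with a toy sketch. The team's actual method is far more sophisticated, and everything below is made up for illustration: the three "materials," their emissivity values, the wavelengths and the daylight colors are all hypothetical. The sketch just shows the physics-based principle, which is that different materials emit different fractions of heat at different infrared wavelengths, so comparing a pixel's measured spectrum against candidate materials and temperatures can recover both what the object is and how warm it is.

```python
import numpy as np

def planck(lam, T):
    """Planck blackbody spectral radiance at wavelength lam (m), temperature T (K)."""
    h, c, k = 6.626e-34, 3.0e8, 1.381e-23
    return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * T))

# Hypothetical long-wave infrared bands and made-up emissivity spectra
wavelengths = np.array([8e-6, 10e-6, 12e-6])
materials = {
    "water": np.array([0.96, 0.97, 0.98]),
    "sand":  np.array([0.85, 0.90, 0.84]),
    "tree":  np.array([0.95, 0.92, 0.96]),
}
daylight_colors = {"water": "blue", "sand": "tan", "tree": "green"}

def identify(measured):
    """Brute-force fit: which material at which temperature best explains the pixel?"""
    best = None
    for name, eps in materials.items():
        for T in np.arange(260.0, 320.0, 0.5):       # plausible outdoor temperatures
            model = eps * planck(wavelengths, T)      # emissivity scales blackbody emission
            err = np.sum((measured - model) ** 2)
            if best is None or err < best[0]:
                best = (err, name, T)
    return best[1], best[2]

# Simulate one pixel of 300 K "sand", then recover its identity and temperature
measured = materials["sand"] * planck(wavelengths, 300.0)
name, temp = identify(measured)
print(name, temp, daylight_colors[name])
```

Because each toy material's spectrum has a different shape, only the right material and temperature fit the measurement well, and the recovered label can then be painted in its daytime color.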
The team described this system July 26 in Nature.
“There is no restriction of harsh weather conditions or nighttime scenarios,” says Muhammad Ali Farooq. He’s an electrical engineer at the University of Galway in Ireland who did not take part in the work. That means “you can get very good and very crisp data — even in low lighting,” he says.
Many cameras can measure distances to the objects on which they’re focused. The new camera system can, too, though in a different way and with about the same accuracy. That means it might one day be useful in self-driving vehicles. They have to know how far away something is. That’s what allows a car to automatically brake in time to avoid smashups.
Current self-driving vehicles often gauge distance by bouncing signals off objects. That’s also how radar and sonar work. But self-driving cars might confuse each other if they were all sending out signals at once. The new technique doesn’t beam out signals. So it might be safer to scale up in a world with roads full of self-driving cars, Bao’s team says.
Even so, don’t expect to see cars with this system cruising through busy streets any time soon. Its camera is hefty, measuring about half a meter (1.6 feet) on each side. The whole system also costs more than $1 million, Bao notes. Finally, producing each image takes about a second. That’s way too slow for a self-driving vehicle that needs to respond to road conditions lickety-split.
Still, Bao looks forward to seeing what future versions of this tech might do for autonomous vehicles or robots. He suspects they could soon interpret scenes — and navigate at night — far better than people can today.