
Clearing Up the Picture

A split image of sunny and inclement weather.
Current vision algorithms are largely designed for use in clear conditions; deep learning using convolutional neural networks is now being harnessed to improve visual performance in adverse weather.

Capturing high-quality photos is easier than ever, as filters and image-adjustment tools can enhance images. Yet cameras still struggle to provide a clear image in bad weather, especially in extreme conditions such as heavy rain, fog, or poor lighting at night. Objects in a scene can become hard to see or even invisible, especially when they are far from the lens, and colors are often dulled.

“In rain and snow, you also have motion blur because they are moving,” says Dengxin Dai, a computer vision lecturer at ETH Zurich in Switzerland who coordinated a workshop on all-weather vision at the Conference on Computer Vision and Pattern Recognition (CVPR 2019) in Long Beach, CA. “So the geometry of an object might also get distorted.”

Capturing clear images in all types of weather is vital for several applications. Security cameras installed outdoors, for example, need to detect what is happening in a scene, regardless of the weather. Autonomous cars, which are getting ready to hit the market in the next few years, will rely on cameras and sensors to visualize the vehicle’s surroundings in order to drive safely. “A car needs to accurately recognize objects, pedestrians, traffic lights, other cars, and traffic signs even in fog, rain, snow, or at night,” says Dai.

While current vision algorithms are largely designed for use in clear conditions, deep learning using convolutional neural networks (CNNs) is now being harnessed to improve performance in adverse weather. These algorithms are trained on large data sets, such as collections of at least 10,000 images captured in a weather condition of interest.
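At a high level, such training amounts to fine-tuning a standard CNN-based detector on annotated adverse-weather images. The sketch below is only an illustration of that setup: the choice of a stock torchvision Faster R-CNN, the learning rate, and the fabricated two-image batch are assumptions, not details from the article.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Hypothetical sketch: fine-tune an off-the-shelf detector on images
# captured (or synthesized) in one adverse-weather condition.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.train()
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

# Stand-in batch: real training would draw from ~10,000 annotated photos.
images = [torch.rand(3, 480, 640) for _ in range(2)]
targets = [{"boxes": torch.tensor([[50.0, 60.0, 200.0, 220.0]]),
            "labels": torch.tensor([1])} for _ in images]

loss_dict = model(images, targets)   # detector returns per-task losses
loss = sum(loss_dict.values())
optimizer.zero_grad()
loss.backward()
optimizer.step()
```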

The first challenge is finding enough photos. “If it’s foggy, people don’t take many photos,” says Dai, “so it’s quite hard to actually collect as many [images in] foggy conditions compared to nice weather conditions.”

One approach to get around this is to simulate extreme weather. In recent work, Dai and his colleagues analyzed how fog affects light, and how it would alter a photo. They then created a fog effect and applied it to a set of images taken on clear days, a photo collection that already had been annotated by humans, who identified objects of interest to help train deep learning algorithms. “Annotation is very expensive,” says Dai. “We don’t want to repeat the whole process again and again.”
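Dai’s fog effect is based on how fog attenuates light with distance. A standard way to render that, and a reasonable stand-in for what such a pipeline does, is the atmospheric scattering model: each pixel’s clear-scene color is blended toward a uniform “airlight” according to a transmittance that decays exponentially with depth. The sketch below uses arbitrary parameter values and random arrays in place of a real photo and depth map; the group’s actual renderer is more sophisticated.

```python
import numpy as np

def add_synthetic_fog(image, depth, beta=0.06, airlight=0.92):
    """Apply a simple atmospheric-scattering fog model to a clear image.

    image    : float array in [0, 1], shape (H, W, 3) -- the clear photo
    depth    : float array, shape (H, W), per-pixel distance in meters
    beta     : attenuation coefficient; larger values mean denser fog
    airlight : brightness of the atmospheric light (fog color)
    """
    # Transmittance decays exponentially with distance (Beer-Lambert law).
    transmittance = np.exp(-beta * depth)[..., None]          # (H, W, 1)
    # Blend scene radiance with the airlight according to transmittance.
    foggy = image * transmittance + airlight * (1.0 - transmittance)
    return np.clip(foggy, 0.0, 1.0)

# The same annotated clear-weather image can be re-rendered at several
# fog densities without re-labeling it.
rng = np.random.default_rng(0)
clear = rng.random((480, 640, 3))        # stand-in for a real photo
depth = rng.uniform(5, 300, (480, 640))  # stand-in for a depth map
foggy_versions = [add_synthetic_fog(clear, depth, beta=b)
                  for b in (0.01, 0.03, 0.06)]  # light -> dense fog
```

Because the underlying photo is unchanged, its existing human annotations carry over to every synthetic fog level, which is what makes the approach cheap.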

Dai and his colleagues are also taking inspiration from human vision to train their algorithms. As night falls, our eyes gradually adapt to the change, since light diminishes little by little. Yet our vision may struggle if we are suddenly confronted by a sharp change in brightness, such as walking from a dark room into the sunlight. That’s why the team is using intermediate stages, such as a set of images ranging from light to dense fog, to develop their computer vision models. “Humans can continuously adapt to the environment, so that’s why we also designed the algorithm in such a way,” says Dai.

The team improved its algorithm by training it with simulated foggy images that gradually increased in haziness. The refined system’s ability to recognize objects in foggy photos improved by up to about 12 percentage points, from about 34.6% to 46.8%, on one dataset.

An image of an intersection, top, and the same intersection with fog added digitally.
From the study Curriculum Model Adaptation with Synthetic and Real Data for Semantic Foggy Scene Understanding,
Christos Sakaridis, Dengxin Dai, Luc Van Gool, International Journal of Computer Vision, 2018.
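The gradual light-to-dense adaptation Dai describes can be sketched as a simple training curriculum: fine-tune first on mildly fogged images, then on progressively denser ones. Everything below is a toy stand-in (a tiny segmentation head, random batches, hypothetical fog densities) meant only to show the shape of such a loop, not the authors’ training code.

```python
import torch
import torch.nn as nn

# Minimal stand-in "model"; in practice this would be a full semantic
# segmentation network trained on the synthetic-fog data described above.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(8, 19, 1))   # 19 classes, Cityscapes-style
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def make_foggy_batch(beta, batch_size=4):
    """Placeholder: return (images, labels) rendered at fog density `beta`."""
    images = torch.rand(batch_size, 3, 64, 64)
    labels = torch.randint(0, 19, (batch_size, 64, 64))
    return images, labels

# Curriculum: adapt on light fog first, then on progressively denser fog.
for beta in (0.005, 0.01, 0.02, 0.06):       # hypothetical densities
    for step in range(20):                   # a few steps per stage
        images, labels = make_foggy_batch(beta)
        loss = criterion(model(images), labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```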

Other approaches also are being used to simulate extreme weather. Jean-François Lalonde at Laval University in Quebec, Canada, and his team have been using physics-based algorithms to recreate rain; the parameters can be fine-tuned to mimic different quantities of rain, from a light drizzle to a downpour. The simulated rain is inserted into clear weather images used to train deep learning algorithms to recognize different features, such as pedestrians, in various intensities of rain.
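Physics-based rain rendering models streaks whose density, length, and opacity follow from the rainfall rate. The toy version below simply overlays semi-transparent vertical streaks, with a made-up rain_rate knob controlling how many appear; Lalonde’s renderer is far more faithful to the optics, but the idea of dialing intensity from drizzle to downpour is the same.

```python
import numpy as np

def add_rain_streaks(image, rain_rate=10.0, streak_length=12, rng=None):
    """Overlay simple rain streaks on a clear image.

    image        : float array in [0, 1], shape (H, W, 3)
    rain_rate    : rough proxy for rainfall intensity (more streaks)
    streak_length: streak length in pixels
    """
    rng = rng or np.random.default_rng()
    h, w, _ = image.shape
    n_streaks = int(rain_rate * h * w / 5000)   # density scales with rate
    out = image.copy()
    for _ in range(n_streaks):
        x = rng.integers(0, w)
        y = rng.integers(0, h - streak_length)
        alpha = rng.uniform(0.1, 0.3)           # semi-transparent streak
        out[y:y + streak_length, x] = (
            (1 - alpha) * out[y:y + streak_length, x] + alpha * 0.85
        )
    return np.clip(out, 0.0, 1.0)

# Light drizzle vs. downpour rendered from the same clear-weather photo.
clear = np.random.default_rng(1).random((480, 640, 3))
drizzle = add_rain_streaks(clear, rain_rate=2.0)
downpour = add_rain_streaks(clear, rain_rate=50.0)
```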

In recent work, the researchers found that algorithms trained with their mock rain were better at detecting objects. When they tested their system with 1,000 actual rainy images and another 1,000 taken in clear weather that it hadn’t seen before, the algorithm’s ability to detect objects in rainy conditions improved by 14.9%. And for good weather conditions, it performed just as well. “We want to make sure it still works when it’s clear, and that was indeed the case,” says Lalonde. “It turns out that simulated rain helps in improving the robustness of detectors in real conditions.”

Lalonde and his colleagues are currently trying to improve their synthesized rain to include other effects it might have on a scene. Although their system can recreate realistic-looking raindrops, it does not take into account wetness on the ground, for example, or the reflections created when car headlights are turned on. Scenes also typically get darker when it rains. “There are all kind of very complicated effects that appear in rainy conditions,” says Lalonde, “and those are really hard to simulate with the physics-based model.”

The team is trying to combine its physical model with a deep learning algorithm that could be trained to recreate additional effects that arise when it rains. “We’re trying to generate even more realistic rainy conditions,” says Lalonde.

Bad weather can get even more complex when different conditions occur simultaneously, for instance on a rainy night. Specialized models are being developed for each condition, but ultimately a single model that can handle all weather would be most efficient for applications such as autonomous cars. “Let’s say you have 20 models for 20 scenarios; you want to either combine them or compress the models such that you have a reasonable model that you can fit into hardware,” says Dai.
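One common way to fold several condition-specific networks into a single deployable model is knowledge distillation, in which a compact student is trained to match whichever specialist suits each training image. This is not a method described in the article, just an illustration of the kind of combination and compression Dai mentions; the tiny networks and random data below are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def tiny_net():
    return nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(8, 19, 1))

# Placeholder specialists (e.g., fog, rain, night) and a single student
# meant to replace them all on the vehicle's hardware.
specialists = {"fog": tiny_net(), "rain": tiny_net(), "night": tiny_net()}
student = tiny_net()
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

def distill_step(images, condition):
    """One distillation step: match the student to the right specialist."""
    with torch.no_grad():
        teacher_logits = specialists[condition](images)
    student_logits = student(images)
    # KL divergence between teacher and student class distributions.
    loss = F.kl_div(F.log_softmax(student_logits, dim=1),
                    F.softmax(teacher_logits, dim=1),
                    reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

loss = distill_step(torch.rand(2, 3, 64, 64), condition="rain")
```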

Combining cameras with other sensors will likely play a part, too. Cameras attempt to replicate the physical world as people see it, which is the goal when taking photos. However, according to Dai, the use of cameras may not be the most effective tactic for object recognition, which is the main objective of computer vision systems that operate outdoors. “There might be other sensors or sensor combinations that can give a better recognition or localization accuracy,” says Dai. “At the moment, we are only focusing on software and still try to use RGB cameras, but in the long term we want to see whether we can optimize the hardware as well.”

Sandrine Ceurstemont is a freelance science writer based in London, U.K.
