Cast-GAN: Learning to Remove Colour Cast in Underwater Images


Chau Yi Li and Andrea Cavallaro

Centre for Intelligent Sensing, Queen Mary University of London

Abstract

Underwater images are degraded by blur and colour cast caused by the attenuation of the illuminant in the water medium. To remove the colour cast with neural networks, images of the scene taken under white illumination are needed as reference for training, but are generally unavailable. As an alternative, one can use surrogate reference images taken close to the water surface or degraded images synthesised from reference datasets. However, the former still suffer from colour cast and the latter generally have limited colour diversity. To address these problems, we exploit open data and typical colour distributions of objects to create a synthetic image dataset that reflects degradations naturally occurring in underwater photography. We use this dataset to train Cast-GAN, a Generative Adversarial Network whose loss function includes terms that eliminate artifacts that are typical in underwater images enhanced with neural networks. We compare the enhancement results of Cast-GAN with four state-of-the-art approaches and validate with a subjective evaluation.
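The degradation described above is commonly expressed with a simplified underwater image-formation model, in which each colour channel of the clean image is attenuated by a wavelength-dependent transmission and mixed with veiling (backscattered) light. The sketch below synthesises such a colour cast; the attenuation and veiling-light coefficients are illustrative assumptions, not values from the paper, and this is not the paper's synthesis pipeline.

```python
import numpy as np

def synthesise_underwater(clean, depth, beta=(0.8, 0.3, 0.2), veil=(0.05, 0.35, 0.45)):
    """Simplified underwater image-formation model:
        I_c = J_c * exp(-beta_c * d) + B_c * (1 - exp(-beta_c * d))
    where J is the clean image in [0, 1], d the scene depth in metres,
    beta_c a per-channel attenuation coefficient and B_c the veiling light.
    The default beta/veil values are illustrative, not measured."""
    clean = np.asarray(clean, dtype=np.float64)
    t = np.exp(-np.asarray(beta) * depth)   # per-channel transmission
    B = np.asarray(veil)                    # veiling (backscattered) light
    return clean * t + B * (1.0 - t)        # direct signal + backscatter

# Example: a mid-grey patch viewed through 5 m of water acquires a
# blue-green cast, because red is attenuated fastest.
patch = np.full((4, 4, 3), 0.5)
degraded = synthesise_underwater(patch, depth=5.0)
```

Because the red channel has the largest attenuation coefficient, it loses signal fastest with depth, which is exactly the colour cast the enhancement network learns to invert.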


BibTeX Citation

@inproceedings{Li2020Cast-GAN,
title={Cast-GAN: Learning to Remove Colour Cast in Underwater Images},
author={Li, Chau Yi and Cavallaro, Andrea},
booktitle={IEEE International Conference on Image Processing},
month={October},
year={2020}
}
