Haze removal (or dehazing) is a computer vision process that attempts to remove haze from images. The approach implemented for this project uses the dark channel of the image: for a patch within an image, the dark channel value is the lowest intensity found across all pixels and color channels in that patch.
The haze imaging model is commonly written as I(x) = J(x)t(x) + A(1 − t(x)), where I is the observed image, J is the scene radiance, A is the atmospheric light, and t is the medium transmission.
Sadly, I was unable to implement the soft (alpha) matting part of the paper, though not for lack of trying over a couple of days.
#Haze Removal
To calculate the dark channel, I iterated over every 15x15 patch in the image. For each patch, the minimum value across all pixels and all three color channels becomes the dark channel value for the pixel at the center of that patch.
######Pseudo-Code
%image is already available
rows = size(image, 1);
cols = size(image, 2);
dark_channel = zeros(rows, cols);
for ix = 8:rows-7
for iy = 8:cols-7
% minimum over all pixels and color channels in the 15x15 patch
dark_channel(ix, iy) = find_minimum(image(ix-7:ix+7, iy-7:iy+7, :));
end
end
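As a cross-check, the patch loop above can be sketched in Python/NumPy (my own rough translation, not the project code; `dark_channel` is a hypothetical helper name, and the 15x15 window and per-pixel patch minimum follow the paper):

```python
import numpy as np

def dark_channel(image, patch=15):
    """Dark channel: minimum across color channels, then minimum
    over a patch x patch neighborhood around each pixel."""
    rows, cols, _ = image.shape
    min_rgb = image.min(axis=2)            # minimum across the 3 color channels
    half = patch // 2
    # pad with edge values so border pixels still get a full window
    padded = np.pad(min_rgb, half, mode="edge")
    dark = np.empty((rows, cols))
    for ix in range(rows):
        for iy in range(cols):
            dark[ix, iy] = padded[ix:ix + patch, iy:iy + patch].min()
    return dark
```

This assigns one value per pixel rather than per patch, which keeps overlapping windows from overwriting each other.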
The atmospheric light is estimated from the dark channel and the original image. To ensure we select the brightest pixels that belong to haze rather than to bright objects, we first look at the pixels of the dark channel. The simple approach used in the paper takes the brightest 0.1% of the dark channel's pixels, then selects the maximum value among those same pixels in the original image.
######Pseudo-Code
%image & dark_channel are already available
[rows, cols] = size(dark_channel);
[vector, index] = sort(reshape(dark_channel, rows * cols, 1), 1, 'descend');
% take the brightest 0.1% of the dark channel
limit = round(rows * cols / 1000);
for ix = 1:limit
% convert the linear (column-major) index back to row/column subscripts
vector(ix) = max(image(mod(index(ix)-1, rows)+1, floor((index(ix)-1)/rows)+1, :));
end
atmospheric_light = max(vector(1:limit));
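The same selection step can be sketched in NumPy (again a rough translation with a hypothetical helper name; `image` is an H×W×3 array and `dark` is its dark channel):

```python
import numpy as np

def atmospheric_light(image, dark):
    """Pick the brightest 0.1% of dark-channel pixels, then take the
    maximum color value among those pixels in the original image."""
    rows, cols = dark.shape
    n = max(1, round(rows * cols / 1000))          # top 0.1%, at least one pixel
    flat_idx = np.argsort(dark.ravel())[::-1][:n]  # brightest dark-channel pixels
    ys, xs = np.unravel_index(flat_idx, dark.shape)
    return image[ys, xs].max()                     # brightest value in the haze region
```

Using `unravel_index` sidesteps the manual row/column arithmetic that the MATLAB version needs.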
The transmission is the portion of the light that is not scattered and reaches the camera. Following the paper, it is estimated as t(x) = 1 − ω · (minimum over the 15x15 patch and color channels of I(y)/A).
######Pseudo-Code
rows = size(image, 1);
cols = size(image, 2);
t = zeros(rows, cols);
w = .95;
for ix = 8:rows-7
for iy = 8:cols-7
% t = 1 - w * (dark channel of the patch normalized by the atmospheric light)
t(ix, iy) = calculate_transmission(image(ix-7:ix+7, iy-7:iy+7, :), atmospheric_light, w);
end
end
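The transmission estimate can likewise be sketched in NumPy (a rough sketch with a hypothetical `transmission` helper, assuming a scalar atmospheric light as in the pseudo-code above):

```python
import numpy as np

def transmission(image, A, w=0.95, patch=15):
    """t(x) = 1 - w * dark_channel(I/A): the dark channel of the image
    normalized by the atmospheric light, per the paper's estimate."""
    normalized = image / A                  # assumes a scalar atmospheric light
    rows, cols, _ = image.shape
    min_rgb = normalized.min(axis=2)        # minimum across color channels
    half = patch // 2
    padded = np.pad(min_rgb, half, mode="edge")
    t = np.empty((rows, cols))
    for ix in range(rows):
        for iy in range(cols):
            t[ix, iy] = 1.0 - w * padded[ix:ix + patch, iy:iy + patch].min()
    return t
```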
The paper mentions that ω is a parameter for each 15x15 patch of the image that can be fine-tuned to keep more haze for the distant objects. I did the same thing as the paper and fixed it at 0.95 for all results reported. As an afterthought, since the paper's process is able to generate a depth map, it should be able to reconfigure ω: after generating a first pass of the depth map, you could reprocess the image using the depth information to set ω.
The final step recovers the scene radiance, J(x) = (I(x) − A) / max(t(x), t0) + A, producing the image with the haziness removed; the lower bound t0 = 0.1 on the transmission keeps the division stable where the transmission is nearly zero.
######Pseudo-Code
rows = size(image, 1);
cols = size(image, 2);
r = zeros(rows, cols, 3);
t = transmission;
for ic = 1:3
for ix = 1:rows
for iy = 1:cols
% restrict t to a minimum of 0.1 to avoid amplifying noise in dense haze
r(ix, iy, ic) = (image(ix, iy, ic) - atmospheric_light) / max(t(ix, iy), .1) + atmospheric_light;
end
end
end
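The triple loop above is a per-channel elementwise formula, so it can be written in a few vectorized lines of NumPy (a sketch with a hypothetical `recover_radiance` helper; `t0 = 0.1` matches the clamp in the pseudo-code):

```python
import numpy as np

def recover_radiance(image, t, A, t0=0.1):
    """J(x) = (I(x) - A) / max(t(x), t0) + A, applied per color channel;
    the t0 floor keeps the division stable where transmission is tiny."""
    t_clamped = np.maximum(t, t0)[..., np.newaxis]  # broadcast over the 3 channels
    return (image - A) / t_clamped + A
```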
*[Five result sets, one per test image, each showing: Original, Dark Channel, Transmission, Haze, Final, CVPR Final]*
I attempted the process on a night skyline to see the results on an image that is already very dark. It does a decent job, but it would be good to see the results with alpha matting. Since the fog glowed from all the city light, the output image is overall much darker.
*[Night scene result set: Original, Dark Channel, Transmission, Haze, Final]*
operation (min:sec) | toys | stadium | cones | ny17 | ny12 | ny night scene
--- | --- | --- | --- | --- | --- | ---
dark channel | 0:2.970111 | 0:5.439195 | 0:2.899942 | 0:13.287957 | 0:8.421444 | 3:28.651487
atmospheric | 0:0.017480 | 0:0.023223 | 0:0.011458 | 0:0.056524 | 0:0.031109 | 0:1.098448
transmission | 0:4.333701 | 0:7.725875 | 0:4.136798 | 0:18.308125 | 0:10.956740 | 4:52.034344
radiance | 0:0.016281 | 0:0.033103 | 0:0.015385 | 0:0.139510 | 0:0.048617 | 0:1.840210
total | 0:7.337573 | 0:13.221396 | 0:7.063583 | 0:32.026976 | 0:19.457909 | 8:23.624489
The calculations of the dark channel and the transmission took the longest, which makes sense given their need to iterate over every patch for each color channel.
The results from my method seem to work fine on the skyline images, but it completely failed on the toys. The skylines are all washed out, and I cannot pinpoint the cause. A possible reason is that, since I did not implement the soft matting, the inconsistencies of the dark channel and transmission map have a direct effect on the scene radiance. Some patches of the images are much darker than the rest, but the paper did mention that "since the scene radiance is usually not as bright as the atmospheric light, the image after haze removal looks dim. So, we increase the exposure of J(x) for display."
I like the "haze" images, since they capture the color channel that contributes the dark channel and are interesting to look at.