Vertical stripe fix for 5D Mark III H.264 files

What is this about?

Some users noticed a vertical stripe pattern in 5D Mark III video files, first in regular raw video footage. Some raw converters correct this artifact automatically, so until recently it wasn't a big issue.

After implementing the 3x crop mode, the artifact became visible in H.264 files as well. The previous algorithm, which operates on Bayer raw data, can no longer be used - it needs to be adapted to H.264 files.

The effect is visible only in highlights, not in shadows. If you see vertical lines in the shadows, that's a different issue, so you may stop reading here.

What's causing it?

The 5D Mark III appears to read out 8 columns at a time, in parallel. These 8 columns appear to have different amplifier gains, and that's why the effect is visible in highlights, but not in shadows.
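This multiplicative-gain hypothesis also explains why the stripes only show up in highlights. A minimal NumPy sketch with made-up gain values (the real 5D Mark III amplifier gains are unknown to me):

```python
import numpy as np

# hypothetical gains for one group of 8 columns (~1% mismatch, made up)
gains = 1 + 0.01 * np.array([0.3, -0.5, 0.8, -0.2, 0.6, -0.7, 0.1, -0.4])

shadow    = 0.05 * gains   # dark, uniform patch read through the 8 amplifiers
highlight = 0.90 * gains   # bright, uniform patch

# the absolute column-to-column error scales with the signal level,
# so the stripes only rise above visibility in the highlights
print(np.ptp(shadow))      # ~0.00075: far below visibility
print(np.ptp(highlight))   # ~0.0135: a visible stripe pattern
```

Same relative gain error in both cases, but the absolute error is 18x larger in the bright patch.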

Since:

  1. with regular (non-crop) mode, the artifact was visible in RAW, but not in H.264,
  2. with crop mode, the artifact is visible in H.264, but not in RAW,
  3. the crop mode only changes the sampled area on the sensor, and the pixel binning factors, but leaves all other parameters (resolution, Canon processing and so on) unchanged,

we may suspect that those stripes are present in the raw sensor data, corrected by Canon code in regular (non-crop) H.264, but for crop mode H.264, it would require a recalibration (and reverse engineering to figure out how to do that first). It is also possible that our vertical stripe artifact is amplified by the mismatched calibration.

Enough chit-chat, how to fix it?

You will need:

  • IPython notebook
  • octave (the one prepackaged for your operating system should be fine)
  • the 'image' package from octave-forge
  • octave_kernel for the IPython notebook
  • ffmpeg

Let's load the sample files from here into Octave. We'll extract a single frame from each clip, as uncompressed 8-bit PPM:

In [1]:
%%shell
# note: ffmpeg is very verbose, so I won't display the output here
ffmpeg -ss 0.96 -i 1080p\ Stripes\ 1.mov bad.ppm -y 2>ffmpeg.log
ffmpeg -i 1080p\ Stripes\ 2.mov ref.ppm -y 2>>ffmpeg.log

Octave code follows:

In [2]:
pkg load image
more off
In [3]:
bad = im2double(imread('bad.ppm'));
ref = im2double(imread('ref.ppm'));
figure,imshow(bad(1:5:end,1:5:end,:))
figure,imshow(ref(1:5:end,1:5:end,:))
warning: your version of GraphicsMagick limits images to 16 bits per pixel
warning: called from
    imformats>default_formats at line 256 column 11
    imformats at line 79 column 3
    imageIO at line 106 column 11
    imread at line 106 column 30

The user who reported the issue included both a normal image and a blank wall, which can be used to extract the vertical pattern. Nice!

Let's see a 1:1 crop from the bad image:

In [4]:
imshow(bad(1:400,1:800,:))

Doesn't look very nice.

Let's try to learn the pattern from the reference image (blank wall), using the green channel. It's easier to work with grayscale, isn't it?

In [5]:
% will stretch the image to make the pattern very obvious
ref_g = ref(:,:,2);
imshow(ref_g(1:400,1:800), [])

The reference blank wall is not completely uniform - let's filter out the low frequency components:

In [6]:
% filter out very low frequency components in the horizontal direction
im = ref_g;
imf = imfilter(im, ones(1,100)/100, 'replicate');
figure, imshow((im-imf)(1:400,1:800), [])

The image is more uniform now, but there are still a bunch of artifacts. Let's find out the gain for each column:

In [7]:
% get the vertical pattern
% we already know the pattern is actually a difference in column gains
% assume the filtered image is ideal, and extract per-column gain variations
gain = median(im ./ imf);
plot(gain)
axis tight
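To convince ourselves that this median-of-ratios step really recovers per-column gains, here is a quick synthetic check, translated to Python/NumPy (the moving average and the column-wise median mirror the Octave code; the scene and gain values are made up):

```python
import numpy as np
rng = np.random.default_rng(0)

h, w = 200, 400
true_gain = 1 + 0.01 * rng.standard_normal(w)       # hypothetical per-column gains
scene = 0.5 + 0.001 * rng.standard_normal((h, w))   # nearly uniform "wall"
im = scene * true_gain

# horizontal moving average, the equivalent of the imfilter step
kernel = np.ones(100) / 100
imf = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode='same'), 1, im)

# per-column median of im/imf estimates the gain, robust to outliers
gain = np.median(im / imf, axis=0)

# away from the borders, the estimate tracks the true gains closely
err = np.abs(gain[60:-60] - true_gain[60:-60])
print(err.max())
```

The comparison skips the borders because `np.convolve` zero-pads there; `imfilter`'s 'replicate' option handles the borders more gracefully in the Octave version.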

Let's put it together:

In [8]:
function cor = identify_pattern_gray(im)
  % filter out very low frequency components in the horizontal direction
  imf = imfilter(im, ones(1,100)/100, 'replicate');

  % get the vertical pattern
  % we already know the pattern is actually a difference in column gains
  % assume the filtered image is ideal, and extract per-column gain variations
  gain = median(im ./ imf);
  
  % expand the correction data (vector, one entry per column)
  % to match the size of the image
  cor = bsxfun(@plus, im*0, gain);
end
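The `bsxfun(@plus, im*0, gain)` line simply replicates the per-column gain row vector to the full image size. In NumPy terms, this expansion is plain broadcasting (a small sketch, not part of the original notebook):

```python
import numpy as np

im = np.zeros((4, 6))               # stand-in image (any h x w)
gain = np.linspace(0.99, 1.01, 6)   # one gain value per column

# Octave: cor = bsxfun(@plus, im*0, gain)
# NumPy broadcasts the row vector across all rows automatically
cor = np.zeros_like(im) + gain
print(cor.shape)   # (4, 6): gain replicated down every row
```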
In [9]:
cor_g = identify_pattern_gray(ref_g);
imshow(cor_g(1:400,1:800), []);

That is the correction pattern - stretched to see it better.

You probably noticed it doesn't repeat every 8 columns, as you would expect from my initial description. In raw it actually does; however, the 1920-column H.264 image appears to be upsampled from 1904 columns, and this explains the not-exactly-periodic pattern you have just seen.

The above image, like all the other crops on this page, is 1:1, not resized. What you are seeing is the actual pattern that affects the image.
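To see how a resize destroys exact periodicity, here is a small NumPy sketch: an exactly 8-periodic gain pattern at an assumed native width of 1904 columns, resampled to 1920 (linear interpolation is my guess; Canon's actual scaler is unknown, and the gain values below are made up):

```python
import numpy as np

# exactly 8-periodic column gains at the assumed native width of 1904
base = np.array([0.997, 1.002, 0.999, 1.003, 0.998, 1.001, 0.996, 1.004])
native = np.tile(base, 1904 // 8)      # period-8 pattern, length 1904

# resample to 1920 columns with linear interpolation
x_old = np.arange(1904)
x_new = np.linspace(0, 1903, 1920)
resampled = np.interp(x_new, x_old, native)

# the native pattern repeats every 8 columns exactly...
print(np.abs(native[:-8] - native[8:]).max())        # exactly 0
# ...but after resampling, a shift of 8 output columns corresponds to
# ~7.93 input columns, so the pattern no longer lines up with itself
print(np.abs(resampled[:-8] - resampled[8:]).max())  # clearly nonzero
```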

Let's check the corrected green channel:

In [10]:
figure, imshow((ref_g ./ cor_g)(1:400,1:800), []);

Looks like it worked!

What are those ugly blocks? H.264 compression artifacts :)

Let's mask them with a bit of noise:

In [11]:
imshow((ref_g ./ cor_g + randn(size(ref_g))/200)(1:400,1:800), [])

Which one do you prefer?


Forget the noise for now; let's check the other color channels:

In [12]:
ref_r = ref(:,:,1);
ref_b = ref(:,:,3);
cor_r = identify_pattern_gray(ref_r);
cor_b = identify_pattern_gray(ref_b);
figure,imshow([cor_r(1:100,1:800); cor_g(1:100,1:800); cor_b(1:100,1:800)], []);

So it looks like every channel needs its own correction. No big deal.

Let's put it together.

In [13]:
function cor = identify_pattern(im)
  for c = 1:size(im,3)
    cor(:,:,c) = identify_pattern_gray(im(:,:,c));
  end
end
In [14]:
cor = identify_pattern(ref);
fix = bad ./ cor;

imshow(fix(1:400,1:800,:))

Not bad :)

Let's pixel-peep the blank wall a little more, on the green channel:

In [15]:
imshow(fix(1:200,501:1000,2), [])

Let's add the noise again:

In [16]:
fixn = fix + randn(size(fix))/200;
figure,imshow(fixn(1:400,1:800,:))
figure,imshow(fixn(1:200,501:1000,2), [min(fix(1:200,501:1000,2)(:)) max(fix(1:200,501:1000,2)(:))])

Looks fine. Now let's save the two results to files.

In [17]:
imwrite(fix, 'fix.jpg', 'quality', 99)
imwrite(fixn, 'fixn.jpg', 'quality', 99)

Now let's process all the frames from our short sample clip. I like the noisy version better, so I'll just use that.

In [18]:
%%shell
ffmpeg -i 1080p\ Stripes\ 1.mov frame%03d.ppm -y 2>ffmpeg.log
In [19]:
for i = 1:25
   fin = sprintf('frame%03d.ppm', i)
   fout = sprintf('fixed%03d.ppm', i);
   im = im2double(imread(fin));
   fix = im ./ cor + randn(size(im))/200;
   imwrite(fix, fout);
end
fin = frame001.ppm
fin = frame002.ppm
fin = frame003.ppm
fin = frame004.ppm
fin = frame005.ppm
fin = frame006.ppm
fin = frame007.ppm
fin = frame008.ppm
fin = frame009.ppm
fin = frame010.ppm
fin = frame011.ppm
fin = frame012.ppm
fin = frame013.ppm
fin = frame014.ppm
fin = frame015.ppm
fin = frame016.ppm
fin = frame017.ppm
fin = frame018.ppm
fin = frame019.ppm
fin = frame020.ppm
fin = frame021.ppm
fin = frame022.ppm
fin = frame023.ppm
fin = frame024.ppm
fin = frame025.ppm

Assemble the frames (feel free to change the codec, I'm not familiar with them):

In [20]:
%%shell
ffmpeg -i fixed%03d.ppm -pix_fmt yuv420p -vcodec rawvideo video.avi -y 2>ffmpeg.log

And now let's admire our masterpiece :)

In [21]:
%%shell
ffplay -loop 100 video.avi 2>ffmpeg.log

That's all, folks!

Hope this notebook contains all you need to integrate the method into your video workflow, and to try new ideas.