Improved error metric for image compression (2)

These photographs have been omitted from the improved error metric for image compression section, so that it can load faster; they are collected here.

tiger (left: error 2.4%, 33 components; right: error 9.5%, 16 components, 50% noisy blocks)

The photographs shown have been obtained with a transform coder, which is quite similar to Jpeg compression, except that the final Huffman coding stage has been omitted. (In a few months, Huffman coding will get its own section.) On the left, enough components have been retained to keep the error in individual pixels under 5% for all blocks. For the photographs on the right, smooth blocks receive the same processing, but only the three major components are recorded for noisy blocks.
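The per-block scheme just described can be sketched as follows. This is my own illustration, not the program used for these photographs: the DCT implementation, function names, and the way the 5% threshold and the three-component limit interact are all assumptions based on the description above.

```python
import numpy as np

BLOCK = 8

def dct_matrix(n=BLOCK):
    # Orthonormal DCT-II basis; rows are the cosine basis vectors.
    j, k = np.meshgrid(np.arange(n), np.arange(n))
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * j + 1) * k / (2 * n))
    m[0, :] /= np.sqrt(2.0)
    return m

D = dct_matrix()

def compress_block(block, max_err=0.05, max_components=None):
    # Transform the block, then keep the largest coefficients one by
    # one until every pixel of the reconstruction is within max_err
    # (pixel values assumed in [0, 1]), or until max_components is
    # reached (three, say, for a block classified as noisy).
    coeffs = D @ block @ D.T
    order = np.argsort(np.abs(coeffs), axis=None)[::-1]
    kept = np.zeros_like(coeffs)
    n_kept = 0
    for idx in order:
        kept.flat[idx] = coeffs.flat[idx]
        n_kept += 1
        recon = D.T @ kept @ D
        if np.max(np.abs(recon - block)) < max_err:
            break
        if max_components is not None and n_kept >= max_components:
            break
    return kept, n_kept
```

A smooth block passes the 5% test with very few coefficients; calling `compress_block(block, max_err=0.0, max_components=3)` mimics the treatment of a noisy block.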

baboon (left: error 2.6%, 31 components; right: error 8.8%, 11 components, 65% noisy blocks)

Of course, transform coding is block oriented, not pixel oriented, but then this section is not about transform coders; they will get their own page in good time. This page is about degrading the reconstruction of noisy blocks, hopefully without the degradation becoming apparent. The average number of components for each image is shown alongside it, and directly determines the compression ratio.
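Some back-of-the-envelope arithmetic (my own, not from the page) shows how the average component count drives the ratio; the estimate ignores the overhead of recording which coefficients were kept, and the smooth-block average of 29 below is derived from the tiger figures, not stated anywhere.

```python
def average_components(k_smooth, k_noisy, noisy_fraction):
    # Mean number of retained coefficients per block when a given
    # fraction of the blocks is classified as noisy.
    return (1 - noisy_fraction) * k_smooth + noisy_fraction * k_noisy

def estimated_ratio(avg_components, block_pixels=64):
    # Crude estimate: stored coefficients vs. raw pixels per 8x8
    # block, ignoring index overhead and any entropy coding.
    return block_pixels / avg_components
```

With 50% noisy blocks kept at 3 components and smooth blocks averaging 29, the mean comes out at 16 components per block, i.e. roughly a factor-of-four saving before entropy coding.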

monarch (left: error 2.2%, 29 components; right: error 4.9%, 22 components, 25% noisy blocks)

For the baboon and tiger images there is a pronounced difference in error figures (and some loss of contrast), but the subjective quality seems much the same. Obviously some photographs are not noisy, and then the savings are smaller. If the block error is weighted by block noisiness, the error figures become more comparable across different images.
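One plausible reading of that weighting (the page does not give a formula, so this is an assumption) is to discount each block's error by its noisiness, since errors hidden in noisy blocks are less visible:

```python
import numpy as np

def weighted_error(block_errors, noisiness):
    # Down-weight errors in noisy blocks: a block with noisiness 1
    # contributes nothing, a perfectly smooth block contributes fully.
    # noisiness values are assumed to lie in [0, 1).
    e = np.asarray(block_errors, dtype=float)
    w = 1.0 - np.asarray(noisiness, dtype=float)
    return float(np.sum(w * e) / np.sum(w))
```

With all-zero noisiness this reduces to the plain mean; as the noisy blocks' weights shrink, the metric approaches the error of the smooth blocks alone.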

Lenna (left: error 2.4%, 26 components; right: error 4.0%, 18 components, 44% noisy blocks)

You don't want to see what Lenna looks like when (left) the 4% error is spread more evenly among all blocks, rather than mainly among noisy blocks, and (right) only the three dominant coefficients are retained for all blocks, including smooth ones. OK, I am not showing more images like these; the point is clear.

(left: error 4.0%, 14 components; right: error 9.2%, 3 components)

For a fair comparison, the photographs need to be transmitted losslessly. The lossless Gif format has been used, which downloads faster than 24-bit bitmap files but is clearly not as compact as Jpeg. (The Gif format will also get its own page soon enough!)

For these images, 8x8 blocks have been used, and the noise threshold has been lowered to 0.4. I will show the program in the Jpeg section, but the executable has been added to this page so you can tweak the noise margin, the reproduction quality for smooth blocks, and the number of components kept for noisy blocks. You can also use the test images in last month's pack, if you have downloaded it. The mandrill image is especially appropriate.
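The page never defines its noisiness measure, so here is one plausible stand-in for experimenting with a 0.4 threshold; the gradient-based formula is entirely my assumption.

```python
import numpy as np

def noisiness(block):
    # A hypothetical measure: mean absolute difference between
    # horizontally and vertically adjacent pixels, normalized by the
    # block's dynamic range. Smooth gradients score low, textured or
    # noisy blocks score high.
    b = np.asarray(block, dtype=float)
    dx = np.abs(np.diff(b, axis=1)).mean()
    dy = np.abs(np.diff(b, axis=0)).mean()
    span = b.max() - b.min()
    return (dx + dy) / (2 * span) if span > 0 else 0.0

def is_noisy(block, threshold=0.4):
    return noisiness(block) > threshold
```

A constant block and a gentle gradient fall well below the threshold, while a checkerboard-like texture is flagged as noisy.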

Many block oriented methods need postprocessing to remove block artifacts from decompressed pictures when the compression ratio is aggressive. Some blurring is usually introduced at block boundaries, though of course this means that some legitimate edges may lose a degree of sharpness too. This is only my first attempt at removing blockiness, and I am not proud of that simple routine. Hopefully, though, it does not prevent a comparison between techniques which distinguish between smooth and noisy blocks and those which don't.
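For reference, the crudest possible deblocking pass looks something like this; it is a sketch of the general idea, not the routine used for these images.

```python
import numpy as np

def deblock(img, block=8):
    # Replace each pixel pair straddling a block boundary with its
    # average: the seam softens, but a genuine edge that happens to
    # lie on a boundary loses sharpness too.
    out = np.asarray(img, dtype=float).copy()
    h, w = out.shape
    for x in range(block, w, block):          # vertical boundaries
        mid = (out[:, x - 1] + out[:, x]) / 2
        out[:, x - 1] = mid
        out[:, x] = mid
    for y in range(block, h, block):          # horizontal boundaries
        mid = (out[y - 1, :] + out[y, :]) / 2
        out[y - 1, :] = mid
        out[y, :] = mid
    return out
```

Real deblocking filters are adaptive: they smooth only where the discontinuity is small enough to be an artifact rather than a feature.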

For fractal coders (provided the block size is uniform), the gains will come not in the compressed file size but in reduced compression time.

Actually, I have cheated! The photographs with the greater error are in fact on the left. Maybe you did not notice?
