I recently had to convert images to pure black-and-white pixels. This post is a summary of my tests and what I learned; my main source is the excellent ‘Dither’ Wikipedia page. The main difficulty I faced was converting an image of undefined length, produced on the fly by a scanning process.
To print pictures on thermal paper with reasonably good rendering, they must be converted according to the printer’s limitations. These are quite simple: it can only print black pixels (and white ones, since the paper is usually white).
The first step is to get a gray-scale image. There are plenty of well-known methods, from simply averaging the Red/Green/Blue channels to more advanced ones where each channel is weighted to simulate a particular spectral response (such as that of black & white film).
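As a minimal sketch of this first step, here is a hypothetical helper (not the post’s original code) that converts a packed ARGB pixel to a gray level using the common ITU-R BT.601 luma weights, which approximate the eye’s response better than a plain average:

```java
public class Gray {
    // Extracts the channels from a packed 0xAARRGGBB pixel and returns
    // a gray level in 0..255 using the BT.601 luma weights.
    static int toGray(int argb) {
        int r = (argb >> 16) & 0xFF;
        int g = (argb >> 8) & 0xFF;
        int b = argb & 0xFF;
        // Weights sum to 1.0, so white stays 255 and black stays 0.
        return (int) Math.round(0.299 * r + 0.587 * g + 0.114 * b);
    }
}
```

With `java.awt.image.BufferedImage`, the input of `toGray` would typically come from `getRGB(x, y)`.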
The second step can be achieved by several algorithms. To convert gray-scale pixels to pure black or white, a rather easy method is ordered dithering, but it leaves noticeable patterns in the resulting image. Another common method is error diffusion, and in particular the Floyd-Steinberg dithering algorithm. This is the one I chose to study, as I found the resulting images interesting.
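To illustrate why ordered dithering produces visible patterns, here is a hypothetical sketch using a 4×4 Bayer matrix: each pixel is compared against a fixed, position-dependent threshold, and that fixed tiling is exactly what shows up as a regular cross-hatch texture.

```java
public class OrderedDither {
    // Classic 4x4 Bayer matrix (values 0..15).
    static final int[][] BAYER4 = {
        { 0,  8,  2, 10},
        {12,  4, 14,  6},
        { 3, 11,  1,  9},
        {15,  7, 13,  5}
    };

    // Returns true for a black pixel, false for a white one.
    // The threshold depends only on the pixel position, not its neighbors.
    static boolean toBlack(int gray, int x, int y) {
        int threshold = (BAYER4[y % 4][x % 4] * 255) / 16;
        return gray <= threshold;
    }
}
```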
The implementations we can find on the Internet usually rely on a buffer to store and propagate the error of each converted pixel. This buffer has the same size as the image and is filled in while iterating over all the image lines.
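A minimal sketch of that classic full-buffer approach (my own illustration, not the post’s code): each pixel is snapped to black or white, and the quantization error is spread to the four not-yet-processed neighbors with the standard Floyd-Steinberg weights 7/16, 3/16, 5/16, 1/16.

```java
public class FloydSteinberg {
    // Input: gray levels 0..255 as gray[y][x]; output: true = black pixel.
    static boolean[][] dither(int[][] gray, int width, int height) {
        float[][] err = new float[height][width];   // full-image error buffer
        boolean[][] black = new boolean[height][width];
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                float value = gray[y][x] + err[y][x];
                // Quantize to the nearest extreme (0 or 255).
                boolean isBlack = value < 128;
                black[y][x] = isBlack;
                float e = value - (isBlack ? 0 : 255);
                // Diffuse the error to unprocessed neighbors.
                if (x + 1 < width)         err[y][x + 1]     += e * 7 / 16;
                if (y + 1 < height) {
                    if (x > 0)             err[y + 1][x - 1] += e * 3 / 16;
                    err[y + 1][x] += e * 5 / 16;
                    if (x + 1 < width)     err[y + 1][x + 1] += e * 1 / 16;
                }
            }
        }
        return black;
    }
}
```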
Since receipt thermal printers use paper rolls, there is virtually no limit on image length. If we want to convert the image on the fly and print it straight away, we cannot rely on a fixed, full-image-sized buffer. That’s why I experimented with a conversion algorithm based on Floyd-Steinberg, but using only a small rotating buffer. This ‘line’ buffer can be as small as one line plus one pixel.
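Here is a hypothetical sketch of that streaming idea (names are mine, and for clarity it keeps two line-sized error arrays rather than the tighter line-plus-one-pixel circular buffer described above, which is the same principle): each scanned row is dithered as soon as it arrives, the error flowing into the next row is accumulated in a second array, and the two arrays are rotated after every row.

```java
import java.util.Arrays;

public class StreamingDither {
    private final int width;
    private float[] curErr;   // errors diffused into the row being processed
    private float[] nextErr;  // errors diffused into the row not yet received

    StreamingDither(int width) {
        this.width = width;
        this.curErr = new float[width];
        this.nextErr = new float[width];
    }

    // Converts one incoming gray row (0..255) to black/white,
    // then rotates the two error buffers. Memory use is O(width),
    // independent of how long the scanned image turns out to be.
    boolean[] ditherRow(int[] grayRow) {
        boolean[] black = new boolean[width];
        for (int x = 0; x < width; x++) {
            float value = grayRow[x] + curErr[x];
            black[x] = value < 128;
            float e = value - (black[x] ? 0 : 255);
            if (x + 1 < width) curErr[x + 1]  += e * 7 / 16;
            if (x > 0)         nextErr[x - 1] += e * 3 / 16;
            nextErr[x] += e * 5 / 16;
            if (x + 1 < width) nextErr[x + 1] += e * 1 / 16;
        }
        // Rotate: next row's errors become current; reuse the old array.
        float[] tmp = curErr;
        curErr = nextErr;
        Arrays.fill(tmp, 0f);
        nextErr = tmp;
        return black;
    }
}
```

Each returned row can be sent to the printer immediately, which is what makes this suitable for images of unknown length.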
Beyond the code itself, I try to explain the principle with this GIF:
And the Java code used to generate it:
Some very good readings about dithering: