Bitmap mode shades in the tones of an image, as opposed to just tracing the outlines. For some images, this produces vastly superior output compared to outlines. Faces are a prime example. When outlined, the edges of shadows look the same as the edges of objects, so a mess of lines gets drawn across the face. With shading, shadows and lower-contrast features remain in the image, making the face much more recognizable.
It's my face! |
The Concept
How do I generate levels of greyscale when my pen only draws a line of constant darkness? The answer: squiggles. Each pixel forms a 2D square on the paper; the more of that square the pen shades in, the darker the pixel. This idea can be clearly seen in one of the earlier test pictures, where ramp function patterns are drawn on the paper to fill each square. I then tried square waves, and finally triangle waves.
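As a minimal sketch of the idea (the real path generation lives in rasterize.py, linked in the appendix), darker pixels get more wave cycles packed into their square. The max_cycles parameter here is a hypothetical stand-in for however many strokes fit in a pixel before it goes solid dark.

```python
def cycles_for_grey(grey, max_cycles=8):
    """Map a grey value in [0, 1] (0 = black, 1 = white) to the
    number of squiggle cycles to draw inside one pixel square.

    A hypothetical stand-in for the real mapping: darker pixels
    get more cycles, so more of the square is covered in ink.
    """
    darkness = 1.0 - grey
    return round(darkness * max_cycles)

# A black pixel gets all 8 cycles, mid-grey gets 4, white gets none.
print(cycles_for_grey(0.0), cycles_for_grey(0.5), cycles_for_grey(1.0))
```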
Since the shading depends on the width of the pen stroke, the pixel size on paper becomes a factor. Smaller pixels make smoother borders around regions in the output image, but also decrease the number of squiggles that can fit in a pixel before the pixel becomes solid dark. So at one extreme, the pixel is completely filled by a single stroke of the pen, while at the other extreme, the entire image is a single pixel.
Through testing, I found that the best compromise between greyscale levels and output resolution is an image width of 120 to 160 pixels for a 7.5 inch (19 cm) square drawing area.
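To put those numbers in physical terms: 19 cm divided by 120 pixels is roughly 1.6 mm per pixel square, and at 160 pixels roughly 1.2 mm. Assuming a pen line a few tenths of a millimetre wide, only a handful of strokes fit in a square before it goes solid dark, which is exactly the greyscale-versus-resolution tradeoff above.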
Tweaking Waveforms
First came experimenting with different waveforms. After the ramp functions, I tried square waves. However, these behaved oddly wherever the squares drifted in and out of phase with the rows above and below them, so the idea was quickly abandoned; apologies for the lack of pictures. What I settled on was triangle waves, which stay in phase like the ramp functions, yet are more uniform like square waves, closer to the ideal of sine waves.
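Here is a sketch of how a triangle-wave squiggle for one pixel might be generated as pen coordinates (the coordinate layout is an assumption; the real version is in rasterize.py). The vertices land at evenly spaced positions across the pixel, which is what keeps rows with different squiggle counts phase-aligned at pixel boundaries.

```python
def triangle_squiggle(x0, y0, size, cycles):
    """Pen path (list of (x, y) vertices) for one pixel square.

    The wave zig-zags between the top and bottom of the pixel;
    vertices land at evenly spaced x positions, so neighbouring
    rows with different cycle counts still meet up at pixel
    boundaries instead of drifting out of phase.
    """
    points = [(x0, y0)]
    if cycles == 0:
        # A white pixel: just a straight pass, no shading.
        points.append((x0 + size, y0))
        return points
    half_cycles = 2 * cycles
    step = size / half_cycles
    for i in range(1, half_cycles + 1):
        y = y0 + size if i % 2 else y0  # alternate top/bottom vertex
        points.append((x0 + i * step, y))
    return points

# Two cycles across a 1.6 mm pixel starting at the origin.
print(triangle_squiggle(0.0, 0.0, 1.6, 2))
```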
In an attempt to squeeze out more greyscale levels, I considered sharing squiggles across multiple pixels, up to four at a time. That way I could get even less dense pen strokes, and therefore lighter shades. The results were not so great, though. The image below allowed up to four combined pixels; a rough sketch of the grouping idea follows the pictures.
The pixel combining can be observed near the center of the drawing. |
A series of test drawings under various brightness settings. Scribbling in the corner is unrelated. |
Just below the center is an example of amplitude modulation of the squiggle. |
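The pixel-combining idea, as a rough sketch; the grouping rule, threshold, and row-by-row layout here are all hypothetical, standing in for whatever the real heuristic was.

```python
def combine_light_pixels(row, threshold=0.8, max_group=4):
    """Group runs of adjacent light pixels in one row so a single,
    sparser squiggle can be shared across the whole group.

    'row' is a list of grey values in [0, 1]; the return value is
    a list of (start_index, width) groups. Each group would feed
    the squiggle generator with its grey spread over 'width'
    pixels of pen travel, giving a lighter effective shade.
    """
    groups = []
    i = 0
    while i < len(row):
        width = 1
        if row[i] >= threshold:
            # Extend the group across further light pixels, up to four.
            while (i + width < len(row) and width < max_group
                   and row[i + width] >= threshold):
                width += 1
        groups.append((i, width))
        i += width
    return groups

print(combine_light_pixels([0.9, 0.9, 0.9, 0.9, 0.9, 0.2, 0.85]))
# -> [(0, 4), (4, 1), (5, 1), (6, 1)]
```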
Color Mapping
Originally, color was mapped linearly to the number of squiggles in each pixel. This made for pretty dark images, as seen below.
Can barely make out differences in the greys. |
To correct for gamma, simply raise the pixel value, mapped from 0 to 1, to the (1/2.2) power to get the actual color intensity. This has the effect of making the output image brighter. Technically, any image processing should be done on gamma-decoded values, for proper mixing of pixel values. However, that would involve a lot of converting back and forth between gamma-encoded and decoded values, so I let it slide. I do the scaling before gamma correction, and gamma correct once, right before converting the pixel value to squiggles. This makes the picture a little brighter.
The same picture, with gamma correction. A little brighter. |
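As a sketch of that pipeline, using the linear darkness-to-squiggles mapping from earlier and a made-up max_squiggles of 8: gamma correction happens exactly once, right before the squiggle conversion.

```python
def pixel_to_squiggles(value, gamma=2.2, max_squiggles=8):
    """Convert one scaled pixel value in [0, 1] to a squiggle count.

    Raising to 1/2.2 brightens the mid-tones; the corrected value
    is then mapped linearly to the number of squiggles.
    """
    corrected = value ** (1.0 / gamma)      # gamma-correct (brighten)
    darkness = 1.0 - corrected              # darker pixel -> more ink
    return round(darkness * max_squiggles)  # linear map to squiggles

# Mid-grey, 0.5: uncorrected it would get 4 of 8 squiggles;
# corrected, 0.5 ** (1/2.2) is about 0.73, so it only gets 2.
print(pixel_to_squiggles(0.5))
```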
Software
The image processing software was written in Python, using the OpenCV libraries for image processing and simple GUIs. One slider lets the user change the output resolution until the aliasing looks alright, and a second slider lets the user change the gamma value from the default of 2.2, to change the brightness of an image.
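The actual script is linked in the appendix; a minimal sketch of the two-slider preview with OpenCV trackbars might look like the following. The window and trackbar names are made up, and gamma is stored as an integer gamma x 100 (trackbars only hold integers), matching the "220 / 100" readout in the screenshots below.

```python
import cv2

img = cv2.imread('face.png', cv2.IMREAD_GRAYSCALE)

def redraw(_=0):
    width = max(cv2.getTrackbarPos('width', 'preview'), 10)
    gamma = max(cv2.getTrackbarPos('gamma x100', 'preview'), 1) / 100.0
    height = int(img.shape[0] * width / img.shape[1])
    # Downscale to the output resolution, then gamma-correct.
    small = cv2.resize(img, (width, height), interpolation=cv2.INTER_AREA)
    corrected = ((small / 255.0) ** (1.0 / gamma) * 255).astype('uint8')
    # Blow the preview back up with nearest-neighbour so pixels stay blocky.
    big = cv2.resize(corrected, (img.shape[1], img.shape[0]),
                     interpolation=cv2.INTER_NEAREST)
    cv2.imshow('preview', big)

cv2.namedWindow('preview')
cv2.createTrackbar('width', 'preview', 120, 200, redraw)
cv2.createTrackbar('gamma x100', 'preview', 220, 400, redraw)
redraw()
cv2.waitKey(0)
```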
There is a subtlety in the image preview. While the screen can display all 256 levels of greyscale, the robot can only print a much smaller number of grey levels, and that number varies as a function of pixel size. This can be especially misleading in lighter regions, where intensities as different as 0.85 and 0.7 both print as a full-white pixel, and in darker regions, where seemingly different shades end up drawn the same.
To fix this, the output pixel values are grouped into buckets based on the number of squiggles in that pixel. This way, the preview has the same number of greyscale levels as the printed image. With some funky mapping, the colors on screen are closer to the colors as printed, which makes it much easier to adjust the brightness and scaling and predict the actual results. Following are screenshots of the program in action after bucketing. You can see that there are significantly fewer than 256 levels of grey.
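A sketch of the bucketing, reusing the hypothetical linear darkness-to-squiggles mapping from above: quantize every preview value to the grey it would actually print as.

```python
import numpy as np

def bucket_preview(values, max_squiggles=8):
    """Quantize preview greys in [0, 1] to the levels the robot
    can actually print: one grey level per possible squiggle count.
    """
    darkness = 1.0 - values
    squiggles = np.round(darkness * max_squiggles)  # what would be drawn
    return 1.0 - squiggles / max_squiggles          # grey it prints as

# 256 screen greys collapse to just max_squiggles + 1 printable levels.
greys = np.linspace(0.0, 1.0, 256)
print(np.unique(bucket_preview(greys)))  # 9 distinct values
```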
Default scaling of 120 pixels and gamma of 2.2 (220 / 100). |
Darker image. |
Brighter image. |
Appendix
- Code to convert image to a list of squiggles for the robot: https://github.com/aaronbot3000/deltadraw/blob/master/vectorizer/rasterize.py
- Web album of Pokemon drawn by Pythagoras: https://picasaweb.google.com/aaron.y.fan/PythagorasPokemon?authuser=0&feat=directlink