Monday, October 24, 2011

Pythagoras: Vectorizing Software

Programming points into the robot by hand is annoying at best, and horribly inelegant and time-consuming at worst. What's better is having the computer generate all these fine points, so all I have to do is say, "Robot! Draw!" and it does.

This program vectorizes raster images, but doesn't quite process (read: not at all) vector images, such as SVG or DXF. That is for a later project.


OpenCV is a really dandy computer vision library originally headed by Intel. It has since been open sourced and made freely available to the public, with interfaces for C/C++ and Python, an Android port, GPU support, and a work-in-progress iOS port. The library implements just about every computer vision algorithm you can think of. For the vectorizer, I use Canny edge detection, contour finding, and polygonal approximation of curves.

Take Note

This guide assumes you already know how to program and how to fiddle with OpenCV. If not, there are plenty of programming tutorials and beginner's guides to OpenCV that do a better job than I would.

TL; WR (Too Long, Won't Read)

Resize the image to around 800 pixels across, then apply a Gaussian blur and Canny edge detection. Fiddle with the thresholds and blur until the edge image looks right. Run findContours, then approxPolyDP (polygon approximation). Increase the approximation accuracy until the output looks about right. Stream the polygon points to the robot over serial or your favorite communication protocol.

Links to source code in the appendix.

Start: Acquire Image and Resize

Why resize the source image? After all, if it's big, there's more detail, great. If it's small, it processes faster, great.

If the source image is too big, say greater than 1200 pixels across, then there is too much detail. Since the image processing algorithms run only once, processing speed is no issue, but the delta robot can only plot so fast. At larger sizes, very small, insignificant features show up, creating a large number of points to plot, slowing down the drawing and cluttering the image. Unless, of course, you want that kind of precision for a very detailed image.

If the source image is too small, what ends up happening is basically plotting at low resolution. The manipulator positions are quantized to the resolution of the input image: low-res input, low-res output. So even if the input image starts small, resizing it to a larger size gives better results.

I have found the optimal image size for lower-precision delta robots (such as version 1) to be around 400 pixels, and for higher-precision delta robots (such as version 2) to be around 800 to 1200 pixels, depending on the input image.

Finally, make the image grayscale, either by loading it as grayscale in the first place or by converting it with cvtColor. CV algorithms love grayscale images. This is the source image I'm using.
yay Lena

Next: Blur and Edge Detect

Exactly as it says: run a Gaussian blur and then Canny edge detection on the image to get a binary image of pure, one-pixel-wide lines representing all the detected edges. Why blur? Because Canny edge detection will find any and all lines in the image, regardless of the content or "importance" of the line. Without blur, the edge image looks like this.
Too much blur, and you lose too much data, and get an edge image like this.
A happy medium is nice, though it differs depending on the picture. For this image I used a seven-pixel blur kernel (not sure what the sigma was, sorry) to get this.
Canny edge detection also has two variables that control the hysteresis of the found lines: a high and a low threshold. The high threshold is the minimum "strength" an edge needs to start appearing in the edge image; the low threshold is the minimum strength it needs to continue as a line. Drop below the low threshold, and the line stops.

Finally: Contour Finding and Polygon Approximation

Currently, the edges are just white pixels on a black image. They need to be grouped into lines. This is where the function findContours comes in. It finds all the groups of pixels that can become contours and joins each into a unified contour. It has several ways to store the contours, such as in a hierarchy showing which contours are inside other contours, or returning only the outlines of things. In this case we want all the contours in any order, so we pass it CV_RETR_LIST to get a flat list of contours.

We aren't done yet. Contours aren't nice straight lines that can be linearly approximated. The last step is approxPolyDP, that is, polygonal approximation. Give it the list of contours, and it gives you back a list of polygons, each a list of points: exactly what the robot needs. There is one adjustable value: the precision of the approximation. A coarse approximation gives very "shapy" images, such as the one below.
A fine approximation approaches an exact match to the edge image, but it can add many, many extra points to the polygons. For example, the image below contains 23,000 points, while for a typical drawing I aim for no more than 3,000 points.
With some adjustment, I achieved a 2,115-point image that still remains quite faithful to the original edge image.
Now that we have all these points, it's a simple matter of mapping pixel values to real-world inches (or {milli,centi,}meters). Of course, make sure you maintain the aspect ratio, or you will get stretched-out images like this.
One more thing to consider: if you flipped your coordinate axes like I did, with the Z axis pointing down, your images come out mirrored about the Y axis. Simply flip them back.
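The mapping itself needs no OpenCV at all. A sketch in plain Python (the 8-inch drawing width and the Z-down mirror fix are assumptions to adapt to your own robot):

```python
def to_workspace(points_px, img_w, img_h, width_in=8.0, flip_y=True):
    """Map pixel coordinates to inches using one scale factor on both axes,
    which preserves the aspect ratio. flip_y un-mirrors a Z-down frame."""
    scale = width_in / float(img_w)
    out = []
    for x, y in points_px:
        yi = y * scale
        if flip_y:
            yi = img_h * scale - yi   # mirror back about the Y axis
        out.append((x * scale, yi))
    return out

# map the two opposite corners of an 800x600 image onto an 8-inch-wide drawing
corners = to_workspace([(0, 0), (800, 600)], img_w=800, img_h=600)
```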

One more thing

Since the robot interpolates between points, extra traverse points need to be added to the stream: one to traverse to the start of each polygon, and one to raise the pen in preparation for traversing to the next polygon.
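One way to sketch that step (the pen heights are made-up numbers; the real values depend on the machine):

```python
PEN_UP, PEN_DOWN = 1.0, 0.0   # assumed Z heights, in inches

def with_traverses(polylines):
    """Flatten polylines into one (x, y, z) point stream, adding a pen-up
    move to each polyline's start and a pen lift after its end."""
    path = []
    for poly in polylines:
        path.append((poly[0][0], poly[0][1], PEN_UP))     # travel, pen raised
        path.extend((x, y, PEN_DOWN) for x, y in poly)    # draw the polyline
        path.append((poly[-1][0], poly[-1][1], PEN_UP))   # lift before next
    return path

path = with_traverses([[(0, 0), (1, 0)], [(2, 2), (3, 3)]])
```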

In the next post

Getting the data to the robot.


Source image Lena:

Implemented vectorizer code in Python (warning: dirty, dirty, hacked together code):
