This program vectorizes raster images, but doesn't quite process (read: not at all) vector images, such as SVG or DXF. That is for a later project.
OpenCV is a really dandy computer vision library originally headed by Intel. It has since been open sourced and is freely available, with interfaces for C/C++, Python, and Android, GPU-accelerated modules, and a work-in-progress iOS port. The library implements just about every computer vision algorithm you can think of. For the vectorizer, I use Canny edge detection, contour finding, and polygonal approximation of curves.
This guide assumes you already know how to program and fiddle with OpenCV. If not, there are plenty of other websites that teach programming, and beginner's OpenCV tutorials that do a better job than I would.
TL; WR (Too Long, Won't Read)
Resize the image to around 800 pixels, apply Gaussian blur and Canny Edge Detection. Fiddle with thresholds and blurs until the edge image looks right. Run findContours, then approxPolyDP (polygon approximation). Increase approximation accuracy until output looks about right. Stream polygon points to robot over serial or your favorite communication protocol.
Links to source code in the appendix.
Start: Acquire Image and Resize
Why resize your source image? After all, if it's big, there's more detail, great. If it's small, it processes faster, great.
If the source image is too big, say greater than 1200 pixels across, then there is too much detail. Since the image processing algorithms run only once, processing speed is no issue, but the delta robot can only plot so fast. At larger sizes, very small, insignificant features show up, creating a large number of points to plot, slowing down the drawing and cluttering the image. Unless, of course, you want that kind of precision for a very detailed image.
If the source image is too small, what ends up happening is basically plotting at low resolution. The manipulator positions are quantized based on the resolution of the input image: low-res input, low-res output. So even if the input image is small, resizing it to a larger image produces better results.
I have found the optimal image size for lower-precision delta robots (such as version 1) to be around 400 pixels, and for higher-precision delta robots (such as version 2) to be around 800 to 1200 pixels, depending on the input image.
Finally, make it grayscale, either by loading the image as grayscale, or by converting it with cvtColor (or just pulling out the green channel). CV algorithms love grayscale images. This is the source image I'm using.
Next: Blur and Edge Detect
Exactly as it says: run a Gaussian blur and then Canny edge detection on the image to get a binary image of pure, one-pixel-wide lines representing all the detected edges. Why blur? Because Canny edge detection will find any and all lines in the image, regardless of the content or "importance" of the line. Without blur, the edge image looks like this.
Finally: Contour Finding and Polygon Approximation
Currently, all the edges are just white pixels on a black image. They need to be grouped into lines, and this is where the function FindContours comes in. It finds all the groups of pixels that can become contours and joins them into unified contours. It has several modes for storing the contours, such as in a hierarchy showing which contours are inside other contours, or finding only the outlines of things. In this case we want all the contours in any order, so we pass it CV_RETR_LIST to get a flat list of contours.
We aren't done yet. Contours aren't nice straight lines that can be linearly interpolated. The last step is approxPolyDP, that is, polygonal approximation. Give it the list of contours, and it gives you back a list of polygons, each a list of points: exactly what the robot needs. There is one adjustable value, the precision of the approximation. A coarse approximation gives very "shapy" images, such as below.
One more thing
Since the robot interpolates between points, extra traverse points need to be added before sending: one to traverse to the start of each polygon, and one to raise the pen in preparation for traversing to the next polygon.
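One way to splice those traverse points in, assuming a made-up command format of (x, y, pen_down) tuples; the actual protocol the robot speaks is the subject of the next post:

```python
# Hypothetical command format: (x, y, pen_down).
def add_traverse_points(polygons):
    commands = []
    for poly in polygons:
        x0, y0 = poly[0]
        commands.append((x0, y0, False))   # pen up: traverse to polygon start
        for x, y in poly:
            commands.append((x, y, True))  # pen down: draw the polygon
        xe, ye = poly[-1]
        commands.append((xe, ye, False))   # pen up: ready for the next traverse
    return commands

cmds = add_traverse_points([[(0, 0), (10, 0), (10, 10)]])
```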
In the next post
Getting the data to the robot.
Source image Lena: http://sipi.usc.edu/database/database.php?volume=misc&image=12#top
Implemented vectorizer code in Python (warning: dirty, dirty, hacked together code): https://github.com/aaronbot3000/deltadraw/tree/master/vectorizer