Sunday, April 1, 2012

Progress on the Velociryder V4

Circuit Board

After some Eagling:
Blank space for your comfort.
Using Dorkbot PDX's PCB printing service, kapow in real life!


The EAGLE (version 6) board files can be found in the appendix at the bottom of the page. Unfortunately, the finished boards didn't arrive in time for my trip.

Mechanicals

In the meantime, I learned a lesson about the difference between mathematically ideal chain sprockets and real-world chain sprockets.

Good sprocket on the left, bad sprocket on the right.
The mathematically ideal sprocket had extra-tall teeth and spacing that was too narrow. The chain could only fit over a few teeth before they became misaligned. Foo, wasted a bunch of steel.
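For reference, the standard roller chain relationship (a textbook formula, not anything specific to this build) ties the tooth spacing to the pitch circle:

$$PD = \frac{p}{\sin(180^\circ / N)}$$

where $p$ is the chain pitch and $N$ is the number of teeth. The teeth have to sit on that circle with valleys matched to the roller diameter; getting that clearance right is apparently where my "ideal" profile went wrong.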

But once the good sprockets were made, they got some ghetto beveling with the drill and belt sander, as learned from Jamison.


Otherwise, the build went smoothly.

All the waterjet parts
Frame assembled
And add wheels.
Wheels, motors, shafts, and duck shaft clamps were borrowed from Velociryder V2.

The battery goes in the spot on top.

With the deck placed.
Since I didn't have the PCB during the build, I breadboarded an equivalent circuit using sensors from Velociryder V1 and an Arduino. I also confirmed that I did not screw up the strain gauge circuitry. Yay!
While in the picture I'm using a switching regulator to bring the main battery's 22.2 V down to 5 V for logic, I had previously used linear regulators. Three 7805s in parallel did the trick for a few hours before they died. Just enough time for a quick demo to MTV and the Inventure Prize judges. The Law of Demos (see below) didn't curse me this time!
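For a rough sense of why the linears cooked (the 0.3 A logic draw below is a guess, not a measurement): a linear regulator burns the entire voltage drop as heat,

$$P_{\text{diss}} = (V_{\text{in}} - V_{\text{out}})\,I \approx (22.2 - 5)\,\text{V} \times 0.3\,\text{A} \approx 5.2\,\text{W},$$

which is well over a watt per regulator even split three ways, with no heatsinks. The switching regulator sidesteps the problem by converting most of that power instead of dissipating it.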

To be continued as I accumulate more pictures.

Appendix:

Code: https://github.com/aaronbot3000/velociryder/tree/master/arduino
Solidworks: https://github.com/aaronbot3000/velociryder/tree/master/solidworks%20model
PCB: https://github.com/aaronbot3000/velociryder/tree/master/board

Law of Demos: During a demo, nothing will cooperate. Things that should work will not, and things that should not work will.

Tuesday, March 27, 2012

Strain gauges

In Velociryder versions 1-3, turning was achieved with a tilting rear section. This tilting section prevented the back foot from helping the rider balance forward and backward. I would not stand for this in version 4. I planned to use an actual skateboard deck and strain gauges to measure the tilt of the rider, which minimizes moving parts and should make the board easier to ride. Not only that, but knowing the weight of the rider can improve the board's response.

Part 1: Acquiring Strain Gauges

After some Googling and eBaying, I found that strain gauges and pressure sensors come in either tiny or extra large: the 1-3 pound range or industrial strength. All were too expensive. Then I realized digital bathroom scales must have something inside to measure weight. And since they're designed to precisely measure human weights and not break under the heaviest of people, they are perfect. A quick search turned up digital bathroom scales going on eBay for a dollar.

Kilograms only? Good enough!
Disassembly followed. The strain gauges look like this.
It is supported by the side prongs. Pressure is applied on the top of the middle prong.

Part 2: Analysis

Electrically, they look like a potentiometer.


The outer blue and white wires measure a constant resistance, in this case 2 kOhm, while the middle red wire is the wiper, with roughly 1 kOhm between red and blue and between red and white. Pressing the gauge is equivalent to turning the potentiometer, though only by a very slight amount. When five volts was applied to the outer wires, the middle wire deviated by about 4 millivolts over a range of 100-ish pounds. Since I had four gauges, I used two pairs, one for the left side and one for the right. The power supplies for each pair were wired in opposition, so pressing on a pair of strain gauges causes one output voltage to go up and the other to go down.

Left strain gauge/potentiometer goes up under pressure, right side goes down.
This creates the biggest difference to amplify for your buck.

Electrically, it forms a Wheatstone bridge. Measuring the voltage across the outputs gives a reading that varies linearly with the pressure applied.
A Wheatstone bridge. Voltage is measured in the middle.
The two strain gauges fully connected. Remember, one is connected upside down to get a differential voltage.
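For reference, the textbook bridge output, with the two gauges forming the two half-bridges, is

$$V_{\text{out}} = V_{\text{in}}\left(\frac{R_2}{R_1 + R_2} - \frac{R_4}{R_3 + R_4}\right),$$

which is zero when both wipers sit at their midpoints. Because one gauge is flipped, pressure pushes one ratio up and the other down, so the small swings add at the output instead of cancelling.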



Part 3: Using Them

The tiny voltage difference needs to be beefed up. In comes a differential amplifier!
One channel of the differential amplifier with zero point adjustment and low pass filters. The Velociryder uses two, for the right and left side.
This is an almost standard diff amp configuration, with a couple of extra bits. The capacitors C8 and C10 form (very) low-pass filters to get rid of noise. Instead of a constant resistor on the non-inverting input, a potentiometer is used so any zero offset can be tuned out. This converts the 0-4 mV range to 0-3 V, an excellent range for a microcontroller's ADC.
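The textbook transfer function for this topology, with matched resistor pairs (the component values below are illustrative, not what's on the board), is

$$V_{\text{out}} = \frac{R_f}{R_{\text{in}}}\,(V_+ - V_-),$$

so getting a few millivolts of bridge swing up to a 0-3 V ADC range calls for a gain in the hundreds, e.g. $R_f = 750\,\text{k}\Omega$ against $R_{\text{in}} = 1\,\text{k}\Omega$ for a gain of 750.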

Protips

Since the op-amp will likely be used in single-supply (unipolar) mode, use a rail-to-rail op-amp so the output won't crap out near 0 and 3 volts.

Thursday, March 1, 2012

Velociryder V4!

I have all these parts and nothing to velociryde on.

First, we had Velociryder V1.

It actually worked.
Then, we had Velociryder V2 for the Inventure Prize.
Woo waterjet aluminum.
This one worked too!
We kinda had Velociryder V3, ambitious but never completed.
I couldn't find a picture, so here's a render. It didn't work.
Now, we will have Velociryder V4!
With 20% more height!

Giving a good view of the encoders and motors.
From the side, showing the small sprocket for the encoder.

New and improved, with:
  • Digital accelerometer and gyroscope, fewer analog signals to corrupt!
  • No pivoting rear section whatsoever; strain gauges for steering!
  • Actual skateboard deck to stand on!
  • Overengineered 80/20 frame for no structural failures!
  • PIC24H core for double the speed of an Arduino!
  • Quadrature encoders for real motor control!
  • 5 Ah longpack (that's one more amp-hour than before)!
A bonus effect of using strain gauges to measure steering is that they can also measure the user's weight. With this and wheel encoder information, I can use actual inverted pendulum state space control, instead of PID.
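As a rough sketch of what state space control buys over PID (the state vector, gains, and units here are placeholders, not the actual Velociryder firmware), the controller boils down to one gain vector multiplying all of the measured states at once:

```python
import numpy as np

# Placeholder state: [tilt angle (rad), tilt rate (rad/s),
#                     wheel position (m), wheel speed (m/s)].
# In practice K would come from LQR or pole placement on the
# rider + board inverted pendulum model, rescaled using the
# rider weight measured by the strain gauges.
K = np.array([-35.0, -4.0, -1.5, -2.5])  # made-up gains

def state_feedback(x):
    """Full state feedback: one dot product replaces the PID loop."""
    u = -K @ x                       # motor command
    return float(np.clip(u, -1.0, 1.0))

# Example: leaning forward 0.05 rad while rolling forward at 0.2 m/s
print(state_feedback(np.array([0.05, 0.0, 0.0, 0.2])))
```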

In the time that I haven't been posting updates, I created a control board, complete with a DC regulator, super differential amplifiers for the strain gauges, and sensors.

Ooh, aah. And a logo! The actual size is 3.75" x 1.00"
I have almost all the parts and two weeks to go from computer drawings to Velociryding, programming included, for a certain event and a certain trip. Boy, am I hoping my circuit design is correct.

There are still a couple of kinks I hope will work themselves out. The first is that the incredibly wide motors are probably going to hit the ground.
Derp.
The second is that the strain gauge turning is completely untested. I hope I can get a large enough difference between the weight on the right and left sides to be usable.

Welp, here goes another build marathon. I thought I was done with them after the Inventure Prize last semester.

Appendix

Thursday, January 5, 2012

Pythagoras: New Calibration Method

A new, more accurate method of calibrating the stepper's initial position. Previously, I could calculate the arm's position at any point on the potentiometer's range. However, this required quite a bit of measurement, interpolation, and extrapolation.

Instead, I measure the potentiometer's value once, at a known angle: in this case, 45 degrees up from horizontal. Upon startup, the arms each move until their potentiometers hit the set value, which means each arm is at (almost) exactly 45 degrees up from horizontal. Basically, the potentiometers act like optical gates.
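A minimal sketch of the idea, with hypothetical step_motor() and read_pot() helpers standing in for the real microcontroller code:

```python
CAL_POT_VALUE = 512  # ADC reading recorded once with the arm at 45 degrees
                     # (placeholder value)

def home_arm(step_motor, read_pot, direction=+1):
    """Step the arm until the potentiometer crosses the calibration value.

    The pot is only trusted as a threshold detector, optical-gate style,
    not as an absolute angle sensor.
    """
    while read_pot() < CAL_POT_VALUE:
        step_motor(direction)  # one step at a time toward 45 degrees
    # The arm is now at (almost) exactly 45 degrees up from horizontal;
    # everything after this is dead reckoning by counting steps.
```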

The improvement on calibration can be seen in the vertical traverses. Previously, the robot would deviate from a vertical line by around 1/3 of an inch. Now, the deviation is less than 1/32 of an inch.

A new Pythagoras video

With 50% more effort put into it. A time lapse video of some intense shading.


Thursday, December 29, 2011

Pythagoras: Structural Update

Solving the Flex
After dealing with the flexing aluminum supports that allowed the whole robot to wobble for so long, I finally went ahead and bought steel tubing. The legs are also longer, allowing expansion of the drawing field. Unfortunately, they are also taller, so for now, the paper needs a little booster to get it to the optimal drawing height.

Phone books to lift the paper to the pen

The tubing is connected by gusset plates, made of .03" galvanized steel.
Slots in the plates for adjusting exact lengths

Now the bot is so sturdy I could probably stand on it and it would hold.

Pythagoras Bitmap Mode: How It's Done

Introduction

Bitmap mode involves shading in the color of images, as opposed to just drawing the outlines. For some images, this produces vastly superior output compared to outlines. Faces are a prime example. When outlined, the edges of shadows look the same as the edges of objects, causing a mess of lines to be drawn on a face. With shading, shadows and lower-contrast features remain in the image, making the face more recognizable.
It's my face!
The downside is that the output resolution is greatly reduced, and the time it takes to draw is greatly increased.

The Concept

How do I generate levels of greyscale when my pen only creates a line of constant darkness? The answer: squiggles. Each pixel forms a 2D square on the paper. The more that square is shaded in by the pen, the darker the pixel. This idea can be clearly seen in one of the earlier test pictures. Ramp function patterns are drawn on the paper to fill each square. Then I tested square waves, then triangle waves.

Since the shading depends on the width of the pen stroke, the pixel size on paper becomes a factor. Smaller pixels make smoother borders around regions in the output image, but also decrease the number of squiggles that can fit into a pixel before it becomes solid dark. At one extreme, the pixel is completely filled by a single stroke of the pen; at the other extreme, the entire image is a single pixel.
Through testing, I found the best compromise between greyscale levels and output resolution is to have an image width of between 120 and 160 pixels, for a 7.5 inch (19 cm) square drawing area.

Tweaking Waveforms

First was experimenting with different waveforms. After the ramp functions, I tried square waves. However, these created odd behavior whenever the squares shifted in and out of phase with the rows above and below them. I quickly abandoned them; apologies for the lack of pictures. What I settled on was triangle waves, which stay in phase like the ramp functions yet are more uniform like square waves, closer to the ideal of sine waves.
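A sketch of how one pixel's worth of triangle-wave squiggle could be generated (the function and its parameters are mine for illustration, not the actual Pythagoras code; the amplitude knob comes into play later for the lighter shades):

```python
def pixel_squiggle(x0, y0, size, cycles, amplitude=1.0, steps_per_cycle=8):
    """Return (x, y) waypoints tracing a triangle wave across one pixel.

    cycles    -- triangle periods packed into the pixel (more = darker)
    amplitude -- 0..1 fraction of the pixel height the wave swings over
    """
    points = []
    n = max(1, cycles) * steps_per_cycle
    half = size * amplitude / 2.0
    for i in range(n + 1):
        x = x0 + size * i / n
        phase = (i % steps_per_cycle) / steps_per_cycle  # 0..1 within a period
        tri = 4 * abs(phase - 0.5) - 1                   # triangle wave, -1..1
        points.append((x, y0 + size / 2.0 + half * tri))
    return points
```

Because every pixel's wave starts at the same phase, neighboring rows stay lined up, which is exactly the property the square waves kept violating.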

In an attempt to squeeze out more greyscale levels, I considered sharing squiggles across multiple pixels, up to four. This way I could get even sparser pen strokes and therefore lighter intensities. The results were not so great, though. This image allowed up to four combined pixels.
The pixel combining can be observed near the center of the drawing.
The pixel combining created awkwardly shaded areas in the image, even when combining fewer pixels.
A series of test drawings under various brightness settings. Scribbling in the corner is unrelated.
Pixel combining didn't create evenly lighter areas, just awkward gaps. The solution is amplitude modulation: to make a lighter region, the amplitude of the squiggle is reduced instead of the number of squiggles. This can be seen clearly below.
Just below the center is an example of amplitude modulation of the squiggle.
This fills the gap in shading between one squiggle per pixel and no squiggle, making nicer transitions for lighter colors.

Color Mapping

Originally, color was mapped linearly to the number of squiggles in each pixel. This made for pretty dark images, as seen below.
Can barely make out differences in the greys.
The first step to fixing this is to add gamma correction. Since human eyes are more sensitive to differences in darker colors than in lighter ones, color values in images are stored according to a power law that devotes more bits to the darker colors, instead of highlights that eyes cannot differentiate anyway.
To correct for gamma, simply raise the pixel value, mapped from 0 to 1, to the (1/2.2) power to get the actual color intensity. This has the effect of making the output image brighter. Technically, any image processing should be done after gamma correction, for proper mixing of pixel values. However, that would involve a lot of converting between gamma-encoded and decoded values, so I let it slide. I do scaling before gamma correction and gamma correct once, right before converting the pixel value to squiggles. This makes the picture a little brighter.
The same picture, with gamma correction. A little brighter.
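In numpy terms, that correction is a one-liner (shown here on its own; in the real script it happens right before the squiggle conversion):

```python
import numpy as np

def gamma_correct(pixels, gamma=2.2):
    """Map stored 0-255 pixel values to 0-1 intensities, raised to 1/gamma."""
    return (pixels.astype(np.float32) / 255.0) ** (1.0 / gamma)
```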
The next step in color mapping is to fix the mapping of intensity to squiggles. The mapping is not linear, and it depends on the size of the pixel. From experimenting, I found that intensities of 0.7 and below map linearly to squiggle density, while the region between 0.7 and 0.85 is handled by amplitude modulation of the squiggles. Testing was done by printing gradients and comparing the results to the computer image.
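Put together, the mapping looks roughly like this (the maximum squiggle count is a stand-in for whatever the chosen pixel size allows, and the result feeds the amplitude knob from the earlier waveform sketch):

```python
def intensity_to_squiggle(intensity, max_cycles=6):
    """Map a 0-1 gamma-corrected intensity to (cycles, amplitude).

    Below 0.7, squiggle density varies; between 0.7 and 0.85 only the
    amplitude does; above 0.85 the pixel is left blank.
    """
    if intensity >= 0.85:
        return 0, 0.0                               # white: no squiggle
    if intensity >= 0.7:
        amp = (0.85 - intensity) / (0.85 - 0.7)     # taper the amplitude
        return 1, amp
    cycles = 1 + round((0.7 - intensity) / 0.7 * (max_cycles - 1))
    return cycles, 1.0                              # darker = more squiggles
```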


Software

The image processing software was written in Python, using the OpenCV libraries for image processing and simple GUIs. One slider lets the user change the output resolution until the aliasing looks alright, and a second slider lets the user change the gamma value from the default of 2.2 to adjust the brightness of the image.
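The sliders are just OpenCV trackbars, roughly like this (window name and ranges are illustrative):

```python
import cv2

def on_change(_):
    pass  # the preview is regenerated in the main loop below

cv2.namedWindow('preview')
# Output width in pixels; 120-160 is the sweet spot from earlier
cv2.createTrackbar('width', 'preview', 120, 200, on_change)
# Gamma x 100, so the default of 220 means gamma = 2.2
cv2.createTrackbar('gamma', 'preview', 220, 400, on_change)

while cv2.waitKey(50) != 27:  # Esc quits
    width = max(cv2.getTrackbarPos('width', 'preview'), 1)
    gamma = cv2.getTrackbarPos('gamma', 'preview') / 100.0
    # ... rescale, gamma correct, bucket, then cv2.imshow('preview', result)
```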

There is a subtlety in the image preview. While the screen can display the full 256 levels of greyscale, the robot can only print a much smaller range, which varies as a function of pixel size. This can be especially misleading for lighter regions, where fully white pixels cover a whole band of intensities above the amplitude modulation region, and for darker regions, where seemingly different shades end up printed the same.

To fix this, the output pixel values are grouped into buckets based on the number of squiggles in that pixel. This way, the output image has the same number of greyscale levels as the printed image. With some funky mapping, the colors on screen are closer to the colors as printed, which makes it much easier to adjust the brightness and scaling and predict the actual results. Following are screenshots of the program in action after bucketing. You can see that there are significantly fewer than 256 levels of grey.
Default scaling of 120 pixels and gamma of 2.2 (220 / 100).
Darker image.
Brighter image.
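The bucketing mentioned above amounts to quantizing each preview pixel to its printable grey level (the level count below is a stand-in, since in reality it depends on pixel size, and the funky screen-matching mapping is skipped here):

```python
import numpy as np

def bucket_preview(intensities, levels=8):
    """Quantize 0-1 intensities down to the printable number of grey levels."""
    idx = np.clip((intensities * levels).astype(int), 0, levels - 1)
    return idx.astype(np.float32) / (levels - 1)  # back to 0-1 for display
```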


Appendix