Sunday, May 10, 2020

Using a 3D printer as a CNC plastic welding machine

Intro:
A friend and I were discussing fusing plastic sheets and creating complex patterns and specific curves with the seams. This reminded me of a thought I had a while back to use a 3D printer as a CNC plastic fusing machine since the position is computer controlled and the print head can be set to specific temperatures for melting the plastic. There is a technique to use a soldering iron to fuse plastic sheets by hand, and this would be the robotic equivalent.

If this works with such thin plastic and on a smallish scale, then it could be a good method of producing inflatable pneumatic actuators for soft robots (which is something I've been trying to figure out how to do cheaply and easily for a little while now).

The Plan:
Before being able to use my 3D printer (Monoprice Maker Select Plus) as a CNC plastic fusing machine, I'd need to find out the correct temperature and speed to use on my material of choice: disposable Subway sandwich bags, the thin ones for a single sub.
According to Google, a thin plastic shopping bag is about 0.5 mil, which comes out to about 12.7 microns. That is thin, and according to my friend it might be harder to work with because of how thin it is (I can confirm, having tried to fuse this plastic manually with an unregulated soldering iron before), so this test is absolutely necessary.

The idea I came up with for the test looked something like this in order to test a variety of speeds and temperatures:
The rows would keep on increasing in temperature as they went up, and the speed would keep on getting faster.

I decided this wouldn't be able to fit enough rows on my tiny printer, so I made it more like a grid of temperatures with 12 lines each, starting at a slow speed and ending at a fast speed (although this was later reversed to start fast and end slow).

If I was going to melt unknown (probably LDPE) plastic with my print head, I didn't want to contaminate it or risk it in any way, so I added 2 layers of aluminum foil to the print head and secured it with a wire loop.

Then I cut out a cardboard holder for the plastic and clamped it onto the build plate with binder clips. It's a sandwich: a cardboard square underneath the plastic sheets, the sheets themselves, then the cardboard frame on top.
The cardboard on the bottom also keeps the print head from scratching the build plate if something goes wrong; the worst that would happen is the plastic gets ripped and the cardboard ends up with indentations in it.

The Software:
I wrote the control software for this in Python 2.7 (I know, I know, Python 2.7 is dead, but until the python command stops bringing up version 2.7 in the Ubuntu terminal, then I'll probably keep on using it, and no I don't want to change the link to point to Python 3 😛). The repo is at https://github.com/koppanyh/Plastic-Fuser
All it has to do is generate the G-code to be sent to the printer and make sure the lines are within the cardboard frame, at the proper height, temperature, position, and everything.
I spent an entire day writing it, which is sad because I thought this little experiment would take about half a day and be done, not 1.5 days, of which more than a day was spent just on the software.

I tried to make a nice interface, so the output is color coded (errors are even red and everything) and the calibration section was designed so you could type in only what you wanted and skip what was already good.

Since in my testing the calibration data didn't change often (only if you messed around with the cardboard frame), it made sense to be able to save and restore it from a file so you wouldn't have to go through the calibration multiple times.

The calibration step asks you to input the coords for a specific corner, like "upper left", and presents a prompt to type in X's value along with a default, which is the last X value you typed in. If you type in a value, it sets X to that value; if you hit enter, it reuses the previous value; and if you type #, it moves on to the next corner. After X is entered, it asks for Y with the same input convention. The Z axis stays at 10mm the whole time so the head hovers over the plastic.
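
A minimal sketch of that prompt convention (the function shape is my own, not necessarily what's in the repo):

def ask(label, previous):
    # Type a number to set it, hit enter to reuse the previous value,
    # or type # to move on to the next corner.
    raw = raw_input("%s [%s]: " % (label, previous))
    if raw == "#":
        return previous, True   # done with this axis, move on
    if raw == "":
        return previous, False  # reuse the previous value
    return float(raw), False    # new value

Usage would look like x, done = ask("X", x), looping until done comes back true.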

Once all the corners are specified, it calculates the center by averaging all the X coordinates for the center X and averaging all the Y coordinates for the center Y. It looks like this (green point) and is better than the red point because it's the position that is more or less the farthest from all the edges at once (the red one can get really close to the edges if you squish the square in just the right way).
Then the software visits all 5 points and asks for the Z height at each one. The prompt for each Z works the same as for X or Y: you can enter a new value, reuse the previous one, or move on to the next action.
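
The center calculation itself is just an average of the four corners, something like this (names are mine):

def center_of(corners):
    # corners = [(x, y), ...] for the 4 calibrated corners
    cx = sum(x for x, y in corners) / 4.0
    cy = sum(y for x, y in corners) / 4.0
    return (cx, cy)

# the 5 probe points that get Z values: the 4 corners plus the center
# probe_points = corners + [center_of(corners)]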

The software then calculates and instructs the printer to draw lines at the positions and temperatures and speeds detailed in the grid image above.

To make matters complicated, if the build platform quad is not made of right angles, then the software calculates the paths so they are squished in the same manner as the work surface. So if the top is wider than the bottom (an upside-down trapezoid), the lines will be angled so they diverge, following that expansion.
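
One way to get that squishing is bilinear interpolation between the four corners; this is my reconstruction of the idea, not necessarily the repo's exact code:

def quad_point(ul, ur, ll, lr, u, v):
    # ul/ur/ll/lr are the calibrated (x, y) corners; u and v are 0..1
    # coordinates on an ideal square that get mapped onto the real quad
    top    = (ul[0] + (ur[0] - ul[0]) * u, ul[1] + (ur[1] - ul[1]) * u)
    bottom = (ll[0] + (lr[0] - ll[0]) * u, ll[1] + (lr[1] - ll[1]) * u)
    # blend the top and bottom edge points by v
    return (top[0] + (bottom[0] - top[0]) * v,
            top[1] + (bottom[1] - top[1]) * v)

If the quad is a perfect square this reduces to plain scaling; if the top edge is wider, the lines fan out exactly like the trapezoid case described above.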

To make things even more complicated, the cardboard piece at the bottom was warped rather than flat, so I added code to attempt "bed leveling" in software to correct the Z height for this. It worked OK, but I realized a major flaw: the software calculates the Z for the starting point and the ending point, then draws a 3D line directly between the two. That line probably does not follow the surface of the cardboard when the surface is modeled as a pyramid with the 5 points. My poorly illustrated drawing attempts to show this. On the left you have the pyramid with the line going from one side to the other. In an ideal world the line would climb up the starting face, then across the middle face, then down the ending face (depicted on the right by the line that goes over the pyramid cross-section). In reality it goes straight through the pyramid (the line that goes through the cross-section). Fortunately the difference in heights is around a millimeter or less, so this inaccuracy isn't a big deal since the cardboard is a bit flexible.
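
The straightforward patch would be to subdivide each line into short segments and recompute Z at every intermediate point, so the path hugs the modeled surface. A sketch, assuming a hypothetical surface_z(x, y) helper that interpolates Z from the 5 calibration points:

def follow_surface(start, end, steps=10):
    # start and end are (x, y); returns (x, y, z) waypoints that
    # approximate the warped surface instead of cutting through it
    points = []
    for i in range(steps + 1):
        t = i / float(steps)
        x = start[0] + (end[0] - start[0]) * t
        y = start[1] + (end[1] - start[1]) * t
        points.append((x, y, surface_z(x, y)))  # surface_z is assumed
    return points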

The software was given a PRODUCTION flag: when set to true it outputs over the serial port to the printer, and when false it outputs to a debug.gcode file. This G-code file was viewed in https://ncviewer.com to make sure the paths were correct and to see if the warp compensation was working well enough. I had to fix a few little bugs dealing with the coordinate system, but for the most part it worked more or less the first time. Good thing I could preview the movements before actually using the printer.
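
The switch itself can be as small as this (a sketch; a real sender also has to wait for the printer's "ok" replies, which I'm skipping, and the port and baud rate are assumptions):

import serial  # pyserial, only needed in production mode

PRODUCTION = False

if PRODUCTION:
    port = serial.Serial("/dev/ttyUSB0", 115200)
    send = lambda line: port.write(line + "\n")
else:
    debug = open("debug.gcode", "w")
    send = lambda line: debug.write(line + "\n")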

The First Test:
With the software done, it was finally time to run the first actual test. There was a pretest where I ran the software with a +10 offset on the Z axis (so it wouldn't touch the plastic) and the thermals turned off, just so I could watch the path get traced out once. It looked good, so I undid those little changes to run the real deal.

This test would be as planned: 9 thermal regions with 12 lines each. The temperature would start at 115°C and increase by 15°C for each following region, ending at 235°C in the last region. The lines in each region would start at 11994mm/min and be decremented by 954mm/min per line, so the 12th line would be drawn at 1500mm/min.
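
Generating that grid is mostly M109 (set hotend temperature and wait) plus G1 moves at a feed rate. A stripped-down sketch; the placement numbers are placeholders, not my real calibrated coordinates, and there's no warp compensation here:

def grid_gcode():
    g = ["G21", "G90"]                             # millimeters, absolute coords
    for zone in range(9):
        g.append("M109 S%d" % (115 + 15 * zone))   # zone temperature
        zx, zy = (zone % 3) * 60, (zone // 3) * 70 # 3x3 layout of zones
        for line in range(12):
            feed = 11994 - 954 * line              # 11994 down to 1500 mm/min
            x = zx + 5 + 4 * line
            g.append("G0 X%d Y%d Z10" % (x, zy))   # hover over the line start
            g.append("G1 Z0 F600")                 # press down onto the plastic
            g.append("G1 Y%d F%d" % (zy + 50, feed))  # draw the fused line
            g.append("G0 Z10")                     # lift back up
    g.append("M104 S0")                            # heater off at the end
    return "\n".join(g)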

Here is a video of the first 115C region being drawn:
https://twitter.com/koppanyh/status/1259620567235170305?s=20

Issues:
There were some major issues with this test.
  • At the start I could see that the plastic needed to be stretched tighter, because the extruder seemed to be dragging the plastic a bit.
  • The hotend's blower fan nozzle was hitting the binder clip at the bottom middle, and I'm sure this contributed to a bit of misalignment.
  • The temperatures were changing to the next value almost halfway through drawing each thermal zone. Apparently the M109 command doesn't wait for queued movements to finish before it changes the temperature, so the change kicked in as soon as the buffer had space for the command.
  • Some of the faster lines would go really slow for some reason, only running at the expected speed once the commanded speed was at or below around 8000mm/min.
  • When the temperature was high enough, the hotend would poke holes in the plastic: the plastic would stick to it, and when it pulled away it would tear a hole.

The Second Test:
With those issues, I killed the test less than halfway through, made minor code modifications, made tiny hardware adjustments, and started the test from scratch with fresh plastic. This was the progress of the first test:

I solved the loose plastic, blower nozzle collision, and hole poking problems in one go by manually taking care to keep the plastic tight and using 3 mini binder clips at the bottom.

The issue of temperatures changing early was solved by adding an M400 command right before each M109 command; that way the printer waits for the current movements to finish before it changes the temperature and queues up the next set of movements.
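
In the generated G-code file the fix looks something like this (coordinates and temperatures here are just examples):

G1 Y50 F1500   ; last move of the current zone
M400           ; wait for all buffered moves to finish
M109 S165      ; only then set the next temperature and wait for it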

I didn't know how to solve the issue where some fast lines run way slower than expected, but I didn't run into that issue this second time around. My theory is that it was some kind of weird behavior related to changing the Z height for warp compensation (which is theoretically a buggy feature anyways) and the new calibration after moving the cardboard somehow didn't trigger this behavior.

I changed the thermal zones to start at 145°C and increase 5°C on each following zone so it would end at 185°C. But I kept the 9 zones with 12 lines and speeds the same. This is what it looked like:

More Issues:
Of course, second time's not the charm, but this test did go much better than the first one.
  • At thermal increments of 5°C, my printer didn't have the accuracy to keep zones distinct: as it drew out one thermal zone I could watch the temperature wander by ±5°C, so the results in one zone would be similar to its neighbors'.
  • The Z height calibration was too high on the lower right side, so some of those lines are only partially drawn.
  • I forgot an M400 command at the end to delay turning off the heater, so the final thermal zone started at 185°C but dropped to about 170°C by the end.
  • The plastic still bunched up as it was dragged a bit by the hotend, creating a skipping pattern on some lines when the hotend would just run over those bunches.
  • At higher temperatures I noticed color bleeding from the plastic onto the cardboard, which would have stained my precious heated bed if the cardboard wasn't there to protect it.

Results:
Once this second test was done, the results were good enough to analyze the seams.

Immediately, the bottom row (145°C, 150°C, 155°C) just peeled apart with pretty much no effort. It wasn't just the partially drawn lines, either; even the sections the hotend did contact came apart. These temperatures were discarded.

I labeled and cut the rest of the thermal zones out to analyze them one by one (picture was before I cut them to individual sections).

I peeled each one apart, starting from the left where the lines were drawn fast and ending where the lines were drawn slow. The common observation was that the first half (the highest speeds) always came apart without a fight.

At 160°C only the slowest speed (1500mm/min) didn't practically fall apart by itself, but it was still peeled apart too easily.

At 165°C it acted like at 160°C, but the last one required just a tiny bit more force to peel apart, still too easy though.

At 170°C the last 2 (1500mm/min and 2454mm/min) hung on nicely near the end, but the higher heat and the bunching issue caused holes to form along the seam as I pulled on it before the seams finally gave up. The force needed to do this was starting to get acceptable, though.

At 175°C the fastest 8 seams broke pretty easily. The next 2 seams looked nice and didn't break without a fight. The last 2 seams had holes from the heat, but they seemed pretty strong.

At 180°C it pretty much acted like 175°C, except the last 4 seams all looked nice and were pretty strong, though there was significant bunching on these. The nicest results were at 4362mm/min, just like at 175°C.

At 185°C the first 6 seams broke apart too easily, and this was the one where a missing M400 command caused it to start cooling off before all the lines were drawn. The remaining seams were pretty strong, but there was heavy bunching on all the lines and some small holes on the slowest 2 speeds.

Conclusion:
None of the seams produced in this experiment were particularly acceptable for pneumatic applications. Part of the reason could be that variables such as contact area and contact pressure were not regulated in any way, even though they could make a big difference in how nicely the plastic fuses.

Based on my limited testing and experience trying to do this process manually, it seems that Subway bags are not a good material to use for manufacturing pneumatic bladders using thermal plastic fusing. It is possible that it could work, but it would most likely need either a more precise setup, or a different approach altogether.

It will be necessary to continue exploring ways to cheaply, quickly, and easily make soft robotic pneumatic actuators.

Regardless of the failure of this project, I learned a lot about G-code, and interfacing with a printer over serial connection. I also learned that the Subway bags I've hoarded are not as useful as I was hoping they'd be 😛.

Future Improvements/Research:
If anyone wants to replicate or continue this experiment, here are some suggestions for ways to improve or explore different options.

Use a better surface than warped cardboard: something that is flat by default and can preferably still take a bit of a beating if the extruder goes down too far, without damaging the extruder itself.

Maybe create a surface that is suspended by springs, so the hotend can go down a bit further and the springs will provide almost constant pressure at any point of the plastic instead of it being at the whim of the warpage and warp compensation/calibration.

Pay better attention to making the plastic as tight as possible. This is harder than I thought and I probably should have spent more time on this. Don't make the same mistake as me!

Having a rounded heating tip would probably be more effective than a point covered in aluminum foil. Maybe something like a brass screw with the same thread as the extruder with one side rounded into a hemisphere. This way it can screw into the hotend and provide a much nicer contact point that might help with the bunching problem.


Miscellaneous:
While analyzing the thermal zones, I found some dried Meatball Marinara sauce on the bag. It had been there for quite some time, but it was in a small enough amount that I didn't notice it before until I started looking closely to inspect the seams. Good thing there was cardboard and aluminum foil separating the printer from the plastic because I don't want that stuff defiling my printer 🤣. This most likely won't be a problem for normal people because normal people don't eat as many sandwiches as I do (at least I hope).

I also realized that I kind of wrote this like a detailed lab writeup essay from college, which is funny because I absolutely hated writing anything and there's no way I would have been able to write all of this in less than a day in one sitting the way I did here. It helps that the language and format are much less formal than what's expected in college, and the topic is much more interesting than the kind of stuff I had to do in college 😛.

Monday, September 11, 2017

Cheap n' Easy Optical Virtual Reality Position Tracking

Hello World! At this point I can confidently say that I've entered the Matrix.

(WARNING!!! This is probably going to be the longest blog post ever, you were warned.)

Intro

Over the course of 8 days (spanning from 2017-09-03 to 2017-09-10) I worked on a project that only existed in pure theory. The theory was that I could calculate my position in 3D space using only 2 cameras, an object to be tracked, and some math.

I had been wanting to do something like this for a few months, but I figured it would be really hard and take a lot of time. I was wrong. After talking about the concept with the top programmer at the company I work for, I realized it wouldn't be that hard at all and the math shouldn't need to exceed basic trigonometry.

I set about to do it with materials that I had on hand, and a week later, I had a prototype of a working optical position tracking system for use in VR.

The Theory

I figured that I could get the position of something in 3D by defining it as the intersection of 2 lines in 3D space.
Those lines could be calculated by an image as perceived by 2 cameras, then the data of the lines could be sent to a server (or in my case the Galaxy Gear VR headset) and the intersection could be calculated there to find the position of an object (or in this case a person) in 3 dimensions.
This way most of the calculation would be offloaded to the processors of the devices with the cameras and the network traffic would be just a few bytes to define the lines.

Since this was just a prototype and I wanted to finish fast, I actually made it calculate only the x and y position of a player, not their height. The principle is the same though, just with an additional slope to calculate the height of the player.

An example of what I mean can be seen in the little animation below.
There are 2 cameras, one at the middle of the left wall and one at the middle of the bottom wall.
Those white lines that converge on those points are the boundaries of the fields of view for those cameras, in my simulation I set them to about 60 degrees.
The grey lines that intersect at the moving point are the angles of the target relative to the cameras.

The little animation was actually more of a simulation that I used to develop the math and logic for the system.

The angle of the target relative to a camera was calculated from the x coordinate of the target in the captured frame: subtract half the frame width, multiply by the angle-per-pixel ratio, then add the camera's angular offset to go from a relative angle to an absolute one.
The equation would be ((x_coord - (x_width / 2)) * angle_per_pix) + angular_offset
The angle-per-pixel ratio is calculated by the field of view angle divided by the total width of the image frame.
The equation would be (h_fov / x_width)
All the math is done with radian angles, but when describing it I use degrees (this actually caused me some problems when I forgot to convert degrees to radians >.< ).
I also use centimeters for the position values.

Once you have the angle, the equation for the line can be written in vector format following:
(camera_x_position, camera_y_position) + t <cosine(target_angle), sine(target_angle)>
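
In Python (2.7, like the tracking scripts), the whole pixel-to-line conversion is only a few lines; the names here are mine:

import math

def target_line(x_coord, x_width, h_fov, angular_offset, cam_x, cam_y):
    # pixel column -> absolute angle (radians) -> direction vector
    angle_per_pix = h_fov / float(x_width)
    angle = (x_coord - x_width / 2.0) * angle_per_pix + angular_offset
    # the line is (cam_x, cam_y) + t * (dx, dy)
    return cam_x, cam_y, math.cos(angle), math.sin(angle)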

Since this will be done by a computer, we can optimize by transmitting the camera_x_position and camera_y_position only once at the beginning (because the camera won't move) and then just transmit the vector part to calculate the x and y slopes as they are generated each frame.

At the beginning, you will also want to calculate 2 variables which will stay the same but depend on the positions of the cameras (this way you don't recalculate these over and over, they will only change once in the beginning).
c1bc2b = c1b - c2b
c1dc2d = c1d - c2d
Where c1b and c2b are the x coordinates of the first and second cameras, and c1d and c2d are the y coordinates of the first and second cameras.

Finally, to calculate the point of intersection, simply throw your values into this equation:
t = ((c1dc2d / c2tc) - (c1bc2b / c2ta)) / ((c1ta / c2ta) - (c1tc / c2tc))
gx = c1ta * t + c1b
gy = c1tc * t + c1d
Where c1ta is the x slope of the first camera, c1tc is the y slope of the first camera, c2ta is the x slope of the second camera, and c2tc is the y slope of the second camera.
The gx and gy variables will be the x and y coordinates of the intersection of the 2 lines.
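
As a sanity check, here is the same intersection math as a Python function (my transcription of the equations above; like the equations, it divides by zero if camera 2's slope has a zero component, so in practice you'd guard those cases):

def intersect(c1b, c1d, c1ta, c1tc, c2b, c2d, c2ta, c2tc):
    # camera 1 at (c1b, c1d) with slope vector (c1ta, c1tc),
    # camera 2 at (c2b, c2d) with slope vector (c2ta, c2tc)
    c1bc2b = c1b - c2b  # precomputable, the cameras don't move
    c1dc2d = c1d - c2d
    t = ((c1dc2d / c2tc) - (c1bc2b / c2ta)) / ((c1ta / c2ta) - (c1tc / c2tc))
    gx = c1ta * t + c1b
    gy = c1tc * t + c1d
    return gx, gy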

That's really all there is to the math.
The value can then be thrown into a VR world to update the player's position.
As mentioned earlier, this will only generate the 2D position of a player and not their height, but it's easy enough to add the z position.

The Journey

Before I went through and figured out all the math, I started to make the hardware for the system.
Then I did everything else. Here is the story.

Sunday, September 3
I officially began development of the optical tracking system today.
I played around with a little Python script I had in the archives that tracked things with the webcam, but I found that it sometimes had a little trouble distinguishing the object from the background if the color wasn't different enough.
I noticed that if I shined a light through clear plastic objects the webcam registers them as almost pure RGB white, while the rest of the background wouldn't come close unless it was illuminated.
This made me think that I could use an illuminated tracker to keep the position, this way I wouldn't need a specially colored room but instead just a slightly dark (read "badly illuminated") one to separate the illuminated tracker from the darker setting.
Since I would be doing this with things I had on hand, I decided to 3D print a ball because I didn't have ping pong balls on me; I also thought that ping pong balls were a little small and a slightly larger ball would make things easier for the tracking program.
I quickly designed a hollow ball with a wall thickness of around 2(?) millimeters and a diameter of 50(?) millimeters and printed it out with "transparent" PLA filament.
It took about 5 hours to print (I really have to play with the settings on my printer one of these days) and came out pretty good considering there were no supports in the print.

Monday, September 4
I came home from work and went to find an LED for the ball.
I scrounged around in the LED section of my parts box and found an LED that came from an LED throwie I found once.
This is the part where I regretted not printing the ball with a hole for the LED because now I had to go drill one out.
A few minutes later I managed to make a very nicely sized hole using a drill bit that was just way too small.
I jammed the LED into the ball, hooked it up to a small variable power supply and started upping the voltage just a bit.
I figured that the LED took a standard 3.3 volts at 25 mA max.
I played around with voltages because I couldn't have something too bright or else it would cause lens flares which mess with the tracking program (lens flares on crappy webcams?).

Tuesday, September 5
I came home from work and started to design the power supply for the ball.
I didn't want to use normal batteries because they're bad for the environment and so annoying to replace all the time, so I decided to use a little 5 volt power bank that I had laying around.
Since I'd be working with 5 volts, I'd definitely need some resistors to bring the current down to manageable levels.
Ideally, to get the 25 mA from a 5 volt power supply, I'd need a 68 ohm resistor, but I didn't have anything that small and wasn't going to buy resistors.
I settled for 3 330 ohm resistors in parallel, which gives 110 ohms, which works out to about 15 mA.
That 15 mA is still a little brighter than I would have liked, but it didn't seem to be causing any lens flare so I figured it would be fine.
I also diffused the LED with some sand paper because the light distribution inside the ball was too uneven.
After the sand paper treatment the ball was glowing much more evenly.
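
For reference, the resistor math (using the 3.3 volt forward voltage I figured on):
R_ideal = (5.0 - 3.3) / 0.025 = 68 ohms
R_used = 330 / 3 = 110 ohms, so I = (5.0 - 3.3) / 110 ≈ 15 mA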

Thursday, September 7
I didn't do much today for the tracking project.
I started working on the program that you see in the animation so I could develop the math and logic required for turning 2 lines into a positional coordinate.
I didn't get too far on this, I was pretty tired.

Friday, September 8
It's been a few days since I've really worked on the system, but today was very productive.
I was at university all day today, but I used my time wisely and got a lot done at school.
I designed a clip that I would use to hold the ball to the top strap of my VR headset. I had 20 minutes to design it before class, but setting up my environment on a school computer took about 5 minutes, so I really had 15 minutes to design a clip and get to class.
I designed the clip from memory and without any real measurements of the strap that it had to go on.
Then I exported it and realized that class had started 30 seconds earlier. Needless to say I was a few minutes late, but we were only taking a test and I was still one of the first to finish, so it was OK.
After class, I went to the university's maker space to use their (free) 3D printers to print out my clip, this way I wouldn't waste precious time at home printing it out.
About 40 minutes later I had a beautiful clip all printed out. I tested it on a flap on my backpack; it worked pretty well and didn't break from being flexed a little.
I had a bit of time during a period between classes, so I used this time to finish developing the math for finding the intersection of 2 lines that are in vector format.
I had to relearn how to work with vectors and I was finally able to come up with an equation to return the intersecting coordinates of 2 lines.
Then I worked on optimizing the equation as much as possible, but it was such a simple equation that there wasn't much to optimize.
I went home a few hours later and got to work on "professionalizing" the power cable for the LED.
I got an old USB cable with a bad data line and cut it so that I could have a nifty USB plug.
Then I went to the "workshop" (the top of the washing machine) and started to solder it all together.
I braided the wire from the resistors to the LED myself, this was to keep the 2 wires from making a mess, instead they stay as one bundle and there's even an extra wire to make it stronger.
I even added some heat-shrink tubing over the junction between the resistors, USB cable and the wires carrying power to the LED.
I did have a little trouble getting the tubing over the resistors because they barely fit, but I managed to get them into the tubing eventually.
I plugged the LED into the power supply and then pushed the LED into the ball to see if my soldering was good and to see if there were no short-circuits (not recommended to test this way on a Li-ion battery by the way), and it worked.
I then began the hot-glue part of the project.
I glued the LED into the ball, then I glued the 3D printed clip to the bottom of the ball next to the LED.
Then I glued the wires to the clip just to prevent the solder joints on the LED from becoming too stressed and breaking off.
A closeup of the finished product is shown below.
The ball clipped to the top strap of the VR headset is pictured below.
Everything fit perfectly and I was quite pleased with the result.
Here is an extremely rare picture of (some of) my face and what I look like wearing the VR headset with the tracking ball attached to it.
It only looks like I'm wearing a weird set of goggles because this headset works by clipping my phone into it, but I had to use my phone to take the picture, so you can see my eyes through the lenses designed to keep my eyes from melting from the proximity of my phone screen.

Saturday, September 9
I wasted a lot of time today trying to do homework for my computer engineering class, but I didn't get too far since the homework covered a lot of concepts that we haven't even heard of yet.
I spent the rest of the day building the virtual reality world that I would try as my test platform.
It wasn't anything too spectacular; I actually spent more time trying to get a UDP server coded in C# for .Net 2.0, which was interesting because my only exposure to C# is other small programs I've written for past VR experiences.
The fact that the code I was going to base a simple UDP server on was geared for .Net 4.0 didn't help; it took me a while, but I eventually got it working on .Net 2.0.
I made a simple test where I set the coordinates of a cube over the network.
I also added some blocks to play with and push around.
Then I added a little kiosk computer thing which reported the FPS for me, this way I could keep an eye on the FPS and reduce the effects of simulator sickness (which by this point became very common for me).
Once I had that much done, I figured I could get the rest of it done tomorrow.

Sunday, September 10
This would be the big day for my project, I was determined to get it all finished today because I knew that if I waited any longer, then I'd never get around to finishing it.
I spent the second half of the day finishing the VR world and then writing the full version of the UDP server that would receive the data for the lines and calculate the position from that.
Since I already had code written to find the intersection, I just had to port it to C#, which wasn't a big deal.
The bigger deal was fixing all the little nuances that would cause the VR experience to crash from errors that happened in the UDP server code and from other small things.
Then I added a castle that I found here so that I could have something fun to play with.
Then I worked on finalizing the tracking code.
There wasn't too much to do here, just get the coordinates, run it through some math to turn that into an angle and then to a vector line, and add some code to transmit it all to the VR headset.
I designed the tracking code to have a configuration file so that I would only have to configure and calibrate things like FOV and target color and camera position once.
I did have a little problem where the coordinates were being calculated in reverse, so moving left brought me right and vice versa.
The solution was simple enough, just mirror (or in my case un-mirror) the image that the webcam was returning and the motions were coming in just right.
I compiled the VR experience just one more time and uploaded it to my phone for later usage.
Then I set up 2 laptops with webcams and the tracking script.
I measured their positions, calculated their FOV's, set their unique identifiers, added their threshold for the tracking color and set their offset angle.
Then I got them ready to start tracking and transmitting. The server (headset) needs to be started before the client tracking programs, so I set things up so I'd just have to hit enter on each laptop once I was in VR.
I put my phone into the headset and turned on the tracking ball, then I put on the headset.
Once the VR experience started, I pressed enter on both of the laptops, then I officially entered the Matrix.
The below pictures show the laptops running the tracking program.
I navigated myself to the top of one of the towers on the castle (using the touchpad controller on the headset) and then started looking around.
Below is a rendering of what I experienced moving forward to look past a corner in the castle that I had in the world.
It was one of the most exciting things that had happened to me in a while.
The delay between when I moved in real life and when I moved in VR was noticeable.
The delay was less than a second, but still about 200 or 300 ms.
Nevertheless, I was moving around in a world I built, able to look around and not have the whole world move with me.
It was magical.
But it did mess up my depth perception and my perspective a bit.
I had somehow adjusted to those few minutes of VR so fast that coming back to real life was like entering VR, the perspective was weird and my motions weren't delayed.

The Future

As much of a success as this was, there are still a few things I'd like to change.

For one, I'd like to add the 3rd dimension to the code for height, because while moving around didn't make the whole world move with me, moving up and down did.
Plus, adding the 3rd dimension wouldn't be too hard, it would just take a little more time.

Another thing to fix is the issue where looking up makes my head obscure the tracking ball from one of the cameras.
Usually the tracking program finds the ball once I make my head level again, but a few times it didn't reestablish tracking.
The easiest solution is to raise the height of the cameras, but I think that cameras with a wider FOV would also be good.

The other thing I'd like to fix is the latency issue.
I think the biggest issue is with getting frames from the camera.
I ran a test without the line that gets the image from the camera and the tracking program ran at over 100 FPS, but with the camera line the program only ran at around 12 FPS.
It's also possible that the network code adds a tiny noticeable amount of delay, but this is just a guess, I would think that it shouldn't be too bad over a LAN network, but then again, I'm no network admin.

Another thing I'd like to do is to write an Android app for the tracking part so that I would have a smaller and more portable way to deploy the cameras.
This would allow more people to try it out just by installing the app and changing some settings.
When (more like if) I do this, I'll probably use OpenCV since PyGame isn't available for Android.
It probably wouldn't be a bad idea to use OpenCV on Python too, it's probably more suited to camera tracking than PyGame.

Code and Resources

Everything you need to replicate this project can be found on GitHub.
https://github.com/koppanyh/VR-optical-tracking
But I give no guarantee that my documentation is good enough for you to figure it out from my code alone.

Saturday, May 20, 2017

Optimized WireWorld Algorithm

As usual, this entry was written because it is an uncommon subject that I just happen to be working on.

I've been slowly working on a virtual reality world and decided that it would be nice to add some technology to the simulation. I decided that a custom version of the WireWorld cellular automaton in 3 dimensions would be perfect since it is Turing complete, can run relatively fast, and would be rather simple to integrate with any other in-game technologies.

It had to be customized because it had to be fast (to prevent simulator sickness in VR), but there is very little interest in this automaton and all the code I've seen uses the traditional implementation. That's why I came up with this version (which someone has probably done before and I just didn't know about it).

In this blog post, I will describe an optimized version of the WireWorld algorithm. In theory this version is a little slower than the classical 2 array version on small simulations, but it should overtake the traditional implementation as the simulation gets bigger.

The rules of the automaton state that when a cell is an electron head, it will degrade into an electron tail. When the cell is an electron tail, it will degrade into a conductor. When the cell is a conductor, it can turn into an electron head only if one or two of its immediate neighbors are electron heads.

The way this is usually implemented is with an array where each element holds the state of the cell. A value of 0 is nothing, 1 is an electron head, 2 is an electron tail, and 3 is a conductor. The simulation goes through and writes the updated states to a second array in order to retain the new states without altering the current frame. Then the second array is copied back onto the first, is displayed, and the process starts all over again. This works fine, but requires you to iterate through every array element each frame, check if it has live neighbors, and degrade its state until it becomes a conductor. For an n×n grid that's O(n^2) work per frame, though it's still better on a small scale because it uses less memory and less time per cell.
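
For contrast, the traditional two-array update looks roughly like this (my sketch, using the same state numbering):

def step(grid, w, h):
    # grid[y][x] holds 0 void, 1 electron head, 2 electron tail, 3 conductor
    new = [row[:] for row in grid]
    for y in range(h):
        for x in range(w):
            s = grid[y][x]
            if s == 1: new[y][x] = 2    # head degrades to tail
            elif s == 2: new[y][x] = 3  # tail degrades to conductor
            elif s == 3:                # conductor: count electron head neighbors
                heads = 0
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        if (dx, dy) != (0, 0) and 0 <= x+dx < w and 0 <= y+dy < h:
                            if grid[y+dy][x+dx] == 1: heads += 1
                if heads == 1 or heads == 2: new[y][x] = 1
    return new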

The version I'll describe here only visits the cells whose state isn't 0. Instead of holding the states in an array, the cells are held in a list of data structures that contain the x & y coordinates, the state, references to the immediate neighbors, and a count of the live neighbors. The simulation goes through each cell a first time, and if the cell is an electron head it goes through its neighbor list and increments the live neighbor count of every neighbor that is a conductor. Then each cell is iterated through a second time to degrade its state and turn it into an electron head if it had 1 or 2 live neighbors. This way all the empty "cells" that would have been in an array are skipped, since they are not in the list.

Of course, each cell in the second implementation takes more time and memory on a small scale, but on a large scale there is much more empty space that can be skipped, and at a certain point the second implementation should run faster than the first. Its time complexity is more like O(k), where k is the number of non-empty cells. In practice this only wins at large scales: it costs more memory and computing power per cell, but the cost doesn't grow with the empty area of the grid the way the O(n^2) version's does.

Enough of this theory stuff, let's get to some actual pseudocode.

define cell data type
    x position
    y position
    state
    dynamic list 'neighbors'
    live neighbor count
 
initialize dynamic list 'wires'
set erase mode to false
 
define find function, input x and y coordinates
    loop through each element in 'wires', return index if coordinates match
    return -1 if no cell found at those coordinates
 
define get neighbors function, input cell
    loop through each coordinate around the cell's position
        if cells exist at those coordinates, add reference to cell's neighbor list

define add cell function, input x and y coordinates
    add new blank cell to wires list
    run get neighbors function on this new cell
    iterate through the neighbor list and add this cell to those neighbor's neighbor list

define delete cell function, input cell
    loop through each cell in cell's neighbor list
        remove the reference to this cell from the neighbor's neighbor list
    remove reference to this cell from the wires list

infinite loop
    clear screen
    if enter is pressed, toggle erase mode
    get mouse position and mouse buttons
    draw mouse cursor onto screen 
 
    if mouse click
        get index of cell at mouse coordinates with find function
        if left click and not erase mode and no cell existing at mouse coords
            call add cell function with mouse coordinates
        if left click and erase mode and cell existing at mouse coords
            call delete cell function with cell from index in wires list
        if right click and cell existing at mouse coordinates
            set cell at index to have state of 1 (electron head)

    loop through cells in wires list
        if cell state is electron head
            loop through neighbors that are conductors
                increment live neighbor count of that neighbor cell
 
    loop through cells in wires list
        if cell is not 3 (conductor)
            increment cell state
        if cell's live neighbor count is 1 or 2
            change state to 1 (electron head)
        clear cell's live neighbor count to 0
        set color equal to black for 0 (void), blue for 1 (electron head), red for 2 (electron tail), yellow for 3 (conductor)
        draw cell on screen with coordinates and color

The Python source for this looks like this:
(Left click adds a conductor, right click activates it, pressing enter turns on delete mode)

#!/usr/bin/env python
import pygame
from pygame.locals import *
pygame.display.init()
pygame.font.init()
screen = pygame.display.set_mode((640,480))
pygame.display.set_caption("")
font = pygame.font.Font(None,20)
clock = pygame.time.Clock()
           # 0   1   2      3          4
wires = [] #[xp, yp, state, neighbors, live]
# 0 black void, 1 blue electron head, 2 red electron tail, 3 yellow wire
erase = False

def findWire(xp,yp):
 for w in xrange(len(wires)):
  if wires[w][0]==xp and wires[w][1]==yp: return w
 return -1
def updateNeighbors(ww):
 for x in xrange(-1,2):
  for y in xrange(-1,2):
   wp = findWire(ww[0]+x,ww[1]+y)
   if (x,y) != (0,0) and wp != -1:
    ww[3].append(wires[wp])
def insertWire(wx,wy):
 wires.append([wx,wy,3,[],0])
 updateNeighbors(wires[-1])
 for ww in wires[-1][3]:
  ww[3].append(wires[-1])
def deleteWire(ww):
 for w in ww[3]: #remove references from neighbors
  for wg in xrange(len(w[3])):
   if ww[0]==w[3][wg][0] and ww[1]==w[3][wg][1]:
    w[3].pop(wg)
    break
 wires.pop(findWire(ww[0], ww[1])) #remove reference from wires

while True:
 for event in pygame.event.get():
  if event.type == QUIT: exit()
  elif event.type == KEYDOWN:
   if event.key == K_RETURN: erase = not erase
   elif event.key == K_ESCAPE: exit()
 screen.fill((0,0,0))
 
 mx,my = pygame.mouse.get_pos()
 b = pygame.mouse.get_pressed()
 if erase: c = (100,0,0)
 else: c = (50,50,50)
 pygame.draw.rect(screen,c,(mx/5*5,my/5*5,5,5))
 mx /= 5
 my /= 5
 w = findWire(mx,my)
 if b[0]:
  if erase and w != -1: deleteWire(wires[w])
  elif not erase and w == -1: insertWire(mx,my)
 elif b[2] and w != -1:
  wires[w][2] = 1

 for w in wires:
  if w[2] == 1: #inc neighbor counters
   for v in w[3]:
    if v[2] == 3: v[4] += 1

 for w in wires:
  if w[2] != 3: w[2] += 1
  if w[4] == 1 or w[4] == 2: w[2] = 1
  w[4] = 0
  c = [(0,0,0),(0,0,255),(255,0,0),(255,255,0)][w[2]]
  #c = [(0,0,0),(0,255,255),(0,100,0),(40,40,40)][w[2]]
  pygame.draw.rect(screen,c,(w[0]*5,w[1]*5,5,5))
 
 clock.tick()
 screen.blit(font.render("%.2f"%clock.get_fps(),-1,(255,255,255)),(0,0))
 
 pygame.display.flip()

And as always, if you have some suggestions or ways to optimize this algorithm further, please let me know and I'll be more than happy to update this post with the new information.

Saturday, March 11, 2017

Generating Every Combination of a List

So today at work I had to modify a program to generate relative links to points in a database based on a customer's specifications.
The chosen implementation required that I come up with every possible combination of a list of unique elements, to be used only if absolutely necessary.
If the elements had to be used, then the specification called for using the minimum number of them, at the lowest level of our tree data structure possible.

No problem, I was sure that generating all the combinations (not permutations) for a list in lexicographical order wouldn't be so hard.

I couldn't have been more wrong.

In popular languages, like Python, there's always some library that can help out with a task like this. Unfortunately, I didn't have the luxury of using a library because the language used at my work is called Axon and it only has whatever "libraries" we make for it.

Therefore, I had to make my own functions to handle this, but there were limitations.
  • The functions had to work without while loops because Axon doesn't support while loops.
  • I should avoid recursion as much as possible because it is particularly difficult in Axon, and the program would be running on embedded devices with low resources.
  • The combinations should be automatically sorted in the lexicographic order that they came in from the input list.
  • No combinations could be repeated in a different order, e.g. [a,b] <=> [b,a]
  • It had to be fast because it would be part of a tool and users can't stand waiting.

Now, I won't show the code I wrote for work, because it's a little proprietary and seeing it in Axon would mean nothing to most people.

Instead I will show it in Python, the language the algorithm was originally prototyped in. (My boss walked in on me prototyping the algorithm and he was like "You've been staring at the same line for 20 minutes already", which is true. I even worked on this through lunch.)

1  def combos(i,j,l): #i is input list, j is r in nCr, l is n in nCr
2   if j == 1:
3    return [[x] for x in xrange(l)] #[[0],[1],[2],...]
4   if j == l:
5    return [[x for x in xrange(l)]] #[[0,1,2,3,...]]
6   #remove lists that have too big starting number
7   i = [x for x in i if x[0] <= l-j]
8   t = []
9   for x in i:
10   if x[-1]+1 < l: #further remove lists that are too big
11    for y in xrange(x[-1]+1,l):
12     t.append(x+[y])
13  return t #returns list of index lists
14 
15 def genCombination(arr): # 2**n - 1 total combinations
16  o = []
17  s = len(arr)
18  g = []
19  #generate list combinations from lists of index lists
20  for x in xrange(1, s+1):
21   o = combos(o,x,s)
22   for y in o:
23    g.append([arr[z] for z in y])
24  return g
25 
26 #pretty print the combinations
27 for x in genCombination(['a','b','c','d']):
28  print x

Here is the output

['a']
['b']
['c']
['d']
['a', 'b']
['a', 'c']
['a', 'd']
['b', 'c']
['b', 'd']
['c', 'd']
['a', 'b', 'c']
['a', 'b', 'd']
['a', 'c', 'd']
['b', 'c', 'd']
['a', 'b', 'c', 'd']


I can't totally explain why it works, even though I made it, but I can explain a few of the design decisions.

The functions are designed to take the previous answer and use that to calculate the next answer, so no jumping to random combinations. It has to be sequential, which for my purposes was adequate.

Lines 2 and 4 are just if statements that return a list of lists.
This is because if you want the combination of nCr where r = 1, then it will look like [0],[1],[2],[3],[4],...,[n-1].
If you want the combination of nCr where r = n, then it will look like [0,1,2,3,...,n-1].
This is mainly for speed, no point in calculating it if you know what it's supposed to be.

Then there's a line that removes rows from the previous input that can't be used.
This is because in cases like the rows of the output with only 2 elements, you can't add to the [d] row because there is nothing after d to add.

Then there is a nest of 2 for loops.
I'm not really sure what happens here. I'm not even sure how I figured this out, but this is where the magic happens.
It also contains an if statement to further filter out rows that can't work. (It originally didn't have an if statement there, but that gave me problems in Axon so I added it there to make it easier to port it to other languages.)

The combos() function that I just described doesn't actually make the list of combinations from the input list, instead it makes the list of combinations of indexes for the input list.

The genCombination() function is the one that starts the list for the previous input scheme, gets the combination for a different value of r in nCr, and maps the input list to the lists of indexes.

Then it returns the lists of the combinations of the inputs with the specifications listed above.

The listed output is exactly what you would get if you used ['a','b','c','d'] as your input list.

I hope this is helpful to someone.
It would have helped me quite a bit if I could have found something like this when I had to incorporate this feature into the program I was modifying. Instead I spent about a whole work day making this, porting it to Axon, and debugging it.

Sunday, September 11, 2016

Super Simple Interpreter

I was wondering how to write a simple interpreter in assembly for a 6510 processor (#c64) and I hit upon probably the simplest interpreter ever (aside from a brainfuck interpreter).

Here is the basic pseudo code of the program:
define array of lines
define dictionary of variables
define left variable
define right variable
define counter and set to 1

loop
    write '> '
    get line
    append line to array of lines if line not empty
    break loop if line is 'run'
end loop

loop for each line
    split line by spaces

    if first element is 'print' or 'write' then
        if second element starts with a single quote do
            get elements from second through last
            join with spaces
            write string excluding first and last character
        end
        else do
            find second element from dictionary of variables
            get value associated with element
            turn value to string
            write value string
        end
        if first element is 'print' do
            write '\n'
        end
    end

    else if second element is '=' do
        if third element is number do
            set left to number
        end
        else do
            get variable name
            find value from variable dictionary
            set left to value
        end
        if line is > 3 elements do
            if fifth element is number do
                set right to number
            end
            else do
                get variable name
                find value from variable dictionary
                set right to value
            end
            if fourth element is '+' do
                left = left + right
            end
            if fourth element is '-' do
                left = left - right
            end
            if fourth element is '*' do
                left = left * right
            end
            if fourth element is '/' do
                left = left / right
            end
        end
        set dictionary position of first element to value of left
    end

    else do
        write 'Error on line ' + counter
        break loop
    end

    counter = counter + 1
end loop

The syntax of this interpreted language is this:
Each operation is its own line.
The mathematical operations are limited to +, -, *, /
All numbers are integers and operations return integers.
Strings are only allowed in print/write statements.
Strings are enveloped in single quotes.
Variable names must be single letters.
All individual parts of a line must be separated by 1 space.
Write writes to screen without newline.
Print is like write but with a newline.
No direct string concatenation, use write statements for that.

Example program:
a = 5
write 'A is: '
print a
b = a + 5
b = b / 7
write 'B is: '
print b

Example output:
A is: 5
B is: 1

To test its simple nature, I wrote the original version of the interpreter in Python 2.7 and was able to make it in 26 lines of code. Here is the source:
import sys
varis = {}
lines = []
write = sys.stdout.write
while 1: #get input
    inp = raw_input("> ")
    if inp == "run": break
    lines.append(inp)
for y in range(len(lines)): #parse program
    x = lines[y].split()
    if x[0] == "print" or x[0] == "write":
        if x[1][0] == "'": write(" ".join(x[1:])[1:-1])
        else: write(str(varis[x[1]]))
        if x[0] == "print": write("\n")
    elif len(x) > 2 and x[1] == "=":
        if ord(x[2][0]) > 96 and ord(x[2][0]) < 123: left = varis[x[2]]
        else: left = int(x[2])
        if len(x) > 3:
            if ord(x[4][0]) > 96 and ord(x[4][0]) < 123: right = varis[x[4]]
            else: right = int(x[4])
            if x[3] == "+": left += right
            elif x[3] == "-": left -= right
            elif x[3] == "*": left *= right
            elif x[3] == "/": left /= right
        varis[x[0]] = left
    else: print "Error: line "+str(y+1)

I then wrote a modified version of it in Axon (the main programming language I use at work) and I was able to make it in 31 lines of code. Here is the source:
(x) => do
  vars: {}
  x = x.replace("\n",";").split(";")
  t: []
  cou: 1
  left: ""
  right: ""
  x.each y=> do
    t = y.split(" ")
    if(t[0] == "print") do
      if(t[1][0..0] == "'") echo(t[1..-1].concat(" ")[1..-2])
      else echo(vars.trap(t[1]))
    end
    else if(t[1] == "=") do
      if(t[2][0] > 47 and t[2][0] < 58) left = parseInt(t[2])
      else left = vars.trap(t[2])
      if(t.size > 3) do
        if(t[4][0] > 47 and t[4][0] < 58) right = parseInt(t[4])
        else right = vars.trap(t[4])
        if(t[3] == "+") left = left + right
        else if(t[3] == "-") left = left - right
        else if(t[3] == "*") left = left * right
        else if(t[3] == "/") left = floor(left / right)
      end
      vars = vars.set(t[0], left)
    end
    else echo("Error: line "+cou)
    cou = cou + 1
  end
end

The difference between the two implementations is that the Python one is more like a command line, where you type in your lines and then type "run" when you want to execute the program, and it supports both "write" and "print". The Axon implementation only has "print" and is a function that takes the whole program in as a string parameter, so I made it also work with semicolons between statements.

Friday, April 8, 2016

Easily Install Oracle's Java On Ubuntu Linux

Hello world, today I have a quick "life hack" type thing for Ubuntu people.

Where I work, I am kind of the Linux guy, installing our software for clients who wish to run it on Ubuntu machines.
The problem is that our software runs on Java, and some of the people either don't know how to install Java or have it installed in a weird way.

I was asked what was the easiest way to install Java, and here you go:

sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java8-installer
(agree to the two prompts)
(wait for download to finish)
java -version


The first 2 commands install the webupd8team Java repository and update apt's package lists so you can install from there.
The next command actually installs it, with a license agreement and a download.
The last command just confirms that you have Java installed.

Sunday, March 20, 2016

Making C64 Program Tap Files

Before I go ahead and tell you guys how to make a tap file that is loadable in a C64 emulator, I'm going to give you the link to the source of all my info: http://c64tapes.org/dokuwiki/doku.php?id=loaders:rom_loader and http://c64tapes.org/dokuwiki/doku.php?id=analyzing_loaders

This is where I learned how to make tap files (although it took me two weeks to find that page).

Enough chat, here is the structure of a C64 tap file.
There are 3 different kinds of pulses:
-    a long pulse lasting about 672 µS, stored as byte value 86 (hex 56)
-    a medium pulse lasting about 512 µS, stored as byte value 66 (hex 42)
-    a short pulse lasting about 352 µS, stored as byte value 48 (hex 30)
Physically, these pulses are square wave pulses running at 50% duty with a period of the specified µS.
I will refer to long as L, medium as M, and small as S. Data encoded using a combination of these pulses will be referred to as LMS format.

Now these pulses alone don't make up binary data (as I learned the hard way).
Bits are represented as a pair of pulses: a 0 bit being SM, a 1 bit being MS, a start bit being LM, and a data end bit being LS.
An LMS byte is represented as a sequence of pulses which looks like this:
LM     xx    xx    xx    xx    xx    xx    xx    xx    nn
start  bit0  bit1  bit2  bit3  bit4  bit5  bit6  bit7  check

There is a start bit in front of every byte, but each byte doesn't have its own end bit. The check bit at the end is calculated with the bitwise formula:
1 xor bit0 xor bit1 xor bit2 xor bit3 xor bit4 xor bit5 xor bit6 xor bit7
Or more conveniently, if the byte has an even number of 1's, the check bit is 1, if the byte has an odd number of 1's, the check bit is 0.
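
Here's a little Python sketch of encoding one byte into LMS pulses with its check bit (the function and constants are mine; see tapwriter.py for the real thing):

S, M, L = 0x30, 0x42, 0x56  # short, medium, long pulse values

def encode_byte(b):
    pulses = [L, M]                          # start bit
    check = 1
    for i in range(8):                       # bit0 through bit7
        bit = (b >> i) & 1
        pulses += [M, S] if bit else [S, M]  # 1 = MS, 0 = SM
        check ^= bit
    pulses += [M, S] if check else [S, M]    # check bit, encoded like a data bit
    return pulses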

Now that single bytes are covered, it's time to learn how whole data is encoded in a tap file.

Every C64 tap file starts off with 20 bytes of information, kind of like meta data. All information should be coded in hex, as this is raw data.
Bytes 0 to 11 are the string C64-TAPE-RAW, byte 12 is the number 1, bytes 13 to 15 are just 0, and bytes 16 to 19 are the length of the actual tap data in least significant byte first (LSBF) format. The tap data length is usually filled in last, since we don't know how long our tap data is yet.
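
Building those 20 bytes in Python is a one-liner (a sketch; you'd call it again at the end once the data length is known):

import struct

def tap_header(data_len):
    # 12-byte signature, version 1, three 0 bytes, then the length LSB first
    return "C64-TAPE-RAW" + chr(1) + chr(0) * 3 + struct.pack("<I", data_len)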

The rest of the bytes are the data.
We start with 27136 small (S) pulses, this is the leader block used to calibrate the datasette speed.
Then we start the header with 9 sync bytes, from 137 down to 129 (LMS).
Then we write the header file type, in this case just the number 1 (LMS).
Next is the starting address for the data, two bytes in LSBF format (LMS).
Next is the ending address for the data plus 1, also two bytes in LSBF format (LMS).
Then the bytes in PETSCII code for the file name, no more than 16 bytes (LMS).
Then (16 minus the length of the file name) bytes of the number 32 for title padding (LMS).
Then write the number 32 for 171 times, this is the header body, not sure what it's for (LMS).
Then a checksum byte for the header, this is all the bytes from the header file type (byte1) to the end of the header body (last byte) (LMS), use this bitwise formula to calculate:
0 xor byte1 xor byte2 xor ... xor last byte
Then a data end bit of LS to end the header.
Then we write a small pulse 79 times as an interblock gap.
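
That running XOR checksum is trivial to compute; in Python:

def block_checksum(data_bytes):
    # XOR fold over the block bytes, starting from 0, per the formula above
    c = 0
    for b in data_bytes:
        c ^= b
    return c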

Now we write the header again, but with some changes.
Write the 9 sync bytes, but this time from 9 to 1 (LMS).
Then write the exact same header data from the header file type to the data end bit (LMS).
Next write 78 small pulses as the header end trailer.
Then write 5376 small pulses as the interrecord gap.

Now we can start writing our actual program data.
Before we do that, you'll need to know how to make program data (useful link: https://www.c64-wiki.com/index.php/BASIC_token) or get it from RAM. (Or use my tapwriter.py program on GitHub)
Start the program with 9 sync bytes, from 137 down to 129 (LMS).
Next write the bytes of your program exactly how it would appear in ram from the starting address to the ending address (LMS).
Then write your checksum byte, calculated just like on the header blocks, only from after the sync bytes all the way to before this checksum byte (LMS).
Then write your data end bit of LS to end the header.
Then write a small pulse 79 times as the interblock gap.

Now we write the program data again, just like with the header.
Write the 9 sync bytes, but from 9 to 1 (LMS).
Copy the program block from after the sync bytes all the way to the data end bit.
Then write a small pulse 78 times for the data end trailer.

That's all there is to it, just remember to update the tap data length in bytes 16 to 19, this is only the length from byte 20 to the end.

This isn't the most efficient way to store data, since it takes about 10 seconds just to get to the header after the leader block, but it works and it's pretty cool to have a program load off a tap file that you made (forged with your own hands in the heart of a dying star).

If you understand how to create a tap file, and can actually get it to work (took me 3 days), then it wouldn't be hard to reverse the protocol to read the data off of the cassette or even directly from the C64, just as I will be doing for my C64 Arduino Internet project.

GitHub program files:
https://github.com/koppanyh/C64Net/blob/master/tapwriter.py