part sorting demo video
Posted: Wed Oct 07, 2015 1:56 am
Okay, keep in mind this is still pretty rough... the hardware is stock LitePlacer but I've had less than 3 weeks to write all the code from scratch (except libraries: OpenCV, libUVC, and DirectFB). Most of that time was spent fighting with the stupid TinyG -- I think they should have aimed lower and not tried to offload everything (including GCode parsing, planning, and command-sequence flow control) to the Atmel chip.
Also this can be done much, much faster -- I have put zero effort into speed and in fact deliberately slowed down a lot of things to make accuracy troubleshooting easier. But even at this ultra-slow speed it can sort an entire reel of passives in a 2.5-day-long weekend (i.e. Friday evening to Monday morning).
Yes, the accuracy is not completely perfect. I don't know exactly why the second capacitor is so far off-center, but all the rest are pretty good. The drift you see towards the right end is, I think, because the dropoff needle is actually pushing the ceramic plate a tiny bit with each part it deposits. Also I have the tolerances for the upcam alignment set pretty low; there are still a lot of things I can tune up to improve accuracy. I just wanted to demonstrate that the concept works.
All of the vision (except the initial needle-tip alignment) is done using MSER. The needle-tip alignment is the only thing that uses Hough circles.
Actually the LitePlacer shown isn't 100% stock -- the TinyG's firmware is heavily modified and the gantry head has two extra cameras (the needle-view camera that you see used in the video and a plan-view camera you don't see), plus the front headplate is flipped backwards so the downcam is to the LEFT of the needle. This way the upcam and downcam can look at each other... I'm trying to use this to eliminate the silly blue-goo alignment (which needs a human to remove the previous alignment mark) but I don't have that working yet. You can see that the upcam's circular focus knob is painted silver so it makes a really perfect circle when the downcam looks at it. Also I broke my A-axis motor last week so I had to bolt on a badly-fitting replacement from a 3D printer... that's why it looks like it's about to fall off. A replacement is on the way.
The main restriction is that the parts can't be touching. This is partly for vision recognition, but mainly to avoid picking up two parts at once (which leads to catastrophic failure).
The parts below are all 0603s, but I'm running it now with 0402s and (aside from requiring a smaller needle) it works. (Update: for 0402s, telecentricity on pickup actually becomes an issue. I have to move the head until the part is centered in the visual field before deciding what its location is; trying to calculate the location of a part that isn't centered in the visual field, using image coordinates and mm-per-pixel, produces an error large enough to cause tombstoning at least 5% of the time.)
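To see why the off-center error matters for 0402s but not 0603s, here's a back-of-the-envelope sketch of the parallax effect described above, using a simple pinhole model. The camera height and part thickness are assumed numbers for illustration, not measurements from my machine:

```python
# Sketch of the telecentricity/parallax problem: mm-per-pixel is calibrated
# at the table plane, but the camera actually sees the TOP of the part.
# For a part off the optical axis, that top surface projects to a slightly
# different pixel than its true (x, y), and the error grows with offset.
# All numbers here are illustrative assumptions.

def parallax_error_mm(offset_mm: float, part_height_mm: float,
                      camera_height_mm: float) -> float:
    """Lateral position error for a part whose top surface sits
    part_height_mm above the calibration (table) plane, located
    offset_mm off the optical axis, pinhole camera at camera_height_mm."""
    return offset_mm * part_height_mm / (camera_height_mm - part_height_mm)

# An 0402 is roughly 0.5 mm tall; assume the downcam is ~40 mm above the table.
print(parallax_error_mm(0.0, 0.5, 40.0))  # no error at the field center
print(parallax_error_mm(5.0, 0.5, 40.0))  # ~0.063 mm error at 5 mm off-axis
```

A ~0.06 mm pickup error is a big fraction of an 0402's 0.5 mm width, which is consistent with centering the part in the visual field first being the fix.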
Photo before:
Photos after:
And yes, it really does pick up every last part (the camera can only hold ~16 minutes of video). Although MSER doesn't recognize every single capacitor in every single frame of the video, it will eventually recognize all of them within a few seconds' worth of frames. So the last few parts take a while longer to find, but they do get found.
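The accumulate-over-frames idea above can be sketched like this. The helper and its data shapes are hypothetical (my actual code's structures aren't shown here): each frame contributes a list of detected centroids, and centroids within a small tolerance are treated as the same part, so a part only needs to be detected in *some* frame to be found:

```python
# Sketch: union MSER detections across frames. A part missed in one frame
# is caught in another; detections closer than `tol` mm are merged.
# Hypothetical helper for illustration only.

def accumulate(frames, tol=0.2):
    """frames: iterable of per-frame centroid lists [(x_mm, y_mm), ...].
    Returns the list of distinct part locations seen in any frame."""
    parts = []
    for detections in frames:
        for (x, y) in detections:
            # Skip this detection if it matches an already-known part.
            if not any((x - px) ** 2 + (y - py) ** 2 <= tol ** 2
                       for (px, py) in parts):
                parts.append((x, y))
    return parts

frames = [
    [(1.00, 1.00)],                  # frame 1 misses the second part
    [(1.02, 0.99), (4.00, 2.00)],    # frame 2 sees both
    [(4.01, 2.01)],                  # frame 3 misses the first
]
print(accumulate(frames))  # two distinct parts despite per-frame misses
```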
Video:
(coming soon, as soon as I finish jumping through the stupid hoops Google makes you go through to upload to YouTube)
Well, Google is failing hard right now (what else is new?) so I just used Mega, which... wow just worked without any hassles.
https://mega.nz/#!JMxX3IhR!tuIC_EZS9ic2 ... Fltyz2SaCc
(click the small text that says "download through your browser", not the big huge buttons that advertise junkware)
If you want to listen to the machine, here's a copy with audio included (you have to turn the volume WAY up, though):
https://mega.nz/#!9NhgxQ5a!OFdTkCLXXQVS ... 4KBSA6n0ZQ
Vimeo also didn't harass me for all my identifying information like Google does (egads), so here:
https://vimeo.com/141762090