I would like to provoke a discussion about why there is a demand for up-camera-based part alignment.
The background: I have been working on vision support based on the pipeline/stage concept of vonnieda (OpenPnP), which, by the way, is a very ingenious and extremely flexible approach.
I have succeeded to some degree in using vonnieda's minimum bounding rectangle (MBR) algorithm with the up camera, but still got some offset and rotation misalignment.
My current main part of interest is placing white 5050 RGB LEDs out of their black pocketed plastic tape. These LEDs tend to be a bit offset and very slightly (<3°) rotated in their pockets.
My attempt to align these LEDs with the up cam failed...
But using the MBR algorithm on the down cam, with a method analogous to optical homing / fiducial recognition, I finally got excellent placing results.
How does it work?
The MBR delivers a center point and a rotation angle; typical processing time is approx. 100 ms.
Using an incremental loop, the camera is first centered on the part within a given tolerance, just like optical homing.
Then the measured angle is used to "correct" the 0°-based pickup angle, so that after pickup and re-establishing 0°, the part should hang exactly centered and aligned at 0° under the nozzle.
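To make the loop concrete, here is a minimal C# sketch. The detectMbr and moveRelative delegates stand in for my actual vision pipeline and motion code, and the tolerance values are just examples:
[code]
using System;

// Sketch of the down-camera pre-pick alignment (hypothetical delegates:
// detectMbr returns the part offset from the camera center in mm plus the
// part angle in degrees; moveRelative jogs the head by a mm offset).
static double AlignWithDownCamera(
    Func<(double X, double Y, double AngleDeg)> detectMbr,
    Action<double, double> moveRelative,
    double toleranceMm = 0.02, int maxIterations = 5)
{
    var m = detectMbr();
    // 1) Incrementally center the camera on the part, like optical homing.
    for (int i = 0; i < maxIterations &&
         Math.Sqrt(m.X * m.X + m.Y * m.Y) > toleranceMm; i++)
    {
        moveRelative(m.X, m.Y);  // jog by the measured offset
        m = detectMbr();         // re-measure after the move
    }
    // 2) Pick at the negated measured angle: after the nozzle returns to 0°,
    //    the part then hangs aligned at 0° again.
    return -m.AngleDeg;
}
[/code]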
This works for the 5050 LEDs and now, after finding the optimal MBR settings, also for Atmel TQFP-32 chips and FT232RL USB chips.
The total pick & place time is even shorter, thanks to the short "offset" movement of the down camera and not having to travel the long distance to the up camera.
Now I ask myself: Do I still need the up camera?
What is your opinion? For what would I need a bottom-view-based alignment routine?
Is up camera really needed?
best regards
Manfred
Re: Is up camera really needed?
mawa,
The biggest argument for bottom vision is that often after you pick a part it is in a different position on the nozzle than nominal due to nozzle "dance". This is when the part dances around a bit on the nozzle as it is picked up and before the vacuum is fully formed.
If you are getting good results using this method then I think there's no need to add bottom vision.
That said, I'd be interested to hear more about the trouble you had with my bottom vision code. I've done quite a bit of testing with 5050 LEDs and had good success. It could be that there are other factors contributing to that.
I won't pollute this thread with all that, but if you'd like to get in touch privately or on my mailing list and tell me a little more about what went wrong I'd very much appreciate it.
Jason
Re: Is up camera really needed?
Thank you, Jason, for your answers. I think our discussion could be of interest to all, so I am answering here in public.
The problem seems to be correctly measuring the XY offsets between nozzle center, camera center, and part center, and compensating the rotation offset when the XY offset exceeds a certain vector length. Your trigonometric rotation is impressive, but in my C# .NET version it sometimes delivers funny results.
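For anyone following along: as far as I understand it, the step in question is a plain 2D rotation of the measured offset vector by the applied angle correction. My own formulation (not Jason's actual code):
[code]
using System;

// When the nozzle is rotated by deltaDeg to correct the part angle, the
// part-to-nozzle offset (ox, oy) rotates with it and must be transformed
// before it is applied as an XY correction.
static (double X, double Y) RotateOffset(double ox, double oy, double deltaDeg)
{
    double r = deltaDeg * Math.PI / 180.0;
    return (ox * Math.Cos(r) - oy * Math.Sin(r),
            ox * Math.Sin(r) + oy * Math.Cos(r));
}
// Example: a 0.5 mm X offset rotated by 10° becomes (0.492, 0.087) mm,
// i.e. skipping this step already costs almost 0.1 mm in Y.
[/code]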
It's not the 5050 that causes the real problem; it's the FT232RL and the TQFP chips. I do not pick them from tape but from a tray. For test purposes I intentionally rotated the part by an angle < 10°, and I also tried a perfect 0° rotation. In all cases I ended up with a misplacement > 0.5 mm, which is unacceptable for the FT232RL.
One problem might be that we measure at different distances.
The part's pin bottoms are approx. 1.8 mm from the nozzle tip.
Now take a mm ruler, display a 1 mm grid on the camera view, and alternately move the ruler between these heights.
You will see that the pixel/mm ratio is not the same, but varies significantly.
That has, by the way, nothing to do with lens distortion (the fisheye effect); it is plain optics, even in the center of the view. Look at your hand and move it closer and farther away, and you will see what I mean.
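This is just the pinhole relationship: the mm-per-pixel scale grows linearly with the object distance. A quick sanity check with made-up but plausible numbers shows how large the effect can get:
[code]
using System;

// Pinhole model: mm-per-pixel scales linearly with the object distance.
static double MmPerPixel(double mmPerPxAtCal, double zCal, double z)
    => mmPerPxAtCal * (z / zCal);

static void Main()
{
    double zCal = 40.0;        // calibration distance in mm (illustrative)
    double scaleCal = 0.05;    // mm/pixel measured at zCal (illustrative)
    double z = zCal - 1.8;     // pin bottoms sit ~1.8 mm closer to the camera
    double scale = MmPerPixel(scaleCal, zCal, z);
    // A 100-pixel offset converted with the wrong scale is off by ~0.225 mm:
    Console.WriteLine($"error = {100 * (scaleCal - scale):F3} mm");
}
[/code]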
Additionally, the camera and/or the Z axis might not be perpendicular, which would lead to an offset that depends on the distance from the camera. I checked this and could see that there is very little offset within the distance range. So it is probably a matter of deriving the correct offset in mm from the pixels.
I had the same problem with the down cam and got rid of the difference by leveling all areas where I measure to the same Z height. I decided to use 1.6 mm, a common PCB thickness, for that height.
I will try to get the ratio right and test the bottom view after that.
vonnieda wrote:
The biggest argument for bottom vision is that often after you pick a part it is in a different position on the nozzle than nominal due to nozzle "dance". This is when the part dances around a bit on the nozzle as it is picked up and before the vacuum is fully formed.
I think that "dancing" could occur if the nozzle is not the largest diameter that can pick up the part without the danger of reaching over the part's edge, and if the pickup distance is too large, leaving an "air gap".
vonnieda wrote:
If you are getting good results using this method then I think there's no need to add bottom vision.
My idea was to discuss the pros and cons of bottom vision.
vonnieda wrote:
That said, I'd be interested to hear more about the trouble you had with my bottom vision code. I've done quite a bit of testing with 5050 LEDs and had good success.
Jason, your code is totally ingenious. I copied your solution (pipeline, stages) and added several additional stage types (e.g. capturing a down-camera image before placing and comparing it to one taken after placing, to verify the part has actually been placed), and also added, where available, EmguCV and AForge alternatives for the same functions to see which performs best. With that, the MBR approach works like a charm.
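The before/after placement check mentioned above is basically an absolute image difference within the placement region. Simplified from my stage (thresholds and ROI handling are illustrative):
[code]
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;

// Compare a down-camera image taken before placing with one taken after;
// if enough pixels changed inside the placement ROI, the part is assumed
// to be there.
static bool VerifyPlacement(Mat before, Mat after, Rectangle roi,
                            int pixelThreshold = 40, double changedFraction = 0.10)
{
    using var b = new Mat(before, roi);
    using var a = new Mat(after, roi);
    using var diff = new Mat();
    CvInvoke.AbsDiff(b, a, diff);
    CvInvoke.CvtColor(diff, diff, ColorConversion.Bgr2Gray);
    CvInvoke.Threshold(diff, diff, pixelThreshold, 255, ThresholdType.Binary);
    return CvInvoke.CountNonZero(diff) > roi.Width * roi.Height * changedFraction;
}
[/code]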
best regards
Manfred
Re: Is up camera really needed?
Hi Manfred,
mawa wrote:
One problem might be that we measure at different distances.
The part's pin bottoms are approx. 1.8 mm from the nozzle tip.
Now take a mm ruler, display a 1 mm grid on the camera view, and alternately move the ruler between these heights.
You will see that the pixel/mm ratio is not the same, but varies significantly.
This part in particular stuck out to me. OpenPnP always puts the part *bottom* at the same position above the camera. This way there is never any discrepancy between the configured pixel/mm and the measured one. It does this by positioning the nozzle at the camera's focal plane + the height of the part. In other words, the nozzle with a part on it is always some amount higher than just the nozzle itself.
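In sketch form (illustrative names, not the actual OpenPnP code):
[code]
// The nozzle Z for bottom vision is chosen so the part *bottom* always sits
// at the camera's focal plane, regardless of part height.
static double BottomVisionNozzleZ(double cameraFocalPlaneZ, double partHeight)
    => cameraFocalPlaneZ + partHeight;
// A bare nozzle (partHeight = 0) and a 1.0 mm-thick 5050 LED therefore get
// imaged at exactly the same distance, with the same pixel/mm.
[/code]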
Does that make sense? And does your code do that?
Jason
Re: Is up camera really needed?
vonnieda wrote:
Does that make sense? And does your code do that?
Yes, but as I wrote, I will need to recalibrate the focal point and the pixel/mm ratio more carefully.
As you are using Juki nozzles with the green ring and filter that green via HSV, you probably get a nice black background.
Using the semi-matte black Samsung nozzles that Juha sells, the up cam still sees a blurred gray ring in the background.
If the lighting is bright, this ring is partly picked up by the edge and contour detection. I ran into a similar problem looking down into the 5050 tape pockets, but was able to cut away this edge with a rectangular mask.
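For illustration, suppressing such a background in EmguCV could look roughly like this, here with the green-ring HSV filtering you describe; the HSV bounds are guesses that need tuning per machine:
[code]
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

// Paint everything that matches the (greenish) nozzle background black so
// that edge/contour detection only sees the part. Bounds are illustrative.
static Mat MaskGreenBackground(Mat bgr)
{
    var hsv = new Mat();
    CvInvoke.CvtColor(bgr, hsv, ColorConversion.Bgr2Hsv);

    var mask = new Mat();
    CvInvoke.InRange(hsv,
        new ScalarArray(new MCvScalar(35, 60, 60)),    // lower H, S, V
        new ScalarArray(new MCvScalar(85, 255, 255)),  // upper H, S, V
        mask);

    var result = bgr.Clone();
    result.SetTo(new MCvScalar(0, 0, 0), mask);  // blacken matched pixels
    return result;
}
[/code]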
As an info / idea: I modified the MBR stage and added minimum and maximum rectangle size values for the resulting MBR to be valid.
The algorithm sometimes produces "spikes" which would falsify the mean value computation.
Then I added a maximum tolerance for the offset vector (rect.center to camera.center). Finally, I reduced the pixel scan area to the maximum rectangle size plus the offset tolerance, which greatly sped up the MBR recognition, as I am using HD cameras.
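In code, the validity check boils down to a few comparisons on the returned RotatedRect. A simplified version of what I added (names are mine, limits illustrative):
[code]
using System;
using Emgu.CV.Structure;

// Accept an MBR result only if its size and its offset from the camera
// center are plausible; this filters out the occasional "spike" before it
// falsifies the mean value computation.
static bool IsValidMbr(RotatedRect rect, System.Drawing.PointF cameraCenter,
                       float minSizePx, float maxSizePx, float maxOffsetPx)
{
    float small = Math.Min(rect.Size.Width, rect.Size.Height);
    float large = Math.Max(rect.Size.Width, rect.Size.Height);
    if (small < minSizePx || large > maxSizePx) return false;

    float dx = rect.Center.X - cameraCenter.X;
    float dy = rect.Center.Y - cameraCenter.Y;
    return Math.Sqrt(dx * dx + dy * dy) <= maxOffsetPx;
}
[/code]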
best regards
Manfred
Re: Is up camera really needed?
mawa wrote:
As you are using Juki nozzles with the green ring and filter that green via HSV, you probably get a nice black background.
Yes, I do recommend to anyone using bottom vision that they try very hard to make sure the part has a black background, either physically or with image filtering. It's very important to not get shiny reflections from things other than the part. On my system I've installed a little 3D printed circular background that mounts above the nozzle, and I have my image filters set to mask everything outside of that circle.
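Roughly, such a circular mask looks like this (sketched in EmguCV to match Manfred's toolchain, not the actual OpenPnP stage; center and radius are machine-specific):
[code]
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

// Keep only the pixels inside a circle around the nozzle axis; black out
// everything else so reflections outside the background disc are ignored.
static Mat MaskOutsideCircle(Mat image, Point center, int radiusPx)
{
    var mask = new Mat(image.Rows, image.Cols, DepthType.Cv8U, 1);
    mask.SetTo(new MCvScalar(0));                                    // mask all
    CvInvoke.Circle(mask, center, radiusPx, new MCvScalar(255), -1); // keep disc

    var result = new Mat();
    CvInvoke.BitwiseAnd(image, image, result, mask);  // zero outside the disc
    return result;
}
[/code]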
mawa wrote:
As an info / idea: I modified the MBR stage and added minimum and maximum rectangle size values for the resulting MBR to be valid. [...]
Min/max rectangle size is a good check. I've been intending to add that as a sanity check in my code, but so far I've found I didn't need it.
Another easy thing you can do is run the bottom vision code multiple times and re-center after each one. This will eliminate most of the pixel/mm errors.
Finally, something to consider is that if you are using nozzle runout compensation, this would definitely affect things and would need to be taken into account. I haven't worked on that yet, so I'm not sure what would need to change.
Re: Is up camera really needed?
One more thought: If you send me some of your troublesome images I'd be happy to make any suggestions that I can think of.