Is an up camera really needed?
Posted: Mon Jan 30, 2017 3:41 pm
I would like to provoke a discussion about why there is a demand for up-camera-based part alignment.
The background: I have been working on vision support based on the pipeline/stage concept of von Nieda (OpenPnP), which, by the way, is a very ingenious and extremely flexible approach.
I have had some success using von Nieda's minimum bounding rectangle (MBR) algorithm with the up camera, but still saw residual offset and rotation misalignment.
My current main parts of interest are white 5050 RGB LEDs picked out of their black-pocketed plastic tape. These LEDs tend to sit a bit offset and very slightly (<3°) rotated in their pockets.
My attempts to align these LEDs with the up camera failed...
But... using the MBR algorithm on the down camera, with a method analogous to optical homing / fiducial recognition, I finally got excellent placement results.
How does it work?
The MBR delivers a center point and a rotation angle; a typical measurement takes approx. 100 ms.
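For illustration, here is a minimal sketch of such an MBR measurement using OpenCV in Python. The fixed threshold, the grayscale input, and the largest-blob assumption are simplifications for the sketch, not my actual pipeline settings:

```python
import cv2

def measure_part(gray_frame, thresh=128):
    """Return (center_x, center_y, angle_deg) of the largest bright blob,
    in pixel coordinates, or None if nothing is found."""
    # Binarize: the white LED body stands out against the black tape pocket.
    _, binary = cv2.threshold(gray_frame, thresh, 255, cv2.THRESH_BINARY)
    # OpenCV 4.x returns (contours, hierarchy).
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    # The minimum bounding rectangle of the largest contour gives the
    # center point and the rotation angle.
    largest = max(contours, key=cv2.contourArea)
    (cx, cy), (w, h), angle = cv2.minAreaRect(largest)
    # Note: OpenCV's angle convention differs between versions; it has to
    # be mapped onto the machine's C-axis convention before use.
    return cx, cy, angle
```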
Using an incremental loop, the down camera is first centered on the part to within a given tolerance, just like in optical homing.
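A sketch of that centering loop could look like the following; `machine`, `camera`, `MM_PER_PIXEL`, and the tolerance are hypothetical stand-ins for whatever motion API and camera calibration you actually use:

```python
MM_PER_PIXEL = 0.02   # assumed down-camera scale (mm per pixel)
TOLERANCE_MM = 0.02   # stop once the part center is within this distance
MAX_PASSES = 5        # safety bound on the incremental loop

def center_on_part(machine, camera):
    """Jog the head until the part center coincides with the optical axis.
    Returns the part's measured rotation angle in degrees."""
    for _ in range(MAX_PASSES):
        result = measure_part(camera.grab_gray())
        if result is None:
            raise RuntimeError("part not found in pocket")
        cx, cy, angle = result
        # Offset of the part center from the image center, in machine units.
        # Watch the sign of the Y axis: image Y usually points down.
        dx = (cx - camera.width / 2) * MM_PER_PIXEL
        dy = (cy - camera.height / 2) * MM_PER_PIXEL
        if abs(dx) < TOLERANCE_MM and abs(dy) < TOLERANCE_MM:
            return angle          # centered; hand back the rotation
        machine.move_to(machine.x + dx, machine.y + dy)
    raise RuntimeError("centering did not converge")
```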
Then the measured angle is used to "correct" the 0°-based pickup angle, so that after pickup and re-establishing 0°, the part hangs exactly centered and aligned at 0° under the nozzle.
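The angle correction itself is just a rotated pick. In sketch form, again with hypothetical `nozzle` methods, the camera-to-nozzle offset omitted, and the rotation sign depending on your machine's conventions:

```python
def pick_aligned(machine, nozzle, camera):
    """Pick a part so it ends up centered and at 0 deg under the nozzle."""
    measured_angle = center_on_part(machine, camera)
    # Rotate the nozzle to match the part's rotation in the pocket, pick,
    # then rotate back: the part arrives aligned at 0 deg.
    nozzle.rotate_to(measured_angle)
    nozzle.pick()
    nozzle.rotate_to(0.0)
```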
This works for the 5050 LEDs and now, after finding the optimal MBR settings, also for Atmel TQFP-32 chips and FT232RL USB chips.
The total pick & place time is even shorter, thanks to the short "offset" move of the down camera and not having to travel the long distance to the up camera.
Now I ask myself: Do I still need the up camera?
What is your opinion? For what would I still need a bottom-view-based alignment routine?