Re: Code Contributions to the software
Posted: Wed May 27, 2015 12:19 pm
Hi Reza,
I had more success with your auto calibration this morning.
There is some sort of circular reference though: For Juha's optical homing to work, the X / Y box size needs to have approximately correct values - otherwise the machine will not home in on the circle, but will move too far away to recognize it. This is why homing no longer worked after the failed auto calibration (see my previous post).
I set some approximate values for the box size, tweaked the settings for the homing mark visual filters and voilà - I finally got auto calibration to work. It now returns deterministic results, i.e. I get the same result for each run.
The measurements still did not match my own, but I found the cause and it was rather simple: My homing mark is on the table surface, while I measured the box size at PCB level, which is table surface + 1.6 mm.
As I use a standard lens, the closer objects are to the lens, the larger their projection - that is, the larger they appear on camera. Given the high resolution of my downward facing camera, this effect is quite significant and results in a change in box size of about 0.5 to 0.6 mm. I also use custom component trays which sit even higher (table surface + 3 mm), so the effect is stronger there and the projected pickup locations do not match the camera picture.
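Just to illustrate the effect, here is a minimal sketch assuming a simple pinhole model. The lens-to-table distance and the box size are made-up example numbers, not values from my machine; they are only chosen so the resulting error lands in roughly the 0.5 mm range I observed:

    # Minimal sketch of the perspective effect with a non-telecentric lens.
    # camera_height and box_at_table are assumed example values.

    camera_height = 60.0   # mm, assumed lens-to-table distance
    box_at_table  = 20.0   # mm, assumed X box size calibrated on the table surface

    def box_size_at(z_offset_mm):
        # The field of view shrinks linearly as the object plane moves closer
        # to the lens, so the effective box size at a raised surface is smaller.
        return box_at_table * (camera_height - z_offset_mm) / camera_height

    print(box_size_at(0.0))   # table surface:          20.00 mm
    print(box_size_at(1.6))   # PCB surface (+1.6 mm):  ~19.47 mm
    print(box_size_at(3.0))   # tray surface (+3 mm):   ~19.00 mm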
I am not sure how to fix this yet, and also not sure if I absolutely have to. I found some telecentric lenses for M12 (S-mount) which would solve the problem optically, but these are rather expensive. Also, the ones I found have a relatively long working distance (WD > 40 mm), so I would have to mount the camera much higher. Another option would be compensation in software, but then I would need to "predefine" the height / Z-level of the different areas of the work surface and measure the pixel-to-mm ratio for each Z-level (a rough sketch of that idea follows below)... Seems complicated.
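Something like the following is what I have in mind for the software compensation, purely as a hypothetical sketch - the zone names, heights and helper functions are made up for illustration, not anything that exists in the software:

    # Hypothetical sketch of the "predefined Z-levels" compensation idea.

    camera_height = 60.0   # mm, assumed lens-to-table distance
    base_box_size = 20.0   # mm, box size calibrated at table level (Z = 0)

    zone_heights = {       # assumed Z offsets above the table surface
        "table":     0.0,
        "pcb":       1.6,
        "tape_tray": 3.0,
    }

    def box_size_for(zone):
        # Scale the calibrated box size to the Z-level of the zone in view.
        z = zone_heights[zone]
        return base_box_size * (camera_height - z) / camera_height

    def pixels_to_mm(dx_px, image_width_px, zone):
        # Convert a pixel offset in the camera image to mm at that zone's height.
        return dx_px * box_size_for(zone) / image_width_px

    # The same 100 px offset corresponds to slightly less travel over a tray
    # than over the table:
    print(pixels_to_mm(100, 640, "table"))      # ~3.125 mm
    print(pixels_to_mm(100, 640, "tape_tray"))  # ~2.969 mm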
For pickup, the box size / pixel-to-mm ratio has no effect: Even though the projected pickup locations do not fit the image, the locations are correct in reference to the machine's coordinate system. Once the sprocket hole is detected, the relative location of the component is found without a problem. And for placement, the PCB surface will always be at the same height.
The up-looking camera does not have that problem, as the needle and components are always lowered to the same Z-level (PCB surface level).
By the way, have you thought about implementing a similar auto calibration feature for the up-looking camera? Optically center the needle, move the machine by a known distance, measure the offset, calculate the box size?
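Roughly what I imagine, as a sketch only - jog() and measure_needle_offset_px() stand in for whatever the software actually provides and are hypothetical, not existing functions:

    # Sketch of auto calibration for the up-looking camera.
    def calibrate_uplooking(jog, measure_needle_offset_px, image_width_px,
                            known_move_mm=1.0):
        # 1. Needle is assumed to already be centered optically over the camera.
        x0_px, _ = measure_needle_offset_px()
        # 2. Move the machine a known distance in X.
        jog(dx_mm=known_move_mm, dy_mm=0.0)
        # 3. Measure how far the needle moved in the image.
        x1_px, _ = measure_needle_offset_px()
        moved_px = abs(x1_px - x0_px)
        # 4. Box size = mm per pixel times image width.
        return known_move_mm / moved_px * image_width_px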
Thanks and regards
Malte