
Lessons learned and to be learned

Posted: Fri Jul 15, 2016 6:39 pm
by sgraves
I thought I would put a few things out here and get some feedback. The biggie is that, for the up camera, GetClosestCircle does not treat the center as being where the cross hairs are. My new needle-offset algorithms take this into account, but I have also implemented an up-camera place feature, and the position of the cross hairs must be accounted for when preparing the part for placement.
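To make the cross-hair issue concrete, here is a minimal sketch of the bookkeeping I mean: the circle center that GetClosestCircle reports has to be referenced to the cross hairs (the needle axis), not to an assumed image center, before you can compute how far off the picked part is. The function and parameter names here are illustrative, not the actual code:

```python
# Hypothetical sketch: the detected circle center must be measured
# relative to the cross hairs (needle axis), not an assumed center.

def part_offset_mm(circle_center_px, crosshair_px, mm_per_px):
    """Offset of the picked part from the needle axis, in machine mm.

    circle_center_px: (x, y) center reported by GetClosestCircle, in pixels
    crosshair_px:     (x, y) pixel position of the cross hairs (needle axis)
    mm_per_px:        camera scale factor (assumed already calibrated)
    """
    dx = (circle_center_px[0] - crosshair_px[0]) * mm_per_px
    dy = (circle_center_px[1] - crosshair_px[1]) * mm_per_px
    return dx, dy

# Example: cross hairs 4 px right and 2 px above the detected center,
# at 0.025 mm/px, means the part is off by (-0.1, 0.05) mm.
print(part_offset_mm((100, 100), (104, 98), 0.025))
```

The placement move then subtracts this offset so the part, not the needle axis, lands on the target location.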

I have also implemented a VerifyNeedle button, which takes the needle back to the up camera and checks its calibration. We have found that the smaller needles are almost immediately knocked out of calibration (bent). Reducing the pickup and placement Zs (pressure) has helped a lot. We also tried cutting the needles down (to 10 mm or so long). That helped a lot with the bending, but we haven't managed to burnish the end of the needle properly; in particular, it probably isn't square to the surface, so it isn't picking up parts well.

The up camera is working very well, so my assistant is using it to place parts and has put getting the shorter needle to work on the back burner.

BTW, I will make this stuff available on GitHub eventually. I was trying to keep it all straight with various branches, but it was getting to be a problem. I was trying to do local merges, but that requires committing to the branch before I test the changes, which was leading to commits that just fixed little typos. And testing without my other changes is frustrating, because then I am dealing with problems I have already solved. So I am making changes on one catchall branch now and will sort out the branches later.

I have been unhappy with the mapping from the nominal to the measured locations (reading the fiducials). In our camera-aided placement, when we go to the first part in a row, the down camera goes to the measured location. One can then jog the position, rotate the cross hairs, and hit an "Update Location" button, which corrects the measured location and rotation. I am also backing those corrections into the nominal location and rotation, by taking the differences from the measured values and applying them to the nominal. This should result in a proper measured location on the next mapping.
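The Update Location bookkeeping described above can be sketched like this. This is just an illustration of the arithmetic, with made-up names; poses are (x, y, rotation) tuples:

```python
# Sketch of the "Update Location" logic: the jogged position becomes
# the new measured pose, and the same delta is backed into the nominal
# pose so the next mapping should land correctly.

def update_location(nominal, measured, jogged):
    """nominal, measured, jogged: (x, y, rotation) tuples.

    Returns (new_nominal, new_measured)."""
    dx = jogged[0] - measured[0]
    dy = jogged[1] - measured[1]
    dr = jogged[2] - measured[2]
    new_measured = jogged
    new_nominal = (nominal[0] + dx, nominal[1] + dy, nominal[2] + dr)
    return new_nominal, new_measured
```

Note this only patches individual locations; it does not fix whatever systematic error made the first mapping wrong, which is the subject of the next paragraphs.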

This is not the solution to the mapping problem, however. I have no reason to believe that the nominal locations are wrong when generated by my layout program (DipTrace). I think using HomographyEstimation is the problem. The transform it derives assumes it is dealing with an image, and images have distortions that we will not have, e.g. the fish-eye effect. Even though our points are measured with a camera, they represent machine coordinates: each fiducial is centered on the screen and the machine coordinates are recorded. If the camera does not move in its mount between readings (that is, the center of the view is always at the same position relative to the machine), then we are simply mapping from a board coordinate system to a machine coordinate system. I believe we can use affine transforms to do this mapping.

I am writing a function that uses affine transforms, and I am expecting a better mapping. I am looking for, and correcting, translation, rotation, scale (X and Y), and shear. I believe that a combination of these is sufficient to map from board coordinates to machine coordinates.
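One way to do this, and I'm sketching an assumption here rather than my finished function, is a least-squares fit of the six affine parameters from the matched fiducial points. Translation, rotation, X/Y scale, and shear are all captured by a 2x3 matrix, and with three or more fiducials numpy's `lstsq` solves it directly:

```python
import numpy as np

# Hedged sketch: fit an affine map (translation, rotation, X/Y scale,
# shear) from nominal board coordinates to measured machine coordinates.
# Requires at least 3 non-collinear fiducials; with 4 or more, the
# least-squares fit also averages out measurement noise.

def fit_affine(board_pts, machine_pts):
    """board_pts, machine_pts: N x 2 arrays of matching fiducials.

    Returns a 2 x 3 matrix A such that machine ~= A @ [x, y, 1]."""
    board = np.asarray(board_pts, dtype=float)
    machine = np.asarray(machine_pts, dtype=float)
    ones = np.ones((board.shape[0], 1))
    X = np.hstack([board, ones])                      # N x 3
    coeffs, *_ = np.linalg.lstsq(X, machine, rcond=None)  # 3 x 2
    return coeffs.T                                   # 2 x 3

def apply_affine(A, pt):
    x, y = pt
    return tuple(A @ np.array([x, y, 1.0]))
```

Unlike a homography, this map has no perspective terms, so parallel lines on the board stay parallel in machine coordinates, which matches the physical situation when the fiducials are read one at a time at the center of the view.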

Shear brings up a lesson learned. One thing I have seen is an issue with the squareness correction. My fiducials are in a rectangular pattern, but the machine coordinates were a little off: the corners had angles slightly above and below 90 degrees. I will be correcting this in general with a shear transformation, but the real issue is the squareness correction. Even though we measured it carefully, it was slightly off. I was able to tweak it and get the machine locations for the fiducials to have 90-degree corners (be rectangular). In any case, I expect that my mapping function will recognize shear and use it in the mapping, so that even an incorrectly squared machine will map OK.
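For anyone who wants to use the fitted transform as a squareness diagnostic, one common way (conventions vary, and this is just an illustration, not my actual code) is to factor the 2x2 linear part of the affine into rotation, X/Y scale, and shear. A persistent nonzero shear with fiducials that are known to be rectangular points straight at a squareness-correction error:

```python
import numpy as np

# Illustrative QR-style decomposition of an affine's 2x2 linear part
# into rotation, X/Y scale, and shear. Assumes M = R @ S with
# S = [[sx, shear * sy], [0, sy]] and positive scales.

def decompose(M):
    """M: 2x2 linear part of the affine.

    Returns (rotation_deg, sx, sy, shear)."""
    rot = np.arctan2(M[1, 0], M[0, 0])      # rotation from first column
    c, s = np.cos(rot), np.sin(rot)
    R = np.array([[c, -s], [s, c]])
    K = R.T @ M                              # upper triangular remainder
    sx = K[0, 0]
    sy = K[1, 1]
    shear = K[0, 1] / sy                     # dimensionless shear factor
    return np.rad2deg(rot), sx, sy, shear
```

On a properly squared machine the shear term should come out near zero; watching it drift is a cheap way to catch the squareness correction going off again.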