
BGA vision/placement demo

Posted: Fri Mar 04, 2016 3:36 am
by WayOutWest
A bunch of people have asked for this. Please excuse the crappy camera-pointed-at-monitor video quality; it was a quick-and-dirty thing.

First video is what I see, second video is the actual machine (which has been moved to a different room on account of nasty flux vapors):

https://vimeo.com/157673682
https://vimeo.com/157673710

Also please ignore the deafening vacuum pump in the second video; I just upgraded to two-stage dual vacuums (they rule) to reduce the number of different nozzles I need -- but I haven't redone the soundproofing.

The chip is a 0.8mm-pitch 84-ball DDR2 memory chip. There are 16 of them on this board (eight on each side). I've built over 200 of these boards so far.

If you want to see the chip actually being deposited on the board, keep an eye on the middle left-hand pane in the first video.

The upper-left pane is a wide-angle "plan view" camera. You can see the rectangle formed by the four PCB fiducials called out in this view.

The QR code you see is my origin marker. I've still got a few lingering problems with lost steps (the set screws on the Y-axle are the weakest link by far), so before the really ultra-critical placements (basically the BGAs) I reverify that the machine hasn't lost any steps. If it has, I dump the part, rehome, and start that placement over again.

MSER, people, I'm telling you, MSER is the way to go. Hough circles and Canny edges are a big waste of time. Even FFT-based template matching is unnecessary. I've dumped all of the vision code except for the MSER routine (which I rewrote) and the QR code detector (switched to zxing). You don't need anything else. For ultra-precise homing, do a QR recognition, then use the QR code coordinates to guide an MSER search for the three corner markers. When you see the corner markers flash RED, that's the MSER locking onto them.
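
To make that concrete, here's a rough sketch of the QR-guided corner-marker search using the stock OpenCV MSER -- not my actual code, just the shape of the idea (the expected marker positions would come from the QR code's decoded geometry, e.g. zxing's result points):

Code: Select all

import java.util.ArrayList;
import java.util.List;

import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.MatOfRect;
import org.opencv.core.Point;
import org.opencv.core.Rect;
import org.opencv.features2d.MSER;

public class QrGuidedHoming {
    // Search a window around each expected corner-marker position (derived from
    // the decoded QR code) and return the center of the best circle candidate.
    static List<Point> findCornerMarkers(Mat gray, Point[] expected, int winPx) {
        MSER mser = MSER.create();                  // stock detector, default parameters
        List<Point> centers = new ArrayList<>();
        for (Point e : expected) {
            int x = Math.max(0, (int) e.x - winPx); // clip the search window to the frame
            int y = Math.max(0, (int) e.y - winPx);
            int w = Math.min(2 * winPx, gray.cols() - x);
            int h = Math.min(2 * winPx, gray.rows() - y);
            List<MatOfPoint> regions = new ArrayList<>();
            MatOfRect bboxes = new MatOfRect();
            mser.detectRegions(gray.submat(new Rect(x, y, w, h)), regions, bboxes);
            // Keep the largest region whose bounding box is nearly square (circle-ish).
            Rect best = null;
            for (Rect r : bboxes.toArray()) {
                double sq = Math.min(r.width, r.height) / (double) Math.max(r.width, r.height);
                if (sq > 0.8 && (best == null || r.area() > best.area())) best = r;
            }
            if (best != null)
                centers.add(new Point(x + best.x + best.width / 2.0,
                                      y + best.y + best.height / 2.0));
        }
        return centers;
    }
}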

Re: BGA vision/placement demo

Posted: Fri Mar 04, 2016 5:52 am
by WayOutWest
Here's another one showing board fiducial autodetection followed by placing the "parking lot" of 0402 decaps on the backside.

https://vimeo.com/157682693

The fiducial detection is done first, coarsely, using the wide-angle plan view camera that can see an entire board at once. Then it uses the high-resolution narrowfield downcam to zero in on the exact locations.
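
Roughly, the hand-off looks like this (the names are made up for illustration; this isn't my actual code):

Code: Select all

public class CoarseToFine {
    interface Camera  { double[] findFiducialMm(int i); }            // hypothetical
    interface Machine { void moveCameraTo(double xMm, double yMm); } // hypothetical

    static double[][] locateFiducials(Camera planCam, Camera narrowCam,
                                      Machine machine, int count) {
        double[][] exact = new double[count][];
        for (int i = 0; i < count; i++) {
            double[] coarse = planCam.findFiducialMm(i); // whole-board view: fast but imprecise
            machine.moveCameraTo(coarse[0], coarse[1]);  // park the narrowfield downcam over it
            exact[i] = narrowCam.findFiducialMm(i);      // high-res view: the real measurement
        }
        return exact;
    }
}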

At the end of the video you can see the final results, plus a bit of my crude 1980s-style UI alongside the more modern stuff, like how artificial features (such as the fiducial rectangle) "float" over the viewfield, kinda like the computer-generated lines on football TV broadcasts. You can see the camera lens distortion in how the artificial features are dead-on in the center of the screen but drift out of alignment toward the edges of the viewfield. FireSight has a transform to correct for this, but it's incredibly computationally expensive... it's actually faster to move the camera!
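
The move-the-camera trick is just a convergence loop: jog until the feature sits at the optical center, where distortion is negligible, then trust the machine coordinates. A sketch (hypothetical interfaces, not my actual code):

Code: Select all

public class CenterOnFeature {
    interface Machine { void jog(double dxMm, double dyMm); double[] positionMm(); } // hypothetical
    interface Vision  { double[] featureOffsetPx(); } // feature offset from image center, in pixels

    static double[] measure(Machine machine, Vision vision,
                            double mmPerPixel, double tolerancePx) {
        for (int i = 0; i < 10; i++) {            // bounded, in case it never settles
            double[] off = vision.featureOffsetPx();
            if (Math.hypot(off[0], off[1]) < tolerancePx)
                return machine.positionMm();      // at the optical center, distortion ~ 0
            machine.jog(off[0] * mmPerPixel, off[1] * mmPerPixel);
        }
        throw new IllegalStateException("feature did not settle at image center");
    }
}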

The 0402s are all aligned using the upcam, although at this point the feeders are working well enough that I could probably skip that step.

Re: BGA vision/placement demo

Posted: Fri Mar 04, 2016 8:52 am
by mawa
First of all, I must congratulate you on what is, IMO, great progress and success.
WayOutWest wrote:
MSER, people, I'm telling you, MSER is the way to go. Hough circles and Canny edges are a big waste of time. Even FFT-based template matching is unnecessary. I've dumped all of the vision code except for the MSER routine (which I rewrote) and the QR code detector (switched to zxing). You don't need anything else. For ultra-precise homing, do a QR recognition, then use the QR code coordinates to guide an MSER search for the three corner markers. When you see the corner markers flash RED, that's the MSER locking onto them.
Sounds very interesting. Can you share your MSER solution with us on this forum, please?
A short explanation of your solution and some code snippets would help me and others improve our placement results.

What are the tolerances in XY displacement and A-axis angle offset for successfully aligning a component?

Re: BGA vision/placement demo

Posted: Fri Mar 04, 2016 9:16 am
by smdude
WayOutWest, nice job!!!! And nice boards too. Oh and sweet auto feeders!

I might be asking the same question as mawa: what is your step resolution (mm/step) for X and Y, and the minimum deg/step for the head rotation?
Are you using 0.9deg steppers for X and Y?

Cheers

Re: BGA vision/placement demo

Posted: Fri Mar 04, 2016 7:05 pm
by vonnieda
Hi WayOutWest,

Really nice work, and I love the use of MSER. It seems like a great solution to this problem. As others have asked, would you be willing to post source code or a description of your work? I note that you mentioned FireSight, so even if it's just your FireSight pipelines, that would be very helpful!

Thanks,
Jason

Re: BGA vision/placement demo

Posted: Sat Mar 05, 2016 1:36 am
by WayOutWest
mawa wrote: Sounds very interesting. Can you share your MSER solution with us on this forum, please?
I originally started using the FireSight MSER, which is an enhanced version of the OpenCV MSER routine, which in turn is an enhanced version of the Linear Time MSER research prototype (Nistér & Stewénius, ECCV 2008). You really should read the paper on it; I did, and afterwards was able to write a customized implementation of the algorithm that, for my specific task, is MUCH faster and more flexible.

I've since written my own MSER detector which is much, much faster for my purposes and can do things like shape-specific stability filtering (i.e. filter non-circular regions from the extremal region tree before computing stability). But the FireSight routine should be sufficient to do pretty much everything you see in the video, just a bit slower.
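
If you want to play with the shape-filtering idea using the stock OpenCV MSER (which only lets you filter after detection, not inside the extremal region tree like my rewrite does), here's a rough sketch of a post-hoc circularity filter:

Code: Select all

import java.util.ArrayList;
import java.util.List;

import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.MatOfRect;
import org.opencv.core.Point;
import org.opencv.features2d.MSER;
import org.opencv.imgproc.Imgproc;

public class CircularMser {
    // Keep only MSER regions that mostly fill their minimum enclosing circle.
    static List<MatOfPoint> circularRegions(Mat gray, double minFill) {
        MSER mser = MSER.create();
        List<MatOfPoint> regions = new ArrayList<>();
        mser.detectRegions(gray, regions, new MatOfRect()); // bounding boxes unused here

        List<MatOfPoint> round = new ArrayList<>();
        for (MatOfPoint region : regions) {
            Point center = new Point();
            float[] radius = new float[1];
            Imgproc.minEnclosingCircle(new MatOfPoint2f(region.toArray()), center, radius);
            if (radius[0] < 1) continue;                    // degenerate region
            // A filled circle covers ~100% of its enclosing circle; slivers and
            // elongated regions cover far less. region.rows() = pixel count.
            double fill = region.rows() / (Math.PI * radius[0] * radius[0]);
            if (fill >= minFill) round.add(region);
        }
        return round;
    }
}
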
smdude wrote:WayOutWest, nice job!!!! And nice boards too. Oh and sweet auto feeders!
Thanks! I did not design the feeders; they are commercial. The brand is "Quad"; the company was later acquired by "Tyco", so you might see either brand on them. They are 100% autonomous and don't need to communicate with the PnP; they detect pickup using an IR LED/sensor pair that is tripped by the pickup head. However, the original needle-tip pickup head isn't thick enough to trip the sensor, so I hotwired it, and I *do* send comms from the TinyG to advance the feeders.

Also, the feeders were discontinued before 0402s became popular, so you can't make them advance any less than 4mm per pickup. If you want to use 2mm-pitch 0402 tape you MUST override the feeder advance so it only advances on even-numbered pickups (and then tell the PnP to do odd-numbered pickups from a position 2mm away from the even-numbered pickup location; see the sketch below). So even though the Juki heads are big enough to trip the sensor reliably, I still needed to hotwire the 0402 feeder.
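
The override logic amounts to this (Feeder/Head are made-up stand-ins, not TinyG or feeder firmware APIs; the sign of the 2mm offset depends on your tape direction):

Code: Select all

public class HalfPitchPickup {
    interface Feeder { void advance4mm(); double pickX(); } // hypothetical stand-ins
    interface Head   { void pickAt(double xMm); }

    private int pickups = 0;

    // The feeder can only advance 4mm at a time, but 0402 tape has a part every
    // 2mm: advance on even-numbered pickups only, and take the in-between part
    // from a position 2mm away from the normal pick point on odd pickups.
    void pickNext(Feeder feeder, Head head) {
        if (pickups % 2 == 0) {
            feeder.advance4mm();                // exposes two parts at once
            head.pickAt(feeder.pickX());
        } else {
            head.pickAt(feeder.pickX() + 2.0);  // sign depends on tape feed direction
        }
        pickups++;
    }
}
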
smdude wrote: I might be asking the same question as mawa, what is your step resolution(mm/step) for x and y
and also minimum deg/step for the head rotation?
Are you using .9deg steppers for x and y?
Totally stock on the X/Y axes: 0.9-degree steppers. Here is the vision-calibrated travel per revolution (in mm):

Code: Select all

cal.travel_per_x_revolution=40.27140873307289
cal.travel_per_y_revolution=40.505520087159475
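
For scale: a 0.9-degree motor is 400 full steps per revolution, so that calibration works out to about 0.1mm of travel per full step, and microstepping divides it further:

Code: Select all

public class StepResolution {
    public static void main(String[] args) {
        double travelPerRev = 40.27140873307289;  // mm per revolution, X axis (cal above)
        double fullSteps = 360.0 / 0.9;           // 400 full steps/rev for a 0.9-degree motor
        System.out.printf("%.4f mm/full step%n", travelPerRev / fullSteps);     // ~0.1007
        System.out.printf("%.4f mm/microstep%n", travelPerRev / fullSteps / 8); // ~0.0126 at 8x (illustrative)
    }
}
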
mawa wrote: What are the tolerances in XY displacement and A-axis angle offset for successfully aligning a component?
Alignment tolerance (i.e. at the upcam) is set to 0.1mm on the X/Y axes and 0.2 degrees on the rotational axis. I would like to have a smaller rotational tolerance, but it's impossible with the 1.8-degree A-axis stepper, and you can't get 0.9-degree hollow-shaft steppers. I plan on converting BACK to the original LitePlacer off-axis A-motor so I can put gearing between the A-motor and the shaft and use a 0.9-degree motor. I've been thinking of using a 15:1 planetary gearbox for ultimate accuracy; 0.9 degrees through 15:1 is 0.06 degrees per full step, well inside that 0.2-degree tolerance.
vonnieda wrote: Really nice work, and I love the use of MSER. It seems like a great solution to this problem. As others have asked, would you be willing to post source code or a description of your work? I note that you mentioned FireSight, so even if it's just your FireSight pipelines, that would be very helpful!
It's been almost a month since I removed FireSight, but here's a scrap of what I was using back then... ping me again for more details and I'll explain more when I have time; have to run now...

Code: Select all

    // Helpers referenced below (RawImage, RectRegion, invoke, firesight2point,
    // etc.) are from elsewhere in my codebase; the string built here is the JSON
    // pipeline that actually gets handed to FireSight.
    public static HashSet<RectRegion> mser(RawImage img, double minArea, double maxArea, boolean show, double delta) {
        HashSet<RectRegion> ret = new HashSet<RectRegion>();
        StringBuffer pipeline = new StringBuffer();
        pipeline.append("{'op':'cvtColor', 'code':'CV_BGR2GRAY'},");  // color mser is ultra-slow!
        pipeline.append("{'op':'MSER'");
        pipeline.append(",'name':'mser'");
        pipeline.append(",'detect':'rects'");      // we only want bounding rects, not point sets
        pipeline.append(",'minArea':"+minArea);
        pipeline.append(",'maxArea':"+maxArea);
        pipeline.append(",'edgeBlurSize':0");
        //pipeline.append(",'maxVariation':0.075");
        pipeline.append(",'delta':"+delta);
        if (show) pipeline.append(",'color':[-1,-1,-1,-1]");
        pipeline.append("}");
        if (show) {
            //pipeline.append(",{'op':'cvtColor', 'code':'CV_GRAY2BGR'}");
            pipeline.append(",{'op':'drawRects', 'model':'mser', 'color':[255,0,0]}");
        }

        // FireSight wants double-quoted JSON; the single quotes above just keep
        // the Java string readable.
        JsonObject jsonObject = invoke("["+pipeline.toString().replace('\'', '\"')+"]", img, show, show);
        for(String stage : jsonObject.names()) {
            JsonValue stageResult = jsonObject.get(stage);
            if (!stageResult.isObject()) continue;
            JsonValue rects = stageResult.asObject().get("rects");
            if (rects == null) continue;
            for(JsonValue rectItem : rects.asArray()) {
                // firesight2point maps FireSight's image coordinates into my space.
                Point p = firesight2point(img, rectItem.asObject().get("x").asDouble(), rectItem.asObject().get("y").asDouble());
                double width = rectItem.asObject().get("width").asDouble();
                double height = rectItem.asObject().get("height").asDouble();
                double angle = -rectItem.asObject().get("angle").asDouble();  // flip sign to match my convention
                ret.add(RectRegion.newRectRegion(Rect.newRect(p, width, height), Angle.Mod360.newDegrees(angle)));
            }
        }
        return ret;
    }


Re: BGA vision/placement demo

Posted: Sat Mar 05, 2016 11:43 pm
by smdude
Thanks!

With the 1.8deg stepper for the nozzle, do you think going to 0.9 would be enough to line up 0.5mm BGAs? Though, I s'pose, once you move away from the hollow-shaft stepper, you might as well just use a 10:1 or 15:1 reduction box on a smaller stepper and be done with it!! Keeps it compact and light. Then you have to come up with a way to get the vacuum to the nozzle, but that is not really much of a challenge; the Z limit switch would be a bit more of one.

Re: BGA vision/placement demo

Posted: Sun Mar 06, 2016 12:17 am
by smdude
Hmmm, a gearbox has its own downside in that the backlash with no load could be up to 1 degree. Though if you made it so your vacuum coupling had a bit of drag and only rotated the nozzle in one direction, I don't think it would matter, and/or you could calibrate how many steps it takes from a fwd/rev reversal until the nozzle actually moves. Belt drive is looking more attractive! Or putting up with 0.9 steps...

Re: BGA vision/placement demo

Posted: Sun Mar 06, 2016 2:15 am
by WayOutWest
smdude wrote:Hmmm, a gearbox has its own downside in that the backlash with no load could be up to 1 degree.
Backlash is easy to solve on PnP machines: just make sure you always approach your goal from the same direction. So, in the case of rotation, always make sure your last move is clockwise. If you're at 3:00 and want to get to 2:00, go to 1:00 first, then to 2:00.

Backlash is a much more serious problem for machines like mills and lathes where the tool's path matters (and not just its final position).
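
In code, the same-direction rule is tiny. A sketch (hypothetical axis interface, not my actual code; degrees increase clockwise here):

Code: Select all

public class AntiBacklash {
    interface Axis { double angle(); void rotateTo(double degrees); } // hypothetical

    // Convention for this sketch: increasing degrees = clockwise.
    static void rotateClockwiseOnto(Axis a, double targetDeg, double overshootDeg) {
        if (targetDeg < a.angle())                 // direct move would finish counterclockwise,
            a.rotateTo(targetDeg - overshootDeg);  // so swing past the target first (the "1:00" stop)
        a.rotateTo(targetDeg);                     // final approach is always clockwise
    }
}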

In my video you'll notice the BGA alignments always move upward and to the right prior to the vision operation. That's why you see that jerking diagonal motion -- so that the last movement before the vision operation is always in the +X +Y direction. It's not optimized much at all; I'm sure a bunch of those movements are unnecessary.

By the way, the needle camera (left middle pane) is mirrored horizontally. I do this so that "left" is always "negative X direction" in every video feed, even though the needlecam is pointing towards the front of the machine. The upcam is mirrored vertically for the same reason.
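
For anyone replicating the multi-pane view with OpenCV, the mirroring is a single call per frame:

Code: Select all

import org.opencv.core.Core;
import org.opencv.core.Mat;

public class FeedMirror {
    // flipCode 1 flips around the vertical axis (horizontal mirror, needlecam);
    // flipCode 0 flips around the horizontal axis (vertical mirror, upcam).
    static void normalizeFeeds(Mat needleCamFrame, Mat upCamFrame) {
        Core.flip(needleCamFrame, needleCamFrame, 1);
        Core.flip(upCamFrame, upCamFrame, 0);
    }
}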

Re: BGA vision/placement demo

Posted: Sun Mar 06, 2016 2:23 am
by WayOutWest
smdude wrote: With the 1.8deg stepper for the nozzle, do you think going to 0.9 would be enough to line up 0.5mm BGAs?
Yes, definitely.

I think the existing X+Y axes are also definitely enough for 0.5mm BGAs. Not sure about the Juha-original rotational axis; [s]it's a 1.8-degree stepper but it has some gearing[/s] (edit: see my posting below, I was wrong). I know that without gearing, a 1.8-degree stepper with 1/8 microstepping (which isn't anywhere near 8x as accurate as full stepping) can just barely do 0.8mm BGAs. It definitely could not do 0.5mm reliably unless they are very small arrays like 4x4 -- see the rough numbers below. But with the stock LitePlacer design it shouldn't be hard to upgrade the solid-shaft NEMA14 to a better 0.9-degree stepper.
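
Rough numbers behind that, assuming corner balls about 5mm from the package center (adjust for your part) -- the displacement an angular error causes at a ball is about r * theta:

Code: Select all

public class RotationError {
    public static void main(String[] args) {
        double radiusMm = 5.0;  // assumed package-center-to-corner-ball distance
        for (double stepDeg : new double[] {1.8, 0.9}) {
            // Worst case of one full step of angular error, as displacement at the ball.
            double errMm = radiusMm * Math.toRadians(stepDeg);  // arc length = r * theta
            System.out.printf("%.1f deg/step -> ~%.2f mm at the corner balls%n", stepDeg, errMm);
        }
        // ~0.16mm for 1.8 deg, ~0.08mm for 0.9 deg; microstepping shrinks this
        // further, but (as noted above) not by the full 8x.
    }
}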