JuKu wrote:ODrive is a control board that drives brushless DC motors with acceleration trajectories and an encoder feedback loop.
Yes, I know... I was one of the Kickstarter backers for
Mechaduino, and probably one of the first people outside TropicalLabs to have one, because I got impatient and ordered my own boards using TropicalLabs' Gerber files. My LitePlacer has been all-closed-loop control on the XYZ axes for a while now, so this is nothing new. But the sheer power and acceleration in that video is mind-blowing.
Some issues on his side prevented XY demoing.
Can you elaborate? This was actually the only suspicious part of the demo. Is it simply a lack of multi-axis kinematics code, or something more serious?
Don't get too excited about this quite yet: the ODrive is in alpha development, and it will take a good while before it is usable in a product. And the ODrive alone is not sufficient; we'll need four-channel control, valve and pump switching, and so on.
Well, if you ask me, that's the easy part; I already had to do all of that to Mechaduino-ize my LitePlacer (the TinyG is gone, gone, gone, and nowhere near it anymore). I actually started doing it before I surgically excised the TinyG from my machine; the fewer tasks I had it handling, the happier I was.
Personally I'm working towards exactly the opposite of that. Centralizing all of the control logic means you've got large bundles of wires running through the dragchain. My goal is to have nothing but +48V, ground, and two-wire RS485 running through the dragchain (video is sent over WiFi using UDP packets). Chips are cheap; wires are a headache. Putting all the brains in one place like that leaves you with octopus cables...
I studied Vbesmen's work pretty closely while rewriting the Mechaduino firmware, to get it to a point where I understood it. There's no need to go the TinyG route of all-singing-all-dancing-all-in-one JSON-GCode-to-wire-toggling.

Vbesmen's approach, which I very strongly advocate, was to have each closed-loop motor run a PID loop internally, accept a stream of (time, goal-position) commands (like: 1.2ms from now you should be at rotational position "negative 947.1 degrees"), and report back a stream of (time, actual-position) pairs. That (plus tuning the PID loop) is a very simple, clean API on top of which you can build pretty much whatever you need. For example: parse G-code, plan out a bounded-jerk trapezoidal acceleration profile, and reduce it to a (time, position) stream.

Unfortunately TinyG smashes these two components together and gets them all tangled up in each other. It's a shame, especially since there is really no need to be using G-code for a pick and place anyway. Point being, you are probably closer than you think; just don't get trapped in the box of approaching everything the way TinyG does.
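To make the "reduce a motion profile to a (time, position) stream" idea concrete, here's a rough sketch in Python. For brevity it samples a plain trapezoidal *velocity* profile rather than the bounded-jerk (trapezoidal-acceleration / S-curve) version, but the reduction step is the same idea either way. All the names here are mine, not Vbesmen's actual API, and the units are whatever your axis uses (degrees, mm, encoder counts):

```python
import math

def trapezoid_samples(distance, v_max, a_max, dt):
    """Reduce a single move to a list of (time, position) pairs.

    distance: total move length (assumed positive here)
    v_max:    cruise velocity limit
    a_max:    acceleration limit
    dt:       sample period, e.g. 0.0012 for a command every 1.2 ms
    """
    # Time and distance spent ramping up to v_max at full acceleration.
    t_acc = v_max / a_max
    d_acc = 0.5 * a_max * t_acc ** 2

    if 2 * d_acc > distance:
        # Short move: triangular profile, we never reach v_max.
        t_acc = math.sqrt(distance / a_max)
        d_acc = distance / 2
        v_peak = a_max * t_acc
        t_flat = 0.0
    else:
        v_peak = v_max
        t_flat = (distance - 2 * d_acc) / v_max

    t_total = 2 * t_acc + t_flat

    samples = []
    t = 0.0
    while t < t_total:
        if t < t_acc:
            # Accelerating: x = a*t^2/2
            pos = 0.5 * a_max * t * t
        elif t < t_acc + t_flat:
            # Cruising at v_peak
            pos = d_acc + v_peak * (t - t_acc)
        else:
            # Decelerating: mirror of the acceleration ramp
            td = t_total - t
            pos = distance - 0.5 * a_max * td * td
        samples.append((t, pos))
        t += dt
    samples.append((t_total, distance))  # land exactly on target
    return samples
```

Each (time, position) pair then goes out over the wire to the motor, which is only responsible for holding its PID setpoint against that schedule; all the planning intelligence stays on the host side.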
This is still in development, so no real-world results. But this is closed-loop control with encoder position feedback, so in principle it should be encoder-limited.
Since the encoder is an on-axis rotary one, the limitation will be the slack/backlash of the rest of the machine "downstream" of the rotary encoder. That's what I was mainly curious about.