Interestingly, this isn't the best (minimax or least mean square) linear fit over that range. But it's pretty good and has zero error on both ends, so it can be stitched together into a continuous four-quadrant approximation that covers all finite inputs to the two-argument atan2(β,α):

One common implementation determines the quadrant from the signs of α and β and then runs the linear approximation on either x = β/α or x = α/β, whichever falls in the range −1 ≤ x ≤ 1 for that quadrant. The combination of a quadrant offset and the local linear approximation determines the final result.

It's possible to extend this method to three inputs, a set of three-phase signals assumed to be balanced. Instead of quadrants, the input domain is split based on the six possible sorted orders of the three-phase signals. Within each sextant, the middle input (the one crossing zero) is divided by the difference of the other two to form a normalized input, analogous to selecting x = β/α or x = α/β in the atan2() implementation:

This normalized input, which happens to range from -1/3 to 1/3, is multiplied by a linear fit constant to create the local approximation. To follow the pattern of the four-quadrant approximation, a constant of π/2 gives a fit that's not (minimax or least mean square) optimal, but that stitches together continuously at sextant boundaries. As with the atan2() implementation, the combination of a sextant offset and the local approximation determines the final result.

For this three-phase approximation the maximum error is ±1.117°, significantly lower than that of the four-quadrant approximation. If starting from three-phase signals anyway, this method may also be faster, or at least nearly the same speed. The conditional section for selecting a sextant is more complex, but there are fewer intermediate math operations. (Both still have the single pesky floating-point divide for normalization.)

To put this to the test, I tried directly computing the phase of the three flux observer signals on TinyCross's dual motor drive. This usually isn't the best way to derive sensorless rotor angle: An angle tracking observer or PLL-type method can do a better job at filtering out noise by enforcing physical bandwidth constraints. But for this test, I just compute the angle directly using either atan2f(β,α) or one of the two approximations above.

Computation times for different angle-deriving algorithms.

The three-phase approximation does turn out to be a little faster in this case. To keep the comparison fair, I tried to use the same structure for both approximations: the quadrant/sextant selection conditional runs first, setting bits in a 2- or 3-bit code. That code is then used to look up the offset and the numerator/denominator for the local linear approximation. This is running on an STM32F303 at 72 MHz. The PWM loop period is 42.67 μs, so a 1.5-2.0 μs calculation per motor isn't too bad, but every cycle counts. It's also a "free" accuracy improvement:

The ±4° error ripple in the four-quadrant approximation shows up clearly in real data. The smaller error of the three-phase approximation is mostly lost in other noise. When the error is taken with respect to a post-computed atan2f(), the four-quadrant approximation looks less noisy. But I think this is just a mathematical artifact of comparing two quantities derived from the same signals. When the error is taken with respect to an independent angle measurement (from Hall sensor interpolation), both show similar amounts of noise.

I don't have an immediate use for this, since TinyCross is primarily sensored and the flux signals are already synchronously logged (for diagnostics only). But clock cycle hunting is a fun hobby.

Hi Shane,

1.5 μs sounds quite long; isn't a PLL structure less time-consuming? The biggest deal with a PLL is the calculation of its input error. I do this with the trigonometric addition theorem sin(x−y) = sin(x)·cos(y) − cos(x)·sin(y). If you then assume the flux components are normalized by 1/(nominal flux), you can use them as sin(x) and cos(x). The sin(y) and cos(y) you already have, since you would use them to calculate the Park transform. With the approximation sin(x−y) ≈ x−y, you can use the value directly as the PLL input.

Ah, you're right, that should be faster in the context of rotor angle tracking. It still requires converting to (α,β), but the normalization step is a multiply instead of a divide. Then, only a couple more multiplies and adds to do the cross product with the components of the existing estimate and add it to the tracked angle. I'll benchmark that method this weekend, but I agree it should be faster. (No divide and no conditionals!)

I suppose the only time calculating the angle directly might be justified is if you don't have any prior context to work with, such as before PLL lock or in an application that doesn't continuously track the angle. Then it might make sense to have conditionals, since there's no readily-available small angle approximation.

If you do not want to track the angle continuously, then I agree. But with the PLL you gain the speed info for free. It all depends on what you want to know.

Bench test time for the PLL is 1.00 μs (72 clock cycles)! I think most of that is just LDR and MLA instructions at this point. I'm sure it can be optimized even a bit further.

Good point about speed - that saves one subtraction at least.

One issue is that I don't have the α/β components of the previous loop's estimate, cos(y)/sin(y) in your notation, readily available. I don't compute them every PWM loop, since my current controller only runs at 1 kHz. That adds back some time, but it's not a big deal since they're just look-ups. Overall, I think it's definitely the better approach for angle tracking.

Interesting. Of course, for a small cost you could improve the precision by a lot by increasing the degree of the polynomial. To save anyone the bother, I've done it myself:

Degree 1, error 2.9°: 0.82843*x

Degree 3, error 0.31°: (-0.1895*x^2 + 0.970563)*x

Degree 5, error 0.039°: ((0.078*x^2 - 0.28706)*x^2 + 0.9949494)*x

Degree 7, error 0.0053°: (((-0.03825*x^2 + 0.145)*x^2 - 0.32053)*x^2 + 0.99913345)*x

Degree 9, error 0.00075°: ((((0.02042*x^2 - 0.0842)*x^2 + 0.17944)*x^2 - 0.330105)*x^2 + 0.999851323)*x

So generally for one extra multiplication and addition (and a one-time extra multiplication) you can improve the precision by a factor of ~7x.