Can you perform every operation you could perform on a unit rotation matrix on a non-unit rotation matrix?

I know you can’t do the nice inverse that orthogonal matrices have (inverse = transpose), but aside from that, is it possible?
Also is there a good proof of this?
I am referring specifically to 3x3 rotation matrices, but other dimensions could be cool too
Maybe a better way to phrase this question: is “normalizing” (is that the word for this? I think “normal matrix” means something else) commutative and distributive?

Also if my definitions are wrong please correct me
I define a unit rotation matrix like Roblox CFrames, with rightVector, upVector, and backVector each normalized (with magnitude 1)

I define a non-unit rotation matrix like Roblox CFrames, with rightVector, upVector, and backVector, except they DO NOT have to be normalized and can have different magnitudes from each other

A unit rotation matrix is slightly more special than just having normalized vectors: the vectors must also be pairwise orthogonal.

The existence of a matrix inverse can be characterized in many different ways, but it is often stated in terms of the determinant. If det(M) = 0, then there is no inverse. In practice, matrix inversion is numerically unstable when det(M) is very small.

As for an easy proof, start with det(AB) = det(A) det(B) and apply it to M * M^-1 = I: that gives det(M) det(M^-1) = det(I) = 1, which is impossible when det(M) = 0.
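A quick numerical illustration of that proof (my own pure-Python sketch, not from the thread): det is multiplicative, so if a matrix with det(M) = 0 had an inverse, det(M) * det(M^-1) = 1 would be impossible.

```python
def det3(m):
    """Determinant of a 3x3 matrix given as a list of three row lists."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def matmul3(A, B):
    """Product of two 3x3 matrices."""
    return [[sum(A[r][k] * B[k][c] for k in range(3)) for c in range(3)]
            for r in range(3)]

A = [[1.0, 2.0, 0.0],
     [0.0, 1.0, 3.0],
     [4.0, 0.0, 1.0]]
B = [[2.0, 0.0, 1.0],
     [1.0, 1.0, 0.0],
     [0.0, 2.0, 1.0]]

# det is multiplicative, so applying it to M * M^-1 = I forces
# det(M) * det(M^-1) = 1, which rules out det(M) = 0.
assert abs(det3(matmul3(A, B)) - det3(A) * det3(B)) < 1e-9
```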


What about for matrix multiplication though?

Any two matrices can be multiplied. Actually, you can store a general 3x3 matrix into the rotation components of a CFrame and use it for matrix multiplication. (I have yet to see anyone make significant use of this in a game, though.)


What I mean is: suppose you have two matrices A and B (neither is necessarily “normalized”, but normalizing the columns will make them orthogonal matrices with inverses A^T and B^T). Is this a true statement?

normalizeColumns(A)*normalizeColumns(A*B) == normalizeColumns(A*B)

Normalizing the columns will not, in general, make them orthogonal. I would also guess, with high confidence, that no operation exists that both orthogonalizes matrices and has the property above.
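To see why column normalization alone doesn’t orthogonalize, here is a small pure-Python check (my own sketch): normalization only rescales each column, so any angle between columns survives it.

```python
import math

def normalize_columns(A):
    """Scale each column of a 3x3 matrix to unit length (columns must be nonzero)."""
    out = [[0.0] * 3 for _ in range(3)]
    for j in range(3):
        n = math.sqrt(sum(A[i][j] ** 2 for i in range(3)))
        for i in range(3):
            out[i][j] = A[i][j] / n
    return out

def col_dot(A, i, j):
    """Dot product of columns i and j of a 3x3 matrix."""
    return sum(A[r][i] * A[r][j] for r in range(3))

# Columns (1,0,0) and (1,1,0) are 45 degrees apart.
M = [[1.0, 1.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]

N = normalize_columns(M)
# The columns are unit length now, but columns 0 and 1 are still 45 degrees
# apart: their dot product is cos(45 deg), roughly 0.707, not 0.
```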


Part of the assumption is that if you were to normalize the columns of matrix A, it would become an orthogonal matrix

The question is whether, with that assumption in mind, this would distribute.

you can assume that A = A' * S,
where A' is an orthogonal matrix
and S is

random(), 0, 0,
0, random(), 0,
0, 0, random()
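That decomposition can be checked numerically. The sketch below (mine, with fixed positive scales standing in for random()) builds A = A' * S from a z-axis rotation A' and a positive diagonal S, and column normalization recovers A' exactly.

```python
import math

def matmul3(A, B):
    """Product of two 3x3 matrices."""
    return [[sum(A[r][k] * B[k][c] for k in range(3)) for c in range(3)]
            for r in range(3)]

def normalize_columns(A):
    """Scale each column of a 3x3 matrix to unit length."""
    out = [[0.0] * 3 for _ in range(3)]
    for j in range(3):
        n = math.sqrt(sum(A[i][j] ** 2 for i in range(3)))
        for i in range(3):
            out[i][j] = A[i][j] / n
    return out

t = 0.6
Aprime = [[math.cos(t), -math.sin(t), 0.0],
          [math.sin(t),  math.cos(t), 0.0],
          [0.0,          0.0,         1.0]]   # orthogonal: rotation about z
# Fixed positive scales standing in for diag(random(), random(), random());
# positivity matters, since a negative scale would flip a column's direction.
S = [[2.0, 0.0, 0.0],
     [0.0, 0.5, 0.0],
     [0.0, 0.0, 3.0]]

A = matmul3(Aprime, S)           # column j of A' scaled by S[j][j]
recovered = normalize_columns(A)
```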

Yes, I forgot to answer the original question. I’m assuming you meant to write

normalizeColumns(A)*normalizeColumns(B) == normalizeColumns(A*B)

instead of

normalizeColumns(A)*normalizeColumns(A*B) == normalizeColumns(A*B)

Assuming that, the operation does not distribute. The reason is that diagonal matrices like S do not, in general, commute with an arbitrary matrix.
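A numeric counterexample (my own sketch) makes the failure concrete: take A and B whose normalized columns are rotations, with non-scalar diagonal scale factors, and compare the two sides of the proposed identity.

```python
import math

def matmul3(A, B):
    return [[sum(A[r][k] * B[k][c] for k in range(3)) for c in range(3)]
            for r in range(3)]

def normalize_columns(A):
    out = [[0.0] * 3 for _ in range(3)]
    for j in range(3):
        n = math.sqrt(sum(A[i][j] ** 2 for i in range(3)))
        for i in range(3):
            out[i][j] = A[i][j] / n
    return out

def rot_z(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def rot_x(t):
    c, s = math.cos(t), math.sin(t)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def diag(a, b, c):
    return [[a, 0.0, 0.0], [0.0, b, 0.0], [0.0, 0.0, c]]

# Both A and B become orthogonal when their columns are normalized.
A = matmul3(rot_z(0.5), diag(2.0, 0.5, 1.0))
B = matmul3(rot_x(0.7), diag(1.0, 3.0, 0.25))

lhs = matmul3(normalize_columns(A), normalize_columns(B))
rhs = normalize_columns(matmul3(A, B))
gap = max(abs(lhs[i][j] - rhs[i][j]) for i in range(3) for j in range(3))
# gap is far from zero: normalizeColumns does not distribute over the product
```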


normalizeColumns(A)*normalizeColumns(A*B) == normalizeColumns(A*B)
was a typo but I meant to write
normalizeColumns(A)*normalizeColumns(B) == normalizeColumns(A*B)

Why is it a question of whether S commutes or not?

Let N(X) be the “normalized” part of X, and D(X) the diagonal matrix such that N(X) * D(X) = X. Then N(X) = X * D(X)^-1, which exists provided X has nonzero columns.

Suppose N(XY) = N(X) N(Y) always held. Then one can show that N(X^-1) = [N(X)]^-1. But N(X^-1) = X^-1 * D(X^-1)^-1, while [N(X)]^-1 = (X * D(X)^-1)^-1 = D(X) * X^-1, and these are not equal in general.

If the above were true, then X * D(X)^-1 * X^-1 = D(X^-1), which is a rather special condition.
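To see how special that condition is, here is a quick pure-Python check (my own sketch, using the N and D conventions above) on a generic invertible X: the left-hand side X * D(X)^-1 * X^-1 is not even diagonal, while D(X^-1) is.

```python
import math

def det3(m):
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def matmul3(A, B):
    return [[sum(A[r][k] * B[k][c] for k in range(3)) for c in range(3)]
            for r in range(3)]

def inv3(m):
    """Inverse of a 3x3 matrix via the adjugate (cyclic-index cofactors)."""
    d = det3(m)
    cof = [[m[(i + 1) % 3][(j + 1) % 3] * m[(i + 2) % 3][(j + 2) % 3]
            - m[(i + 1) % 3][(j + 2) % 3] * m[(i + 2) % 3][(j + 1) % 3]
            for j in range(3)] for i in range(3)]
    return [[cof[j][i] / d for j in range(3)] for i in range(3)]

def Dmat(X):
    """Diagonal matrix of column norms, so that N(X) * Dmat(X) = X."""
    out = [[0.0] * 3 for _ in range(3)]
    for j in range(3):
        out[j][j] = math.sqrt(sum(X[i][j] ** 2 for i in range(3)))
    return out

X = [[2.0, 1.0, 0.0],
     [0.0, 1.0, 1.0],
     [1.0, 0.0, 3.0]]
Xinv = inv3(X)

lhs = matmul3(matmul3(X, inv3(Dmat(X))), Xinv)  # X * D(X)^-1 * X^-1
rhs = Dmat(Xinv)                                # D(X^-1)
gap = max(abs(lhs[i][j] - rhs[i][j]) for i in range(3) for j in range(3))
# lhs has nonzero off-diagonal entries while rhs is diagonal,
# so the condition fails for this (generic) X.
```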


I don’t understand at all how you are getting that logic nor how it relates to S commuting

That’s because it doesn’t relate :sweat_smile: I was so focused on just providing a proof that I forgot about that.

To expand upon what I said about “S commuting”, we have that N(AB) = N(A) N(B), which would mean that AB * D(AB)^-1 = A * D(A)^-1 * B * D(B)^-1. Basically, AB * D1 = A * D2 * B * D3 for diagonal D1,D2,D3.

It isn’t exactly true that D2 * B must equal B * D2, but something close is true: that is, D2 * B = B * D2’ for some diagonal D2’. This only holds for special values of D2 and B, and since D2 is determined by A, it only holds for special values of A and B.
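That “something close” can be checked directly. In the sketch below (my own), a generic B fails to commute with a diagonal D, while a permutation matrix P satisfies D * P = P * D' for a re-ordered diagonal D'.

```python
def matmul3(A, B):
    return [[sum(A[r][k] * B[k][c] for k in range(3)) for c in range(3)]
            for r in range(3)]

def diag(a, b, c):
    return [[float(a), 0.0, 0.0], [0.0, float(b), 0.0], [0.0, 0.0, float(c)]]

Dm = diag(1, 2, 3)

# Generic B: (D*B)[i][j] = d_i * b_ij while (B*D)[i][j] = b_ij * d_j,
# so the products differ wherever B has off-diagonal entries.
B = [[1.0, 1.0, 0.0],
     [0.0, 1.0, 1.0],
     [1.0, 0.0, 1.0]]
assert matmul3(Dm, B) != matmul3(B, Dm)

# Permutation matrix: D * P = P * D', where D' just re-orders D's entries.
P = [[0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0],
     [1.0, 0.0, 0.0]]
Dprime = diag(3, 1, 2)
```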


I’m still not convinced that you can make this leap

How can you prove there is no (relevant) relation between D1 and [the ordering of D2 and D3 with B in between]? (Since afterwards you are treating D2 and D3 as independent from D1.)

I don’t think writing out a full proof will be very helpful for anyone without some background in group theory or linear algebra. I didn’t think a proof would be very complicated, but I didn’t quite think things through.

I could prove to you that, given a few very weak assumptions about A and B, your original equation implies that B itself would have to be a generalized permutation matrix. But that misses the point of what I wanted to express: the relationship between diagonal matrices and arbitrary matrices gets complicated in two dimensions and higher, where things don’t commute. The existence of a function f that “distributes” over matrix multiplication in the way you described is very restrictive on what type of function f can be. In group-theoretic language, such functions are termed homomorphisms.

Anyway, it might be more helpful if you explain why you’re looking for a function like normalizeColumns which satisfies the above property. If you explain the context for your question, I might be able to give a more helpful answer.

Edit: Corrected “B would have to be diagonal” to “B would have to be a generalized permutation matrix”.


dang dude well i appreciate the time you spent on this

this came up for me because I was scripting my own CFrame lookAt function, and I wanted to avoid normalizing the lookVector (lookAt - pos), and therefore all the columns, until I absolutely needed to


That’s cool, but the connection between this and the original question isn’t obvious to me. CFrame lookAt just requires 2 cross products, and you don’t have to bother normalizing any intermediate vectors, just the end result.
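For reference, here is a pure-Python sketch of that construction (my own, mirroring the CFrame rightVector/upVector/backVector column layout; the function name and conventions are mine, not the actual Roblox implementation): two cross products on unnormalized vectors, with normalization only at the end.

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def unit(v):
    """Normalize a 3-vector (must be nonzero)."""
    n = math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)
    return (v[0] / n, v[1] / n, v[2] / n)

def look_rotation(pos, look_at, world_up=(0.0, 1.0, 0.0)):
    """Columns (right, up, back) of a rotation looking from pos toward look_at.

    Intermediate vectors stay unnormalized; only the final three are normalized.
    Degenerate when the view direction is parallel to world_up (caller's job).
    """
    # back points opposite the look direction (lookVector = -backVector)
    back = (pos[0] - look_at[0], pos[1] - look_at[1], pos[2] - look_at[2])
    right = cross(world_up, back)   # perpendicular to back by construction
    up = cross(back, right)         # perpendicular to both
    return unit(right), unit(up), unit(back)
```

Looking down the -z axis from the origin yields the identity columns (1,0,0), (0,1,0), (0,0,1).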


Don’t you have to normalize all three direction vectors at the end?

Also, I wanted a normalize operation on my orientation such that, for example, multiplying two of these lookAt-constructed orientations and then normalizing would give the same result as normalizing them individually and then multiplying

Yes. That’s the end result I meant.

Ah OK. Unfortunately there are no savings to be had there, since as suremark showed you above, this doesn’t work. If you have two of these not-normalized orthogonal matrices, A and B, and you need the rotation that is the product of their rotations, you have to either:

  1. Normalize A and B, then multiply, or
  2. Normalize A, multiply by the raw B, then normalize the resulting product. (This works because normalizeColumns(A) * B still has orthogonal columns whose lengths come only from B’s column scales, so the final normalization strips them off.)

The difference will be down to floating point artifacts.
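A quick numerical check of those two options (my own sketch; rotations about z and x with arbitrary positive column scales) shows they agree to floating-point precision.

```python
import math

def matmul3(A, B):
    return [[sum(A[r][k] * B[k][c] for k in range(3)) for c in range(3)]
            for r in range(3)]

def normalize_columns(A):
    out = [[0.0] * 3 for _ in range(3)]
    for j in range(3):
        n = math.sqrt(sum(A[i][j] ** 2 for i in range(3)))
        for i in range(3):
            out[i][j] = A[i][j] / n
    return out

def rot_z(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def rot_x(t):
    c, s = math.cos(t), math.sin(t)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def diag(a, b, c):
    return [[a, 0.0, 0.0], [0.0, b, 0.0], [0.0, 0.0, c]]

# Un-normalized matrices with orthogonal columns (rotation times positive scales).
A = matmul3(rot_z(0.4), diag(2.0, 0.5, 3.0))
B = matmul3(rot_x(1.1), diag(0.25, 4.0, 1.0))

# Option 1: normalize both, then multiply.
opt1 = matmul3(normalize_columns(A), normalize_columns(B))
# Option 2: normalize A, multiply by the raw B, then normalize the product.
opt2 = normalize_columns(matmul3(normalize_columns(A), B))

gap = max(abs(opt1[i][j] - opt2[i][j]) for i in range(3) for j in range(3))
# gap is at floating-point level: the two options produce the same rotation
```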
