ÄàD(Ÿsgãsgãã.ΔD2FsN._`VεUYøεX*O}}}-Q
Extremely slow, since I'm using a brute-force approach with three Cartesian products. But performance is irrelevant for code-golf challenges, I guess (I would love to see this same challenge as fastest-code or fastest-algorithm).
Try it online (only works for matrices of size 2x2, and even then it usually times out if the absolute maximum of the integers in the matrix \$M\$ is too large).
Verify whether the output \$[A,B]\$ results in the input \$M\$.
Explanation:
Assumes that all integers used in the resulting matrices \$[A,B]\$ will be within the range \$[-\max(\lvert M\rvert), \max(\lvert M\rvert)]\$. If you know of any way to prove or disprove this assumption, let me know.
Note that there are of course matrix pairs \$[A, B]\$ with integers larger than those of the resulting matrix \$M\$, but such an \$M\$ will also have other possible pairs \$[A, B]\$ that lie within this min-max range. E.g. \$[A,B]=\$ [[[1,2],[3,4]], [[-1,-2],[-3,-3]]] will have \$M=\$ [[0,2],[-3,0]], but using this \$M\$ to find a possible \$[A,B]\$ will result in [[[3,2],[3,3]], [[2,2],[3,3]]], whose integers are all within the range \$[-3,3]\$.
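As a quick sanity check of that claim, here is a small Python sketch (not part of the 05AB1E answer; `matmul` and `commutator` are just illustrative helper names) that verifies both pairs produce the same \$M\$ and that the second pair stays within \$[-3,3]\$:

```python
# Illustration only: check that both example pairs give the same M = AB - BA,
# and that the second pair only uses integers within [-3, 3].

def matmul(A, B):
    # (A*B)[i][j] = sum_k A[i][k] * B[k][j]
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

def commutator(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(len(AB[0]))]
            for i in range(len(AB))]

pair1 = ([[1, 2], [3, 4]], [[-1, -2], [-3, -3]])
pair2 = ([[3, 2], [3, 3]], [[2, 2], [3, 3]])

print(commutator(*pair1))  # [[0, 2], [-3, 0]]
print(commutator(*pair2))  # [[0, 2], [-3, 0]]
print(all(-3 <= v <= 3 for mat in pair2 for row in mat for v in row))  # True
```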
Let me start with an explanation of 2FsN._`VεUYøεX*O}}}-, which calculates \$M = AB-BA\$ for a given matrix-pair \$[A,B]\$ (this is the same code as in the verifier TIO-link above).
2F # Loop 2 times:
s # Swap the top two values on the stack
# (so the matrix-pair is at the top again after the first iteration)
N._ # Rotate the pair of matrices the (0-based) loop-index amount of times
# (to reverse the order from [A,B] to [B,A] in the second iteration)
` # Push both matrices separately to the stack
V # Pop the top one, and store it in variable `Y`
ε # Map over the rows of the second one:
U # Pop the current row, and store it in variable `X`
Yø # Push matrix `Y` and zip/transpose it, swapping rows/columns
ε # Map over each row (or actually column, since we've transposed):
X* # Multiply each value at the same positions with row `X`
O # And sum this result
}}} # Close both nested maps and loop
- # And subtract the values at the same positions in the two matrices from each other
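For anyone who doesn't read 05AB1E, here is a rough Python rendering of what this snippet does (my own sketch, mirroring the structure of the golfed code; the function name `commutator` is made up for illustration):

```python
# Illustration only: a Python rendering of 2FsN._`VεUYøεX*O}}}-.

def commutator(pair):
    results = []
    for i in range(2):                            # 2F: loop twice
        A, B = pair if i == 0 else pair[::-1]     # N._: use [A,B] first, [B,A] second
        Bt = list(zip(*B))                        # ø: transpose, columns become rows
        results.append([[sum(x * y for x, y in zip(row, col)) for col in Bt]
                        for row in A])            # εX*O}: dot each row with each column
    AB, BA = results                              # first pass A*B, second pass B*A
    return [[a - b for a, b in zip(ra, rb)]       # -: subtract at matching positions
            for ra, rb in zip(AB, BA)]

A = [[1, 1, 1], [0, 1, 1], [0, 0, 1]]
B = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(commutator((A, B)))  # [[11, 12, 12], [7, 4, 0], [0, -7, -15]]
```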
As for the rest of the program, which creates all possible matrices and brute-forces over them:
Ä # Convert each integer in the (implicit) input-matrix `M` to its absolute value
à # Pop and push the flattened maximum of these absolute values of `M`
D( # Duplicate it, and negate the maximum
Ÿ # Pop both and push a list in the range [max, -max]
s # Swap to get the input matrix again
g # Pop and push its length to get the matrix dimension
ã # Create the cartesian product of the [max, -max]-list that many times
# (i.e. if the input-matrix has a dimension of 3, we'll now have all possible
# triplets of integers within the [max, -max]-range)
sgã # Do the same, to create all possible combinations of these triplets
# (so we now have all possible matrices of the same dimensions as the input-matrix,
# using integers within the [max, -max]-range)
ã # Cartesian product yet again (defaults to 2) to create all possible pairs of matrices
.Δ # Then find the first pair of matrices which is truthy for:
D # Duplicate the current pair
2FsN._`VεUYøεX*O}}}-
# Calculate M as mentioned above
Q # And check if it's equal to the (implicit) input-matrix
# (after which this matrix-pair is output implicitly as result)
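Putting it all together, the whole program is roughly equivalent to the following Python brute force (again an illustrative sketch with made-up helper names `commutator` and `decompose`; like the 05AB1E code it only finishes in reasonable time for tiny inputs):

```python
# Illustration only: a Python brute force following the same approach.
from itertools import product

def commutator(A, B):
    AB = [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]
    BA = [[sum(b * a for b, a in zip(row, col)) for col in zip(*A)] for row in B]
    return [[x - y for x, y in zip(r1, r2)] for r1, r2 in zip(AB, BA)]

def decompose(M):
    n = len(M)
    m = max(abs(v) for row in M for v in row)          # Äà: flattened maximum of |M|
    values = range(-m, m + 1)                          # D(Ÿ: the [max, -max] range (ascending here)
    rows = list(product(values, repeat=n))             # ã: all possible rows
    matrices = [list(map(list, mat))
                for mat in product(rows, repeat=n)]    # sgã: all possible matrices
    for A, B in product(matrices, repeat=2):           # ã.Δ: first pair with AB-BA == M
        if commutator(A, B) == M:
            return A, B

# The enumeration order differs from 05AB1E's (which counts down from max),
# so the first pair found may differ from the answer's output.
print(decompose([[0, 2], [-3, 0]]))
```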
Just to give an idea of how inefficient this brute-force approach is:
The given example input-matrix \$M=\$ [[11,12,12],[1,1,1],[1,2,3]] of the challenge description will create all possible matrix-pairs of the same 3x3 dimensions with integers in the range \$[-12,12]\$. This means the list of matrix-pairs will contain \$(((2\cdot12+1)^3)^3)^2=25^{18}=14551915228366851806640625\$ matrix-pairs.
And the given example input-matrix \$M=\$ [[-11811,-9700,-2937],[3,14,15],[2,71,82]] would result in \$(((2\cdot11811+1)^3)^3)^2=23623^{18}\$ matrix-pairs (a 79-digit number, roughly \$5\cdot10^{78}\$).
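For reference, these counts are easy to reproduce with a few lines of Python (my own sketch of the counting argument, not part of the answer; `pair_count` is a made-up helper):

```python
# Illustration only: reproducing the counts above.
def pair_count(m, n=3):
    # (2*m + 1) possible values per cell, n*n cells per matrix, two matrices per pair.
    return (((2 * m + 1) ** n) ** n) ** 2

print(pair_count(12))               # 14551915228366851806640625
print(len(str(pair_count(11811))))  # 79 digits, i.e. roughly 5*10**78
```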
"Math = winning :)" Well, my math skills suck, and I don't 100% understand the challenge because of it. ;) Mainly the examples, and what $AB$ and $BA$ actually means we should do with the matrices. I understand that $M=0$ implies $AB=BA$ and the diagonal from the top-left to bottom-right in $M$ sums to 0. I don't really understand what $AB$ is though. Without looking at the examples, seeing this I would assume we multiple the values in $A$ and $B$ at the same coordinates, but in that case $AB-BA$ is always 0. Could you perhaps explain the first 3x3 example to me? – Kevin Cruijssen – 2019-08-16T07:46:07.190
^ How would $A=$ [[1,1,1],[0,1,1],[0,0,1]] and $B=$ [[1,2,3],[4,5,6],[7,8,9]] become the matrix $M=$ [[11,12,12],[7,4,0],[0,-7,-15]] with $M=AB-BA$? EDIT: Actually, I now re-read that $AB=BA$ only applies for $M=0$. So for a matrix containing only 0s, any $A$ and $B$ would be a correct result. I still don't really understand the challenge for non-$M=0$ cases though. (Probably my bad math and matrices knowledge in general. ;p) – Kevin Cruijssen – 2019-08-16T07:48:42.587

@KevinCruijssen, $AB$ is matrix multiplication of $A$ and $B$. Matrix multiplication is non-commutative: in general $AB \neq BA$. Multiplication works on the basis (using more programming-languagy syntax than the usual syntax for matrix indexing) $$(AB)[x][y] = \sum_i A[x][i] B[i][y]$$ Subtraction is pointwise: $$(C - D)[x][y] = C[x][y] - D[x][y]$$ – Peter Taylor – 2019-08-16T07:55:45.750
@PeterTaylor Ah ok, your explanation, plus taking a close look at the Wikipedia Matrix Multiplication page, made it clearer, thanks! So $AB$ takes the dot product of the $i$'th row of $A$ and the $j$'th column of $B$ for every coordinate $i,j$ in the two matrices. E.g. calculating $AB-BA$ for the $11$ in the top-left of the matrix $M$ is $((1×1 + 1×4 + 1×7)-(1×1 + 2×0 + 3×0))$. – Kevin Cruijssen – 2019-08-16T08:38:19.370

@KevinCruijssen Yes that is correct! – flawr – 2019-08-16T10:28:46.200
@KevinCruijssen and anyone else who is unfamiliar with matrix multiplication: Matrix "multiplication" is really the composition of linear functions. Each matrix represents a linear function on vectors and the result of "multiplication" represents the function which is the composition of the two functions represented by the input matrices. This also explains why it is non-commutative. – Post Rock Garf Hunter – 2019-12-13T14:35:36.517
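To tie the comment thread back to the answer: here is a tiny Python transcription (my own, purely illustrative) of the element-wise definition from Peter Taylor's comment, checked against the 3x3 example discussed above:

```python
# Illustration only: (AB)[x][y] = sum_i A[x][i] * B[i][y], subtraction is pointwise,
# applied to the 3x3 example from the comments.

A = [[1, 1, 1], [0, 1, 1], [0, 0, 1]]
B = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
n = len(A)

AB = [[sum(A[x][i] * B[i][y] for i in range(n)) for y in range(n)] for x in range(n)]
BA = [[sum(B[x][i] * A[i][y] for i in range(n)) for y in range(n)] for x in range(n)]
M  = [[AB[x][y] - BA[x][y] for y in range(n)] for x in range(n)]

print(M)  # [[11, 12, 12], [7, 4, 0], [0, -7, -15]]
print((1*1 + 1*4 + 1*7) - (1*1 + 2*0 + 3*0))  # 11, the top-left entry worked out by hand
```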