A "matrix polynomial" is a polynomial in a variable, say z, with matrix coefficients. Analogous to finding roots (zeros) of polynomials, and analogous to finding eigenvalues of matrices, one frequently wants to find the values of z at which the matrix polynomial is singular. One standard method for doing so is to convert the matrix polynomial into a (usually larger) generalized eigenvalue problem---this process is called "linearization". There are infinitely many possible linearizations, some better than others; for instance, if the original polynomial is expressed in a basis other than the usual monomial basis, it is better to linearize directly than to convert first to the monomial basis. But while some of those linearizations were known to work, and had been proved to be at least strong local linearizations, for others there was no direct proof of strict equivalence. By using computer algebra tools creatively on small scalar examples, we were able to detect patterns that allowed us to give direct proofs for Lagrange bases and Bernstein bases. Both the results and the method may be of interest to the reader.
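To make "linearization" concrete, here is a minimal sketch (not taken from the paper) using the standard first companion form in the monomial basis for a quadratic matrix polynomial. The pencil it builds is singular at exactly the values of z where the original polynomial is singular, so those values drop out of a generalized eigenvalue solve; the function name `companion_linearize` is our own, purely for illustration.

```python
import numpy as np
from scipy.linalg import eig

def companion_linearize(A0, A1, A2):
    """First companion linearization of the quadratic matrix
    polynomial P(z) = A2 z^2 + A1 z + A0.

    Returns a pencil (X, Y) with det(z X + Y) = 0 exactly when
    det P(z) = 0, so the eigenvalues of the generalized problem
    -Y v = z X v are the values of z at which P is singular.
    """
    n = A0.shape[0]
    I = np.eye(n)
    Z = np.zeros((n, n))
    X = np.block([[A2, Z],
                  [Z,  I]])
    Y = np.block([[A1, A0],
                  [-I, Z]])
    return X, Y

# Scalar sanity check: p(z) = z^2 - 3z + 2 = (z - 1)(z - 2),
# written with 1x1 "matrix" coefficients.
A0 = np.array([[2.0]])
A1 = np.array([[-3.0]])
A2 = np.array([[1.0]])
X, Y = companion_linearize(A0, A1, A2)
z, _ = eig(-Y, X)        # generalized eigenvalues of the pencil
print(np.sort(z.real))   # approximately [1. 2.]
```

The doubling in size (a quadratic with n-by-n coefficients yields a 2n-by-2n pencil) is the "usually larger" trade-off mentioned above; the paper's point is that analogous pencils can be built directly from Lagrange or Bernstein coefficients, without first converting to this monomial form.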

## Why is it important?

The proofs of strict equivalence have "comfort" value only: the equivalence was already known, and these direct proofs merely provide independent confirmation. That the method worked---that computation on small scalar examples revealed patterns we could later prove in general---is perhaps more important, as an example of what is known as "computational discovery" or "experimental mathematics".