
Every Hermitian matrix has an eigenvalue, without FTA?

It is true that every Hermitian matrix* has an eigenvalue. In particular, every real symmetric matrix has an eigenvalue. This follows immediately from the fundamental theorem of algebra even in the real case: the characteristic polynomial has at least one (a priori complex) root; that eigenvalue is real since the matrix is Hermitian.
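To make the FTA route concrete, here is a small numpy sketch (the specific matrix is just an illustration): find the a priori complex roots of the characteristic polynomial, and observe that for a real symmetric matrix they come out real.

```python
import numpy as np

# The FTA route made concrete: compute the (a priori complex) roots of
# the characteristic polynomial; for a real symmetric matrix they are real.
S = np.array([[2., 1.],
              [1., 2.]])        # real symmetric, hence Hermitian
coeffs = np.poly(S)             # characteristic polynomial: t^2 - 4t + 3
roots = np.roots(coeffs)
print(np.sort(roots.real))      # eigenvalues 1 and 3, both real
assert np.allclose(roots.imag, 0)
```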

Question: Is there a proof that doesn't rely on the fundamental theorem of algebra (FTA)?

I've looked at 4 different proofs of this version of the spectral theorem, all of which use the above fact and the FTA: Wikipedia's; PlanetMath's (apparently; it actually just states the above result without reference, saying it's "well-known"); a linear algebra book of mine (which similarly just assumes the existence of such an eigenvalue); and a random blog's.

In a similar vein, can the above fact be used to deduce FTA? If so that would give a nice intuitive reason why the proof of the above seems like it has to run through FTA.

*That is, n by n matrix A over R or C such that A* = A where * denotes the (conjugate) transpose.

Update:

Perhaps I didn't make myself clear. It's completely obvious to me why every Hermitian matrix has real eigenvalues; a proof of that fact is not at all what I was looking for.

Moreover it is not true that every matrix has eigenvalues. For instance, the real matrix

[ 1  1]
[-1  1]

has no (real) eigenvalues [of course it's basically trivial that a matrix has eigenvalues in its algebraic closure, but that's not the point].
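A quick numerical check (a numpy sketch) confirms that this matrix has no real eigenvalues: its characteristic polynomial is t^2 - 2t + 2, with roots 1 ± i.

```python
import numpy as np

# The matrix from the question. Its characteristic polynomial is
# t^2 - 2t + 2, whose roots 1 +/- i are not real.
M = np.array([[1., 1.],
              [-1., 1.]])
eig = np.linalg.eigvals(M)
print(eig)                                   # 1+1j and 1-1j
assert not np.any(np.isclose(eig.imag, 0))   # no real eigenvalue
```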

For real matrices of odd dimension, the existence of at least one eigenvalue follows from the intermediate value theorem, since then the characteristic polynomial has real coefficients and is of odd degree. So, in this case, the fundamental theorem is unnecessary. My question is, are there clever ways of dispensing with the fundamental theorem similarly in other cases? And if not, can the general existence of an eigenvalue be used to prove the fundamental theorem?
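The intermediate-value-theorem argument for odd dimension can be carried out numerically. In this sketch (matrix chosen arbitrarily; numpy assumed), det(A - tI) is a cubic with real coefficients, so it has opposite signs at large negative and large positive t, and bisection locates a real eigenvalue.

```python
import numpy as np

# For a real 3x3 matrix, det(A - t*I) is a cubic in t with real
# coefficients: it is positive for t very negative and negative for
# t very positive, so by the IVT it has a real root (an eigenvalue).
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [2., -3., 4.]])     # arbitrary real matrix of odd size

p = lambda t: np.linalg.det(A - t * np.eye(3))
lo, hi = -1e6, 1e6
assert p(lo) > 0 > p(hi)          # sign change guaranteed by odd degree
for _ in range(200):              # plain bisection
    mid = (lo + hi) / 2
    if p(mid) > 0:
        lo = mid
    else:
        hi = mid
print(lo)                         # a real eigenvalue of A
```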

Update 2:

@gianlino: Wonderful, thank you! I don't read French, but I was able to decipher enough to get it. Considering the quadratic was very clever.

For the reverse direction (which I doubt can be made to work, but which still interests me some), you say there aren't enough characteristic polynomials. Why? Naively, there are ~n^2/2 real degrees of freedom in choosing an n by n real symmetric matrix (and ~n^2 for a complex Hermitian one), whereas only n coefficients are needed to specify an arbitrary monic degree-n polynomial.

Figuring out which matrix corresponds to a given arbitrary polynomial is probably unworkable, though.

Update 3:

Yes, of course; I had even noted that these matrices would only give polynomials with real roots a few days ago when I posted this question, but it slipped my mind. Still, restricting the fundamental theorem to polynomials with only real roots gives a non-trivial result, I believe. Extending Hermitian to normal matrices would also entirely overcome the difficulty.
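On that reverse direction: the standard way to realize an arbitrary monic polynomial as a characteristic polynomial is a companion matrix, and the obstruction is precisely that companion matrices are generally not Hermitian (or even normal). A numpy sketch (the `companion` helper is my own illustration, not from the thread):

```python
import numpy as np

# The companion matrix of a monic polynomial has that polynomial as its
# characteristic polynomial, so "every matrix has a complex eigenvalue"
# for *arbitrary* matrices would yield the FTA. But companion matrices
# are generally not Hermitian, which is the obstruction discussed above.
def companion(coeffs):
    """Companion matrix of t^n + c[0] t^(n-1) + ... + c[n-1]."""
    n = len(coeffs)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)           # ones on the subdiagonal
    C[:, -1] = -np.array(coeffs)[::-1]   # last column carries the coefficients
    return C

C = companion([0.0, 0.0, 1.0])           # characteristic polynomial t^3 + 1
print(np.allclose(C, C.T))               # False: not symmetric/Hermitian
```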

2 Answers

  • 10 years ago
    Favorite Answer

    Yes one can totally dispense with the FTA.

    Here is a proof (in French): go to page 40.

    http://lm250.fr/LM223polyter.pdf

    However, using this to prove the FTA is another story, since characteristic polynomials of Hermitian matrices are never going to "generate" all polynomials in any appropriate sense.

    Edit: Obviously you'll get all polynomials with only real roots. How can this help you with the others?

    Edit: But what statement can you make about polynomials that have only real roots, and how would you relate it to Hermitian matrices?

  • 10 years ago

    We define a Hermitian matrix A as one satisfying A* = A, where * is as given by your footnote in the question.

    Certainly every Hermitian matrix has an eigenvalue; can you think of a matrix that has no eigenvalues? That statement does not make sense to me. A matrix does not have to be Hermitian to have eigenvalues; even a matrix of zeroes has eigenvalues (albeit they are all zero themselves). And you would have to fall back on the FTA for completeness anyway: what other crutch can you use to state that roots of an equation exist?

    Did you want to ask whether all eigenvalues of a Hermitian matrix are real? For sure. In fact I kind of lol'd when I read "it actually just states the above result without reference, saying it's 'well-known.'" It is unfortunate that you cannot find an actual proof, but it surely is well-known that the eigenvalues of Hermitian matrices are strictly real. This should have been a homework problem for you at some point in your education (it is not something people usually publish in books; it is left for the student to prove as an exercise).

    Proof that all eigenvalues of a Hermitian matrix are real:

    The eigenvalue problem is defined by the generic equation Ax = lambda x, where lambda denotes eigenvalues, and x represent corresponding eigenvectors.

    The proof that all eigenvalues are real is furnished by dotting the eigenvalue equation on the left and on the right (pre- or post-dot multiplication) and using the key defining property of Hermitian matrices.

    I will use a period to mean dot multiplication.

    Ax = lambda x

    x . (Ax) = x . (lambda x)

    x . (Ax) = lambda* (x . x) ..........Eqn. (1)

    where lambda* is the complex conjugate of lambda, and was furnished by the definition of dot multiplication: x . y = x^T y*, where ^T means transpose, and * means complex conjugate

    Now compare this with dotting on the right side

    Ax = lambda x

    (Ax) . x = (lambda x ) . x

    (Ax) . x = lambda (x . x), since the dot product is linear in its first argument

    Now, (Ax) . x = x . (A*x) = x . (Ax) since A* = A given A is Hermitian, so the above becomes

    x . (Ax) = lambda (x . x) ..........Eqn. (2)

    Subtracting Eqn. (2) from Eqn. (1) gives

    0 = (lambda* - lambda) (x . x)

    Since x . x = 0 is not generally true (indeed x . x > 0 for an eigenvector x, which is nonzero by definition), this must mean that

    lambda* - lambda = 0

    lambda* = lambda

    which means Im [ lambda ] = 0, i.e. lambda and the eigenvalues are all real.
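    The argument above can be checked numerically. A numpy sketch using the same dot convention as the proof, x . y = x^T y* (the example matrix is my own):

    ```python
    import numpy as np

    # Same convention as the proof: dot(x, y) = x^T conj(y).
    def dot(x, y):
        return x @ np.conj(y)

    A = np.array([[2, 1 - 1j],
                  [1 + 1j, 3]])          # Hermitian: A equals its conjugate transpose
    assert np.allclose(A, A.conj().T)

    lam, V = np.linalg.eig(A)
    x = V[:, 0]                          # eigenvector with A x = lam[0] x

    # Eqn. (1): dot(x, Ax) = conj(lambda) * dot(x, x)
    # Eqn. (2): dot(Ax, x) = lambda * dot(x, x)
    # Hermitian-ness equates the two left-hand sides, forcing conj(lambda) = lambda.
    assert np.isclose(dot(x, A @ x), np.conj(lam[0]) * dot(x, x))
    assert np.isclose(dot(A @ x, x), lam[0] * dot(x, x))
    print(np.isclose(lam[0].imag, 0))    # True: the eigenvalue is real
    ```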

    Is this something you wanted to know?
