Building on what we've learned in the first part of quantum mechanics, we can now explore more complex concepts in quantum theory. In this second part, we will cover the theory that allows us to perform more advanced calculations in quantum mechanics.
In studying complex multi-state quantum systems, numerical methods are often the only practical way to solve many problems, as solving the Schrödinger equation by hand becomes impossible. Many of these numerical methods rely on the Heisenberg picture of quantum mechanics, also known as matrix mechanics.
In matrix mechanics, we describe the system not through its total wavefunction, but through its operators. The quantum state $|\Psi\rangle$ stays constant; the operators (energy, momentum, etc.) are what evolve through time. In particular, the operators can be expressed in specific bases (“bases” is the plural of “basis”). For discrete operators, as we have seen for the spin matrices, we can choose a discrete basis. In our case, the $\hat{S}_z$ operator can be expressed in the basis of the two spin states, as given by:

$$\hat{S}_z = \frac{\hbar}{2}\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$$
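To make this concrete, here is a minimal numpy sketch (the ordering of the basis as $\{|{\uparrow}\rangle, |{\downarrow}\rangle\}$ and the SI value of $\hbar$ are choices of the example) showing that the matrix elements of $\hat{S}_z$ in the spin basis reproduce the eigenvalues $\pm\hbar/2$:

```python
import numpy as np

hbar = 1.054571817e-34  # reduced Planck constant (J*s)

# S_z in the basis {|up>, |down>}: a 2x2 diagonal matrix
Sz = (hbar / 2) * np.array([[1, 0],
                            [0, -1]])

up = np.array([1, 0])    # |up>   = (1, 0)
down = np.array([0, 1])  # |down> = (0, 1)

# Matrix elements <s|S_z|s'> recover the eigenvalues +-hbar/2
print(up @ Sz @ up)      # +hbar/2
print(down @ Sz @ down)  # -hbar/2
print(up @ Sz @ down)    # 0 (off-diagonal element vanishes)
```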
Meanwhile, for continuous (differential) operators, we can choose a functional basis. For instance, we can use the Legendre, Laguerre, or Hermite polynomials, each of which forms a complete basis set, to find the matrix representation of the operator. In this section, we will give a brief introduction to determining the matrix representation of various operators, based on the book Basic Theory of Lasers and Masers by Jacques Vanier.
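As a quick sanity check that such families really do form orthonormal sets, the sketch below numerically integrates products of normalized Hermite functions, $\psi_n(x) = H_n(x)\,e^{-x^2/2}/\sqrt{\sqrt{\pi}\,2^n n!}$ (the grid range and resolution are illustrative choices):

```python
import numpy as np
from numpy.polynomial.hermite import Hermite
from math import factorial, pi, sqrt

# Check <psi_m|psi_n> = delta_mn for the normalized Hermite functions
x = np.linspace(-10, 10, 20001)
dx = x[1] - x[0]

def psi(n):
    # H_n(x) exp(-x^2/2), normalized so the functions are orthonormal
    norm = sqrt(sqrt(pi) * 2**n * factorial(n))
    return Hermite.basis(n)(x) * np.exp(-x**2 / 2) / norm

for m in range(3):
    for n in range(3):
        overlap = np.sum(psi(m) * psi(n)) * dx  # simple numerical integral
        print(f"<psi_{m}|psi_{n}> = {overlap:+.6f}")
# Diagonal entries come out ~1, off-diagonal entries ~0, as required.
```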
In the time-independent regime, the state of a quantum system is represented by its state vector $|\Psi\rangle$, which can be expanded in a set of eigenstates $|\psi_n\rangle$ as $|\Psi\rangle = \sum_n c_n |\psi_n\rangle$. Each eigenstate satisfies the eigenvalue equation $\hat{H}|\psi_n\rangle = E_n|\psi_n\rangle$, where $\hat{H}$ is the Hamiltonian and $E_n$ is the energy eigenvalue of that particular eigenstate. We may extract the energy eigenvalues $E_n$ as follows. First, we multiply both sides by a bra $\langle\psi_m|$:
Now, the eigenstates $|\psi_n\rangle$ can theoretically be expressed in any basis, but we typically choose a complete orthonormal basis. Such bases include the Fourier basis as well as the Legendre, Laguerre, and Hermite polynomials. The specific basis doesn't matter; what matters is that a complete and orthonormal basis satisfies $\langle\psi_m|\psi_n\rangle = \delta_{mn}$. Therefore, we have $\langle\psi_m|E_n|\psi_n\rangle = E_n\langle\psi_m|\psi_n\rangle$ (since $E_n$ is a constant and can be factored out of the expression). Given that we have an orthonormal basis, $E_n\langle\psi_m|\psi_n\rangle$ is only nonzero when $m = n$, in which case we have $E_n\langle\psi_n|\psi_n\rangle = E_n$, and thus:
$$
\begin{aligned}
\langle \psi_m | \hat{H} | \psi_n \rangle &= \langle \psi_m | E_n | \psi_n \rangle = E_n \langle \psi_m | \psi_n \rangle \\
\langle \psi_m | \hat{H} | \psi_n \rangle &= E_n \delta_{mn} \qquad (\text{zero if } m \neq n) \\
\langle \psi_n | \hat{H} | \psi_n \rangle &= E_n
\end{aligned}
$$
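This statement, that sandwiching the Hamiltonian between its own eigenstates produces a diagonal matrix of energies, is easy to verify numerically. A minimal numpy sketch, using a random Hermitian matrix as a stand-in Hamiltonian:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random 4x4 Hermitian matrix stands in for the Hamiltonian
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (M + M.conj().T) / 2

# Columns of V are the eigenstates |psi_n>, w holds the eigenvalues E_n
w, V = np.linalg.eigh(H)

# <psi_m|H|psi_n> = E_n * delta_mn: sandwiching H between eigenstates
# yields a diagonal matrix with the energies on the diagonal
sandwich = V.conj().T @ H @ V
print(np.round(sandwich, 10).real)  # diagonal = w, off-diagonal ~ 0
print(w)
```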
This is a diagonal matrix, so one might wonder: why bother writing this when each side can just be written as a vector multiplied by an identity matrix? The reason is that if we don't know the eigenstates, we can always choose to expand our quantum state $|\Psi\rangle$ in some other complete and orthonormal basis instead, which we will refer to as $|\phi_n\rangle$. As both bases are complete and orthonormal, we can then express each state of our new basis $|\phi_n\rangle$ in terms of our eigenstate basis $|\psi_n\rangle$ as a linear sum:
$$|\phi_n\rangle = \sum_k a_{nk}\, |\psi_k\rangle$$

Where all the $a_{nk}$ are constant coefficients. We can write this in matrix form as:
$$
\underbrace{\begin{pmatrix} |\phi_1\rangle \\ |\phi_2\rangle \\ |\phi_3\rangle \\ \vdots \\ |\phi_n\rangle \end{pmatrix}}_{\text{new basis}}
=
\underbrace{\begin{pmatrix}
a_{11} & a_{12} & a_{13} & \dots & a_{1k} \\
a_{21} & a_{22} & a_{23} & \dots & a_{2k} \\
a_{31} & a_{32} & a_{33} & \dots & a_{3k} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & a_{n3} & \dots & a_{nk}
\end{pmatrix}}_{\text{matrix } A_{nk}}
\underbrace{\begin{pmatrix} |\psi_1\rangle \\ |\psi_2\rangle \\ |\psi_3\rangle \\ \vdots \\ |\psi_k\rangle \end{pmatrix}}_{\text{old basis}}
$$
We can extract the components of $A_{nk}$ using another orthogonality argument. If we take the expansion of our new basis states $|\phi_n\rangle$ and multiply by a bra $\langle\psi_j|$, we have:

$$\langle \psi_j | \phi_n \rangle = \sum_k a_{nk}\, \langle \psi_j | \psi_k \rangle = \sum_k a_{nk}\, \delta_{jk} = a_{nj}$$
Thus we have $A_{nk} = \langle \psi_k | \phi_n \rangle$. We may also derive an expression for the inverse, $A^{-1}$. Recall that the inverse of $A$ must satisfy $AA^{-1} = I$, where $I$ is the identity matrix, and that the identity can be written as the completeness relation $I = \sum_n |\phi_n\rangle\langle\phi_n|$. Then, we have:

$$|\psi_k\rangle = I\,|\psi_k\rangle = \sum_n |\phi_n\rangle\langle\phi_n|\psi_k\rangle = \sum_n \langle\phi_n|\psi_k\rangle\,|\phi_n\rangle$$
Where we were able to move the inner product $\langle\phi_n|\psi_k\rangle$ in front of the ket, since an inner product is just a scalar, and we were able to pull $|\psi_k\rangle$ into the sum since the sum does not run over the index $k$ (so $|\psi_k\rangle$ can essentially be treated as a constant in the sum). Now, writing the inverse transformation as $|\psi_k\rangle = \sum_n (A^{-1})_{kn}|\phi_n\rangle$ and taking the inner product with the bra $\langle\phi_m|$, we have:

$$\langle\phi_m|\psi_k\rangle = \sum_n (A^{-1})_{kn}\,\langle\phi_m|\phi_n\rangle = \sum_n (A^{-1})_{kn}\,\delta_{mn} = (A^{-1})_{km}$$
Where again, since $|\psi_k\rangle$ is not summed over, we were able to pull it out of the sum, and we used orthogonality to collapse the sum into a single term. Thus, the components of the inverse are given by $(A^{-1})_{km} = \langle\phi_m|\psi_k\rangle$.
Having computed $A$ and $A^{-1}$, we will now show that our change of basis actually leads to a very nice expression for the matrix representation of the Hamiltonian. The Hamiltonian in our new basis, which we write as $\hat{H}'$, is given by $\hat{H}' = A\hat{H}A^{-1}$ (this is a standard result of linear algebra). If we then substitute our expressions for $A$ and $A^{-1}$, we have:

$$\hat{H}'_{mn} = \sum_{j,k} A_{mj}\,\langle\psi_j|\hat{H}|\psi_k\rangle\,(A^{-1})_{kn} = \sum_{j,k} \langle\psi_j|\phi_m\rangle\,E_k\,\delta_{jk}\,\langle\phi_n|\psi_k\rangle = \sum_k \langle\psi_k|\phi_m\rangle\,E_k\,\langle\phi_n|\psi_k\rangle$$
Where the last step comes from $\langle\psi_n|\hat{H}|\psi_n\rangle = E_n$ (the index label does not matter at this point, because only one index appears in the expression). Thus we find that:

$$\hat{H}' = A\,\operatorname{diag}(E_1, E_2, E_3, \dots)\,A^{-1}$$

That is, the matrix of the Hamiltonian in any complete orthonormal basis is just the diagonal matrix of its energy eigenvalues, conjugated by the change-of-basis matrix $A$.
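The whole chain above can be verified numerically. The sketch below (Python/numpy; the 4-dimensional space and the randomly generated bases are illustrative choices) builds $A_{nk} = \langle\psi_k|\phi_n\rangle$ and $(A^{-1})_{kn} = \langle\phi_n|\psi_k\rangle$ for two orthonormal bases, checks $AA^{-1} = I$, and confirms that $\hat{H}' = A\,\operatorname{diag}(E_n)\,A^{-1}$ has the same eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 4

# Hermitian Hamiltonian and its eigenbasis |psi_k> (columns of V)
M = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (M + M.conj().T) / 2
E, V = np.linalg.eigh(H)

# A second orthonormal basis |phi_n>: orthonormalize a random complex matrix
phi, _ = np.linalg.qr(rng.normal(size=(dim, dim))
                      + 1j * rng.normal(size=(dim, dim)))

# A_nk = <psi_k|phi_n> and (A^-1)_kn = <phi_n|psi_k>
A = np.array([[V[:, k].conj() @ phi[:, n] for k in range(dim)]
              for n in range(dim)])
Ainv = np.array([[phi[:, n].conj() @ V[:, k] for n in range(dim)]
                 for k in range(dim)])

print(np.allclose(A @ Ainv, np.eye(dim)))        # True: A A^-1 = I

# H' = A diag(E) A^-1 is a similarity transform: eigenvalues are unchanged
H_prime = A @ np.diag(E) @ Ainv
print(np.sort(np.linalg.eigvals(H_prime).real))  # matches E below
print(E)
```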
Let us now demonstrate the same in the functional picture. Consider a system described by a wavefunction $\psi(\mathbf{r})$. By the postulates of quantum mechanics, the wavefunction may be expanded in any complete and orthonormal set of functions $\phi_n(\mathbf{r})$, such that:

$$\psi(\mathbf{r}) = \sum_n c_n\,\phi_n(\mathbf{r})$$

Substituting this expansion into the Schrödinger equation and projecting onto $\phi_m(\mathbf{r})$ then gives the matrix elements of the Hamiltonian in this basis:

$$A_{mn} = \int \phi_m^*(\mathbf{r})\,\hat{H}\,\phi_n(\mathbf{r})\,d^3r$$
Now, $A_{mn}$ is not necessarily a diagonal matrix, so it does not directly give the energy eigenvalues. However, if we diagonalize it, we are left with the matrix $E_{mn}$, which is diagonal (that is, $E_{mn} = 0$ for all $m \neq n$) and whose diagonal entries $E_{nn} = E_n$ are the energy eigenvalues.
The matrix operator approach therefore condenses the difficult problem of solving the Schrödinger equation into the more straightforward problem of finding the matrix that diagonalizes the Hamiltonian in a particular basis; from there, we can simply read off the energy eigenvalues from the diagonal.
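To make the recipe concrete, here is a sketch (Python/numpy, assuming natural units $\hbar = m = 1$ and an illustrative perturbation strength) that builds the Hamiltonian matrix $A_{mn}$ for a particle in a box with a linear potential, using the box eigenfunctions as the complete orthonormal basis, and then diagonalizes it:

```python
import numpy as np

# Matrix mechanics for a particle in a box with a linear potential
# (natural units hbar = m = 1). The box eigenfunctions
# phi_n(x) = sqrt(2/L) sin(n pi x / L) form a complete orthonormal basis.
L, lam, N = 1.0, 5.0, 50       # box width, perturbation strength, basis size
x = np.linspace(0.0, L, 2001)
dx = x[1] - x[0]

phi = np.array([np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)
                for n in range(1, N + 1)])

# Kinetic part is diagonal in this basis; the potential part
# A_mn = <phi_m| lam*x |phi_n> is computed by numerical integration
kinetic = np.diag([(n * np.pi / L) ** 2 / 2.0 for n in range(1, N + 1)])
potential = (phi * (lam * x)) @ phi.T * dx
A = kinetic + potential        # Hamiltonian matrix; not diagonal

E = np.linalg.eigvalsh(A)      # diagonalize and read off the eigenvalues
print(E[:4])                   # lowest few energy levels
```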
We will now use the matrix representation approach to solve the quantum harmonic oscillator, which will act as our toy model. We want to obtain a matrix representation for the Hamiltonian of the quantum harmonic oscillator and find its energy eigenvalues.
The well-known Hamiltonian for the quantum harmonic oscillator is given by:

$$\hat{H} = \frac{\hat{p}^2}{2m} + \frac{1}{2}m\omega^2\hat{x}^2$$
We will now need to pick a basis to be able to obtain its matrix representation. In theory, when we don't know the precise eigenstates of the Hamiltonian, any set of basis functions would do (as long as they are complete and orthonormal) - but luckily for us, we already know the eigenstates of the Hamiltonian. So, for demonstrative purposes, it is easiest to choose the basis of the eigenstates of the Hamiltonian, which we can write as $|\psi_0\rangle, |\psi_1\rangle, |\psi_2\rangle, \dots, |\psi_n\rangle$. Then, we have $E_n = \langle\psi_n|\hat{H}|\psi_n\rangle$. But recall that for the quantum harmonic oscillator, $\hat{H}|\psi_n\rangle = \hbar\omega\left(n + \frac{1}{2}\right)|\psi_n\rangle$. Thus, $\langle\psi_n|\hat{H}|\psi_n\rangle$ is given by:

$$E_n = \langle\psi_n|\hat{H}|\psi_n\rangle = \hbar\omega\left(n + \frac{1}{2}\right)$$

so the matrix representation of $\hat{H}$ in its own eigenbasis is diagonal:

$$\hat{H} = \hbar\omega \begin{pmatrix} \tfrac{1}{2} & 0 & 0 & \dots \\ 0 & \tfrac{3}{2} & 0 & \dots \\ 0 & 0 & \tfrac{5}{2} & \dots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$
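As a cross-check, we can also diagonalize this Hamiltonian numerically without using the eigenbasis at all: the sketch below (Python/numpy, natural units $\hbar = m = \omega = 1$, with grid size and extent as illustrative choices) discretizes $\hat{H}$ on a position grid and recovers $E_n = n + \tfrac{1}{2}$:

```python
import numpy as np

# Diagonalize the harmonic-oscillator Hamiltonian on a position grid
# (natural units hbar = m = omega = 1) and compare against E_n = n + 1/2
N = 1000
x = np.linspace(-10, 10, N)
dx = x[1] - x[0]

# Second-order finite-difference approximation of the kinetic term
# -1/2 d^2/dx^2
main = np.full(N, 1.0 / dx**2)
off = np.full(N - 1, -0.5 / dx**2)
T = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

H = T + np.diag(0.5 * x**2)      # H = p^2/2 + x^2/2 on the grid

E = np.linalg.eigvalsh(H)
print(E[:5])                     # ~ [0.5, 1.5, 2.5, 3.5, 4.5]
```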
Let us now show that even though this matrix representation method for the quantum harmonic oscillator is a “toy model”, it can still predict real-world results. The transition energy $\Delta E$ from our result is the difference between the energies of levels $n_1$ and $n_2$, and thus:

$$\Delta E = E_{n_2} - E_{n_1} = \hbar\omega\,(n_2 - n_1)$$
The angular frequency of the vibrations, $\omega$, can be found from $\omega = \sqrt{k/\mu}$, where $\mu$ is the reduced mass of the molecule and $k$ is its force constant. Let us compute the vibrational transitions of carbon dioxide, which, although not strictly speaking a diatomic molecule, can be approximately treated as one. The reduced mass of the triatomic carbon dioxide molecule is built from the mass $m_C$ of the carbon atom and the mass $m_O$ of the oxygen atom[5]. The force constant of CO2 is approximately $k \approx 1680\ \text{N/m}$[4], and thus $\omega \approx 2.5147 \times 10^{14}\ \text{rad/s}$. Substituting these values, we find that the spectral lines are given by:

$$\Delta E = \hbar\omega\,(n_2 - n_1) \approx \left(2.65 \times 10^{-20}\ \text{J}\right)(n_2 - n_1) \approx 0.166\ \text{eV} \times (n_2 - n_1)$$
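As a closing sanity check, the following sketch (Python, physical constants in SI units) recomputes these quantities from the values quoted above; the reduced mass is backed out from $\mu = k/\omega^2$ as a consistency check, and the transition energy and corresponding photon wavelength follow from $\Delta E = \hbar\omega$:

```python
import numpy as np

hbar = 1.054571817e-34   # reduced Planck constant (J*s)
c = 2.99792458e8         # speed of light (m/s)
k = 1680.0               # force constant quoted in the text (N/m)
omega = 2.5147e14        # angular frequency quoted in the text (rad/s)

# Reduced mass implied by omega = sqrt(k/mu), as a consistency check
mu = k / omega**2
print(f"mu = {mu:.3e} kg")                       # ~2.66e-26 kg

# Transition energy between adjacent levels: Delta E = hbar*omega
dE = hbar * omega
print(f"Delta E = {dE:.3e} J "
      f"= {dE / 1.602176634e-19:.3f} eV")        # ~0.166 eV

# Corresponding photon wavelength: lambda = 2*pi*c / omega
lam = 2 * np.pi * c / omega
print(f"lambda = {lam * 1e6:.2f} micrometres")   # infrared, ~7.5 um
```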