Project Elara

Quantum mechanics, Part 2

Building on what we learned in the first part of quantum mechanics, we can now explore more complex concepts in quantum theory. In this second part, we will cover the theory that allows us to do more advanced calculations in quantum mechanics.

Introduction to matrix mechanics

In studying complex multi-state quantum systems, numerical methods are often the only way to solve a variety of problems, as solving the Schrödinger equation by hand becomes impossible. One important feature of these numerical methods is that they utilize the Heisenberg picture of quantum mechanics, also known as matrix mechanics.

In matrix mechanics, we describe the system not through its total wavefunction, but through its operators. The quantum state $|\Psi\rangle$ stays constant; the operators (energy, momentum, etc.) are what evolve through time. In particular, the operators can be expressed in specific bases ("bases" being the plural of "basis"). For discrete operators, as we have seen for the spin matrices, we can choose a discrete basis. In our case, the $\hat S_z$ operator can be expressed using the basis of the two spin states, as given by:

$$\chi_{z^{+}} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad \chi_{z^{-}} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$$
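As a quick numerical check (a sketch in units where we simply set $\hbar = 1$), we can verify that the $\hat S_z$ matrix acts on these basis vectors with eigenvalues $\pm \hbar/2$:

```python
import numpy as np

hbar = 1.0  # natural units for simplicity

# Basis vectors for the two spin states along z
chi_z_plus = np.array([1.0, 0.0])
chi_z_minus = np.array([0.0, 1.0])

# S_z in this basis is diagonal, with eigenvalues +hbar/2 and -hbar/2
S_z = (hbar / 2) * np.array([[1.0, 0.0],
                             [0.0, -1.0]])

# Each basis vector is an eigenvector of S_z
print(S_z @ chi_z_plus)   # = +hbar/2 * chi_z_plus
print(S_z @ chi_z_minus)  # = -hbar/2 * chi_z_minus
```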

Meanwhile, for continuous (differential) operators, we can choose a functional basis. For instance, we can use the Legendre, Laguerre, and Hermite polynomials, all of which form a complete basis set, to find the matrix representation of the operator. Within this section, we will give a brief introduction to determining the matrix representation of various operators, based on the book Basic Theory of Lasers and Masers by Jacques Vanier.

The Hamiltonian in general bases

In the time-independent regime, the state of a quantum system is represented by its state-vector $|\Psi\rangle$, which can be expanded in a set of eigenstates $|\psi_n\rangle$ as $|\Psi\rangle = \displaystyle \sum_n c_n |\psi_n\rangle$. Each eigenstate satisfies the eigenvalue equation $\hat H |\psi_n \rangle = E_n |\psi_n\rangle$, where $\hat H$ is the Hamiltonian and $E_n$ is the energy eigenvalue of a particular eigenstate. We may extract the energy eigenvalues $E_n$ as follows. First, we multiply both sides by a bra $\langle\psi_m|$:

$$\langle \psi_m | \hat H |\psi_n\rangle = \langle \psi_m | E_n |\psi_n\rangle$$

Now, the eigenstates $|\psi_n\rangle$ can theoretically be expressed in any basis, but we typically choose a complete orthonormal basis. Such bases include the Fourier basis as well as the Legendre, Laguerre, and Hermite polynomials. The specific basis doesn't matter; what matters is that a complete and orthonormal basis satisfies $\langle \psi_m | \psi_n \rangle = \delta_{mn}$. Therefore, we have $\langle \psi_m | E_n | \psi_n \rangle = E_n \langle \psi_m |\psi_n\rangle$ (since $E_n$ is a constant and can be factored out of the expression). Given that we have an orthonormal basis, $E_n \langle \psi_m |\psi_n\rangle$ is only nonzero when $m = n$, in which case we have $E_n \langle \psi_n |\psi_n \rangle = E_n$, and thus:

$$\begin{align} \langle \psi_m | \hat H |\psi_n\rangle &= \langle \psi_m | E_n |\psi_n\rangle \\ &= E_n \langle \psi_m | \psi_n \rangle \\ &= E_n\, \delta_{mn} \\ \langle \psi_m | \hat H |\psi_n\rangle &= E_n \delta_{mn} \quad (\text{zero if } m \neq n) \\ \langle \psi_n | \hat H |\psi_n\rangle &= E_n \end{align}$$

Thus, to find $E_n$, we "only" need to calculate $\langle \psi_n | \hat H |\psi_n\rangle$, which can also be written in terms of the Schrödinger formalism as:

$$E_n = \int_{-\infty}^\infty \psi_n^*(x)\, \hat H\, \psi_n(x) \, dx$$
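To make this concrete, here is a small numerical sketch. It assumes, for illustration, the harmonic-oscillator Hamiltonian $\hat H = -\frac{1}{2}\frac{d^2}{dx^2} + \frac{1}{2}x^2$ in natural units ($\hbar = m = \omega = 1$), whose normalized ground state is $\psi_0(x) = \pi^{-1/4} e^{-x^2/2}$; evaluating the integral above numerically should recover the exact answer $E_0 = 1/2$.

```python
import numpy as np

# Grid for the numerical integral
x = np.linspace(-8.0, 8.0, 4001)
dx = x[1] - x[0]

# Normalized harmonic-oscillator ground state (natural units)
psi0 = np.pi ** (-0.25) * np.exp(-x ** 2 / 2)

# Apply H = -1/2 d^2/dx^2 + x^2/2 using a central finite difference
d2psi = np.zeros_like(psi0)
d2psi[1:-1] = (psi0[2:] - 2 * psi0[1:-1] + psi0[:-2]) / dx ** 2
H_psi = -0.5 * d2psi + 0.5 * x ** 2 * psi0

# E_0 = ∫ psi0* H psi0 dx  (psi0 is real, so psi0* = psi0)
E0 = np.sum(psi0 * H_psi) * dx
print(E0)  # ≈ 0.5
```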

But returning to the matrix representation: we may write the set of eigenvalues $E_n$ with the following matrix equation:

$$\begin{pmatrix} \langle \psi_1 | \hat H | \psi_1 \rangle & 0 & 0 & \dots & 0 \\ 0 & \langle \psi_2 | \hat H | \psi_2 \rangle & 0 & \dots & 0 \\ 0 & 0 & \langle \psi_3 | \hat H | \psi_3 \rangle & \dots & 0 \\ 0 & 0 & 0 & \ddots & \vdots \\ 0 & 0 & 0 & \dots & \langle \psi_n | \hat H | \psi_n \rangle \end{pmatrix} = \begin{pmatrix} E_1 & 0 & 0 & \dots & 0 \\ 0 & E_2 & 0 & \dots & 0 \\ 0 & 0 & E_3 & \dots & 0 \\ 0 & 0 & 0 & \ddots & \vdots \\ 0 & 0 & 0 & \dots & E_n \end{pmatrix}$$

This is a diagonal matrix, so one might wonder: why bother writing this when each side can just be written as a vector multiplied by the identity matrix? The reason is that if we don't know the eigenstates, we can always choose to expand our quantum state $|\Psi\rangle$ in some other complete and orthonormal basis instead, which we will refer to as $|\phi_n\rangle$ (since we can always choose any basis in which to write out our state-vector). As both bases are complete and orthonormal, we can then express a state in our new basis $|\phi_n\rangle$ in terms of our eigenstate basis $|\psi_n\rangle$ as a linear sum:

$$\begin{align} |\phi_n\rangle &= \sum_k A_{nk} |\psi_k \rangle \\ &= A_{n1} |\psi_1\rangle + A_{n2} |\psi_2 \rangle + A_{n3}|\psi_3\rangle + \dots + A_{nk} |\psi_k\rangle \end{align}$$

Where all the $A_{nk}$'s are constant coefficients. We can write this in matrix form as:

$$\underbrace{\begin{pmatrix} |\phi_1\rangle \\ |\phi_2\rangle \\ |\phi_3\rangle \\ \vdots \\ |\phi_n\rangle \end{pmatrix}}_\text{new basis} = \underbrace{\begin{pmatrix} A_{11} & A_{12} & A_{13} & \dots & A_{1k} \\ A_{21} & A_{22} & A_{23} & \dots & A_{2k} \\ A_{31} & A_{32} & A_{33} & \dots & A_{3k} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ A_{n1} & A_{n2} & A_{n3} & \dots & A_{nk} \end{pmatrix}}_{\text{matrix } A}\, \underbrace{\begin{pmatrix} |\psi_1\rangle \\ |\psi_2\rangle \\ |\psi_3\rangle \\ \vdots \\ |\psi_k \rangle \end{pmatrix}}_\text{old basis}$$

We can extract the components of $A_{nk}$ using another orthogonality argument. If we take the expansion of a new basis state $|\phi_n\rangle$ and multiply by a bra $\langle \psi_j|$, then we have:

$$\begin{align} |\phi_n\rangle &= \sum_k A_{nk} |\psi_k \rangle \\ \langle \psi_j | \phi_n \rangle &= \langle \psi_j| \sum_k A_{nk} |\psi_k \rangle \\ &= \sum_k A_{nk} \langle \psi_j| \psi_k\rangle \\ &= \sum_k A_{nk}\, \delta_{jk} \end{align}$$

Where $\langle \psi_j |\psi_k\rangle = \delta_{jk}$ since our basis is orthonormal. The only term in the sum that does not vanish is the one with $k = j$, so the sum collapses to a single term, and (relabeling the index $j$ as $k$) we have:

$$\langle \psi_k | \phi_n \rangle = A_{nk}$$

Thus we have $A_{nk} = \langle \psi_k | \phi_n\rangle$. In operator form, this corresponds to $A = \displaystyle\sum_n |\phi_n\rangle \langle \psi_n|$, the operator that maps each old basis state $|\psi_n\rangle$ to the new basis state $|\phi_n\rangle$. We may also derive an expression for the inverse, i.e. $A^{-1}$. Recall that the inverse of $A$ must satisfy $A A^{-1} = I$, where $I$ is the identity operator, which by the completeness of the new basis can be written as $I = \displaystyle \sum_n |\phi_n \rangle \langle \phi_n |$. Then, we have:

$$\begin{align} A A^{-1} &= I \\ \left(\sum_n |\phi_n\rangle \langle \psi_n|\right) A^{-1} &= \sum_i |\phi_i \rangle \langle \phi_i | \end{align}$$

This equation is satisfied by $A^{-1} = \displaystyle\sum_k |\psi_k\rangle \langle \phi_k|$, which we can verify by direct substitution:

$$\begin{align} A A^{-1} &= \sum_n |\phi_n\rangle \langle \psi_n| \sum_k |\psi_k \rangle \langle \phi_k | \\ &= \sum_{n, k} |\phi_n\rangle \langle \psi_n|\psi_k \rangle \langle \phi_k | \\ &= \sum_{n, k} |\phi_n\rangle\, \delta_{nk}\, \langle \phi_k | \\ &= \sum_n |\phi_n\rangle \langle \phi_n| = I \end{align}$$

Where we used the orthonormality relation $\langle \psi_n|\psi_k\rangle = \delta_{nk}$ to collapse the double sum into a single sum, and the completeness of the $|\phi_n\rangle$ basis in the last step. Thus:

$$A^{-1} = \sum_k |\psi_k\rangle \langle \phi_k|$$

Having computed $A$ and $A^{-1}$, we will now show that our transformation of basis leads to a very nice expression for the matrix representation of the Hamiltonian. The Hamiltonian in our new basis, which we write as $\hat{\mathscr{H}}$, is given by the similarity transformation $\hat{\mathscr{H}} = A \hat H A^{-1}$ (this is a standard result of linear algebra). If we then substitute our expressions for $A$ and $A^{-1}$, we have:

$$\begin{align} \hat{\mathscr{H}} &= A \hat H A^{-1} \\ &= \sum_n |\phi_n\rangle \langle \psi_n |\, \hat H \sum_k |\psi_k\rangle \langle \phi_k| \\ &= \sum_{n, k} |\phi_n\rangle \langle \psi_n | \hat H |\psi_k\rangle \langle \phi_k| \\ &= \sum_{n, k} |\phi_n\rangle\, E_k\, \delta_{nk}\, \langle \phi_k| \\ &= \sum_n E_n\, |\phi_n\rangle \langle \phi_n| \end{align}$$

Where we used our earlier result $\langle \psi_n | \hat H |\psi_k\rangle = E_k\, \delta_{nk}$ to collapse the double sum. Thus we find that the transformed Hamiltonian is diagonal in the new basis, with matrix elements given directly by the energy eigenvalues:

$$\langle \phi_m | \hat{\mathscr{H}} | \phi_n \rangle = E_n\, \delta_{mn}$$
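We can sanity-check this whole construction numerically. The sketch below (using NumPy, with an arbitrary 4-state system in natural units) builds $A = \sum_n |\phi_n\rangle\langle\psi_n|$ and $A^{-1} = \sum_k |\psi_k\rangle\langle\phi_k|$ from a randomly generated orthonormal basis, and verifies both that $A A^{-1} = I$ and that the transformed Hamiltonian has matrix elements $E_n\,\delta_{mn}$ in the new basis:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4

# Suppose the eigenbasis |psi_n> is the standard basis, with known
# energies E_n (arbitrary example values in natural units)
E = np.array([0.5, 1.5, 2.5, 3.5])
H = np.diag(E)  # the Hamiltonian is diagonal in its own eigenbasis

# Build some other complete orthonormal basis |phi_n> (columns of Phi)
# via QR decomposition of a random matrix
Phi, _ = np.linalg.qr(rng.normal(size=(N, N)))

# A = sum_n |phi_n><psi_n|  (maps each |psi_n> to |phi_n>)
A = sum(np.outer(Phi[:, n], np.eye(N)[n]) for n in range(N))
# A^-1 = sum_k |psi_k><phi_k|
A_inv = sum(np.outer(np.eye(N)[k], Phi[:, k]) for k in range(N))

print(np.allclose(A @ A_inv, np.eye(N)))  # True: A A^-1 = I

# Transformed Hamiltonian; its matrix elements <phi_m|H'|phi_n>
# form a diagonal matrix holding the energy eigenvalues E_n
H_script = A @ H @ A_inv
elements = Phi.T @ H_script @ Phi
print(np.allclose(elements, np.diag(E)))  # True
```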

Let us now demonstrate the same result in the functional picture. Consider a system described by a wavefunction $\psi(\mathbf{r})$. By the postulates of quantum mechanics, the wavefunction may be expanded in any complete and orthonormal set of functions $\phi_n(\mathbf{r})$, such that:

$$\psi(\mathbf{r}) = \sum_{n = 1}^\infty c_n \phi_n(\mathbf{r}) = c_1 \phi_1 + c_2 \phi_2 + c_3 \phi_3 + \dots + c_n \phi_n$$

Where we assume all $\phi_n$ are normalized. From here, we may prepare a matrix $A_{mn}$ with elements given by:

$$A_{mn} = \int \phi_m^*\, \hat H\, \phi_n \, dV$$

Now, $A_{mn}$ is not necessarily a diagonal matrix, so it does not directly give the energy eigenvalues. However, if we diagonalize it, we are left with the matrix $E_{mn}$, which is diagonal (that is, $E_{mn} = 0$ for all $m \neq n$), and whose diagonal entries $E_{nn} = E_n$ are equal to the energy eigenvalues.

The matrix operator approach therefore condenses the difficult problem of solving the Schrödinger equation into the more straightforward problem of finding the matrix $A$ that diagonalizes the Hamiltonian in a particular basis; from there, we can simply read off the energy eigenvalues from the diagonal.

Example: quantum harmonic oscillator

We will use the matrix representation approach to solve the quantum harmonic oscillator, which will act as a toy model. We want to obtain a matrix representation for the Hamiltonian of the quantum harmonic oscillator and find its energy eigenvalues.

The well-known Hamiltonian for the quantum harmonic oscillator is given by:

$$\hat H = \dfrac{\hat p^2}{2m} + \dfrac{1}{2} m \omega^2 \hat x^2$$

We will now need to pick a basis to be able to obtain its matrix representation. In theory, when we don't know the precise eigenstates of the Hamiltonian, any set of basis functions would do (as long as they are complete and orthonormal), but luckily for us, we already know the eigenstates of the Hamiltonian. So for demonstrative purposes, it is easiest to choose the basis of the eigenstates of the Hamiltonian, which we can write as $|\psi_1\rangle, |\psi_2\rangle, |\psi_3\rangle, \dots, |\psi_n\rangle$. Then, we have $E_n = \langle \psi_n | \hat H |\psi_n\rangle$. But recall that for the quantum harmonic oscillator, $\hat H |\psi_n\rangle = \hbar \omega \left(n + \dfrac{1}{2}\right) |\psi_n\rangle$. Thus, $\langle \psi_n | \hat H | \psi_n\rangle$ is given by:

$$\begin{align} \langle \psi_n |\hat H | \psi_n\rangle &= \langle \psi_n|\hbar \omega \left(n + \dfrac{1}{2}\right) |\psi_n\rangle \\ &= \hbar \omega \left(n + \dfrac{1}{2}\right) \langle \psi_n|\psi_n\rangle \\ &= \hbar \omega \left(n + \dfrac{1}{2}\right) \delta_{nn} = \hbar \omega \left(n + \dfrac{1}{2}\right) \end{align}$$

Thus our resulting energy matrix becomes[3]:

$$\langle \psi_n|\hat{H}|\psi_n\rangle = \hbar \omega\begin{pmatrix} \frac{1}{2} & 0 & 0 & \dots & 0 \\ 0 & \frac{3}{2} & 0 & \dots & 0 \\ 0 & 0 & \frac{5}{2} & \dots & 0 \\ 0 & 0 & 0 & \ddots & \vdots \\ 0 & 0 & 0 & \dots & \left(n + \frac{1}{2}\right) \end{pmatrix}$$

And thus our energy eigenvalues can be found (as expected) by just reading off the diagonal entries, from which we can surmise that:

$$E_n = \left(n + \dfrac{1}{2}\right)\hbar \omega$$
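We can reproduce this matrix numerically (a sketch in natural units $\hbar = m = \omega = 1$), building $\hat x$ and $\hat p$ from the standard ladder-operator matrices in a truncated energy eigenbasis and checking that the Hamiltonian comes out diagonal with entries $n + \frac{1}{2}$:

```python
import numpy as np

# Truncate the energy eigenbasis to N states
N = 8
n = np.arange(1, N)
a = np.diag(np.sqrt(n), 1)  # lowering operator: a|n> = sqrt(n)|n-1>
adag = a.T                  # raising operator

x = (a + adag) / np.sqrt(2)        # x = sqrt(hbar/2mw) (a + a†)
p = 1j * (adag - a) / np.sqrt(2)   # p = i sqrt(hbar m w / 2) (a† - a)

# H = p^2/2m + m w^2 x^2 / 2; the off-diagonal a^2, a†^2 terms cancel
H = (p @ p).real / 2 + (x @ x) / 2
print(np.round(np.diag(H), 10))
# diagonal entries: 0.5, 1.5, 2.5, ... (the very last entry is an
# artifact of truncating the basis to N states)
```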

Let us now show that even though this matrix representation method for the quantum harmonic oscillator is a "toy model", it can still predict real-world results. The transition energy $\Delta E$ from our result is the difference between two energy levels $n_1, n_2$, and thus:

$$\begin{align} \Delta E &= \left(n_2 + \dfrac{1}{2}\right)\hbar \omega - \left(n_1 + \dfrac{1}{2}\right)\hbar \omega \\ &= (n_2 - n_1) \hbar \omega \end{align}$$

From which we may derive the spectral lines for various transitions, given that $\Delta E = hc/\lambda$, with:

$$\dfrac{1}{\lambda} = \dfrac{\omega}{2\pi c}(n_2 - n_1) \quad \Rightarrow \quad \lambda = \dfrac{2\pi c}{(n_2 - n_1)\omega}$$

The angular frequency of the vibrations, $\omega$, can be found through $\omega = \sqrt{k/\mu}$, where $\mu$ is the reduced mass of the molecule and $k$ is its force constant. Let us compute the vibrational transitions of carbon dioxide, which, although not strictly speaking a diatomic molecule, can be approximately treated as one. The reduced mass of the triatomic carbon dioxide molecule is given by[5]:

$$\mu = \dfrac{2m_{\mathrm{C}} m_{\mathrm{O}} + m_{\mathrm{O}}^2}{2 m_{\mathrm{C}} + m_{\mathrm{O}}} \approx 2.657 \times 10^{-26} \text{ kg}$$

Where $m_{\mathrm{C}}$ is the mass of the carbon atom and similarly $m_{\mathrm{O}}$ is the mass of the oxygen atom. The force constant of $\mathrm{CO_2}$ is approximately $1680 \text{ N/m}$[4], and thus $\omega \approx 2.5147 \times 10^{14}\ \text{rad/s}$. Substituting these values, we find that the spectral lines are given by:

| Transition | Spectral line | Spectrum |
| --- | --- | --- |
| $\|1\rangle \to \|0\rangle$ | 7596 nm | Mid-infrared |
| $\|2\rangle \to \|0\rangle$ | 3748 nm | Mid-infrared |
| $\|3\rangle \to \|0\rangle$ | 2499 nm | Near-infrared |
| $\|4\rangle \to \|0\rangle$ | 1874 nm | Near-infrared |
| $\|5\rangle \to \|0\rangle$ | 1499 nm | Near-infrared |
| $\|6\rangle \to \|0\rangle$ | 1249 nm | Near-infrared |
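The table above can be reproduced with a few lines of code (a sketch using the constants quoted in the text; the computed wavelengths come out close to, though not exactly matching, the tabulated values, presumably due to rounding in the quoted constants):

```python
import numpy as np

# Recompute the CO2 spectral lines from lambda = 2*pi*c / ((n2 - n1) w)
c = 2.998e8        # speed of light, m/s
omega = 2.5147e14  # vibrational angular frequency of CO2, rad/s

for n2 in range(1, 7):
    lam = 2 * np.pi * c / (n2 * omega)  # transition |n2> -> |0>
    print(f"|{n2}> -> |0>: {lam * 1e9:.0f} nm")
```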
Footnotes