In Inverse Matrix we saw that we could find the multiplicative inverse of a matrix, or show that no such inverse existed, by augmenting the matrix with the identity matrix and row reducing. What we noticed, however, was that this could be a time-consuming process. Because of this, we decided it would be helpful if we could determine whether or not an inverse exists before going through this process.

**When does an inverse exist?**

The first thing we noticed was that an inverse can only exist for a square matrix. Otherwise, there is no matrix \(B\) for which \(AB\) and \(BA\) are both defined and the same size, since the dimensions would not match. Therefore, we began by looking at \(1 \times 1 \) matrices.

**\(1 \times 1\) matrices**

Note that a \(1\times 1\) matrix consists of a single entry. If we let \(A=[a]\), then \(A\) would have an inverse if and only if there existed a matrix \(B=[b]\) such that \(AB=BA=I\). Therefore, we would need \([a][b]=[ab]=[1]\). This reduces to solving the equation \(ab=1\). Note that, for the real numbers, this equation has the solution \(b=\frac{1}{a}\) for every \(a\) except \(a=0\).

What we get here, then, is that a \(1 \times 1\) matrix is invertible if and only if its entry, \(a\), is non-zero. For example, \([4]\) has inverse \(\left[\frac{1}{4}\right]\), while \([0]\) has no inverse.

**\(2 \times 2\) matrices**

Here we will suppose that we have an arbitrary \(2 \times 2\) matrix. That is, let

\[\begin{bmatrix}
a & b \\
c & d \end{bmatrix}\]
where \(a,b,c\), and \(d\) are real numbers. To find an inverse for this matrix, notice that we would have to augment with the identity and row reduce. We then get,

\[\begin{bmatrix}
a & b & 1 & 0 \\
c & d & 0 & 1 \end{bmatrix}.\]
As we row reduce this (assuming for the moment that \(a \neq 0\)), we get

\begin{align*}
&\frac{1}{a}R_{1}\to R_{1} \\
&\begin{bmatrix}
1 & \frac{b}{a} & \frac{1}{a} & 0 \\
c & d & 0 & 1 \end{bmatrix}\end{align*}

We then get

\begin{align*}
&-cR_{1}+R_{2} \to R_{2} \\
&\begin{bmatrix}
1 & \frac{b}{a} & \frac{1}{a} & 0 \\
0 & d-\frac{cb}{a} & -\frac{c}{a} & 1 \end{bmatrix}\end{align*}

Next, we need to divide the second row by \(d-\frac{cb}{a}\). However, before doing so, we will simplify by combining the fractions: \(d-\frac{cb}{a}=\frac{ad-bc}{a}\). Provided that \(ad-bc \neq 0\), we can therefore further row reduce.

\begin{align*}
&\frac{a}{ad-bc}R_{2} \to R_{2} \\
&\begin{bmatrix}
1 & \frac{b}{a} & \frac{1}{a} & 0 \\
0 & 1 & \frac{-c}{ad-bc} & \frac{a}{ad-bc} \end{bmatrix}\end{align*}

Finally, we can arrive at

\begin{align*}
&-\frac{b}{a}R_{2}+R_{1} \to R_{1} \\
&\begin{bmatrix}
1 & 0 & \frac{d}{ad-bc} & \frac{-b}{ad-bc} \\
0 & 1 & \frac{-c}{ad-bc} & \frac{a}{ad-bc} \end{bmatrix}.\end{align*}

Therefore, the inverse matrix will be

\begin{align*}
\begin{bmatrix}
\frac{d}{ad-bc} & \frac{-b}{ad-bc} \\
\frac{-c}{ad-bc} & \frac{a}{ad-bc} \end{bmatrix}.\end{align*}

Note that, in the process of row reducing, we ended up dividing by \(ad-bc\). That is, we took the entry in the first row, first column and multiplied it by the entry of the matrix obtained by deleting the first row and first column. We then multiplied the entry in the first row, second column by \(-1\) and by the entry of the matrix obtained by deleting the first row and second column. We then added these together and noted that, in order to have an inverse, this number must be non-zero.

The number we arrive at here we name the determinant, written \(|A|\). Note that we are stating that, for both \(1 \times 1\) and \(2 \times 2\) matrices, we will be able to find an inverse matrix if and only if the determinant is non-zero.
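As a quick sanity check on the formula derived above, here is a short Python sketch (the function name and the nested-list representation are my own choices, not from the post):

```python
def inverse_2x2(a, b, c, d):
    """Invert [[a, b], [c, d]] using the formula derived above.

    Raises ValueError when the determinant ad - bc is zero,
    since in that case no inverse exists.
    """
    det = a * d - b * c  # the determinant |A|
    if det == 0:
        raise ValueError("matrix is not invertible: |A| = 0")
    return [[d / det, -b / det],
            [-c / det, a / det]]

# For example, [[1, 2], [3, 4]] has determinant -2 and so is invertible:
print(inverse_2x2(1, 2, 3, 4))  # [[-2.0, 1.0], [1.5, -0.5]]
```

Multiplying the result by the original matrix, in either order, recovers the identity, which matches the row reduction we carried out by hand.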

**\(3 \times 3\) and beyond**

While I won’t go through all the calculations we did for the \(2 \times 2\) matrix, we do know that if we want to find the inverse of a larger matrix, we can still augment with the identity and row reduce the resulting matrix. If we did this, a quick outline would be: let \(A=[a_{ij}]\), then

- Divide the first row by its leading entry. This results in the first row being divided by \(a_{11}\).
- For each remaining row, perform the row operation \(-a_{i1}R_{1}+R_{i} \to R_{i}\).
- This places \(a_{11}\) in the denominator of the terms of the inverse matrix.
- If we then divide the second row by the pivot that has resulted in the \(a_{22}\) position, we find that, overall, we have divided by \(a_{11}a_{22}-a_{12}a_{21}\). After using this row to clear the rest of the second column, this expression ends up in the denominator of each entry in the inverse.
- Dividing the third row by the pivot that has resulted in the \(a_{33}\) position then involves dividing by \(a_{33}(a_{11}a_{22}-a_{12}a_{21})-a_{23}(a_{11}a_{32}-a_{12}a_{31})+a_{13}(a_{21}a_{32}-a_{22}a_{31})\).
- We would find that, in general, we would have to divide by

\begin{align*}
\sum_{i=1}^{n}(-1)^{i+1}a_{1i}|M_{1i}|
\end{align*}

where \(M_{1i}\) is the matrix found by deleting the first row and the \(i\)th column of the matrix.

Now, we can define the determinant recursively using this formula, and we will find that if

\begin{align*}
|A|=\sum_{i=1}^{n}(-1)^{i+1}a_{1i}|M_{1i}|,
\end{align*}

then \(A\) has an inverse matrix if and only if the determinant of \(A\) is not \(0\), that is \(|A| \neq 0\).
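The recursive definition translates directly into code. Here is a short Python sketch (the helper name `det` and the nested-list representation are my own choices):

```python
def det(A):
    """Determinant of a square matrix (as nested lists), computed by
    cofactor expansion along the first row, as in the formula above."""
    n = len(A)
    if n == 1:
        return A[0][0]  # base case: the determinant of [a] is a
    total = 0
    for i in range(n):
        # minor M_{1i}: delete the first row and the i-th column
        minor = [row[:i] + row[i + 1:] for row in A[1:]]
        # (-1) ** i here matches (-1)^{i+1} once indices start at 1
        total += (-1) ** i * A[0][i] * det(minor)
    return total

print(det([[1, 2], [3, 4]]))                   # -2, so an inverse exists
print(det([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))  # 0, so no inverse exists
```

This mirrors the hand calculation exactly, though for large matrices the recursion is far slower than row reduction; it is meant only to illustrate the definition.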

As a further note here, it can be shown (though we won’t do so here) that

\begin{align*}
|A|&=\sum_{i=1}^{n}(-1)^{i+1}a_{1i}|M_{1i}|\\
&=\sum_{j=1}^{n}(-1)^{i+j}a_{ij}|M_{ij}| \\
&=\sum_{i=1}^{n}(-1)^{i+j}a_{ij}|M_{ij}|,
\end{align*}

for any \(1 \leq i,j \leq n\). That is, we may expand along any row \(i\), or down any column \(j\), and arrive at the same determinant.
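While we won’t prove this claim, we can at least spot-check it numerically. The Python sketch below (my own, not from the post) expands along each row of a sample \(3 \times 3\) matrix and confirms that the results agree:

```python
def det_along_row(A, r):
    """Cofactor expansion of |A| along row r (0-based indexing)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # minor M_{rj}: delete row r and column j
        minor = [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != r]
        total += (-1) ** (r + j) * A[r][j] * det_along_row(minor, 0)
    return total

A = [[2, 0, 1], [1, 3, 4], [5, 6, 0]]
print([det_along_row(A, r) for r in range(3)])  # the same value three times
```

Expanding down each column gives the same value as well, by the symmetric calculation.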

**Conclusion**

We have found that, instead of having to row reduce the entire matrix to determine whether the matrix is invertible, we can instead find the determinant of the matrix. If this number is non-zero, then the matrix has an inverse, whereas if this number is \(0\), it does not. While it still takes work to calculate the determinant, it is much less work than row reducing the entire augmented matrix.

Note that, in our next post, we will go through an example of finding the determinant of a given \(3 \times 3\) matrix. Please read through that for more help on matrices, or visit our other linear algebra posts for more help in the course.
