
Inverse Matrix

We have seen some properties of matrix operations, but one thing we did not look at was multiplicative inverses of matrices. Today, we will take some time to explore what the multiplicative inverse of a matrix is.

Identity Matrix

In properties of matrix operations we took the time to look at commutativity of matrix multiplication. There, we saw that matrix multiplication is not even always defined. However, when we restricted ourselves to square matrices of the same size, we were able to multiply any two matrices in either order. Even though this multiplication is not commutative, it is at least always defined.

As we continue to look at matrix multiplication, we will work under the assumption that we are looking at all of the square matrices of a given size. In this case, we want to find a multiplicative identity matrix. Therefore, suppose that \(A\) is a square matrix of size \(m\). Now let
\[I=\begin{bmatrix}
1 & 0 & \ldots & 0 \\
0 & 1 & \ldots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \ldots & 1
\end{bmatrix}\] That is, we take \(I\) to be the square matrix of size \(m\) with \(1\)s along the main diagonal and \(0\)s everywhere else.

Next, if we take \(A*I\), the entry in the \(i\)th row and \(j\)th column is the \(i\)th row of \(A\) multiplied by the \(j\)th column of \(I\). In this sum, we will note that \(a_{ij}*I_{jj}=a_{ij}*1=a_{ij}\), while \(a_{ik}*I_{kj}=a_{ik}*0=0\) for \(k \neq j\). Hence, the only term that is not zero is the term \(a_{ij}\). Therefore, the entry in the \(i\)th row and \(j\)th column of \(A*I\) is precisely \(a_{ij}\). Since this is true for all \(1 \leq i,j \leq m\), we have that \(A*I=A\).

By a very similar argument, we would also get that \(I*A=A\). Therefore, \(I\) is the multiplicative identity for square matrices of size \(m\).
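To see this in action, here is a quick numerical check in NumPy (a sketch of my own, using an arbitrary \(2 \times 2\) matrix rather than one from this post): multiplying by the identity in either order leaves the matrix unchanged.

import numpy as np

# Multiplying by the identity in either order leaves A unchanged.
A = np.array([[1, 2], [3, 4]])   # an arbitrary 2 x 2 example
I = np.eye(2)                    # the 2 x 2 identity matrix
print(np.array_equal(A @ I, A))  # True
print(np.array_equal(I @ A, A))  # True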

Multiplicative Inverse

Now that we know what the multiplicative identity for square matrices of size \(m\) is, we can ask what a multiplicative inverse would be. In general, we say that \(A^{-1}\) is a multiplicative inverse of \(A\) if \(A*A^{-1}=A^{-1}*A=I\). That is, if we multiply the two matrices in either order, we get the identity matrix.

Now let’s suppose that such a matrix did exist. In order to see what would happen, we will denote the \(i\)th row of \(A\) as \(\mathbf{a}_{i}\) and the \(j\)th column of \(A^{-1}\) as \(\mathbf{x}_{j}\). Then we must have that
\[\mathbf{a}_{i}\mathbf{x}_{j}=
\begin{cases}
1 & \text{if } i = j \\
0 & \text{otherwise}
\end{cases}.\]

If we focus on the first column of \(A^{-1}\), this gives us the matrix equation
\begin{align*}
A*\mathbf{x}_{1}=\begin{bmatrix}
1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}.
\end{align*}
Now, if we want to solve this system of equations, we know that we could augment the matrix \(A\) with the column with \(1\) in the first row and \(0\) elsewhere. If we could row reduce this so that we arrive at \([I \mathbf{y}]\), the identity matrix augmented with a column matrix, we would note that the entries in \(\mathbf{x}_{1}\) would just be the entries in \(\mathbf{y}\). That is, if we can row reduce the augmented matrix in this way, we could find the first column of \(A^{-1}\).
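If you want to verify this on a computer, here is a short NumPy sketch (using the \(3 \times 3\) matrix from the example later in this post): solving \(A\mathbf{x}_{1}=\mathbf{e}_{1}\) produces the first column of \(A^{-1}\).

import numpy as np

# Solving A x1 = e1 gives the first column of the inverse; np.linalg.solve
# carries out essentially the same elimination we do by hand.
A = np.array([[1., 1., 1.],
              [3., 5., 4.],
              [3., 6., 5.]])
e1 = np.array([1., 0., 0.])
x1 = np.linalg.solve(A, e1)
print(x1)  # [ 1. -3.  3.], matching the first column of A^{-1} found below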

Continuing this for each of the columns of \(A^{-1}\), we would see that in order to find the \(j\)th column of \(A^{-1}\) we would need to row reduce \([A \mathbf{c}]\), where \(\mathbf{c}\) is a column matrix with \(1\) in the \(j\)th row and \(0\) elsewhere. Note that we will be augmenting \(A\) with each of the columns of the identity matrix. Therefore, instead of doing this one at a time, we can do them all at once. That is, we could start with the augmented matrix \([A I]\) and row reduce it. If we arrive at a matrix \([I B]\), then the entries in \(A^{-1}\) are precisely the entries in \(B\). Therefore, we will have found \(A^{-1}\).
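Here is a minimal Python sketch of this whole procedure (my own illustration, assuming floating point arithmetic and a simple row swap when a zero pivot is encountered; it is not meant to be a numerically robust implementation).

def invert(A):
    """Return the inverse of square matrix A (a list of lists), or None if
    [A | I] cannot be row reduced to [I | B]."""
    m = len(A)
    # Build the augmented matrix [A | I].
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(m)]
           for i, row in enumerate(A)]
    for col in range(m):
        # Find a row at or below `col` with a nonzero entry in this column.
        pivot = next((r for r in range(col, m) if abs(aug[r][col]) > 1e-12), None)
        if pivot is None:
            return None  # no pivot: we cannot reach [I | B], so no inverse
        aug[col], aug[pivot] = aug[pivot], aug[col]
        # Scale the pivot row so the leading entry is 1 (e.g. (1/2)R2 -> R2).
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        # Zero out the rest of the column (e.g. -3R1 + R2 -> R2).
        for r in range(m):
            if r != col:
                factor = aug[r][col]
                aug[r] = [x - factor * y for x, y in zip(aug[r], aug[col])]
    # The right-hand block is now A^{-1}.
    return [row[m:] for row in aug]

A = [[1, 1, 1], [3, 5, 4], [3, 6, 5]]
print(invert(A))  # [[1.0, 1.0, -1.0], [-3.0, 2.0, -1.0], [3.0, -3.0, 2.0]]

Running this on the matrix from the example below reproduces the inverse that we will compute by hand.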

Can there be no inverse?

I should note that it may not be possible to arrive at such a matrix. However, if we cannot, then we will not be able to solve at least one of the systems of equations. Therefore, we would not have an inverse matrix. For an example of this, note that if we let
\begin{align*}
A=\begin{bmatrix}
1 & 0 \\
0 & 0
\end{bmatrix}\end{align*}
then the row reduced form of \([A I]\) is precisely \([A I]\). Looking at the second row, we notice that we would have to solve the equation \(0=1\) for the second column of the inverse. Since this is never true, there is indeed no inverse of \(A\).
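NumPy reaches the same conclusion. In the sketch below, np.linalg.inv detects the singularity and raises a LinAlgError for this matrix instead of returning an inverse.

import numpy as np

# The matrix from above has no inverse, so np.linalg.inv raises an error.
A = np.array([[1., 0.],
              [0., 0.]])
try:
    np.linalg.inv(A)
except np.linalg.LinAlgError as err:
    print(err)  # "Singular matrix"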

Example

We are now ready to find the inverse of a given matrix. Therefore, let
\[A=\begin{bmatrix}
1 & 1 & 1 \\
3 & 5 & 4 \\
3 & 6 & 5
\end{bmatrix}\] and find \(A^{-1}\).

In order to do this, we start with the augmented matrix
\[\begin{bmatrix}
1 & 1 & 1 & 1 & 0 & 0\\
3 & 5 & 4 & 0 & 1 & 0\\
3 & 6 & 5 & 0 & 0 & 1
\end{bmatrix}\] We now row reduce. First we want a \(1\) in the first row and first column. Since we have this, we won’t need to make a change at this point. We now want \(0\)s in the rest of the first column. In order to do this, we will perform the row operations \(-3R_{1}+R_{2} \to R_{2}\) and \(-3R_{1}+R_{3} \to R_{3}\). We now have
\[\begin{bmatrix}
1 & 1 & 1 & 1 & 0 & 0\\
0 & 2 & 1 & -3 & 1 & 0\\
0 & 3 & 2 & -3 & 0 & 1
\end{bmatrix}.\]
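If you would like to follow along on a computer, here is the same first step in NumPy (a sketch of my own; rows of the augmented matrix are array slices, so each row operation is a single line).

import numpy as np

# Build the augmented matrix [A I] and apply the first two row operations.
aug = np.hstack([np.array([[1., 1., 1.],
                           [3., 5., 4.],
                           [3., 6., 5.]]), np.eye(3)])
aug[1] -= 3 * aug[0]  # -3R1 + R2 -> R2
aug[2] -= 3 * aug[0]  # -3R1 + R3 -> R3
print(aug)            # matches the matrix displayed above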

Next, we want a \(1\) as the leftmost nonzero number in the second row. We must, therefore, perform the row operation \(\frac{1}{2}R_{2} \to R_{2}\) and get the matrix
\[\begin{bmatrix}
1 & 1 & 1 & 1 & 0 & 0\\
0 & 1 & \frac{1}{2} & \frac{-3}{2} & \frac{1}{2} & 0\\
0 & 3 & 2 & -3 & 0 & 1
\end{bmatrix}.\]

Now we want \(0\)s in the other entries in the second column. We, therefore, take \(-R_{2}+R_{1} \to R_{1}\) and \(-3R_{2}+R_{3} \to R_{3}\). This gives us
\[\begin{bmatrix}
1 & 0 & \frac{1}{2} & \frac{5}{2} & \frac{-1}{2} & 0\\
0 & 1 & \frac{1}{2} & \frac{-3}{2} & \frac{1}{2} & 0\\
0 & 0 & \frac{1}{2} & \frac{3}{2} & \frac{-3}{2} & 1
\end{bmatrix}.\]

Moving on to the third row, we want the leading nonzero term to be \(1\). Hence, we take \(2R_{3} \to R_{3}\) and get
\[\begin{bmatrix}
1 & 0 & \frac{1}{2} & \frac{5}{2} & \frac{-1}{2} & 0\\
0 & 1 & \frac{1}{2} & \frac{-3}{2} & \frac{1}{2} & 0\\
0 & 0 & 1 & 3 & -3 & 2
\end{bmatrix}.\]

In order to make the other entries in the third column \(0\), we now take \(-\frac{1}{2}R_{3}+R_{1} \to R_{1}\) and \(-\frac{1}{2}R_{3}+R_{2} \to R_{2}\) and get
\[\begin{bmatrix}
1 & 0 & 0 & 1 & 1 & -1 \\
0 & 1 & 0 & -3 & 2 & -1\\
0 & 0 & 1 & 3 & -3 & 2
\end{bmatrix}.\]

Now that we have row reduced the matrix, we note that the left hand side is indeed the identity matrix. Hence, we get that
\[A^{-1}=\begin{bmatrix}
1 & 1 & -1 \\
-3 & 2 & -1\\
3 & -3 & 2
\end{bmatrix}.\]

Because of the amount of arithmetic involved in the problem, it is definitely worth checking our work at this point. In order to check, we should find \(A*A^{-1}\) and \(A^{-1}*A\) and confirm that each is the identity matrix. I will leave it to you to check both of these. If you would like extra work with matrix multiplication, make sure to look back at Matrix Operations.
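For those following along in code, here is one way to do that check in NumPy (np.allclose is used to guard against floating point round-off).

import numpy as np

# Both products should be the 3 x 3 identity matrix.
A = np.array([[1., 1., 1.],
              [3., 5., 4.],
              [3., 6., 5.]])
A_inv = np.array([[1., 1., -1.],
                  [-3., 2., -1.],
                  [3., -3., 2.]])
print(np.allclose(A @ A_inv, np.eye(3)))  # True
print(np.allclose(A_inv @ A, np.eye(3)))  # True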

Conclusion

We have seen that if we want to find a multiplicative inverse of a matrix, we can augment the matrix with the identity matrix and row reduce. If the resulting row reduced matrix is of the form \([I B]\), we get that \(B\) is precisely the inverse matrix.

I hope this helps you better understand and find inverses of matrices. If you need more help with linear algebra, check out our other posts or our YouTube videos. While checking out the videos, be sure to subscribe to the channel.
