The product of a matrix with its adjugate gives a diagonal matrix (entries not on the main diagonal are zero) whose diagonal entries are the determinant of the original matrix:

A adj(A) = adj(A) A = det(A) I,

where I is the identity matrix of the same size as A. Consequently, the multiplicative inverse of an invertible matrix can be found by dividing its adjugate by its determinant.
In more detail, suppose R is a unital commutative ring and A is an n × n matrix with entries from R. The (i, j)-minor of A, denoted M_ij, is the determinant of the (n − 1) × (n − 1) matrix that results from deleting row i and column j of A. The cofactor matrix of A is the n × n matrix C whose (i, j) entry is the (i, j) cofactor of A, which is the (i, j)-minor times a sign factor:

C_ij = (−1)^(i+j) M_ij.
The adjugate of A is the transpose of C, that is, the n × n matrix whose (i, j) entry is the (j, i) cofactor of A:

adj(A)_ij = C_ji = (−1)^(i+j) M_ji.
The adjugate is defined so that the product of A with its adjugate yields a diagonal matrix whose diagonal entries are the determinant det(A). That is,

A adj(A) = adj(A) A = det(A) I.
The above formula implies one of the fundamental results in matrix algebra, that A is invertible if and only if det(A) is an invertible element of R. When this holds, the equation above yields

adj(A) = det(A) A^(−1),  A^(−1) = det(A)^(−1) adj(A).
It is easy to check that the adjugate is the inverse times the determinant, −6.
The −1 in the second row, third column of the adjugate was computed as follows. The (2,3) entry of the adjugate is the (3,2) cofactor of A, which is computed using the submatrix obtained by deleting the third row and second column of the original matrix A. The (3,2) cofactor is a sign times the determinant of this submatrix: (−1)^(3+2) times that determinant, which here equals −1.
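The check can be sketched in Python. The 3 × 3 matrix A below is an illustrative choice (assumed here) with determinant −6 and (3, 2) cofactor −1, matching the figures quoted above:

```python
def minor(A, i, j):
    """Submatrix of A with row i and column j deleted (0-indexed)."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    """Determinant by Laplace expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(len(A)))

def adjugate(A):
    """adj(A)[i][j] is the (j, i) cofactor of A (transpose of the cofactor matrix)."""
    n = len(A)
    return [[(-1) ** (i + j) * det(minor(A, j, i)) for j in range(n)] for i in range(n)]

# Illustrative matrix (an assumption) with det(A) = -6.
A = [[-3, 2, -5], [-1, 0, -2], [3, -4, 1]]
adjA = adjugate(A)
d = det(A)

# The (2,3) entry of adj(A) is the (3,2) cofactor of A, which is -1,
# and A times adj(A) is det(A) times the identity.
n = len(A)
prod = [[sum(A[i][k] * adjA[k][j] for k in range(n)) for j in range(n)] for i in range(n)]
assert d == -6
assert adjA[1][2] == -1
assert prod == [[d if i == j else 0 for j in range(n)] for i in range(n)]
```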
The adjugate is multiplicative in the reversed order: adj(AB) = adj(B) adj(A). This can be proved in three ways. One way, valid for any commutative ring, is a direct computation using the Cauchy–Binet formula. The second way, valid for the real or complex numbers, is to first observe that for invertible matrices A and B,

adj(AB) = det(AB) (AB)^(−1) = det(B) det(A) B^(−1) A^(−1) = adj(B) adj(A).
Because every non-invertible matrix is the limit of invertible matrices, continuity of the adjugate then implies that the formula remains true when one of A or B is not invertible.
A corollary of the previous formula is that, for any non-negative integer k,

adj(A^k) = adj(A)^k.
If A is invertible, then the above formula also holds for negative k.
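Both identities are easy to verify numerically. A minimal sketch in Python, recomputing the adjugate by cofactor expansion on small arbitrary integer matrices:

```python
import random

def minor(A, i, j):
    """Submatrix of A with row i and column j deleted (0-indexed)."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    """Determinant by Laplace expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(len(A)))

def adjugate(A):
    """Transpose of the cofactor matrix."""
    n = len(A)
    return [[(-1) ** (i + j) * det(minor(A, j, i)) for j in range(n)] for i in range(n)]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

random.seed(0)
A = [[random.randint(-3, 3) for _ in range(3)] for _ in range(3)]
B = [[random.randint(-3, 3) for _ in range(3)] for _ in range(3)]

# adj(AB) = adj(B) adj(A): note the reversed order of the factors.
assert adjugate(matmul(A, B)) == matmul(adjugate(B), adjugate(A))

# adj(A^k) = adj(A)^k, e.g. for k = 2.
assert adjugate(matmul(A, A)) == matmul(adjugate(A), adjugate(A))
```

Both identities hold over any commutative ring, so the random integer matrices need not be invertible.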
From the identity

(A + B) adj(A + B) B = det(A + B) B = B adj(A + B) (A + B),

we deduce

A adj(A + B) B = B adj(A + B) A.
Suppose that A commutes with B. Multiplying the identity AB = BA on the left and right by adj(A) proves that

det(A) adj(A) B = det(A) B adj(A).
If A is invertible, this implies that adj(A) also commutes with B. Over the real or complex numbers, continuity implies that adj(A) commutes with B even when A is not invertible.
Finally, there is a more general proof than the second proof, which only requires that an n × n matrix has entries over a field with at least 2n + 1 elements (e.g. a 5 × 5 matrix over the integers modulo 11). det(A + tI) is a polynomial in t with degree at most n, so it has at most n roots. Note that the ij-th entry of adj((A + tI)B) is a polynomial of degree at most n, and likewise for adj(A + tI) adj(B). These two polynomials at the ij-th entry agree on at least n + 1 points, as we have at least n + 1 elements of the field where A + tI is invertible, and we have proven the identity for invertible matrices. Polynomials of degree n which agree on n + 1 points must be identical (subtract them from each other and you have n + 1 roots for a polynomial of degree at most n, a contradiction unless their difference is identically zero). As the two polynomials are identical, they take the same value for every value of t. Thus, they take the same value when t = 0.
Using the above properties and other elementary computations, it is straightforward to show that if A has one of the following properties, then adj A does as well:
If A is skew-symmetric, then adj(A) is skew-symmetric for even n and symmetric for odd n. Similarly, if A is skew-Hermitian, then adj(A) is skew-Hermitian for even n and Hermitian for odd n.
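As a sketch, the parity behaviour for skew-symmetric matrices can be checked directly on small examples (the matrices below are arbitrary):

```python
def minor(A, i, j):
    """Submatrix of A with row i and column j deleted (0-indexed)."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    """Determinant by Laplace expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(len(A)))

def adjugate(A):
    """Transpose of the cofactor matrix."""
    n = len(A)
    return [[(-1) ** (i + j) * det(minor(A, j, i)) for j in range(n)] for i in range(n)]

def transpose(M):
    return [list(col) for col in zip(*M)]

# n = 3 (odd): the adjugate of a skew-symmetric matrix is symmetric.
S3 = [[0, 2, -1], [-2, 0, 3], [1, -3, 0]]
assert adjugate(S3) == transpose(adjugate(S3))

# n = 4 (even): the adjugate of a skew-symmetric matrix is skew-symmetric.
S4 = [[0, 1, 2, 3], [-1, 0, 4, 5], [-2, -4, 0, 6], [-3, -5, -6, 0]]
adjS4 = adjugate(S4)
assert adjS4 == [[-x for x in row] for row in transpose(adjS4)]
```

The parity comes from adj(A)^T = adj(A^T) = adj(−A) = (−1)^(n−1) adj(A) when A^T = −A.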
If A is invertible, then, as noted above, there is a formula for adj(A) in terms of the determinant and inverse of A. When A is not invertible, the adjugate satisfies different but closely related formulas.
If rk(A) ≤ n − 2, then adj(A) = 0.
If rk(A) = n − 1, then rk(adj(A)) = 1. (Some minor is non-zero, so adj(A) is non-zero and hence has rank at least one; the identity adj(A) A = 0 implies that the dimension of the nullspace of adj(A) is at least n − 1, so its rank is at most one.) It follows that adj(A) = αxy^T, where α is a scalar and x and y are vectors such that Ax = 0 and A^T y = 0.
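A sketch of the rank n − 1 case in Python, using an arbitrary rank-2 matrix; the vectors x and y below span the nullspaces of A and A^T:

```python
def minor(A, i, j):
    """Submatrix of A with row i and column j deleted (0-indexed)."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    """Determinant by Laplace expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(len(A)))

def adjugate(A):
    """Transpose of the cofactor matrix."""
    n = len(A)
    return [[(-1) ** (i + j) * det(minor(A, j, i)) for j in range(n)] for i in range(n)]

# Rank-2 example: row 3 is the sum of rows 1 and 2, so det(A) = 0.
A = [[1, 2, 3], [4, 5, 6], [5, 7, 9]]
adjA = adjugate(A)
assert det(A) == 0

# adj(A) = x y^T with A x = 0 and A^T y = 0 (the scalar alpha is folded into x).
x = [3, -6, 3]
y = [1, 1, -1]
assert adjA == [[xi * yj for yj in y] for xi in x]
assert all(sum(A[i][j] * x[j] for j in range(3)) == 0 for i in range(3))  # A x = 0
assert all(sum(A[i][j] * y[i] for i in range(3)) == 0 for j in range(3))  # A^T y = 0
```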
Let b be a column vector of size n. Fix 1 ≤ i ≤ n and consider the matrix formed by replacing column i of A by b:

(A ←i b) = [a1 ⋯ a(i−1) b a(i+1) ⋯ an].

Laplace expand the determinant of this matrix along column i. The result is entry i of the product adj(A)b. Collecting these determinants for the different possible i yields an equality of column vectors

(det(A ←i b))_(i=1..n) = adj(A) b.
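When det(A) is invertible, dividing this equality by det(A) recovers Cramer's rule: entry i of the solution of Ax = b is the determinant of A with column i replaced by b, divided by det(A). A sketch in Python (the system below is an arbitrary example):

```python
from fractions import Fraction

def minor(A, i, j):
    """Submatrix of A with row i and column j deleted (0-indexed)."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    """Determinant by Laplace expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(len(A)))

A = [[2, 1, 0], [1, 3, 1], [0, 1, 4]]
b = [1, 2, 3]
d = det(A)  # 18, invertible over the rationals

x = []
for i in range(3):
    Ai = [row[:] for row in A]  # copy A, then replace column i by b
    for r in range(3):
        Ai[r][i] = b[r]
    x.append(Fraction(det(Ai), d))

# Verify the solution: A x = b.
assert [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)] == b
```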
Separating the constant term and multiplying the equation by adj(A) gives an expression for the adjugate that depends only on A and the coefficients of pA(t). These coefficients can be explicitly represented in terms of traces of powers of A using complete exponential Bell polynomials. The resulting formula is
where n is the dimension of A, and the sum is taken over s and all sequences of k_l ≥ 0 satisfying the linear Diophantine equation

s + Σ_(l=1..n−1) l·k_l = n − 1.
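Concretely, if p_A(t) = t^n + c_(n−1) t^(n−1) + ⋯ + c_0, then separating the constant term via Cayley–Hamilton gives adj(A) = (−1)^(n−1)(A^(n−1) + c_(n−1) A^(n−2) + ⋯ + c_1 I). A sketch in Python that obtains the coefficients from traces via the Faddeev–LeVerrier recursion (a related, standard trace-based method; the test matrix is arbitrary):

```python
from fractions import Fraction

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def trace(X):
    return sum(X[i][i] for i in range(len(X)))

A = [[2, -1, 0], [1, 3, 2], [0, 1, 1]]  # arbitrary test matrix with det(A) = 3
n = len(A)

# Faddeev-LeVerrier: N_1 = I; c_{n-k} = -tr(A N_k)/k; N_{k+1} = A N_k + c_{n-k} I.
N = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
for k in range(1, n):
    AN = matmul(A, N)
    c = Fraction(-trace(AN), k)
    N = [[AN[i][j] + (c if i == j else 0) for j in range(n)] for i in range(n)]

# After the loop N = N_n = A^(n-1) + c_{n-1} A^(n-2) + ... + c_1 I,
# so adj(A) = (-1)^(n-1) N_n.
adjA = [[(-1) ** (n - 1) * N[i][j] for j in range(n)] for i in range(n)]

# Check the defining identity A adj(A) = det(A) I.
assert matmul(A, adjA) == [[3 if i == j else 0 for j in range(n)] for i in range(n)]
```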
Abstractly, ∧^n R^n is isomorphic to R, and under any such isomorphism the exterior product is a perfect pairing

∧^(n−1) R^n × R^n → ∧^n R^n.

Therefore, it yields an isomorphism

φ : R^n ≅ Hom(∧^(n−1) R^n, ∧^n R^n).

Explicitly, this pairing sends v ∈ V to φ_v, where

φ_v(α) = v ∧ α.
Suppose that T : V → V is a linear transformation. Pullback by the (n − 1)st exterior power of T induces a morphism of Hom spaces. The adjugate of T is the composite

adj(T) = φ^(−1) ∘ (∧^(n−1) T)^* ∘ φ.
If V = R^n is endowed with its canonical basis e1, …, en, and if the matrix of T in this basis is A, then the adjugate of T is the adjugate of A. To see why, give ∧^(n−1) R^n the basis

{e1 ∧ ⋯ ∧ êk ∧ ⋯ ∧ en : 1 ≤ k ≤ n},

where êk indicates that the factor ek is omitted.
Fix a basis vector ei of R^n. The image of ei under φ is determined by where it sends basis vectors:

φ(ei)(e1 ∧ ⋯ ∧ êk ∧ ⋯ ∧ en) = (−1)^(i−1) e1 ∧ ⋯ ∧ en if k = i, and 0 otherwise.
On basis vectors, the (n − 1)st exterior power of T is

e1 ∧ ⋯ ∧ êj ∧ ⋯ ∧ en ↦ Te1 ∧ ⋯ ∧ T̂ej ∧ ⋯ ∧ Ten = Σ_(k=1..n) M_kj e1 ∧ ⋯ ∧ êk ∧ ⋯ ∧ en,

where M_kj is the (k, j)-minor of A.
Each of these terms maps to zero under φ(ei) except the k = i term. Therefore, the pullback of φ(ei) is the linear transformation for which

e1 ∧ ⋯ ∧ êj ∧ ⋯ ∧ en ↦ (−1)^(i−1) M_ij e1 ∧ ⋯ ∧ en,
that is, it equals

Σ_(j=1..n) (−1)^(i+j) M_ij φ(ej).
Applying the inverse of φ shows that the adjugate of T is the linear transformation for which

adj(T)(ei) = Σ_(j=1..n) (−1)^(i+j) M_ij ej.
Consequently, its matrix representation is the adjugate of A.
If V is endowed with an inner product and a volume form, then the map φ can be decomposed further. In this case, φ can be understood as the composite of the Hodge star operator and dualization. Specifically, if ω is the volume form, then it, together with the inner product, determines an isomorphism

ω^∨ : ∧^n R^n → R.
This induces an isomorphism

Hom(∧^(n−1) R^n, ∧^n R^n) ≅ ∧^(n−1) (R^n)^∨.
A vector v in R^n corresponds to the linear functional

α ↦ ω^∨(v ∧ α).
By the definition of the Hodge star operator, this linear functional is dual to *v. That is, ω^∨ ∘ φ equals v ↦ (*v)^∨.
Let A be an n × n matrix, and fix r ≥ 0. The r-th higher adjugate of A is an (n choose r) × (n choose r) matrix, denoted adj_r A, whose entries are indexed by size-r subsets I and J of {1, ..., n}. Let I^c and J^c denote the complements of I and J, respectively. Also let A_(I^c, J^c) denote the submatrix of A containing those rows and columns whose indices are in I^c and J^c, respectively. Then the (I, J) entry of adj_r A is

(−1)^(σ(I)+σ(J)) det A_(J^c, I^c),

where σ(I) and σ(J) are the sums of the elements of I and J, respectively.
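A sketch of this definition in Python, using 0-indexed subsets (shifting every index down by one changes σ(I) + σ(J) by the even amount 2r, so the sign is unaffected); it checks that adj_1 recovers the ordinary adjugate:

```python
from itertools import combinations

def minor(A, i, j):
    """Submatrix of A with row i and column j deleted (0-indexed)."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    """Determinant by Laplace expansion; the empty matrix has determinant 1."""
    if not A:
        return 1
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(len(A)))

def higher_adjugate(A, r):
    """adj_r(A) as a dict keyed by pairs (I, J) of size-r index subsets."""
    n = len(A)
    subsets = list(combinations(range(n), r))
    out = {}
    for I in subsets:
        for J in subsets:
            Ic = [i for i in range(n) if i not in I]
            Jc = [j for j in range(n) if j not in J]
            # Submatrix with rows J^c and columns I^c, signed by sigma(I) + sigma(J).
            sub = [[A[i][j] for j in Ic] for i in Jc]
            out[(I, J)] = (-1) ** (sum(I) + sum(J)) * det(sub)
    return out

A = [[2, -1], [1, 3]]
# adj_1(A) is the usual adjugate: [[3, 1], [-1, 2]].
adj1 = higher_adjugate(A, 1)
assert [[adj1[((i,), (j,))] for j in range(2)] for i in range(2)] == [[3, 1], [-1, 2]]
# adj_0(A) is the 1x1 matrix [det(A)]; adj_n(A) is the 1x1 matrix [1].
assert higher_adjugate(A, 0)[((), ())] == det(A)
assert higher_adjugate(A, 2)[((0, 1), (0, 1))] == 1
```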
Basic properties of higher adjugates include: adj_0(A) = det(A); adj_1(A) = adj(A); adj_n(A) = 1; and adj_r(BA) = adj_r(A) adj_r(B).