What exactly is the determinant, anyway?

by greenturtle3141, Nov 26, 2021, 11:19 AM

Reading Difficulty: 3-4/5

Prerequisites: You should know what $\begin{vmatrix}3 & 1 & 4 \\ 1 & 5 & 9 \\ 2 & 5 & 6\end{vmatrix}$ denotes and how to compute it.

Preliminary notes:
  • The discussion here generalizes to vector spaces over any field, but for simplicity's sake we'll pretend that the vector space is $\mathbb{R}^n$ throughout.
  • Moreover, we'll take the standard basis of $\mathbb{R}^n$ for our matrix representations. If you don't understand what that means, worry not.

So what's a determinant...? We learn what it is and what it does sometimes, but what motivates looking at it?


Part 0: Matrices vs. Linear Transformations

Recall (or, learn right now) that a linear transformation is a function $T: \mathbb{R}^n \to \mathbb{R}^n$ that is... er, linear. That is:
  • $T(v+w) = T(v) + T(w)$ for any vectors $v,w \in \mathbb{R}^n$
  • $T(\lambda v) = \lambda T(v)$ for any vector $v \in \mathbb{R}^n$ and scalar $\lambda \in \mathbb{R}$.

Now the big question that linear algebra teachers forget to ask.

Q: What is the difference between matrices and linear transformations?
A: THEY'RE THE SAME. EXACT. THING.

Yes! That's the whole flippin point of matrices! They're LITERALLY FUNCTIONS. FUNCTIONS THAT TURN VECTORS INTO OTHER VECTORS.

Let me spend some time convincing you that this is true (caveat: this only works in finite-dimensional vector spaces, because infinitely large matrices aren't nice).

Case Study 1: Multiplying by a column vector

Consider the matrix $\begin{bmatrix}1 & 2 & 3 \\ 4 & 5 & 6 \\ 3 & 1 & 4\end{bmatrix}$. This can be viewed as a function from $\mathbb{R}^3$ to $\mathbb{R}^3$. How?

To figure out where the matrix takes the coordinates/vector $v = (x,y,z)$, all you have to do is compute $\begin{bmatrix}1 & 2 & 3 \\ 4 & 5 & 6 \\ 3 & 1 & 4\end{bmatrix}\begin{bmatrix}x \\ y \\ z\end{bmatrix}$. If you remember your multiplication rules, this is easy:
$$\begin{bmatrix}1 & 2 & 3 \\ 4 & 5 & 6 \\ 3 & 1 & 4\end{bmatrix}\begin{bmatrix}x \\ y \\ z\end{bmatrix} = \begin{bmatrix}x+2y+3z \\ 4x+5y+6z \\ 3x+y+4z\end{bmatrix}$$So the matrix may be viewed as a function $T: \mathbb{R}^3 \to \mathbb{R}^3$ defined by $T(x,y,z) = (x+2y+3z, 4x+5y+6z, 3x+y+4z)$. Hey, isn't that a linear transformation?
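If you want to watch this happen on a computer, here's a tiny sketch (using numpy, which is my addition, not something the post assumes):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [3, 1, 4]])

def T(x, y, z):
    # The "function view" of A, written out by hand.
    return (x + 2*y + 3*z, 4*x + 5*y + 6*z, 3*x + y + 4*z)

v = np.array([2.0, -1.0, 5.0])
print(A @ v)   # [15. 33. 25.]
print(T(*v))   # (15.0, 33.0, 25.0)
```

Same numbers either way: multiplying by the matrix and applying the hand-written function $T$ are literally the same operation.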

Case Study 2: Composition

From the above discussion, what sort of function is represented by an $m \times n$ matrix $M$? With some thinking, you should conclude that $M$ is a linear transformation from $\mathbb{R}^n$ to $\mathbb{R}^m$.

If you recall matrix multiplication rules, the product $MN$ is only valid if the number of columns of $M$ matches the number of rows of $N$. That is, $N$ needs to be an $n \times p$ matrix for some $p$. In this case, $N$ would be a function $\mathbb{R}^p \to \mathbb{R}^n$.

Ok cool, but I promised that matrices are the same thing as linear transformations. If we're supposed to view all matrices as linear transformations, what sort of function is $MN$ supposed to be?

Spoiler Alert: Matrix multiplication is just a composition of linear transformations! That is, if $M$ is viewed as a linear transformation $T_M : \mathbb{R}^n \to \mathbb{R}^m$ and $N$ is viewed as a linear transformation $T_N : \mathbb{R}^p \to \mathbb{R}^n$, then $MN$ corresponds to the composition $T_M \circ T_N : \mathbb{R}^p \to \mathbb{R}^m$. This makes sense because $MN$ ends up being an $m \times p$ matrix. Now do you see why you have the weird dimension-matching rule for matrix multiplication? It's literally just to make sure that the range of $T_N$ will match the domain of $T_M$.
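Here's a quick numerical sketch of the composition claim; the sizes and entries below are arbitrary, chosen only so the dimensions match:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.integers(-3, 4, size=(2, 3))   # T_M : R^3 -> R^2
N = rng.integers(-3, 4, size=(3, 4))   # T_N : R^4 -> R^3
v = rng.integers(-3, 4, size=4)        # a vector in R^4

# Applying the product matrix MN is the same as applying N first, then M.
print((M @ N) @ v)
print(M @ (N @ v))   # identical output
```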

To summarize:
  • Any matrix $A$ is literally just a linear transformation. This is because it's pretty easy to show that $A(v+w) = Av + Aw$ and $A(\lambda v) = \lambda \cdot Av$.
  • Conversely, any linear transformation $T : \mathbb{R}^n \to \mathbb{R}^n$ also has a unique matrix representation. I leave this as an exercise!

As such, linear transformations and matrices are essentially the same thing, and hence should be treated as the same thing. This lets us discuss not only the determinants of matrices, but also the determinants of linear transformations. Indeed, this is ultimately what we're going to do.


Part 1: Bad Definitions

Definition 1: The determinant is a number associated with a matrix, computed by multiplying and adding a bunch of random numbers in it, and it's really special because you need to know it to pass the ACT math section. Concretely, it is given by:
$$\det A := \sum_{\pi \in S_n} \operatorname{sgn}(\pi)\prod_{i=1}^n a_{i,\pi(i)}$$and it can be computed more easily using expansion by minors. The determinant has some nice properties, just trust me I swear!
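To drive home how mechanical this recipe is, here's a brute-force sketch of that sum (my own illustration, not part of the post; it loops over all $n!$ permutations, so it's only sane for small $n$):

```python
from itertools import permutations
import numpy as np

def sign(pi):
    # Sign of a permutation, computed by counting inversions.
    inv = sum(1 for i in range(len(pi)) for j in range(i + 1, len(pi)) if pi[i] > pi[j])
    return -1 if inv % 2 else 1

def det_bruteforce(A):
    n = len(A)
    return sum(sign(pi) * np.prod([A[i][pi[i]] for i in range(n)])
               for pi in permutations(range(n)))

A = [[3, 1, 4], [1, 5, 9], [2, 5, 6]]
print(det_bruteforce(A))           # -53
print(np.linalg.det(np.array(A)))  # -53.0 (up to floating point)
```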

Verdict: This definition sucks. But a bunch of people teach it this way. Probably because they have no idea what it does either!

This is bad and it doesn't tell us ANYTHING about this weird number. Another major drawback is that it's really not obvious how to prove important properties such as $\det AB = \det A \cdot \det B$. Getting these properties requires a deeper understanding of the determinant... So much so that it requires a whole new definition.


Definition 2: Let $A$ be an $n \times n$ matrix. The key is to interpret $A$ visually as a linear transformation. $A$ "stretches" the unit cube by some amount.

https://freight.cargo.site/t/original/i/681a448c3813e5afd02aabf077c414b38addb31b60c0db05795d926cf59b1b75/ratio.png
(Image stolen from https://emoy.net/Determinant)

The determinant is just how many times bigger the cube gets in volume (with a minus sign if $A$ flips orientation). In general, $A$ scales the volume of any shape by a factor of $|\det A|$.

Verdict: This is a better definition in the sense that it gives you an idea of why you'd ever care about it. In fact, it makes some of the properties of determinants make sense. For example, $\det AB = \det A \cdot \det B$ can be read as "If you take a shape and transform it under $B$ and then under $A$, then its volume is first multiplied by $\det B$ and then multiplied by $\det A$". The only issue is that this isn't quite rigorous, and it doesn't give us a way to actually compute the determinant of some given matrix.
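Here's a rough numerical illustration of the "volume scaling" reading in 2D. The matrix and polygon below are made up, and the areas are computed with the shoelace formula, which never looks at $\det A$:

```python
import numpy as np

def shoelace_area(pts):
    # Area of a polygon given its vertices in order (shoelace formula).
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

A = np.array([[2.0, 1.0],
              [0.5, 3.0]])

# Some polygon (here, a pentagon) and its image under A.
P = np.array([[0, 0], [2, 0], [3, 1], [1.5, 2.5], [0, 1]], dtype=float)
Q = P @ A.T   # apply A to each vertex

print(shoelace_area(Q) / shoelace_area(P))  # ~5.5
print(abs(np.linalg.det(A)))                # 5.5
```

The area ratio comes out to $|\det A|$, as promised (the sign information is lost here because areas are unsigned).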


With that out of the way, we can begin sussing out the "true identity" of the determinant.

Part 2: Bilinear maps and Multilinear maps

We discussed what a linear map (or, linear transformation) was in Part 0. Now get ready for bilinear maps! It's nothing complicated, I promise.

Definition: A function $\phi(x,y)$ from $\mathbb{R}^n \times \mathbb{R}^n$ to $\mathbb{R}^m$ is called bilinear if it is linear in each component. That is:
  • (Linearity in the first component) $\phi(v_1+v_2,w) = \phi(v_1,w) + \phi(v_2,w)$ and $\phi(\lambda v,w) = \lambda \phi(v,w)$
  • (Linearity in the second component) $\phi(v,w_1+w_2) = \phi(v,w_1) + \phi(v,w_2)$ and $\phi(v,\lambda w) = \lambda \phi(v,w)$

Test Your Understanding
  • Is the map $\phi(v,w) = (2v,2w)$ bilinear? Answer
  • Is the map $\cdot : \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ defined by $\cdot(x,y) := xy$ bilinear? Answer
  • Is the map $\text{Add}: \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}^n$ defined by $\text{Add}(v,w) = v+w$ bilinear? Answer
  • Is the map $\phi:\mathbb{R}^2 \times \mathbb{R}^2 \to \mathbb{R}$ defined by $\phi((w,x), (y,z)) := wz-xy$ bilinear? Answer
  • Is the map $\phi(v) = 0$ bilinear? Answer
  • Come up with a bilinear map from $\mathbb{R}^2 \times \mathbb{R}^2$ to $\mathbb{R}^3$. Make it as simple/complex as you desire.

Not so bad eh? Let's move on to multilinear maps.

Definition: A function $\phi(v_1,v_2,\cdots,v_k)$ from $\mathbb{R}^n \times \mathbb{R}^n \times \ldots \times \mathbb{R}^n$ to $\mathbb{R}^m$ is called multilinear if it is linear in each component.

Wait, isn't that basically the same exact definition?

Test Your Understanding
  • Are all bilinear maps also multilinear? Answer
  • Is the map $\psi: \mathbb{R}^3 \times \mathbb{R}^3 \times \mathbb{R}^3 \to \mathbb{R}$ defined by $\psi((a,b,c),(d,e,f),(g,h,i)) := adg+beh+cfi$ multilinear? (This is just the dot product of the input vectors) Answer
  • Let $\psi: (\mathbb{R}^n)^n \to \mathbb{R}$ be multilinear, and let $\alpha:\mathbb{R} \to \mathbb{R}$ be a linear transformation. Does it follow that $\alpha \circ \psi$ is multilinear? Answer

Multilinear maps aren't that fantastic. They really only start to shine once you give them one extra property...


Part 3: Alternating Multilinear Maps

Definition: A multilinear map $\psi:(\mathbb{R}^n)^k \to \mathbb{R}^m$ is alternating if its value is negated upon switching two components.

For example, consider the multilinear map $\psi((a,b), (c,d)) := ad-bc$. This is alternating because if I switch two components (er, the only two components in this case), then I'd get $\psi((c,d),(a,b)) = cb-da = -\psi((a,b), (c,d))$.
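If you want a quick sanity check of both multilinearity and the alternating property for this particular $\psi$, here's a throwaway numerical sketch (the random vectors and the scalar $3.7$ mean nothing special):

```python
import numpy as np

def psi(v, w):
    # psi((a,b),(c,d)) = ad - bc
    return v[0] * w[1] - v[1] * w[0]

rng = np.random.default_rng(1)
v, w, u = rng.random(2), rng.random(2), rng.random(2)
lam = 3.7

print(np.isclose(psi(v + u, w), psi(v, w) + psi(u, w)))  # linear in 1st slot: True
print(np.isclose(psi(v, lam * w), lam * psi(v, w)))      # linear in 2nd slot: True
print(np.isclose(psi(w, v), -psi(v, w)))                 # alternating: True
print(np.isclose(psi(v, v), 0))                          # equal inputs give 0: True
```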

Test Your Understanding
  • Suppose $\psi:(\mathbb{R}^n)^5 \to \mathbb{R}$ is multilinear and alternating. If $\psi(v_1,v_2,v_3,v_4,v_5) = 2$, what is $\psi(v_2,v_3,v_1,v_5,v_4)$? Answer
  • In general, let $\pi:\{1,\cdots,k\} \to \{1,\cdots,k\}$ be a permutation. If $\psi(v_1,\cdots,v_k) = 1$, what is $\psi(v_{\pi(1)},\cdots,v_{\pi(k)})$? Answer
  • Prove that a multilinear map $\psi:(\mathbb{R}^n)^k \to \mathbb{R}^m$ is alternating iff $\psi(\cdots, v, \cdots, v, \cdots) = 0$. That is: if two of the components are equal, then $\psi$ comes out to zero. Solution
  • Let $\psi$ be multilinear and alternating. Prove that $\psi(\cdots,v,\cdots,w,\cdots) = \psi(\cdots,v,\cdots,v+w,\cdots)$. That is, I can add one component to another and nothing would change. Solution

The above stuff is important, but this next result is even more important, so it gets the stage here. Let $e_1 = (1,0,0,\cdots,0)$, $e_2 = (0,1,0,\cdots,0)$, ..., $e_n = (0,0,0,\cdots,1)$. We call this the "standard basis".

THEOREM: If $\psi:(\mathbb{R}^n)^n \to \mathbb{R}^m$ is multilinear and alternating, then $\psi$ is fully determined by the value of $\psi(e_1,\cdots,e_n)$. That is, if you tell me just the value of $\psi(e_1,\cdots,e_n)$, then I can give you the value of $\psi$ everywhere else!

Proof. I'm going to cheat and let $n=3$. The key idea is that since $e_1,e_2,e_3$ form a basis, I can write every vector in $\mathbb{R}^3$ in terms of them, i.e. every $v \in \mathbb{R}^3$ takes the form $v = ae_1 + be_2 + ce_3$ for some $a,b,c \in \mathbb{R}$. I mean, that's pretty easy to see because if $v = (a,b,c)$ then $v = a(1,0,0) + b(0,1,0) + c(0,0,1) = ae_1 + be_2 + ce_3$.

So, what's the point? Basically, for any $u,v,w \in \mathbb{R}^3$, I can first write:
$$\psi(u,v,w) = \psi(ae_1 + be_2 + ce_3,de_1 + ee_2 + fe_3,ge_1 + he_2 + ie_3)$$Then I can abuse the fact that $\psi$ is multilinear to keep expanding this more and more! Expanding out the first component:
$$ = \psi(ae_1,de_1 + ee_2 + fe_3,ge_1 + he_2 + ie_3) + \psi(be_2,de_1 + ee_2 + fe_3,ge_1 + he_2 + ie_3) + \psi(ce_3,de_1 + ee_2 + fe_3,ge_1 + he_2 + ie_3)$$Expanding out the second component:
$$ = \psi(ae_1,de_1,ge_1 + he_2 + ie_3) + \psi(ae_1,ee_2,ge_1 + he_2 + ie_3) + \psi(ae_1,fe_3,ge_1 + he_2 + ie_3)$$$$ + \psi(be_2,de_1,ge_1 + he_2 + ie_3) + \psi(be_2,ee_2,ge_1 + he_2 + ie_3) + \psi(be_2,fe_3,ge_1 + he_2 + ie_3)$$$$ + \psi(ce_3,de_1,ge_1 + he_2 + ie_3) + \psi(ce_3,ee_2,ge_1 + he_2 + ie_3) + \psi(ce_3,fe_3,ge_1 + he_2 + ie_3)$$Expanding out the thir- lol I'm kidding, it's 4 AM why would I do that? But you can see that in the end, we're left with $27$ terms of the form $\psi(\lambda_1e_i,\lambda_2e_j,\lambda_3e_k)$.
  • Some of these terms use the same $e_i$, like $\psi(be_2,de_1,he_2)$. By multilinearity, this is just $bdh\psi(e_2,e_1,e_2)$. But hey, two of those components match! So, since $\psi$ is alternating, this is just zero. Terms that use the same basis element vanish!
  • The rest of the terms use each $e_i$ exactly once, like $\psi(ce_3,de_1,he_2)$. By multilinearity, this is just $cdh\psi(e_3,e_1,e_2)$. But hey, we know that $\psi(e_3,e_1,e_2)$ only differs from $\psi(e_1,e_2,e_3)$ by a factor of $\pm 1$, so we can write these terms in terms of $\psi(e_1,e_2,e_3)$.
In sum, the whole $\psi(u,v,w)$ can be written in terms of $\psi(e_1,e_2,e_3)$. $\square$

Corollary: If $\psi:(\mathbb{R}^n)^n \to \mathbb{R}^m$ is multilinear and alternating, then $\psi(v_1,\cdots,v_n) = c \cdot \psi(e_1,\cdots,e_n)$ for a constant $c \in \mathbb{R}$ that does NOT depend on $\psi$. It only depends on $v_1,\cdots,v_n$.
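To make the proof and the corollary concrete (for $n = 3$), here's a sketch that computes the constant $c$ exactly as in the expansion above (throw away terms with a repeated $e_i$, weight the survivors by $\pm 1$) and checks it against one particular alternating multilinear map, the scalar triple product $u \cdot (v \times w)$. The function names are mine:

```python
import numpy as np
from itertools import permutations

def sign(pi):
    inv = sum(1 for i in range(len(pi)) for j in range(i + 1, len(pi)) if pi[i] > pi[j])
    return -1 if inv % 2 else 1

def coefficient_c(vectors):
    # The constant c from the corollary: expand psi(v1,...,vn) multilinearly,
    # kill terms with a repeated e_i, and reduce the survivors to psi(e_1,...,e_n).
    n = len(vectors)
    return sum(sign(pi) * np.prod([vectors[i][pi[i]] for i in range(n)])
               for pi in permutations(range(n)))

def triple_product(u, v, w):
    # One concrete alternating multilinear map on (R^3)^3.
    return np.dot(u, np.cross(v, w))

rng = np.random.default_rng(2)
u, v, w = rng.random(3), rng.random(3), rng.random(3)
e1, e2, e3 = np.eye(3)

c = coefficient_c([u, v, w])
print(np.isclose(triple_product(u, v, w), c * triple_product(e1, e2, e3)))  # True
```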

You can see that alternating multilinear maps have some pretty neat properties!

...we're eventually going to get to determinants, I promise.


Part 4: Almost a Determinant...

Let $T:\mathbb{R}^n \to \mathbb{R}^n$ be a linear map. I'm now ready to give you a form for its determinant.

Let $\psi:(\mathbb{R}^n)^n \to \mathbb{R}$ be some multilinear, alternating map. By the previous theorem, if I set $\psi(e_1,\cdots,e_n) = 1$, then the rest of $\psi$ must be completely determined. In fact, I can now tell you the value of $\psi(T(e_1),T(e_2),\cdots,T(e_n))$. This is the determinant of $T$.

Why should you believe me? I'll give you two reasons.
  • If you remember that crazy determinant formula from before, or if you think hard about expansion by minors, you may realize that what a determinant does is multiply together one coordinate from each component in "every possible way", before adding them up. The sort of "expanding" that you do with multilinearity corresponds to this! (Don't worry if this doesn't catch on immediately, though this is a pretty important point! It might help to look at what's going on in the expanding I did earlier, and notice what terms stay alive... compare this to how you'd compute the determinant of $\begin{bmatrix}a & d & g\\ b & e & h \\ c & f & i\end{bmatrix}$.)
  • If you think about it, the "multilinearity rules" feel like they describe properties for area/volume scaling and increase. For example, doubling one dimension (corresponds to doubling the value of $T$ on some basis vector, say $e_1$) should double the volume (corresponds to $\psi(2T(e_1),T(e_2),\cdots,T(e_n)) = 2\psi(T(e_1),T(e_2),\cdots,T(e_n))$).
If you're satisfied with this, you can stop reading. But I'm not satisfied, because this isn't a strong enough definition yet: I can't prove cool properties of the determinant with it. In particular, it seems hard to show that $\det AB = \det A \det B$. We need to explore more...
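For some numerical reassurance before moving on, here's a sketch for $n = 3$: take $\psi$ to be the scalar triple product, which satisfies $\psi(e_1,e_2,e_3) = 1$ and never calls a determinant routine itself, and compare $\psi(T(e_1),T(e_2),T(e_3))$ against numpy's reference determinant. The choice of matrix is arbitrary:

```python
import numpy as np

def psi(u, v, w):
    # An alternating multilinear map on (R^3)^3 with psi(e1, e2, e3) = 1.
    return np.dot(u, np.cross(v, w))

T = np.array([[3.0, 1.0, 4.0],
              [1.0, 5.0, 9.0],
              [2.0, 5.0, 6.0]])

e1, e2, e3 = np.eye(3)
print(psi(T @ e1, T @ e2, T @ e3))  # -53.0
print(np.linalg.det(T))             # -53.0 (up to floating point)
```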


Part 5: The Universal Property

You're probably wondering what the word "universal" means.

Layman's Definition: Something is "universal" if it looks like everything else.
Abstractatition's Definition: A universal object in a category is an initial object. Or something. Specifically in this context we're considering the category of multilinear maps with morphisms that are linear maps, I think. idk anymore help

Let's start simple, before the notation gets a bit heavy: Let's say I have a linear map from $\mathbb{R}$ to $\mathbb{R}$. Call this map $\phi$. I'll say that $\phi$ "looks like" another linear map $\psi$ if $\alpha \circ \phi = \psi$ for some linear map $\alpha$. For example, if $\phi(x) = 2x$ and $\psi(x) = 3x$, then $\phi$ certainly looks like $\psi$. This is because I can just consider $\alpha(x) = \frac{3x}{2}$. Then $\alpha \circ \phi(x) = \frac32 \cdot 2x = 3x = \psi(x)$. In fact, I can find such an $\alpha$ no matter what linear map $\psi$ you choose (note that e.g. $\psi(x) = 2x+1$ is not linear. In fact, any linear map $\mathbb{R} \to \mathbb{R}$ takes the form $x \mapsto mx$). Because of this, $\phi$ "looks like" everybody else, so we say that $\phi$ is universal.

Not all such linear maps are universal. For example, $\phi(x) := 0$ is not a universal linear map. (Why?)

Let's now ramp it up. Suppose $\phi:(\mathbb{R}^n)^k \to \mathbb{R}^m$ and $\psi:(\mathbb{R}^n)^k \to \mathbb{R}^{m'}$ are both multilinear and alternating. We say that $\phi$ "looks like" $\psi$ if there exists a linear map $\alpha: \mathbb{R}^m \to \mathbb{R}^{m'}$ for which $\alpha \circ \phi = \psi$.

You might be able to guess what's coming.

THEOREM: Let $\phi:(\mathbb{R}^n)^n \to \mathbb{R}$ be multilinear alternating and non-zero. Then in fact, $\phi$ is universal.

By non-zero, I mean that the map doesn't just send everything to $0$. Note that we need to specify some specific "dimensions" for the domain and range of $\phi$ for this to work. In particular, the "domain dimension" is "$n \times n$" whereas the "range dimension" is $1$.

Proof. Let $\psi:(\mathbb{R}^n)^n \to \mathbb{R}^m$ be multilinear and alternating. It suffices to find a linear $\alpha: \mathbb{R} \to \mathbb{R}^m$ for which $\alpha \circ \phi = \psi$.

The strategy in this proof is to first figure out a formula for $\alpha$, assuming it exists. In other words, you should ask yourself "Hm, what would $\alpha$ have to look like?" Then, given that, we prove that this "formula" is actually valid!

Since $\phi$ is non-zero, we can find vectors $w_1,\cdots,w_n \in \mathbb{R}^n$ such that $\phi(w_1,\cdots,w_n) \neq 0$. Let $\phi(w_1,\cdots,w_n) = a_0 \in \mathbb{R}$, and let $\psi(w_1,\cdots,w_n) = b_0 \in \mathbb{R}^m$. Now think: Given $a_0$ and $b_0$, could we write a formula for $\alpha$ that should work in theory, if it had to exist? Well, $\alpha$ would need to send $a_0$ to $b_0$, so the only formula that could possibly work is $\alpha(x) = b_0x/a_0$.

Hm, does it work? To show that it works, we need to prove that $\alpha \circ \phi = \psi$. This means that we need to prove that:
$$\frac{b_0}{a_0}\phi(v_1,\cdots,v_n) = \psi(v_1,\cdots,v_n) \qquad \forall v_1,\cdots,v_n \in \mathbb{R}^n$$Or:
$$\frac{\psi(w_1,\cdots,w_n)}{\phi(w_1,\cdots,w_n)}\phi(v_1,\cdots,v_n) = \psi(v_1,\cdots,v_n) \qquad \forall v_1,\cdots,v_n \in \mathbb{R}^n$$Hm, I guess that means we just need to show this equality:
$$\psi(w_1,\cdots,w_n)\phi(v_1,\cdots,v_n) = \phi(w_1,\cdots,w_n)\psi(v_1,\cdots,v_n)$$Wait but, by that corollary we did, there exists a constant $c \in \mathbb{R}$ such that $\psi(v_1,\cdots,v_n) = c\psi(e_1,\cdots,e_n)$. And $c$ doesn't depend on which multilinear map you're using, so the same $c$ works for $\phi$. That is, $\phi(v_1,\cdots,v_n) = c\phi(e_1,\cdots,e_n)$. Similarly, there exists a constant $c' \in \mathbb{R}$ such that $\psi(w_1,\cdots,w_n) = c'\psi(e_1,\cdots,e_n)$ and $\phi(w_1,\cdots,w_n) = c'\phi(e_1,\cdots,e_n)$. Plugging this in, we want to show that:
$$c'\psi(e_1,\cdots,e_n)c\phi(e_1,\cdots,e_n) = c'\phi(e_1,\cdots,e_n)c\psi(e_1,\cdots,e_n)$$...which is true! $\square$


Part 6: The Determinant Unmasked

DEFINITION: Let $T: \mathbb{R}^n \to \mathbb{R}^n$ be a linear map. Let $\phi:(\mathbb{R}^n)^n \to \mathbb{R}$ be a non-zero multilinear alternating map. Then $\phi$ is universal. Note also that the map $\psi:(\mathbb{R}^n)^n \to \mathbb{R}$ defined by
$$\psi(v_1,\cdots,v_n) := \phi(T(v_1),\cdots,T(v_n))$$is also multilinear and alternating. Therefore, since $\phi$ is universal, there must exist a linear map $\alpha:\mathbb{R} \to \mathbb{R}$ for which $\alpha \circ \phi = \psi$. Since $\alpha:\mathbb{R} \to \mathbb{R}$, $\alpha$ must take the form $\alpha(x) = \lambda x$. Hence, $\alpha \circ \phi = \psi$ may be written as:
$$\phi(T(v_1),\cdots,T(v_n)) = \lambda \cdot \phi(v_1,\cdots,v_n)$$And this holds for all $v_1,\cdots,v_n \in \mathbb{R}^n$. Notice that $\lambda$ is a special constant that depends only on $T$. We define $\lambda$ as the determinant of $T$. It is the "scaling factor" of $\alpha$!!!
  • Notice that this is much stronger than what we had in Part 4. I'm not forced to use the measly basis vectors $e_1,\cdots,e_n$. No... this identity holds for ALL $v_1,\cdots,v_n$!
  • Note that our choice of $\phi$ didn't really matter, as long as it was non-zero, multilinear and alternating. This is because all such maps are universal.
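One more numerical sketch, this time of the stronger statement in the first bullet above: the same $\lambda$ works for completely random $v_1,v_2,v_3$, not just the basis vectors. As before, $\phi$ is the scalar triple product, and the random data is arbitrary:

```python
import numpy as np

def phi(u, v, w):
    # A non-zero alternating multilinear map on (R^3)^3.
    return np.dot(u, np.cross(v, w))

rng = np.random.default_rng(3)
T = rng.random((3, 3))

# Read off lambda = det(T) by plugging in the standard basis...
e1, e2, e3 = np.eye(3)
lam = phi(T @ e1, T @ e2, T @ e3) / phi(e1, e2, e3)

# ...and check that the SAME lambda works for arbitrary vectors.
v1, v2, v3 = rng.random(3), rng.random(3), rng.random(3)
print(np.isclose(phi(T @ v1, T @ v2, T @ v3), lam * phi(v1, v2, v3)))  # True
```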


Part 7: All Hail Abstraction

THEOREM: $\det(AB)=\det(A)\det(B)$

Proof. Take a non-zero alternating multilinear map $\phi$. Then:
$$\phi((AB)v_1,\cdots,(AB)v_n) = (\det (AB))\phi(v_1,\cdots,v_n)$$On the other hand:
$$\phi((AB)v_1,\cdots,(AB)v_n) = \phi(A(Bv_1),\cdots,A(Bv_n)) = (\det A)\phi(Bv_1,\cdots,Bv_n) = (\det A)(\det B)\phi(v_1,\cdots,v_n)$$Boom. $\square$
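A two-line numerical spot check, using numpy's determinant as the reference (the random matrices are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
A, B = rng.random((4, 4)), rng.random((4, 4))
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))  # True
```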

THEOREM: The determinant itself is a multilinear alternating map! In fact, it is the unique such map that sends the identity matrix/map to $1$.

Proof.
Take the standard basis:
$$\det A \cdot \phi(e_1,\cdots,e_n) = \phi(Ae_1,\cdots,Ae_n)$$Now view $\det$ as a map from $(\mathbb{R}^n)^n$ to $\mathbb{R}$ by simply interpreting matrices as $n$-tuples of $n$-vectors (so we view $A$ as a tuple of column vectors $(v_1,\cdots,v_n)$). Then for all $v_1,\cdots,v_n \in \mathbb{R}^n$, we have:
$$\det(v_1,\cdots,v_n) \cdot \phi(e_1,\cdots,e_n) = \phi(v_1,\cdots,v_n)$$$\phi(e_1,\cdots,e_n)$ is just some dumb constant. The moving part is $\phi(v_1,\cdots,v_n)$, and this is multilinear and alternating. Hence so is $\det$.

By plugging in $v_1=e_1$ etc. we get that $\det(e_1,\cdots,e_n) = 1$ (we're allowed to cancel $\phi(e_1,\cdots,e_n)$ here because it's non-zero: if it were zero, the theorem from Part 3 would force $\phi$ to be zero everywhere). But note that $(e_1,\cdots,e_n)$, when interpreted as a matrix, is actually the identity!

We claim that $\det$ is the only such multilinear alternating map that takes the identity matrix to $1$. To see this, let $\psi$ be another multilinear alternating map that sends the identity matrix to $1$. Since $\det$ is non-zero, it must be universal. Hence there exists $\alpha:\mathbb{R} \to \mathbb{R}$ for which $\alpha \circ \det = \psi$. But the values of $\det$ and $\psi$ agree on the matrix $I_n$ (both are $1$ there), so $\alpha$ has a non-zero fixed point. It follows that $\alpha$ is the identity linear map, so $\det = \psi$. Hence the uniqueness of $\det$ is proven. $\square$

THEOREM: Adding one column to another does not change the determinant of a matrix.

Proof. The determinant is multilinear and alternating. $\square$

THEOREM: Multiplying a column by $\lambda$ will multiply the determinant by $\lambda$.

Proof. The determinant is multilinear and alternating. $\square$

If you want, you can even recover that original horrible formula without too much hassle.

THEOREM: $\det A = \sum_{\pi \in S_n} \operatorname{sgn}(\pi)\prod_{i=1}^n a_{i,\pi(i)}$, where $A = (a_{i,j})_{i,j}$.

Proof. Just show that
$$\phi\left(\sum_{i=1}^n a_{1,i}v_i, \sum_{i=1}^n a_{2,i}v_i,\cdots,\sum_{i=1}^n a_{n,i}v_i\right) = \left(\sum_{\pi \in S_n} \operatorname{sgn}(\pi)\prod_{i=1}^n a_{i,\pi(i)}\right)\phi(v_1,\cdots,v_n).$$I leave this to you to verify. $\square$
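If you'd rather have a computer do (a numerical version of) the verifying, here's a sketch that tests the displayed identity with $\phi$ the scalar triple product and random $a_{i,j}$ and $v_i$; all of the names are mine, not the post's:

```python
import numpy as np
from itertools import permutations

def sign(pi):
    inv = sum(1 for i in range(len(pi)) for j in range(i + 1, len(pi)) if pi[i] > pi[j])
    return -1 if inv % 2 else 1

def phi(u, v, w):
    # A non-zero alternating multilinear map on (R^3)^3.
    return np.dot(u, np.cross(v, w))

rng = np.random.default_rng(5)
a = rng.random((3, 3))                 # the coefficients a_{i,j}
v = [rng.random(3) for _ in range(3)]  # the vectors v_1, v_2, v_3

lhs = phi(*[sum(a[i, j] * v[j] for j in range(3)) for i in range(3)])
leibniz = sum(sign(pi) * np.prod([a[i, pi[i]] for i in range(3)])
              for pi in permutations(range(3)))
print(np.isclose(lhs, leibniz * phi(*v)))  # True
```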

The only other major property left would probably be $\det(A^T) = \det(A)$, and to my knowledge this is best proven via this horrible formula.

Welp, that's it! Hopefully this post gets you to see the determinant in a new light.


3 Comments

I like the way you motivate multilinearity as a formalization of how oriented hypervolume changes when transforming the unit hypercube, with our intuition for this structure coming from area and the unit square as well as volume and the unit cube.

by obtuse, Nov 26, 2021, 6:51 PM

A moment of silence for all those who actually read the whole thing.

by ThinkThink, Nov 27, 2021, 5:51 PM

Wow you should compile these posts in a document and publish them someday

by peelybonehead, Nov 13, 2022, 5:07 PM
