| Wedge |

by math_explorer, Sep 22, 2012, 7:31 AM

Some stuff that happened yesterday. This is even better than taking courses at a university. Whee.
Let $\vec{\textbf{v}}_1, \ldots, \vec{\textbf{v}}_m$ be elements of $\mathbb{R}^n$ where $m \leq n$. Column vectors with $n$ real elements, that is. It's a confusing convention, not to mention a typographical annoyance, that elements of $\mathbb{R}^n$ somehow end up with the components written vertically. However, it's kind of convenient when we want to paste column vectors together. Let $V = [\vec{\textbf{v}}_1, \ldots, \vec{\textbf{v}}_m]$. This means pasting the column vectors together to make an $n \times m$ matrix ($n$ rows, $m$ columns). This is confusing because it's sometimes not clear that the $\vec{\textbf{v}}_i$ are column vectors, so just to be safe I went and bolded all my vectors as well as put tiny arrows on them, which is more than either of the linear algebra textbooks I'm studying does (don't ask why there are two). Yay regexes. Normal letters are usually reals, uppercase letters are usually bigger matrices.

Seriously linear algebra notation is screwed up. One of my textbooks has $A^i$ denote the $i$th column of a matrix and the other has $A^i$ denote a matrix with its $i$th column deleted. They put the transpose sign in superscripts on opposite sides of the matrices. Rawr.

Anyway.

Consider the thing $\vec{\textbf{v}}_1 \wedge \vec{\textbf{v}}_2 \wedge \cdots \wedge \vec{\textbf{v}}_m$, which can be visualized (...well, not for $n \geq 4$ (?)) as the hyperparallelogram $\left\{\sum_{i=1}^{m} t_i\vec{\textbf{v}}_i \,\middle|\, t_i \in [0,1], i = 1, 2, \ldots, m \right\}$ but is apparently a totally different thing in a totally weird vector space of things.

Anyway, we wonder how to compute the hyperarea of this hyperparallelogram.

To get a simple idea of what we're trying to do, here's the $m = 2$ case without any advanced stuff. We have two $n$-dimensional vectors $\vec{\textbf{v}}_1$ and $\vec{\textbf{v}}_2$, and we want to consider the area of the parallelogram they bound. Well, geometrically this can be computed as \[ |\vec{\textbf{v}}_1| |\vec{\textbf{v}}_2|\sin \theta \] where $\theta$ is the angle between the two vectors. And we can get at that angle because we know the vectors' dot product satisfies \[ \vec{\textbf{v}}_1 \cdot \vec{\textbf{v}}_2 = |\vec{\textbf{v}}_1| |\vec{\textbf{v}}_2|\cos \theta. \]

So. Let the vectors' coordinates be $\vec{\textbf{v}}_1 = \langle x_1, \ldots, x_n \rangle$ and $\vec{\textbf{v}}_2 = \langle y_1, \ldots, y_n \rangle$.

\begin{align*} A &= |\vec{\textbf{v}}_1| |\vec{\textbf{v}}_2|\sin \theta \\
&= |\vec{\textbf{v}}_1| |\vec{\textbf{v}}_2|\sqrt{1 - \cos^2 \theta} \\
&= \sqrt{|\vec{\textbf{v}}_1|^2|\vec{\textbf{v}}_2|^2 - (|\vec{\textbf{v}}_1| |\vec{\textbf{v}}_2|\cos \theta)^2} \\
&= \sqrt{\sum x_i^2 \sum y_i^2 - (\sum x_i y_i)^2} \end{align*}
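
This is the kind of formula you can sanity-check numerically in a couple of lines. Here's a minimal numpy sketch (the two vectors are completely made up, and numpy is obviously not part of the derivation):

```python
import numpy as np

# two made-up vectors in R^4
v1 = np.array([1.0, 2.0, 3.0, 4.0])
v2 = np.array([0.0, 1.0, -1.0, 2.0])

# area^2 = |v1|^2 |v2|^2 - (v1 . v2)^2
area = np.sqrt(np.dot(v1, v1) * np.dot(v2, v2) - np.dot(v1, v2) ** 2)

# same thing via the angle, for comparison
cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
area_again = np.linalg.norm(v1) * np.linalg.norm(v2) * np.sqrt(1 - cos_theta ** 2)

print(area, area_again)  # both about 11.4455 = sqrt(131)
```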

Does that last expression under the square root look familiar? It should if you've ever done a nontrivial inequality. Anyway, it's equivalent to

\[ \sqrt{\sum_{i<j} (x_i y_j - x_j y_i)^2} \]

which is, interestingly,

\[ \sqrt{\sum_{i<j} \begin{vmatrix} x_i & y_i \\ x_j & y_j \end{vmatrix}^2 } \]

Those vertical bars denote the determinant, which is a really important scalar function of square matrices, if you didn't know. I don't know what sort of background I expect people who read this to have. Whatever.
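
In case the equivalence of the last two square-rooted expressions isn't obvious: this is Lagrange's identity, and expanding shows where the $2 \times 2$ determinants come from. The $i = j$ terms cancel, and each unordered pair $\{i, j\}$ contributes one perfect square:

\begin{align*} \sum_i x_i^2 \sum_j y_j^2 - \Big(\sum_i x_i y_i\Big)^2 &= \sum_{i,j} x_i^2 y_j^2 - \sum_{i,j} x_i y_i x_j y_j \\
&= \sum_{i<j} \left(x_i^2 y_j^2 + x_j^2 y_i^2 - 2 x_i y_i x_j y_j\right) \\
&= \sum_{i<j} (x_i y_j - x_j y_i)^2 \end{align*}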

Hmm. This is interesting because it suggests a general formula that even works in the other cases. Maybe we just take all combinations of $m$ rows and compute the squares of these determinants, sum them up, and take the square root of everything. Compare with $m = 1$

\[ \sqrt{\sum_{i} |x_i|^2} \]

and $m = n = 3$, where there's only one way to choose the rows,

\[ \sqrt{\begin{vmatrix} x_1 & y_1 & z_1 \\ x_2 & y_2 & z_2 \\ x_3 & y_3 & z_3 \end{vmatrix}^2 } \]

where the square root of the square of course just works out to be taking the absolute value of the familiar $3 \times 3$ determinant.
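
If you want to play with the conjecture before proving anything, it's a few lines of numpy (the function name and the test matrix are made up for illustration; the test matrix's columns are the same two vectors as before):

```python
import numpy as np
from itertools import combinations

def hyperarea_by_minors(V):
    """Conjectured formula: sqrt of the sum of the squares of all m x m
    determinants obtained by choosing m of the n rows of the n x m matrix V."""
    n, m = V.shape
    total = sum(np.linalg.det(V[list(rows), :]) ** 2
                for rows in combinations(range(n), m))
    return np.sqrt(total)

# columns are the same made-up vectors v1, v2 from earlier
V = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [3.0, -1.0],
              [4.0, 2.0]])
print(hyperarea_by_minors(V))  # about 11.4455 again, matching the m = 2 formula
```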

Anyway, the only way we know how to compute the hyperarea of a hyperparallelogram at this stage turns out to be the determinant, except determinants only exist for square matrices, so that only covers hyperparallelograms stuck in a Euclidean space of exactly their own dimension. Our vectors form an $n \times m$ matrix, unfortunately.

So instead let's take the $m$-dimensional subspace $\mathcal{S} \subset \mathbb{R}^n$ that our vectors span (assuming they don't have any linear dependencies, because if they did then the hyperparallelogram is degenerate and has area trivially $0$) and try to pretend it's the normal $\mathbb{R}^m$ we're used to. We can do this by taking a random orthonormal basis $\{\vec{\textbf{w}}_1, \vec{\textbf{w}}_2, \ldots, \vec{\textbf{w}}_m\} \subset \mathcal{S}$ of it, and express each of our vectors $\vec{\textbf{v}}_i$ as a linear combination of the basis vectors.

In other words, $\vec{\textbf{v}}_i = [\vec{\textbf{w}}_1, \vec{\textbf{w}}_2, \ldots, \vec{\textbf{w}}_m]\vec{\textbf{c}}_i$ for some $\vec{\textbf{c}}_i \in \mathbb{R}^m$, for each $i = 1, \ldots, m$. And since the basis is orthonormal, the hyperarea we want can be computed as the absolute value of the determinant of the $m \times m$ matrix whose columns are the coordinate vectors $\vec{\textbf{c}}_i$.

Our task is now to compute $\det(C := [\vec{\textbf{c}}_1, \ldots, \vec{\textbf{c}}_m])$. This is hard, but it can be LaTeXed, which means now we have something to work on. Our only lead on these $\vec{\textbf{c}}_i$ is the way we defined them: we have $V = [\vec{\textbf{w}}_1, \ldots, \vec{\textbf{w}}_m]C$, but that's not immediately useful because we have to somehow get rid of the $[\vec{\textbf{w}}_1, \vec{\textbf{w}}_2, \ldots, \vec{\textbf{w}}_m]$ next to it, and that's not even a square matrix.
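
Numerically, by the way, getting an orthonormal basis of $\mathcal{S}$ together with the coordinates isn't hard at all: a QR decomposition hands you both at once. This is just one convenient choice of basis, not something the derivation depends on, and it assumes the $\vec{\textbf{v}}_i$ are linearly independent:

```python
import numpy as np

# same made-up matrix as before; columns are v1, v2
V = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [3.0, -1.0],
              [4.0, 2.0]])

# V = Q C, where Q's columns are an orthonormal basis of the span of V's
# columns and C is the m x m coordinate matrix (numpy calls it R).
Q, C = np.linalg.qr(V)
print(abs(np.linalg.det(C)))  # the hyperarea, about 11.4455
```

But we want a formula, so back to pushing symbols around.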

We like working in $\mathbb{R}^n$ more. Luckily, there's an obvious way to create an $n \times n$ determinant that's equal to what we want: augment an $(n-m)\times(n-m)$ identity matrix diagonally onto it. So we want to compute the determinant of

\[ D = \begin{bmatrix} C & \textbf{0} \\ \textbf{0} & I_{n-m} \end{bmatrix} \]

($I_{n-m}$ being the $(n-m)\times(n-m)$ identity matrix)
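
Why this has the determinant we want: the determinant of a block-diagonal matrix is the product of the determinants of the blocks, so

\[ \det(D) = \det(C) \cdot \det(I_{n-m}) = \det(C). \]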

We can "obviously" find an orthonormal basis $\{\vec{\textbf{w}}_1, \vec{\textbf{w}}_2, \ldots, \vec{\textbf{w}}_n\}$ of the whole $\mathbb{R}^n$ that includes our orthonormal basis of $\mathcal{S}$. So let $W = [\vec{\textbf{w}}_1, \vec{\textbf{w}}_2, \ldots, \vec{\textbf{w}}_n]$.

How does the property of orthonormal-basis-ness translate into matrix language? We have $\vec{\textbf{w}}_i \cdot \vec{\textbf{w}}_i = 1$ and $\vec{\textbf{w}}_i \cdot \vec{\textbf{w}}_j = 0$ for $i \neq j$. The matrix multiplication that will involve these dot products is $W^T W$. This is not the notation either of my textbooks uses, but $W^T$ is $W$'s transpose.

Entry-by-entry, $W^T W = I_n$. So $W^{-1} = W^T$. Hmm. Since determinants are multiplicative, we immediately have $\det(W) = \det(W^T) = \pm 1$. Also, we know $[\vec{\textbf{w}}_1, \ldots, \vec{\textbf{w}}_m]C = V$, so we can compute $WD$: $WD = [\vec{\textbf{v}}_1, \ldots, \vec{\textbf{v}}_m, \vec{\textbf{w}}_{m+1}, \ldots, \vec{\textbf{w}}_n]$. Let this matrix be $G$. Is $G$ a good name for a matrix? I have no idea. The point is that $\det(G) = \det(W)\det(D) = \pm\det(C)$, so the area we want is $|\det(G)|$.
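
If you want to see a concrete $W$ and $G$, numpy's "complete" QR does exactly this basis-extension step. Again, this is just an illustrative sketch with the same made-up matrix; which particular extension you pick doesn't matter:

```python
import numpy as np

V = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [3.0, -1.0],
              [4.0, 2.0]])
n, m = V.shape

# W is n x n orthogonal; assuming V has full column rank, its first m columns
# span the same subspace as V's columns, so the last n - m columns play the
# role of w_{m+1}, ..., w_n.
W, _ = np.linalg.qr(V, mode="complete")
print(np.allclose(W.T @ W, np.eye(n)))  # True: W^T W = I_n

G = np.hstack([V, W[:, m:]])
print(abs(np.linalg.det(G)))  # the hyperarea again, about 11.4455
```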

Okay so we've gotten rid of the mysterious coordinate vectors $\vec{\textbf{c}}_i$; unfortunately we really don't know much more about the remnants of our orthonormal basis. However since we really only want the determinant of $G$ we can make the matrix smash with itself this way.

Consider $G^T G$. Its entries are the dot products of $G$'s column vectors with each other. Since the $\vec{\textbf{w}}_{m+1}, \ldots, \vec{\textbf{w}}_n$ are unit vectors orthogonal to each other and to each of the $\vec{\textbf{v}}_i$...

\[ G^T G = \begin{bmatrix}V^T V & \textbf{0} \\ \textbf{0} & I_{n-m}\end{bmatrix} \]

Taking the determinant one final time, remembering that transposing keeps it invariant, we see $\det(G)^2 = \det(G^T G) = \det(V^T V)$, so a way to compute our final answer is $\sqrt{\det(V^T V)}$.
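
So the punchline fits in one line of numpy. Here's a sketch checking that it agrees with everything above (same made-up matrix):

```python
import numpy as np

V = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [3.0, -1.0],
              [4.0, 2.0]])

print(np.sqrt(np.linalg.det(V.T @ V)))  # about 11.4455, same as all the other ways
```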

And now it seems too simple to require so much work. What the heck. College math is weird.