== Questions about Multiple View Geometry ==
 
1) How can I learn about the history of computer vision research?  What were the most important papers, and when were they published?  Where can I find them?  Who are the leading researchers?  Why has there been a lot of progress in Multiple View Geometry in the past ten years but not before?
 
  
2) Were "circular points" introduced originally for image processing purposes, or were projective geometers interested in them before there was such a thing as image processing?  Same question for the line conic <math>C^*_\infty</math> and for the "absolute conic".
 
 
3) How can you prove that if a point conic <math>C</math> in <math>\mathbb{P}^2</math> contains an entire line, then it is degenerate (in other words, that each matrix in the equivalence class of matrices representing <math>C</math> has rank less than <math>3</math>)?
 
 
4) A conic <math>C</math> is a set of points in a projective plane <math>\pi</math>.  However, suppose <math>M</math> is a matrix that represents <math>C</math> (with respect to a certain coordinate system), so that <math>C= \{ P \in \pi</math> such that if <math>x</math> is a homogeneous coordinate vector for <math>P</math>, then <math>x^T M x = 0 \}</math>.  Then <math>M</math>, a matrix, is also called a "conic".  Isn't that a little sloppy?
 
 
5) Definition: a vector <math>x \in \mathbb{R}^3</math> is "equivalent" to a vector <math>y \in \mathbb{R}^3</math> iff there exists a nonzero <math>c \in \mathbb{R}</math> such that <math>x = cy</math>.  This defines an equivalence relation on <math>\mathbb{R}^3-\{0\}</math>, which induces a partition of <math>\mathbb{R}^3-\{0\}</math> into equivalence classes.  <math>\mathbb{P}^2</math> is the set of all such equivalence classes.  Is that a correct definition of <math>\mathbb{P}^2</math>?
 
 
It would be sloppy to say that <math>(2,3,5) \in \mathbb{P}^2</math>.  It would be more careful to say that the equivalence class <math>C_{(2,3,5)} \in \mathbb{P}^2</math>.
 
 
By this definition, it is not quite true that a point in <math>\mathbb{P}^2</math> is a line through the origin in <math>\mathbb{R}^3</math>.  The origin would be missing.
 
 
Instead of this definition, should I define <math>\mathbb{P}^2</math> to be a set of equivalence classes of elements of <math>\mathbb{C}^3-\{0\}</math>?  If not, how can it be claimed that the "circular points" belong to <math>\mathbb{P}^2</math>?
 
 
6) Is it really true that every conic <math>C</math> in <math>\mathbb{P}^2</math>, including degenerate and improper conics, has a matrix representation that is unique up to a non-zero scalar multiple?
 
 
They say that "five points determine a conic."  But is that always true, for any set of five points?  Can there be five points that are all contained in two separate conics?  Is <math>\mathbb{P}^2</math> itself really a degenerate conic (where all the coefficients are zero)?  What condition must five points satisfy in order to uniquely determine a conic?
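
One concrete way to play with the five-point question numerically (a sketch of my own, assuming NumPy is available; this is not from any reference): each point <math>(x,y)</math> imposes one linear equation on the six coefficients of <math>ax^2 + bxy + cy^2 + dx + ey + f = 0</math>, so five points give a <math>5 \times 6</math> homogeneous linear system, and the equation of a conic through them is unique up to scale exactly when that system has rank <math>5</math>.

<pre>
# Sketch: find all conics through a given set of points by computing the
# null space of the 5x6 coefficient matrix.  (My own illustration.)
import numpy as np

def conics_through(points):
    """Rows of the result are conic coefficient vectors (a, b, c, d, e, f)."""
    A = np.array([[x*x, x*y, y*y, x, y, 1.0] for (x, y) in points])
    _, s, vt = np.linalg.svd(A)
    rank = int(np.sum(s > 1e-10))
    return vt[rank:]          # basis of the null space

# Five points, no three collinear: exactly one conic (up to scale).
print(len(conics_through([(0, 0), (1, 0), (0, 1), (1, 1), (2, 3)])))  # 1

# Five points with four on a line: a 2-parameter family of conics.
print(len(conics_through([(0, 0), (1, 0), (2, 0), (3, 0), (0, 1)])))  # 2
</pre>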
 
 
== For Prof. Welch ==
 
 
(This is also for anyone else who sees this and happens to know the answer.)
 
 
Let's consider the continuous model for the Kalman filter learning tool's "sloshing" case, where the water is sloshing but not filling.  Let <math>L(t)</math> be the height of the water at time <math>t</math>.  Then
 
 
<math>L(t) = c + k_s \sin(\omega t)</math> for some constants <math>c</math> and <math>k_s</math>, perhaps <math>c=0.3</math> and <math>k_s=0.05</math>.
 
 
In the continuous model, the "state vector" at time <math>t</math> is <math>x(t) = \begin{pmatrix} L(t)\\ k_s \end{pmatrix}</math>.
 
 
<math>x'(t) = \begin{pmatrix} 0& \omega \cos(\omega t)\\ 0&0 \end{pmatrix} x(t) + w(t)</math>      (Eqn. 1)
 
 
where <math>w(t)</math> is the (random) "process error."
 
 
We need to discretize this system of ODEs.
 
 
For any <math>t_0</math>, <math>\Phi(t,t_0) = \begin{pmatrix} 1&\sin(\omega t) - \sin(\omega t_0) \\ 0&1\end{pmatrix}</math>

is a fundamental matrix for <math>x'(t)=\begin{pmatrix} 0& \omega \cos(\omega t)\\ 0&0 \end{pmatrix} x(t)</math>, and <math>\Phi(t_0,t_0)=\begin{pmatrix} 1&0\\ 0&1\end{pmatrix}</math>.  Any solution <math>x(t)</math> of Eqn. 1 can be written (by variation of parameters) as
 
 
<math>x(t) = \Phi(t,t_0)x(t_0) + \int_{t_0}^t \Phi(t,\tau)w(\tau) d \tau</math>.
 
 
Letting <math>t=t_k</math> and <math>t_0 = t_{k-1}</math>, we have that
 
 
<math>{x(t_k) = \Phi(t_k,t_{k-1})x(t_{k-1}) + \int_{t_{k-1}}^{t_k} \Phi(t_k,\tau)w(\tau) d \tau}</math>.
 
 
Thus the "discrete time state transition matrix" is
 
 
<math>A_{k-1} = \Phi(t_k, t_{k-1}) = \begin{pmatrix} 1&\sin(\omega t_k) - \sin(\omega t_{k-1}) \\ 0&1\end{pmatrix}</math>.
 
 
However, this disagrees with Equation (6) on page 4 of the document "Team 18: The Kalman Filter Learning Tool, Dynamic and Measurement Models," where it is stated that the discrete time state transition matrix is
 
 
<math>A(t) = \begin{pmatrix} 1& \omega \cos(\omega t)\\ 0&1 \end{pmatrix}</math>.
 
 
Is this a mistake in the "Team 18" document, or am I making some mistake?
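
One way I can check this numerically (a sketch of my own, assuming NumPy; the constants and times below are arbitrary illustrative choices, not from the Team 18 document): integrate the noiseless ODE directly with a fine Euler grid, and compare the result against both the closed-form solution and the prediction of the transition matrix derived above.

<pre>
# Numerical check: does Phi(t1, t0) map x(t0) to x(t1) for the noiseless
# system x'(t) = F(t) x(t), with F(t) = [[0, w*cos(w*t)], [0, 0]]?
import numpy as np

omega, c, k_s = 2.0, 0.3, 0.05      # arbitrary illustrative constants
t0, t1, n = 0.7, 1.3, 100000        # arbitrary times, fine Euler grid

def exact(t):
    # Closed-form noiseless solution: L(t) = c + k_s*sin(omega*t).
    return np.array([c + k_s * np.sin(omega * t), k_s])

x, dt = exact(t0), (t1 - t0) / n
for i in range(n):                   # forward-Euler integration
    t = t0 + i * dt
    F = np.array([[0.0, omega * np.cos(omega * t)], [0.0, 0.0]])
    x = x + dt * (F @ x)

Phi = np.array([[1.0, np.sin(omega * t1) - np.sin(omega * t0)],
                [0.0, 1.0]])
print(x)                             # direct integration
print(Phi @ exact(t0))               # transition-matrix prediction
print(exact(t1))                     # closed form; all three should agree
</pre>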
 
 
This ends my question for Prof. Welch.  Thanks.
 
 
== Personal info ==
 
 
Name: Daniel O'Connor
 
 
(full name: Daniel Verity O'Connor)
 
 
Location: Los Angeles
 
 
 
== Contributions ==
 
 
*  Created [[Chain Rule]] article
 
*  Created [[Fundamental Theorem of Calculus]] article
 
 
 
== Random Math Problems ==
 
 
The following problems are part of a discussion I'm having with someone I know.  Please don't comment about them, and most importantly please don't correct any errors or give any hints about better solutions.
 
 
----
 
Suppose you have a fair six-sided die and you're going to roll the die again and again indefinitely.  What's the expected number of rolls until a <math>1</math> comes up on the die?
 
----
 
 
The probability that it will take one roll is <math>\frac{1}{6} </math>.
 
 
The probability that it will take two rolls is <math>\left(\frac56 \right)\left(\frac16 \right) </math>.
 
 
The probability that it will take three rolls is <math>\left(\frac56 \right)^2 \left(\frac16 \right) </math>.
 
 
The probability that it will take four rolls is <math>\left(\frac56 \right)^3 \left(\frac16 \right) </math>.
 
 
And so on.
 
 
So, the expected number of rolls that it will take to get a <math>1</math> is:
 
 
<math>1\cdot \frac{1}{6} + 2\cdot \left(\frac56 \right)\left(\frac16 \right) + 3\cdot \left(\frac56 \right)^2 \left(\frac16 \right) + 4 \cdot \left(\frac56 \right)^3 \left(\frac16 \right) + \cdots</math>.
 
 
What's the sum of this infinite series?  It looks kind of like an infinite geometric series, but not exactly.  Factoring out a <math>\frac16</math> makes it look a bit more like an infinite geometric series:
 
 
<math>\left(\frac16 \right) \left(1 + 2\cdot \left(\frac56 \right) + 3\cdot \left(\frac56 \right)^2 + 4 \cdot \left(\frac56 \right)^3 + \cdots \right)</math>
 
 
This is similar to a geometric series, which we know how to sum.  But we have those annoying factors of <math>2</math>, <math>3</math>, <math>4</math>, etc. to worry about.  Maybe we could play around with the formula for a geometric series to get a formula for this series.  The formula for a geometric series is:
 
 
<math>1 + r + r^2 + r^3 + r^4 + \cdots = \frac{1}{1-r}</math> for <math>|r| < 1</math>.
 
 
Differentiating both sides of this with respect to <math>r</math> (term-by-term differentiation is valid inside the interval of convergence), we find:
 
 
<math>1 + 2r + 3r^2 + 4r^3 + \cdots = -(1-r)^{-2}(-1) = \frac{1}{(1-r)^2}</math>.
 
 
So, we find that
 
 
<math>\left(\frac16 \right) \left(1 + 2\cdot \left(\frac56 \right) + 3\cdot \left(\frac56 \right)^2 + 4 \cdot \left(\frac56 \right)^3 + \cdots \right) = \frac16 \frac{1}{(1-\frac56)^2} = \frac16 (36) = 6</math>.
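
As a quick symbolic check of the differentiated-series formula at <math>r = \frac56</math> (a sketch of my own, assuming SymPy is available):

<pre>
# Check: the sum over n >= 1 of n*(5/6)^(n-1) should equal
# 1/(1 - 5/6)^2 = 36, so the expected number of rolls is (1/6)*36 = 6.
import sympy as sp

n = sp.symbols('n', positive=True, integer=True)
r = sp.Rational(5, 6)
print(sp.summation(n * r**(n - 1), (n, 1, sp.oo)))  # 36
</pre>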
 
 
Six seems like a very reasonable answer, given that the die has six sides.
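
And here is a quick Monte Carlo sanity check of the answer (my own sketch, standard library only):

<pre>
# Estimate the expected number of rolls of a fair die until the first 1.
# The analytic answer above is 6.
import random

def rolls_until_one(rng):
    n = 1
    while rng.randint(1, 6) != 1:
        n += 1
    return n

rng = random.Random(0)
trials = 100000
print(sum(rolls_until_one(rng) for _ in range(trials)) / trials)  # ~6.0
</pre>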
 
 
 
----
 
What's the expected number of rolls it will take in order to get all six numbers at least once?
 
----
 
 
 
As a first step, let's find the expected number of rolls it will take to get both a <math>1</math> and a <math>2</math> at least once.
 
 
Let <math>n \ge 2</math> be an integer.  What's the probability that it will take exactly <math>n</math> rolls to get a <math>1</math> and a <math>2</math>?
 
 
Consider two different events: Event <math>OneFirst</math> is the event that the <math>n^{th}</math> roll is a <math>2</math>, and the first <math>n-1</math> rolls give at least one <math>1</math> and also no <math>2</math>'s.  Event <math>TwoFirst</math> is the event that the <math>n^{th}</math> roll is a <math>1</math>, and the first <math>n-1</math> rolls give at least one <math>2</math> and also no <math>1</math>'s.  The probability that it will take <math>n</math> rolls to get a <math>1</math> and a <math>2</math> is <math>P(OneFirst)+ P(TwoFirst)</math>.
 
 
What is <math>P(OneFirst)</math>?
 
 
Let <math>A</math> be the event that the <math>n^{th}</math> roll is a <math>2</math>.  Let <math>B</math> be the event that the first <math>n-1</math> rolls give at least one <math>1</math>.  Let <math>C</math> be the event that the first <math>n-1</math> rolls give no <math>2</math>'s.
 
 
<math>P(OneFirst)=P(A \cap B \cap C)</math>.
 
 
Event <math>A</math> is independent of the event <math>B \cap C</math>, so:
 
 
<math>P(A \cap (B \cap C)) = P(A)P(B \cap C)</math>.
 
 
<math>P(A)</math> is <math>\frac16</math>, but what is <math>P(B \cap C)</math>?
 
 
Events <math>B</math> and <math>C</math> aren't independent.  But we can say this:
 
 
<math>P(B \cap C)= P(C)P(B|C)= P(C)\left(1-P(B^c|C)\right)</math>.  (Here <math>B^c</math> is the complement of the event <math>B</math>.  I have used the fact that <math>P(X|Y)+P(X^c|Y)=1</math>.)
 
 
<math>P(C) = \left(\frac56 \right)^{n-1} </math>.
 
 
<math>P(B^c|C) = \frac{P(B^c \cap C)}{P(C)}= \frac{(\frac46)^{n-1}}{(\frac56)^{n-1}} </math>.
 
 
Thus, <math>P(B \cap C)= \left(\frac56 \right)^{n-1} \left(1- \frac{ (\frac46)^{n-1} }{ (\frac56)^{n-1} } \right) = \left(\frac56 \right)^{n-1} - \left(\frac46 \right)^{n-1} </math>.
 
 
So <math>P(A \cap (B \cap C) ) = \left( \frac16 \right) \left( \left(\frac56 \right)^{n-1} - \left(\frac46 \right)^{n-1} \right)</math>.
 
 
We have found <math>P(OneFirst)</math>.  We're close to finding the probability that it will take exactly <math>n</math> rolls to get both a <math>1</math> and a <math>2</math>.
 
 
<math>P(TwoFirst) = P(OneFirst)</math> by symmetry.  Thus the probability that it will take exactly <math>n</math> rolls to get both a <math>1</math> and a <math>2</math> is <math>P(OneFirst)+ P(TwoFirst) = \left( \frac13 \right) \left( \left(\frac56 \right)^{n-1} - \left(\frac46 \right)^{n-1} \right)</math>.
 
 
Okay.....
 
 
So what's the expected number of rolls it will take to get both a <math>1</math> and a <math>2</math>?
 
 
It is:
 
 
<math>{2 \cdot \left( \frac13 \right) \left( \left(\frac56 \right)^1 - \left(\frac46 \right)^1 \right) + 3 \cdot \left( \frac13 \right) \left( \left(\frac56 \right)^2 - \left(\frac46 \right)^2 \right) + 4 \cdot \left( \frac13 \right) \left( \left(\frac56 \right)^3 - \left(\frac46 \right)^3 \right) + \cdots}</math>
 
 
<math> = \left( \frac13 \right) \left( 2 \left(\frac56 \right)^1 + 3 \left(\frac56 \right)^2 + 4 \left(\frac56 \right)^3 + \cdots \right) - \left( \frac13 \right) \left( 2\left(\frac46 \right)^1 + 3 \left(\frac46 \right)^2 + 4 \left(\frac46 \right)^3 + \cdots \right) </math>
 
 
<math>= \left( \frac13 \right) \left( -1 + 1 + 2 \left(\frac56 \right)^1 + 3 \left(\frac56 \right)^2 + 4 \left(\frac56 \right)^3 + \cdots \right) - \left( \frac13 \right) \left(-1 + 1 +  2\left(\frac46 \right)^1 + 3 \left(\frac46 \right)^2 + 4 \left(\frac46 \right)^3 + \cdots \right)</math>
 
 
<math>= \left( \frac13 \right) \left( -1 + \frac{1}{(1-\frac56)^2} \right) - \left( \frac13 \right)  \left( -1 + \frac{1}{(1-\frac46)^2} \right)  </math>
 
 
<math>= \left( \frac13 \right) ( -1 + 36 + 1 - 9 ) = \left( \frac13 \right) (27) = 9</math>.
 
 
The expected number of rolls is <math>9</math>.
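
The same kind of Monte Carlo check works here (again a sketch of my own):

<pre>
# Estimate the expected number of rolls until both a 1 and a 2 have
# appeared.  The analytic answer above is 9.
import random

def rolls_until_1_and_2(rng):
    seen, n = set(), 0
    while seen != {1, 2}:
        n += 1
        roll = rng.randint(1, 6)
        if roll in (1, 2):
            seen.add(roll)
    return n

rng = random.Random(0)
trials = 100000
print(sum(rolls_until_1_and_2(rng) for _ in range(trials)) / trials)  # ~9.0
</pre>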
 
 
 
Using a similar (but a little more complicated) method, I get that the expected number of rolls until all six numbers appear is <math>14.7</math>.
 
 
I also found that the expected number of rolls until <math>1</math>, <math>2</math>, and <math>3</math> appear is <math>11</math>.
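
Both of these numbers can be checked by simulation as well (my own sketch; it estimates the expectations rather than deriving them):

<pre>
# Estimate the expected number of rolls until every value in a target set
# has appeared: ~14.7 for all six numbers, ~11 for {1, 2, 3}.
import random

def rolls_until_all(targets, rng):
    seen, n = set(), 0
    while not targets <= seen:
        n += 1
        seen.add(rng.randint(1, 6))
    return n

rng = random.Random(0)
trials = 100000
for targets in ({1, 2, 3, 4, 5, 6}, {1, 2, 3}):
    mean = sum(rolls_until_all(targets, rng) for _ in range(trials)) / trials
    print(sorted(targets), round(mean, 2))
</pre>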
 
