Structure-preserving maps, Part II: Vector spaces

by t0rajir0u, Jun 30, 2008, 7:30 AM

The theory of vector spaces (i.e. linear algebra) has a well-known motivation in high school calculations: it derives initially from the need to solve systems of linear equations. In order to give perspective on vector spaces, let me therefore approach linear algebra from the opposite extreme of abstraction. Our starting point will be the notion of a finitely-generated abelian (commutative) group.

Note that many groups are not finitely generated: $ (\mathbb{Q}^{ + }, \times)$, for example, is generated by the primes, and no finite set of generators suffices. On the other hand, cyclic groups are generated by a single element. A finitely-generated abelian group with generators $ g_1, g_2, ... g_k$ (written in additive notation) has the property that every element can be written in the form

$ n_1 g_1 + n_2 g_2 + ... + n_k g_k$

where $ n_i$ is an integer and $ n_i g_i$ denotes repeated addition (or, if $ n_i$ is negative, subtraction). Note that this representation need not be unique; if it is, such a group is called free abelian. The interesting thing we have done here is introduce a notation: $ n_i$ and $ g_i$ are being multiplied even though they do not belong to the same set ($ n_i$ is an integer and $ g_i$ is a member of some group), yet the operation has a meaning in terms of the group.

This is the notion of a module. The elements of a module $ M$ behave like an abelian group (and we'll denote the group operation by addition), but in addition there is a second set of scalars $ S$ such that "multiplication" of a member of $ M$ by a scalar gives another member of $ M$. $ S$ itself has a structure; in general it is a ring, but we won't worry about the general case; for our purposes $ S$ will be a familiar ring like the integers, which just means that you can both add and multiply.

When $ S$ also has the property that you can divide by anything that isn't zero (in other words, $ S$ is a field), the object you end up getting is a wonderful, wonderful thing called a vector space.

That was a pretty tortured chain of implication. Let's recap. A vector space requires specifying two sets: an abelian group $ V$ of vectors (additive notation) and a field $ S$ of scalars. (In the discussions that follow $ S$ will usually be either $ \mathbb{R}$ (in which case $ V$ is called a real vector space) or $ \mathbb{C}$ (in which case $ V$ is called a complex vector space).) In addition to vector addition, a scalar multiplication $ \cdot : S \times V \to V$ is required that satisfies several axioms which boil down to agreeing with your intuitive notion of scalar multiplication (stretching vectors and all that).

With that out of the way, let's talk about what we mean by structure-preserving maps between vector spaces. A vector space homomorphism (or linear map, or linear operator, or linear transformation) has the property of "linearity"; in other words, it is a group homomorphism with respect to vector addition and is compatible with scalar multiplication. Two vector spaces are isomorphic if a bijective linear map exists between them. Then we have vector space endomorphisms and automorphisms.

That's a lot of words to digest, but before I work through concrete examples as in the last post I'd like to throw a few more at you. The notion of generators is much stronger in a vector space than in a group (since most groups are neither finitely generated nor free abelian). A set of vectors $ \{\mathbf{b}_1, \mathbf{b}_2, ... \mathbf{b}_k \}$ such that every element of a vector space $ V$ can be written uniquely as a linear combination $ c_1 \mathbf{b}_1 + c_2 \mathbf{b}_2 + ... + c_k \mathbf{b}_k$, where the $ c_i$ are scalars, is called a basis, and the number $ k$ is called the dimension of the vector space, which is (surprisingly enough) independent of the choice of basis.

It follows that a given vector $ \mathbf{v} \in V$ can be uniquely represented by a $ k$-tuple of scalars $ \left < c_1, c_2, ... c_k \right >$; in other words, over a field of scalars $ S$, a $ k$-dimensional vector space is isomorphic to $ S^k$. (When $ S = \mathbb{R}$, we get Euclidean space.) Elementary linear algebra is essentially the study of linear transformations between finite-dimensional real (or complex) vector spaces.

A linear transformation between a vector space $ V_1$ of dimension $ m$ and a vector space $ V_2$ of dimension $ n$ can be represented in the following way: since a linear transformation $ \mathbf{T}(\mathbf{v})$ respects addition and scalar multiplication, $ \mathbf{T}(c_1 \mathbf{b}_1 + ... + c_m \mathbf{b}_m) = c_1 \mathbf{T}(\mathbf{b}_1) + ... + c_m \mathbf{T}(\mathbf{b}_m)$; in other words, the effect of $ \mathbf{T}$ on $ V_1$ is uniquely described by the effect of $ \mathbf{T}$ on the basis vectors of $ V_1$. The vectors $ \mathbf{T}(\mathbf{b}_i) \in V_2$ can be described in terms of the $ n$ basis vectors of $ V_2$, which are given by $ n$ scalars per vector. $ \mathbf{T}$ is therefore uniquely described by $ mn$ components; in other words, by a matrix.

Composition of linear transformations is the most natural way to motivate matrix multiplication and leads to some extremely rich ideas. The set of $ n \times n$ real (or complex) matrices (equivalently, the set of linear transformations $ \mathbb{R}^n \to \mathbb{R}^n$, or the set of linear operators) forms a ring and the set of $ n \times n$ invertible matrices (equivalently, the set of invertible linear transformations) forms a group called the general linear group. The study of representing a group as a group of matrices, i.e. group representation theory, could take up the rest of this article, but I have other things to talk about.
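(To make this concrete, here is a small numerical sketch of mine, using Python with numpy: composing two linear maps corresponds to multiplying their matrices. The particular maps are arbitrary choices, just for illustration.)

```python
import numpy as np

# Two arbitrary linear maps R^2 -> R^2, written as matrices whose columns
# are the images of the standard basis vectors.
A = np.array([[1.0, 2.0],
              [0.0, 1.0]])
B = np.array([[0.0, -1.0],
              [1.0,  0.0]])   # rotation by 90 degrees

v = np.array([3.0, 5.0])

# Applying B and then A to a vector...
composed = A @ (B @ v)
# ...is the same as applying the single matrix A @ B.
assert np.allclose(composed, (A @ B) @ v)
print(composed)   # [1. 3.]
```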

When I talked about group homomorphisms, I centered discussion on isomorphisms to show that groups that appeared to be different were in fact the same. When studying linear transformations, however, the invertible ones form such a small subset of the rich set of linear transformations that it makes more sense to consider the linear transformations themselves, rather than the vector spaces, as the underlying object of study.


Interesting things start here.

We begin with the notion of a linear recurrence describing a sequence $ \{ s_n \}$ in terms of some fixed coefficients $ a_1, a_2, ... a_k$:

$ s_n = a_{1} s_{n - 1} + a_{2} s_{n - 2} + ... + a_{k} s_{n - k}$.

Such a recurrence is called a $ k^{th}$-order homogeneous linear recurrence. Note that the set of sequences $ V$ satisfying such a recurrence is a vector space. Define the shift operator $ \mathbf{S}$ to take a sequence $ \mathbf{s} = \{ s_0, s_1, ... \}$ to the sequence $ \mathbf{S}(\mathbf{s}) = \{ s_1, s_2, ... \}$. Recall that we can define sums and compositions of linear operators, so let $ \mathbf{S}^r$ denote the shift operator applied $ r$ times. Rewrite the recurrence as

$ s_n - a_{1} s_{n - 1} - a_{2} s_{n - 2} - ... = 0$

and consider the polynomial $ P(x) = x^k - a_{1} x^{k - 1} - ...$. This polynomial is called the characteristic polynomial of the recurrence; the underlying reason why is that we can write the recurrence as

$ P(\mathbf{S})(\mathbf{s}) = \mathbf{0}$.

The above requires some explanation of notation. $ P(\mathbf{S})$ refers to the sum of linear operators $ \mathbf{S}^k - a_{1} \mathbf{S}^{k - 1} - ... - a_k \mathbf{I}$ (where $ \mathbf{I}$ is the identity operator); in other words, first we shift the sequence $ k$ times, then we subtract $ a_1$ times the sequence shifted $ k - 1$ times, then... and so forth. $ P(\mathbf{S})(\mathbf{s})$ refers to a sequence (not a single term!), that is, a vector, that is the result of applying the linear operator $ P(\mathbf{S})$ to the sequence (vector) $ \mathbf{s} = \{ s_n \}$. Note that here $ \mathbf{0}$ refers to the zero sequence $ \{ 0, 0, 0, ... \}$.

Phew! If it helps, think of a sequence as a vector with an infinite number of components; in other words, a function (on a countable set).
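(If a computation helps, here is a small Python sketch of mine, not part of the argument, of $ P(\mathbf{S})$ acting on a truncated sequence; I use the Fibonacci recurrence, so $ P(x) = x^2 - x - 1$.)

```python
# Apply P(S) = S^2 - S - I to a truncated sequence and check that a sequence
# satisfying s_n = s_{n-1} + s_{n-2} is sent to the zero sequence.

def shift(seq):
    """The shift operator S: drop the first term."""
    return seq[1:]

def P_of_S(seq):
    """P(S)(seq) for P(x) = x^2 - x - 1, on the terms where it is defined."""
    s2, s1 = shift(shift(seq)), shift(seq)
    return [s2[i] - s1[i] - seq[i] for i in range(len(s2))]

fib = [0, 1]
for _ in range(10):
    fib.append(fib[-1] + fib[-2])

print(P_of_S(fib))   # [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```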

To wit: we have rephrased the problem of finding a sequence that satisfies a homogeneous linear recurrence as the problem of finding a sequence that is taken by a particular linear operator to zero, in other words, that is in the nullspace (or kernel) of $ P(\mathbf{S})$. Recall that the kernel of a group homomorphism was itself a group; here we see, analogously, that the nullspace of a linear transformation is itself a vector space. Now what?

Well, when studying a linear operator we frequently study its eigenvectors and eigenvalues. Let's start with $ \mathbf{S}$ before studying $ P(\mathbf{S})$. The eigenvectors of $ \mathbf{S}$ are the sequences $ \{ g_k \}$ that are multiples of themselves when they are shifted; in other words, $ g_{k + 1} = \lambda g_k$, where $ \lambda$ is the eigenvalue. But this is obviously a geometric sequence with common ratio $ \lambda$!

This is good news (especially if you already know the answer to this particular problem). Recall that an eigenvector $ \mathbf{v}$ with eigenvalue $ \lambda$ (a scalar) satisfies $ \mathbf{S}(\mathbf{v}) = \lambda \mathbf{v}$, hence $ \mathbf{S}^r(\mathbf{v}) = \lambda^r \mathbf{v}$ (remember, $ \mathbf{v}$ is a vector here, which is a sequence). It follows that

$ P(\mathbf{S})(\mathbf{v}) = P(\lambda) \mathbf{v}$

for any eigenvector (that is, geometric sequence) $ \mathbf{v}$. In the case that $ P(x)$ has distinct roots $ r_1, r_2, ... r_k$ (which is the only case I want to tackle for now), the geometric sequences $ \mathbf{v}_1, \mathbf{v}_2, ... \mathbf{v}_k$ with common ratios $ r_1, r_2, ... r_k$ are in the nullspace of $ P(\mathbf{S})$ because, for each $ i$,

$ P(\mathbf{S})(\mathbf{v}_i) = P(r_i) \mathbf{v}_i = \mathbf{0}$.

There are $ k$ of these vectors and they are clearly independent. On the other hand, the vector space of sequences that satisfy a $ k^{th}$-order homogeneous linear recurrence is clearly of dimension $ k$ because it is spanned by the $ k$ sequences $ \{ 1, 0, 0, ... \}, \{ 0, 1, 0, ... \}, ...$ (where exactly one of the first $ k$ terms is $ 1$, the others are zero, and the rest are determined by the recurrence); in other words, $ P(\mathbf{S})$ has nullity $ k$, so we have described the nullspace completely. In other words, we have the following

Proposition. The set of sequences satisfying a $ k^{th}$-order homogeneous linear recurrence with characteristic polynomial $ P(x)$ with distinct roots $ r_1, r_2, ... r_k$ is given by

$ s_n = c_1 r_1^n + c_2 r_2^n + ... + c_k r_k^n$, where $ c_1, c_2, ... c_k$ are arbitrary constants.

The characteristic-equation method is one of the most general methods for proving (generalizations of) Binet's formula. Note that the restriction of $ \mathbf{S}$ to the nullspace of $ P(\mathbf{S})$ gives a linear operator whose matrix has characteristic polynomial $ P(x)$; hence in an important sense the two notions of characteristic polynomial (of matrices on the one hand and linear recurrences on the other) are the same!
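(Here is a quick numerical sanity check of the Proposition, my own sketch in Python/numpy: given the first $ k$ terms, the constants solve a Vandermonde system in the roots. The recurrence $ s_n = s_{n - 1} + 6 s_{n - 2}$, with roots $ 3$ and $ -2$, is an arbitrary choice for illustration.)

```python
import numpy as np

# Recurrence s_n = s_{n-1} + 6 s_{n-2}; characteristic polynomial
# x^2 - x - 6 = (x - 3)(x + 2), so the Proposition predicts
# s_n = c_1 * 3^n + c_2 * (-2)^n.
roots = np.array([3.0, -2.0])
s = [1.0, 2.0]                     # arbitrary initial terms s_0, s_1
for _ in range(8):
    s.append(s[-1] + 6 * s[-2])

# Solve the 2x2 Vandermonde system  sum_j c_j r_j^i = s_i  (i = 0, 1).
V = np.vander(roots, 2, increasing=True).T
c = np.linalg.solve(V, s[:2])

predicted = [sum(c[j] * roots[j] ** n for j in range(2)) for n in range(len(s))]
assert np.allclose(predicted, s)
print(c)   # [0.8 0.2]
```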


Excursion: Find an "algebraic" (that is, not involving floor functions or fractional parts) equation for a sequence of numbers with period $ n$, i.e. $ s_{k + n} = s_k$ for all $ k$.

Solution 1. Let $ \omega = e^{\frac {2 \pi i}{n} }$ denote a primitive solution to $ x^n - 1 = 0$. Then $ s_k = \sum_{i = 0}^{n - 1} a_i \omega^{ik}$ for some numbers $ a_0, ... a_{n - 1}$. To find them, we solve the system

$ s_0 = \sum_{i = 0}^{n - 1} a_i$
$ s_1 = \sum_{i = 0}^{n - 1} a_i \omega^i$
...
$ s_{n - 1} = \sum_{i = 0}^{n - 1} a_i \omega^{i(n - 1)}$.

Using the fact that $ \sum_{i = 0}^{n - 1} \omega^{ki} = 0$ when $ n \not | k$ and $ n$ otherwise, verify that

$ n a_k = \sum_{i = 0}^{n - 1} s_i \omega^{ - ik}$.

If we accept that the characteristic equation method is valid, this argument is fairly straightforward.

Solution 2. Introduce an inner product on two period-$ n$ sequences $ \mathbf{u} = \{ u_k \}, \mathbf{v} = \{ v_k \}$:

$ \left < \mathbf{u}, \mathbf{v} \right > = \frac {1}{n} \sum_{k = 0}^{n - 1} u_k \overline{v_k}$.

(Note the analogue to integration.) The sequences $ \{\omega^{ik} \}_k$ (one for each $ i = 0, 1, ... n - 1$) can then be verified to have norm $ 1$ and to be pairwise orthogonal, and therefore form an orthonormal basis. The expression above that gives $ a_k$ in terms of the $ s_i$ is therefore just the expansion of a vector in an orthonormal basis.

The operation we are doing above is important enough to have a special name: it is called the discrete Fourier transform, which is itself a linear transformation. It takes a sequence with period $ n$ (that is, $ n$ numbers) to $ n$ coefficients decomposing the sequence into waves, and is (in my opinion) the most general approach to the problem I gave in my previous post.
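(A numerical sketch of the discrete Fourier transform, my own, using Python/numpy; the periodic sequence is an arbitrary choice. Note that numpy's `fft` uses the same sign convention as the formula for $ a_k$ above, up to the factor of $ n$.)

```python
import numpy as np

n = 6
s = np.array([4.0, 1.0, 0.0, 2.0, 5.0, 3.0])   # one period of an arbitrary sequence
omega = np.exp(2j * np.pi / n)

# Coefficients from the formula  n a_k = sum_i s_i omega^(-ik).
a = np.array([sum(s[i] * omega ** (-i * k) for i in range(n)) for k in range(n)]) / n

# Reconstruct the sequence:  s_m = sum_k a_k omega^(km).
recon = np.array([sum(a[k] * omega ** (k * m) for k in range(n)) for m in range(n)])
assert np.allclose(recon, s)

# Same coefficients via numpy's built-in FFT (which omits the 1/n).
assert np.allclose(a, np.fft.fft(s) / n)
```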


There are two other rather general approaches to proving (generalizations of) Binet's formula. One is by considering the ordinary generating function $ F(x) = \sum_{k = 0}^{\infty} F_k x^k$, and the other is by considering the exponential generating function $ G(x) = \sum_{k = 0}^{\infty} \frac {F_k x^k}{k!}$. These approaches can be unified. The one thing they all have in common is a shift operator $ \mathbf{S}$.

The shift operator was an operator on sequences. The operator that assigns a generating function (of either type) to a sequence is itself a linear operator - it takes the "sequence space" to a function space - and it translates the shift operator into a different operator. On ordinary generating functions, the shift operator takes $ F(x)$ to $ \frac {F(x) - F(0)}{x}$, and on exponential generating functions, the shift operator takes $ G(x)$ to $ G'(x)$ (recall that differentiation is a linear operator)!

This is very deep. The problem of finding the nullspace of $ P(\mathbf{S})$ translates (with ordinary generating functions) to the problem of solving for $ F(x)$ by multiplying out the $ x$'s (in other words, the shift operator becomes very simple) and then computing the Taylor series of the result, and (with exponential generating functions) to a (linear homogeneous ordinary) differential equation.

Those of you who are familiar with differential equations might recall the "characteristic equations" technique for solving linear homogeneous ODEs, which is in fact exactly the above technique for solving linear homogeneous recurrences, translated to an exponential generating function. So in fact these three approaches boil down to different ways of realizing the shift operator $ \mathbf{S}$, and the three notions of characteristic equation (of a recurrence, of a linear ODE, of a linear operator) boil down to different characteristic equations of restrictions of $ \mathbf{S}$ to certain spaces.

If you're not sure what I'm talking about, let's prove Binet's formula using the above techniques.

Proof 1. With $ F(x)$ as our vector we have the recurrence

$ \mathbf{S}^2 F - \mathbf{S} F - F = 0 \implies$
$ \frac {\frac {F(x) - F(0)}{x} - F'(0)}{x} - \frac {F(x) - F(0)}{x} - F(x) = 0 \implies$
$ F(x) - F(0) - F'(0)x - F(x)x + F(0) x - F(x) x^2 = 0 \implies$
$ (1 - x - x^2) F(x) = F(0) + F'(0) x - F(0) x$.

Now, $ F(0) = F_0 = 0$ and $ F'(0) = F_1 = 1$, so

$ F(x) = \frac {x}{1 - x - x^2}$.

We then find the Taylor series of $ F(x)$ using partial fractions. Let $ \phi, \varphi$ be the roots of $ x^2 - x - 1 = 0$ as usual. Then

$ \begin{eqnarray*} F(x) & = & \frac {x}{(1 - \phi x)(1 - \varphi x)} \\ & = & \frac {1}{\phi - \varphi} \left( \frac {1}{1 - \phi x} - \frac {1}{1 - \varphi x} \right) \\ & = & \frac {1}{\phi - \varphi} \left( \sum_{k = 0}^{\infty} (\phi^k - \varphi^k) x^k \right) \\ \end{eqnarray*}$

which is exactly the answer we're looking for as usual. :) (I gave this proof some time ago in a thread on AoPS if it looks familiar. It's a standard exercise in generating functions.) Just as the eigenvectors of the shift operator on sequences were geometric sequences, here the eigenvectors are the ordinary generating functions of geometric sequences; in other words, the functions $ \frac {1}{1 - \lambda x}$. The value of this approach is that $ F(x)$ can be found algebraically and can be naturally decomposed into eigenvectors of the shift operator.
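(Here is a sympy check of this computation, my own sketch: the Taylor coefficients of $ \frac {x}{1 - x - x^2}$ are the Fibonacci numbers and agree with Binet's formula.)

```python
import sympy as sp

x = sp.symbols('x')
F = x / (1 - x - x**2)

# Taylor coefficients of the ordinary generating function.
coeffs = sp.Poly(sp.series(F, x, 0, 10).removeO(), x).all_coeffs()[::-1]
print(coeffs)   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]

# Binet's formula with the two roots of x^2 - x - 1 = 0.
phi, phibar = (1 + sp.sqrt(5)) / 2, (1 - sp.sqrt(5)) / 2
binet = [sp.expand((phi**k - phibar**k) / (phi - phibar)) for k in range(10)]
print(binet)    # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```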

Proof 2. With $ G(x)$ as our vector we have

$ \mathbf{S}^2 G - \mathbf{S} G - G = 0 \implies$
$ G'' - G' - G = 0$.

Several techniques for solving such differential equations are available; the standard method uses the fact that the roots of the characteristic equation $ r^2 - r - 1 = 0$ describe the solutions

$ G(x) = A e^{\phi x} + B e^{\varphi x}$

where the coefficients $ A, B$ can be solved for using the initial conditions (here $ G(0) = 0, G'(0) = 1$). Again, note that the eigenvectors of the differentiation operator are the exponential generating functions of geometric sequences; in other words, the functions $ e^{\lambda x}$. The proof of the characteristic equation technique is essentially the proof for linear recurrences I outlined earlier.
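(A sympy sketch of Proof 2, my own: solve the ODE with the given initial conditions and read the Fibonacci numbers off the derivatives of $ G$ at $ 0$, since $ G^{(k)}(0) = F_k$.)

```python
import sympy as sp

x = sp.symbols('x')
G = sp.Function('G')

# Solve G'' - G' - G = 0 with G(0) = 0, G'(0) = 1.
ode = sp.Eq(G(x).diff(x, 2) - G(x).diff(x) - G(x), 0)
sol = sp.dsolve(ode, G(x), ics={G(0): 0, G(x).diff(x).subs(x, 0): 1}).rhs

# Since G(x) = sum_k F_k x^k / k!, the k-th derivative at 0 is F_k.
fib = [sp.simplify(sol.diff(x, k).subs(x, 0)) for k in range(10)]
print(fib)   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```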


Another common technique for solving a second-order linear differential equation is to introduce a second function $ H = G'$. We then have the system of differential equations

$ H' = G' + G = H + G$
$ G' = H$.

Such systems are usually solved by writing them in matrix notation. Letting $ \mathbf{P} = \left[ \begin{array}{c} H \\ G \end{array} \right]$, we have

$ \mathbf{P}' = \left[ \begin{array}{cc} 1 & 1 \\ 1 & 0 \end{array} \right] \mathbf{P}$.

The matrix $ \mathbf{M} = \left[ \begin{array}{cc} 1 & 1 \\ 1 & 0 \end{array} \right]$ that appears here should be familiar to you as what I called the Fibonacci operator. The solution to the above system is given by

$ \mathbf{P} = e^{\mathbf{M} x} \mathbf{P}(0)$

where $ e^{\mathbf{A}}$ is the matrix exponential. Note that the entries of the matrix exponential here are exponential generating functions whose coefficients are exactly the Fibonacci numbers (because that's how we defined $ G$), which lets us prove the lemma about the powers of $ \mathbf{M}$ without any computation. Marvelous!
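(Here is a sympy sketch, my own, of that last claim: the entries of $ e^{\mathbf{M}x}$ are exponential generating functions, and the off-diagonal entry is $ G(x)$ itself.)

```python
import sympy as sp

x = sp.symbols('x')
M = sp.Matrix([[1, 1], [1, 0]])

# e^{Mx} = sum_k M^k x^k / k!, so each entry is an exponential generating
# function; the (0, 1) entry is G(x) = sum_k F_k x^k / k!.
expMx = (M * x).exp()
G = expMx[0, 1]

fib = [sp.simplify(G.diff(x, k).subs(x, 0)) for k in range(10)]
print(fib)   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```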

(Binet's formula can be proven from the powers of $ \mathbf{M}$ alone by eigendecomposition, which is the more general way to phrase my argument in my old post.)


There is another rather ingenious way to prove that the powers of $ \mathbf{M}$ give the Fibonacci numbers. It begins with interpreting $ \mathbf{M}$ as an adjacency matrix. An $ n \times n$ adjacency matrix describes the location of (directed) edges in a graph on $ n$ vertices; if the graph is undirected, the matrix is symmetric. Since I introduced matrices in terms of linear transformations, an adjacency matrix is a linear operator on the vector space of functions on the vertices of a graph. One way of interpreting such an operator is that it takes some numbers on each vertex that represent the number of ways of moving to that vertex (along directed edges) and returns some numbers on each vertex that represent the number of ways of moving to that vertex in one more step.

Now, $ \mathbf{M}$ is the adjacency matrix of a graph on two vertices $ A, B$ with an edge from $ A$ to itself (a loop) and an undirected edge between $ A$ and $ B$. Let's call this the "Fibonacci graph." It's well-known that the $ n^{th}$ power of an adjacency matrix gives the number of paths of length $ n$ between vertices.

The number of paths of length $ n$ from $ A$ to itself can be counted as follows: each path consists of two types of possible steps, one of which is to take the first edge from $ A$ to itself (a step of length one) and the other of which is to take the edge from $ A$ to $ B$ and back to $ A$ (a step of length two). But there is a famous counting proof that tells us that the number of paths of length $ n$ where we are allowed to take steps of length one or two is just $ F_{n + 1}$! (Briefly: let $ P_n$ be the number of paths of length $ n$. The last step is of length one or two. The rest of the path is of length $ n - 1$ or $ n - 2$, so $ P_n = P_{n - 1} + P_{n - 2}, P_1 = 1, P_2 = 2$.)

The number of paths from $ A$ to $ B$, from $ B$ to $ A$, and from $ B$ to $ B$ can be counted in a very similar manner and give exactly the four components of the powers of $ \mathbf{M}$ as seen before. Using eigendecomposition to find Binet's formula here is now a step towards understanding the structure of the Fibonacci graph; such techniques belong to the fascinating subject of spectral graph theory. Eigenvalues and eigenvectors of an adjacency matrix give information about the behavior of paths on a graph, which in turn give information about the graph itself.
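(A quick numerical check, my own sketch in Python/numpy: the powers of the adjacency matrix $ \mathbf{M}$ count these walks, and their entries are Fibonacci numbers.)

```python
import numpy as np

# M as the adjacency matrix of the "Fibonacci graph": a loop at A and an
# undirected edge between A and B.
M = np.array([[1, 1],
              [1, 0]])

# (M^n)[i, j] counts the paths of length n from vertex i to vertex j.
for n in range(1, 8):
    Mn = np.linalg.matrix_power(M, n)
    print(n, Mn[0, 0], Mn[0, 1], Mn[1, 1])   # F_{n+1}, F_n, F_{n-1}
```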


The different proofs of Binet's formula can now be seen to all be essentially the same technique. Whether we are considering the eigenvectors of the shift operator or of $ \mathbf{M}$, it's linear transformations and "eigen"-reasoning that let us manipulate what was once a mysterious sequence into a totally clear sum of geometric sequences.

But we are still not done! If a linear operator takes sequences to ordinary generating functions and another one takes sequences to exponential generating functions, how do we translate directly from ordinary to exponential and vice versa? If you noticed how the shift operator related differentiation on the one hand to division by $ x$ on the other, you might already know the answer; it is (almost) given by the Laplace transform.

Specifically, the Laplace transform of an exponential generating function $ f(x) = \sum_{k = 0}^{\infty} a_k \frac {x^k}{k!}$ is given by

$ F(s) = \sum_{k = 0}^{\infty} \frac {a_k}{s^{k + 1}}$.

Then we just change variables, letting $ y = \frac {1}{s}$, and remove the extraneous factor of $ y$ and we have the ordinary generating function of $ \{ a_k \}$!

The Laplace transform is commonly used to convert a linear ODE into an algebraic equation. If you're like me and took a differential equations course, it might have seemed unmotivated. But the intermediate step (where differentiation, the shift operator, and multiplying by $ s$ or dividing by $ x$ are all related) hopefully uncovers more of what's really going on here.
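(Here is a sympy sketch of this translation, my own, for the geometric sequence $ \{ \lambda^k \}$: its exponential generating function is $ e^{\lambda x}$, and the recipe above should hand back its ordinary generating function $ \frac {1}{1 - \lambda y}$.)

```python
import sympy as sp

x, s, y, lam = sp.symbols('x s y lam', positive=True)

# EGF of the geometric sequence {lam^k}.
egf = sp.exp(lam * x)

# Its Laplace transform: 1/(s - lam) = sum_k lam^k / s^(k+1).
L = sp.laplace_transform(egf, x, s, noconds=True)
print(L)                                  # should be 1/(s - lam)

# Substitute s = 1/y and strip the extra factor of y.
ogf = sp.cancel(L.subs(s, 1 / y) / y)
print(ogf)                                # mathematically 1/(1 - lam*y), the OGF of {lam^k}
print(sp.series(ogf, y, 0, 5))            # 1 + lam*y + lam**2*y**2 + ...
```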

We therefore have the rather interesting table of correspondences.

$ \begin{array}{|l|c|c|} \hline \text{Algebra} & \text{Linear recurrences} & \text{Linear ODEs} \\ \hline (f(x) - f(0))/x & \text{Shift operator} & \text{Differentiation} \\ 1/(1 - \lambda x) & \text{Geometric sequences} & e^{\lambda x} \\ ??? & \text{Adjacency matrix} & \text{Matrix exponential} \\ \hline \end{array}$

Ordinary generating functions take the middle column to the left, exponential generating functions take the middle column to the right, and the Laplace transform and its inverse take the left and right columns to each other.

What's the missing entry? The third row describes different interpretations of $ \mathbf{M}$. We interpreted $ \mathbf{M}$ as the adjacency matrix of a graph where its powers counted paths on a graph (that satisfy a second-order linear recurrence) and as the coefficient matrix of a system of differential equations where its powers form the matrix exponential that solves it.

Well, the left-column analogue to $ e^{\mathbf{M}x}$ is the matrix function $ \frac {1}{I_2 - \mathbf{M} x}$, where $ I_2$ is the $ 2 \times 2$ identity matrix. The determinant of this matrix is the Ihara zeta function of the Fibonacci graph, which dovetails nicely with the middle column, but it is here that I am out of my league.
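(One more sympy sketch, my own, for the left column: the entries of $ (I_2 - \mathbf{M}x)^{-1}$ are the ordinary generating functions of the entries of the powers $ \mathbf{M}^k$, and the off-diagonal entry is exactly $ \frac {x}{1 - x - x^2}$.)

```python
import sympy as sp

x = sp.symbols('x')
M = sp.Matrix([[1, 1], [1, 0]])

# (I - Mx)^{-1} = sum_k M^k x^k, so each entry is an ordinary generating
# function; the (0, 1) entry is the Fibonacci OGF x/(1 - x - x^2).
R = (sp.eye(2) - M * x).inv()
print(sp.cancel(R[0, 1]))            # mathematically x/(1 - x - x**2)
print(sp.series(R[0, 1], x, 0, 8))   # x + x**2 + 2*x**3 + 3*x**4 + 5*x**5 + ...
```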

Still, we have made a lot of progress: the different approaches to understanding the Fibonacci numbers have all come together nicely thanks to the power of linear transformations; we translate operations in one vector space to operations in another, and structures that were once opaque (differential equations) become transparent (partial fractions).


Practice Problem 1: The operation that assigns a "Newton generating function" to a sequence $ \{ a_k \}$ as $ N(x) = \sum_{k = 0}^{\infty} a_k {x \choose k}$ is a linear operator. The shift operator becomes the forward difference operator as described here and the eigenvectors are the sequences $ \sum_{k = 0}^{\infty} r^k {x \choose k} = (r + 1)^x$. What happens when we try to use this technique to prove Binet's formula?

Practice Problem 1.5: Compute $ {mn \choose 0} + {mn \choose m} + {mn \choose 2m} + ... + {mn \choose mn}$. (This is a value of a Newton generating function. What is the corresponding sequence? What is its discrete Fourier transform?) This is the "roots of unity filter."

Practice Problem 2: The operation that assigns a "Dirichlet generating function" to a sequence $ \{ a_k \}$ as $ D(s) = \sum_{n = 1}^{\infty} \frac {a_n}{n^s}$ is a linear operator. Just as the Laplace transform takes convolutions to products, this transformation takes Dirichlet convolutions to products. Using this knowledge, prove the Möbius inversion formula.

Practice Problem 3: Compute $ \left[ \begin{array}{cccccc} 0 & 1 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 1 & 0 \\
0 & 0 & 0 & 0 & 1 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
\end{array} \right]^2$

by interpreting it as an adjacency matrix. Does the resulting graph look familiar? Generalize.


2 Comments

Interesting...Here is the article that I learned linear difference equations from (at first anyway)...

I think that it has some interesting comments about the operators that you talked about here

http://compgeom.cs.uiuc.edu/~jeffe/teaching/373/notes/recurrences.pdf


Related to that last problem you posted: construct a matrix with integer entries that has eigenvalues that are the primitive nth roots of unity, an example can be motivated by graph theory...

i.e M^phi(n)=I, i don't know group theory or anything (i'm reading about it now) but I feel there is something in there that might be related...also it has interesting relations to number theory...like what is the least element such that $ p|g^k-1$, etc, I sort of have a feeling there is a lot of look at here but I don't know enough yet to interpret in the context of higher math

by Altheman, Jun 30, 2008, 9:08 AM

I can't believe it's taken me this long to realize this, but it's clearly not possible to construct such matrices in cases like $ n = 4, 6$ that would also be adjacency matrices (in other words, have non-negative coefficients), and you mean the cyclotomic polynomials. It is straightforward to construct a matrix (in fact a permutation matrix) with characteristic and minimal polynomial $ x^n - 1$ (which is in fact the companion matrix of $ x^n - 1$), and the corresponding graph is the cycle graph $ C_n$. I'm still not sure what you're getting at, though. Is there a specific connection you're thinking of beyond the general connections I've given?

(By the way, the companion matrix idea generalizes the specific things I've been doing with the Fibonacci matrix for polynomials with all non-positive coefficients except the leading one, but it's not quite enough material for a new entry :) )

by t0rajir0u, Jul 10, 2008, 2:55 AM
