Given a [[ring]] <math>R</math>, the '''polynomial ring''' <math>R[x]</math> is, informally, "the ring of all polynomials in a [[commutative]] indeterminate <math>x</math> with coefficients in <math>R</math>."  That is, it is the ring of all sums of the form
<cmath> \sum_{k=0}^N a_k x^k </cmath>
where <math>N</math> is a [[nonnegative integer]] that varies from sum to sum.

The ring <math>R[x]</math> is also an <math>R</math>-module.

==Formal Definition==

We can rigorously define <math>R[x]</math> to be the set of all sequences of elements of <math>R</math> with only finitely many terms nonzero:
<cmath>R[x] = \{(a_0,a_1,a_2,\ldots) \mid \text{the set } \{i \mid a_i \neq 0\} \text{ is finite}\}</cmath>
We call the elements of <math>R[x]</math> '''polynomials''' (over <math>R</math>). For a polynomial <math>p=(a_0,a_1,a_2,\ldots)</math>, the terms <math>a_0,a_1,a_2,\ldots</math> are called the '''coefficients''' of <math>p</math>.
 
For example, <math>(0,0,0,\ldots), (0,1,0,0,\ldots), (1,4,0,3,0,0,\ldots)</math> are polynomials, but <math>(1,1,1,1,\ldots)</math> is not a polynomial.
At this point, our formal definition of a polynomial may seem unrelated to our intuitive notion of a polynomial. To relate these two concepts, we introduce some notation.
We denote the polynomial <math>(a_0,a_1,a_2,\ldots)</math> by <math>a_0+a_1x+a_2x^2+\cdots</math>. For instance, we write:
<cmath>
\begin{align*}
(0,0,0,\ldots) &= 0+0x+0x^2+\cdots\\
(0,1,0,0,\ldots) &= 0+1x+0x^2+0x^3+\cdots\\
(1,4,0,3,0,0,\ldots) &= 1+4x+0x^2+3x^3+0x^4+0x^5+\cdots
\end{align*}
</cmath>
Typically, we suppress the terms with coefficient <math>0</math>, and we do not write the coefficient on terms with coefficient <math>1</math>. We also do not care about the order in which the terms are written, and indeed often list them in descending order of power. So we would write:
<cmath>
\begin{align*}
(0,0,0,\ldots) &= 0\\
(0,1,0,0,\ldots) &= x\\
(1,4,0,3,0,0,\ldots) &= 3x^3+4x+1
\end{align*}
</cmath>

We can now define addition and multiplication in <math>R[x]</math> in the canonical way:
<cmath>
\sum_i a_ix^i + \sum_i b_ix^i = \sum_i (a_i+b_i)x^i;
</cmath>
<cmath>
\biggl(\sum_i a_ix^i\biggr)\cdot \biggl(\sum_j b_jx^j\biggr) = \sum_k\biggl(\sum_{i=0}^k a_ib_{k-i}\biggr)x^k
</cmath>
It is now a simple matter to verify that <math>R[x]</math> indeed constitutes a ring under these operations, and that it is commutative when <math>R</math> is commutative. This ring has additive identity <math>0=(0,0,0,\ldots)</math> and multiplicative identity <math>1 = (1,0,0,\ldots)</math>.
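
To make the construction concrete, here is a minimal sketch in Python (an illustration only, with <math>R</math> taken to be the integers), representing a polynomial by its list of coefficients:

<pre>
# [a_0, a_1, a_2, ...] stands for a_0 + a_1 x + a_2 x^2 + ...

def poly_add(p, q):
    """Coefficientwise addition: the x^i coefficient is a_i + b_i."""
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_mul(p, q):
    """Convolution: the x^k coefficient is the sum of a_i * b_{k-i}."""
    if not p or not q:
        return []
    result = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            result[i + j] += a * b
    return result

# (1 + 4x + 3x^3) + x = 1 + 5x + 3x^3
print(poly_add([1, 4, 0, 3], [0, 1]))  # [1, 5, 0, 3]
# (1 + x)(1 + x) = 1 + 2x + x^2
print(poly_mul([1, 1], [1, 1]))        # [1, 2, 1]
</pre>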
The ring <math>R</math> can be thought of as a [[subring]] of <math>R[x]</math> via the embedding <math>r\mapsto (r,0,0,\ldots)</math>.
For a nonzero polynomial <math>p = (a_0, a_1, a_2, \dotsc)</math>, the greatest integer <math>N</math> such that <math>a_N \neq 0</math> is called the ''degree of <math>p</math>''.  It is often denoted <math>\deg(p)</math>. By convention, the degree of the zero polynomial (i.e., of the polynomial <math>0 = (0,0,0,\dotsc)</math>) is either undefined, or <math>-1</math>, or <math>-\infty</math> depending on the author.
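
Continuing the sketch above, the degree can be read off as the index of the last nonzero coefficient; here we adopt the convention that the zero polynomial has degree <math>-1</math>, one of the options just mentioned:

<pre>
def poly_deg(p):
    """Index of the last nonzero coefficient; -1 for the zero polynomial."""
    for i in range(len(p) - 1, -1, -1):
        if p[i] != 0:
            return i
    return -1

print(poly_deg([1, 4, 0, 3]))  # 3, since 1 + 4x + 3x^3 has degree 3
print(poly_deg([0, 0, 0]))     # -1 by our convention
</pre>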
== Polynomials and Functions ==
Polynomials are not [[function|functions]].  The symbol <math>x</math> does not represent a variable (in the usual sense of this word), but rather a commutative indeterminate, that is, a formal symbol that commutes with the elements of <math>R</math> and whose powers are independent of each other over <math>R</math>.  However, polynomials are associated with functions, called polynomial functions.  This is a historically important association: originally, the two concepts were almost inseparable.  Indeed, polynomial functions were almost certainly the first functions studied.  The concept of "function" was not articulated until the 12th to 14th centuries.  By Euler's time, "functions" were explicit rules of association built from elementary expressions, though Euler himself generalized the concept to what we now call continuous functions.  This began a long debate over how "function" should be defined that was not resolved until the 20th century, when the modern, abstract definition of "function" became standard. (Although the classical concept of a function as a "deterministic rule to compute an output based on an input" has survived in [[constructive mathematics]] and [[functional programming]]!)  The history of the concept of a polynomial is more obscure, but polynomials were almost certainly not divorced from their function roots until the beginnings of modern algebra in the 19th century.
Specifically, each element <math>p \in R[x]</math> is ''associated'' with a function mapping <math>R</math> into itself; this function is evaluated at a value <math>a \in R</math> by replacing the symbol <math>x</math> with the element <math>a</math> in <math>p</math>.
More formally, we can prove by [[induction]] on the degree of the elements of <math>R[x]</math> that for any <math>a\in R</math> and any <math>p \in R[x]</math>, there is a unique element of <math>R</math> that is equivalent to <math>p</math> modulo <math>(x-a)</math>.  This unique element is sometimes denoted <math>p(a)</math>.  Thus we may associate each element <math>p \in R[x]</math> with the mapping <math>a \mapsto p(a)</math> of <math>R</math> into itself.  (Alternatively, we can associate with each element <math>a\in R</math> a [[homomorphism]] of <math>R[x]</math> into <math>R</math> that is the composition of the canonical homomorphism of <math>R[x]</math> into <math>R[x]/(x-a)</math> and the canonical homomorphism of <math>R[x]/(x-a)</math> into <math>R</math>.)
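
In computational terms, the associated function <math>a \mapsto p(a)</math> can be evaluated using only the ring operations of <math>R</math>, for instance by Horner's rule. A sketch, again over the integers:

<pre>
def poly_eval(p, a):
    """Evaluate a_0 + a_1 x + ... at x = a by Horner's rule:
    (...((a_N) a + a_{N-1}) a + ...) a + a_0."""
    value = 0
    for coeff in reversed(p):
        value = value * a + coeff
    return value

# p = 1 + 4x + 3x^3 at a = 2:  1 + 8 + 24 = 33
print(poly_eval([1, 4, 0, 3], 2))  # 33
</pre>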
It is important to note that although each polynomial <math>p\in R[x]</math> is associated with a function mapping <math>R</math> into itself, it is not always possible to uniquely reconstruct <math>p</math> from this function.  In particular, if <math>R</math> is [[finite]], then the set of functions mapping <math>R</math> into itself is finite, whereas <math>R[x]</math> is [[infinite]] (unless <math>R</math> is a trivial ring), so some functions must be associated with infinitely many different polynomials.  (In fact, it follows from the theory of [[coset]]s, applied to the additive groups involved, that ''every'' function that is associated with a polynomial must be associated with infinitely many polynomials.)
For example, if <math>R</math> is the ring of [[integer]]s modulo <math>p</math>, for <math>p</math> a [[prime number | prime]], then [[Fermat's Little Theorem]] states that the distinct polynomials <math>x^p</math> and <math>x</math> are associated with the same function mapping <math>R</math> into itself.
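
This can be checked directly with the sketches above: reducing values modulo a small prime, <math>x^5</math> and <math>x</math> agree at every point of <math>\mathbb{Z}/5\mathbb{Z}</math> even though they are different polynomials.

<pre>
p = 5                        # a small prime
x_to_the_p = [0] * p + [1]   # the polynomial x^5
x_poly = [0, 1]              # the polynomial x

# Different coefficient lists, but the same function on Z/5Z:
for a in range(p):
    assert poly_eval(x_to_the_p, a) % p == poly_eval(x_poly, a) % p
print("x^5 and x agree as functions on Z/5Z")
</pre>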
Nevertheless, in many infinite rings (such as the ring of integers), this association of polynomials with functions ''is'' unique.  In such contexts, the polynomials are often identified with their functions, by abuse of language.  The association of polynomials with functions is an important one: polynomials were first studied as polynomial functions, and indeed it was not until recently that functions gained their modern definition, quite divorced from polynomials.
There is yet another reason why polynomials should not be regarded as the functions they represent. Namely, if <math>R</math> is a [[commutative ring]], then a polynomial <math>p \in R[x]</math> can be evaluated not only at an element of <math>R</math>, but also at an element <math>a</math> of any <math>R</math>-algebra <math>A</math> (by replacing every appearance of <math>x</math> by <math>a</math> in <math>p</math>). For instance, <math>p</math> can be applied to any square matrix with entries in <math>R</math>, or to any other polynomial over <math>R</math> (this results in the composite of the two polynomials), or to a linear operator on an <math>R</math>-module. This is in contrast to actual functions, which come with a predetermined domain and are not defined outside of it. Notice that the commonly used notation <math>p(x)</math> for a polynomial <math>p \in R[x]</math> is a particular case of evaluation: When one evaluates a polynomial <math>p \in R[x]</math> at the indeterminate <math>x</math>, one gets <math>p</math> back (because replacing all <math>x</math>'s by <math>x</math> in <math>p</math> does not change the polynomial). Thus, <math>p(x) = p</math> (and not just by convention).
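
As an illustration of evaluation in an <math>R</math>-algebra, the following sketch (with hypothetical helper functions for <math>2 \times 2</math> integer matrices) applies <math>p = 1 + x^2</math> to a matrix <math>M</math>: each <math>x</math> is replaced by <math>M</math>, and the constant term <math>a_0</math> is interpreted as <math>a_0</math> times the identity matrix.

<pre>
def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_scale(c, A):
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

def poly_eval_matrix(p, M):
    """Horner's rule again, now over the algebra of 2x2 matrices."""
    I = [[1, 0], [0, 1]]
    value = [[0, 0], [0, 0]]
    for coeff in reversed(p):
        value = mat_add(mat_mul(value, M), mat_scale(coeff, I))
    return value

M = [[0, 1], [1, 0]]
# p = 1 + x^2 at M: since M^2 = I, the result is 2I.
print(poly_eval_matrix([1, 0, 1], M))  # [[2, 0], [0, 2]]
</pre>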
There is a yet more general context in which polynomials are defined. Namely, one can define <math>R[x]</math> whenever <math>R</math> is an additive [[abelian group]] (not necessarily a ring). Then <math>R[x]</math> will be just an additive group, not a ring; nevertheless such constructions are occasionally of use (see, e.g., the definition of a [[loop algebra]]). The usefulness mainly stems from the fact that if <math>R</math> is an <math>S</math>-module for some commutative ring <math>S</math>, then <math>R[x]</math> becomes an <math>S[x]</math>-module. Here, again, trying to regard polynomials as functions leaves one hopelessly lost.
The above discussion was concerned with univariate polynomials (i.e., polynomials in one variable). One can define polynomials in multiple (even infinitely many) variables. The definition is similar to the above, although it requires some more technical bookkeeping, at least if one wants to keep track of variable names. There is a way to avoid some of the technicalities for finitely many variables by an inductive construction (i.e., defining a polynomial in <math>n+1</math> variables to be a polynomial in <math>1</math> variable over a polynomial ring in <math>n</math> variables), although here again some care has to be taken (when you define a polynomial in two variables, you do not want to call both of them <math>x</math>).
== Finitude of Degree ==
"Polynomials of infinite degree" are properly called [[formal power series]].  The set of formal power series over a ring <math>R</math> constitutes a ring, denoted <math>R[[x]]</math>, of which the ring of polynomials is a subring.  In general, formal power series are not associated with mappings of <math>R</math> into itself, as infinitely iterated addition is not generally well-defined unless the sum converges.
== Differential operators ==
Given a commutative ring <math>R</math>, one can define an <math>R</math>-linear map <math>\partial : R[x] \to R[x]</math> as follows:
<cmath>
\partial \left(a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots\right) = a_1 + 2 a_2 x + 3 a_3 x^2 + \cdots .
</cmath>
This operator <math>\partial</math> is called the ''(formal) derivative (with respect to <math>x</math>)'', and behaves much like the [[derivative]] of a function in analysis, and in fact commutes with the natural map from polynomials to polynomial functions (although, once again, polynomials are not per se functions). For example, the Leibniz rule <math>\partial \left(fg\right) = f \cdot \partial g + \left(\partial f\right) \cdot g</math> and the chain rule <math>\partial\left(f\left(g\right)\right) = \left(\partial f\right)\left(g\right) \cdot \left(\partial g\right)</math> hold for any two polynomials <math>f</math> and <math>g</math>. Unlike the derivative in analysis, the formal derivative does not rely on any limits or topology (in particular, <math>R</math> can be any commutative ring, not necessarily <math>\mathbb{R}</math> or <math>\mathbb{C}</math>), although it does have a property mimicking the "difference quotient" definition of the analytic derivative: If <math>f \in R[x]</math>, then the [[polynomial]] <math>f(x+y)-f(x) \in R[x,y]</math> is divisible by <math>y</math>, and evaluating the quotient <math>\dfrac{f(x+y)-f(x)}{y}</math> at <math>y = 0</math> (that is, substituting <math>0</math> for <math>y</math>) yields precisely <math>\partial f</math>. (The second variable <math>y</math> here is the analogue of the infamous <math>\varepsilon</math> in analysis, but here we need not take any limits.)
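
In the coefficient-list sketch used earlier, the formal derivative just shifts coefficients down and multiplies each by its exponent; a quick check of the Leibniz rule follows (reusing poly_add and poly_mul from above):

<pre>
def poly_deriv(p):
    """Formal derivative: a_0 + a_1 x + a_2 x^2 + ... |-> a_1 + 2 a_2 x + ..."""
    return [i * p[i] for i in range(1, len(p))]

# d/dx (1 + 4x + 3x^3) = 4 + 9x^2
print(poly_deriv([1, 4, 0, 3]))  # [4, 0, 9]

# Leibniz rule check on f = 1 + x and g = 2 + 5x^2:
f, g = [1, 1], [2, 0, 5]
lhs = poly_deriv(poly_mul(f, g))
rhs = poly_add(poly_mul(f, poly_deriv(g)), poly_mul(poly_deriv(f), g))
assert lhs == rhs  # both equal 2 + 10x + 15x^2
</pre>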
== See also ==
* [[Polynomial]]
* [[Formal power series]]
* [[Ring]]
[[Category:Ring theory]][[Category:Abstract algebra]]
