
In mathematics, specifically in commutative algebra, the elementary symmetric polynomials are one type of basic building block for symmetric polynomials, in the sense that any symmetric polynomial can be expressed as a polynomial in elementary symmetric polynomials. That is, any symmetric polynomial P is given by an expression involving only additions and multiplications of constants and elementary symmetric polynomials. There is one elementary symmetric polynomial of degree d in n variables for each nonnegative integer d ≤ n, and it is formed by adding together all distinct products of d distinct variables.

Definition

The elementary symmetric polynomials in n variables X1, …, Xn, written ek(X1, …, Xn) for k = 0, 1, …, n, are defined by

e_0(X_1, …, X_n) = 1,
e_1(X_1, …, X_n) = Σ_{1 ≤ j ≤ n} X_j,
e_2(X_1, …, X_n) = Σ_{1 ≤ j < k ≤ n} X_j X_k,
e_3(X_1, …, X_n) = Σ_{1 ≤ j < k < l ≤ n} X_j X_k X_l,

and so on, ending with

e_n(X_1, …, X_n) = X_1 X_2 ⋯ X_n.

In general, for k ≥ 0 we define

e_k(X_1, …, X_n) = Σ_{1 ≤ j_1 < j_2 < ⋯ < j_k ≤ n} X_{j_1} ⋯ X_{j_k},

so that ek(X1, …, Xn) = 0 if k > n.
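For concreteness, the defining sum can be evaluated numerically. The sketch below (plain Python; the helper name `e` is ours, not standard) sums the products of all k-subsets of the given values, returning 1 for k = 0 and 0 for k > n, exactly as in the definition above.

```python
from itertools import combinations
from math import prod

def e(k, xs):
    """e_k evaluated at the values xs: the sum, over all k-subsets
    of xs, of the product of their elements."""
    if k == 0:
        return 1          # the empty product: e_0 = 1
    if k > len(xs):
        return 0          # no k-subsets of n < k values exist: e_k = 0
    return sum(prod(xs[i] for i in c)
               for c in combinations(range(len(xs)), k))

# For (X1, X2, X3) = (1, 2, 3): e_2 = 1*2 + 1*3 + 2*3 = 11
```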

Thus, for each nonnegative integer k less than or equal to n there exists exactly one elementary symmetric polynomial of degree k in n variables. To form the one of degree k, we take the sum of all products of k-subsets of the n variables. (By contrast, if one performs the same operation using multisets of variables, that is, taking variables with repetition, one arrives at the complete homogeneous symmetric polynomials.)
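The parenthetical contrast is easy to make concrete: switching from subsets to multisets is exactly switching `combinations` for `combinations_with_replacement` (a sketch; the sample values are arbitrary).

```python
from itertools import combinations, combinations_with_replacement
from math import prod

xs = [2, 3, 5]
# e_2: sum over 2-subsets (distinct variables, no repetition)
e2 = sum(prod(t) for t in combinations(xs, 2))
# h_2: sum over 2-multisets (repetition allowed) gives the complete
# homogeneous symmetric polynomial instead
h2 = sum(prod(t) for t in combinations_with_replacement(xs, 2))
# e2 = 2*3 + 2*5 + 3*5 = 31
# h2 = e2 + 2*2 + 3*3 + 5*5 = 69
```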

Given an integer partition (that is, a finite non-increasing sequence of positive integers) λ = (λ1, …, λm), one defines the symmetric polynomial eλ(X1, …, Xn), also called an elementary symmetric polynomial, by

e_λ(X_1, …, X_n) = e_{λ1}(X_1, …, X_n) · e_{λ2}(X_1, …, X_n) ⋯ e_{λm}(X_1, …, X_n).

Sometimes the notation σk is used instead of ek.

Examples

The following lists the n elementary symmetric polynomials for the first four positive values of n. (In every case, e0 = 1 is also one of the polynomials.)

For n = 1:

e_1(X_1) = X_1.

For n = 2:

e_1(X_1, X_2) = X_1 + X_2,
e_2(X_1, X_2) = X_1 X_2.

For n = 3:

e_1(X_1, X_2, X_3) = X_1 + X_2 + X_3,
e_2(X_1, X_2, X_3) = X_1 X_2 + X_1 X_3 + X_2 X_3,
e_3(X_1, X_2, X_3) = X_1 X_2 X_3.

For n = 4:

e_1(X_1, X_2, X_3, X_4) = X_1 + X_2 + X_3 + X_4,
e_2(X_1, X_2, X_3, X_4) = X_1 X_2 + X_1 X_3 + X_1 X_4 + X_2 X_3 + X_2 X_4 + X_3 X_4,
e_3(X_1, X_2, X_3, X_4) = X_1 X_2 X_3 + X_1 X_2 X_4 + X_1 X_3 X_4 + X_2 X_3 X_4,
e_4(X_1, X_2, X_3, X_4) = X_1 X_2 X_3 X_4.
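Lists like the ones above can be regenerated mechanically. This sketch (the helper name `e_terms` is made up for illustration) writes out e_k for any n in the same notation:

```python
from itertools import combinations

def e_terms(k, n):
    """e_k(X1, ..., Xn) written out as a sum of monomials."""
    names = [f"X{j}" for j in range(1, n + 1)]
    return " + ".join("*".join(names[i] for i in c)
                      for c in combinations(range(n), k))

# The n = 3 case listed above:
# e_terms(1, 3) -> "X1 + X2 + X3"
# e_terms(2, 3) -> "X1*X2 + X1*X3 + X2*X3"
# e_terms(3, 3) -> "X1*X2*X3"
```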

Properties

The elementary symmetric polynomials appear when we expand a linear factorization of a monic polynomial: we have the identity

∏_{j=1}^{n} (λ − X_j) = λ^n − e_1(X_1, …, X_n) λ^{n−1} + e_2(X_1, …, X_n) λ^{n−2} − ⋯ + (−1)^n e_n(X_1, …, X_n).

That is, when we substitute numerical values for the variables X1, X2, …, Xn, we obtain the monic univariate polynomial (with variable λ) whose roots are the values substituted for X1, X2, …, Xn and whose coefficients are, up to sign, the elementary symmetric polynomials. These relations between the roots and the coefficients of a polynomial are called Vieta's formulas.
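The identity can be checked numerically by multiplying out the linear factors one at a time and comparing the coefficients against the elementary symmetric polynomials of the roots (a sketch; `monic_from_roots` and `e` are our own helper names):

```python
from itertools import combinations
from math import prod

def monic_from_roots(roots):
    """Coefficients of prod_j (lam - r_j), highest power first."""
    coeffs = [1]
    for r in roots:
        # multiply the current polynomial by (lam - r)
        coeffs = [a - r * b for a, b in zip(coeffs + [0], [0] + coeffs)]
    return coeffs

def e(k, xs):
    """Elementary symmetric polynomial e_k at the values xs."""
    return sum(prod(xs[i] for i in c)
               for c in combinations(range(len(xs)), k))

roots = [2, 3, 5]
coeffs = monic_from_roots(roots)
# (lam-2)(lam-3)(lam-5) = lam^3 - 10*lam^2 + 31*lam - 30, and the
# coefficient of lam^(n-k) is (-1)^k * e_k(2, 3, 5).
```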

The characteristic polynomial of a square matrix is an example of an application of Vieta's formulas. The roots of this polynomial are the eigenvalues of the matrix. When we substitute these eigenvalues into the elementary symmetric polynomials, we obtain, up to sign, the coefficients of the characteristic polynomial, which are invariants of the matrix. In particular, the trace (the sum of the diagonal entries) is the value of e1, and hence the sum of the eigenvalues. Likewise, the determinant is, up to sign, the constant term of the characteristic polynomial; more precisely, the determinant is the value of en. Thus the determinant of a square matrix is the product of the eigenvalues.
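As a tiny illustration (a sketch with a hand-picked 2×2 matrix), the characteristic polynomial for n = 2 is λ² − e1(r1, r2)·λ + e2(r1, r2) in the eigenvalues r1, r2, so the trace is e1 and the determinant is e2:

```python
# Hand-picked 2x2 matrix; its characteristic polynomial is
# lam^2 - trace*lam + det, i.e. lam^2 - e_1*lam + e_2 in the eigenvalues.
A = [[2, 1],
     [1, 2]]
trace = A[0][0] + A[1][1]                    # e_1 of the eigenvalues
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]  # e_2 of the eigenvalues
# Quadratic formula for the eigenvalues:
disc = (trace ** 2 - 4 * det) ** 0.5
r1, r2 = (trace - disc) / 2, (trace + disc) / 2
# r1, r2 = 1.0, 3.0: their sum is the trace, their product the determinant
```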

The set of elementary symmetric polynomials in n variables generates the ring of symmetric polynomials in n variables. More specifically, the ring of symmetric polynomials with integer coefficients equals the integral polynomial ring ℤ[e1(X1, …, Xn), …, en(X1, …, Xn)]. (See below for a more general statement and proof.) This fact is one of the foundations of invariant theory. For other systems of symmetric polynomials with a similar property, see power sum symmetric polynomials and complete homogeneous symmetric polynomials.

Fundamental theorem of symmetric polynomials

For any commutative ring A, denote the ring of symmetric polynomials in the variables X1, …, Xn with coefficients in A by A[X1, …, Xn]Sn. This is a polynomial ring in the n elementary symmetric polynomials ek(X1, …, Xn) for k = 1, …, n. (Note that e0 is not among these polynomials; since e0 = 1, it cannot be a member of any set of algebraically independent elements.)

This means that every symmetric polynomial P(X1, …, Xn) ∈ A[X1, …, Xn]Sn has a unique representation

P(X_1, …, X_n) = Q(e_1(X_1, …, X_n), …, e_n(X_1, …, X_n))

for some polynomial Q ∈ A[Y1, …, Yn]. Another way of saying the same thing is that the ring homomorphism that sends Yk to ek(X1, …, Xn) for k = 1, …, n defines an isomorphism between A[Y1, …, Yn] and A[X1, …, Xn]Sn.
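As a spot-check of the statement (not a proof), the symmetric polynomial x² + y² has the representation Q(Y1, Y2) = Y1² − 2Y2, since x² + y² = (x + y)² − 2xy = e1² − 2e2. The following sketch verifies this numerically at a few points:

```python
# x^2 + y^2 = (x + y)^2 - 2*x*y = e_1^2 - 2*e_2, i.e. Q(Y1, Y2) = Y1^2 - 2*Y2
for x, y in [(2, 5), (-1, 3), (7, 7)]:
    e1, e2 = x + y, x * y
    assert x ** 2 + y ** 2 == e1 ** 2 - 2 * e2
```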

Proof sketch

The theorem may be proved for symmetric homogeneous polynomials by a double mathematical induction with respect to the number of variables n and, for fixed n, with respect to the degree of the homogeneous polynomial. The general case then follows by splitting an arbitrary symmetric polynomial into its homogeneous components (which are again symmetric).

In the case n = 1 the result is obvious because every polynomial in one variable is automatically symmetric.

Assume now that the theorem has been proved for all symmetric polynomials in m < n variables and all symmetric polynomials in n variables of degree < d. Every homogeneous symmetric polynomial P in A[X1, …, Xn]Sn can be decomposed as a sum of two homogeneous symmetric polynomials

P(X1, …, Xn) = Placunary(X1, …, Xn) + X1 ⋯ Xn · Q(X1, …, Xn).

Here the "lacunary part" Placunary is defined as the sum of all monomials in P which contain only a proper subset of the n variables X1, …, Xn, i.e., where at least one variable Xj is missing.

Because P is symmetric, the lacunary part is determined by its terms containing only the variables X1, …, Xn − 1, i.e., which do not contain Xn. More precisely: If A and B are two homogeneous symmetric polynomials in X1, …, Xn having the same degree, and if the coefficient of A before each monomial which contains only the variables X1, …, Xn − 1 equals the corresponding coefficient of B, then A and B have equal lacunary parts. (This is because every monomial which can appear in a lacunary part must lack at least one variable, and thus can be transformed by a permutation of the variables into a monomial which contains only the variables X1, …, Xn − 1.)

But the terms of P which contain only the variables X1, …, Xn − 1 are precisely the terms that survive the operation of setting Xn to 0, so their sum equals P(X1, …, Xn − 1, 0), which is a symmetric polynomial in the variables X1, …, Xn − 1 that we shall denote by P̃(X1, …, Xn − 1). By the inductive assumption, this polynomial can be written as

P̃(X1, …, Xn − 1) = Q̃(σ1,n − 1, …, σn − 1,n − 1)

for some Q̃ ∈ A[Y1, …, Yn − 1]. Here the doubly indexed σj,n − 1 denote the elementary symmetric polynomials in n − 1 variables.

Consider now the polynomial

R(X1, …, Xn) := Q̃(σ1,n, …, σn − 1,n).

Then R(X1, …, Xn) is a symmetric polynomial in X1, …, Xn, of the same degree as Placunary, which satisfies

R(X1, …, Xn − 1, 0) = Q̃(σ1,n − 1, …, σn − 1,n − 1) = P(X1, …, Xn − 1, 0)
(the first equality holds because setting Xn to 0 in σj,n gives σj,n − 1, for all j < n). In other words, the coefficient of R before each monomial which contains only the variables X1, …, Xn − 1 equals the corresponding coefficient of P. As we know, this shows that the lacunary part of R coincides with that of the original polynomial P. Therefore the difference PR has no lacunary part, and is therefore divisible by the product X1···Xn of all variables, which equals the elementary symmetric polynomial σn,n. Then writing PR = σn,nQ, the quotient Q is a homogeneous symmetric polynomial of degree less than d (in fact degree at most dn) which by the inductive assumption can be expressed as a polynomial in the elementary symmetric functions. Combining the representations for PR and R one finds a polynomial representation for P.

The uniqueness of the representation can be proved inductively in a similar way. (It is equivalent to the fact that the n polynomials e1, …, en are algebraically independent over the ring A.) The fact that the polynomial representation is unique implies that A[X1, …, Xn]Sn is isomorphic to A[Y1, …, Yn].

Alternative proof

The following proof is also inductive, but does not involve other polynomials than those symmetric in X1, …, Xn, and also leads to a fairly direct procedure to effectively write a symmetric polynomial as a polynomial in the elementary symmetric ones. Assume the symmetric polynomial to be homogeneous of degree d; different homogeneous components can be decomposed separately. Order the monomials in the variables Xi lexicographically, where the individual variables are ordered X1 > … > Xn, in other words the dominant term of a polynomial is one with the highest occurring power of X1, and among those the one with the highest power of X2, etc. Furthermore parametrize all products of elementary symmetric polynomials that have degree d (they are in fact homogeneous) as follows by partitions of d. Order the individual elementary symmetric polynomials ei(X1, …, Xn) in the product so that those with larger indices i come first, then build for each such factor a column of i boxes, and arrange those columns from left to right to form a Young diagram containing d boxes in all. The shape of this diagram is a partition of d, and each partition λ of d arises for exactly one product of elementary symmetric polynomials, which we shall denote by eλt (X1, …, Xn) (the t is present only because traditionally this product is associated to the transpose partition of λ). The essential ingredient of the proof is the following simple property, which uses multi-index notation for monomials in the variables Xi.

Lemma. The leading term of eλt(X1, …, Xn) is X^λ.

Proof. The leading term of the product is the product of the leading terms of each factor (this is true whenever one uses a monomial order, like the lexicographic order used here), and the leading term of the factor ei(X1, …, Xn) is clearly X1X2···Xi. To count the occurrences of the individual variables in the resulting monomial, fill the column of the Young diagram corresponding to the factor concerned with the numbers 1, …, i of the variables, then all boxes in the first row contain 1, those in the second row 2, and so forth, which means the leading term is X^λ.

Now one proves by induction on the leading monomial in lexicographic order, that any nonzero homogeneous symmetric polynomial P of degree d can be written as a polynomial in the elementary symmetric polynomials. Since P is symmetric, its leading monomial has weakly decreasing exponents, so it is some X^λ with λ a partition of d. Let the coefficient of this term be c, then P − ceλt (X1, …, Xn) is either zero or a symmetric polynomial with a strictly smaller leading monomial. Writing this difference inductively as a polynomial in the elementary symmetric polynomials, and adding back ceλt (X1, …, Xn) to it, one obtains the sought-for polynomial expression for P.
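This induction is effective, and can be run directly: repeatedly strip the lex-leading term c·X^λ by subtracting c times the product of elementary symmetric polynomials whose factor sizes are the column lengths of λ's Young diagram. The sketch below implements it in pure Python, representing polynomials as dicts from exponent tuples to coefficients; all helper names (`e_poly`, `mul`, `decompose`) are made up for illustration.

```python
from itertools import combinations

def e_poly(i, n):
    """e_i(X1, ..., Xn) as {exponent tuple: coefficient}."""
    return {tuple(1 if j in c else 0 for j in range(n)): 1
            for c in combinations(range(n), i)}

def mul(p, q):
    """Product of two polynomials in dict representation."""
    r = {}
    for ea, ca in p.items():
        for eb, cb in q.items():
            key = tuple(x + y for x, y in zip(ea, eb))
            r[key] = r.get(key, 0) + ca * cb
    return {k: c for k, c in r.items() if c}

def decompose(p, n):
    """Express a symmetric polynomial as a list of
    (indices of e-factors, coefficient) pairs."""
    p = {k: c for k, c in p.items() if c}
    out = []
    while p:
        lam = max(p)   # lex-leading exponent; weakly decreasing by symmetry
        c = p[lam]
        # column lengths of the Young diagram of lam give the e-factors
        cols = tuple(sum(1 for a in lam if a >= k)
                     for k in range(1, lam[0] + 1))
        term = {(0,) * n: 1}
        for i in cols:
            term = mul(term, e_poly(i, n))
        out.append((cols, c))
        for k, v in term.items():        # p -= c * term
            p[k] = p.get(k, 0) - c * v
        p = {k: v for k, v in p.items() if v}
    return out

# x^2 + y^2 in n = 2 variables is {(2, 0): 1, (0, 2): 1};
# decompose gives [((1, 1), 1), ((2,), -2)], i.e. e1^2 - 2*e2.
```

Termination is exactly the argument in the proof: each subtraction either empties the polynomial or strictly decreases its leading monomial in lexicographic order.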

The fact that this expression is unique, or equivalently that all the products (monomials) eλt (X1, …, Xn) of elementary symmetric polynomials are linearly independent, is also easily proved. The lemma shows that all these products have different leading monomials, and this suffices: if a nontrivial linear combination of the eλt (X1, …, Xn) were zero, one focuses on the contribution in the linear combination with nonzero coefficient and with (as polynomial in the variables Xi) the largest leading monomial; the leading term of this contribution cannot be cancelled by any other contribution of the linear combination, which gives a contradiction.

See also

  • Symmetric polynomial
  • Complete homogeneous symmetric polynomial
  • Schur polynomial
  • Newton's identities
  • MacMahon Master theorem
  • Symmetric function
  • Representation theory
