10.1 Preliminaries for Symmetric Groups
Parity and Presentation of Symmetric Groups
parity
Each permutation in $S_n$ can be presented as a crossing diagram; a permutation is said to be odd (resp. even) if the number of crossings in this diagram is odd (resp. even). Define $\mathrm{sgn}(\sigma)=1$ if $\sigma$ is even and $\mathrm{sgn}(\sigma)=-1$ if $\sigma$ is odd.
Equivalently, the parity of a permutation $\sigma$ is the parity of the number of inversions in $\sigma$: pairs $(i,j)$ with $i<j$ such that $\sigma(i)>\sigma(j)$.
Define the adjacent transpositions $s_i=(i\ \, i+1)$ for all $1\leqslant i\leqslant n-1$; then $S_n$ is generated by $s_1,\cdots,s_{n-1}$.
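As a quick sanity check (the function names below are ours, not from the text), the inversion-count definition of parity agrees with counting transpositions in the cycle decomposition:

```python
from itertools import permutations

def sign_by_inversions(perm):
    """Sign of a permutation (a tuple of 0-based images) via its inversion count."""
    n = len(perm)
    inversions = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
    return 1 if inversions % 2 == 0 else -1

def sign_by_transpositions(perm):
    """Sign via the cycle decomposition: a k-cycle is a product of k-1 transpositions."""
    perm = list(perm)
    n = len(perm)
    seen = [False] * n
    transpositions = 0
    for i in range(n):
        if not seen[i]:
            j, cycle_len = i, 0
            while not seen[j]:
                seen[j] = True
                j = perm[j]
                cycle_len += 1
            transpositions += cycle_len - 1
    return 1 if transpositions % 2 == 0 else -1

# The two definitions agree on all of S_4.
assert all(sign_by_inversions(p) == sign_by_transpositions(p)
           for p in permutations(range(4)))
```

Since each adjacent transposition changes the inversion count by exactly one, this also shows $\mathrm{sgn}$ is well defined on products of the $s_i$.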
Proposition
The symmetric group $S_n$ has the following presentation
$$S_n=\left\langle s_1,\cdots,s_{n-1}\ \middle|\ s_i^2=1,\ (s_is_{i+1})^3=1,\ (s_is_j)^2=1\mbox{ for }|i-j|\geqslant 2\right\rangle.$$
\begin{proof}Denote the group defined by the presentation by $G_n$. We have a surjective homomorphism $\varphi:G_n\to S_n$, so it suffices to show $|G_n|\leqslant n!$. Note that the case of $n=2$ holds. Suppose that $|G_{n-1}|\leqslant (n-1)!$, and consider the subgroup $H\leqslant G_n$ generated by $s_1,\cdots,s_{n-2}$, where $|H|\leqslant (n-1)!$ by the induction hypothesis. Note that the collection of cosets $H,\ Hs_{n-1},\ Hs_{n-1}s_{n-2},\cdots,Hs_{n-1}\cdots s_1$ is stable under right multiplication by the generators. Therefore, $[G_n:H]\leqslant n$ and so $|G_n|\leqslant n\cdot(n-1)!=n!$.
\end{proof}Definition
Any element of $S_n$ can be written as a product of $s_i$'s, and for $\sigma=s_{i_1}\cdots s_{i_k}$, the minimum value of $k$ is called the length of $\sigma$, denoted $\ell(\sigma)$.
Cycle Types and Partitions
Definition
Any permutation can be written as a product of disjoint cycles. The cycle type is the multiset of lengths of these cycles. For example, the cycle type of $(1\,2\,3)(4\,5)$ in $S_5$ is $(3,2)$.
As two permutations with the same cycle type belong to the same conjugacy class, the conjugacy classes of $S_n$ can be described as partitions $\lambda=(\lambda_1,\cdots,\lambda_k)$ with $\lambda_1\geqslant\cdots\geqslant\lambda_k\geqslant 1$ and $\lambda_1+\cdots+\lambda_k=n$, written $\lambda\vdash n$.
There are some properties of partition functions. Write a partition $\lambda\vdash n$ in the multiplicity form $\lambda=(1^{m_1},2^{m_2},\cdots,n^{m_n})$ with $\sum_{i=1}^n im_i=n$; then we have the following property.
Proposition
The cardinality of the conjugacy class of $S_n$ corresponding to the partition $\lambda=(1^{m_1},\cdots,n^{m_n})$ is
$$\frac{n!}{\prod_{i=1}^n i^{m_i}m_i!}.$$
\begin{proof}Write the numbers $1,\cdots,n$ into the cycle pattern of $\lambda$ in all $n!$ ways, and note that
- cyclic rotations of an $i$-cycle give the same cycle, so each $i$-cycle is counted $i$ times, giving $i^{m_i}$ repetitions;
- the $m_i$ cycles of the same length $i$ can be permuted among themselves, giving $m_i!$ repetitions.
Dividing $n!$ by $\prod_i i^{m_i}m_i!$, we finish the proof.
\end{proof}For any given $n$, define $p(n)$ as the number of partitions of $n$. Then we can get its generating function.
Euler formula
The generating function for partitions is
$$\sum_{n\geqslant 0}p(n)t^n=\prod_{k\geqslant 1}\frac{1}{1-t^k},$$
where $p(n)$ is the number of partitions of $n$.
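The class-size formula and Euler's product can both be checked by brute force for small $n$ (a minimal sketch; the function names are ours):

```python
from itertools import permutations
from math import factorial
from collections import Counter

def cycle_type(perm):
    """Cycle type of a permutation of {0,...,n-1}, as a decreasing tuple of lengths."""
    seen, lengths = set(), []
    for i in range(len(perm)):
        if i not in seen:
            j, L = i, 0
            while j not in seen:
                seen.add(j)
                j = perm[j]
                L += 1
            lengths.append(L)
    return tuple(sorted(lengths, reverse=True))

def class_size(lam):
    """n! / prod_i i^{m_i} m_i!  for a partition lam given as a tuple of parts."""
    n = sum(lam)
    denom = 1
    for i, m in Counter(lam).items():
        denom *= i ** m * factorial(m)
    return factorial(n) // denom

# Brute-force check in S_4: group permutations by cycle type.
counts = Counter(cycle_type(p) for p in permutations(range(4)))
assert all(counts[lam] == class_size(lam) for lam in counts)

def partition_count(n):
    """p(0..n), read off the product prod_k 1/(1-t^k) truncated at degree n."""
    p = [1] + [0] * n               # the constant series 1
    for k in range(1, n + 1):       # multiply by 1/(1-t^k) = 1 + t^k + t^{2k} + ...
        for d in range(k, n + 1):
            p[d] += p[d - k]
    return p

assert partition_count(6) == [1, 1, 2, 3, 5, 7, 11]
```

The in-place update in `partition_count` is exactly multiplication by the geometric series $1+t^k+t^{2k}+\cdots$, one factor of Euler's product at a time.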
10.2 Tableaux, Tabloids and Specht Module
Definition
Let $\lambda=(\lambda_1,\cdots,\lambda_k)$ be a partition of $n$, then define the corresponding Young subgroup as
$$S_\lambda=S_{\{1,\cdots,\lambda_1\}}\times S_{\{\lambda_1+1,\cdots,\lambda_1+\lambda_2\}}\times\cdots\times S_{\{n-\lambda_k+1,\cdots,n\}}.$$
For the trivial representation $\mathbf 1$ of $S_\lambda$, we have that $\mathbf 1\uparrow_{S_\lambda}^{S_n}\cong M^\lambda$, the permutation module defined below.
tableau
Let $\lambda\vdash n$. A Young tableau of shape $\lambda$ (or a $\lambda$-tableau) is an array obtained by putting the numbers $1,\cdots,n$ bijectively into the Young diagram of $\lambda$.
For example, see here. Note that $S_n$ can naturally act on $\lambda$-tableaux.
Definition
Two tableaux are equivalent if they are of the same shape $\lambda$ and they have the same elements in each row. Here is an example.
tabloid
Given a tableau $t$, define the tabloid (or $\lambda$-tabloid) $\{t\}$ as the equivalence class $\{s: s\mbox{ is equivalent to }t\}$.
the Corresponding Module
Definition
For every partition $\lambda\vdash n$ the corresponding permutation module $M^\lambda$ is the linear space spanned by all tabloids of shape $\lambda$, where $S_n$ acts by $\pi\{t\}=\{\pi t\}$ and $\dim M^\lambda=\dfrac{n!}{\lambda_1!\cdots\lambda_k!}$.
Examples.
- Let $\lambda=(n)$, then $\dim M^\lambda=1$ and $M^\lambda$ is the trivial representation.
- Let $\lambda=(1^n)$, then $\dim M^\lambda=n!$ and this action is regular, as $S_n$ acting on itself.
- Let $\lambda=(n-1,1)$, then $\dim M^\lambda=n$ and this action deduces the permutation representation of $S_n$ on $\{1,\cdots,n\}$, since a tabloid of this shape is determined by the entry of its second row.
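The dimension formula for $M^\lambda$ can be checked by literally enumerating tabloids as row-equivalence classes (a minimal sketch; the names are ours):

```python
from itertools import permutations
from math import factorial

def tabloids(lam):
    """All tabloids of shape lam: a tabloid is recorded as a tuple of frozensets (its rows)."""
    n = sum(lam)
    seen = set()
    for perm in permutations(range(1, n + 1)):
        rows, start = [], 0
        for part in lam:
            rows.append(frozenset(perm[start:start + part]))
            start += part
        seen.add(tuple(rows))
    return seen

# dim M^lambda = n! / (lambda_1! ... lambda_k!)
for lam in [(2, 1), (2, 2), (3, 1)]:
    expected = factorial(sum(lam))
    for part in lam:
        expected //= factorial(part)
    assert len(tabloids(lam)) == expected
```

For $\lambda=(n-1,1)$ this count is $n$, matching the permutation representation in the last example.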
Proposition
Let $\lambda$ be a partition of $n$, and let $S_\lambda$ be the Young subgroup corresponding to $\lambda$. Then $M^\lambda\cong\mathbf 1\uparrow_{S_\lambda}^{S_n}$.
\begin{proof}Define $\theta:\mathbf 1\uparrow_{S_\lambda}^{S_n}\to M^\lambda$, $\pi\otimes 1\mapsto\pi\{t^\lambda\}$, where $\mathbf 1$ is the trivial $S_\lambda$-module and $\{t^\lambda\}$ is the tabloid whose rows are $\{1,\cdots,\lambda_1\}$, $\{\lambda_1+1,\cdots,\lambda_1+\lambda_2\}$, …. Then $\theta$ is an isomorphism of $S_n$-modules.\end{proof}the Row/Column Stabilizer and the Associated Polytabloid
Definition
Let $\lambda\vdash n$ be a partition, and let $t$ be a tableau of shape $\lambda$. Suppose $t$ has rows $R_1,\cdots,R_k$ and columns $C_1,\cdots,C_\ell$. Define subgroups of $S_n$ as $R_t=S_{R_1}\times\cdots\times S_{R_k}$ and $C_t=S_{C_1}\times\cdots\times S_{C_\ell}$, and they are called the row-stabilizer and the column-stabilizer of $t$.
Remark. With this definition, we have that $\{t\}=R_tt$.
associated polytabloid
For any subset $H\subseteq S_n$, define the following elements of the group algebra $\mathbb C[S_n]$:
$$H^+=\sum_{\pi\in H}\pi\quad\mbox{and}\quad H^-=\sum_{\pi\in H}\mathrm{sgn}(\pi)\pi,$$
where $\mathrm{sgn}(\pi)$ is the sign of $\pi$.
For any tableau $t$ with shape $\lambda\vdash n$, define
$$e_t=C_t^-\{t\}=\sum_{\pi\in C_t}\mathrm{sgn}(\pi)\,\pi\{t\},$$
where $e_t$ is called the associated polytabloid of the $\lambda$-tableau $t$.
Remark. For any $\lambda$-tableau $t$, the associated polytabloid $e_t$ is an element of the permutation module $M^\lambda$.
Example. Here is an example of a polytabloid $e_t$.
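The definition $e_t=\sum_{\pi\in C_t}\mathrm{sgn}(\pi)\,\pi\{t\}$ can be computed directly for a small shape (a minimal sketch; all names here are ours):

```python
from itertools import permutations

def columns(tab):
    """Columns of a tableau given as a list of row tuples."""
    return [tuple(row[c] for row in tab if c < len(row)) for c in range(len(tab[0]))]

def tabloid(tab):
    """Row-equivalence class of a tableau: a tuple of frozensets."""
    return tuple(frozenset(row) for row in tab)

def apply_perm(mapping, tab):
    """Apply a permutation (given as a dict) entrywise to a tableau."""
    return [tuple(mapping.get(x, x) for x in row) for row in tab]

def sign(p):
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def polytabloid(tab):
    """e_t as a dict tabloid -> coefficient, summing sgn(pi)*{pi t} over pi in C_t."""
    cols = columns(tab)
    e = {}
    def rec(i, mapping, sgn):          # iterate over C_t, column by column
        if i == len(cols):
            key = tabloid(apply_perm(mapping, tab))
            e[key] = e.get(key, 0) + sgn
            return
        col = cols[i]
        for p in permutations(range(len(col))):
            m = dict(mapping)
            m.update({col[j]: col[p[j]] for j in range(len(col))})
            rec(i + 1, m, sgn * sign(p))
    rec(0, {}, 1)
    return {k: v for k, v in e.items() if v != 0}

t = [(1, 2), (3,)]                     # shape (2,1), first column (1,3)
e_t = polytabloid(t)
# e_t = {t} - (1 3){t}
assert e_t == {tabloid([(1, 2), (3,)]): 1, tabloid([(3, 2), (1,)]): -1}
```

Here $C_t$ only permutes the first column $\{1,3\}$, so $e_t$ has exactly the two terms shown.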
Lemma
Let $t$ be a tableau and $\pi\in S_n$. Then:
- $R_{\pi t}=\pi R_t\pi^{-1}$;
- $C_{\pi t}=\pi C_t\pi^{-1}$;
- $C_{\pi t}^-=\pi C_t^-\pi^{-1}$;
- $e_{\pi t}=\pi e_t$, where $e_{\pi t}$ is the associated polytabloid of $\pi t$.
\begin{proof}Note that $\sigma\in R_{\pi t}$ iff $\sigma\pi t$ is row-equivalent to $\pi t$ iff $\pi^{-1}\sigma\pi t$ is row-equivalent to $t$ iff $\pi^{-1}\sigma\pi\in R_t$ iff $\sigma\in\pi R_t\pi^{-1}$, and the proof of ii) and iii) are similar. Furthermore, $e_{\pi t}=C_{\pi t}^-\{\pi t\}=\pi C_t^-\pi^{-1}\pi\{t\}=\pi e_t$.\end{proof}Specht Module
Definition
Let $\lambda\vdash n$. The Specht module $S^\lambda$ is the submodule of $M^\lambda$ spanned by the polytabloids $e_t$ where $t$ is of shape $\lambda$.
Remark. Since $e_{\pi t}=\pi e_t$, any $S^\lambda$ is a cyclic module (a module that is generated by a single element), i.e., it is generated by any polytabloid $e_t$.
Examples.
- Let $\lambda=(n)$. Then $C_t=\{1\}$, and $e_t=\{t\}$. Note that $S^\lambda=M^\lambda$ is spanned by the single tabloid $\{t\}$ and is the trivial representation.
- Let $\lambda=(1^n)$. Then $C_t=S_n$ and $e_t=\sum_{\pi\in S_n}\mathrm{sgn}(\pi)\{\pi t\}$. It follows that $\sigma e_t=\mathrm{sgn}(\sigma)e_t$ and $\dim S^\lambda=1$. Thus, $S^\lambda$ is isomorphic to the sign representation.
- Let $\lambda=(n-1,1)$. Then for a tableau $t$ with first column $(a,b)^T$ we have $e_t=\{t\}-(a\,b)\{t\}$. It follows that $S^\lambda=\{\sum_i c_i\{t_i\}:\sum_i c_i=0\}$ and $\dim S^\lambda=n-1$, where $\{t_i\}$ is the tabloid whose second line is $i$. Thus, $S^\lambda$ is the standard module.
10.3 Ordering of Partitions
Definition
Let $\lambda=(\lambda_1,\cdots,\lambda_k)$ and $\mu=(\mu_1,\cdots,\mu_m)$ be partitions of $n$. Then $\lambda<\mu$ in lexicographic order if for some $i$, $\lambda_j=\mu_j$ for all $j<i$ and $\lambda_i<\mu_i$. ^0ujtr4
Definition
For partitions $\lambda\vdash n$ and $\mu\vdash n$, if $\sum_{j=1}^i\lambda_j\geqslant\sum_{j=1}^i\mu_j$ for any $i$, then we say $\lambda$ dominates $\mu$, written as $\lambda\unrhd\mu$.
Example. Hasse diagram for the dominance order, see here.
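The dominance comparison is a one-liner on partial sums (a minimal sketch; `dominates` is our name):

```python
from itertools import accumulate

def dominates(lam, mu):
    """lam dominates mu: every partial sum of lam is >= the matching partial sum of mu."""
    k = max(len(lam), len(mu))
    lam = list(lam) + [0] * (k - len(lam))   # pad with zero parts
    mu = list(mu) + [0] * (k - len(mu))
    return all(a >= b for a, b in zip(accumulate(lam), accumulate(mu)))

assert dominates((4, 2), (3, 3))
# Unlike the lexicographic order, dominance is only a partial order:
assert not dominates((3, 3), (4, 1, 1)) and not dominates((4, 1, 1), (3, 3))
```

The pair $(3,3)$, $(4,1,1)$ is the smallest kind of incomparable pair one meets in the Hasse diagram for $n=6$.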
dominance lemma
Let $t$ and $s$ be tableaux of shape $\lambda$ and $\mu$ respectively. If for each index $i$, the elements of row $i$ of $s$ are in different columns of $t$, then $\lambda\unrhd\mu$. ^3e06ba
\begin{proof}We will rearrange the tableau $t$ into a new tableau $t'$ by permuting entries within each column. The elements of the first row of $s$ are all in different columns of $t$, so move each of them to the first row of its column. Continue to do this for the second row, the third row, and so on. The number of elements in the first $i$ rows of $s$ is $\mu_1+\cdots+\mu_i$, and all elements of the first $i$ rows of $s$ appear in the first $i$ rows of $t'$. Thus $\lambda_1+\cdots+\lambda_i\geqslant\mu_1+\cdots+\mu_i$ for any $i$, and so $\lambda\unrhd\mu$. As an example, see here.\end{proof}
10.4 Complete List of Irreducible Modules
Definition
For each $M^\lambda$, define an inner product as $\left\langle\{t\},\{s\}\right\rangle=\delta_{\{t\},\{s\}}$. Note that this inner product is $S_n$-invariant, i.e., $\left\langle\pi u,\pi v\right\rangle=\left\langle u,v\right\rangle$.
sign lemma
Let $H$ be a subgroup of $S_n$.
- If $\pi\in H$, then $\pi H^-=H^-\pi=\mathrm{sgn}(\pi)H^-$.
- For any $u,v\in M^\lambda$, $\left\langle H^-u,v\right\rangle=\left\langle u,H^-v\right\rangle$.
- If $(a\,b)\in H$, then $H^-=k\left(1-(a\,b)\right)$ for some $k\in\mathbb C[S_n]$.
- If $(a\,b)\in H$ and $a,b$ are in the same row of the tableau $t$, then $H^-\{t\}=0$. ^fae9c5
\begin{proof}Easy. See here.\end{proof}Corollary
Let $t$ be a tableau of shape $\lambda$ and $s$ be a tableau of shape $\mu$, where $\lambda\vdash n$ and $\mu\vdash n$. If $C_t^-\{s\}\neq 0$, then $\lambda\unrhd\mu$. Moreover, if $\lambda=\mu$ and $C_t^-\{s\}\neq 0$, then $C_t^-\{s\}=\pm e_t$. ^2fe718
\begin{proof}Suppose $a$ and $b$ are in the same row of $s$; then they belong to different columns of $t$. Otherwise $C_t^-\{s\}=0$ by ^fae9c5. Then $\lambda\unrhd\mu$ by ^3e06ba. If $\lambda=\mu$ and $C_t^-\{s\}\neq 0$, then $\{s\}=\pi\{t\}$ for some $\pi\in C_t$ (by ^fae9c5, each pair of numbers in the same column of $t$ does not appear in the same row of $s$). It follows that $C_t^-\{s\}=C_t^-\pi\{t\}=\mathrm{sgn}(\pi)e_t$ by ^fae9c5.\end{proof}Corollary
If $u\in M^\lambda$ and $t$ is a tableau of shape $\lambda$, then $C_t^-u$ is a multiple of $e_t$.
\begin{proof}Since $u$ is a linear combination of tabloids of shape $\lambda$, it follows from ^2fe718.\end{proof}submodule theorem
Let $U$ be a submodule of $M^\lambda$; then either $U\supseteq S^\lambda$ or $U\subseteq (S^\lambda)^\perp$. In particular, $S^\lambda$ is irreducible.
\begin{proof}Take $u\in U$ and a tableau $t$ of shape $\lambda$, then $C_t^-u=ce_t$ for some $c\in\mathbb C$. Suppose that there exist $u$ and $t$ such that $c\neq 0$. But then $e_t=c^{-1}C_t^-u\in U$. As $S^\lambda$ is cyclic, $e_t$ generates $S^\lambda$ and is inside of $U$, which yields that $U\supseteq S^\lambda$. Otherwise, for any tableau $t$ and $u\in U$, we have $C_t^-u=0$ and so
$$\left\langle u,e_t\right\rangle=\left\langle u,C_t^-\{t\}\right\rangle=\left\langle C_t^-u,\{t\}\right\rangle=0.$$
Therefore, $U\subseteq (S^\lambda)^\perp$.
\end{proof}Proposition
If there is a non-zero element $\theta\in\mathrm{Hom}_{S_n}(S^\lambda,M^\mu)$, then $\lambda\unrhd\mu$. If $\lambda=\mu$, then $\theta$ is multiplication by a scalar. ^376fda
\begin{proof}Since $\theta$ is non-zero, there exists a polytabloid $e_t$ such that $\theta(e_t)\neq 0$. Moreover, as $M^\lambda$ can be decomposed as $M^\lambda=S^\lambda\oplus(S^\lambda)^\perp$, we extend $\theta$ to a homomorphism $\theta:M^\lambda\to M^\mu$ by setting $\theta\left((S^\lambda)^\perp\right)=0$. Since $0\neq\theta(e_t)=\theta(C_t^-\{t\})=C_t^-\theta(\{t\})$ is a linear combination of $C_t^-\{s_i\}$ for some $\mu$-tabloids $\{s_i\}$, there is $\lambda\unrhd\mu$ by ^2fe718.
If $\lambda=\mu$, then $\theta(e_t)=C_t^-\theta(\{t\})=ce_t$ for some scalar $c$. Then for any permutation $\pi$,
$$\theta(e_{\pi t})=\theta(\pi e_t)=\pi\theta(e_t)=c\,\pi e_t=c\,e_{\pi t},$$
and so $\theta$ is multiplication by $c$.
\end{proof}Theorem
The Specht modules $S^\lambda$ for $\lambda\vdash n$ form a complete list of irreducible modules of $S_n$ over $\mathbb C$.
\begin{proof}Since the number of $S^\lambda$ equals the number of $\lambda\vdash n$ and so the number of conjugacy classes of $S_n$, it suffices to show $S^\lambda\ncong S^\mu$ for any $\lambda\neq\mu$. Otherwise, suppose that $S^\lambda\cong S^\mu$ with $\lambda\neq\mu$. Since the isomorphism is non-trivial, we have a non-zero $\theta\in\mathrm{Hom}_{S_n}(S^\lambda,M^\mu)$ and so $\lambda\unrhd\mu$ by ^376fda. Similarly we have $\mu\unrhd\lambda$ and so $\lambda=\mu$, contradiction. Now we finish the proof.
\end{proof}Corollary
The irreducible decomposition of the permutation module $M^\mu$ has the form
$$M^\mu=\bigoplus_{\lambda\unrhd\mu}K_{\lambda\mu}S^\lambda,$$
where the $K_{\lambda\mu}$ are called Kostka numbers. Moreover, $K_{\mu\mu}=1$. ^i3a31d
Here is an example.
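Kostka numbers also count semistandard tableaux of shape $\lambda$ and content $\mu$, which gives a brute-force check of the decomposition of $M^{(1,1,1)}$ for $S_3$ (a minimal sketch using that combinatorial description; the names are ours):

```python
from itertools import product

def kostka(lam, mu):
    """K_{lam,mu}: semistandard tableaux of shape lam and content mu,
    counted by brute force over all fillings (fine for small shapes)."""
    cells = [(i, j) for i, row in enumerate(lam) for j in range(row)]
    count = 0
    for filling in product(range(1, len(mu) + 1), repeat=len(cells)):
        if any(filling.count(i + 1) != m for i, m in enumerate(mu)):
            continue                      # wrong content
        T = dict(zip(cells, filling))
        # rows weakly increasing, columns strictly increasing
        ok = all(T[(i, j)] <= T[(i, j + 1)] for (i, j) in cells if (i, j + 1) in T) and \
             all(T[(i, j)] < T[(i + 1, j)] for (i, j) in cells if (i + 1, j) in T)
        count += ok
    return count

# M^(1,1,1) = S^(3) + 2 S^(2,1) + S^(1,1,1), consistent with dim 1 + 2*2 + 1 = 6 = 3!.
assert kostka((3,), (1, 1, 1)) == 1
assert kostka((2, 1), (1, 1, 1)) == 2
assert kostka((1, 1, 1), (1, 1, 1)) == 1
assert kostka((2, 1), (2, 1)) == 1        # K_{mu,mu} = 1
```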
\begin{proof}Note that $K_{\lambda\mu}=\left\langle\chi_{M^\mu},\chi_{S^\lambda}\right\rangle=\dim\mathrm{Hom}_{S_n}(S^\lambda,M^\mu)$, where $\chi_{M^\mu}$ and $\chi_{S^\lambda}$ are characters of $M^\mu$ and $S^\lambda$, respectively. If $K_{\lambda\mu}\neq 0$, then $\lambda\unrhd\mu$ by ^376fda. Furthermore, since any $\theta\in\mathrm{Hom}_{S_n}(S^\mu,M^\mu)$ is multiplication by a scalar by ^376fda, we have $K_{\mu\mu}=1$.\end{proof}
10.5 Basis of Specht Modules
Definition
A tableau $t$ is called standard if the rows and the columns of $t$ are increasing sequences. ^8pq74m
In this section, we aim to prove the following theorem. The proof is written here.
Theorem
The set $\{e_t: t\mbox{ is a standard tableau of shape }\lambda\}$ is a basis of the Specht module $S^\lambda$. ^030a4e
We call a sequence $\lambda=(\lambda_1,\cdots,\lambda_k)$ with $\sum_i\lambda_i=n$ a composition of $n$, and $\lambda_i$ is called a part of $\lambda$. (Compare with partition: a partition is a composition with $\lambda_1\geqslant\cdots\geqslant\lambda_k$.) We say that a composition $\lambda$ dominates $\mu$ if for any $i$ there is $\sum_{j\leqslant i}\lambda_j\geqslant\sum_{j\leqslant i}\mu_j$.
Suppose $\{t\}$ is a tabloid of shape $\lambda\vdash n$. For each $i\in\{1,\cdots,n\}$, define $\{t^i\}$ as the tabloid formed by all elements $\leqslant i$, and define $\lambda^i$ as the composition which is the shape of $\{t^i\}$. Here is an example:
There is a way to order tabloids of the same shape.
Definition
Let $\{t\}$ and $\{s\}$ be two tabloids of shape $\lambda$, with corresponding composition sequences $\lambda^i$ and $\mu^i$. We say that $\{t\}$ dominates $\{s\}$, denoted by $\{t\}\unrhd\{s\}$, if $\lambda^i\unrhd\mu^i$ for all $i$.
Remark. Not all tabloids can be compared. For example, take
$$\{t\}=\left\{\begin{matrix} 1 & 3 & 4\\ 2 & 5 \end{matrix}\right\}\mbox{ and } \{s\}=\left\{ \begin{matrix} 1 & 2 & 5 \\ 3 & 4 \end{matrix}\right\}.$$ Note that $\lambda^2=(1,1)\lhd \mu^2=(2,0)$ and $\lambda^4=(3,1)\rhd\mu^4=(2,2)$. > [!lemma] > > If $k<\ell$ and $k$ occurs in a lower row than $\ell$ in $\{t\}$, then $\{t\}\lhd(k,\ell)\{t\}$. ^rswrnz `\begin{proof}` Let $\lambda^i$ and $\mu^i$ be the composition sequences for $\{t\}$ and $(k,\ell)\{t\}$, respectively. If $k\leqslant i<\ell$, and suppose $\ell$, $k$ is in row $q$, $r$ of $\{t\}$ respectively (so $q<r$), then $$(\lambda^i)_q=(\mu^i)_q-1,\quad(\lambda^i)_r=(\mu^i)_r+1,$$
and so $\lambda^i\unlhd \mu^i$ for all $i$, with strict inequality for $k\leqslant i<\ell$. [[Pasted image 20241128202606.png|Here]] is an illustration. `\end{proof}` > [!corollary] > > If $v=\sum_i c_i\{t_i\}\in M^\lambda$, we say that $\{t_i\}$ appears in $v$ if $c_i\neq 0$. If $t$ is a [[10.5 Basis of Specht Modules#^8pq74m|standard tableau]] and $\{s\}$ appears in $e_t$, then $\{t\}\rhd \{s\}$. `\begin{proof}` Write $s=\pi t$ with $\pi\in C_t$. We do induction on the number of "inversions" in $s$, that is, the number of pairs $(k,\ell)$ such that $k<\ell$ and $k,\ell$ are in the same column but $k$ is in a lower row than $\ell$. Then $\{s\}\lhd (k,\ell) \{s\}$ for each inversion and thus $\{s\}\lhd \{t\}$. Here is an example. ![[Pasted image 20241125150516.png|500]] For the last term $\{s\}$, $\{1,2\}$ and $\{3,4\}$ are two inversions. By [[#^rswrnz|^rswrnz]], $\{s\}\lhd (12)\{s\}\lhd (12)(34)\{s\}=\{t\}$. `\end{proof}` Now we are ready to prove [[#^030a4e|^030a4e]]. ^goswcn `\begin{proof}` First we prove that they are linearly independent. Suppose $c_1e_{t_1}+\cdots+c_ne_{t_n}=0$, and we label $\{t_1\}$ in such a way that there is no $i>1$ with $\{t_i\}\rhd \{t_1\}$. Then by the corollary $\{t_1\}$ only appears in $e_{t_1}$ and so $c_1=0$. By induction all $c_i=0$, proving independence. Next we prove that standard polytabloids of shape $\lambda$ span $S^\lambda$. For a polytabloid $e_t$ with tableau $t$, we may assume that the columns of $t$ are increasing by replacing $e_t$ with $e_{\sigma t}=\sigma e_t=\mathrm{sgn}(\sigma)e_t$ for some suitable $\sigma\in C_t$. Then if $t$ is not standard, we will find two adjacent elements in one row with $t_{i,j}>t_{i,j+1}$. Now we aim to find a linear combination of polytabloids where this row descent is eliminated. This algorithm uses [Garnir relations](https://en.wikipedia.org/wiki/Garnir_relations), as the following example shows.
> [!note] Example > > Consider the following Young tableau $t$: > > ![[Pasted image 20241128213455.png|75]] > > There is a row descent in the second row, so we choose the subsets $A$ and $B$ as indicated. > > Consider all partitions of $A\sqcup B=A_i\sqcup B_i$ with $|A_i|=|A|$ and $|B_i|=|B|$, that is > > $$ > \{5,6\}\sqcup\{2,4\},\{4,6\}\sqcup\{2,5\},\{2,6\}\sqcup\{4,5\},\{4,5\}\sqcup\{2,6\},\{2,5\}\sqcup\{4,6\},\{2,4\}\sqcup\{5,6\}, > $$ > > and they correspond to the signed combination $t-t_2+t_3+t_4-t_5+t_6$ > > ![[Pasted image 20241128213634.png|500]] > > and the Garnir element $g_{A, B}=1-(45)+(245)+(465)-(2465)+(25)(46)$. One may [[#^791bdb|check]] $g_{A,B}e_t=0$ and so > > $$ > e_t=e_{t_2}-e_{t_3}-e_{t_4}+e_{t_5}-e_{t_6}. > $$ > > Therefore, the row descent in the second row is removed. One can repeatedly apply this procedure to straighten a polytabloid, eventually writing it as a linear combination of standard polytabloids. With this example, it suffices to show $g_{A,B}e_t=0$. > [!proposition] > > For a tableau $t$ with a row descent and the corresponding sets $A$ and $B$, we have $g_{A,B}e_t=0$. > ^791bdb `\begin{proof}` We claim that $S^-_{A\sqcup B}e_t=\{0\}$. Since $|A|+|B|$ is greater than the size of the column containing $A$, for any $\sigma\in C_t$ there exist $a,b\in A\sqcup B$ which are in the same row of $\sigma t$. Then by [[10.4 Complete List of Irreducible Modules#^fae9c5|^fae9c5]], $(ab)\in S_{A\sqcup B}$ yields that $S^-_{A\sqcup B}\{\sigma t\}=\{0\}$. Since $e_t=\sum_{\sigma\in C_t}\mathrm{sgn}\sigma\{\sigma t\}$, there is $S^-_{A\sqcup B}e_t=\{0\}$. Then consider the coset decomposition of $S_{A\sqcup B}$. Note that $$S_{A\sqcup B}=\pi_1(S_A\times S_B)\sqcup\cdots\sqcup \pi_\ell(S_A\times S_B)$$
where $g_{A,B}=\sum_{i=1}^\ell\mathrm{sgn}\,\pi_i\cdot \pi_i$. Hence, $S_{A\sqcup B}^-=g_{A,B}(S_A\times S_B)^-$. For any element $\tau\in S_A\times S_B$, $\tau e_t=\mathrm{sgn}\tau\cdot e_t$ and so $(S_A\times S_B)^- e_t=|S_A\times S_B|e_t$. Therefore, $$\{0\}=S^-_{A\sqcup B}e_t=g_{A,B}(S_A\times S_B)^- e_t=g_{A,B}|S_A\times S_B|e_t$$
yields that $g_{A,B}e_t=0$. `\end{proof}` Now we finish the proof of [[#^030a4e|^030a4e]]. `\end{proof}` Now we get a basis of the Specht module $S^\lambda$. The corresponding representation is the Young natural representation. Recall that $S_n$ is generated by the transpositions $(k\;k+1)$; denote $s_k=(k\; k+1)$. For a polytabloid $e_t$ corresponding to a standard tableau, there is - if $k$ and $k+1$ are in the same column, then $s_ke_t=-e_t$; - if $k$ and $k+1$ are in the same row, then $(k\;k+1)e_t$ will have a row descent, and we then apply the Garnir element; - if $k$ and $k+1$ are in different columns and different rows, then $(k\; k+1)e_t$ is another polytabloid corresponding to a standard tableau. Therefore, each transposition $(k\;k+1)$ induces a linear transformation on the subspace spanned by the polytabloids corresponding to standard tableaux. We refer to this representation as the *Young natural representation*. **Remark.** Every irreducible representation of $S_n$ has integer character values, in particular it is defined over $\mathbb{R}$. Hence, all representations are of [[7 Character Table, Frobenius-Schur Indicator & Complexification#^df16b3|symmetric type]].
10.6 Frobenius Character Formula
Frobenius character formula
Let $\lambda\vdash n$ with $\ell(\lambda)\leqslant k$, and let $\ell_i=\lambda_i+k-i$. The value of the character $\chi^\lambda$ on the conjugacy class given by $\rho=(\rho_1,\cdots,\rho_m)$ with $\rho_1\geqslant\cdots\geqslant\rho_m\geqslant 1$ equals the coefficient of the monomial $x_1^{\ell_1}\cdots x_k^{\ell_k}$ in the polynomial
$$\prod_{i=1}^m(x_1^{\rho_i}+\cdots+x_k^{\rho_i})\prod_{1\leqslant i<j\leqslant k}(x_i-x_j),$$
where $\ell_i=\lambda_i+k-i$ for $i=1,\cdots,k$. ^4b468a
Corollary
The dimension of the Specht module $S^\lambda$ is
$$f^\lambda=\dim S^\lambda=\frac{n!}{\ell_1!\cdots\ell_n!}\prod_{1\leqslant i<j\leqslant n}(\ell_i-\ell_j),\quad\mbox{where }\ell_i=\lambda_i+n-i.$$
\begin{proof}Note that $\dim S^\lambda=\chi^\lambda(1)$ and the identity has cycle type $(1^n)$, then by ^4b468a the dimension is the coefficient of $x_1^{\ell_1}\cdots x_n^{\ell_n}$ in $(x_1+\cdots+x_n)^n\prod_{1\leqslant i<j\leqslant n}(x_i-x_j)$. By the Vandermonde determinant,
$$\left|\begin{matrix} x_1^{n-1} & x_2^{n-1} & \cdots & x_n^{n-1} \\ \vdots & \vdots & & \vdots \\ x_1 & x_2 & \cdots & x_n \\ 1 & 1 & \cdots & 1\end{matrix}\right|=\prod_{1\leqslant i< j\leqslant n}(x_i-x_j)$$ and so the coefficient of $x_1^{\ell_1}\cdots x_n^{\ell_n}$ in $$(x_1+\cdots+x_n)^n\prod_{1\leqslant i<j\leqslant n}(x_i-x_j)=\left(\sum_{k_1+\cdots+k_n=n}\frac{n!}{k_1!\cdots k_n!}x_1^{k_1}\cdots x_n^{k_n}\right)\left(\sum_{\sigma\in S_n}\mathrm{sgn}\sigma \cdot x_1^{n-\sigma(1)}x_2^{n-\sigma(2)}\cdots x_n^{n-\sigma(n)}\right)$$
is $$\sum_{\sigma\in S_n}\mathrm{sgn}\sigma\frac{n!}{(\ell_1-n+\sigma(1))!\cdots(\ell_n-n+\sigma(n))!}=n!\det\left|\begin{matrix} \dfrac{1}{(\ell_1-n+1)!} & \cdots & \dfrac{1}{(\ell_n-n+1)!} \\ \vdots & & \vdots \\ \dfrac{1}{\ell_1!} & \cdots & \dfrac{1}{\ell_n!} \end{matrix}\right|=\frac{n!}{\ell_1!\cdots\ell_n!}\prod_{1\leqslant i<j\leqslant n}(\ell_i-\ell_j).$$ Now we finish the proof. `\end{proof}` # Hook Length Formula For a Young diagram $\lambda$ and the box in the $i$th row, $j$th column, define $h(i,j)=\lambda_i+\lambda_j'-i-j+1$, where $\lambda_j'$ is the length of the $j$th column. The hook length formula tells us that $$\dim S^\lambda=f^\lambda=\frac{n!}{\prod_{(i,j)\in\lambda}h(i,j)}.$$
[[Pasted image 20241202154117.png|Here]] is an example, and [[List 4 (Oral Exam)#^xmv91w|here]] is a proof. # Young-Fibonacci Lattice The number of standard tableaux for $\lambda$ is the number of paths from $\emptyset$ to the diagram of $\lambda$ in the Young lattice. Here is an example. ![[Pasted image 20241204154913.png|300]]
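The hook length formula can be confirmed against a brute-force count of standard tableaux for small shapes (a minimal sketch; the names are ours):

```python
from math import factorial
from itertools import permutations

def hook_length_dim(lam):
    """f^lam = n! / prod of hook lengths, hook = arm + leg + 1 (0-based indices)."""
    n = sum(lam)
    conj = [sum(1 for row in lam if row > j) for j in range(lam[0])]  # column lengths
    prod = 1
    for i, row in enumerate(lam):
        for j in range(row):
            prod *= (row - j) + (conj[j] - i) - 1
    return factorial(n) // prod

def count_standard(lam):
    """Brute-force count of standard Young tableaux of shape lam."""
    cells = [(i, j) for i, row in enumerate(lam) for j in range(row)]
    count = 0
    for perm in permutations(range(1, len(cells) + 1)):
        T = dict(zip(cells, perm))
        if all(T[(i, j)] < T[(i, j + 1)] for (i, j) in cells if (i, j + 1) in T) and \
           all(T[(i, j)] < T[(i + 1, j)] for (i, j) in cells if (i + 1, j) in T):
            count += 1
    return count

for lam in [(2, 1), (3, 2), (2, 2, 1), (4, 1)]:
    assert hook_length_dim(lam) == count_standard(lam)
assert hook_length_dim((3, 2)) == 5
```

By the basis theorem, `count_standard(lam)` is exactly $\dim S^\lambda$, so this is also a check of the dimension corollary above.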
10.7 Branching Rule
Definition
A box of $\lambda$ is called removable if its removal leaves the diagram of a partition, which is denoted by $\lambda^-$.
Definition
A box is called addable to $\lambda$ if the union of $\lambda$ and this box is the diagram of a partition, which is denoted by $\lambda^+$.
Lemma
We have $\dim S^\lambda=\sum_{\lambda^-}\dim S^{\lambda^-}$.
\begin{proof}Recall that a basis of $S^\lambda$ is given by the polytabloids of standard tableaux. Notice every standard tableau of shape $\lambda$ consists of $n$ placed in some removable box together with a standard tableau of shape $\lambda^-$, and then we finish the proof.\end{proof}branching rule
If $\lambda$ is a partition of $n$, then
$$S^\lambda\downarrow_{S_{n-1}}=\bigoplus_{\lambda^-}S^{\lambda^-}\mbox{ and }S^\lambda\uparrow^{S_{n+1}}=\bigoplus_{\lambda^+}S^{\lambda^+}. $$ ^5voxbj
\begin{proof}Suppose that the removable boxes appear in rows $r_1<\cdots<r_k$; for each $i$, denote by $\lambda^{-i}$ the diagram obtained when you remove the box in row $r_i$. For a standard tableau $t$ with $n$ in row $r_i$, denote by $t^i$ the tableau obtained by removing the box with $n$. Since $S_{n-1}$ is a finite group, for any $S_{n-1}$-modules $V\subseteq W$, we have $W\cong V\oplus W/V$ by Maschke's theorem. To prove the first statement, it suffices to construct a chain of $S_{n-1}$-modules
$$0=V^{(0)}\subseteq V^{(1)}\subseteq\cdots\subseteq V^{(k)}=S^\lambda\downarrow_{S_{n-1}}$$
such that $V^{(i)}/V^{(i-1)}\cong S^{\lambda^{-i}}$ for all $i$.
Define $V^{(i)}$ as the subspace of $S^\lambda$ spanned by all polytabloids corresponding to standard tableaux with $n$ being in one of the rows $r_1,\cdots,r_i$; then we get the chain of modules we desire. Define $\theta_i$ on such polytabloids by
$$\theta_i(e_t)=\begin{cases}e_{t^i} & \mbox{if }n\mbox{ is in row }r_i\mbox{ of }t,\\ 0 & \mbox{if }n\mbox{ is in one of the rows }r_1,\cdots,r_{i-1}.\end{cases}$$
Since $n$ occupies a removable box, $\theta_i$ is an $S_{n-1}$-homomorphism with image $S^{\lambda^{-i}}$ and $V^{(i-1)}\subseteq\ker\theta_i$; counting dimensions with the lemma above gives $\ker\theta_i=V^{(i-1)}$. It deduces that $V^{(i)}/V^{(i-1)}\cong S^{\lambda^{-i}}$, and therefore
$$S^\lambda\downarrow_{S_{n-1}}\cong\bigoplus_{i=1}^k S^{\lambda^{-i}}=\bigoplus_{\lambda^-}S^{\lambda^-}.$$
Furthermore, by Frobenius reciprocity, for $\mu\vdash n$ and $\nu\vdash n+1$ there is
$$\left\langle \chi^\mu\uparrow^{S_{n+1}},\chi^\nu\right\rangle=\left\langle \chi^\mu,\chi^\nu\downarrow_{S_n}\right\rangle,$$
and so the multiplicity is $1$ if $\mu=\nu^-$ for some removable box of $\nu$ and $0$ otherwise. Note that $\mu=\nu^-$ iff $\nu=\mu^+$, thus $S^\mu\uparrow^{S_{n+1}}=\bigoplus_{\mu^+}S^{\mu^+}$. Now we finish the proof.
\end{proof}Corollary
The restriction $S^\lambda\downarrow_{S_{n-1}}$ is irreducible iff the diagram $\lambda$ has exactly one removable box, that is, $\lambda$ is a rectangle.
10.8 Symmetric Polynomials
Some Symmetric Polynomials: p, e and h
Definition
A polynomial $f$ in variables $x_1,\cdots,x_m$ is called symmetric if it is stable under all permutations of the variables, that is, $f(x_{\sigma(1)},\cdots,x_{\sigma(m)})=f(x_1,\cdots,x_m)$ for all $\sigma\in S_m$.
The algebra of symmetric polynomials in $m$ variables is denoted by $\Lambda_m$.
Examples. The following are symmetric polynomials.
- The power sums $p_n=\sum_{i=1}^m x_i^n$ for all $n\geqslant 1$, where the corresponding generating function is $P(t)=\sum_{n\geqslant 1}p_nt^{n-1}$.
- The elementary symmetric polynomials $e_n=\sum_{1\leqslant i_1<\cdots<i_n\leqslant m}x_{i_1}\cdots x_{i_n}$ for $n\geqslant 1$. Furthermore, define $e_0=1$. Note that $e_n=0$ when $n>m$, and the corresponding generating function is $E(t)=\sum_{n\geqslant 0}e_nt^n=\prod_{i=1}^m(1+x_it)$.
- The complete symmetric polynomials $h_n=\sum_{1\leqslant i_1\leqslant\cdots\leqslant i_n\leqslant m}x_{i_1}\cdots x_{i_n}$ for $n\geqslant 1$. We also define $h_0=1$, and the corresponding generating function is $H(t)=\sum_{n\geqslant 0}h_nt^n=\prod_{i=1}^m\frac{1}{1-x_it}$.
Since $E(-t)H(t)=\prod_i(1-x_it)\prod_i(1-x_it)^{-1}=1$, we have $\sum_{r=0}^n(-1)^re_rh_{n-r}=0$ for $n\geqslant 1$. Next, notice that $\ln H(t)=-\sum_i\ln(1-x_it)$, then
$$P(t)=\frac{d}{dt}\ln(H(t))=\frac{H'(t)}{H(t)}. $$ ^skztji Similarly $P(-t)=E'(t)/E(t)$. It deduces the Newton identity. > [!lemma] the Newton identity > > With the definition above, we have $nh_n=\sum_{r=1}^n p_rh_{n-r}$ and $ne_n=\sum_{r=1}^n(-1)^{r-1}p_re_{n-r}$. ^sbpikb > [!proposition] > > If $A$ is an $m \times m$ matrix, then the coefficients of the characteristic polynomial $\operatorname{det}(A-t I)$ are given by the elementary symmetric functions of the eigenvalues $\left\{\lambda_1, \lambda_2, \ldots, \lambda_m\right\}$, with alternating signs $\pm 1$ depending on the degree of each term. The power sums of the eigenvalues coincide with $\operatorname{tr}A^n=\lambda_1^n+\cdots+\lambda_m^n$. **Remark.** There is some connection between symmetric polynomials and the roots of polynomial equations. See [here](https://math.stackexchange.com/a/96329/1445401). When the degree of a polynomial is less than or equal to $4$, the roots can be obtained from the coefficients by a combination of addition, subtraction, multiplication, division, and taking roots. # Schur Polynomial > [!definition] > > Suppose $\lambda=(\lambda_1,\cdots,\lambda_m)$ is a partition of $n$ of length at most $m$. The Schur polynomial $\mathscr s_\lambda$ is defined by > > $$\mathscr s_\lambda=\frac{\left|\begin{matrix} x_1^{\lambda_1+m-1} & \cdots & x_m^{\lambda_1+m-1} \\ x_1^{\lambda_2+m-2} & \cdots & x_m^{\lambda_2+m-2} \\ \vdots & & \vdots \\ x_1^{\lambda_m} & \cdots & x_m^{\lambda_m} \end{matrix}\right|}{\left|\begin{matrix} x_1^{m-1} & \cdots & x_m^{m-1} \\ x_1^{m-2} & \cdots & x_m^{m-2} \\ \vdots & & \vdots \\ 1 & \cdots & 1 \end{matrix}\right|}.$$ > > ^i6y60w For example, when $\lambda=(n)$, $\mathscr s_\lambda=h_n$. When $\lambda=(1^n)$ and $m=n$, $\mathscr s_\lambda=x_1\cdots x_n$. We can prove them by computing directly, or by the following proposition.
> [!proposition] Jacobi-Trudi formula > > We have > > $$ > \mathscr s_\lambda=\left|\begin{matrix}h_{\lambda_1} & h_{\lambda_1+1} & \cdots & h_{\lambda_1+m-1} \\ h_{\lambda_2-1} & h_{\lambda_2} & \cdots & h_{\lambda_2+m-2}\\ \vdots & \vdots & & \vdots \\ h_{\lambda_m-m+1} & h_{\lambda_m-m+2} & \cdots & h_{\lambda_m}\end{matrix}\right|. > $$ ^jrsjoc `\begin{proof}` Let $\alpha=(\alpha_1,\cdots,\alpha_m)$ be a composition. Put $$A_\alpha=\begin{bmatrix} x_1^{\alpha_1} & \cdots & x_m^{\alpha_1} \\ \vdots & & \vdots \\ x_1^{\alpha_m} & \cdots & x_m^{\alpha_m} \end{bmatrix}\mbox{ and }H_\alpha =\begin{bmatrix} h_{\alpha_1-m+1} & \cdots & h_{\alpha_1} \\ \vdots & & \vdots \\ h_{\alpha_m-m+1} & \cdots & h_{\alpha_m}\end{bmatrix}.$$ Consider the elementary symmetric polynomials $e_n^{(k)}$ in the variables $x_1,\cdots,\hat x_k,\cdots,x_m$ with $1\leqslant k\leqslant m$ and write them into the $m\times m$ matrix $$M=\begin{bmatrix} (-1)^{m-1}e_{m-1}^{(1)} & \cdots & (-1)^{m-1}e_{m-1}^{(m)} \\ \vdots & & \vdots \\ (-1)e_1^{(1)} & \cdots & (-1)e_1^{(m)} \\ 1 & \cdots & 1\end{bmatrix}.$$ We claim that $A_\alpha=H_\alpha M$. Consider the generating function for $e_n^{(k)}$: $$E^{(k)}(t)=\sum_{n=0}^{m-1}e_n^{(k)}t^n=\prod_{i\neq k}(1+x_it),$$
then by $H(t)E(-t)=1$ there is $$H(t)E^{(k)}(-t)=\frac{1}{1-x_kt}=1+x_kt+\cdots+(x_kt)^\ell+\cdots.$$
Take the coefficient of $t^{\alpha_i}$ and we have $$\sum_{j=1}^m h_{\alpha_i-m+j}(-1)^{m-j}e_{m-j}^{(k)}=x_k^{\alpha_i}\tag{*}.$$
With $(*)$, we can prove the claim. Thus, $\det A_\alpha=\det H_\alpha\det M$ and so $\det H_\alpha=\det A_\alpha/\det M$. For $\delta=(m-1,m-2,\cdots,1,0)$, the matrix $H_\delta$ is upper unitriangular, so $\det H_\delta=1$ and $\det A_\delta=\det M$. It follows that $$\det H_\alpha=\frac{\det A_\alpha}{\det A_\delta}=\mathscr s_\lambda,$$
where $\alpha=\lambda+\delta=(\lambda_1+m-1,\lambda_2+m-2,\cdots,\lambda_m)$. Now we finish the proof. `\end{proof}` We have a more direct formula for $\mathscr s_\lambda$ by considering a generalized tableau of shape $\lambda$. > [!definition] > > We place numbers $\{1,\cdots,n\}$ into a tableau of shape $\lambda\vdash n$, allowing repetition. Such a tableau is called *semistandard* if its rows are weakly increasing (not decreasing) while its columns are strictly increasing sequences. > > If $T$ is semistandard with entries in $\{1,\cdots,n\}$, set > > $$ > \mathscr x^T=\prod_{i=1}^n x_i^{\#\{\text{occurrences of }i\text{ in }T\}}. > $$ > [!proposition] > > For $\lambda\vdash n$ with length $m$, the Schur polynomial in $m$ variables is $\mathscr s_\lambda=\sum_T \mathscr x^T$, the sum running over semistandard tableaux $T$ of shape $\lambda$. # Basis of $\Lambda_m$ Now we consider the algebra of symmetric polynomials in $m$ variables. First define $\Lambda_m^n=\mathrm{span}\{\mathscr s_\lambda:\lambda\vdash n\}$, the subspace of homogeneous symmetric polynomials of degree $n$, and then define $\Lambda_m=\oplus_{n\geqslant 0}\Lambda_m^n$. For example, when $m=3$, we have $\Lambda_m^0=\mathrm{span}\{1\}$, $\Lambda_m^1=\mathrm{span}\{x_1+x_2+x_3\}$, $$\begin{aligned} \Lambda_m^2&=\mathrm{span}\{\mathscr s_{(1,1)},\mathscr s_{(2,0)}\}=\mathrm{span}\{x_1x_2+x_1x_3+x_2x_3,\;x_1^2+x_2^2+x_3^2+x_1x_2+x_1x_3+x_2x_3\}, \\ \Lambda_m^3&=\mathrm{span}\{\mathscr s_{(3)},\mathscr s_{(2,1)},\mathscr s_{(1,1,1)}\}=\mathrm{span}\{h_3,\;x_1^2x_2+x_1^2x_3+x_1x_2^2+2x_1x_2x_3+x_1x_3^2+x_2^2x_3+x_2x_3^2,\;x_1x_2x_3\}. \end{aligned}$$
> [!proposition] > > The $\mathscr s_\lambda$ form a basis for $\Lambda_m$, where $\lambda\vdash n$ runs over all partitions with $n\geqslant 0$. (For a fixed $n$, the set $\{\mathscr s_\lambda:\lambda\vdash n\}$ is a basis for $\Lambda_m^n$.) Then we introduce a few families of symmetric polynomials for the algebra $\Lambda_m$. - For any partition $\lambda=(\lambda_1,\cdots,\lambda_m)$, define $p_\lambda=p_{\lambda_1}\cdots p_{\lambda_m}$, $e_\lambda=e_{\lambda_1}\cdots e_{\lambda_m}$ and $h_\lambda=h_{\lambda_1}\cdots h_{\lambda_m}$. For example, when $m=3$ and $n=2$, $p_{(2)}=p_2=x_1^2+x_2^2+x_3^2$ and $p_{(1,1)}=p_1p_1=(x_1+x_2+x_3)^2$, $e_{(2)}=e_2=x_1x_2+x_2x_3+x_1x_3$ and $e_{(1,1)}=e_1e_1=(x_1+x_2+x_3)^2$, $h_{(2)}=h_2=x_1^2+x_2^2+x_3^2+x_1x_2+x_1x_3+x_2x_3$ and $h_{(1,1)}=h_1h_1=(x_1+x_2+x_3)^2$. - Let $\lambda=(\lambda_1,\cdots,\lambda_m)$. Another family, the monomial symmetric polynomials $m_\lambda$, is defined as the sum of all different monomials obtained from $x_1^{\lambda_1}\cdots x_m^{\lambda_m}$ under permutation of the variables. For example, when $m=3$ and $n=2$, we have $m_{(2)}=x_1^2+x_2^2+x_3^2$, $m_{(1,1)}=x_1x_2+x_2x_3+x_1x_3$. > [!theorem] > > Suppose that $\lambda$ runs over all partitions of $n$ of length at most $m$. Then each of the following families is a basis of the space $\Lambda_m^n$: > > $$ > m_\lambda,\mathscr s_\lambda,e_{\lambda'},h_{\lambda'},p_{\lambda'},e_\lambda,h_\lambda,p_\lambda > $$ > > In particular, $\dim \Lambda_m^n=p(n)$ with $m\geqslant n$. `\begin{proof}` We first prove $\{m_\lambda:\lambda\vdash n\}$ is a basis. Note that they are linearly independent: If $\lambda\vdash n$ and $\mu\vdash n$ with $\lambda\neq \mu$, then $m_\lambda$ and $m_\mu$ have no common monomials. Hence, if $\sum_i \alpha_i m_{\lambda_i}=0$, then $\alpha_i=0$. Suppose $f\in \Lambda_m^n$, and we aim to show $f\in\mathrm{span}\{m_\lambda:\lambda\vdash n\}$. We do induction on the greatest monomial of $f$ in lexicographic order.
Take the greatest monomial $x_1^{n_1}\cdots x_m^{n_m}$ of $f$ in [[10.3 Ordering of Partitions#^0ujtr4|lexicographic]] order, with coefficient $c$ and $\mu=(n_1,\cdots,n_m)$. Then $f-cm_\mu\in\Lambda_m^n$ and the greatest monomial of $f-cm_\mu$ is smaller than that of $f$. By induction, $f$ can be written as a linear combination of $\{m_\lambda:\lambda\vdash n\}$. To show the $\mathscr s_\lambda$ form a basis, define $A_m$ as the space of skew-symmetric polynomials in $x_1,\cdots,x_m$. We say $g$ is skew-symmetric if $(ij)g=-g$ for all transpositions $(ij)$. It is easy to see that any skew-symmetric polynomial is divisible by $\prod_{1\leqslant i<j\leqslant m}(x_i-x_j)$. We have that $\varphi:\Lambda_m\to A_m,f\mapsto f\prod_{1\leqslant i<j\leqslant m}(x_i-x_j)$, where $\varphi$ defines an isomorphism between $\Lambda_m$ and $A_m$. One can [check](https://math.uchicago.edu/~may/REU2020/REUPapers/Graham.pdf?utm_source=chatgpt.com) that a basis of $A_m$ can be given by $$b_\lambda=\left|\begin{matrix} x_1^{\lambda_1+m-1} & \cdots & x_m^{\lambda_1+m-1} \\ x_1^{\lambda_2+m-2} & \cdots & x_m^{\lambda_2+m-2} \\ \vdots & & \vdots \\ x_1^{\lambda_m} & \cdots & x_m^{\lambda_m} \end{matrix}\right|$$ where $\lambda$ runs through partitions of $n$ of length $\leqslant m$. Notice that $\mathscr s_\lambda=b_\lambda/\prod_{1\leqslant i<j\leqslant m}(x_i-x_j)$ and $\varphi(\mathscr s_\lambda)=b_\lambda$. Furthermore, as $\dim\Lambda_m^n=\dim A_m^{n+{m\choose 2}}$, it deduces that the $\mathscr s_\lambda$ form a basis. Moreover, recall [[10.4 Complete List of Irreducible Modules#^i3a31d|^i3a31d]]; similarly we have $\mathscr s_\mu=\sum_{\lambda}\kappa_{\mu\lambda}m_\lambda=m_\mu+\sum_{\lambda\neq \mu}\kappa_{\mu\lambda}m_\lambda$, and it yields that $\left\langle \mathscr s_\mu:\mu\vdash n\right\rangle=\left\langle m_\mu:\mu\vdash n\right\rangle=\Lambda_m^n$.
To show the $e_{\lambda'}$ form a basis, suppose that $\lambda\vdash n$ with $\lambda=(\lambda_1,\cdots,\lambda_k)$ and $\lambda'=(\rho_1,\cdots,\rho_k)$, and then $$e_{\lambda'}=e_{\rho_1}\cdots e_{\rho_k}=(x_1\cdots x_{\rho_1}+\cdots)(x_1\cdots x_{\rho_2}+\cdots)\cdots(x_1\cdots x_{\rho_k}+\cdots)=x_1^{\lambda_1}\cdots x_m^{\lambda_m}+\mbox{smaller terms}=m_\lambda+\sum_{\mu\lhd\lambda}c_\mu m_\mu.$$
For $h_\lambda$, we can do it by [[#^jrsjoc|^jrsjoc]], as it deduces that $\left\langle \mathscr s_\lambda:\lambda\vdash n\right\rangle\subseteq\left\langle h_\lambda:\lambda\vdash n\right\rangle$. `\end{proof}` > [!definition] > > For each $m$ we have > > $$ > \Lambda_{m+1}\to \Lambda_m,f(x_1,\cdots,x_m,x_{m+1})\mapsto f(x_1,\cdots,x_m,0) > $$ > > Define a [projective limit](https://en.wikipedia.org/wiki/Inverse_limit) algebra $\Lambda=\oplus_{n\geqslant 0}\Lambda^n$ where $f=(f_m:m\geqslant 0)$ is a sequence of $f_m\in\Lambda_m^n$ and $f_{m+1}\mapsto f_m$ for all $m\geqslant 0$. Then $f$ is a formal series in infinitely many variables, and there are several examples: - $p_n=\sum_{i=1}^\infty x_i^n$ - $e_n=\sum_{1\leqslant i_1<\cdots< i_n}x_{i_1}\cdots x_{i_n}$ - $h_n=\sum_{1\leqslant i_1\leqslant\cdots\leqslant i_n}x_{i_1}\cdots x_{i_n}$ For more details of the projective limit, see page 489 of [[Algebra chapter 0 - 2009 - Aluffi.pdf]]. The sequence of Schur polynomials $\mathscr s_\lambda$ defines the Schur functions, and the sequence of $m_\lambda$ defines the monomial symmetric functions. > [!lemma] > > We have > > $$ > h_n=\sum_{\lambda\vdash n}\frac{p_\lambda}{z_\lambda}\mbox{ and }e_n=\sum_{\lambda\vdash n}\frac{\epsilon_\lambda p_\lambda}{z_\lambda} > $$ > > where $\lambda=(1^{m_1},\cdots,n^{m_n})$, $z_\lambda=1^{m_1}m_1! \cdots n^{m_n}m_n!$ and $\epsilon_\lambda=(-1)^{n-\ell(\lambda)}$. ^dbe3dc `\begin{proof}` Recall that ![[10.8 Symmetric Polynomials#^skztji|^skztji]] It follows that $$H(t)=\exp\left(\sum_{k=1}^\infty \frac{p_kt^k}{k}\right)=\prod_{k=1}^\infty\exp\left(\frac{p_kt^k}{k}\right)=\left( 1+p_1t+\frac{(p_1t)^2}{2!}+\cdots\right)\left( 1+\frac{p_2t^2}{2}+\frac{(p_2t^2)^2}{2^2\cdot2!}+\cdots\right)\left( 1+\frac{p_3t^3}{3}+\frac{(p_3t^3)^2}{3^2\cdot 2!}+\cdots\right)\cdots$$
and it deduces what we desire. Similarly we can prove $e_n=\sum_{\lambda\vdash n}{\epsilon_\lambda p_\lambda}/{z_\lambda}$. `\end{proof}` > [!proposition] orthogonal properties > > For two families of variables $x=(x_1,\cdots,x_k,\cdots)$ and $y=(y_1,\cdots,y_k,\cdots)$, we have the following properties > > $$ > \begin{aligned} > > \prod_{i,j\geqslant 1}(1-x_iy_j)^{-1}&=\sum_{\lambda}z_\lambda^{-1}p_\lambda(x)p_\lambda(y),\\ > \prod_{i,j\geqslant 1}(1-x_iy_j)^{-1}&=\sum_\lambda h_\lambda(x)m_\lambda(y)=\sum_\lambda m_\lambda(x)h_\lambda(y)=\sum_\lambda \mathscr s_\lambda(x)\mathscr s_\lambda(y). > > \end{aligned} > $$ > `\begin{proof}` By [[#^dbe3dc|^dbe3dc]], we have $$\prod_{i\geqslant 1}(1-x_it)^{-1}=\sum_\lambda z_\lambda^{-1}p_\lambda t^{|\lambda|}\tag{*}$$
where $|\lambda|=\sum_i\lambda_i$ is the size of $\lambda$. Note that $\sum_{i,j}(x_iy_j)^k=p_k(x)p_k(y)$. Use $(*)$ for the family of variables $(x_iy_j)_{i,j}$ and set $t=1$; we have $$\prod_{i,j\geqslant 1}(1-x_i y_j)^{-1}=\sum_\lambda z_\lambda^{-1} p_\lambda(xy)=\sum_\lambda z_\lambda^{-1}p_\lambda(x)p_\lambda(y).$$
Equip $\Lambda$ with the following form $\left\langle h_\lambda, m_\mu\right\rangle=\delta_{\lambda\mu}$, and it deduces that $\left\langle p_\lambda,p_\mu\right\rangle=z_\lambda \delta_{\lambda\mu}$ and $\left\langle \mathscr s_\lambda,\mathscr s_\mu\right\rangle=\delta_{\lambda\mu}$. With respect to this inner product, $\{\mathscr s_\lambda:\lambda\vdash n\}$ is an orthonormal basis of $\Lambda^n$. `\end{proof}`
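The identities of this section can be spot-checked numerically at concrete rational points; since they are polynomial identities, exact agreement at sample points is strong evidence the implementation matches the statements (a minimal sketch; all names are ours):

```python
from fractions import Fraction
from itertools import combinations, combinations_with_replacement

x = [Fraction(2), Fraction(3), Fraction(5), Fraction(7)]   # four sample points

def prod(seq):
    out = Fraction(1)
    for s in seq:
        out *= s
    return out

def e(n):   # elementary symmetric polynomial e_n at x (0 for n > m)
    return sum((prod(c) for c in combinations(x, n)), Fraction(0))

def h(n):   # complete symmetric polynomial h_n at x
    return sum((prod(c) for c in combinations_with_replacement(x, n)), Fraction(0))

def p(n):   # power sum p_n at x
    return sum(xi ** n for xi in x)

# Newton identities: n h_n = sum p_r h_{n-r},  n e_n = sum (-1)^{r-1} p_r e_{n-r}
for n in range(1, 6):
    assert n * h(n) == sum(p(r) * h(n - r) for r in range(1, n + 1))
    assert n * e(n) == sum((-1) ** (r - 1) * p(r) * e(n - r) for r in range(1, n + 1))

def det(M):
    """Determinant by cofactor expansion (fine for tiny matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

# Schur polynomial s_(2,1): bialternant definition vs the Jacobi-Trudi determinant
lam = [2, 1, 0, 0]
m = len(x)
num = [[xi ** (lam[i] + m - 1 - i) for xi in x] for i in range(m)]
den = [[xi ** (m - 1 - i) for xi in x] for i in range(m)]
bialternant = det(num) / det(den)
jacobi_trudi = h(2) * h(1) - h(3) * h(0)
assert bialternant == jacobi_trudi
```

The Jacobi-Trudi determinant here is the $2\times 2$ case $\left|\begin{smallmatrix}h_2 & h_3\\ h_0 & h_1\end{smallmatrix}\right|$ for $\lambda=(2,1)$.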
10.9 Characteristic Map
For a finite group $G$, its irreducible characters form a basis for the space $R(G)$ of class functions on $G$. Define $R^n=R(S_n)$ and $R=\oplus_{n\geqslant 0}R^n$ with $R^0=\mathbb C$.
We introduce a product on $R$. For any $f\in R^n$ and $g\in R^m$, where $f$ (resp. $g$) is a character of $S_n$ (resp. $S_m$), define $f\circ g\in R^{n+m}$ as
$$f\circ g=(f\times g)\uparrow_{S_n\times S_m}^{S_{n+m}}.$$
Theorem
The product $\circ$ on $R$ is commutative and associative.
\begin{proof}For commutativity, note that $S_n\times S_m$ and $S_m\times S_n$ are conjugate subgroups of $S_{n+m}$, so the two induced characters coincide. For associativity, use the transitivity of induction: both $(f\circ g)\circ h$ and $f\circ(g\circ h)$ equal $(f\times g\times h)\uparrow_{S_n\times S_m\times S_k}^{S_{n+m+k}}$.
\end{proof}Furthermore, we can equip the algebra $R$ with an inner product
$$\left\langle f,g\right\rangle_R=\sum_{n\geqslant 0}\left\langle f_n,g_n\right\rangle_{S_n}.$$
Definition
Define the characteristic map
$$\mathrm{ch}:R\to\Lambda,\quad \mathrm{ch}(f)=\frac{1}{n!}\sum_{\sigma\in S_n}f(\sigma)\,p_{\rho(\sigma)}\mbox{ for }f\in R^n,$$
where $\rho(\sigma)$ is the cycle type of $\sigma$.
Remark. For any $f\in R^n$ and any conjugacy class of cycle type $\mathscr p$ with representative $\sigma_{\mathscr p}$, we can write the characteristic map as
$$\mathrm{ch}(f)=\sum_{\mathscr p\vdash n}z_{\mathscr p}^{-1}f_{\mathscr p}\,p_{\mathscr p},$$
where $f_{\mathscr p}=f(\sigma_{\mathscr p})$, since the class of type $\mathscr p$ has $n!/z_{\mathscr p}$ elements.
Theorem
The characteristic map is an isometry and an algebra isomorphism. Moreover, $\mathrm{ch}(\chi^\lambda)=\mathscr s_\lambda$ and $\mathrm{ch}(\chi_{M^\lambda})=h_\lambda$, where $\chi_{M^\lambda}$ is the character of the permutation module $M^\lambda$. ^492b55
\begin{proof}A Generalization
Let $G$ be a finite group, and let $A$ be a commutative associative algebra. If $f:G\to\mathbb C$ and $g:G\to A$ are functions, we may define a bilinear form with values in $A$:
$$\left\langle f,g\right\rangle_G=\frac{1}{|G|}\sum_{x\in G}f(x)g(x^{-1}).$$
If $f$ is a class function on a subgroup $H\leqslant G$ and $g$ is a class function on $G$ with values in $A$, the generalized Frobenius reciprocity still holds, that is
$$\left\langle f\uparrow^G,g\right\rangle_G=\left\langle f,g\downarrow_H\right\rangle_H,$$
where $f\uparrow^G(x)=\dfrac{1}{|H|}\sum_{y\in G}f(y^{-1}xy)$ with the convention that $f(z)=0$ if $z\notin H$.
For any $f\in R^n$, there is
$$\mathrm{ch}(f)=\left\langle f,p\right\rangle_{S_n},\quad\mbox{where }p(\sigma)=p_{\rho(\sigma)}\in\Lambda.$$
Also note that $p_{\rho(\sigma\times\tau)}=p_{\rho(\sigma)}p_{\rho(\tau)}$ for $\sigma\in S_n$ and $\tau\in S_m$ viewed inside $S_{n+m}$.
Since $\mathrm{ch}(f\circ g)=\left\langle (f\times g)\uparrow^{S_{n+m}},p\right\rangle_{S_{n+m}}=\left\langle f\times g,p\downarrow_{S_n\times S_m}\right\rangle$ for $f\in R^n$ and $g\in R^m$, we have
$$\mathrm{ch}(f\circ g)=\left\langle f,p\right\rangle_{S_n}\left\langle g,p\right\rangle_{S_m}=\mathrm{ch}(f)\,\mathrm{ch}(g),$$
and so $\mathrm{ch}$ is a homomorphism.
Let $\mathbf 1^{(n)}$ be the trivial representation of $S_n$, then
$$\mathrm{ch}(\mathbf 1^{(n)})=\sum_{\mathscr p\vdash n}z_{\mathscr p}^{-1}p_{\mathscr p}=h_n.$$
Recall that the permutation module $M^\lambda=\mathbf 1\uparrow_{S_\lambda}^{S_n}$ with $S_\lambda=S_{\lambda_1}\times\cdots\times S_{\lambda_m}$. Then $\chi_{M^\lambda}=\mathbf 1^{(\lambda_1)}\circ\cdots\circ\mathbf 1^{(\lambda_m)}$ yields that
$$\mathrm{ch}(\chi_{M^\lambda})=h_{\lambda_1}\cdots h_{\lambda_m}=h_\lambda.$$
To prove that $\mathrm{ch}(\chi^\lambda)=\mathscr s_\lambda$, let $\eta_k$ denote the character $\chi_{M^{(k)}}$ (so $\mathrm{ch}(\eta_k)=h_k$), and consider the virtual character
$$\widetilde{\chi^\lambda}=\left|\begin{matrix}\eta_{\lambda_1} & \cdots & \eta_{\lambda_1+m-1} \\ \vdots & & \vdots \\ \eta_{\lambda_m-m+1} & \cdots & \eta_{\lambda_m} \end{matrix}\right|$$ and $\mathrm{ch}(\widetilde {\chi^\lambda})=\mathscr s_\lambda$ by the Jacobi-Trudi formula. Now we finish the proof, as $\widetilde {\chi^\lambda}$ coincides with the irreducible character $\chi^\lambda$. `\end{proof}` > [!corollary] > > The character table $\chi_{\mathscr p}^\lambda$ is the transition matrix between bases $p_\lambda$ and $\mathscr s_\lambda$ of $\Lambda^n$. ^7ed3d1 `\begin{proof}` It suffices to show $p_{\mathscr p}=\sum_{\lambda}\chi_{\mathscr p}^\lambda \mathscr s_\lambda$. By [[#^492b55|^492b55]] we have $$\mathscr s_\lambda=\mathrm{ch}(\chi^\lambda)=\sum_{\mathscr p}z_{\mathscr p}^{-1}\chi_{\mathscr p}^\lambda p_{\mathscr p}$$
and so $\chi_{\mathscr p}^\lambda=\left\langle \mathscr s_\lambda, p_{\mathscr p}\right\rangle$. It deduces that $$\left\langle \mathscr s_\lambda,p_\mu\right\rangle =\sum_{\mathscr p} z_{\mathscr p}^{-1}\chi_{\mathscr p}^{\lambda}\left\langle p_{\mathscr p},p_\mu\right\rangle =z_{\mu}^{-1}\chi_{\mu}^\lambda z_{\mu}=\chi_{\mu}^\lambda.$$
Now we finish the proof. `\end{proof}` > [!corollary] Frobenius character formula > > If $\rho=(\rho_1,\cdots,\rho_m)$ with $\rho_1\geqslant \rho_2\geqslant\cdots\geqslant \rho_m\geqslant 1$, then for any $k\geqslant\ell(\lambda)$ the value $\chi_\rho^\lambda$ is the coefficient of $x_1^{\ell_1}\cdots x_k^{\ell_k}$ in the polynomial > > $$ > \prod_{i=1}^m(x_1^{\rho_i}+\cdots+x_k^{\rho_i})\prod_{1\leqslant i< j\leqslant k}(x_i-x_j) > $$ > > where $\ell_i=\lambda_i+k-i$ for $i=1,\cdots,k$. **Remark.** It has been proved in [[10.6 Frobenius Character Formula#^4b468a|^4b468a]]. `\begin{proof}` Recall that $p_{\rho}=\prod_{i=1}^m(x_1^{\rho_i}+\cdots+x_k^{\rho_i})$ and ![[10.8 Symmetric Polynomials#^i6y60w|^i6y60w]] By [[#^7ed3d1|^7ed3d1]], we have $p_{\mathscr p}=\sum_{\lambda}\chi_{\mathscr p}^\lambda \mathscr s_\lambda$ and it deduces that $$\mathrm{LHS}=\sum_{\mu}\chi_{\rho}^\mu\left|\begin{matrix} x_1^{\mu_1+k-1} & \cdots & x_k^{\mu_1+k-1} \\ \vdots & & \vdots \\ x_1^{\mu_k} & \cdots & x_k^{\mu_k} \end{matrix}\right|.$$ Therefore, the coefficient of $x_1^{\ell_1}\cdots x_k^{\ell_k}$ with $\ell_i=\lambda_i+k-i$ of the right-hand side is $\chi_{\rho}^\lambda$. `\end{proof}`


