Degree Adjustment in SNARKs
On adjusting the degree of SNARK-related polynomials to fit our needs
Notation
We will denote by $\mathbb{F}$ a finite field and by $\mathbb{F}^*$ its multiplicative group. We will also denote by $H$ a subgroup of $\mathbb{F}^*$. Finally, we will denote by $Z_H$ the polynomial over $\mathbb{F}$ that vanishes on $H$ and call it the vanishing polynomial.
Given a polynomial $p \in \mathbb{F}[X]$ and an evaluation domain $H$, we will denote by $\tilde{p}$ the polynomial resulting from interpolating the set of points $\{(h, p(h)) : h \in H\}$.
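It will be useful to note that, since the remainder of $p$ modulo $Z_H$ agrees with $p$ on every point of $H$ and has degree lower than $|H|$, uniqueness of interpolation gives $$ \tilde{p} = p \bmod Z_H. $$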
Statement
Let $d, D \in \mathbb{N}$ be such that $D > d$ and let $f \in \mathbb{F}[X]$. Let $g \in \mathbb{F}[X]$ be the polynomial defined as: $$ g(X) = f(X) + X^{D-d} \cdot f(X). $$
Then, if $\tilde{f}$ has degree lower than $d$, $\tilde{g}$ has degree lower than $D$ (prove it! Hint: divide $f$ by $Z_H$).
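Following the hint, one possible argument (assuming $D \leq |H|$; otherwise the claim is immediate, since $\deg \tilde{g} < |H| < D$ always holds) goes as follows. Divide $f$ by the vanishing polynomial, $$ f(X) = q(X) \cdot Z_H(X) + \tilde{f}(X), $$ where the remainder is exactly $\tilde{f}$ because it agrees with $f$ on $H$ and has degree lower than $|H|$. Then $$ g(X) = (1 + X^{D-d}) \cdot q(X) \cdot Z_H(X) + \tilde{f}(X) + X^{D-d} \cdot \tilde{f}(X), $$ and since $\deg \tilde{f} < d$, the last two terms have degree lower than $D \leq |H|$; they therefore constitute the remainder of $g$ modulo $Z_H$, so $\tilde{g}(X) = \tilde{f}(X) + X^{D-d} \cdot \tilde{f}(X)$ has degree lower than $D$.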
In our context, it is always the case that $D$ is a power of two and $d$ is not. What we intend with this “degree adjustment” is to form a polynomial of degree lower than a power of two from a polynomial for which this is not the case.
One can imagine $D = 8$ and $d = 5$ so that we adjust the degree to the “next” power of two.
To achieve the converse as well, we need to involve random coefficients $\alpha$ and $\beta$ and define $g(X)$ in the following way: $$ g(X) = \alpha \cdot f(X) + \beta \cdot X^{D-d} \cdot f(X). $$
Now, we can claim that $\tilde{f}$ has degree lower than $d$ if and only if $\tilde{g}$ has degree lower than $D$, with high probability over the choice of $\alpha$ and $\beta$.
Example: Randomness is Necessary
Let’s see why the converse is not true in the first definition of $g$.
Let $\mathbb{F} = \mathbb{Z}_{97}$ (notably, $96 = 2^5 \cdot 3$), let $\omega = 8$ be a primitive $16$-th root of unity and let: $$ H = \langle \omega \rangle = \{ 1, 8, 64, 27, 22, 79, 50, 12, 96, 89, 33, 70, 75, 18, 47, 85 \} $$ be the subgroup of $\mathbb{F}^*$ generated by $\omega$, containing all the $16$-th roots of unity. We therefore have that $Z_H(X) = X^{16} - 1$ is the vanishing polynomial on $H$.
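As a quick sanity check, here is a minimal Python sketch (plain modular arithmetic, no libraries) that reproduces the domain $H$ and verifies that $Z_H$ vanishes on it:

```python
# Sketch: reproduce H = <8> in Z_97 and check that Z_H(X) = X^16 - 1 vanishes on H.
P = 97       # the field Z_97
OMEGA = 8    # a primitive 16-th root of unity in Z_97

# H is the cyclic subgroup generated by OMEGA.
H = [pow(OMEGA, k, P) for k in range(16)]
print(H)  # [1, 8, 64, 27, 22, 79, 50, 12, 96, 89, 33, 70, 75, 18, 47, 85]

# Z_H(h) = h^16 - 1 must be 0 for every h in H.
assert all((pow(h, 16, P) - 1) % P == 0 for h in H)
```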
Assume that $D = 8$ and $d = 5$.
Now, let $f(X) = Z_H(X) + X^{14} - X^{11} + X^{8} - X^5$. We therefore have that: $$ f(X) \equiv X^{14} - X^{11} + X^{8} - X^5 \pmod{Z_H(X)}, $$ so that $f(X)$ behaves identically to $X^{14} - X^{11} + X^{8} - X^5$ when evaluated on $H$, implying that the interpolated polynomial $\tilde{f}$ will be of degree $14$.
In this case, we have: \begin{align} g(X) &= (Z_H(X) + X^{14} - X^{11} + X^{8} - X^5) + X^3 \cdot (Z_H(X) + X^{14} - X^{11} + X^{8} - X^5) \\ &= (X^3 + 1) \cdot Z_H(X) + X^{17} - X^{5} \\ &= (X^3 + X + 1) \cdot Z_H(X) - X^{5} + X, \end{align} so that $g$ behaves identically to $-X^{5} + X$ when evaluated on $H$, implying that the interpolated polynomial $\tilde{g}$ is of degree $5$.
Hence, $\tilde{g}$ is a polynomial of degree $5 < D$ even though $\tilde{f}$ is of degree $14 > d$.
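We can verify this numerically. The sketch below (helper names are ours; the interpolation is implemented as an inverse discrete Fourier transform, which works here because $H$ consists exactly of the $16$-th roots of unity) recovers $\tilde{f}$ and $\tilde{g}$ from their evaluations on $H$ and prints their degrees:

```python
# Sketch: check that deg(f_tilde) = 14 while deg(g_tilde) = 5 for the example above.
P, N, OMEGA = 97, 16, 8
H = [pow(OMEGA, k, P) for k in range(N)]

def f(x):
    # f(X) = Z_H(X) + X^14 - X^11 + X^8 - X^5, evaluated at x mod 97
    return (pow(x, 16, P) - 1 + pow(x, 14, P) - pow(x, 11, P)
            + pow(x, 8, P) - pow(x, 5, P)) % P

def interpolate_on_H(values):
    # Inverse DFT over the 16-th roots of unity: returns the coefficients
    # (constant term first) of the unique polynomial of degree < 16 taking
    # the given values on H.
    inv_n, omega_inv = pow(N, P - 2, P), pow(OMEGA, P - 2, P)
    return [inv_n * sum(v * pow(omega_inv, j * k, P) for k, v in enumerate(values)) % P
            for j in range(N)]

def degree(coeffs):
    # Index of the highest nonzero coefficient (-1 for the zero polynomial).
    return max((j for j, c in enumerate(coeffs) if c != 0), default=-1)

f_tilde = interpolate_on_H([f(h) for h in H])
g_tilde = interpolate_on_H([(f(h) + pow(h, 3, P) * f(h)) % P for h in H])  # D - d = 3

print(degree(f_tilde))  # 14 -> not lower than d = 5
print(degree(g_tilde))  # 5  -> lower than D = 8, despite f_tilde being of high degree
```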
Randomness comes to the rescue!
Say that $\alpha = 3$ and $\beta = 13$.
In this case, we have: \begin{align} g(X) &= 3 \cdot (Z_H(X) + X^{14} - X^{11} + X^{8} - X^5) + 13 \cdot X^3 \cdot (Z_H(X) + X^{14} - X^{11} + X^{8} - X^5) \\ &= (13 \cdot X^3 + 13 \cdot X + 3) \cdot Z_H(X) - 10 \cdot X^{14} + 10 \cdot X^{11} - 10 \cdot X^{8} - 3 \cdot X^5 + 13 \cdot X. \end{align}
As can be seen, thanks to the random values the high-degree terms no longer cancel out, and we find that $\tilde{g}$ is a polynomial of degree $14$.
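The same kind of check confirms it (a self-contained sketch, with the interpolation done exactly as in the previous snippet):

```python
# Sketch: with alpha = 3 and beta = 13, the adjusted g no longer hides f's high degree.
P, N, OMEGA = 97, 16, 8
ALPHA, BETA = 3, 13
H = [pow(OMEGA, k, P) for k in range(N)]

def f(x):
    return (pow(x, 16, P) - 1 + pow(x, 14, P) - pow(x, 11, P)
            + pow(x, 8, P) - pow(x, 5, P)) % P

def interpolate_on_H(values):
    # Inverse DFT over the 16-th roots of unity (same as before).
    inv_n, omega_inv = pow(N, P - 2, P), pow(OMEGA, P - 2, P)
    return [inv_n * sum(v * pow(omega_inv, j * k, P) for k, v in enumerate(values)) % P
            for j in range(N)]

g_tilde = interpolate_on_H([(ALPHA * f(h) + BETA * pow(h, 3, P) * f(h)) % P for h in H])
print(max(j for j, c in enumerate(g_tilde) if c != 0))  # 14 -> no longer lower than D = 8
```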
Alternatives
The following alternatives also work! We should assume, however, that the Prover cannot predict the randomness in advance.
$\beta = \alpha^2$ (or, in general, $\beta = h(\alpha)$ for some fixed function $h$)
$$ g(X) = \alpha \cdot f(X) + \alpha^2 \cdot X^{D-d} \cdot f(X). $$ As long as $\alpha \neq \alpha^2$, this alternative also works, and we save one random value in the process. However, notice that the guarantee now holds with probability $1 - 2/|\mathbb{F}|$, since both $\alpha = 0$ and $\alpha = 1$ would make it fail.
In general, any $\alpha$ that makes $\alpha = \beta = h(\alpha)$ would make it fail.
Only $\alpha$ is used
$$ g(X) = \alpha \cdot f(X) + X^{D-d} \cdot f(X). $$ This should work as long as $\alpha \neq 1$. The probability here is $1 - 1/|\mathbb{F}|$.
Only $\beta$ is used
$$ g(X) = f(X) + \beta \cdot X^{D-d} \cdot f(X). $$ This should work as long as $\beta \neq 1$. The probability here is $1 - 1/|\mathbb{F}|$.
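For the cheating polynomial $f$ from the example, we can scan the whole field and see which values of the single random coefficient fail to expose the high degree of $\tilde{f}$ (this is specific to that $f$; helper names are ours, with the same inverse-DFT interpolation as before):

```python
# Sketch: scan all single random values for the "only alpha" and "only beta" variants.
P, N, OMEGA, D = 97, 16, 8, 8
H = [pow(OMEGA, k, P) for k in range(N)]

def f(x):
    return (pow(x, 16, P) - 1 + pow(x, 14, P) - pow(x, 11, P)
            + pow(x, 8, P) - pow(x, 5, P)) % P

def degree_on_H(values):
    # Degree of the unique polynomial of degree < 16 taking these values on H.
    inv_n, omega_inv = pow(N, P - 2, P), pow(OMEGA, P - 2, P)
    coeffs = [inv_n * sum(v * pow(omega_inv, j * k, P) for k, v in enumerate(values)) % P
              for j in range(N)]
    return max((j for j, c in enumerate(coeffs) if c != 0), default=-1)

# "Only alpha": g = alpha * f + X^3 * f.   "Only beta": g = f + beta * X^3 * f.
bad_alphas = [a for a in range(P)
              if degree_on_H([(a * f(h) + pow(h, 3, P) * f(h)) % P for h in H]) < D]
bad_betas = [b for b in range(P)
             if degree_on_H([(f(h) + b * pow(h, 3, P) * f(h)) % P for h in H]) < D]
print(bad_alphas, bad_betas)  # [1] [1] -> only the value 1 lets this f slip through
```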
When Are Both Random Values Needed?
Say that we have $f_1, f_2 \in \mathbb{F}[X]$ and we define: $$ g(X) = (\alpha_1 \cdot f_1(X) + \beta_1 \cdot X^{D-d_1} \cdot f_1(X)) + (\alpha_2 \cdot f_2(X) + \beta_2 \cdot X^{D-d_2} \cdot f_2(X)). $$ Here we have that, with high probability over the random coefficients, $\tilde{g}$ is of degree lower than $D$ if and only if $\deg \tilde{f}_1 < d_1$ and $\deg \tilde{f}_2 < d_2$.
When we include more than one $f$ polynomial in the definition of $g$, both random values per polynomial are necessary to achieve the highest probability.
Conclusion
We arrive at the following conclusions:
- If only one $f$ polynomial is included in the definition of $g$, then simply sample a random value $\alpha$ and define: $$ g(X) = f(X) + \alpha \cdot X^{D-d} \cdot f(X). $$
- If more than one $f$ polynomial is needed, then sample two random values $\alpha_i, \beta_i$ per $f$ polynomial and define: $$ g(X) = \sum_{i=1}^k (\alpha_i \cdot f_i(X) + \beta_i \cdot X^{D-d_i} \cdot f_i(X)). $$ In that case, the $\alpha_i$’s and $\beta_i$’s must be sampled uniformly and independently (of each other) from $\mathbb{F}$. This guarantees that $\tilde{g}$ is of low degree whenever all the $\tilde{f}_i$’s are of low degree and, conversely, that if some $\tilde{f}_i$ is not of low degree, then $\tilde{g}$ fails to be of low degree with probability $1 - 1/|\mathbb{F}|$.
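As a closing illustration, here is a minimal sketch of this recipe over the toy field $\mathbb{Z}_{97}$ from the example (function and parameter names are ours; each $f_i$ is assumed to be given by its evaluations over $H$):

```python
# Sketch: batch several polynomials with fresh (alpha_i, beta_i) pairs and check deg(g_tilde) < D.
import secrets

P, N, OMEGA = 97, 16, 8
H = [pow(OMEGA, k, P) for k in range(N)]

def interpolation_degree(values):
    # Degree of the unique polynomial of degree < 16 taking these values on H (inverse DFT).
    inv_n, omega_inv = pow(N, P - 2, P), pow(OMEGA, P - 2, P)
    coeffs = [inv_n * sum(v * pow(omega_inv, j * k, P) for k, v in enumerate(values)) % P
              for j in range(N)]
    return max((j for j, c in enumerate(coeffs) if c != 0), default=-1)

def adjust_and_check(f_evals, degree_bounds, D):
    # f_evals[i][k] = f_i(H[k]); degree_bounds[i] = d_i.
    g_evals = [0] * N
    for evals, d in zip(f_evals, degree_bounds):
        # Fresh, independent random pair per polynomial, as in the conclusion above.
        alpha, beta = secrets.randbelow(P), secrets.randbelow(P)
        for k, h in enumerate(H):
            g_evals[k] = (g_evals[k] + alpha * evals[k]
                          + beta * pow(h, D - d, P) * evals[k]) % P
    return interpolation_degree(g_evals) < D

# Two honest polynomials of degree 4 and 6, with d_1 = 5, d_2 = 7 and D = 8.
f1 = [(pow(h, 4, P) + 3 * h) % P for h in H]
f2 = [(2 * pow(h, 6, P) + 5) % P for h in H]
print(adjust_and_check([f1, f2], [5, 7], 8))  # True -> accepted
```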