Gram Root Decomposition over the Polynomial Ring: Application to Sphericalization of Discrete Gaussian

Hiroki Okada^{1,2} (https://orcid.org/0000-0002-5687-620X) and Tsuyoshi Takagi^2
^1 KDDI Research, Inc., Saitama, 356-8502, Japan
^2 The University of Tokyo, Tokyo, 113-8654, Japan

Keywords: Lattice-Based Cryptography, Polynomial Ring, Discrete Gaussian, Gram Root Decomposition.
Abstract: Efficient constructions of lattice-based cryptography are often based on the polynomial ring. Furthermore, many advanced lattice-based cryptosystems require the analysis of the discrete Gaussian under convolutions and linear transformations. In this paper, we present an efficient Gram root decomposition algorithm over the polynomial ring and an application to the sphericalization of the discrete Gaussian. Let r be a polynomial with spherical discrete Gaussian coefficients and e be a fixed polynomial. Then, the coefficient vector of r·e is (statistically close to) a non-spherical discrete Gaussian whose (scaled) covariance matrix is G_e := EE^⊤, where E is composed of rotations of the coefficient vector of e. Given G_e, our algorithm outputs polynomials ζ_1, ..., ζ_l s.t. ∑_{i=1}^{l} G_{ζ_i} + G_e is a scalar matrix. The objective of this algorithm is similar to that of the (ring version of the) integral Gram root decomposition proposed by Ducas et al. (Eurocrypt 2020). Notably, our algorithm ensures bounds on the norm of ζ_i and on the minimum eigenvalue of G_{ζ_i}, whereas Ducas et al.'s algorithm does not ensure such bounds. Owing to these bounds, we can obtain a polynomial (r_0 e + ∑_{i=1}^{l} r_i ζ_i) whose coefficients are spherical discrete Gaussians, where the r_i are polynomials with discrete Gaussian coefficients; i.e., we can "cancel out" the dependence between the coefficients.
1 INTRODUCTION
Lattice-based cryptosystems (Kiltz et al., 2018; Bos et al., 2018; Fouque et al., 2020) have been selected as NIST post-quantum cryptography (PQC) standards (Alagic et al., 2022). Lattice-based schemes, including the PQC standards, are often based on polynomial rings, e.g., NTRU (Hoffstein et al., 1998; Fouque et al., 2020), Ring-LWE (Stehlé et al., 2009; Lyubashevsky et al., 2010) and Module-LWE (Brakerski et al., 2011; Langlois and Stehlé, 2015), to achieve better efficiency.

The discrete Gaussian probability distribution (Definition 2.2) is an important object in lattice cryptography and, more generally, in the mathematical aspects of lattices. For example, the analysis of the computational hardness of lattice problems (Regev, 2005; Micciancio and Regev, 2007; Gentry et al., 2008; Peikert, 2009; Brakerski et al., 2013) relies on the useful properties of discrete Gaussians.

In addition, many advanced lattice-based cryptosystems such as identity-based encryption (Gentry et al., 2008; Agrawal et al., 2010) and functional encryption (Agrawal et al., 2011) require discrete Gaussian sampling algorithms that are efficient and secure against side-channel attacks, e.g., (Gentry et al., 2008; Peikert, 2010; Micciancio and Peikert, 2012; Micciancio and Walter, 2017; Genise and Micciancio, 2018; Ducas et al., 2020). While most works rely on floating-point arithmetic (FPA), Ducas et al. (Ducas et al., 2020) presented an algorithm without FPA, which is efficient and amenable to side-channel countermeasures. The core technique of (Ducas et al., 2020) is the integral matrix Gram root decomposition, an algorithm that obtains an integer matrix A s.t. G = AA^⊤ for the target covariance matrix G.

Many studies have analyzed the properties (e.g., correlation, convolutions, linear transformations) of discrete Gaussians: (Peikert, 2010; Agrawal et al., 2013; Aggarwal and Regev, 2016; Genise et al., 2020; Okada et al., 2023). The discrete Gaussian distribution is called spherical if its covariance matrix is a scalar matrix, and ellipsoidal otherwise. Although lattice-based cryptography usually uses the spherical discrete Gaussian, some applications rely on the ellipsoidal discrete Gaussian because of artifacts of the proof techniques (Agrawal et al., 2013). As discussed
in (Lyubashevsky et al., 2010), an ellipsoidal discrete Gaussian makes certain applications and their proofs more cumbersome than in the case of the spherical discrete Gaussian.
Our Contributions. In this paper, we advance the research on the properties of ring polynomials whose coefficients are distributed according to the discrete Gaussian distribution. Our contributions are 1) an algorithm for Gram root decomposition over the ring and 2) its application to the sphericalization of a discrete Gaussian over the ring.
Gram Root Decomposition over the Ring. First, we present an efficient Gram root decomposition algorithm for polynomials.

Let r be a polynomial over the ring R (defined in Eq. (1)) whose coefficient vector (Definition 3.7) is a multivariate spherical discrete Gaussian, and let e be a fixed polynomial over R. Then, the coefficient vector of r·e is (statistically close to) a non-spherical discrete Gaussian whose (scaled) covariance matrix is G_e := EE^⊤, where E is composed of rotations of the coefficient vector of e. That is, E is the coefficient matrix of e, and G_e is the coefficient Gram matrix of e, as defined in Definition 3.7.

Given e (and G_e), our Gram root decomposition algorithm outputs polynomials ζ_1, ..., ζ_l s.t. ∑_{i=1}^{l} G_{ζ_i} + G_e becomes a scalar matrix βI for some β > 0, where G_{ζ_i} is the coefficient Gram matrix of ζ_i. In other words, the Gram root decomposition algorithm outputs polynomials whose sum of coefficient Gram matrices "diagonalizes" the given matrix G_e.

Notably, this algorithm also ensures an upper bound on the norm of ζ_i and a lower bound on the minimum eigenvalue of G_{ζ_i}. These bounds are crucial for the convolution theorem described below. The objective of our Gram root decomposition algorithm is similar to that of the integral Gram root decomposition proposed by Ducas et al. (Ducas et al., 2020). However, their method does not ensure such bounds on the outputs, as our algorithm does; thus, it is not sufficient for the application that we explain next.
Application: Sphericalizing the Discrete Gaussian over the Ring. As an application of our Gram root decomposition algorithm, we show how to "sphericalize" ring polynomials with (ellipsoidal) discrete Gaussian coefficients. Let r_0, ..., r_l be polynomials with a spherical discrete Gaussian coefficient vector. Given a fixed e ∈ R and G_e, we output polynomials ζ_1, ..., ζ_l s.t. ∑_{i=1}^{l} G_{ζ_i} + G_e = βI by using our Gram root decomposition algorithm. (More generally, given e_1, ..., e_m, our algorithm outputs ζ_1, ..., ζ_l s.t. ∑_{i=1}^{l} G_{ζ_i} + ∑_{i=1}^{m} G_{e_i} is a scalar matrix; we set m = 1 in the abstract and in Section 1 for simplicity.) Then, we show that the coefficient vector of the polynomial (r_0 e + ∑_{i=1}^{l} r_i ζ_i) ∈ R follows a discrete Gaussian distribution whose covariance is a scalar matrix βI, i.e., a spherical discrete Gaussian.
Notably, the above convolution theorem requires a lower bound on the minimum eigenvalue of G_{ζ_i}. It is not trivial to obtain a non-negligibly large lower bound on the minimum eigenvalue of random matrices, as analyzed in, e.g., (Tao, 2012; Nguyen and Vu, 2016). Owing to the bounds ensured by our algorithm, we can prove the convolution theorem.
Organization. The remainder of this paper is organized as follows. In Section 2, we provide the necessary definitions and lemmas. We analyze the basic properties of the polynomial ring of concern (defined in Eq. (1)) in Section 3; these properties are the building blocks of this paper and may be of independent interest. We propose our Gram root decomposition algorithm in Section 4. Then, as an application, we show how to sphericalize discrete Gaussians in Section 5. Finally, we summarize this paper and discuss future work in Section 6.
2 PRELIMINARIES

In Section 2.1, we provide the notation used in this paper. Then, we provide the necessary definitions and lemmas on lattices in Section 2.2 and on the Gaussian distribution in Section 2.3.
2.1 Notations
The base-2 logarithm is denoted by log. For N ∈ ℕ, define [N] := {1, ..., N}. The size of a set S is denoted by |S|.

We use bold lower-case letters for vectors and bold upper-case letters for matrices. We write the transpose of x as x^⊤. The ℓ_2-norm and ℓ_∞-norm of x are denoted by ∥x∥ and ∥x∥_∞, respectively. We denote the identity matrix by I_n ∈ ℤ^{n×n}. We write G ≻ 0 if G is positive definite. A square root of G ≻ 0 is a nonsingular matrix S such that SS^⊤ = G, which is written as S = √G. Note that (√G)^{−1} = S^{−1} = (S^{−⊤})^⊤ = (√(G^{−1}))^⊤ holds. The largest and smallest singular values of a matrix S are denoted by σ_max(S) and σ_min(S), respectively. We denote by ∥S∥ the matrix norm of S induced by the ℓ_2-norm. Note that we have σ_max(S) = ∥S∥, and if σ_min(S) ≠ 0, i.e., S is nonsingular, then σ_min(S)^{−1} = ∥S^{−1}∥ holds. The Frobenius norm of S is ∥S∥_F = √(tr(S^⊤S)). Let ∥S∥_len = max_{i∈[n]} ∥s_i∥, where s_i is the i-th column vector of S; then we have:
Fact 2.1. For any matrix S, we have ∥S∥_len ≤ ∥S∥ ≤ ∥S∥_F, and ∥S_1 S_2∥_len ≤ ∥S_1∥·∥S_2∥_len ≤ ∥S_1∥·∥S_2∥.
2.2 Lattices

A lattice L is the set of all integer linear combinations of linearly independent vectors b_1, ..., b_n ∈ ℝ^m, i.e., L = {∑_{i=1}^{n} z_i b_i | z ∈ ℤ^n}. If we arrange the vectors b_i as the columns of a matrix B ∈ ℝ^{m×n}, then we have
L := L(B) = {Bz | z ∈ ℤ^n} = Bℤ^n.
The rank of this lattice is n and its dimension is m. If n = m, then the lattice is called full rank. For arbitrary c ∈ ℝ^m, a coset of the lattice L is defined as L + c := {v + c | v ∈ L}. The dual of a lattice L is L̂ := {x | ∀y ∈ L, ⟨x, y⟩ ∈ ℤ}. We denote the volume of the fundamental parallelepiped of L by det(L). We have det(L̂) = 1/det(L). For a full-rank lattice L(B), we have det(L(B)) = |det(B)|. For a rank-n lattice L and i = 1, ..., n, the successive minimum λ_i(L) is defined as the radius of the smallest ball that contains i linearly independent vectors of L. The integer lattice L := ℤ^n is the primary focus of this paper.
2.3 Gaussians

The continuous Gaussian distribution with mean 0 and standard deviation σ > 0 is denoted by N_σ. For a rank-n matrix S ∈ ℝ^{n×m}, the (centered) ellipsoidal Gaussian function on ℝ^n with the (scaled) covariance matrix G = SS^⊤ ∈ ℝ^{n×n} is defined as:
ρ_S(x) := exp(−π·x^⊤(SS^⊤)^{−1}x).
Since the function ρ_S(x) is determined exactly by G, we may write ρ_S = ρ_{√G}. When S = sI_n, we write ρ_S as ρ_s. For any set A ⊆ ℝ^n, we define ρ_S(A) := ∑_{x∈A} ρ_S(x).

We define the discrete Gaussian distribution over the lattice L as follows:

Definition 2.2 (Discrete Gaussian). For a full column-rank matrix S, the (centered) discrete Gaussian distribution over a lattice L is defined as
∀x ∈ L, D_{L,S}(x) = ρ_S(x)/ρ_S(L).
In particular, when SS^⊤ = s²I_n for some s > 0, we abbreviate D_{L,S} as D_{L,s} and call it the spherical discrete Gaussian distribution.

The smoothing parameter of a lattice L is defined as η_ε(L) = min{s | ρ_{1/s}(L̂) ≤ 1 + ε} for ε > 0. Unless otherwise specified, we set ε to be negligibly small, ε = negl(λ). An upper bound on η_ε(L) is obtained from the successive minimum λ_n(L) (although (Gentry et al., 2008, Lemma 3.1) provides a sharper bound, we rely on Lemma 2.3 for simplicity):

Lemma 2.3 ((Micciancio and Regev, 2007, Lemma 3.3)). Define η^+_ε(ℤ^n) := √(ln(2n(1 + 1/ε))/π). For any rank-n lattice L and any ε > 0, we have η_ε(L) ≤ λ_n(L)·η^+_ε(ℤ^n). In particular, η_ε(ℤ^n) ≤ η^+_ε(ℤ^n) holds.
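For concreteness, η^+_ε(ℤ^n) is straightforward to evaluate; the short Python snippet below (an editorial illustration, not part of the original text) computes it from the formula of Lemma 2.3 for a few sample parameters.

```python
import math

def eta_plus(n: int, eps: float) -> float:
    """eta^+_eps(Z^n) = sqrt(ln(2n(1 + 1/eps)) / pi), as in Lemma 2.3."""
    return math.sqrt(math.log(2 * n * (1 + 1 / eps)) / math.pi)

if __name__ == "__main__":
    for n in (256, 512, 1024):
        # eps = 2^-80 is one common "negligible" choice; any negligible eps works here.
        print(n, round(eta_plus(n, 2.0 ** -80), 3))
```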
For simplicity of notation, we also define η̃_ε(·) := √2·η_ε(·) and η̃^+_ε(ℤ^n) := √2·η^+_ε(ℤ^n). Note that we have η̃^+_ε(ℤ) > η^+_ε(ℤ²). The smoothing parameter is extended to matrices as follows:

Definition 2.4 ((Peikert, 2010, Definition 2.3)). Let G ≻ 0 be any positive definite matrix. For any lattice L, we say that √G ≥ η_ε(L) if η_ε((√G)^{−1}·L) ≤ 1.

For a full-rank lattice, we obtain a sufficient condition as follows:

Fact 2.5. For any full-rank lattice L(B) and G ≻ 0, √G ≥ η_ε(L) holds if 1 ≥ ∥(√G)^{−1}∥·∥B∥_len·η^+_ε(ℤ^n), i.e., if σ_min(√G) ≥ ∥B∥_len·η^+_ε(ℤ^n).

Proof. By Fact 2.1 and Lemma 2.3, we have
η_ε((√G)^{−1}L) ≤ λ_n((√G)^{−1}L)·η^+_ε(ℤ^n) ≤ ∥(√G)^{−1}B∥_len·η^+_ε(ℤ^n) ≤ ∥(√G)^{−1}∥·∥B∥_len·η^+_ε(ℤ^n) ≤ 1.
The linear transformation of a discrete Gaussian is as follows:

Lemma 2.6 (Special case of (Genise et al., 2020, Lemma 1)). For any nonsingular matrices S, T ∈ ℤ^{n×n}, we have T·D_{ℤ^n,S} = D_{T·ℤ^n,TS}.

The sum of two ellipsoidal discrete Gaussians is statistically close to an ellipsoidal discrete Gaussian:

Lemma 2.7 (Special case of (Peikert, 2010, Thm. 3.1)). Let G_1, G_2 ≻ 0 be positive definite matrices and define G_3 := (G_1^{−1} + G_2^{−1})^{−1}. Let L_1, L_2 be full-rank lattices such that √G_2 ≥ η_ε(L_2) and √G_3 ≥ η_ε(L_1), and let
X := {(x_1, x_2) | x_1 ← D_{L_1,√G_1}, x_2 ← x_1 + D_{L_2−x_1,√G_2}}.
Then, the marginal distribution of x_2 in X is statistically close to D_{L_2,√(G_1+G_2)}.

In particular, when L_1 ⊆ L_2, we can simplify Lemma 2.7 because the coset L_2 − x_1 is equal to L_2 itself for any x_1 ∈ L_1:

Corollary 2.8. Let G_1, G_2 ≻ 0 be positive definite matrices and define G_3 := (G_1^{−1} + G_2^{−1})^{−1}. Let L_1, L_2 be full-rank lattices such that L_1 ⊆ L_2, √G_2 ≥ η_ε(L_2) and √G_3 ≥ η_ε(L_1). Then, we have
D_{L_1,√G_1} + D_{L_2,√G_2} ≈_s D_{L_2,√(G_1+G_2)},
where ≈_s denotes statistical closeness.
3 PROPERTIES OF THE POLYNOMIAL RING

In this section, we analyze the basic properties of the polynomial ring defined in Eq. (1). The properties derived in this section are the building blocks for the construction of our algorithm presented in Section 4 and its application in Section 5.
3.1 Definition
Let Z[X] be a set of polynomials with integer coeffi-
cients. In this paper, we consider a polynomial ring
R = Z[X]/(X
n
+ 1) for n a power of 2, (1)
which is often used in lattice-based cryptography,
e.g., (Lyubashevsky et al., 2010; Kiltz et al., 2018;
Bos et al., 2018).
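For illustration (an editorial sketch, not part of the original text), multiplication in R is a negacyclic convolution of coefficient vectors, since X^n ≡ −1:

```python
import numpy as np

def ring_mul(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Multiply a, b in R = Z[X]/(X^n + 1), given as length-n coefficient vectors."""
    n = len(a)
    c = np.zeros(n, dtype=np.int64)
    for i in range(n):
        for j in range(n):
            k = i + j
            if k < n:
                c[k] += a[i] * b[j]
            else:
                c[k - n] -= a[i] * b[j]  # X^n = -1 wraps with a sign flip
    return c

# Example in R = Z[X]/(X^4 + 1): (1 + X) * X^3 = X^3 + X^4 = -1 + X^3
print(ring_mul(np.array([1, 1, 0, 0]), np.array([0, 0, 0, 1])))  # [-1  0  0  1]
```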
We define a signed permutation matrix that is useful for analyzing the properties of R.

Definition 3.1. The signed permutation matrix is defined as
P = [[0, −1], [I_{n−1}, 0]] ∈ ℤ^{n×n},   (2)
i.e., the matrix whose first row is (0, ..., 0, −1) and whose remaining rows are (I_{n−1} | 0).
The following facts hold for P:

Fact 3.2 (Properties of P). For P defined in Eq. (2), we have:
P^i = [[O, −I_i], [I_{n−i}, O]], and in particular P^{n/2} = [[O, −I_{n/2}], [I_{n/2}, O]] for n even,   (3)
P^{−i} = [[O, I_{n−i}], [−I_i, O]] = (P^i)^⊤,   (4)
P^{n−i} = P^n·P^{−i} = −P^{−i} = −(P^i)^⊤.   (5)
We also define a reverse permutation matrix:

Definition 3.3. The reverse permutation matrix (not to be confused with the ring R) is defined as
R := [[0, ..., 0, 1], [0, ..., 1, 0], ..., [1, 0, ..., 0]] ∈ ℤ^{n×n},
i.e., the matrix with ones on the anti-diagonal and zeros elsewhere.

The following facts hold for R (and P):

Fact 3.4. RR = I, R^⊤ = R.

Fact 3.5. P^iR = RP^{−i} (equivalently, RP^i = P^{−i}R).
We define an outer-product-like operation ⊗:

Definition 3.6 (⊗). For any m, n ∈ ℕ, A_1, ..., A_m ∈ ℤ^{n×n} and b ∈ ℤ^n, we define:
(A_1 | ··· | A_m) ⊗ b := (A_1b | ··· | A_mb) ∈ ℤ^{n×m},
b^⊤ ⊗ (A_1; ...; A_m) := (b^⊤A_1; ...; b^⊤A_m) ∈ ℤ^{m×n},
where (·|·) denotes horizontal and (·;·) vertical concatenation of blocks.
Finally, using the matrix P defined in Definition 3.1 and the operation ⊗, we define the coefficient vector, coefficient matrix and coefficient Gram matrix of any polynomial a ∈ R as follows:

Definition 3.7 (Coefficient vector / matrix / Gram matrix). Let a = ∑_{i=0}^{n−1} a_iX^i ∈ R be a polynomial. For a, we define the coefficient vector, the coefficient matrix and the coefficient Gram matrix as follows:
a := vec(a) := (a_0, a_1, ..., a_{n−1})^⊤ ∈ ℤ^n,
A := mat(a) := (I | P | ··· | P^{n−1}) ⊗ a ∈ ℤ^{n×n},
G_a := Gram(a) := AA^⊤ ∈ ℤ^{n×n}.
We denote distributions over R as follows:

Definition 3.8. For a distribution χ over ℤ^n, define R(χ) := {a ∈ R | vec(a) ← χ}.
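The following Python sketch (ours, for illustration; it implements P, mat(·) and Gram(·) exactly as in Definitions 3.1 and 3.7) makes these objects concrete on a small example:

```python
import numpy as np

def perm_P(n: int) -> np.ndarray:
    """Signed permutation matrix P of Definition 3.1: P @ vec(a) = vec(a * X)."""
    P = np.zeros((n, n), dtype=np.int64)
    P[0, n - 1] = -1                                # top-right entry is -1 (X^n = -1)
    P[1:, : n - 1] = np.eye(n - 1, dtype=np.int64)
    return P

def mat(a: np.ndarray) -> np.ndarray:
    """Coefficient matrix mat(a): columns are P^i a for i = 0, ..., n-1."""
    n = len(a)
    P = perm_P(n)
    cols, v = [], a.copy()
    for _ in range(n):
        cols.append(v)
        v = P @ v
    return np.stack(cols, axis=1)

def gram(a: np.ndarray) -> np.ndarray:
    """Coefficient Gram matrix Gram(a) = mat(a) mat(a)^T."""
    A = mat(a)
    return A @ A.T

a = np.array([1, 2, 0, -1])    # a = 1 + 2X - X^3 in R = Z[X]/(X^4 + 1)
print(mat(a))                  # negacyclic matrix whose first column is vec(a)
print(gram(a))                 # symmetric, with constant diagonal ||a||^2 = 6
```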
3.2 Properties of the Coefficient Matrix

In this subsection, we present some basic properties of the coefficient matrix mat(a). By Definitions 3.1, 3.6 and 3.7, for any a ∈ R with vec(a) := (a_0, a_1, ..., a_{n−1})^⊤ ∈ ℤ^n, we have:
mat(a) := (I | P | ··· | P^{n−1}) ⊗ vec(a) := (a | Pa | ··· | P^{n−1}a)
= [[a_0, −a_{n−1}, ..., −a_1], [a_1, a_0, ..., −a_2], ..., [a_{n−1}, a_{n−2}, ..., a_0]].   (6)
We can see that mat(a) is a matrix composed of signed permutations (by P^i) of the first column vector a = vec(a). The coefficient matrix mat(a) can also be seen as a matrix composed of permutations of the last row vector (a_{n−1}, a_{n−2}, ..., a_0) = a^⊤R:

Fact 3.9 (Dual representation of mat(a)). For any a ∈ R, we have:
mat(a) = (a^⊤R) ⊗ (P^{n−1}; P^{n−2}; ...; P; I).
3.3 Properties of the Coefficient Gram Matrix

In this subsection, we present some basic properties of the coefficient Gram matrix Gram(a). We can explicitly write the coefficient Gram matrix as follows:
Fact 3.10. Let a ∈ R and a := vec(a). Then, we have:
Gram(a) = [[∥a∥², a^⊤Pa, ..., a^⊤P^{n−1}a], [a^⊤P^{−1}a, ∥a∥², ..., a^⊤P^{n−2}a], ..., [a^⊤P^{−(n−1)}a, a^⊤P^{−(n−2)}a, ..., ∥a∥²]],
i.e., the (i, j) entry of Gram(a) is a^⊤P^{j−i}a (indices starting from 0).
Proof. By Fact 3.9, the i-th row of mat(a) is a^⊤RP^{n−1−i}. Hence, by Definition 3.7, Fact 3.4 and Fact 3.5, the (i, j) entry of Gram(a) := mat(a)(mat(a))^⊤ is
(a^⊤RP^{n−1−i})(a^⊤RP^{n−1−j})^⊤ = a^⊤RP^{n−1−i}P^{−(n−1−j)}Ra = a^⊤RP^{j−i}Ra = a^⊤P^{i−j}RRa = a^⊤P^{i−j}a.
Thus, the claim follows by the subsequent Fact 3.11.
Fact 3.11. a^⊤P^{−i}a = a^⊤(P^i)^⊤a = a^⊤P^ia.

Proof. Follows from Fact 3.2, namely (P^i)^⊤ = P^{−i}, and the fact that a scalar equals its own transpose.
Furthermore, owing to the properties of P (shown in Fact 3.2), we show that coefficient Gram matrices have a symmetry in their elements. To begin with, we define the inverse function of mat(·) for simplicity of notation.

Definition 3.12 (mat^{−1}). For any a ∈ R and A := mat(a), we define mat^{−1}(A) := a = vec(a) (the first column vector of A).

Then, we show the symmetry of the elements of the coefficient Gram matrices. Note that the coefficient Gram matrix of a ∈ R is the coefficient matrix (not the coefficient "Gram" matrix) of some b (= a^⊤a) ∈ R, as we will show later in Lemma 3.21.
Lemma 3.13 (Symmetry of Gram(a)). Let a ∈ R, a := vec(a), G_a := Gram(a) and (σ_0, ..., σ_{n−1})^⊤ := mat^{−1}(G_a). Then, we have:
σ_0 = ∥a∥²,
σ_i = −σ_{n−i}   (1 ≤ i ≤ n/2 − 1),   (7)
σ_{n/2} = 0.   (8)
(Note: here, n is assumed to be even; this is satisfied by the definition in Eq. (1).)

Proof. By Fact 3.10, we have σ_0 = ∥a∥² and σ_i = a^⊤P^ia for i = 1, ..., n−1. Then, for i = 1, ..., n−1, we have:
σ_{n−i} = a^⊤P^{n−i}a = −a^⊤P^{−i}a   (∵ Eq. (5))
= −a^⊤(P^i)^⊤a = −a^⊤P^ia = −σ_i   (∵ Eq. (4), Fact 3.11).
Thus, Eq. (7) holds. We have Eq. (8) since a^⊤P^{n/2}a = 0 holds for any a ∈ ℤ^n by Eq. (3) in Fact 3.2 (P^{n/2} is antisymmetric).
Owing to this symmetry, G_a := Gram(a) is determined only by σ_0, ..., σ_{n/2−1} (just as A := mat(a) is determined only by a := vec(a)).
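A quick numerical check of Lemma 3.13 (again an editorial sketch): the first column (σ_0, ..., σ_{n−1}) of Gram(a) indeed satisfies σ_i = −σ_{n−i} and σ_{n/2} = 0 for random a:

```python
import numpy as np

def mat(a):
    """Coefficient matrix of a in Z[X]/(X^n + 1): column i is vec(a * X^i)."""
    n = len(a)
    cols, v = [], np.array(a, dtype=np.int64)
    for _ in range(n):
        cols.append(v)
        v = np.concatenate(([-v[-1]], v[:-1]))   # multiply by X: rotate with a sign flip
    return np.stack(cols, axis=1)

rng = np.random.default_rng(0)
n = 8
a = rng.integers(-5, 6, size=n)
G = mat(a) @ mat(a).T
sigma = G[:, 0]                                   # sigma = mat^{-1}(Gram(a))
assert sigma[0] == a @ a
assert sigma[n // 2] == 0
assert all(sigma[i] == -sigma[n - i] for i in range(1, n // 2))
print("Lemma 3.13 verified for a =", a, "; sigma =", sigma)
```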
3.4 Rotation

For any a ∈ R, multiplication by X^i can be regarded as a "rotation" of the coefficient vector/matrix by the permutation matrix P^i. The coefficient Gram matrix is invariant under multiplication by X^i:

Fact 3.14 (Rotation). For any a ∈ R and i ∈ ℤ,
vec(aX^i) = P^ia, mat(aX^i) = P^iA, Gram(aX^i) = Gram(a).

Proof. Let a = ∑_{i=0}^{n−1} a_iX^i. Then, we have
aX = −a_{n−1} + a_0X + ··· + a_{n−2}X^{n−1},
hence vec(aX^i) = P^ia. Consequently,
mat(aX^i) = (I | P | ··· | P^{n−1}) ⊗ vec(aX^i) = (I | P | ··· | P^{n−1}) ⊗ (P^ia) = P^i·((I | P | ··· | P^{n−1}) ⊗ a) = P^iA,
which also equals AP^i by Lemma 3.17 below. Therefore,
Gram(aX^i) = (AP^i)(AP^i)^⊤ = AP^iP^{−i}A^⊤ = AA^⊤   (∵ Eq. (4)).
3.5 Commutativity

We show an important lemma for analyzing the coefficient vector of a product of polynomials over R; this result is why we define the coefficient matrix as in Definition 3.7.

Lemma 3.15 (Multiplication over R). For a, b ∈ R,
vec(ab) = Ab = Ba,
where A := mat(a), a := vec(a), B := mat(b), and b := vec(b).
Proof. Note that X^{n+i} = (X^n + 1)X^i − X^i ≡ −X^i holds in R. We have
a = a_0 + a_1X + ··· + a_{n−1}X^{n−1} = x^⊤a and
b = b_0 + b_1X + ··· + b_{n−1}X^{n−1} = x^⊤b,
where a := (a_0, a_1, ..., a_{n−1})^⊤ and b := (b_0, b_1, ..., b_{n−1})^⊤ are the coefficient vectors of a and b, and x := (1, X, X², ..., X^{n−1})^⊤. Then, we have
ab = (a_0b_0 − a_1b_{n−1} − a_2b_{n−2} − ··· − a_{n−1}b_1)
+ (a_0b_1 + a_1b_0 − a_2b_{n−1} − ··· − a_{n−1}b_2)X
+ (a_0b_2 + a_1b_1 + a_2b_0 − ··· − a_{n−1}b_3)X² + ···
= x^⊤·[[b_0, −b_{n−1}, ..., −b_1], [b_1, b_0, ..., −b_2], ..., [b_{n−1}, b_{n−2}, ..., b_0]]·(a_0, a_1, ..., a_{n−1})^⊤
= x^⊤Ba.
Thus, we have vec(ab) = Ba. We obtain vec(ab) = Ab in a similar manner.
It is known that the ring R defined in Eq. (1) is commutative: for any a, b ∈ R, we have ab = ba. This can also be confirmed by Lemma 3.15: we obtain vec(ab) = Ab = Ba = vec(ba) by Lemma 3.15, and vec(·) is a bijection from R to ℤ^n.
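The identities vec(ab) = Ab = Ba, and the commutativity mat(ab) = AB = BA shown in Theorem 3.16 below, are easy to confirm numerically; the sketch below (ours, for illustration only) does so for random polynomials:

```python
import numpy as np

def mul(a, b):
    """Negacyclic product of coefficient vectors in Z[X]/(X^n + 1)."""
    n = len(a)
    c = np.zeros(n, dtype=np.int64)
    for i in range(n):
        for j in range(n):
            c[(i + j) % n] += a[i] * b[j] * (-1 if i + j >= n else 1)
    return c

def mat(a):
    n = len(a)
    cols, v = [], np.array(a, dtype=np.int64)
    for _ in range(n):
        cols.append(v)
        v = np.concatenate(([-v[-1]], v[:-1]))
    return np.stack(cols, axis=1)

rng = np.random.default_rng(1)
n = 8
a, b = rng.integers(-3, 4, size=n), rng.integers(-3, 4, size=n)
A, B = mat(a), mat(b)
assert np.array_equal(mul(a, b), A @ b)          # vec(ab) = A b (Lemma 3.15)
assert np.array_equal(mul(a, b), B @ a)          # vec(ab) = B a
assert np.array_equal(mat(mul(a, b)), A @ B)     # mat(ab) = AB (Theorem 3.16)
assert np.array_equal(A @ B, B @ A)              # coefficient matrices commute
print("Lemma 3.15 and Theorem 3.16 verified.")
```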
Importantly, the coefficient matrices also commute:

Theorem 3.16 (Commutativity of the coefficient matrices). For any A := mat(a) and B := mat(b),
mat(ab) = AB = BA.

Proof. Let a := vec(a) and b := vec(b). We have:
mat(ab) = (I | P | ··· | P^{n−1}) ⊗ vec(ab)
= (I | P | ··· | P^{n−1}) ⊗ (Ab)   (∵ Lemma 3.15)
= (Ab | PAb | ··· | P^{n−1}Ab)
= (Ab | APb | ··· | AP^{n−1}b)   (∵ Lemma 3.17)
= A·((I | P | ··· | P^{n−1}) ⊗ b) = AB.
By Lemma 3.15, vec(ab) = Ab = Ba; thus, similarly, we also have mat(ab) = BA.
We complete the above proof by presenting the
deferred Lemma 3.17:
Lemma 3.17. For any a ∈ R, A := mat(a) and i ∈ ℤ, we have P^iA = AP^i.

Proof. Let a := vec(a). Then, we have
P^iA = P^i·((I | P | ··· | P^{n−1}) ⊗ a) = (I | P | ··· | P^{n−1}) ⊗ (P^ia) = mat(P^ia), and
AP^i = ((a^⊤R) ⊗ (P^{n−1}; P^{n−2}; ...; P; I))·P^i = (a^⊤RP^i) ⊗ (P^{n−1}; P^{n−2}; ...; P; I)
= ((P^ia)^⊤R) ⊗ (P^{n−1}; P^{n−2}; ...; P; I)   (∵ Fact 3.5)
= mat(P^ia),
where we use the fact that a^⊤RP^i = a^⊤P^{−i}R = (P^ia)^⊤R holds.
As a corollary of Lemma 3.15, we obtain the following fact:

Corollary 3.18. For a, b ∈ R, ab = 0 holds if and only if a = 0 or b = 0. Thus, A := mat(a) is nonsingular for any a ≠ 0.
3.6 Transpose

We first define the transpose of polynomials in R:

Definition 3.19 (Transpose in the ring). For a := a(X) := ∑_{i=0}^{n−1} a_iX^i ∈ R, we define its transpose as a^⊤ := a(X^{−1}) ∈ R.
Then, we can derive the coefficient vector, coefficient matrix and coefficient Gram matrix of transpose polynomials as follows:

Fact 3.20. For any a := ∑_{i=0}^{n−1} a_iX^i ∈ R, we have:
vec(a^⊤) = (a_0, −a_{n−1}, −a_{n−2}, ..., −a_1)^⊤,   (9)
mat(a^⊤) = (mat(a))^⊤ (= A^⊤),   (10)
Gram(a^⊤) = (mat(a))^⊤·mat(a) (= A^⊤A).   (11)

Proof. Note that X^n + 1 ≡ 0 ⇔ −1 ≡ X^n ⇔ X^{−i} ≡ −X^{n−i} holds. Hence, we have
a^⊤ := a(X^{−1}) := ∑_{i=0}^{n−1} a_iX^{−i} = a_0 + ∑_{i=1}^{n−1}(−a_i)X^{n−i} = a_0 + ∑_{i=1}^{n−1}(−a_{n−i})X^i.
Thus, we obtain Eq. (9). We can derive Eq. (10) since
mat(a^⊤) = (I | P | ··· | P^{n−1}) ⊗ vec(a^⊤) = (mat(a))^⊤
via Eq. (6). We obtain Eq. (11) by definition.
Next, we show an important lemma for analyzing the coefficient Gram matrix: for any a ∈ R, the coefficient Gram matrix of a is the coefficient matrix (not the coefficient "Gram" matrix) of the product of a^⊤ and a:

Lemma 3.21. For any a ∈ R, Gram(a) = mat(a^⊤a).

Proof. Let A := mat(a) and a := vec(a). Then, we have
Gram(a) = AA^⊤ = A^⊤A   (∵ Theorem 3.16)
= A^⊤·((I | P | ··· | P^{n−1}) ⊗ a)
= (I | P | ··· | P^{n−1}) ⊗ (A^⊤a)   (∵ Lemma 3.17)
= (I | P | ··· | P^{n−1}) ⊗ vec(a^⊤a) = mat(a^⊤a),
where we use the facts that mat(a^⊤) = A^⊤ (by Fact 3.20) and Lemma 3.15.
The above lemma implies that each column of Gram(a) is a rotation (by P^i) of its first column vector vec(a^⊤a):

Corollary 3.22. For any a ∈ R, we have
mat^{−1}(Gram(a)) = mat^{−1}(mat(a^⊤a)) = vec(a^⊤a).
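The following editorial sketch checks Fact 3.20 and Lemma 3.21 numerically: the transpose polynomial a^⊤ = a(X^{−1}) has coefficient matrix A^⊤, and Gram(a) equals mat(a^⊤a).

```python
import numpy as np

def mat(a):
    n = len(a)
    cols, v = [], np.array(a, dtype=np.int64)
    for _ in range(n):
        cols.append(v)
        v = np.concatenate(([-v[-1]], v[:-1]))
    return np.stack(cols, axis=1)

def mul(a, b):
    n = len(a)
    c = np.zeros(n, dtype=np.int64)
    for i in range(n):
        for j in range(n):
            c[(i + j) % n] += a[i] * b[j] * (-1 if i + j >= n else 1)
    return c

def transpose_poly(a):
    """vec(a^T) = (a_0, -a_{n-1}, ..., -a_1), cf. Eq. (9)."""
    a = np.array(a, dtype=np.int64)
    return np.concatenate(([a[0]], -a[:0:-1]))

rng = np.random.default_rng(2)
a = rng.integers(-4, 5, size=8)
at = transpose_poly(a)
assert np.array_equal(mat(at), mat(a).T)                   # Eq. (10)
assert np.array_equal(mat(a) @ mat(a).T, mat(mul(at, a)))  # Lemma 3.21: Gram(a) = mat(a^T a)
print("Fact 3.20 and Lemma 3.21 verified.")
```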
4 OUR ALGORITHM FOR GRAM ROOT DECOMPOSITION OVER THE RING

In this section, we present our algorithm for Gram root decomposition over the ring R, Algorithm 1. The inputs of the algorithm are "short" polynomials e_1, ..., e_m ∈ R s.t. ∥e_i∥ ≤ B ∈ ℕ for all i ∈ [m]. (Note that we explained our algorithm with m = 1 in the abstract and introduction of this paper for simplicity.) The algorithm outputs ζ = (ζ_1, ..., ζ_l) ∈ R^l s.t. ∑_{i=1}^{l} G_{ζ_i} + ∑_{i=1}^{m} G_{e_i} = mnB²·I, where G_{ζ_i} := Gram(ζ_i) and G_{e_i} := Gram(e_i). In other words, the goal of the algorithm is to "diagonalize" the sum of the coefficient Gram matrices ∑_{i=1}^{m} G_{e_i}. Furthermore, the output polynomials ζ_1, ..., ζ_l are short (∥ζ_i∥ ≤ √(2m)·B), and a lower bound on the minimum singular value of Z_i := mat(ζ_i), equivalently on the minimum eigenvalue of G_{ζ_i}, is guaranteed (σ_min(Z_i) ≥ 2/n). These conditions on the output are necessary for the application we present in Section 5.

We prove the correctness of Algorithm 1 in Section 4.1. We then show that the outputs of Algorithm 1 satisfy the bounds (∥ζ_i∥ ≤ √(2m)·B and σ_min(mat(ζ_i)) ≥ 2/n) in Section 4.2.
4.1 Correctness

We show that Algorithm 1 works correctly:

Theorem 4.1. The output ζ_1, ..., ζ_l ∈ R of Algorithm 1 satisfies ∑_{i=1}^{l} G_{ζ_i} = G (= mnB²·I − ∑_{i=1}^{m} G_{e_i}, as specified in Algorithm 1) and l < (5/2)·n.
Proof. The first part of the algorithm (lines 2 to 10) decomposes the non-diagonal elements of
G := mnB²·I − ∑_{i=1}^{m} G_{e_i},
i.e., of Ḡ := ∑_{i=1}^{m} G_{e_i}. Define σ := (σ_0, ..., σ_{n−1})^⊤ := mat^{−1}(Ḡ); then we have σ = ∑_{i=1}^{m} mat^{−1}(G_{e_i}) = ∑_{i=1}^{m} vec(e_i^⊤e_i) via Corollary 3.22. On line 4, we decompose |σ_i| into four non-negative integer squares c_1², ..., c_4². Such integer squares exist for any natural number according to Lagrange's four-square theorem, and we can efficiently compute them via the Rabin–Shallit (RS) algorithm of Theorem 4.2. For z^(j) := (c_j − sgn(σ_i)·c_j·X^i)·X^r ∈ R on line 6, let τ^(j) := (τ^(j)_0, ..., τ^(j)_{n−1})^⊤ := mat^{−1}(Gram(z^(j))). Then, by Lemma 4.4, we have:
τ^(j)_0 = 2c_j², τ^(j)_i = −sgn(σ_i)·c_j², τ^(j)_{n−i} = sgn(σ_i)·c_j², τ^(j)_k = 0 (k ∉ {0, i, n−i}).
Hence, we have:
∑_{j=1}^{4} τ^(j)_0 = 2·∑_{j=1}^{4} c_j² = 2|σ_i|,
∑_{j=1}^{4} τ^(j)_i = −sgn(σ_i)·∑_{j=1}^{4} c_j² = −σ_i,
∑_{j=1}^{4} τ^(j)_{n−i} = sgn(σ_i)·∑_{j=1}^{4} c_j² = σ_i = −σ_{n−i}.
Therefore, at line 10, mat^{−1}(∑_{ζ∈S_1} Gram(ζ)) = (∑_{i=1}^{n/2−1} 2|σ_i|, −σ_1, ..., −σ_{n−1})^⊤; thus, we have
G − ∑_{ζ∈S_1} Gram(ζ) = mnB²·I − (Ḡ + ∑_{ζ∈S_1} Gram(ζ)) = mnB²·I − γI = βI,
where γ := σ_0 + ∑_{i=1}^{n/2−1} 2|σ_i| as defined on line 10. Note that |σ_i| ≤ σ_0 = ∑_{i=1}^{m} ∥e_i∥² ≤ mB² holds by Lemma 4.3; thus, γ ≤ (n−1)σ_0 ≤ m(n−1)B². Hence, we have mB² ≤ β.
Algorithm 1: Gram root decomposition over the ring.
Input: e_1, ..., e_m ∈ R s.t. ∥e_i∥ ≤ B for all i ∈ [m].
Output: ζ_1, ..., ζ_l ∈ R s.t. ∑_{i=1}^{l} G_{ζ_i} = G := mnB²·I − ∑_{i=1}^{m} G_{e_i} ∈ ℤ^{n×n}, ∥ζ_i∥ ≤ √(2m)·B and σ_min(Z_i) ≥ 2/n, where Z_i := mat(ζ_i), G_{ζ_i} := Gram(ζ_i) and G_{e_i} := Gram(e_i) for any i.

1:  S_1 := ∅, S_2 := ∅   // sets to store ζ_1, ..., ζ_l
    // Decompose the non-diagonal elements of G:
2:  Define Ḡ := ∑_{i=1}^{m} G_{e_i} and σ := (σ_0, ..., σ_{n−1})^⊤ := mat^{−1}(Ḡ) (cf. Definition 3.12)
    // |σ_i| ≤ σ_0 = ∑_{i=1}^{m} ∥e_i∥² ≤ mB² by Lemma 4.3; σ_i = −σ_{n−i} for i ∈ [1, n/2 − 1] and σ_{n/2} = 0 by Lemma 3.13
3:  for i = 1 to n/2 − 1 do
4:      Find c_1, ..., c_4 ∈ ℕ s.t. ∑_{j=1}^{4} c_j² = |σ_i| with the RS algorithm (Theorem 4.2)
5:      for j = 1 to 4 do
6:          z^(j) := (c_j − sgn(σ_i)·c_j·X^i)·X^r ∈ R for r ←$ ℤ_n   // ∥z^(j)∥ = √(2c_j²) ≤ √(2|σ_i|) ≤ √(2m)·B
7:          Update S_1 := S_1 ∪ {z^(j)}   // σ_min(mat(z^(j))) ≥ 2c_j/n ≥ 2/n by Lemma 4.8
8:      end for
        // mat^{−1}(∑_{j=1}^{4} Gram(z^(j))) has 2|σ_i| at position 0, −σ_i at position i, −σ_{n−i} at position n−i, and 0 elsewhere, by Lemma 4.4
9:  end for
10: γ := σ_0 + ∑_{i=1}^{n/2−1} 2|σ_i|   // γ ≤ (n − 1)σ_0 ≤ (n − 1)mB² by Lemma 4.3
    // Decompose the diagonal elements of G:
11: β := mnB² − γ ∈ [mB², mnB²)   // G − ∑_{ζ∈S_1} Gram(ζ) = mnB²·I − (Ḡ + ∑_{ζ∈S_1} Gram(ζ)) = mnB²·I − γI = βI
12: l′ := ⌊β/(2mB²)⌋ (< n/2)
13: for i = 1 to l′ do
14:     z := √(2m)·B·X^r ∈ R for r ←$ ℤ_n   // Gram(z) = 2mB²·I by Lemma 4.4
15:     Update S_2 := S_2 ∪ {z}   // σ_min(mat(z)) = √(λ_min(G_z)) = √(2m)·B > 1 > 2/n, ∥z∥ = √(2m)·B
16: end for
17: δ := β − l′·2mB² ∈ [0, 2mB²)
18: Find c_1, ..., c_4 ∈ ℕ s.t. ∑_{j=1}^{4} c_j² = δ with the RS algorithm (Theorem 4.2)
19: for j = 1 to 4 do
20:     z^(j) := c_j·X^r ∈ R for r ←$ ℤ_n   // ∥z^(j)∥ = c_j ≤ √δ < √(2m)·B
21:     Update S_2 := S_2 ∪ {z^(j)}
22: end for
23: return S := S_1 ∪ S_2   // ∑_{ζ∈S_2} Gram(ζ) = βI, G = ∑_{ζ∈S} Gram(ζ), l := |S| = 4(n/2 − 1) + l′ + 4 < (5/2)·n
The second part of the algorithm (lines 11 to 22) decomposes βI. The purpose of lines 12 to 16 is to decompose βI with "short" polynomials; this is needed only to satisfy ∥ζ_i∥ ≤ √(2m)·B. For the monomial z := √(2m)·B·X^r on line 14, Gram(z) = 2mB²·I holds by Lemma 4.4. Thus, at line 17, we have ∑_{ζ∈S_2} Gram(ζ) = l′·2mB²·I, and the rest of the algorithm decomposes (β − l′·2mB²)·I = δI. For the monomial z^(j) := c_j·X^r on line 20, Gram(z^(j)) = c_j²·I holds by Lemma 4.4. Thus, we have ∑_{j=1}^{4} Gram(z^(j)) = ∑_{j=1}^{4} c_j²·I = δI. Hence, we obtain
∑_{ζ∈S_2} Gram(ζ) = βI
at line 23. Therefore, the output S of the algorithm satisfies ∑_{ζ∈S} Gram(ζ) = G. We also have
l := |S| = 4(n/2 − 1) + l′ + 4 < (5/2)·n.
We complete the above proof by describing the deferred facts: Theorem 4.2, Lemma 4.3, and Lemma 4.4.

Theorem 4.2 (Rabin–Shallit (RS) algorithm (Rabin and Shallit, 1986)). For any N ∈ ℕ, there is a randomized algorithm for finding
a, b, c, d ∈ ℕ s.t. a² + b² + c² + d² = N
within O(log²N·loglogN) operations on average.
Lemma 4.3 (Bound on |σ_i|). Let a ∈ R, a := vec(a), G_a := Gram(a) and (σ_0, ..., σ_{n−1})^⊤ := mat^{−1}(G_a). Then, |σ_i| ≤ σ_0 = ∥a∥² holds for any i.

Proof. By Fact 3.10, we have σ_i = a^⊤P^ia for any i ≠ 0. Then, by the Cauchy–Schwarz inequality, |σ_i| = |a^⊤P^ia| ≤ ∥a∥·∥P^ia∥ = ∥a∥² holds.
Lemma 4.4 (Coefficient Gram matrices of binomials and monomials). Let a ∈ R be a binomial a = X^r·(a_0 + a_iX^i) for i ∈ ℕ and r ∈ ℤ. Let a := vec(a), G_a := Gram(a) and (σ_0, ..., σ_{n−1})^⊤ := mat^{−1}(G_a).
Then, we have:
σ_0 = ∥a∥² = a_0² + a_i²,
σ_i = a_0a_i,  σ_{n−i} = −a_0a_i,
σ_k = 0   (k ∉ {0, i, n−i}).
In particular, for a monomial a := a_0·X^r ∈ R with r ∈ ℤ, we have G_a = a_0²·I, i.e., σ_0 = a_0² and σ_i = 0 for i ≠ 0.

Proof. Follows from Fact 3.2 and Fact 3.10.
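For readers who prefer executable pseudocode, the following Python sketch (an editorial illustration; helper names such as gram_root_decompose and four_squares are ours) mirrors Algorithm 1 under two simplifying assumptions that we flag explicitly: a brute-force search replaces the Rabin–Shallit algorithm of Theorem 4.2, and the parameters are chosen so that 2mB² is a perfect square, so that the coefficient √(2m)·B on line 14 is an integer and all arithmetic stays exact.

```python
import numpy as np

def mat(a):
    """Coefficient matrix of a in Z[X]/(X^n + 1) (Definition 3.7)."""
    n = len(a)
    cols, v = [], np.array(a, dtype=np.int64)
    for _ in range(n):
        cols.append(v)
        v = np.concatenate(([-v[-1]], v[:-1]))
    return np.stack(cols, axis=1)

def gram(a):
    A = mat(a)
    return A @ A.T

def four_squares(N):
    """Brute-force Lagrange decomposition N = a^2+b^2+c^2+d^2 (stand-in for Rabin-Shallit)."""
    r = int(N ** 0.5) + 1
    for a in range(r + 1):
        for b in range(a, r + 1):
            for c in range(b, r + 1):
                d2 = N - a * a - b * b - c * c
                if d2 < 0:
                    break
                d = int(round(d2 ** 0.5))
                if d * d == d2:
                    return [a, b, c, d]
    raise ValueError("unreachable for N >= 0")

def monomial(coeff, r, n):
    z = np.zeros(n, dtype=np.int64)
    z[r] = coeff
    return z

def gram_root_decompose(es, B, rng):
    """Sketch of Algorithm 1; assumes 2*m*B^2 is a perfect square so outputs stay integral."""
    m, n = len(es), len(es[0])
    Gbar = sum(gram(e) for e in es)
    sigma = Gbar[:, 0]                           # sigma = mat^{-1}(Gbar), line 2
    S = []
    for i in range(1, n // 2):                   # lines 3-9: cancel off-diagonal entries
        sgn = 1 if sigma[i] >= 0 else -1
        for c in four_squares(abs(int(sigma[i]))):
            r = int(rng.integers(n))
            z = np.zeros(n, dtype=np.int64)
            z[r] += c
            z[(r + i) % n] += -sgn * c * (1 if r + i < n else -1)  # (c - sgn*c*X^i) * X^r
            S.append(z)
    gamma = int(sigma[0]) + 2 * sum(abs(int(sigma[i])) for i in range(1, n // 2))
    beta = m * n * B * B - gamma                 # line 11
    step = 2 * m * B * B
    lprime = beta // step                        # line 12
    coeff = int(round(step ** 0.5))
    assert coeff * coeff == step, "demo assumes 2mB^2 is a perfect square"
    for _ in range(lprime):                      # lines 13-16
        S.append(monomial(coeff, int(rng.integers(n)), n))
    for c in four_squares(beta - lprime * step): # lines 17-22
        S.append(monomial(c, int(rng.integers(n)), n))
    return S

rng = np.random.default_rng(3)
n, m, B = 8, 2, 3
es = [rng.integers(-1, 2, size=n) for _ in range(m)]   # ||e_i|| <= sqrt(n) <= B
zetas = gram_root_decompose(es, B, rng)
total = sum(gram(z) for z in zetas) + sum(gram(e) for e in es)
assert np.array_equal(total, m * n * B * B * np.eye(n, dtype=np.int64))
print("sum of Gram matrices =", m * n * B * B, "* I, with", len(zetas), "output polynomials")
```

The final assertion checks the identity ∑_{i} G_{ζ_i} + ∑_{i} G_{e_i} = mnB²·I of Theorem 4.1 on a random instance.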
4.2 Bounds on the Outputs

We first prove that the outputs of the algorithm are short polynomials:

Theorem 4.5. The output ζ = (ζ_1, ..., ζ_l) ∈ R^l of Algorithm 1 satisfies ∥ζ_i∥ ≤ √(2m)·B for any i ∈ [l].

Proof. The binomial z^(j) := (c_j − sgn(σ_i)·c_j·X^i)·X^r on line 6 satisfies
∥z^(j)∥ = √(2c_j²) ≤ √(2|σ_i|) ≤ √(2m)·B
by Lemma 3.13 and Lemma 4.3. The monomial z := √(2m)·B·X^r ∈ R on line 14 satisfies ∥z∥ = √(2m)·B. Finally, z^(j) := c_j·X^r ∈ R on line 20 also satisfies ∥z^(j)∥ = c_j ≤ √δ < √(2m)·B.
Finally, we show that the minimum singular values of the coefficient matrices of the outputs are lower-bounded by 2/n:

Theorem 4.6. The output ζ = (ζ_1, ..., ζ_l) ∈ R^l of Algorithm 1 satisfies σ_min(mat(ζ_i)) ≥ 2/n for any i.

Proof. The binomial z^(j) := (c_j − sgn(σ_i)·c_j·X^i)·X^r on line 6 satisfies σ_min(mat(z^(j))) ≥ 2c_j/n ≥ 2/n by Lemma 4.8. The monomial z := √(2m)·B·X^r on line 14 satisfies σ_min(mat(z)) = √(λ_min(G_z)) = √(2m)·B > 1 > 2/n. Furthermore, z^(j) := c_j·X^r on line 20 also satisfies σ_min(mat(z^(j))) = c_j ≥ 1 > 2/n.
We complete the above proof by presenting a deferred core lemma, Lemma 4.8. We first show that the coefficient matrix of the "inverse" polynomial is the inverse of the coefficient matrix:

Fact 4.7 (Inverse of coefficient matrix). For any nonzero a ∈ R, there exists b ∈ ℝ[X]/(X^n + 1) such that a·b = 1. Furthermore, for A := mat(a), we have A^{−1} = B := mat(b). (Thus, finding such a b is sufficient to derive A^{−1}.)

Proof. Let A := mat(a) and define
b := vec(b) := A^{−1}·(1, 0, ..., 0)^⊤.
Then, by Lemma 3.15, we have
vec(a·b) = Ab = (1, 0, ..., 0)^⊤;
thus, we have a·b = 1. Furthermore, we have mat(a·b) = AB = I via Theorem 3.16; thus, B = A^{−1}.
Then, we derive a lower bound on the singular values of the coefficient matrices of binomials:

Lemma 4.8 (Inverse of binomials). Let z = c ± c·X^k ∈ R for c ∈ ℕ, and let g = ∑_{i=0}^{n−1} g_iX^i ∈ ℝ[X]/(X^n + 1) be such that z·g = 1 (i.e., the "inverse" of z). Then, we have ∥g∥_∞ = 1/(2c). Furthermore, we have σ_min(Z) ≥ 2c/n, where Z := mat(z).

Proof. We let z = c + cX^k, since the proof for z = c − cX^k is obtained similarly. By Fact 4.7, there exists g s.t. z·g = 1. Let z := vec(z) and G := mat(g). Then, by Fact 4.7 and Lemma 3.15, we have
vec(z·g) = Gz = c·(g + P^kg)
= (c(g_0 − g_{n−k}), ..., c(g_{k−1} − g_{n−1}), c(g_k + g_0), ..., c(g_{n−1} + g_{n−k−1}))^⊤ = (1, 0, ..., 0)^⊤.
Therefore, we have g_0 = g_{n−k} + 1/c, and |g_j| = |g_{j−k mod n}| for every j ∈ {1, ..., n−1}; i.e., the absolute values |g_j| are constant along each orbit {x·k + j mod n | x ∈ ℕ}, except across the relation at index 0. We can analyze the absolute values of the g_i as follows:
• When gcd(k, n) = d > 1, we have |g_{x·d mod n}| = 1/(2c) for x ∈ ℕ, and g_{x·d+j mod n} = 0 for x ∈ ℕ, j ∈ {1, ..., d−1}.
• When gcd(k, n) = 1, we have |g_i| = 1/(2c) for any i ∈ {0, ..., n−1}.
Thus, in any case, we have ∥g∥_∞ = 1/(2c). Furthermore, we have
σ_min(Z) = 1/∥Z^{−1}∥ = 1/∥G∥ ≥ 1/∥G∥_F = 1/(√n·∥g∥) ≥ 1/(√n·√(n·(1/(2c))²)) = 2c/n.
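The bound of Lemma 4.8 can be probed numerically; the sketch below (ours) compares σ_min(mat(c ± cX^k)) with 2c/n for random c and k:

```python
import numpy as np

def mat(a):
    n = len(a)
    cols, v = [], np.array(a, dtype=np.float64)
    for _ in range(n):
        cols.append(v)
        v = np.concatenate(([-v[-1]], v[:-1]))
    return np.stack(cols, axis=1)

rng = np.random.default_rng(4)
n = 16
for _ in range(5):
    c = int(rng.integers(1, 10))
    k = int(rng.integers(1, n))
    sign = int(rng.choice([-1, 1]))
    z = np.zeros(n)
    z[0], z[k] = c, sign * c                 # z = c +/- c X^k
    smin = np.linalg.svd(mat(z), compute_uv=False).min()
    assert smin >= 2 * c / n - 1e-9          # Lemma 4.8: sigma_min(Z) >= 2c/n
    print(f"c={c}, k={k}, sign={sign:+d}: sigma_min={smin:.3f} >= 2c/n={2*c/n:.3f}")
```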
5 APPLICATION: SPHERICALIZING THE DISCRETE GAUSSIAN OVER THE RING

We apply our Gram root decomposition algorithm (Algorithm 1) to sphericalize the discrete Gaussian over the ring. Let r_0, ..., r_{m+l} ←_iid R(D_{ℤ^n,s}) (Definition 3.8), i.e., polynomials whose coefficient vectors follow the spherical discrete Gaussian. For given e_1, ..., e_m ∈ R, we analyze the distribution of (r_0 + ∑_{i=1}^{m} r_ie_i) in Lemma 5.5. Furthermore, letting ζ_1, ..., ζ_l be the outputs of Algorithm 1, we show in Theorem 5.6 that the coefficients of (r_0 + ∑_{i=1}^{m} r_ie_i + ∑_{i=1}^{l} r_{m+i}ζ_i) follow the spherical discrete Gaussian distribution.
5.1 Building Blocks

The goal of this subsection is to present Lemma 5.4, which concerns the convolution of discrete Gaussians. First, we describe the required basic facts about the singular values of Gram matrices:

Fact 5.1. For any G ≻ 0, (√G)^{−1} = (√(G^{−1}))^⊤. Thus, we have:
σ_max((√G)^{−1}) = σ_max(√(G^{−1})),
σ_min((√G)^{−1}) = σ_min(√(G^{−1})).

Proof. Let S := √G, so that G = SS^⊤. Then G^{−1} = S^{−⊤}S^{−1} = (S^{−⊤})(S^{−⊤})^⊤; hence S^{−⊤} is a square root of G^{−1}, i.e., (√G)^{−1} = S^{−1} = (√(G^{−1}))^⊤. Since transposition preserves singular values, we have σ_max((√G)^{−1}) = ∥(√G)^{−1}∥ = ∥√(G^{−1})∥ = σ_max(√(G^{−1})), and likewise σ_min((√G)^{−1}) = σ_min(√(G^{−1})).
Lemma 5.2 ((Golub and Van Loan, 1996, Theorem 8.1.5)). If A, B ∈ ℝ^{n×n} are symmetric matrices, then for any i ∈ [n],
λ_i(A) + λ_min(B) ≤ λ_i(A + B) ≤ λ_i(A) + λ_max(B).
Fact 5.3. For any G_1, G_2 ≻ 0, we have:
σ_max(√(G_1 + G_2)) ≤ √(σ_max²(√G_1) + σ_max²(√G_2)),   (12)
σ_min(√(G_1 + G_2)) ≥ √(σ_min²(√G_1) + σ_min²(√G_2)) ≥ √2·min{σ_min(√G_1), σ_min(√G_2)},   (13)
σ_min(√((G_1^{−1} + G_2^{−1})^{−1})) ≥ (σ_min^{−2}(√G_1) + σ_min^{−2}(√G_2))^{−1/2} ≥ (1/√2)·min{σ_min(√G_1), σ_min(√G_2)}.   (14)
Proof. By the definition of singular values and Lemma 5.2, we have
σ_max(√(G_1 + G_2)) = √(λ_max(G_1 + G_2)) ≤ √(λ_max(G_1) + λ_max(G_2)), and
σ_min(√(G_1 + G_2)) = √(λ_min(G_1 + G_2)) ≥ √(λ_min(G_1) + λ_min(G_2)).
Thus, we obtain Eq. (12) and Eq. (13). By Fact 5.1 and Eq. (12), we obtain Eq. (14) as follows:
σ_min(√((G_1^{−1} + G_2^{−1})^{−1})) = (σ_max(√(G_1^{−1} + G_2^{−1})))^{−1}
≥ (σ_max²(√(G_1^{−1})) + σ_max²(√(G_2^{−1})))^{−1/2}
= (σ_min^{−2}(√G_1) + σ_min^{−2}(√G_2))^{−1/2}.
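A small numerical sanity check of Eqs. (12)–(14) on random positive definite matrices (an editorial sketch; sqrtm_sym is our helper returning a symmetric square root):

```python
import numpy as np

def smin(M):
    return np.linalg.svd(M, compute_uv=False).min()

def smax(M):
    return np.linalg.svd(M, compute_uv=False).max()

def sqrtm_sym(G):
    """A square root S with S S^T = G, via eigendecomposition of the symmetric G."""
    w, V = np.linalg.eigh(G)
    return V @ np.diag(np.sqrt(w)) @ V.T

rng = np.random.default_rng(5)
n = 6
for _ in range(100):
    A1, A2 = rng.normal(size=(n, n)), rng.normal(size=(n, n))
    G1, G2 = A1 @ A1.T + np.eye(n), A2 @ A2.T + np.eye(n)   # positive definite
    s1, s2 = sqrtm_sym(G1), sqrtm_sym(G2)
    assert smax(sqrtm_sym(G1 + G2)) <= np.sqrt(smax(s1)**2 + smax(s2)**2) + 1e-9   # Eq. (12)
    assert smin(sqrtm_sym(G1 + G2)) >= np.sqrt(smin(s1)**2 + smin(s2)**2) - 1e-9   # Eq. (13)
    G3 = np.linalg.inv(np.linalg.inv(G1) + np.linalg.inv(G2))
    assert smin(sqrtm_sym(G3)) >= min(smin(s1), smin(s2)) / np.sqrt(2) - 1e-9      # Eq. (14)
print("Eqs. (12)-(14) hold on 100 random instances.")
```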
Then, we prove Lemma 5.4, which is a generalization of Corollary 2.8:

Lemma 5.4 (Generalization of Corollary 2.8). Let G_0, ..., G_m ∈ ℝ^{n×n} be positive definite matrices. Let L_1(B_1), ..., L_m(B_m) ⊆ ℤ^n be full-rank integer lattices with (nonsingular) bases B_1, ..., B_m. Let σ_min := min_{i∈{0,...,m}} σ_min(√G_i) and B̄ := max_{i∈{1,...,m}} ∥B_i∥_len. Assume that σ_min ≥ √2·B̄·η^+_ε(ℤ^n). Then, we have
∑_{i=1}^{m} D_{L_i,√G_i} + D_{ℤ^n,√G_0} ≈_s D_{ℤ^n,√(∑_{i=0}^{m} G_i)}.
Proof. We first show
D_{L_1,√G_1} + D_{ℤ^n,√G_0} ≈_s D_{ℤ^n,√(G_0+G_1)}   (15)
by using Corollary 2.8. We have √G_0 ≥ η_ε(ℤ^n) because σ_min(√G_0) ≥ σ_min ≥ η^+_ε(ℤ^n) according to Fact 2.5 and the hypothesis. By Fact 5.3, we have √((G_0^{−1} + G_1^{−1})^{−1}) ≥ η_ε(L_1(B_1)) because we have
σ_min(√((G_0^{−1} + G_1^{−1})^{−1})) ≥ (1/√2)·min{σ_min(√G_0), σ_min(√G_1)} ≥ (1/√2)·σ_min ≥ ∥B_1∥_len·η^+_ε(ℤ^n)
by the hypothesis (√2·∥B_1∥_len·η^+_ε(ℤ^n) ≤ σ_min). Therefore, we obtain Eq. (15).
Next, we show
D_{L_2,√G_2} + D_{ℤ^n,√(G_0+G_1)} ≈_s D_{ℤ^n,√(G_0+G_1+G_2)}
via Corollary 2.8 again. By Fact 2.5, Fact 5.3 and the hypothesis, we have √(G_0 + G_1) ≥ η_ε(ℤ^n) because
σ_min(√(G_0 + G_1)) ≥ min{σ_min(√G_0), σ_min(√G_1)} ≥ σ_min ≥ η^+_ε(ℤ^n)
holds by the assumption η^+_ε(ℤ^n) ≤ σ_min. Furthermore, we have √(((G_0 + G_1)^{−1} + G_2^{−1})^{−1}) ≥ η_ε(L_2(B_2)) because
σ_min(√(((G_0 + G_1)^{−1} + G_2^{−1})^{−1})) ≥ (1/√2)·min{σ_min(√(G_0 + G_1)), σ_min(√G_2)}
≥ (1/√2)·min{σ_min(√G_0), σ_min(√G_1), σ_min(√G_2)} ≥ (1/√2)·σ_min ≥ ∥B_2∥_len·η^+_ε(ℤ^n)
holds by the assumption √2·∥B_2∥_len·η^+_ε(ℤ^n) ≤ σ_min. Repeating the above, we obtain the claim.
5.2 Main Theorem
We first apply Lemma 5.4 to the discrete Gaussian
over the ring:
Lemma 5.5 (Applying Lemma 5.4 to the discrete Gaussian over the ring). Let e_1, ..., e_m ∈ R and define E_i := mat(e_i) and G_{e_i} := Gram(e_i). Assume that ∥e_i∥ < B and σ_min(E_i) ≥ c hold for some constants B, c > 0. Let r_0, ..., r_m ←_iid R(D_{ℤ^n,s}) for s ≥ √2·c^{−1}·B·η^+_ε(ℤ^n), and define
z := r_0 + ∑_{i=1}^{m} r_ie_i.   (16)
Then, we have
vec(z) ≈_s D_{ℤ^n, s·√(I_n + ∑_{i=1}^{m} G_{e_i})}.
Proof. By Lemma 3.15, we have vec(z) = r_0 + ∑_{i=1}^{m} E_ir_i, where r_i := vec(r_i) for i = 0, ..., m. By Lemma 2.6, we obtain E_ir_i ∼ D_{E_iℤ^n, sE_i} for any i. Let G_0 := s²I_n, G_i := s²G_{e_i}, and B_i := E_i for all i ∈ [m]. Then, B_i ∈ ℤ^{n×n} is nonsingular according to Corollary 3.18; thus, L(B_i) = B_iℤ^n ⊆ ℤ^n is a full-rank integer lattice for all i. Let σ_min := min_i σ_min(√G_i) and B̄ := max_i ∥B_i∥_len; then we have
σ_min = s·min{1, min_i σ_min(E_i)} = c·s,
B̄ := max_i ∥E_i∥_len = max_i ∥e_i∥ < B.
Then, we obtain the claim by Lemma 5.4, since σ_min = cs ≥ √2·B̄·η^+_ε(ℤ^n) holds by hypothesis.
Finally, we present the main theorem by adding ∑_{i=1}^{l} r_{m+i}ζ_i to Eq. (16), where ζ_1, ..., ζ_l are the outputs of Algorithm 1 for the given e_1, ..., e_m. Then, a polynomial with spherical discrete Gaussian coefficients is obtained:

Theorem 5.6 (Sphericalizing the discrete Gaussian over the ring). Let e_1, ..., e_m ∈ R and define E_i := mat(e_i) and G_{e_i} := Gram(e_i). Assume that ∥e_i∥ < B and σ_min(E_i) ≥ c hold for some constants B, c > 0. Given e_1, ..., e_m as the inputs, let ζ_1, ..., ζ_l be the outputs of Algorithm 1. Let r_0, ..., r_{m+l} ←_iid R(D_{ℤ^n,s}) for s ≥ 2·√m·B·max{c^{−1}, n/2}·η^+_ε(ℤ^n), and define
z := r_0 + ∑_{i=1}^{m} r_ie_i + ∑_{i=1}^{l} r_{m+i}ζ_i.
Then, we have
vec(z) ≈_s D_{ℤ^n, s·√(mnB²+1)}.
Proof. The outputs ζ_1, ..., ζ_l of Algorithm 1 satisfy ∥ζ_i∥ ≤ √(2m)·B and σ_min(mat(ζ_i)) ≥ 2/n for any i ∈ [l] by Theorem 4.5 and Theorem 4.6, respectively. Hence, by Lemma 5.5, we have
vec(z) ≈_s D_{ℤ^n, s·√(I_n + ∑_{i=1}^{m} G_{e_i} + ∑_{i=1}^{l} G_{ζ_i})},
since s ≥ 2·√m·B·max{c^{−1}, n/2}·η^+_ε(ℤ^n) by hypothesis. Furthermore, the outputs ζ_1, ..., ζ_l of Algorithm 1 satisfy ∑_{i=1}^{l} G_{ζ_i} = mnB²·I − ∑_{i=1}^{m} G_{e_i} via Theorem 4.1. Thus, we obtain the claim.
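To visualize the sphericalization effect without implementing an exact discrete Gaussian sampler, the editorial sketch below uses a hand-picked toy pair e = 1 + X and ζ = 1 − X in ℤ[X]/(X⁴ + 1), which satisfies G_e + G_ζ = 4I (this ζ is chosen by hand, not produced by Algorithm 1), and rounded continuous Gaussians as a crude stand-in for D_{ℤ^n,s}. The empirical covariance of vec(r_1·e) is close to s²·G_e (ellipsoidal), whereas that of vec(r_0 + r_1e + r_2ζ) is close to 5s²·I (spherical):

```python
import numpy as np

def mat(a):
    n = len(a)
    cols, v = [], np.array(a, dtype=np.float64)
    for _ in range(n):
        cols.append(v)
        v = np.concatenate(([-v[-1]], v[:-1]))
    return np.stack(cols, axis=1)

n, s, N = 4, 10.0, 200_000
e = np.array([1, 1, 0, 0])        # e = 1 + X
zeta = np.array([1, -1, 0, 0])    # zeta = 1 - X, hand-picked so that G_e + G_zeta = 4*I
E, Z = mat(e), mat(zeta)
Ge, Gz = E @ E.T, Z @ Z.T
assert np.array_equal(Ge + Gz, 4 * np.eye(n))

rng = np.random.default_rng(6)
# Rounded continuous Gaussians as a crude stand-in for D_{Z^n, s}.
r0, r1, r2 = (np.round(rng.normal(0, s, size=(N, n))) for _ in range(3))
skew = r1 @ E.T                       # vec(r1 * e): correlated coefficients
sph = r0 + r1 @ E.T + r2 @ Z.T        # vec(r0 + r1*e + r2*zeta)

print("cov(vec(r1*e)) / s^2  ~ G_e:\n", np.cov(skew.T) / s**2)
print("cov(vec(z)) / s^2     ~ (1 + 4) * I:\n", np.cov(sph.T) / s**2)
```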
6 CONCLUSION AND FUTURE WORK

Many advanced lattice-based cryptosystems such as identity-based encryption and functional encryption require efficient and secure algorithms for sampling discrete Gaussians. The integral Gram root decomposition of (Ducas et al., 2020) was developed in the context of discrete Gaussian sampling algorithms.

In this work, we proposed an algorithm for Gram root decomposition over the polynomial ring (Algorithm 1). While the objective of this algorithm is similar to that of the (ring version of the) integral Gram root decomposition of (Ducas et al., 2020), our algorithm ensures bounds on the norm of the output polynomials ζ_i (Theorem 4.5) and on the minimum eigenvalue of the coefficient Gram matrix of ζ_i (Theorem 4.6). By utilizing these bounds, we showed how to sphericalize the discrete Gaussian over the ring (Theorem 5.6).

A further application would be an efficient and secure discrete Gaussian sampling algorithm in the ring setting for advanced lattice-based cryptosystems, which we leave for future work.
REFERENCES
Aggarwal, D. and Regev, O. (2016). A note on discrete Gaussian combinations of lattice vectors. Chicago Journal of Theoretical Computer Science, (7).

Agrawal, S., Boneh, D., and Boyen, X. (2010). Efficient lattice (H)IBE in the standard model. In Gilbert, H., editor, EUROCRYPT 2010, pages 553–572. Springer.

Agrawal, S., Freeman, D. M., and Vaikuntanathan, V. (2011). Functional encryption for inner product predicates from learning with errors. In Lee, D. H. and Wang, X., editors, ASIACRYPT 2011, pages 21–40. Springer.

Agrawal, S., Gentry, C., Halevi, S., and Sahai, A. (2013). Discrete Gaussian leftover hash lemma over infinite domains. In Sako, K. and Sarkar, P., editors, ASIACRYPT 2013, pages 97–116. Springer.

Alagic, G., Apon, D., Cooper, D., Dang, Q., Dang, T., Kelsey, J., Lichtinger, J., Miller, C., Moody, D., Peralta, R., Perlner, R., Robinson, A., and Smith-Tone, D. (2022). NIST IR 8413-upd1: Status report on the third round of the NIST post-quantum cryptography standardization process.

Bos, J., Ducas, L., Kiltz, E., Lepoint, T., Lyubashevsky, V., Schanck, J. M., Schwabe, P., Seiler, G., and Stehlé, D. (2018). CRYSTALS-Kyber: A CCA-secure module-lattice-based KEM. In Euro S&P 2018, pages 353–367.

Brakerski, Z., Gentry, C., and Vaikuntanathan, V. (2011). Fully homomorphic encryption without bootstrapping. ePrint 2011/277. https://eprint.iacr.org/2011/277.

Brakerski, Z., Langlois, A., Peikert, C., Regev, O., and Stehlé, D. (2013). Classical hardness of learning with errors. In STOC '13, pages 575–584. ACM.

Ducas, L., Galbraith, S., Prest, T., and Yu, Y. (2020). Integral matrix Gram root and lattice Gaussian sampling without floats. In Canteaut, A. and Ishai, Y., editors, EUROCRYPT 2020, pages 608–637. Springer.

Fouque, P.-A., Hoffstein, J., Kirchner, P., Lyubashevsky, V., Pornin, T., Prest, T., Ricosset, T., Seiler, G., Whyte, W., and Zhang, Z. (2020). Falcon: Fast-Fourier lattice-based compact signatures over NTRU – specifications v1.2. Technical Report, NIST.

Genise, N. and Micciancio, D. (2018). Faster Gaussian sampling for trapdoor lattices with arbitrary modulus. In Nielsen, J. B. and Rijmen, V., editors, EUROCRYPT 2018, pages 174–203. Springer.

Genise, N., Micciancio, D., Peikert, C., and Walter, M. (2020). Improved discrete Gaussian and subGaussian analysis for lattice cryptography. In Kiayias, A., Kohlweiss, M., Wallden, P., and Zikas, V., editors, PKC 2020, pages 623–651. Springer.

Gentry, C., Peikert, C., and Vaikuntanathan, V. (2008). Trapdoors for hard lattices and new cryptographic constructions. In STOC '08, pages 197–206. ACM.

Golub, G. H. and Van Loan, C. F. (1996). Matrix Computations (3rd Ed.). Johns Hopkins University Press.

Hoffstein, J., Pipher, J., and Silverman, J. H. (1998). NTRU: A ring-based public key cryptosystem. In Buhler, J. P., editor, ANTS 1998, pages 267–288. Springer.

Kiltz, E., Lyubashevsky, V., and Schaffner, C. (2018). A concrete treatment of Fiat-Shamir signatures in the quantum random-oracle model. In Nielsen, J. B. and Rijmen, V., editors, EUROCRYPT 2018, pages 552–586. Springer.

Langlois, A. and Stehlé, D. (2015). Worst-case to average-case reductions for module lattices. Des. Codes Cryptogr., 75(3):565–599.

Lyubashevsky, V., Peikert, C., and Regev, O. (2010). On ideal lattices and learning with errors over rings. In Gilbert, H., editor, EUROCRYPT 2010, pages 1–23. Springer.

Micciancio, D. and Peikert, C. (2012). Trapdoors for lattices: Simpler, tighter, faster, smaller. In Pointcheval, D. and Johansson, T., editors, EUROCRYPT 2012, pages 700–718. Springer.

Micciancio, D. and Regev, O. (2007). Worst-case to average-case reductions based on Gaussian measures. SIAM J. Comput., 37(1):267–302.

Micciancio, D. and Walter, M. (2017). Gaussian sampling over the integers: Efficient, generic, constant-time. In Katz, J. and Shacham, H., editors, CRYPTO 2017, pages 455–485. Springer.

Nguyen, H. H. and Vu, V. H. (2016). Normal vector of a random hyperplane. International Mathematics Research Notices, 2018(6):1754–1778.

Okada, H., Fukushima, K., Kiyomoto, S., and Takagi, T. (2023). Spherical Gaussian leftover hash lemma via the Rényi divergence. In Tibouchi, M. and Wang, X., editors, ACNS 2023, pages 695–724. Springer Nature Singapore.

Peikert, C. (2009). Public-key cryptosystems from the worst-case shortest vector problem: Extended abstract. In STOC '09, pages 333–342. ACM.

Peikert, C. (2010). An efficient and parallel Gaussian sampler for lattices. In Rabin, T., editor, CRYPTO 2010, pages 80–97. Springer.

Rabin, M. O. and Shallit, J. O. (1986). Randomized algorithms in number theory. Communications on Pure and Applied Mathematics, 39(S1):S239–S256.

Regev, O. (2005). On lattices, learning with errors, random linear codes, and cryptography. In STOC '05, pages 84–93. ACM.

Stehlé, D., Steinfeld, R., Tanaka, K., and Xagawa, K. (2009). Efficient public key encryption based on ideal lattices. In Matsui, M., editor, ASIACRYPT 2009, pages 617–635. Springer.

Tao, T. (2012). Topics in random matrix theory. Graduate Studies in Mathematics, 132.