Lova: Lattice-Based Folding Scheme from Unstructured Lattices

Lova is joint work with Duc Tu Pham (ENS Paris), Ngoc Khanh Nguyen (King’s College London), and Giacomo Fenzi (EPFL), and appears at Asiacrypt'24.

Our goal: IVC from lattices

Incrementally verifiable computation (IVC) is a cryptographic primitive that allows a long (possibly infinite) computation to be run, such that correctness of the state of the computation can be efficiently verified at any point. IVC and its generalisation, proof-carrying data (PCD), have found numerous applications in succinct blockchains, verifiable delay functions, SNARKs for machine computations, and more.

Originally, IVC and PCD were built on recursive SNARKs, but more recent constructions are built from so-called folding and (split-)accumulation schemes. Informally, a folding scheme “folds” several instances of a certain relation into a single instance, so that correctness of the folded instance implies correctness of all original instances. Until recently, folding (aka accumulation) schemes were instantiated using homomorphic commitments based on the discrete logarithm assumption.

Since we know that there exist homomorphic commitments from lattices, our goal is to construct a folding scheme from lattices. Specifically, we aim for a lattice equivalent of Nova (hence the name Lova 💕). 1

Lattice-based commitments

We start with the Ajtai commitment, which is secure under the Short Integer Solution (SIS) assumption. To commit to an integer vector $\mathbf{s}$ of length $m$, we sample a public matrix $\mathbf{A}$ of dimensions $n \times m$, and we commit to $\mathbf{s}$ as $\mathbf{t} := \mathbf{A}\mathbf{s}$. However, this commitment is only secure as long as $\mathbf{s}$ is short, i.e., $\|\mathbf{s}\|_2 \le \beta$ for an appropriate parameter $\beta$. 2

This follows from the SIS assumption, which posits that given a randomly generated matrix $\mathbf{A}$ of suitable dimensions, it is infeasible to find a short (non-zero) vector $\mathbf{s}$ such that $\mathbf{A}\mathbf{s} = 0$.
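To make this concrete, here is a minimal NumPy sketch of the commitment (toy parameters chosen purely for illustration, nowhere near a secure instantiation; the helper name `commit` is ours):

```python
import numpy as np

q = 2**16                # toy modulus (the real scheme can use q = 2^64, see below)
n, m = 8, 64             # commitment length and witness length
beta = 2**8              # norm bound, roughly sqrt(q) as in footnote 2

rng = np.random.default_rng(0)
A = rng.integers(0, q, size=(n, m))   # public random matrix

def commit(s):
    """Ajtai commitment: t = A s mod q, only binding for short s."""
    assert np.linalg.norm(s) <= beta, "witness must be short"
    return (A @ s) % q

s = rng.integers(-1, 2, size=m)       # a short (ternary) witness
t = commit(s)
```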

A naïve attempt

Our goal is to take two commitments $\mathbf{t}_1$ and $\mathbf{t}_2$ and fold them into a single commitment $\mathbf{t}'$.

As a naïve attempt to fold these two commitments, we can try taking a random linear combination of the two commitments and witnesses. For example, sampling $c_1$ and $c_2$ from some sampling set (which we don’t define yet), we can define the new folded witness as $\mathbf{s}' := c_1\mathbf{s}_1 + c_2\mathbf{s}_2$, and the corresponding commitment as $\mathbf{t}' := c_1\mathbf{t}_1 + c_2\mathbf{t}_2$.

However, this immediately runs into a completeness issue. Even though the $\mathbf{s}_i$ are short, the folded witness only satisfies $\|\mathbf{s}'\| \le \sum_{i\in\{1,2\}} \vert c_i\vert \cdot \beta$, which is larger than $\beta$. Since we need witnesses to be short for the commitment scheme to be secure, this construction only allows us to fold a (small!) constant number of times before the scheme becomes insecure.
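A quick numerical sketch (same toy setting as above) makes the norm growth visible:

```python
import numpy as np

rng = np.random.default_rng(1)
m, beta = 64, 2**8
s1 = rng.integers(-2, 3, size=m)      # two short witnesses
s2 = rng.integers(-2, 3, size=m)
c1, c2 = rng.choice([-1, 1]), rng.choice([-1, 1])

s_folded = c1 * s1 + c2 * s2          # naive fold
# The triangle inequality only gives ||s'|| <= |c1|*||s1|| + |c2|*||s2||:
# the bound doubles with every fold, and soon exceeds beta.
print(np.linalg.norm(s_folded))
```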

A folding scheme with relaxed extraction

Achieving perfect completeness

In order to circumvent this issue, we use decomposition, a standard trick in the lattice literature with which we can ensure that values remain small. Specifically, consider a prover that holds the instances $\mathbf{t}_1, \mathbf{t}_2$ and the corresponding witnesses $\mathbf{s}_1, \mathbf{s}_2$. For simplicity, we write $\mathbf{T} :=\begin{bmatrix}\mathbf{t}_1 & \mathbf{t}_2 \end{bmatrix}$ and $\mathbf{S} :=\begin{bmatrix}\mathbf{s}_1 & \mathbf{s}_2 \end{bmatrix}$ (note that $\mathbf{T} = \mathbf{A}\mathbf{S}$). For an appropriate decomposition basis $b$, the prover first decomposes each witness as $\mathbf{s}_i = \sum_{j=0}^{k-1}b^j \cdot \tilde{\mathbf{s}}_{i,j}$, where each entry of $\tilde{\mathbf{s}}_{i,j}$ is smaller than $b$ (so $\|\tilde{\mathbf{s}}_{i,j}\|_2 \le b\cdot \sqrt{m}$). For legibility, we write $$\tilde{\mathbf{S}} = \mathbf{G}^{-1}(\mathbf{S}) = \begin{bmatrix}\tilde{\mathbf{s}}_{1,0} & \cdots & \tilde{\mathbf{s}}_{1,k-1} & \tilde{\mathbf{s}}_{2,0} & \cdots & \tilde{\mathbf{s}}_{2,k-1} \end{bmatrix} \in \mathbb{Z}_q^{m \times 2k}\enspace.$$ The prover then commits to this decomposition as $\tilde{\mathbf{T}} := \mathbf{A}\tilde{\mathbf{S}}$, and sends it to the verifier.
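Here is a sketch of the decomposition and its recomposition via the gadget matrix (toy sizes; we use a nonnegative-digit convention for simplicity, and the helper `decompose` is ours):

```python
import numpy as np

b, k, m = 4, 4, 64       # toy decomposition basis, digit count, witness length

def decompose(S, b, k):
    """Column-wise base-b digit decomposition of a (nonnegative, toy) matrix S:
    returns S_tilde with small entries such that S_tilde @ G == S."""
    cols = []
    for j in range(S.shape[1]):
        x = S[:, j].copy()
        for _ in range(k):
            cols.append(x % b)        # next digit vector, entries in [0, b)
            x //= b
    return np.stack(cols, axis=1)     # m x 2k, columns ordered as in the text

# Gadget matrix matching the column order above.
g = (b ** np.arange(k)).reshape(-1, 1)
G = np.kron(np.eye(2, dtype=int), g)  # 2k x 2, block diagonal of (1, b, ..., b^{k-1})

rng = np.random.default_rng(2)
S = rng.integers(0, b**k, size=(m, 2))
S_tilde = decompose(S, b, k)
assert (S_tilde @ G == S).all()       # recomposition via the gadget matrix
```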

The verifier then samples random challenges $c_1, \ldots, c_{2k}$, and sends $\mathbf{c} := \begin{bmatrix} c_1 & \cdots & c_{2k}\end{bmatrix}^{\top}$ to the prover.

Finally, the prover takes a random linear combination of its decomposed witness as the folded witness $\mathbf{S}' := \tilde{\mathbf{S}} \mathbf{c}$, and the verifier does the same to compute the folded instance $\mathbf{T}' := \tilde{\mathbf{T}} \mathbf{c}$. The verifier also needs to perform a consistency check, namely, that the commitment to the decomposed witness matches the original commitment to the non-decomposed witness, i.e., $\mathbf{T} \overset{?}{=} \tilde{\mathbf{T}}\mathbf{G}$, where $\mathbf{G} := \begin{bmatrix}1 & b & \cdots & b^{k-1} \end{bmatrix} \otimes \mathbf{I}$ is a gadget matrix that contains powers of the basis $b$. In essence, we leverage the homomorphism of the commitment to link the new folded instance to the original instances.

Note that $\mathbf{T}' = \mathbf{A}\mathbf{S}'$, and furthermore $\|\mathbf{S}'\| \le 2k \cdot \max_i \vert c_i\vert \cdot b\cdot \sqrt{m}$. By sampling each $c_i$ from $\{-1, 0, 1\}$ we ensure that $\vert c_i\vert \le 1$, and we can set the scheme parameters to ensure that $2k b \sqrt{m} \le \beta$.
This is a perfectly complete scheme for a lattice commitment, and the norm of the folded witness does not grow when folding!
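Putting the round together, a self-contained toy sketch (repeating the helpers from above so the block runs standalone):

```python
import numpy as np

# One complete folding round on toy parameters (illustrative sketch only).
q, n, m, b, k = 2**16, 8, 64, 4, 4
rng = np.random.default_rng(3)
A = rng.integers(0, q, size=(n, m))

def decompose(S, b, k):
    cols = []
    for j in range(S.shape[1]):
        x = S[:, j].copy()
        for _ in range(k):
            cols.append(x % b)
            x //= b
    return np.stack(cols, axis=1)

G = np.kron(np.eye(2, dtype=int), (b ** np.arange(k)).reshape(-1, 1))

S = rng.integers(0, b**k, size=(m, 2))     # the two witnesses, as columns
T = (A @ S) % q                            # the two instances

S_tilde = decompose(S, b, k)               # prover: decompose...
T_tilde = (A @ S_tilde) % q                # ...and commit to the decomposition

assert ((T_tilde @ G) % q == T).all()      # verifier: consistency check T = T~ G
c = rng.integers(-1, 2, size=2 * k)        # verifier: ternary challenge

S_folded = S_tilde @ c                     # prover folds the witness
T_folded = (T_tilde @ c) % q               # verifier folds the instance
assert ((A @ S_folded) % q == T_folded).all()   # folded witness opens folded instance
```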

Extractability

Let’s analyze the knowledge soundness of this construction. We can run a malicious prover $2k$ times with the same initial message $\tilde{\mathbf{T}}$ but different challenges $\mathbf{c}^{(i)}$, and observe the prover’s folded witness outputs $\mathbf{S}'^{(i)}$ for $i \in [2k]$, each satisfying $\tilde{\mathbf{T}}\mathbf{c}^{(i)} = \mathbf{A}\mathbf{S}'^{(i)}$. Suppose additionally that $\mathbf{c}^{(i)}$ and $\mathbf{c}^{(j)}$ differ in exactly one location (say, $\ell$), and that $\mathbf{c}^{(i)}_\ell \neq - \mathbf{c}^{(j)}_\ell$ (i.e., one of those values is $0$ and the other is $\pm 1$, so their difference is $\pm 1$). 3

By setting $\bar{\mathbf{S}}_{:,\ell} := \frac{1}{\mathbf{c}^{(i)}_\ell - \mathbf{c}^{(j)}_\ell}(\mathbf{S}'^{(i)} - \mathbf{S}'^{(j)})$, we have that $$\begin{align*} \mathbf{A}\bar{\mathbf{S}}_{:,\ell} &= \frac{1}{\mathbf{c}^{(i)}_\ell - \mathbf{c}^{(j)}_\ell} \left(\mathbf{A}\mathbf{S}'^{(i)} - \mathbf{A}\mathbf{S}'^{(j)}\right) \\ &= \frac{1}{\mathbf{c}^{(i)}_\ell - \mathbf{c}^{(j)}_\ell} \left(\tilde{\mathbf{T}}\mathbf{c}^{(i)} - \tilde{\mathbf{T}}\mathbf{c}^{(j)}\right) \\ &= \frac{\mathbf{c}^{(i)}_\ell - \mathbf{c}^{(j)}_\ell}{\mathbf{c}^{(i)}_\ell - \mathbf{c}^{(j)}_\ell} \tilde{\mathbf{T}}_{:,\ell} \\ &= \tilde{\mathbf{T}}_{:,\ell}\enspace, \end{align*}$$ where the third equality uses that the two challenges agree everywhere except at position $\ell$. Thus $\bar{\mathbf{S}}_{:,\ell}$ is a valid witness for the $\ell$-th column of $\tilde{\mathbf{T}}$ (or we’ve found a valid SIS solution, which is infeasible if we set our parameters right). Extracting such a witness for every $\ell \in [2k]$ and recombining through the consistency check $\mathbf{T} = \tilde{\mathbf{T}}\mathbf{G}$ yields a witness for the original instance.
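A toy sketch of this extraction step, run against an honest prover so that the algebra can be checked end-to-end (the variable names are ours):

```python
import numpy as np

# Two accepting transcripts whose challenges differ only at position ell,
# with difference +1, yield a witness for column ell of T_tilde.
q, n, m, b, k = 2**16, 8, 64, 4, 4
rng = np.random.default_rng(4)
A = rng.integers(0, q, size=(n, m))
S_tilde = rng.integers(0, b, size=(m, 2 * k))   # some short decomposed witness
T_tilde = (A @ S_tilde) % q

ell = 5
c_i = rng.integers(-1, 2, size=2 * k)
c_i[ell] = 1
c_j = c_i.copy()
c_j[ell] = 0                               # differ only at ell, difference is 1

S_prime_i = S_tilde @ c_i                  # the prover's two folded witnesses
S_prime_j = S_tilde @ c_j
S_bar = (S_prime_i - S_prime_j) // (c_i[ell] - c_j[ell])

assert ((A @ S_bar) % q == T_tilde[:, ell]).all()
# Norm-wise, ||S_bar|| <= ||S'_i|| + ||S'_j||: extraction can double the norm.
```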

However, this extracted witness has norm up to $2\beta$: the norm doubles with every extraction step, so unrolling $T$ consecutive foldings gives a witness of norm up to $2^T\beta$, which is essentially meaningless when $T$ is large.

Achieving perfect extractability

In order to get rid of this norm growth in the extracted witness, we will force the prover to provide additional information about the norm of its witness to the verifier. We thus augment our instance to also contain an inner product matrix $\mathbf{D}$, which should correspond to all inner products of columns of the witness, i.e., $\mathbf{D} := \mathbf{S}^\top \mathbf{S}$.

Along with the commitment to the decomposed witness, the prover now also sends the inner product matrix $\tilde{\mathbf{D}} := \tilde{\mathbf{S}}^\top \tilde{\mathbf{S}}$. Note that the diagonal of $\tilde{\mathbf{D}}$ contains the squared norms of the columns of $\tilde{\mathbf{S}}$.

To fold this second part of the instance, the verifier computes $\mathbf{D}' := \mathbf{c}^\top \tilde{\mathbf{D}}\mathbf{c}$, which is consistent with the folded witness computed by an honest prover.

As before, the verifier needs to ensure consistency of this decomposed “instance” that the prover sends with the original instance, which it does by checking that $\mathbf{D} \overset{?}{=} \mathbf{G}^\top \tilde{\mathbf{D}}\mathbf{G}$.
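A toy sketch of this norm-tracking part and its two checks (for simplicity we fold down to a single column, whereas the actual scheme works with $m \times t$ witness matrices; the variable names are ours):

```python
import numpy as np

b, k, m = 4, 4, 64
rng = np.random.default_rng(5)
S_tilde = rng.integers(0, b, size=(m, 2 * k))
D_tilde = S_tilde.T @ S_tilde              # all pairwise inner products, 2k x 2k
# np.diag(D_tilde) holds the squared norms of the columns of S_tilde.

G = np.kron(np.eye(2, dtype=int), (b ** np.arange(k)).reshape(-1, 1))
S = S_tilde @ G                            # recomposed witness
assert (G.T @ D_tilde @ G == S.T @ S).all()   # verifier's check D = G^T D~ G

c = rng.integers(-1, 2, size=2 * k)
S_folded = S_tilde @ c
D_folded = c @ D_tilde @ c                 # verifier folds the norm claim
assert D_folded == S_folded @ S_folded     # equals ||S'||^2 for an honest prover
```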

Intuitively, adding this information about the norm of the witness to the relation itself ensures that it is directly checkable after all folding steps have concluded, and we can argue that a malicious prover cannot get the verifier to output a folded instance that is inconsistent with the norm of the witness, except with bounded probability. To see why, note that if $\mathbf{D}$ and $\mathbf{D}'$ are not consistent, then the prover has passed a polynomial identity check of degree $2$, which by the Schwartz–Zippel–DeMillo–Lipton lemma succeeds with probability at most $\frac{2}{3}$ (the denominator of $3$ comes from the size of our sampling set $\{-1,0,1\}$).

$\frac{2}{3}$ is obviously still a large error, which is why we need to boost the protocol by considering witnesses of size $m \times t$ for some security parameter $t$, which drives the error of this check down to roughly $\left(\frac{2}{3}\right)^t$.
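As a quick back-of-the-envelope calculation (which ignores all the other terms in the knowledge error): to drive this check below $2^{-\lambda}$, we need $$\left(\frac{2}{3}\right)^t \le 2^{-\lambda} \iff t \ge \frac{\lambda}{\log_2(3/2)} \approx 1.71\,\lambda\enspace,$$ so already $\lambda = 128$ forces $t \ge 219$, and the remaining terms in the analysis push the concrete parameter past $300$, as we discuss below.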

Is Lova concretely efficient?

Lova has some nice properties for concrete efficiency:

  • Because we use plain SIS, we don’t need any special structure on the modulus, so we can choose the hardware-friendly modulus $q= 2^{64}$: modular reduction then comes for free from native 64-bit integer arithmetic (see the snippet after this list).
  • Almost all operations are also purely linear algebra, which can be optimized very efficiently.
  • Because the protocol is very algebraic and consists of a single round, it is also very recursion-friendly, as we only need to hash out a single matrix of ternary challenges.
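A tiny illustration of the first point (NumPy, toy sizes): unsigned 64-bit integer arithmetic wraps around modulo $2^{64}$ natively, so the reduction costs nothing.

```python
import numpy as np

x = np.array([2**63], dtype=np.uint64)
assert (x + x)[0] == 0                     # 2^64 wraps to 0: addition is already mod q

rng = np.random.default_rng(6)
A = rng.integers(0, 2**64, size=(4, 8), dtype=np.uint64)
s = rng.integers(0, 3, size=8, dtype=np.uint64)
t = A @ s                                  # matrix-vector product mod 2^64, no reduction step
```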

However, because of the high soundness error (see section above), the security parameter we need to use is quite large ($t > 300$), which leads to concretely large proof sizes (dozens of megabytes for witnesses of length $>2^{17}$) and prover times ($>10$ minutes).

Lova vs LatticeFold

Concurrently to our work, Dan Boneh and Binyi Chen proposed LatticeFold, which is also a folding scheme from lattices. They use structured lattice assumptions and focus much more on practical efficiency (for example, targeting CCS as their constraint system, and drilling down on an NTT-friendly variant of sumcheck). Because of the additional structure of the R-SIS (Ring-SIS) assumption, LatticeFold can be expected to be more efficient in practice than Lova.

However, LatticeFold uses the sumcheck protocol to get around the relaxed extractability issue. Intuitively, this seems like a tool that is “too heavy” for the simple task of folding 4. For example, the DLog-based folding approaches are much more algebraic, and do little more than take a random linear combination of the folding inputs.

Our goal with Lova was to provide a foundation for “algebraic” folding from lattices, and we believe that more efficient follow-ups to Lova, using similar techniques but more structured assumptions, are very promising avenues for efficient post-quantum folding schemes.


  1. Pronounced like “lover”, hence the hearts 💕. ↩︎

  2. For example, when working over $\mathbb{Z}_q$, we can use $\beta \approx \sqrt{q}$, and set the commitment length $n$ to achieve an appropriate security level. ↩︎

  3. This is a simplification, and it is not guaranteed that this condition is satisfied, but we can argue that it happens sufficiently often such that we can efficiently find such challenges. ↩︎

  4. I’ve overheard this being described as “taking a sledgehammer to a small nail”. ↩︎
