Christian Knabenhans


Ph.D. student in security and privacy

EPFL

About me

I’m a second-year doctoral student at EPFL, working broadly on security and privacy topics. I am fortunate to be co-advised by Alessandro Chiesa in the COMPSEC lab, where I work on succinct and zero-knowledge proof systems, and Carmela Troncoso in the SPRING lab, where I design real-world systems to improve the security and privacy of at-risk users.

Before that, I worked on verifiable and robust Fully Homomorphic Encryption in Anwar Hithnawi’s Privacy-Preserving Systems Lab at ETH Zurich, and I hold a joint Master’s in Cyber Security from EPFL and ETH. During my Bachelor’s and Master’s, I also worked on static analysis and formal verification, as well as privacy issues in machine learning.

In my free time, I am (or was) involved in the non-profits ETH Cyber Group — Student Initiative and L’Association Francophone des Étudiants de Zurich. I also fence and rant about opera.

Publications

Folding schemes (Kothapalli et al., CRYPTO 2022) are a conceptually simple, yet powerful cryptographic primitive that can be used as a building block to realise incrementally verifiable computation (IVC) with low recursive overhead, without general-purpose non-interactive succinct arguments of knowledge (SNARKs). Most known folding schemes rely on the hardness of the discrete logarithm problem, and are thus not quantum-resistant and must operate over large prime fields. Existing post-quantum folding schemes (Boneh, Chen, ePrint 2024/257) are instead secure under structured lattice assumptions, such as the Module Short Integer Solution (MSIS) assumption, which also binds them to relatively complex arithmetic. In contrast, we construct Lova, the first folding scheme whose security relies on the (unstructured) SIS assumption. We provide a Rust implementation of Lova that uses only arithmetic in hardware-friendly power-of-two moduli. Crucially, this avoids the need to implement and perform any finite field arithmetic. At the core of our results lies a new exact Euclidean norm proof, which may be of independent interest.
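An illustrative aside (my own sketch, not from the paper): the reason power-of-two moduli are "hardware-friendly" is that reduction modulo 2⁶⁴ is what machine integers do for free (e.g. wrapping arithmetic on `u64` in Rust), whereas a prime modulus requires an explicit reduction after every operation.

```python
# Toy comparison: arithmetic mod 2^64 vs. mod a prime.
# In Python we mask explicitly; on real 64-bit hardware the power-of-two
# case is a single wrapping add/mul with no reduction logic at all.
M = 1 << 64  # power-of-two modulus (as used by Lova's implementation)

def add_mod(a: int, b: int) -> int:
    # Corresponds to u64::wrapping_add in Rust: the modular reduction
    # happens implicitly in hardware.
    return (a + b) % M

def mul_mod(a: int, b: int) -> int:
    # Corresponds to u64::wrapping_mul in Rust.
    return (a * b) % M

# By contrast, discrete-log-based folding schemes work over large prime
# fields, where every operation needs a genuine modular reduction:
P = (1 << 61) - 1  # a Mersenne prime, already one of the cheaper cases

def mul_mod_p(a: int, b: int) -> int:
    return (a * b) % P
```

The gap is invisible in Python, but in a native implementation the prime-field reduction (or Montgomery/Barrett machinery avoiding it) dominates; with a power-of-two modulus that machinery disappears entirely.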

Homomorphic encryption has become a practical solution for protecting the privacy of computations on sensitive data. However, existing homomorphic encryption pipelines do not guarantee the correctness of the computation result in the presence of a malicious adversary. We propose two plaintext encodings compatible with state-of-the-art fully homomorphic encryption schemes that enable practical client verification of homomorphic computations while supporting all the operations required for modern privacy-preserving analytics. Based on these encodings, we introduce VERITAS, a ready-to-use library for the verification of computations executed over encrypted data. VERITAS is the first library that supports the verification of any homomorphic operation. We demonstrate its practicality for various applications and, in particular, we show that it enables verifiability of homomorphic analytics with less than 3× computation overhead compared to the homomorphic encryption baseline.
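As a purely hypothetical toy, loosely inspired by challenge-based verifiable encodings (plain integers instead of ciphertexts, all function names invented, not VERITAS's actual construction): the client hides a few known challenge values among the data slots at secret positions, the server applies the same function to every slot, and the client recomputes only the cheap challenge slots to spot-check the result.

```python
import random

def encode(data, num_challenges=4, modulus=2**16):
    """Toy encoding: append random challenge slots and secretly
    shuffle all slot positions (the permutation stays with the client)."""
    challenges = [random.randrange(modulus) for _ in range(num_challenges)]
    slots = data + challenges
    perm = list(range(len(slots)))
    random.shuffle(perm)
    encoded = [slots[i] for i in perm]
    return encoded, perm, challenges

def verify(result, perm, challenges, f, data_len):
    """Client recomputes f only on the few challenge slots and checks
    the server's claimed result at those (secret) positions."""
    for j, c in enumerate(challenges):
        pos = perm.index(data_len + j)  # where challenge j landed
        if result[pos] != f(c):
            return False
    return True
```

A cheating server that tampers with slots without knowing the secret positions risks hitting a challenge slot and failing verification; the real encodings achieve this over FHE ciphertexts and with formal soundness guarantees.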

Fully Homomorphic Encryption (FHE) is a powerful building block for secure and private applications. However, state-of-the-art FHE schemes do not offer any integrity guarantees, which can lead to devastating correctness and security issues when FHE is deployed in non-trivial settings. In this paper, we take a critical look at existing integrity solutions for FHE, and analyze their (often implicit) threat models, efficiency, and adequacy with real-world FHE deployments. We explore challenges of what we believe is the most flexible and promising integrity solution for FHE: namely, zero-knowledge Succinct Non-interactive ARguments of Knowledge (zkSNARKs); we showcase optimizations for both general-purpose zkSNARKs and zkSNARKs designed for FHE. We then present two software frameworks, circomlib-FHE and zkOpenFHE, which allow practitioners to automatically augment existing FHE pipelines with integrity guarantees. Finally, we leverage our tools to evaluate and compare different approaches to FHE integrity, and discuss open problems that stand in the way of a widespread deployment of FHE in real-world applications.

Recent advancements in privacy-preserving machine learning (PPML) are paving the way to extend the benefits of ML to highly sensitive data that, until now, have been hard to utilize due to privacy concerns and regulatory constraints. Simultaneously, there is a growing emphasis on enhancing the transparency and accountability of machine learning, including the ability to audit ML deployments. While ML auditing and PPML have both been the subject of intensive research, they have predominantly been examined in isolation. However, their combination is becoming increasingly important. In this work, we introduce Arc, an MPC framework for auditing privacy-preserving machine learning. At the core of our framework is a new protocol for efficiently verifying MPC inputs against succinct commitments at scale. We evaluate the performance of our framework when instantiated with our consistency protocol and compare it to hashing-based and homomorphic-commitment-based approaches, demonstrating that it is up to 10⁴× faster and up to 10⁶× more concise.
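To make "verifying MPC inputs against succinct commitments" concrete, here is a caricature of the hashing-based baseline (my own toy, invented names; Arc's actual consistency protocol is different and far more efficient): each party publishes a salted hash of its input up front, and consistency means the input later fed into the MPC opens that commitment.

```python
import hashlib
import json

def commit(inputs: list, salt: bytes) -> str:
    """Succinct hash-based commitment to a party's MPC input."""
    payload = json.dumps(inputs).encode() + salt
    return hashlib.sha256(payload).hexdigest()

def check_opening(commitment: str, inputs: list, salt: bytes) -> bool:
    """Check that a claimed input/salt pair opens the commitment."""
    return commit(inputs, salt) == commitment
```

The catch, and the reason this baseline is slow, is that in an actual MPC deployment this check cannot be run in the clear: the hash must be recomputed *inside* the MPC circuit over secret-shared inputs, which is exactly the cost that a dedicated consistency protocol avoids.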

Talks

Folding schemes are cryptographic tools that allow for space-efficient and incrementally updatable proofs of structured computations, such as Incrementally Verifiable Computation (IVC) and Proof-Carrying Data (PCD). However, most current folding schemes lack post-quantum security, and developing such schemes from post-quantum assumptions has proven technically challenging. In this talk, I will give an overview of the construction of zero-knowledge Succinct Non-interactive Arguments of Knowledge (zkSNARKs) based on lattice assumptions and the challenges of building folding schemes from “noisy” cryptographic assumptions such as lattices. I will introduce Lova, a lattice analogue of the foundational Nova folding scheme, and discuss general techniques for achieving exact norm extraction, a complex but crucial requirement for many proof systems. Finally, I will present lattirust, a forthcoming high-performance library for lattice cryptography with a special emphasis on zkSNARKs. This talk is based on joint work with Giacomo Fenzi, Duc Tu Pham, and Ngoc Khanh Nguyen.
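Not Lova itself, but a toy showing the basic shape of folding (my own sketch, under a deliberately trivial linear relation): two instance–witness pairs are collapsed into one via a random linear combination, so the work carried forward per step stays constant instead of growing with the number of folded instances.

```python
import random

# Toy folding for the linear relation A @ w == x over Z_p.
# Real folding schemes (Nova, Lova) handle R1CS-style relations and
# need commitments and cross-terms; linearity makes this toy trivial,
# but the shape is the same: two instances become one.
p = 2**31 - 1

def matvec(A, w):
    return [sum(a * b for a, b in zip(row, w)) % p for row in A]

def fold(x1, w1, x2, w2, r):
    """Random linear combination of two instance/witness pairs."""
    x = [(a + r * b) % p for a, b in zip(x1, x2)]
    w = [(a + r * b) % p for a, b in zip(w1, w2)]
    return x, w

A = [[1, 2], [3, 4]]
w1, w2 = [5, 6], [7, 8]
x1, x2 = matvec(A, w1), matvec(A, w2)

r = random.randrange(p)                 # verifier's random challenge
x, w = fold(x1, w1, x2, w2, r)
assert matvec(A, w) == x                # folded instance still satisfies A @ w == x
```

In the lattice setting the witnesses must additionally stay *short*, and a random linear combination can blow up their norm; controlling and proving that norm exactly is the "exact norm extraction" problem the talk discusses.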

In recent years, FHE has made significant gains in performance and usability. As a result, we see a first wave of real-world deployments and an increasing demand for practical applications of FHE. However, deploying FHE in the real world requires addressing challenges that have so far received less attention, as the community was primarily focused on achieving efficiency and usability. Specifically, the assumption of a semi-honest evaluating party, which is at the core of most FHE research, is incompatible with a large number of deployment scenarios. Scenarios that violate this assumption do not simply suffer from correctness issues, as one might expect, but in fact enable an adversary to completely undermine the confidentiality guarantees of FHE, up to and including very practical key-recovery attacks.

As a response, a variety of works have tried to augment FHE for settings beyond the traditional semi-honest assumption. This fundamentally revolves around guaranteeing some form of integrity for FHE, while retaining sufficient malleability to allow homomorphic computations. However, it remains unclear to what extent existing approaches actually address the challenges of real-world deployment, as we identify significant gaps between the assumptions these works generally make and the way state-of-the-art FHE schemes are used in practice.

In this talk, we survey and analyze existing approaches to FHE integrity in the context of real-world deployment scenarios, identifying their capabilities, shortcomings, and the most promising candidates. We have also implemented and evaluated these constructions on realistic workloads, and report concrete performance numbers. Finally, we conclude with a discussion of current capabilities, recommendations for future research directions, and an overview of the hurdles on the path to our ideal end-goal: a cryptographic equivalent of a trusted execution environment, i.e., a cryptoprocessor enabling fully private and verifiable computation.

Fully Homomorphic Encryption (FHE) is seeing increasing real-world deployment to protect data in use by allowing computation over encrypted data. However, the same malleability that enables homomorphic computations also raises integrity issues, which have so far been mostly overlooked for practical deployments. While FHE’s lack of integrity has obvious implications for correctness, it also has severe implications for confidentiality: a malicious server can leverage the lack of integrity to carry out interactive key-recovery attacks.

As a result, virtually all FHE schemes and applications assume an honest-but-curious server who does not deviate from the protocol. However, this assumption is insufficient for a wide range of deployment scenarios. While there has been work that aims to address this gap, current approaches fail to fully address the needs and characteristics of modern FHE schemes and applications.

In this talk, I will discuss the need for and challenges of achieving verifiable Fully Homomorphic Encryption (vFHE). I will then introduce a new notion for maliciously secure verifiable FHE and provide concrete instantiations of the protocol based on a variety of integrity primitives (primarily based on Zero-Knowledge Proofs). Finally, I will examine the practicality of verifiable FHE today and highlight the need for further research on tailored integrity solutions for FHE. This is joint work with Alexander Viand and Anwar Hithnawi.

News

26 Aug 2024 Lova 💕 got accepted at Asiacrypt'24!

01 Aug 2024 I’m giving a talk about our lattice folding scheme Lova 💕 at KU Leuven 🇧🇪, and then I’ll be visiting King’s College 🇬🇧 for a week (with a pit stop at Royal Holloway)!

24 May 2024 Veritas got accepted at CCS'24!

09 May 2024 Arc got accepted at USENIX Security '24!

12 Apr 2024 I’m giving a talk at Real World Crypto 🇨🇦 this year! I’ll also be at HACS and FHE.org in Toronto, and visiting UWaterloo.

01 Sep 2023 I’m starting my Ph.D. at EPFL!

06 Jul 2023 I’ll be at the EPFL Summer Research Institute, and at PETS and HotPETS in Lausanne next week!

10 May 2023 I’m giving a talk at the Stanford security seminar next Wednesday, and I’ll be presenting a poster at S&P 🇺🇸!

25 Apr 2023 I’ll be attending the Lattices meet Hashes workshop at the Bernoulli Center at EPFL 🇨🇭

15 Apr 2023 I’ll be starting a Ph.D. at EPFL 🇨🇭 in September 2023!

Projects

Veritas - Verifiable Encodings for Secure Homomorphic Analytics
In the Veritas project, we investigated verifiable encodings for secure homomorphic analytics, i.e., cryptographic authenticators to guarantee the correctness of privacy-preserving computations performed using fully homomorphic encryption (FHE). Concretely, I fully implemented our two main approaches and variants thereof on top of the Lattigo FHE library.
Transfer Learning in Property Inference Attacks
Predicting the privacy loss from gradient updates in collaborative learning using new affinity measures derived from transfer learning.
Automatic Inference of Hyperproperties
Today, automated reasoning about program behavior is a well-established discipline in computer science, with a wide array of tools and techniques. In the most common scenario, the goal is to prove trace properties of programs, such as termination or functional correctness.