Christian Knabenhans

Christian Knabenhans /ˈknaː.bn̩haːns/

Ph.D. student in security and privacy

EPFL

About me

I’m a doctoral student at EPFL, working broadly on security and privacy topics. I am fortunate to be co-advised by Alessandro Chiesa in the COMPSEC lab, where I work on succinct and zero-knowledge proof systems, and Carmela Troncoso in the SPRING lab, where I design real-world systems to improve the security and privacy of at-risk users.

Before that, I worked on verifiable and robust Fully Homomorphic Encryption in Anwar Hithnawi’s Privacy-Preserving Systems Lab at ETH Zurich, and I hold a joint Master’s in Cyber-Security from EPFL and ETH. During my Bachelor’s and Master’s, I also worked on static analysis and formal verification, as well as on privacy issues in machine learning.

In my free time, I am involved in the non-profit EPFL Cyber Group Student Initiative. I also fence and rant about opera.

Publications

Collaborative documents (e.g., Google Docs, Microsoft 365) often contain sensitive information such as personal or financial data. In this work, we extend the protection of end-to-end encryption (E2EE), currently (mostly) restricted to messaging, to collaborative documents. We elicit and formalize the security and functional requirements of End-to-End Encrypted Collaborative Documents (E2EE-CD). We then put forth a generic framework to realize E2EE-CD by combining an end-to-end encrypted asynchronous broadcast channel with any edit-reconciliation mechanism that ensures globally consistent views of a document. We give formal proofs that directly relate the security of our E2EE-CD solution to the security of the underlying end-to-end encrypted communication channel. We then elicit additional deployment requirements of E2EE-CD for investigative journalists and design SignalCD, an E2EE-CD system built on top of Signal’s group messaging protocol and tailored to this setting. We analyze the security guarantees of SignalCD, implement a prototype, and empirically show that our solution is efficient enough to permit real-time collaboration.
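To convey the shape of the generic composition, here is a minimal Rust sketch: an E2EE asynchronous broadcast channel carries opaque edit operations, and any reconciliation mechanism that guarantees globally consistent views merges them locally. All trait and method names are hypothetical illustrations, not SignalCD’s actual API; the paper’s security argument reduces E2EE-CD security to the security of the underlying channel.

```rust
/// Any end-to-end encrypted asynchronous broadcast channel (e.g., group messaging).
trait E2eeBroadcast {
    fn send(&mut self, payload: Vec<u8>);   // encrypted end-to-end by the channel
    fn recv(&mut self) -> Vec<Vec<u8>>;     // delivered payloads, possibly out of order
}

/// Any edit-reconciliation mechanism ensuring globally consistent views (e.g., a CRDT).
trait Reconciler {
    fn apply_local(&mut self, edit: &[u8]) -> Vec<u8>; // apply an edit, return the op to broadcast
    fn merge_remote(&mut self, op: &[u8]);             // merge an op received from another client
    fn render(&self) -> String;                        // current view of the document
}

/// E2EE collaborative document = E2EE channel + reconciliation mechanism.
struct E2eeCd<C: E2eeBroadcast, R: Reconciler> {
    channel: C,
    doc: R,
}

impl<C: E2eeBroadcast, R: Reconciler> E2eeCd<C, R> {
    fn edit(&mut self, edit: &[u8]) {
        let op = self.doc.apply_local(edit);
        self.channel.send(op); // confidentiality comes entirely from the channel
    }
    fn sync(&mut self) {
        for op in self.channel.recv() {
            self.doc.merge_remote(&op);
        }
    }
}
```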

In times of crisis, humanitarian organizations bring aid to those affected (e.g., water, food, medical supplies, cash assistance). Prior works introduced privacy-preserving systems for digitizing the aid distribution process, increasing its efficiency and security. These solutions, by design, do not allow humanitarian organizations to collect metrics about the aid distribution process. Such assessments (e.g., the proportion of aid distributed to a minority) are crucial to enable the organizations to improve their operations, to perform their duty of care, and to provide transparency and accountability towards recipients, donors, and the general public. In partnership with the International Committee of the Red Cross (ICRC), we identify assessments relevant to humanitarian aid deployments, as well as these assessments’ security and privacy requirements. We introduce a generic framework that augments existing privacy-preserving humanitarian aid distribution systems with such assessments. This framework enables the collection of aggregate statistics about the aid distribution process without compromising the privacy of recipients and without requiring any changes to the existing protocols. To realize our framework, we introduce one-time functional encryption (1FE), for which we propose efficient realizations from standard cryptographic primitives. We design and implement two variants of our framework: a more efficient one, secure against semi-honest adversaries, and a more robust one, secure against malicious adversaries. We also introduce the novel notions of threat model agility and graceful degradation. These notions enable us to model the unstable environment of humanitarian aid distribution, where the capabilities of the adversary may change suddenly (e.g., when a militia takes over a region in conflict), invalidating the threat model under which the system was originally deployed. We believe these notions are of independent interest for other privacy-preserving applications deployed in unstable environments.
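For intuition, functional encryption follows the standard interface sketched below; a one-time variant would restrict each ciphertext and functional key to a single use. Both this reading and the notation are illustrative, not necessarily the paper’s exact formalization of 1FE.

```latex
% Standard functional-encryption syntax (illustrative); a "one-time" variant
% restricts each ciphertext / functional key to a single use.
\begin{align*}
  \mathsf{Setup}(1^\lambda) &\to (\mathsf{mpk}, \mathsf{msk}) \\
  \mathsf{KeyGen}(\mathsf{msk}, f) &\to \mathsf{sk}_f \\
  \mathsf{Enc}(\mathsf{mpk}, x) &\to \mathsf{ct} \\
  \mathsf{Dec}(\mathsf{sk}_f, \mathsf{ct}) &= f(x)
  \quad \text{(and nothing more about $x$ is revealed)}
\end{align*}
```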

We study the security of a popular paradigm for constructing SNARGs, closing a key security gap left open by prior work. The paradigm consists of two steps: first, construct a public-coin succinct interactive argument by combining a functional interactive oracle proof (FIOP) and a functional commitment scheme (FC scheme); second, apply the Fiat–Shamir transformation in the random oracle model. Prior work neither considered this generalized setting nor proved the security of this second step (even in special cases). We prove that the succinct argument obtained in the first step satisfies state-restoration security, thereby ensuring that the second step does in fact yield a succinct non-interactive argument. This holds provided that the FIOP satisfies state-restoration security and the FC scheme satisfies a natural state-restoration variant of function binding (a generalization of position binding for vector commitment schemes). Moreover, we prove that notable FC schemes satisfy state-restoration function binding, allowing us to establish, via our main result, the security of several SNARGs of interest (in the random oracle model). This includes a security proof of Plonk, in the ROM, based on ARSDH (a falsifiable assumption).
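Schematically, the paradigm and the main result read as follows (informal; the notation here is chosen for illustration only):

```latex
% Step 1: combine an FIOP with a functional commitment (FC) scheme into a
% public-coin succinct interactive argument; Step 2: apply Fiat--Shamir in
% the random oracle model (ROM). Informal restatement of the result above.
\[
  \mathsf{FIOP} + \mathsf{FC}
  \;\xrightarrow{\ \text{step 1}\ }\;
  \text{public-coin succinct interactive argument}
  \;\xrightarrow{\ \text{Fiat--Shamir (ROM)}\ }\;
  \text{SNARG}
\]
\[
  \text{state-restoration soundness of } \mathsf{FIOP}
  \;\wedge\;
  \text{state-restoration function binding of } \mathsf{FC}
  \;\Longrightarrow\;
  \text{the resulting SNARG is secure.}
\]
```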

Folding schemes (Kothapalli et al., CRYPTO 2022) are a conceptually simple yet powerful cryptographic primitive that can be used as a building block to realize incrementally verifiable computation (IVC) with low recursive overhead, without resorting to general-purpose succinct non-interactive arguments of knowledge (SNARKs). Most known folding schemes rely on the hardness of the discrete logarithm problem, and thus are not quantum-resistant and operate over large prime fields. Existing post-quantum folding schemes based on lattices (Boneh, Chen, ePrint 2024/257) are instead secure under structured lattice assumptions, such as the Module Short Integer Solution (MSIS) assumption, which also binds them to relatively complex arithmetic. In contrast, we construct Lova, the first folding scheme whose security relies on the (unstructured) SIS assumption. We provide a Rust implementation of Lova, which uses only arithmetic in hardware-friendly power-of-two moduli. Crucially, this avoids the need to implement and perform any finite-field arithmetic. At the core of our results lies a new exact Euclidean norm proof, which might be of independent interest.
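To see why power-of-two moduli are hardware-friendly, here is a toy Rust sketch (not Lova’s actual code): with a modulus such as q = 2^64, ring arithmetic is exactly the CPU’s native wrapping arithmetic, so no Montgomery/Barrett-style finite-field reduction is needed, and the basic folding step is just a random linear combination of witnesses (real schemes also track commitments and norm bounds, omitted here).

```rust
/// An element of Z_{2^64}: addition and multiplication are the hardware's
/// native wrapping u64 operations, with no explicit modular reduction.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
struct Zq(u64);

impl Zq {
    fn add(self, other: Zq) -> Zq { Zq(self.0.wrapping_add(other.0)) }
    fn mul(self, other: Zq) -> Zq { Zq(self.0.wrapping_mul(other.0)) }
}

/// Random linear combination of two witness vectors: the basic shape of a
/// folding step (the challenge c is assumed to come from the transcript).
fn fold(w1: &[Zq], w2: &[Zq], c: Zq) -> Vec<Zq> {
    w1.iter().zip(w2).map(|(a, b)| a.add(c.mul(*b))).collect()
}

fn main() {
    let w1 = vec![Zq(1), Zq(2)];
    let w2 = vec![Zq(3), Zq(u64::MAX)]; // u64::MAX ≡ -1 (mod 2^64)
    let c = Zq(5);
    println!("{:?}", fold(&w1, &w2, c)); // [Zq(16), Zq(18446744073709551613)]
}
```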

Homomorphic encryption has become a practical solution for protecting the privacy of computations on sensitive data. However, existing homomorphic encryption pipelines do not guarantee the correctness of the computation result in the presence of a malicious adversary. We propose two plaintext encodings, compatible with state-of-the-art fully homomorphic encryption schemes, that enable practical client-side verification of homomorphic computations while supporting all the operations required for modern privacy-preserving analytics. Based on these encodings, we introduce VERITAS, a ready-to-use library for the verification of computations executed over encrypted data. VERITAS is the first library that supports the verification of any homomorphic operation. We demonstrate its practicality for various applications and, in particular, we show that it enables verifiable homomorphic analytics with less than 3× computational overhead compared to the homomorphic encryption baseline.
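As a toy illustration of an encoding-based verification check in the spirit of the above (hypothetical, in the clear, and not VERITAS’s actual encodings or API): the client hides known challenge values among its data slots, the server applies the same function to every slot, and the client accepts only if the returned challenge slots match locally recomputed values.

```rust
/// Append client-chosen challenge values to the data slots. In the real
/// pipeline the encoded vector would be FHE-encrypted and the challenge
/// positions kept secret; here everything stays in the clear for illustration.
fn encode(data: &[u64], challenges: &[u64]) -> Vec<u64> {
    let mut slots = data.to_vec();
    slots.extend_from_slice(challenges);
    slots
}

/// Accept only if the challenge slots of the server's result match the
/// function recomputed locally on the (cheap, constant-size) challenges.
fn verify(result: &[u64], challenges: &[u64], f: impl Fn(u64) -> u64) -> bool {
    let n = result.len() - challenges.len();
    result[n..].iter().zip(challenges).all(|(r, c)| *r == f(*c))
}

fn main() {
    let f = |x: u64| 3 * x + 1;                   // the (homomorphic) computation
    let encoded = encode(&[10, 20, 30], &[7, 9]); // client-side encoding
    let result: Vec<u64> = encoded.iter().map(|x| f(*x)).collect(); // server side
    assert!(verify(&result, &[7, 9], f));         // client-side check
    println!("verified: {:?}", &result[..3]);     // verified: [31, 61, 91]
}
```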

Talks

Today, humanitarian aid distribution heavily relies on manual processes that can be slow, error-prone, and costly. Humanitarian aid organizations therefore have a strong incentive to digitalize the aid distribution process. This would allow them to scale up their operations, reduce costs, and increase the impact of their limited resources. Digitalizing the aid distribution process introduces new challenges, especially in terms of privacy and security. These challenges are particularly acute in the context of humanitarian aid, where the recipients are often vulnerable populations, and where the aid distribution process is subject to a high degree of scrutiny by the public, the media, and the donors. This is compounded by a very strong threat model, with adversaries ranging from corrupt officials to armed groups, and by the fact that the recipients themselves may not be able to protect their own privacy. This talk is split into three main parts: first, we stress the need for assessments when deploying privacy-preserving applications in the real world, using concrete examples. In particular, we discuss the tension between supporting assessments and the security and privacy of the application’s users. Second, we reflect on our experience in designing privacy-preserving applications for various use cases, and discuss how we go from an informal, high-level need expressed by our partners to a formal model and a concrete protocol. Here, we stress common pitfalls and outline a methodology that we have synthesized from our experience. Finally, we discuss how we tackled the use case of a privacy-preserving aid distribution system with statistics, in collaboration with partners from the International Committee of the Red Cross. We present a general framework to collect and evaluate statistics in a privacy-preserving way (including one-time functional encryption, a new primitive that we introduce), and we present three concrete instantiations of this framework (based on trusted execution environments, linear secret sharing, and threshold fully homomorphic encryption, respectively). This talk is based on joint work with Lucy Qin, Justinas Sukaitis, Vincent Graf Narbel, and Carmela Troncoso.

We study the security of a popular paradigm for constructing SNARGs, closing a key security gap left open by prior work. The paradigm consists of two steps: first, construct a public-coin succinct interactive argument by combining a FIOP (generalized interactive oracle proof) and an FCS (functional commitment scheme); second, apply the Fiat–Shamir transformation in the random oracle model. Prior work neither considered this generalized setting nor proved the security of this second step, even in restricted settings. We prove that the succinct argument obtained in the first step satisfies state-restoration security, thereby ensuring that the second step does in fact yield a succinct non-interactive argument. This holds provided that the FIOP satisfies state-restoration security and the FCS satisfies a natural state-restoration variant of function binding (a generalization of position binding for vector commitment schemes). Moreover, we show that, using our approach, one can modularly compile the Plonk IOP with the linearized KZG polynomial commitment scheme into a secure SNARG in the random oracle model.

Folding schemes are cryptographic tools that allow for space-efficient and incrementally updatable proofs of structured computations, such as Incrementally Verifiable Computation (IVC) and Proof-Carrying Data (PCD). However, most current folding schemes lack post-quantum security, and developing such schemes from post-quantum assumptions has proven technically challenging. In this talk, I will give an overview of the construction of zero-knowledge Succinct Non-interactive Arguments of Knowledge (zkSNARKs) based on lattice assumptions, and of the challenges of building folding schemes from “noisy” cryptographic assumptions such as lattices. I will introduce Lova, a lattice analogue of the foundational Nova folding scheme, and discuss general techniques for achieving exact norm extraction, a complex but crucial requirement for many proof systems. Finally, I will present lattirust, a forthcoming high-performance library for lattice cryptography with a special emphasis on zkSNARKs. This talk is based on joint work with Giacomo Fenzi, Duc Tu Pham, and Ngoc Khanh Nguyen.

In recent years, FHE has made significant gains in performance and usability. As a result, we see a first wave of real-world deployments and an increasing demand for practical applications of FHE. However, deploying FHE in the real world requires addressing challenges that have so far received less attention, as the community was primarily focused on achieving efficiency and usability. Specifically, the assumption of a semi-honest evaluating party, which is at the core of most FHE research, is incompatible with a large number of deployment scenarios. Scenarios that violate this assumption do not simply suffer from correctness issues, as one might expect, but in fact enable an adversary to completely undermine the confidentiality guarantees of FHE, up to and including very practical key-recovery attacks. In response, a variety of works have tried to augment FHE for settings beyond the traditional semi-honest assumption. This fundamentally revolves around guaranteeing some form of integrity for FHE, while retaining sufficient malleability to allow homomorphic computations. However, it remains unclear to what extent existing approaches actually address the challenges of real-world deployment, as we identify significant gaps between the assumptions these works generally make and the way state-of-the-art FHE schemes are used in practice. In this talk, we survey and analyze existing approaches to FHE integrity in the context of real-world deployment scenarios, and identify their capabilities, shortcomings, and the most promising candidates. We also implemented and evaluated these constructions experimentally on realistic workloads, and we report concrete performance numbers. Finally, we conclude with a discussion of current capabilities, recommendations for future research directions, and an overview of the hurdles on the path to our ideal end goal: a cryptographic equivalent of a trusted execution environment, i.e., a cryptoprocessor enabling fully private and verifiable computation.

News

1 Sep 2025 My student Mohamed Badr Taddist is presenting our privacy analysis of the C2PA ecosystem for journalists and publishers at two industry conferences.

14 Jul 2025 I’m attending the Proofs workshop at the Simons Institute this week!

11 Jul 2025 We just wrapped up the summer school on lattice crypto that I’m co-organising with Shannon Veitch and Jonathan Bootle. Slides and videos coming soon!

2 Jun 2025 I’m starting a summer internship at Brave, working with Sofía Celi on PIR things!

1 Jun 2025 I’m giving a talk about our end-to-end encrypted collaborative document system for investigative journalists at the European Broadcasting Union’s Media Cybersecurity Conference.

24 Feb 2025 I’ll be giving a talk on designing privacy-preserving systems for the International Committee of the Red Cross at RWC'25 🇧🇬 (I’ll also give a talk on generalized IOPs at ZKProof)!

15 Dec 2024 I’m co-organizing a summer school on lattice-based cryptography at EPFL next summer!

26 Aug 2024 Lova got accepted at Asiacrypt'24!

1 Aug 2024 I’m giving a talk about our lattice folding scheme Lova at KU Leuven 🇧🇪, and then I’ll be visiting King’s College 🇬🇧 for a week (with a pit stop at Royal Holloway)!

24 May 2024 Veritas got accepted at CCS'24!

Supervision

Spring 2025: Mohamed Badr Taddist, M.Sc. thesis
Spring 2025: Gustave Charles-Saigne, M.Sc. semester project
Spring 2025: Pedro Laginhas Gouveia, M.Sc. semester project
Spring 2025: Kwok Wai Liu, M.Sc. semester project
Spring 2025: Adrien Bouquet, M.Sc. semester project
Fall 2024: Ganyuan Cao, M.Sc. thesis (→ PhD student at Télécom Paris, Institut Polytechnique de Paris)
Fall 2024: Emile Hreich, M.Sc. semester project
Fall 2024: Jiajun Jiang, M.Sc. semester project
Fall 2024: Xavier Marchon, M.Sc. semester project
Fall 2024: Giacomo Fiorindo, M.Sc. semester project
Spring 2024: Zihan Yu, M.Sc. semester project
Spring 2023: Antonio Merino-Gallardo, M.Sc. semester project (→ PhD student at HPI and IBM Research)