Christian Knabenhans

Christian Knabenhans /ˈknaː.bn̩haːns/

Ph.D. student in security and privacy

EPFL

About me

I’m a doctoral student at EPFL, working broadly on security and privacy topics. I am fortunate to be co-advised by Alessandro Chiesa (COMPSEC lab) and Carmela Troncoso (SPRING lab). My research aims to close the gap between the theory and practice of advanced cryptographic primitives, with a focus on efficiency and meaningful security guarantees against real-world threats. This summer (2026), I will intern at Microsoft Research in Redmond, working with Greg Zaverucha.

In the summer of 2025, I interned at Brave with Sofía Celi. Before that, I worked on verifiable and robust Fully Homomorphic Encryption in Anwar Hithnawi’s Privacy-Preserving Systems Lab at ETH Zurich, and I hold a joint Master’s in cyber security from EPFL and ETH Zurich. During my Bachelor’s and Master’s, I also worked on static analysis and formal verification, as well as privacy issues in machine learning. In my free time, I am involved in the non-profit EPFL Cyber Group Student Initiative. I also fence and rant about opera.

Publications

As digital identity systems gain traction around the world, many see privacy-enhancing technologies (PETs) as the key to ensuring safe deployment. We critically examine whether this is the case, using the European Digital Identity Framework (EUDIF) as an example. We leverage techniques from cryptographic modeling to formally capture the necessary leakage of the functionality of the EUDIF and its proposed applications. Then, we develop a harm analysis methodology that illustrates, using harm trees, how this leakage, together with other constraints stemming from design decisions or the deployment context, leads to harms. Moreover, our harm modeling enables us to distinguish which pathways to harm are inherent to the core functionality from those that can be prevented with PETs. Our analysis shows that, while PETs can reduce information flows, they fall short in mitigating the harms that deploying digital identity can bring to individuals and society.

On-the-fly multi-party computation (MPC), introduced by López-Alt, Tromer, and Vaikuntanathan (STOC 2012), enables clients to dynamically join a computation without remaining continuously online. Yet, the original proposal suffers from substantial efficiency and expressivity limitations hindering practical deployments. Even though various techniques have been proposed to mitigate these shortcomings, seeing on-the-fly MPC as a combination of independent building blocks jeopardizes the security of the original model. Thus, we revisit on-the-fly MPC in light of recent advances and extend its formal framework to incorporate efficiency and expressivity improvements. Our approach is built around multi-group homomorphic encryption (MGHE), which generalizes threshold and multi-key HE and serves as the core primitive for on-the-fly MPC. Our contributions are fourfold: i) We propose new security notions for MGHE (e.g., IND-CPA with partial decryption, circuit privacy) and justify their suitability for on-the-fly MPC. ii) We present the first ideal functionality for MGHE in the Universal Composability (UC) framework and characterize the conditions under which it can be realized, via reductions to our proposed security notions. iii) We present a generic protocol that securely realizes our on-the-fly MPC functionality against a semi-malicious adversary from our MGHE functionality. iv) Finally, we provide two generic compilers that lift these protocols to withstand a fully malicious adversary by leveraging zero-knowledge arguments. Our analysis in the UC framework enables modular protocol analysis, where more efficient schemes can be seamlessly substituted as long as they meet the required security defined by the functionalities, while retaining the security guarantees offered by the original construction.

We study the security of a popular paradigm for constructing SNARGs, closing a key security gap left open by prior work. The paradigm consists of two steps: first, construct a public-coin succinct interactive argument by combining a functional interactive oracle proof (FIOP) and a functional commitment scheme (FC scheme); second, apply the Fiat–Shamir transformation in the random oracle model. Prior work neither considered this generalized setting nor proved the security of this second step (even in special cases). We prove that the succinct argument obtained in the first step satisfies state-restoration security, thereby ensuring that the second step does in fact yield a succinct non-interactive argument. This holds provided that the FIOP satisfies state-restoration security and the FC scheme satisfies a natural state-restoration variant of function binding (a generalization of position binding for vector commitment schemes). Moreover, we prove that notable FC schemes satisfy state-restoration function binding, allowing us to establish, via our main result, the security of several SNARGs of interest (in the random oracle model). This includes a security proof of Plonk, in the ROM, based on ARSDH (a falsifiable assumption).
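To give a flavor of the second step, here is a toy Fiat–Shamir transformation of a classic public-coin protocol (Schnorr proof of knowledge of a discrete logarithm), where a hash function stands in for the random oracle. This is a generic textbook illustration with insecure toy parameters, not the paper's FIOP/FC construction.

```python
import hashlib
import secrets

# Toy Schnorr proof of knowledge of x with y = g^x (mod p), made
# non-interactive via Fiat-Shamir: the verifier's random challenge is
# replaced by a hash of the transcript (random oracle heuristic).
p = 2**61 - 1          # small Mersenne prime: illustration only, not secure
g = 3

def oracle(*vals: int) -> int:
    # Model the random oracle: challenge derived from the full transcript.
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % (p - 1)

def prove(x: int):
    y = pow(g, x, p)
    r = secrets.randbelow(p - 1)
    t = pow(g, r, p)                 # prover's first message (commitment)
    c = oracle(g, y, t)              # Fiat-Shamir challenge = H(transcript)
    s = (r + c * x) % (p - 1)        # response
    return y, (t, s)

def verify(y: int, proof) -> bool:
    t, s = proof
    c = oracle(g, y, t)              # verifier recomputes the challenge
    return pow(g, s, p) == (t * pow(y, c, p)) % p

y, pi = prove(123456789)
assert verify(y, pi)
```

Because the challenge is bound to the commitment `t` through the hash, a prover cannot pick `t` after seeing the challenge, which is exactly the interaction-removal step the abstract analyzes (via state restoration) for the general FIOP-plus-FC paradigm.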

Collaborative documents (e.g., Google Docs, Microsoft 365) often contain sensitive information such as personal or financial data. In this work, we extend the protection of end-to-end encryption (E2EE), currently (mostly) restricted to messaging, to collaborative documents. We elicit and formalize the security and functional requirements of End-to-End Encrypted Collaborative Documents (E2EE-CD). We then put forth a generic framework to realize E2EE-CD, by combining an end-to-end encrypted asynchronous broadcast channel with any edit reconciliation mechanism that ensures globally consistent views of a document. We give formal proofs that directly relate the security of our E2EE-CD solution to the security of the underlying end-to-end encrypted communication channel. We then elicit additional deployment requirements for E2EE-CD for investigative journalists and design SignalCD, an E2EE-CD system built on top of Signal’s group messaging protocol tailored for this setting. We analyze the security guarantees of SignalCD, implement a prototype, and empirically show that our solution is efficient enough to permit real-time collaboration.

In times of crisis, humanitarian organizations bring aid to those affected (e.g., water, food, medical supplies, cash assistance). Prior works introduced privacy-preserving systems for digitizing the aid distribution process, increasing their efficiency and security. These solutions, by design, do not allow humanitarian organizations to collect metrics about the aid distribution process. Such assessments (e.g., the proportion of aid distributed to a minority) are crucial to enable the organizations to improve their operations, to perform their duty of care, and to enable transparency and accountability towards recipients, donors, and the public in general. In partnership with the International Committee of the Red Cross (ICRC), we identify assessments relevant to humanitarian aid deployments and these assessments’ security and privacy requirements. We introduce a generic framework that augments existing privacy-preserving humanitarian aid distribution systems with such assessments. This framework enables the collection of aggregate statistics about the aid distribution process without compromising the privacy of recipients, and without requiring any changes to the existing protocols. To realize our framework, we introduce one-time functional encryption (1FE), for which we propose efficient realizations from standard cryptographic primitives. We design and implement two variants of our framework: a more efficient one, secure against semi-honest adversaries; and a more robust one, secure against malicious adversaries. We also introduce the novel notions of threat model agility and graceful degradation. These notions enable us to model the unstable environment of humanitarian aid distribution, where the capabilities of the adversary may change suddenly (e.g., when a militia takes over a region in conflict), invalidating the threat model under which the system was originally deployed. We believe these notions are of independent interest for other privacy-preserving applications deployed in unstable environments.

Folding schemes (Kothapalli et al., CRYPTO 2022) are a conceptually simple, yet powerful cryptographic primitive that can be used as a building block to realize incrementally verifiable computation (IVC) with low recursive overhead, without general-purpose non-interactive succinct arguments of knowledge (SNARKs). Most known folding schemes rely on the hardness of the discrete logarithm problem, and thus are not quantum-resistant and operate over large prime fields. Existing post-quantum folding schemes (Boneh, Chen, ePrint 2024/257) are instead secure under structured lattice assumptions, such as the Module Short Integer Solution (MSIS) assumption, which also binds them to relatively complex arithmetic. In contrast, we construct Lova, the first folding scheme whose security relies on the (unstructured) SIS assumption. We provide a Rust implementation of Lova, which uses only arithmetic in hardware-friendly power-of-two moduli. Crucially, this avoids the need to implement and perform any finite field arithmetic. At the core of our results lies a new exact Euclidean norm proof which might be of independent interest.
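A small sketch of why power-of-two moduli are "hardware-friendly": reduction mod 2^64 is just truncation to the low 64 bits, which machine integer arithmetic performs for free, whereas a prime modulus requires an explicit (and costlier) reduction step. This is a generic illustration of the arithmetic, not Lova's implementation.

```python
# Arithmetic mod 2^64: reduction is a bitwise AND with a mask, which is
# exactly what native wrapping u64 arithmetic does in hardware at no cost.
MASK = (1 << 64) - 1

def add_mod_2_64(a: int, b: int) -> int:
    return (a + b) & MASK          # wraps, like native u64 addition

def mul_mod_2_64(a: int, b: int) -> int:
    return (a * b) & MASK          # wraps, like native u64 multiplication

# A prime modulus needs a genuine modular reduction on every operation.
P = (1 << 61) - 1

def add_mod_p(a: int, b: int) -> int:
    return (a + b) % P

assert add_mod_2_64(MASK, 1) == 0  # 2^64 wraps to 0, as in hardware
```

In compiled code the power-of-two case needs no reduction instruction at all, which is the efficiency point behind avoiding finite-field arithmetic.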

Homomorphic encryption has become a practical solution for protecting the privacy of computations on sensitive data. However, existing homomorphic encryption pipelines do not guarantee the correctness of the computation result in the presence of a malicious adversary. We propose two plaintext encodings compatible with state-of-the-art fully homomorphic encryption schemes that enable practical client-side verification of homomorphic computations while supporting all the operations required for modern privacy-preserving analytics. Based on these encodings, we introduce VERITAS, a ready-to-use library for the verification of computations executed over encrypted data. VERITAS is the first library that supports the verification of any homomorphic operation. We demonstrate its practicality for various applications and, in particular, we show that it enables verifiability of homomorphic analytics with less than 3× computation overhead compared to the homomorphic encryption baseline.
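The underlying idea of client-verified outsourced computation can be illustrated with a toy linear MAC: alongside each value, the client keeps an authentication tag whose relation to the value is linear, so the relation survives additions performed by an untrusted server and the client can check the aggregate. This is a generic sketch of computing on authenticated data, not VERITAS's actual plaintext encodings (and it omits encryption entirely).

```python
import secrets

# Toy computing-on-authenticated-data: for each value v, the client keeps a
# linear MAC tag alpha * v (mod q) under a secret key alpha. Since the tag
# relation is linear, an untrusted server can add values and tags separately,
# and the client verifies the sum with one multiplication.
q = (1 << 61) - 1
alpha = secrets.randbelow(q)         # client-secret MAC key

def authenticate(v: int):
    return (v % q, (alpha * v) % q)  # (value, tag)

def server_sum(pairs):               # untrusted server: sums values and tags
    s = sum(v for v, _ in pairs) % q
    t = sum(t for _, t in pairs) % q
    return s, t

def client_check(s: int, t: int) -> bool:
    return t == (alpha * s) % q      # a forged sum passes with prob. ~1/q

data = [3, 14, 15]
s, t = server_sum([authenticate(v) for v in data])
assert client_check(s, t) and s == sum(data) % q
```

Supporting multiplications (and doing all of this under FHE, with low overhead) is where the real technical work of such systems lies.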

Fully Homomorphic Encryption (FHE) is a powerful building block for secure and private applications. However, state-of-the-art FHE schemes do not offer any integrity guarantees, which can lead to devastating correctness and security issues when FHE is deployed in non-trivial settings. In this paper, we take a critical look at existing integrity solutions for FHE, and analyze their (often implicit) threat models, efficiency, and adequacy with real-world FHE deployments. We explore challenges of what we believe is the most flexible and promising integrity solution for FHE: namely, zero-knowledge Succinct Non-interactive ARguments of Knowledge (zkSNARKs); we showcase optimizations for both general-purpose zkSNARKs and zkSNARKs designed for FHE. We then present two software frameworks, circomlib-FHE and zkOpenFHE, which allow practitioners to automatically augment existing FHE pipelines with integrity guarantees. Finally, we leverage our tools to evaluate and compare different approaches to FHE integrity, and discuss open problems that stand in the way of a widespread deployment of FHE in real-world applications.

Recent advancements in privacy-preserving machine learning are paving the way to extend the benefits of ML to highly sensitive data that, until now, have been hard to utilize due to privacy concerns and regulatory constraints. Simultaneously, there is a growing emphasis on enhancing the transparency and accountability of machine learning, including the ability to audit ML deployments. While ML auditing and PPML have both been the subjects of intensive research, they have predominantly been examined in isolation. However, their combination is becoming increasingly important. In this work, we introduce Arc, an MPC framework for auditing privacy-preserving machine learning. At the core of our framework is a new protocol for efficiently verifying MPC inputs against succinct commitments at scale. We evaluate the performance of our framework when instantiated with our consistency protocol and compare it to hashing-based and homomorphic-commitment-based approaches, demonstrating that it is up to 10⁴× faster and up to 10⁶× more concise.

Talks

Today, humanitarian aid distribution heavily relies on manual processes that can be slow, error-prone, and costly. Humanitarian aid organizations therefore have a strong incentive to digitalize the aid distribution process. This would allow them to scale up their operations, reduce costs, and increase the impact of their limited resources. Digitalizing the aid distribution process introduces new challenges, especially in terms of privacy and security. These challenges are particularly acute in the context of humanitarian aid, where the recipients are often vulnerable populations, and where the aid distribution process is subject to a high degree of scrutiny by the public, the media, and the donors. This is compounded by a very strong threat model, with adversaries ranging from corrupt officials to armed groups, and by the fact that the recipients themselves may not be able to protect their own privacy. The talk is split into three main parts: first, we stress the need for assessments when deploying privacy-preserving applications in the real world, using concrete examples. In particular, we discuss the tension between supporting assessments and the security and privacy of the application’s users. Second, we reflect on our experience in designing privacy-preserving applications for various use cases, and discuss how we go from an informal, high-level need expressed by our partners to a formal model and a concrete protocol. Here, we stress common pitfalls, and outline a methodology that we have synthesized from our experience. Finally, we discuss how we tackled the use case of a privacy-preserving aid distribution system with statistics, in collaboration with partners from the International Committee of the Red Cross. We present a general framework to collect and evaluate statistics in a privacy-preserving way (including one-time functional encryption, a new primitive that we introduce), and we present three concrete instantiations of this framework (based on trusted execution environments, linear secret sharing, and threshold fully homomorphic encryption, respectively). This talk is based on joint work with Lucy Qin, Justinas Sukaitis, Vincent Graf Narbel, and Carmela Troncoso.
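To illustrate the linear-secret-sharing flavor of instantiation mentioned above, here is a toy additive secret-sharing scheme for private aggregation: each sensitive value is split into shares that individually look uniformly random, aggregators sum shares locally, and only the combined sums reveal the total statistic. This is a sketch of the general technique, not the deployed protocol.

```python
import secrets

# Toy additive secret sharing over Z_q for private aggregation: any subset
# of fewer than all n shares reveals nothing about the shared value.
q = (1 << 61) - 1
n_aggregators = 3

def share(value: int):
    # n-1 uniformly random shares; the last one makes the sum equal the value.
    shares = [secrets.randbelow(q) for _ in range(n_aggregators - 1)]
    shares.append((value - sum(shares)) % q)
    return shares

def aggregate(all_shares):
    # Each aggregator sums its own column of shares locally; combining the
    # n partial sums reveals only the total statistic, not individual values.
    per_aggregator = [sum(col) % q for col in zip(*all_shares)]
    return sum(per_aggregator) % q

values = [1, 0, 1, 1]                # e.g., indicator bits for one statistic
assert aggregate([share(v) for v in values]) == sum(values) % q
```

The real systems add authorization, robustness against malicious parties, and the one-time functional encryption layer on top of this basic linearity.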

We study the security of a popular paradigm for constructing SNARGs, closing a key security gap left open by prior work. The paradigm consists of two steps: first, construct a public-coin succinct interactive argument by combining an FIOP (functional interactive oracle proof) and an FCS (functional commitment scheme); second, apply the Fiat–Shamir transformation in the random oracle model. Prior work neither considered this generalized setting nor proved the security of this second step, even in restricted settings. We prove that the succinct argument obtained in the first step satisfies state-restoration security, thereby ensuring that the second step does in fact yield a succinct non-interactive argument. This holds provided that the FIOP satisfies state-restoration security and the FCS satisfies a natural state-restoration variant of function binding (the generalization of position binding for vector commitment schemes). Moreover, we show that using our approach, one can modularly compile the Plonk IOP with the linearized KZG polynomial commitment scheme into a secure SNARG in the random oracle model.

Folding schemes are cryptographic tools that allow for space-efficient and incrementally updatable proofs of structured computations, such as Incrementally Verifiable Computation (IVC) and Proof-Carrying Data (PCD). However, most current folding schemes lack post-quantum security, and developing such schemes from post-quantum assumptions has proven technically challenging. In this talk, I will give an overview of the construction of zero-knowledge Succinct Non-interactive Arguments of Knowledge (zkSNARKs) based on lattice assumptions and the challenges of building folding schemes from “noisy” cryptographic assumptions such as lattices. I will introduce Lova, a lattice analogue of the foundational Nova folding scheme, and discuss general techniques for achieving exact norm extraction, a complex but crucial requirement for many proof systems. Finally, I will present lattirust, a forthcoming high-performance library for lattice cryptography with a special emphasis on zkSNARKs. This talk is based on joint work with Giacomo Fenzi, Duc Tu Pham, and Ngoc Khanh Nguyen.

News

7 May 2026 I’m giving a talk on the privacy harms of digital identity frameworks at the consumer protection workshop co-located with S&P in San Francisco at the end of May 🇺🇸!

6 May 2026 I’ll be at ZKProof (giving a talk on lattice-based arguments) and the Cryptographic Applications Workshop (giving a talk about privacy harms of the European digital identity framework and sitting on a panel on digital identities) in Rome this weekend 🇮🇹!

5 May 2026 Our paper “On the Fiat–Shamir Security of Succinct Arguments from Functional Commitments” is accepted at CRYPTO'26 🌴!

5 April 2026 I’m interning at Microsoft Research in Redmond this summer 🇺🇸!

1 Mar 2026 I’ll be at the HACS and FHE.org workshops next week, and I’m giving a talk on end-to-end encrypted collaborative documents for investigative journalists at Real World Crypto 🇹🇼!

15 Jan 2026 Our paper “End-to-End Encrypted Collaborative Documents” is accepted at USENIX Security'26!

8 Dec 2025 I’m going to the EU Parliament 🇪🇺 for a roundtable event on “Age verification in the digital single market”.

1 Nov 2025 Our paper “A Privacy-Preserving Humanitarian Aid Distribution System with Statistics” is accepted at PETS'26 🇨🇦!

7 Oct 2025 I was a co-author on two position papers accepted to the IAB/W3C workshop on age-based restriction on content access: “Private and Decentralized Age Verification Architecture” with folks at Brave, and “Limitations and Pitfalls of Integrating PETs in Online Age Verification” with EPFL+CISPA+MPI-SP folks.

1 Sep 2025 My student Mohamed Badr Taddist is presenting our privacy analysis of the C2PA ecosystem for journalists and publishers at two industry conferences.