Publication: Computer Aided Cryptanalysis from Ciphers to Side Channels

Abstract

In this dissertation, we study the security of cryptographic protocols and cryptosystems, from the mathematical definition of the primitives up to their physical implementations in the real world. We propose a representation of the chronological design using six layers (cryptographic primitives, cryptographic protocols, implementation, computer insecurity, side-channel cryptanalysis, and human-computer interaction). We argue that these layers should not be studied independently: many individually negligible security weaknesses coming from different layers can be combined to mount devastating practical attacks on cryptosystems. However, the complexity of a complete security analysis becomes huge, and interdisciplinary knowledge is needed; these limitations are probably the reason for the lack of complete security analyses in practice. We define a novel approach to combine and study the six layers simultaneously: we follow the data flow of a system and perform the security analysis across all six layers. We apply this technique in practice to the security analysis of computer keyboards, RC4, IEEE 802.11, and e-passports. Thanks to this method, we found 34 additional exploitable correlations in RC4 and defined the best key-recovery attacks on WEP and WPA. We also identified weaknesses in the design and the implementation of e-passports. We show that the security risk of each layer appears to be related to its level of complexity: the implementation layer, the computer insecurity layer, the side-channel layer, and the human-computer interface layer are subject to cost-effective attacks in practice. Interestingly, these layers are not intensively studied in cryptography, where research usually stays focused on the first two layers (and some side-channel attacks). In this dissertation, we also propose frameworks for computer-aided cryptanalysis.
Indeed, when the complexity of a system is too great for manual analysis, tools may find weaknesses automatically: as increasing complexity adds new vulnerabilities, straightforward but automated analysis becomes relevant. Two frameworks have been developed. The first automatically highlights linear correlations in RC4. The second, called Autodafé, automatically detects buffer overflows in modern software using a technique called Fuzzing by Weighting Attacks with Markers.
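One example of the kind of linear correlation such a framework hunts for is the well-known Mantin-Shamir bias: the second keystream byte of RC4 equals zero with probability about 2/256 rather than 1/256. The sketch below (plain Python written for illustration, not code from the thesis framework) measures that bias empirically:

```python
import random

def rc4_keystream(key, n):
    """Standard RC4: key-scheduling algorithm (KSA) followed by n bytes of PRGA output."""
    S = list(range(256))
    j = 0
    for i in range(256):                       # KSA
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = [], 0, 0
    for _ in range(n):                         # PRGA
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return out

random.seed(1)
trials, hits = 30000, 0
for _ in range(trials):
    key = [random.randrange(256) for _ in range(16)]   # random 128-bit key
    if rc4_keystream(key, 2)[1] == 0:                  # second output byte
        hits += 1

freq = hits / trials
print(f"P(Z2 = 0) ~ {freq:.4f}  (a uniform keystream would give {1/256:.4f})")
```

Over random keys the measured frequency comes out near 2/256, roughly double what an ideal stream cipher would produce; an automated search simply scans many such (input byte, output byte) pairs for deviations of this kind.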


Related publications (54)


This thesis presents work on the efficiency and security of cryptographic software. First, it describes several efforts to construct very efficient implementations of cryptographic primitives, including the Advanced Encryption Standard (AES) as well as several popular hash functions, spanning from the old MD5 to several candidates for the upcoming SHA-3 standard. The computer architectures considered range from low-end 8-bit microcontrollers to modern high-end graphics cards, and several key techniques for optimizing such implementations are presented. Results include compact implementations of SHA-1 and SHA-256 with reduced memory footprint and more than triple and double speed, respectively, compared with other fast implementations on the same architecture. Next, it presents novel cryptanalytic attacks that can be performed on processor architectures with cache memories, like those used in modern personal computers. These belong to the category of side-channel attacks, which exploit unintended information leakage from an implementation to retrieve secret keys without having to find weaknesses in the cipher itself. The attacks even allow an unprivileged process to attack other processes running in parallel on the same processor, despite partitioning methods such as memory protection, sandboxing, and virtualization. Practical results include full AES key recovery in 65 ms using only write access to an encrypted partition. Finally, the thesis investigates a technique for reducing the number of Boolean operations required to express the AES round function, resulting in a significant improvement over previous efforts. This can be useful both for very compact AES implementations in hardware and for bitslice implementations of AES, in which software simulates many copies of the same compact hardware. Such software eliminates the table lookups exploited in cache attacks, and hence this kind of implementation can be completely immune to them.
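The cache attacks summarized above rest on one observation: which cache line a table lookup touches depends on secret data. The toy simulation below illustrates only the principle (16-byte cache lines over a 256-entry byte table are an assumption, and this is not the thesis attack, which recovers full AES keys): observing just the line index of a lookup T[p ^ k] pins down the high nibble of the key byte.

```python
import random

LINE = 16  # assumed cache-line size in table entries: 256-entry table -> 16 lines

def observed_line(p, k):
    """What a cache-timing adversary sees: the cache line touched by T[p ^ k]."""
    return (p ^ k) // LINE

random.seed(0)
k = random.randrange(256)                      # secret key byte

# Attacker model: for known plaintext bytes p, record which line was touched,
# then score each key guess by consistency with those observations.
obs = [(p, observed_line(p, k)) for p in random.sample(range(256), 64)]

scores = {g: sum(1 for p, line in obs if (p ^ g) // LINE == line)
          for g in range(256)}
best = max(scores, key=scores.get)

# Because XOR acts nibble-wise, all 16 guesses sharing the correct high nibble
# score perfectly; the line index alone cannot resolve the low nibble.
print(f"high nibble recovered: {best // LINE == k // LINE}")
```

The residual ambiguity (the low nibble stays hidden within a cache line) is typical of first-round cache attacks; real attacks resolve it using later rounds or more leakage.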

Modern cryptography has pushed forward the need for provable security. Whereas ancient cryptography relied only on heuristic assumptions and the secrecy of the designs, researchers nowadays try to make the security of schemes rely on mathematical problems that are believed hard to solve. In such proofs, the capabilities of potential adversaries are modeled formally. For instance, the black-box model assumes that an adversary learns nothing from the inner state of a construction. While this assumption makes sense in some practical scenarios, it was shown that one can sometimes learn information by other means, e.g., by timing how long the computation takes. In this thesis, we focus on two different areas of cryptography. In both parts, we first take a theoretical point of view to obtain a result; we then adapt our results so that they are easily usable by implementers and by researchers working in practical cryptography. In the first part of this thesis, we look at post-quantum cryptography, i.e., at cryptographic primitives that are believed secure even if (reasonably big) quantum computers are built. We introduce HELEN, a new public-key cryptosystem based on the hardness of the learning parity with noise (LPN) problem. To make our results more concrete, we suggest some practical instances that make the system easily implementable. As stated above, the design of cryptographic primitives usually relies on some well-studied hard problems. However, to suggest concrete parameters for these primitives, one needs to know the precise complexity of the algorithms solving the underlying hard problem. In this thesis, we focus on two recent hard problems that have become very popular in post-quantum cryptography: the learning with errors (LWE) problem and the learning with rounding (LWR) problem.
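To make the LPN problem mentioned above concrete: an instance consists of samples (a, <a, s> XOR e) over GF(2), where a is a random vector, s the secret, and e a bit flipped with probability tau. The sketch below generates such samples with toy parameters chosen purely for illustration, far smaller than anything HELEN would use:

```python
import random

def lpn_samples(s, m, tau, rng):
    """Generate m LPN samples (a, <a, s> XOR e) over GF(2) with noise rate tau."""
    n = len(s)
    samples = []
    for _ in range(m):
        a = [rng.randrange(2) for _ in range(n)]
        noiseless = sum(ai & si for ai, si in zip(a, s)) % 2
        e = 1 if rng.random() < tau else 0      # Bernoulli(tau) noise bit
        samples.append((a, noiseless ^ e))
    return samples

rng = random.Random(42)
n, tau = 32, 0.125                              # toy dimension and noise rate
s = [rng.randrange(2) for _ in range(n)]        # secret vector
samples = lpn_samples(s, 1000, tau, rng)

# Sanity check: labels agree with the noiseless inner product ~ (1 - tau) of the time.
agree = sum(1 for a, b in samples
            if b == sum(ai & si for ai, si in zip(a, s)) % 2) / len(samples)
print(f"agreement with noiseless inner product: {agree:.3f} (expect ~{1 - tau})")
```

Without the noise bit e, Gaussian elimination on n samples would recover s immediately; it is the noise that makes the problem (conjecturally) hard and useful for post-quantum constructions.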
We introduce a new algorithm that solves both problems and provide a careful complexity analysis, so that these problems can be used to construct practical cryptographic primitives. In the second part, we look at leakage-resilient cryptography, which studies adversaries able to get side-channel information from a cryptographic primitive. In the past, two main disjoint models were considered. The first, the threshold probing model, assumes that the adversary can place a limited number of probes in a circuit and then learns all the values going through these probes. This model was used mostly by theoreticians, as it allows very elegant and convenient proofs. The second, the noisy-leakage model, assumes that every component of the circuit leaks but that the observed signal is noisy; typically, some Gaussian noise is added to it. According to experiments, this model closely depicts the real behaviour of circuits, and hence it is cherished by the practical cryptographic community. In this thesis, we show that a proof in the first model implies a proof in the second model, which unifies the two models and reconciles both communities. We then look at this result from a more practical point of view and show how it can help in evaluating the security of a chip based solely on the more standard mutual information metric.
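The noisy-leakage model can be made concrete with a standard correlation-style attack: assume every S-box output leaks its Hamming weight plus Gaussian noise, and rank key guesses by how well their predicted weights correlate with the observed traces. The sketch below is illustrative only; a random toy S-box stands in for a real cipher's, and the Hamming-weight-plus-Gaussian leakage function is an assumption of the model, not a measured device.

```python
import math
import random

SBOX = list(range(256))
random.Random(0).shuffle(SBOX)                  # illustrative random S-box
HW = [bin(x).count("1") for x in range(256)]    # Hamming weight table

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    vy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (vx * vy)

rng = random.Random(7)
k = rng.randrange(256)                          # secret key byte

# Noisy-leakage model: each trace is HW(SBOX[p ^ k]) plus Gaussian noise.
plaintexts = [rng.randrange(256) for _ in range(500)]
traces = [HW[SBOX[p ^ k]] + rng.gauss(0, 1.0) for p in plaintexts]

# Attack: the correct guess maximizes |correlation| between prediction and traces.
best = max(range(256),
           key=lambda g: abs(pearson([HW[SBOX[p ^ g]] for p in plaintexts], traces)))
print(f"recovered key == secret key: {best == k}")
```

Raising the noise standard deviation forces the attacker to collect more traces, which is exactly the quantitative trade-off that noisy-leakage proofs and mutual-information evaluations try to capture.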

Block ciphers probably figure among the most important cryptographic primitives. Although they are used for many different purposes, their essential goal is to ensure confidentiality. This thesis is concerned with their quantitative security, that is, with measurable attributes that reflect their ability to guarantee this confidentiality. The first part of this thesis deals with well-known results. Starting with Shannon's theory of secrecy, we move to practical implications for block ciphers, recall the main schemes on which today's block ciphers are based, and introduce the Luby-Rackoff security model. We describe distinguishing attacks and key-recovery attacks against block ciphers and show how to turn the former into the latter. As an illustration, we recall linear cryptanalysis, a classical example of statistical cryptanalysis. In the second part, we consider the (in)security of block ciphers against statistical cryptanalytic attacks and develop tools to perform optimal attacks and quantify their efficiency. We start with a simple setting in which the adversary has to distinguish between two sources of randomness, and show how an optimal strategy can be derived in certain cases. We proceed to the practical situation where the cardinality of the sample space is too large for the optimal strategy to be implemented, and show how this naturally leads to the concept of projection-based distinguishers, which reduce the sample space by compressing the samples. Within this setting, we reconsider the particular case of linear distinguishers and generalize them to sets of arbitrary cardinality. We show how these distinguishers between random sources can be turned into distinguishers between random oracles (or block ciphers) and how, in this setting, linear cryptanalysis can be generalized to Abelian groups.
As a proof of concept, we show how to break the block cipher TOY100, introduce the block cipher DEAN, which encrypts blocks of decimal digits, and apply the theory to the SAFER block cipher family. In the last part of this thesis, we introduce two new constructions. We start by recalling some essential notions about provable security for block ciphers and about Serge Vaudenay's decorrelation theory, and we introduce new simple modules for which we prove essential properties that we later use in our designs. We then present the block cipher C and prove that it is immune against a wide range of cryptanalytic attacks. In particular, we compute the exact advantage of the best distinguisher limited to two plaintext/ciphertext samples between C and the perfect cipher, and use it to compute the exact value of the maximum expected linear probability (resp. differential probability) of C, which is known to be inversely proportional to the number of samples required by the best possible linear (resp. differential) attack. We then introduce KFC, a block cipher that builds upon the same foundations as C but for which we can prove results against higher-order adversaries. We conclude the discussions of both C and KFC with implementation considerations.
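The optimal distinguisher between two sources of randomness mentioned above is, by the Neyman-Pearson lemma, a log-likelihood-ratio test. The toy sketch below distinguishes a biased bit source from a uniform one and estimates the distinguisher's advantage empirically; the distributions and parameters are illustrative assumptions, not TOY100, DEAN, or any cipher from the thesis.

```python
import math
import random

def llr_decide(samples, p1_zero):
    """Neyman-Pearson optimal test: accept the 'biased source' hypothesis iff
    the log-likelihood ratio of the sample under the two hypotheses is positive."""
    llr = sum(math.log((p1_zero if x == 0 else 1 - p1_zero) / 0.5) for x in samples)
    return llr > 0

rng = random.Random(3)
eps, q, runs = 0.1, 400, 300              # bias, samples per run, experiments
correct = 0
for _ in range(runs):
    truly_biased = rng.random() < 0.5      # pick a source at random
    p0 = 0.5 + (eps if truly_biased else 0.0)
    samples = [0 if rng.random() < p0 else 1 for _ in range(q)]
    if llr_decide(samples, 0.5 + eps) == truly_biased:
        correct += 1

adv = 2 * correct / runs - 1               # advantage = 2 * Pr[correct] - 1
print(f"empirical advantage with q={q} samples: {adv:.2f}")
```

The advantage grows with the number of samples q roughly as a function of q * eps^2, which is why the data complexity of a linear attack is inversely related to the square of the bias of the linear approximation.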