
# Secrecy

Summary

Secrecy is the practice of hiding information from certain individuals or groups who do not have the "need to know", perhaps while sharing it with other individuals. That which is kept hidden is known as the secret.
Secrecy is often controversial, depending on the content or nature of the secret, the group or people keeping the secret, and the motivation for secrecy.
Secrecy by government entities is often decried as excessive or in promotion of poor operation; excessive revelation of information on individuals can conflict with virtues of privacy and confidentiality. It is often contrasted with social transparency.
Secrecy can exist in a number of different ways: encoding or encryption (where mathematical and technical strategies are used to hide messages), true secrecy (where restrictions are put upon those privy to the message, such as through government security classification) and obfuscation, where secrets are hidden in plain sight behind complex idiosyncratic language.
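The encryption route can be made concrete with the simplest information-theoretically secure scheme, the one-time pad, in which the message is XORed with a random key of the same length. A minimal Python sketch (function and variable names are illustrative):

```python
import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """XOR each plaintext byte with the corresponding key byte."""
    assert len(key) == len(plaintext), "one-time pad key must match message length"
    return bytes(p ^ k for p, k in zip(plaintext, key))

message = b"meet at dawn"
key = secrets.token_bytes(len(message))   # truly random, kept secret, never reused
ciphertext = otp_encrypt(message, key)
recovered = otp_encrypt(ciphertext, key)  # XOR is self-inverse
```

Because XOR is self-inverse, applying the same key a second time recovers the plaintext; the scheme is only secure if the key is truly random, as long as the message, kept secret, and never reused.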

Official source

This page is automatically generated and may contain information that is not correct, complete, up-to-date, or relevant to your search query. The same applies to every other page on this website. Please make sure to verify the information with EPFL's official sources.


Related concepts (10)

Cryptography

Cryptography, or cryptology (from κρυπτός kryptós, "hidden, secret"; and γράφειν graphein, "to write", or -λογία -logia, "study", respectively), is the practice and study of techniques for secure communication.

WikiLeaks

WikiLeaks (ˈwɪkiliːks) is a publisher and media organisation founded in 2006. It operates as a non-profit and is funded by donations and media partnerships. It has published classified documents and other media provided by anonymous sources.

Steganography

Steganography (ˌstɛɡəˈnɒɡrəfi) is the practice of representing information within another message or physical object, in such a manner that the presence of the information is not evident to human inspection.
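A common textbook illustration of steganography — not necessarily the techniques surveyed in the literature above — is least-significant-bit (LSB) embedding, where payload bits replace the lowest bit of each byte of a cover object, altering it imperceptibly. A minimal sketch (helper names are mine):

```python
def embed_lsb(cover: bytearray, payload: bytes) -> bytearray:
    """Hide payload bits in the least significant bit of each cover byte."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    assert len(bits) <= len(cover), "cover object too small for the payload"
    stego = bytearray(cover)
    for pos, bit in enumerate(bits):
        stego[pos] = (stego[pos] & 0xFE) | bit  # overwrite only the lowest bit
    return stego

def extract_lsb(stego: bytes, n_bytes: int) -> bytes:
    """Read back n_bytes hidden by embed_lsb."""
    out = bytearray()
    for j in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (stego[j * 8 + i] & 1) << i
        out.append(byte)
    return bytes(out)

cover = bytearray(range(256))      # stand-in for e.g. image pixel bytes
stego = embed_lsb(cover, b"hi")    # 16 payload bits modify 16 cover bytes
hidden = extract_lsb(stego, 2)
```

Each modified byte changes by at most 1, which is why LSB embedding in images is visually undetectable, though it is easily defeated by statistical steganalysis.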


Related courses (12)

This course introduces the basics of cryptography. We review several types of cryptographic primitives, when it is safe to use them and how to select the appropriate security parameters. We detail how they work and sketch how they can be implemented.

A decentralized system is one that works when no single party is in charge or fully trusted. This course teaches decentralized systems principles while guiding students through the development and testing of their own decentralized system incorporating messaging, encryption, and blockchain concepts.

This seminar introduces the participants to the current trends, problems, and methods in the area of communication security.

Related publications (44)

Shannon, in his landmark 1948 paper, developed a framework for characterizing the fundamental limits of information transmission. Among other results, he showed that reliable communication over a channel is possible at any rate below its capacity. In 2008, Arikan discovered polar codes, the only known class of explicitly constructed low-complexity codes that achieve the capacity of any binary-input memoryless symmetric-output channel. Arikan's polar transform turns independent copies of a noisy channel into a collection of synthetic almost-noiseless and almost-useless channels. Polar codes are realized by sending data bits over the almost-noiseless channels and recovering them at the receiver with a low-complexity successive-cancellation (SC) decoder.

In the first part of this thesis, we study polar codes for communications. When the underlying channel is an erasure channel, we show that almost all correlation coefficients between the erasure events of the synthetic channels decay rapidly. Hence, the sum of the erasure probabilities of the information-carrying channels is a tight estimate of the block-error probability of polar codes used for communication over the erasure channel. We study SC list (SCL) decoding, a method for boosting the performance of short polar codes, and prove that it has a numerically stable formulation in log-likelihood ratios. In hardware, this formulation increases the decoding throughput by 53% and reduces the decoder's size by about 33%. We present empirical results on the trade-off between the length of the CRC and the performance gains in a CRC-aided version of the list decoder, and numerical comparisons of the performance of long polar codes under SC decoding with that of short polar codes under SCL decoding.

Shannon's framework also quantifies the secrecy of communications. Wyner, in 1975, proposed a model for communications in the presence of an eavesdropper. It was shown that, at rates below the secrecy capacity, there exist reliable communication schemes in which the amount of information leaked to the eavesdropper decays exponentially in the block length of the code. In the second part of this thesis, we study the rate of this decay. We derive the exact exponential decay rate of the ensemble average of the information leaked to the eavesdropper in Wyner's model when a randomly constructed code is used for secure communications. For codes sampled from the ensemble of i.i.d. random codes, we show that the previously known lower bound on the exponent is exact. Our ensemble-optimal exponent for random constant-composition codes improves on the lower bound extant in the literature. Finally, we show that random linear codes have the same secrecy power as i.i.d. random codes.

The key to securing messages against an eavesdropper is to exploit the randomness of her communication channel so that the statistics of her observation resemble those of a pure noise process for any sent message. We study the effect of feedback on this approximation and show that it does not reduce the minimum entropy rate required to approximate a given process. However, we give examples where variable-length schemes achieve much larger exponents in this approximation in the presence of feedback than the exponents in systems without feedback. Upper-bounding the best exponent that block codes attain, we conclude that variable-length coding is necessary for achieving the improved exponents.
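For the binary erasure channel, the synthetic channels' erasure probabilities follow Arikan's one-step recursion z ↦ (2z − z², z²), so the block-error estimate discussed in the abstract can be sketched in a few lines of Python (function names and parameter choices are illustrative, not the thesis's code):

```python
def polar_bec_erasure_probs(eps: float, n: int) -> list:
    """Erasure probabilities of the 2**n synthetic channels produced by
    applying Arikan's polar transform n times to a BEC(eps)."""
    probs = [eps]
    for _ in range(n):
        # each channel with parameter z splits into a worse (2z - z^2)
        # and a better (z^2) synthetic channel
        probs = [p for z in probs for p in (2 * z - z * z, z * z)]
    return probs

def sc_block_error_estimate(eps: float, n: int, k: int) -> float:
    """Union-bound estimate of SC block-error probability when the k most
    reliable synthetic channels carry the information bits."""
    probs = sorted(polar_bec_erasure_probs(eps, n))
    return sum(probs[:k])

# 1024 synthetic channels from a BEC(0.5); a rate-1/4 code
estimate = sc_block_error_estimate(0.5, 10, 256)
```

The transform preserves the average erasure probability while pushing individual channels toward 0 or 1 — the polarization phenomenon the SC decoder exploits.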

In a multistage secret sharing (MSSS) scheme, the authorised subsets of participants can recover a number of secrets in different stages. A one-stage multisecret sharing (OSMSS) scheme is a special case of MSSS schemes in which all the secrets are recovered simultaneously. In these schemes, in addition to the individual shares, the dealer must provide the participants with a number of public values associated with the secrets; the fewer the public values, the more efficient the scheme. MSSS and OSMSS schemes are expected to provide computational security. In this study, the authors show that in OSMSS schemes any unauthorised coalition of participants can reduce the uncertainty of the secrets, and that in MSSS schemes recovering a secret reduces the uncertainty of the as-yet-unrecovered secrets. Furthermore, by introducing a new multi-use MSSS scheme based on the Weil pairing, they reduce the number of public values compared with previous schemes.
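Schemes of this kind build on classical threshold secret sharing. As background — this is the standard Shamir (t, n) scheme over a prime field, not the pairing-based construction of the paper — a minimal sketch:

```python
import random

P = 2**61 - 1  # a Mersenne prime; all arithmetic is over GF(P)

def make_shares(secret: int, t: int, n: int) -> list:
    """Split secret into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        # evaluate the random degree-(t-1) polynomial with f(0) = secret
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares: list) -> int:
    """Lagrange interpolation at x = 0 over GF(P)."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = make_shares(12345, 3, 5)
recovered = reconstruct(shares[:3])  # any 3 of the 5 shares suffice
```

Fewer than t shares reveal nothing about the secret in the information-theoretic sense; the multisecret schemes above trade this perfect secrecy for efficiency, which is exactly where the uncertainty-reduction attacks apply.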

We consider the problem of measuring how much a system reveals about its secret inputs. We work in the black-box setting: we assume no prior knowledge of the system's internals, and we run the system for choices of secrets and measure its leakage from the respective outputs. Our goal is to estimate the Bayes risk, from which one can derive some of the most popular leakage measures (e.g., min-entropy leakage). The state-of-the-art method for estimating these leakage measures is the frequentist paradigm, which approximates the system's internals by looking at the frequencies of its inputs and outputs. Unfortunately, this does not scale for systems with large output spaces, where it would require too many input-output examples. Consequently, it also cannot be applied to systems with continuous outputs (e.g., time side channels, network traffic). In this paper, we exploit an analogy between Machine Learning (ML) and black-box leakage estimation to show that the Bayes risk of a system can be estimated by using a class of ML methods: the universally consistent learning rules; these rules can exploit patterns in the input-output examples to improve the estimates' convergence, while retaining formal optimality guarantees. We focus on a set of them, the nearest neighbor rules; we show that they significantly reduce the number of black-box queries required for a precise estimation whenever nearby outputs tend to be produced by the same secret; furthermore, some of them can tackle systems with continuous outputs. We illustrate the applicability of these techniques on both synthetic and real-world data, and we compare them with the state-of-the-art tool, leakiEst, which is based on the frequentist approach.
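The nearest-neighbor idea can be illustrated on a toy system — a secret bit observed through additive Gaussian noise — where the hold-out error of a k-NN classifier serves as the black-box estimate of the Bayes risk. The system, names, and parameters below are illustrative, not the paper's benchmarks:

```python
import random

def sample_system(n: int) -> list:
    """Toy black-box system: a secret bit observed through Gaussian noise
    (a stand-in for e.g. a timing side channel)."""
    data = []
    for _ in range(n):
        s = random.randint(0, 1)
        data.append((s, s + random.gauss(0.0, 1.0)))
    return data

def knn_bayes_risk_estimate(train: list, test: list, k: int = 5) -> float:
    """Hold-out error of a k-NN classifier on the outputs; with k growing
    suitably with n, such rules are universally consistent and the error
    converges to the Bayes risk."""
    errors = 0
    for s, y in test:
        nearest = sorted(train, key=lambda sy: abs(sy[1] - y))[:k]
        vote = round(sum(sy[0] for sy in nearest) / k)  # majority vote
        errors += (vote != s)
    return errors / len(test)

random.seed(0)
data = sample_system(2000)
risk = knn_bayes_risk_estimate(data[:1500], data[1500:])
```

For this toy system the true Bayes risk is known in closed form (about 0.31 for unit-variance noise and unit mean separation), so the quality of the black-box estimate can be checked directly — the kind of ground truth that is unavailable for real side channels.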