
# Rounding

Summary

Rounding means replacing a number with an approximate value that has a shorter, simpler, or more explicit representation. For example, replacing $23.4476 with $23.45, the fraction 312/937 with 1/3, or the expression √2 with 1.414.
Rounding is often done to obtain a value that is easier to report and communicate than the original. Rounding can also be important to avoid misleadingly precise reporting of a computed number, measurement, or estimate; for example, a quantity that was computed as 123,456 but is known to be accurate only to within a few hundred units is usually better stated as "about 123,500".
On the other hand, rounding of exact numbers will introduce some round-off error in the reported result. Rounding is almost unavoidable when reporting many computations – especially when dividing two numbers in integer or fixed-point arithmetic; when computing mathematical functions such as square roots, logarithms, and sines; or when using a floating-point representation with a fixed number of significant digits.
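
As a concrete sketch of both points, the following Python snippet shows rounding for reporting and the round-off error that rounding intermediate results introduces. Note that Python's built-in round() breaks ties to even ("banker's rounding"); the decimal module makes other tie-breaking rules explicit.

```python
from decimal import Decimal, ROUND_HALF_UP

# Rounding for reporting: replace 23.4476 by a two-decimal approximation.
price = 23.4476
print(round(price, 2))   # 23.45

# Python's round() breaks ties to the nearest even digit ("banker's rounding"),
# so halfway cases do not always round up.
print(round(2.5))        # 2
print(round(3.5))        # 4

# The decimal module lets the tie-breaking rule be chosen explicitly.
print(Decimal("2.5").quantize(Decimal("1"), rounding=ROUND_HALF_UP))  # 3

# Round-off error: rounding every intermediate value lets the error accumulate.
exact = sum(1 / 3 for _ in range(300))             # full double precision
coarse = sum(round(1 / 3, 2) for _ in range(300))  # each term rounded to 0.33
print(exact - coarse)    # about 1.0 after only 300 additions
```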


Related concepts (38)

Floating-point arithmetic

In computing, floating-point arithmetic (FP) is arithmetic that represents subsets of real numbers using an integer with a fixed precision, called the significand, scaled by an integer exponent of a fixed base.
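
As a small illustration of this significand-and-exponent representation, the following sketch inspects a Python float, which is an IEEE 754 double (base 2):

```python
import math

x = 0.1

# Every finite double is mantissa * 2**exponent; frexp recovers the pair
# with the mantissa normalized to [0.5, 1).
mantissa, exponent = math.frexp(x)
print(mantissa, exponent)   # 0.8 -3, i.e. 0.1 is stored as 0.8 * 2**-3

# Equivalently, the stored value is an integer significand scaled by a
# power of two -- which is why 0.1 is not represented exactly.
n, d = x.as_integer_ratio()
print(n, d)                 # 3602879701896397 / 2**55
print(x.hex())              # 0x1.999999999999ap-4
```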

Double-precision floating-point format

Double-precision floating-point format (sometimes called FP64 or float64) is a floating-point number format, usually occupying 64 bits in computer memory; it represents a wide dynamic range of numeric values by using a floating radix point.

Round-off error

In computing, a roundoff error, also called rounding error, is the difference between the result produced by a given algorithm using exact arithmetic and the result produced by the same algorithm using finite-precision, rounded arithmetic.
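
A minimal sketch of that definition, contrasting exact rational arithmetic with the same computation in 64-bit floats:

```python
import math
from fractions import Fraction

# The same sum in exact rational arithmetic and in 64-bit floats.
exact = Fraction(1, 10) + Fraction(2, 10)   # exactly 3/10
fp = 0.1 + 0.2

print(fp == 0.3)            # False
print(fp - float(exact))    # ~5.55e-17: the round-off error of this algorithm

# The error compounds when many rounded results feed later operations.
print(sum([0.1] * 10) == 1.0)          # False
print(math.fsum([0.1] * 10) == 1.0)    # True: compensated summation controls it
```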

Related publications (27)

In today's digital world, sampling is at the heart of any signal acquisition device. Imaging devices are ubiquitous examples that capture two-dimensional visual signals and store them as the pixels of discrete images. The main concern is whether and how the pixels provide an exact or at least a fair representation of the original visual signal in the continuous domain. This motivates the design of exact reconstruction or approximation techniques for a target class of images, which benefit imaging tasks such as super-resolution, deblurring, and compression.

This thesis focuses on the reconstruction of visual signals representing a shape over a background from their samples. Shape images have only two intensity values. However, the filtering effect caused by the sampling kernel of imaging devices smooths out the sharp transitions in the image and results in samples with varied intensity levels. To trace back the shape boundaries, we need strategies to reconstruct the original bilevel image. But abrupt intensity changes along the shape boundaries, as well as diverse shape geometries, make reconstruction of this class of signals very challenging. Curvelets and contourlets have proved to be efficient multiresolution representations for the class of shape images, which motivates approximating shape images in those domains.

In the first part of this thesis, we study generalized sampling and infinite-dimensional compressed sensing to approximate a signal in a domain that is known to provide a sparse or efficient representation for the signal, given its samples in a different domain. We show that generalized sampling, due to its linearity, is incapable of generating good approximations of shape images from a limited number of samples. Infinite-dimensional compressed sensing is a more promising approach; however, the concept of random sampling in this scheme does not apply to the shape reconstruction problem.

Next, we propose a sampling scheme for shape images with finite rate of innovation (FRI). More specifically, we model the shape boundaries as a subset of an algebraic curve with an implicit bivariate polynomial. We show that the image parameters are solutions of a set of linear equations whose coefficients are the image moments. We then replace conventional moments with more stable generalized moments that are adjusted to the given sampling kernel. This leads to successful reconstruction of shapes of moderate complexity from samples generated with realistic sampling kernels and in the presence of moderate noise levels.

Our next contribution is a scheme for recovering shapes with smooth boundaries from a set of samples. The reconstructed image is constrained to regenerate the same samples (consistency) as well as to form a bilevel image. We initially formulate the problem as minimizing the shape perimeter over the set of consistent shapes. We then relax the non-convex shape constraint, transforming the problem into minimizing the total variation over consistent non-negative-valued images. We introduce a requirement, called reducibility, that guarantees equivalence between the two problems, and we illustrate that reducibility effectively sets a requirement on the minimum sampling density.

Finally, we study a relevant problem in Boolean algebra: Boolean compressed sensing, which concerns recovering a sparse Boolean vector from a few collective binary tests.
We study a formulation of this problem as a binary linear program, which is NP-hard. To overcome the computational burden, we can relax the binary constraint on the variables and round the resulting solution. We replace the rounding procedure with a randomized algorithm and show that it considerably improves the success rate with only a slight increase in computational cost.
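
The abstract does not spell out the algorithm, so the following is only a generic sketch of the relax-and-round idea it builds on: solve the LP relaxation of the binary program, then round each fractional coordinate at random. The function name, the retry loop, and the deterministic fallback are illustrative choices, not the authors' method:

```python
import numpy as np
from scipy.optimize import linprog

def relax_and_round(A, y, trials=50, seed=None):
    """Sketch of LP relaxation + randomized rounding for Boolean CS.

    A is an (m, n) 0/1 test matrix and y the 0/1 outcomes, with
    y[i] the OR of x[j] over items j included in test i. We minimize
    sum(x) over [0, 1]^n subject to the test outcomes, then round."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Items that appear in any negative test cannot be selected.
    forced_zero = A[y == 0].sum(axis=0) > 0
    # Each positive test must contain at least one selected item:
    # sum_{j in test i} x_j >= 1, written as -A_i . x <= -1 for linprog.
    pos = A[y == 1]
    res = linprog(
        c=np.ones(n),
        A_ub=-pos if pos.size else None,
        b_ub=-np.ones(pos.shape[0]) if pos.size else None,
        bounds=[(0.0, 0.0) if forced_zero[j] else (0.0, 1.0) for j in range(n)],
    )
    frac = res.x
    # Randomized rounding: take x_j = 1 with probability frac[j]; keep the
    # first draw that reproduces all test outcomes.
    for _ in range(trials):
        x = (rng.random(n) < frac).astype(int)
        if np.array_equal((A @ x > 0).astype(int), y):
            return x
    return (frac > 0.5).astype(int)  # deterministic fallback
```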

Key schedules in lightweight block ciphers are often highly simplified, which introduces weaknesses that can be exploited in many attacks. How to use a limited number of operations to guarantee sufficient diffusion of key bits in a lightweight key schedule remains an open problem, and there are few tools dedicated to detecting weaknesses in key schedules. In 2013, Huang et al. pointed out that insufficient actual key information (AKI) in computation chains is responsible for many attacks, especially meet-in-the-middle (MITM) attacks. Motivated by this fact, in this paper we develop an efficient (polynomial-time) and effective tool to search for computation chains with insufficient AKI in iterated key schedules of lightweight ciphers. The effectiveness of this tool is shown by an application to TWINE-80. We then formalize the cause of the key-bit leakage phenomenon, in which knowledge of some subkey bits is leaked or duplicated by other subkey bits in the same computation chain. Based on the interaction between the diffusion performed by the key schedule and that performed by the round function, we give a necessary condition for avoiding key-bit leakage. Our work thus sheds light on the design of lightweight key schedules by showing how to quickly rule out unreasonable key schedules and maximize security under limited diffusion.
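
The paper's search tool is not described in enough detail here to reproduce; the toy sketch below (an invented 8-bit key schedule, not TWINE-80) only illustrates the notion of actual key information, i.e. counting the distinct master-key bits that a computation chain really depends on:

```python
# Toy model: each round's subkey is a selection of master-key bit positions.
# Real lightweight key schedules use rotations and S-boxes, but plain
# selection is enough to show how subkey bits can overlap.
MASTER_BITS = 8
subkey_taps = [
    {0, 1, 2, 3},   # round 1 subkey
    {2, 3, 4, 5},   # round 2 subkey (overlaps round 1)
    {0, 1, 4, 5},   # round 3 subkey (contributes no new bits at all)
]

# Actual key information (AKI) of a chain = distinct master-key bits involved.
nominal = sum(len(taps) for taps in subkey_taps)   # 12 subkey bits in total
aki = len(set().union(*subkey_taps))               # only 6 distinct bits

print(f"nominal subkey material: {nominal} bits")
print(f"actual key information:  {aki} of {MASTER_BITS} master-key bits")
# A meet-in-the-middle guess over this 3-round chain costs 2**6 trials
# rather than 2**8: bits 6 and 7 never enter, and rounds re-leak old bits.
```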

Onur Özen, Cihangir Tezcan, Kerem Varici

The design and analysis of lightweight block ciphers has become increasingly popular, since block ciphers are expected to see extensive future use in ubiquitous devices. Several lightweight block ciphers have been designed in this vein, among them PRESENT and HIGHT, recently proposed by Bogdanov et al. and Hong et al., respectively. In this paper, we propose new attacks on PRESENT and HIGHT. First, we present the first related-key cryptanalysis of 128-bit-keyed PRESENT by introducing a 17-round related-key rectangle attack with time complexity of approximately 2^104 memory accesses. Moreover, we further analyze the resistance of HIGHT against impossible differential attacks by mounting a new 26-round impossible differential attack and a 31-round related-key impossible differential attack; the former requires time complexity of 2^119.53 reduced-round HIGHT evaluations, while the latter is only slightly faster than exhaustive search.

Related courses (23)

A first graduate course in algorithms, this course assumes minimal background, but moves rapidly. The objective is to learn the main techniques of algorithm analysis and design, while building a repertory of basic algorithmic solutions to problems in many domains.

The goal of the course is to introduce relativistic quantum field theory as the conceptual and mathematical framework describing fundamental interactions.

This course introduces the theory and application of modern convex optimization from an engineering perspective.