The coastline paradox is the counterintuitive observation that the coastline of a landmass does not have a well-defined length. This results from the fractal curve–like properties of coastlines; i.e., the fact that a coastline typically has a fractal dimension. Although the "paradox of length" was previously noted by Hugo Steinhaus, the first systematic study of this phenomenon was by Lewis Fry Richardson, and it was expanded upon by Benoit Mandelbrot.
The measured length of the coastline depends on the method used to measure it and the degree of cartographic generalization. Since a landmass has features at all scales, from hundreds of kilometers in size to tiny fractions of a millimeter and below, there is no obvious size of the smallest feature that should be taken into consideration when measuring, and hence no single well-defined perimeter to the landmass. Various approximations exist when specific assumptions are made about minimum feature size.
The problem is fundamentally different from the measurement of other, simpler edges. It is possible, for example, to accurately measure the length of a straight, idealized metal bar by using a measurement device to determine that the length is less than a certain amount and greater than another amount—that is, to measure it within a certain degree of uncertainty. The more accurate the measurement device, the closer results will be to the true length of the edge. When measuring a coastline, however, the closer measurement does not result in an increase in accuracy—the measurement only increases in length; unlike with the metal bar, there is no way to obtain a maximum value for the length of the coastline.
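This runaway growth can be illustrated numerically with the Koch curve, a standard idealized "coastline": each refinement step resolves smaller features and multiplies the measured length by 4/3, so the length diverges as the measurement scale shrinks. A minimal sketch (the Koch curve stands in for a real coastline; function names here are illustrative):

```python
import math

SIN60 = math.sin(math.radians(60))

def koch_curve(depth):
    """Vertices of the Koch curve at the given recursion depth,
    starting from the unit segment [(0,0), (1,0)]."""
    pts = [(0.0, 0.0), (1.0, 0.0)]
    for _ in range(depth):
        new_pts = [pts[0]]
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            dx, dy = (x1 - x0) / 3, (y1 - y0) / 3
            a = (x0 + dx, y0 + dy)          # one-third point
            c = (x0 + 2 * dx, y0 + 2 * dy)  # two-thirds point
            # apex: the middle third rotated outward by 60 degrees
            b = (a[0] + dx / 2 - dy * SIN60, a[1] + dy / 2 + dx * SIN60)
            new_pts += [a, b, c, (x1, y1)]
        pts = new_pts
    return pts

def polyline_length(pts):
    """Total length of the polyline through pts -- what a 'ruler' as
    short as the smallest resolved segment would measure."""
    return sum(math.hypot(x1 - x0, y1 - y0)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:]))

# Each refinement multiplies the measured length by 4/3,
# so the length grows without bound as resolution increases.
for depth in range(6):
    print(depth, polyline_length(koch_curve(depth)))
```

Unlike the metal bar, the successive measurements never converge: the printed lengths form the geometric sequence (4/3)^depth.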
In three-dimensional space, the coastline paradox is readily extended to the concept of fractal surfaces, whereby the area of a surface varies depending on the measurement resolution.
The basic concept of length originates from Euclidean distance. In Euclidean geometry, a straight line represents the shortest distance between two points, and that line has a single, well-defined length.
In mathematics, a fractal dimension is a ratio providing a statistical index of complexity, comparing how detail in a pattern changes with the scale at which it is measured. It is also a measure of the space-filling capacity of a pattern and of how a fractal scales differently from the space it is embedded in; this ratio need not be an integer. The main idea of "fractured" dimensions has a long history in mathematics, but the term itself was brought to the fore by Benoit Mandelbrot based on his 1967 paper on self-similarity, in which he discussed fractional dimensions.
In mathematics, Hausdorff dimension is a measure of roughness, or more specifically, fractal dimension, that was introduced in 1918 by mathematician Felix Hausdorff. For instance, the Hausdorff dimension of a single point is zero, of a line segment is 1, of a square is 2, and of a cube is 3. That is, for sets of points that define a smooth shape or a shape that has a small number of corners—the shapes of traditional geometry and science—the Hausdorff dimension is an integer agreeing with the usual sense of dimension, also known as the topological dimension.
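The Hausdorff dimension is difficult to compute directly, but its practical cousin, the box-counting dimension, is easy to estimate: count how many grid boxes of side 3^-k touch the set, and watch how that count scales as the boxes shrink. For well-behaved self-similar sets the two dimensions agree. A sketch for the middle-thirds Cantor set, using exact integer box addresses to avoid floating-point grid artifacts (an illustrative example, not part of the original text):

```python
import math

def cantor_cells(depth):
    """Integer addresses (in units of 3**-depth) of the intervals kept
    at the given stage of the middle-thirds Cantor set construction."""
    cells = [0]
    for _ in range(depth):
        # keep the first and last third of every interval
        cells = [c for cell in cells for c in (3 * cell, 3 * cell + 2)]
    return cells

def box_count(cells, depth, k):
    """Number of grid boxes of side 3**-k touching the stage-`depth` set."""
    return len({cell // 3 ** (depth - k) for cell in cells})

depth = 10
for k in (2, 4, 6):
    n = box_count(cantor_cells(depth), depth, k)
    # slope of log N vs log(1/eps) estimates the box-counting dimension
    print(k, n, math.log(n) / (k * math.log(3)))
```

The estimates settle on log 2 / log 3 ≈ 0.6309, a non-integer value strictly between the dimension of a point (0) and of a line segment (1), in contrast to the integer dimensions of the smooth shapes above.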
In mathematics, a fractal is a geometric shape containing detailed structure at arbitrarily small scales, usually having a fractal dimension strictly exceeding the topological dimension. Many fractals appear similar at various scales, as illustrated in successive magnifications of the Mandelbrot set. This exhibition of similar patterns at increasingly smaller scales is called self-similarity, also known as expanding symmetry or unfolding symmetry; if this replication is exactly the same at every scale, as in the Menger sponge, the shape is called affine self-similar.
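When the self-similarity is exact, the fractal dimension follows from a one-line formula: a set made of N copies of itself, each scaled down by a factor s, has similarity dimension log N / log s. A small sketch applying this to two shapes mentioned above (illustrative, not from the original text):

```python
import math

def similarity_dimension(copies, scale):
    """Dimension of an exactly self-similar set built from `copies`
    smaller copies of itself, each scaled down by `scale`."""
    return math.log(copies) / math.log(scale)

# Koch curve: 4 copies at 1/3 scale -> log 4 / log 3 ~ 1.26
print(similarity_dimension(4, 3))
# Menger sponge: 20 copies at 1/3 scale -> log 20 / log 3 ~ 2.73
print(similarity_dimension(20, 3))
```

Both values strictly exceed the shapes' topological dimensions (1 for the curve, 2 for the sponge's surface-like structure), matching the defining property stated above.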