In science and engineering, a log–log graph or log–log plot is a two-dimensional graph of numerical data that uses logarithmic scales on both the horizontal and vertical axes. Power functions – relationships of the form y = ax^k – appear as straight lines in a log–log graph, with the exponent k corresponding to the slope and the coefficient a corresponding to the intercept. These graphs are therefore very useful for recognizing such relationships and estimating their parameters. Any base can be used for the logarithm, though base 10 (common logarithms) is most commonly used.
Given a monomial equation y = ax^k, taking the logarithm of the equation (with any base) yields:
log y = k log x + log a.
Setting X = log x and Y = log y, which corresponds to using a log–log graph, yields the equation:
Y = mX + b,
where m = k is the slope of the line (gradient) and b = log a is the intercept on the (log y)-axis, meaning the point where log x = 0; reversing the logs, a is the value of y corresponding to x = 1.
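As a quick numerical check of this transformation (a sketch, with the coefficient and exponent values chosen only for illustration), points on a power function land exactly on the line Y = kX + log a in log–log coordinates:

```python
import math

# Hypothetical power function y = a * x**k; a and k are made-up values.
a, k = 2.5, 3.0

def to_log_space(x):
    """Map a point (x, a*x**k) on the curve to log-log coordinates (X, Y)."""
    return math.log10(x), math.log10(a * x ** k)

# The intercept b = log a is the Y value where X = 0, i.e. where x = 1.
intercept = math.log10(a)

# Any transformed point lies on the line Y = k*X + b, so the residual is ~0.
X1, Y1 = to_log_space(10.0)
residual = Y1 - (k * X1 + intercept)
```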
The equation for a line on a log–log scale is:
log10 F(x) = m log10 x + b,
where m is the slope and b is the intercept of the line on the log–log plot.
To find the slope of the plot, two points are selected on the x-axis, say x1 and x2. Using the above equation:
log[F(x1)] = m log(x1) + b
and
log[F(x2)] = m log(x2) + b.
The slope m is found by taking the difference:
m = (log F1 − log F2) / (log x1 − log x2) = log(F1/F2) / log(x1/x2),
where F1 is shorthand for F(x1) and F2 is shorthand for F(x2). The figure at right illustrates the formula. Notice that the slope in the example of the figure is negative. The formula also provides a negative slope, as can be seen from the following property of the logarithm:
log(x1/x2) = −log(x2/x1).
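The slope formula can be sketched in a few lines of code. The function and sample points below are made up for illustration; a decaying power law gives a negative slope, as noted above:

```python
import math

def loglog_slope(x1, F1, x2, F2):
    """Slope m = log(F1/F2) / log(x1/x2) of the straight line through
    (x1, F1) and (x2, F2) on log-log axes."""
    return math.log10(F1 / F2) / math.log10(x1 / x2)

# Two points on the (hypothetical) decaying power law F(x) = 100 * x**-2:
# F(1) = 100 and F(10) = 1. The recovered slope is the exponent, -2.
m = loglog_slope(1.0, 100.0, 10.0, 1.0)
```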
The above procedure is now reversed to find the form of the function F(x) using its (assumed) known log–log plot. To find the function F, pick some fixed point (x0, F0), where F0 is shorthand for F(x0), somewhere on the straight line in the above graph, and further some other arbitrary point (x1, F1) on the same graph. Then from the slope formula above:
m = (log F1 − log F0) / (log x1 − log x0),
which leads to
log F1 = m log(x1/x0) + log F0.
Notice that 10^(log10 F1) = F1. Therefore, the logs can be inverted to find:
F1 = 10^(m log10(x1/x0) + log10 F0) = F0 (x1/x0)^m,
or, since the point (x1, F1) is arbitrary,
F(x) = F0 (x/x0)^m,
which means that
F(x) ∝ x^m.
In other words, F is proportional to x to the power of the slope of the straight line of its log–log graph.
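The whole reconstruction can be condensed into a short sketch: estimate the slope from two points read off the log–log plot, then use the fixed point (x0, F0) to rebuild F(x) = F0 (x/x0)^m. The two sample points below are hypothetical, drawn from a made-up power law F(x) = 4 · x^1.5:

```python
import math

def fit_power_law(x0, F0, x1, F1):
    """Return (m, F) where F(x) = F0 * (x/x0)**m passes through
    both (x0, F0) and (x1, F1), following the derivation above."""
    m = math.log10(F1 / F0) / math.log10(x1 / x0)
    return m, lambda x: F0 * (x / x0) ** m

# Two points sampled from the (hypothetical) power law F(x) = 4 * x**1.5:
# F(1) = 4 and F(4) = 32. The fit recovers the exponent m = 1.5.
m, F = fit_power_law(1.0, 4.0, 4.0, 32.0)
```

The recovered function can then be evaluated anywhere, e.g. F(9.0) returns 4 · 9^1.5 = 108.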
In statistics, a power law is a functional relationship between two quantities, where a relative change in one quantity results in a relative change in the other quantity proportional to a power of the change, independent of the initial size of those quantities: one quantity varies as a power of another. For instance, considering the area of a square in terms of the length of its side, if the length is doubled, the area is multiplied by a factor of four.
The Pareto distribution, named after the Italian civil engineer, economist, and sociologist Vilfredo Pareto, is a power-law probability distribution that is used in description of social, quality control, scientific, geophysical, actuarial, and many other types of observable phenomena; the principle originally applied to describing the distribution of wealth in a society, fitting the trend that a large portion of wealth is held by a small fraction of the population.
In mathematics, the logarithm is the inverse function to exponentiation. That means that the logarithm of a number x to the base b is the exponent to which b must be raised to produce x. For example, since 1000 = 10^3, the logarithm base 10 of 1000 is 3, or log10(1000) = 3. The logarithm of x to base b is denoted as logb(x), or without parentheses, logb x, or even without the explicit base, log x, when no confusion is possible, or when the base does not matter, such as in big O notation.