In probability and statistics, the log-logistic distribution (known as the Fisk distribution in economics) is a continuous probability distribution for a non-negative random variable. It is used in survival analysis as a parametric model for events whose rate increases initially and decreases later, as, for example, mortality rate from cancer following diagnosis or treatment. It has also been used in hydrology to model stream flow and precipitation, in economics as a simple model of the distribution of wealth or income, and in networking to model the transmission times of data considering both the network and the software.
The log-logistic distribution is the probability distribution of a random variable whose logarithm has a logistic distribution.
It is similar in shape to the log-normal distribution but has heavier tails. Unlike the log-normal, its cumulative distribution function can be written in closed form.
There are several different parameterizations of the distribution in use. The one shown here gives reasonably interpretable parameters and a simple form for the cumulative distribution function.
The parameter $\alpha > 0$ is a scale parameter and is also the median of the distribution. The parameter $\beta > 0$ is a shape parameter. The distribution is unimodal when $\beta > 1$ and its dispersion decreases as $\beta$ increases.
The cumulative distribution function is
$$F(x;\alpha,\beta) = \frac{1}{1+(x/\alpha)^{-\beta}} = \frac{(x/\alpha)^{\beta}}{1+(x/\alpha)^{\beta}} = \frac{x^{\beta}}{\alpha^{\beta}+x^{\beta}},$$
where $x > 0$, $\alpha > 0$ and $\beta > 0$.
The probability density function is
$$f(x;\alpha,\beta) = \frac{(\beta/\alpha)(x/\alpha)^{\beta-1}}{\left(1+(x/\alpha)^{\beta}\right)^{2}}.$$
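As a numerical sanity check of the two formulas above, the integral of the density from 0 to $x$ should reproduce the closed-form cumulative distribution function. The following sketch (the helper names are illustrative, not part of the article) verifies this with a trapezoidal quadrature:

```python
import numpy as np

def loglogistic_cdf(x, alpha, beta):
    # F(x) = x^beta / (alpha^beta + x^beta)
    return x**beta / (alpha**beta + x**beta)

def loglogistic_pdf(x, alpha, beta):
    # f(x) = (beta/alpha) (x/alpha)^(beta-1) / (1 + (x/alpha)^beta)^2
    z = (x / alpha)**beta
    return (beta / x) * z / (1.0 + z)**2

alpha, beta = 2.0, 3.0
xs = np.linspace(1e-6, 10.0, 200_001)
pdf_vals = loglogistic_pdf(xs, alpha, beta)

# Trapezoidal approximation of the integral of f from 0 to 10.
numeric_cdf = float(np.sum(0.5 * (pdf_vals[1:] + pdf_vals[:-1]) * np.diff(xs)))
print(numeric_cdf, loglogistic_cdf(10.0, alpha, beta))  # both ~0.9921
```

The agreement holds for any valid $(\alpha, \beta)$; the closed-form CDF is what makes the log-logistic convenient compared with the log-normal.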
An alternative parametrization is given by the pair $(\mu, s)$, in analogy with the logistic distribution: $\mu = \ln\alpha$ and $s = 1/\beta$.
The $k$th raw moment exists only when $k < \beta$, when it is given by
$$\operatorname{E}(X^{k}) = \alpha^{k}\,\operatorname{B}(1 - k/\beta,\; 1 + k/\beta) = \alpha^{k}\,\frac{k\pi/\beta}{\sin(k\pi/\beta)},$$
where B is the beta function.
Expressions for the mean, variance, skewness and kurtosis can be derived from this. Writing $b = \pi/\beta$ for convenience, the mean is
$$\operatorname{E}(X) = \frac{\alpha b}{\sin b}, \quad \beta > 1,$$
and the variance is
$$\operatorname{Var}(X) = \alpha^{2}\left(\frac{2b}{\sin 2b} - \frac{b^{2}}{\sin^{2} b}\right), \quad \beta > 2.$$
Explicit expressions for the skewness and kurtosis are lengthy.
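The mean and variance formulas can be checked against moments obtained by direct numerical integration of $x^{k} f(x)$. The sketch below (function names are illustrative) does this for $\alpha = 2$, $\beta = 4$:

```python
import numpy as np

def loglogistic_mean(alpha, beta):
    # E[X] = alpha * b / sin(b), with b = pi/beta; valid for beta > 1
    b = np.pi / beta
    return alpha * b / np.sin(b)

def loglogistic_var(alpha, beta):
    # Var[X] = alpha^2 * (2b/sin(2b) - b^2/sin(b)^2); valid for beta > 2
    b = np.pi / beta
    return alpha**2 * (2*b/np.sin(2*b) - b**2/np.sin(b)**2)

alpha, beta = 2.0, 4.0
xs = np.linspace(1e-9, 500.0, 1_000_001)
z = (xs/alpha)**beta
pdf = (beta/xs) * z / (1 + z)**2
dx = np.diff(xs)

# Trapezoidal estimates of the first two raw moments.
m1 = float(np.sum(0.5 * (xs[1:]*pdf[1:] + xs[:-1]*pdf[:-1]) * dx))
m2 = float(np.sum(0.5 * (xs[1:]**2*pdf[1:] + xs[:-1]**2*pdf[:-1]) * dx))

print(m1, loglogistic_mean(alpha, beta))           # both ~2.2214
print(m2 - m1**2, loglogistic_var(alpha, beta))    # both ~1.348
```

Note the heavy right tail: for $\beta = 4$ the second moment exists, but the quadrature grid must extend far enough to the right for the numerical check to converge.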
As $\beta$ tends to infinity the mean tends to $\alpha$, the variance and skewness tend to zero and the excess kurtosis tends to 6/5 (see also related distributions).
The quantile function (inverse cumulative distribution function) is
$$F^{-1}(p;\alpha,\beta) = \alpha\left(\frac{p}{1-p}\right)^{1/\beta}.$$
It follows that the median is $\alpha$, the lower quartile is $3^{-1/\beta}\alpha$ and the upper quartile is $3^{1/\beta}\alpha$.
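Because the quantile function is available in closed form, inverse-transform sampling is straightforward: apply $F^{-1}$ to uniform random numbers. A minimal sketch (names are illustrative) that also checks the quartile formulas empirically:

```python
import numpy as np

def loglogistic_quantile(p, alpha, beta):
    # F^{-1}(p) = alpha * (p / (1 - p))^(1/beta)
    return alpha * (p / (1.0 - p))**(1.0 / beta)

alpha, beta = 2.0, 3.0
rng = np.random.default_rng(0)
u = rng.uniform(size=200_000)
samples = loglogistic_quantile(u, alpha, beta)

# Empirical median and quartiles should match the closed forms.
print(np.median(samples))            # ~ alpha = 2
print(np.quantile(samples, 0.25))    # ~ alpha * 3**(-1/beta) ~ 1.387
print(np.quantile(samples, 0.75))    # ~ alpha * 3**(1/beta)  ~ 2.884
```

This sampling route needs no rejection step, which is one practical payoff of the closed-form CDF.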