In statistics, there is a negative relationship or inverse relationship between two variables if higher values of one variable tend to be associated with lower values of the other. A negative relationship between two variables usually implies that the correlation between them is negative, or — what is in some contexts equivalent — that the slope in a corresponding graph is negative. A negative correlation between variables is also called anticorrelation or inverse correlation.
Negative correlation can be seen geometrically: when two normalized random vectors are viewed as points on a sphere, the correlation between them is the cosine of the arc of separation between the points. When this arc exceeds a quarter-circle (θ > π/2), the cosine is negative. Diametrically opposed points represent a correlation of –1 = cos(π), and any point lying outside the hemisphere centered on a given point is negatively correlated with it.
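This geometric picture can be checked numerically. A minimal sketch in Python, where the two data vectors are illustrative and chosen so that one decreases exactly as the other increases:

```python
import numpy as np

# Two illustrative data vectors: y decreases exactly as x increases.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([5.0, 4.0, 3.0, 2.0, 1.0])

# Center each variable and normalize to unit length: the vectors become
# points on the unit sphere, and correlation is the cosine of the angle
# between them.
xc = (x - x.mean()) / np.linalg.norm(x - x.mean())
yc = (y - y.mean()) / np.linalg.norm(y - y.mean())

cos_theta = xc @ yc            # cosine of the arc of separation
r = np.corrcoef(x, y)[0, 1]    # Pearson correlation, for comparison

print(cos_theta, r)  # both approximately -1: diametrically opposed points
```

The cosine of the angle and the Pearson correlation coefficient agree, and for this perfectly inverse pair both equal –1.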
An example would be a negative cross-sectional relationship between illness and vaccination, if it is observed that where the incidence of one is higher than average, the incidence of the other tends to be lower than average. Similarly, there would be a negative temporal relationship between illness and vaccination if it is observed in one location that times with a higher-than-average incidence of one tend to coincide with a lower-than-average incidence of the other.
A particular inverse relationship is called inverse proportionality, and is given by y = k/x, where k > 0 is a constant. In the Cartesian plane this relationship appears as a hyperbola, with y decreasing as x increases.
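The inverse-proportional rule can be illustrated with a short sketch; the constant k = 12 and the sample points below are illustrative assumptions:

```python
# Sketch of inverse proportionality y = k / x with an illustrative constant k = 12.
k = 12.0
xs = [1.0, 2.0, 3.0, 4.0, 6.0]
ys = [k / x for x in xs]

# y falls as x rises, while the product x * y stays fixed at k,
# the defining property of inverse proportionality.
print(ys)  # [12.0, 6.0, 4.0, 3.0, 2.0]
```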
In finance, an inverse correlation between the returns on two different assets enhances the risk-reduction effect of diversifying by holding them both in the same portfolio.
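The risk-reduction effect can be made concrete with the standard two-asset portfolio-variance formula; a minimal sketch assuming two assets with equal, illustrative volatilities of 20% and an equally weighted portfolio:

```python
# Volatility of an equally weighted two-asset portfolio as a function of
# the correlation rho between the asset returns. The 20% volatilities
# are illustrative assumptions.
sigma_a, sigma_b = 0.20, 0.20
w = 0.5  # equal weights

def portfolio_vol(rho):
    # Two-asset portfolio variance: w^2*s_a^2 + w^2*s_b^2 + 2*w*w*rho*s_a*s_b
    var = (w * sigma_a) ** 2 + (w * sigma_b) ** 2 \
        + 2 * w * w * rho * sigma_a * sigma_b
    return var ** 0.5

for rho in (1.0, 0.0, -1.0):
    print(rho, round(portfolio_vol(rho), 4))
```

The lower the correlation, the lower the portfolio volatility; at ρ = –1 the two equally weighted assets hedge each other completely and the volatility drops to zero.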
Autocorrelation, sometimes known as serial correlation in the discrete time case, is the correlation of a signal with a delayed copy of itself as a function of delay. Informally, it is the similarity between observations of a random variable as a function of the time lag between them. The analysis of autocorrelation is a mathematical tool for finding repeating patterns, such as the presence of a periodic signal obscured by noise, or identifying the missing fundamental frequency in a signal implied by its harmonic frequencies.
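As a sketch of the pattern-finding use described above, the sample autocorrelation of a noisy sinusoid is large at lags near the signal's period and negative at half the period; the period, noise level, and random seed below are illustrative assumptions:

```python
import numpy as np

# Periodic signal with period 50 samples, buried in Gaussian noise.
rng = np.random.default_rng(0)
n, period = 1000, 50
t = np.arange(n)
signal = np.sin(2 * np.pi * t / period) + 0.5 * rng.standard_normal(n)

def autocorr(x, lag):
    """Sample autocorrelation of x at the given lag."""
    x = x - x.mean()
    return (x[:-lag] @ x[lag:]) / (x @ x) if lag else 1.0

# High positive autocorrelation one full period apart; negative a half
# period apart, where the sinusoid is in antiphase with itself.
print(autocorr(signal, period), autocorr(signal, period // 2))
```

Even though the periodic component is obscured by noise in the raw trace, the autocorrelation exposes it: the correlation of the signal with a copy of itself delayed by one period remains strongly positive.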