Network science
Network science is an academic field that studies complex networks such as telecommunication networks, computer networks, biological networks, cognitive and semantic networks, and social networks, treating the distinct elements or actors as nodes (or vertices) and the connections between them as links (or edges). The field draws on theories and methods including graph theory from mathematics, statistical mechanics from physics, data mining and information visualization from computer science, inferential modeling from statistics, and social structure from sociology.
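As a minimal illustration of the nodes-and-edges view, the sketch below stores a small undirected network as an adjacency list and computes each node's degree, the simplest structural measure; the node names and edges are made up for the example.

```python
from collections import defaultdict

# Toy undirected network: each pair is an edge between two nodes.
edges = [("A", "B"), ("B", "C"), ("C", "A"), ("C", "D")]

# Adjacency list: node -> set of neighboring nodes.
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

# Degree of a node = number of links incident to it.
degree = {node: len(neighbors) for node, neighbors in adj.items()}
print(degree)  # {'A': 2, 'B': 2, 'C': 3, 'D': 1}
```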
Semiregular variable star
In astronomy, a semiregular variable star is a type of variable star: a giant or supergiant of intermediate or late (cooler) spectral type that shows considerable periodicity in its light changes, accompanied or sometimes interrupted by various irregularities. Periods lie in the range from 20 to more than 2000 days, while the shapes of the light curves may differ considerably and vary from cycle to cycle. The amplitudes range from several hundredths of a magnitude to several magnitudes (usually 1-2 magnitudes in the V filter).
Open source
Open source is source code that is made freely available for possible modification and redistribution. Products include permission to use the source code, design documents, or content of the product. The open-source model is a decentralized software development model that encourages open collaboration. A main principle of open-source software development is peer production, with products such as source code, blueprints, and documentation freely available to the public.
Latent and observable variables
In statistics, latent variables (from Latin: present participle of lateo, "lie hidden") are variables that can only be inferred indirectly through a mathematical model from other, observable variables, i.e. those that can be directly observed or measured. Such latent variable models are used in many disciplines, including political science, demography, engineering, medicine, ecology, physics, machine learning/artificial intelligence, bioinformatics, chemometrics, natural language processing, management, psychology and the social sciences.
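As an illustration, the sketch below fits a two-component Gaussian mixture, one common latent variable model, in which each observation's group membership is hidden and must be inferred from the measurements alone; the simulated data and the choice of scikit-learn's GaussianMixture are assumptions made for the example.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Simulate observations drawn from two hidden groups; only the
# measurements x are observed, the group labels are latent.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 500),
                    rng.normal(3, 1, 500)]).reshape(-1, 1)

model = GaussianMixture(n_components=2, random_state=0).fit(x)
z_hat = model.predict(x)        # inferred latent group for each observation
print(model.means_.ravel())     # recovered group means, roughly -2 and 3
```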
Wireless sensor network
Wireless sensor networks (WSNs) are networks of spatially dispersed, dedicated sensors that monitor and record the physical conditions of the environment and forward the collected data to a central location. WSNs can measure environmental conditions such as temperature, sound, pollution levels, humidity and wind. They are similar to wireless ad hoc networks in that they rely on wireless connectivity and the spontaneous formation of networks to transport sensor data wirelessly.
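The sketch below illustrates the forwarding idea with a single simulated sensor node that sends temperature readings to a central sink over UDP; the sink address, node name, message format, and sampling interval are illustrative assumptions, not part of any WSN standard.

```python
import json
import random
import socket
import time

# Hypothetical sink address (documentation-range IP); a real deployment
# would use the address of the network's central collection point.
SINK = ("192.0.2.10", 5000)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for _ in range(5):
    # Simulated sensor reading; a real node would sample its hardware.
    reading = {"node": "sensor-01",
               "temp_c": round(random.uniform(18.0, 25.0), 2),
               "ts": time.time()}
    sock.sendto(json.dumps(reading).encode(), SINK)
    time.sleep(1.0)  # sampling interval
sock.close()
```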
Performativity
Performativity is the concept that language can function as a form of social action and have the effect of change. The concept has applications in diverse fields such as anthropology, social and cultural geography, economics, gender studies (the social construction of gender), law, linguistics, performance studies, history, management studies and philosophy. The concept was first described by the philosopher of language John L. Austin, who referred to a specific capacity: the capacity of speech and communication to act or to consummate an action.
Performative utterance
In the philosophy of language and speech act theory, performative utterances are sentences that not only describe a given reality but also change the social reality they are describing. In a 1955 lecture series, later published as How to Do Things with Words, J. L. Austin argued against the positivist philosophical claim that utterances always "describe" or "constate" something and are thus always true or false.
Leaky bucket
The leaky bucket is an algorithm based on an analogy to a bucket with a constant leak: the bucket overflows if the average rate at which water is poured in exceeds the rate at which the bucket leaks, or if more water than the bucket's capacity is poured in all at once. The algorithm can be used to determine whether a sequence of discrete events conforms to defined limits on its average and peak rates or frequencies, e.g. to limit the actions associated with these events to those rates, or to delay them until they conform.
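A minimal sketch of the leaky bucket used as a meter follows; the class name and the parameters (capacity for burst tolerance, leak_rate for the sustained rate) are illustrative choices for this example, not a standard interface.

```python
import time

class LeakyBucket:
    """Leaky bucket as a meter: water is poured in per event and
    drains out at a constant rate; overflow means non-conformance."""

    def __init__(self, capacity: float, leak_rate: float):
        self.capacity = capacity    # bucket size (burst tolerance)
        self.leak_rate = leak_rate  # units drained per second
        self.level = 0.0
        self.last = time.monotonic()

    def conforms(self, amount: float = 1.0) -> bool:
        now = time.monotonic()
        # Drain the bucket for the time elapsed since the last check.
        self.level = max(0.0, self.level - (now - self.last) * self.leak_rate)
        self.last = now
        if self.level + amount <= self.capacity:
            self.level += amount  # event accepted: pour its water in
            return True
        return False              # bucket would overflow: non-conforming
```

For instance, `LeakyBucket(capacity=10, leak_rate=5)` tolerates bursts of up to 10 events while enforcing a long-run average of 5 events per second.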
Best, worst and average case
In computer science, the best, worst, and average cases of a given algorithm express what the resource usage is at least, at most, and on average, respectively. Usually the resource considered is running time, i.e. time complexity, but it could also be memory or some other resource. The best case is the function that performs the minimum number of steps on input data of size n; the worst case is the function that performs the maximum number of steps on input data of size n.
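Linear search makes this concrete: the sketch below scans a list left to right, so the best case (target in the first position) takes one comparison, while the worst case (target absent) examines all n elements.

```python
def linear_search(items, target):
    """Return the index of target, scanning left to right; -1 if absent."""
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1

data = list(range(1000))
linear_search(data, 0)    # best case: found immediately, 1 comparison
linear_search(data, -5)   # worst case: absent, all n elements examined
```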
Errors-in-variables models
In statistics, errors-in-variables models or measurement error models are regression models that account for measurement errors in the independent variables. In contrast, standard regression models assume that those regressors have been measured exactly, or observed without error; as such, those models account only for errors in the dependent variables, or responses. When some regressors have been measured with error, estimation based on the standard assumption leads to inconsistent estimates, meaning that the parameter estimates do not tend to the true values even in very large samples.
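The inconsistency can be seen in a short simulation: below, measurement noise in the regressor attenuates the ordinary least squares slope toward zero even with a large sample; the true slope and the noise variances are arbitrary choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
beta = 2.0

x_true = rng.normal(0.0, 1.0, n)             # true regressor, variance 1
y = beta * x_true + rng.normal(0.0, 1.0, n)  # response with noise
x_obs = x_true + rng.normal(0.0, 1.0, n)     # regressor measured with error

# Naive OLS slope of y on the mismeasured regressor.
naive_slope = np.polyfit(x_obs, y, 1)[0]

# Attenuation: the estimate converges to
# beta * var(x_true) / (var(x_true) + var(error)) = 2.0 * 1/(1+1) = 1.0,
# not to the true slope 2.0, no matter how large n grows.
print(naive_slope)  # roughly 1.0
```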