Neural network: A neural network can refer to a neural circuit of biological neurons (sometimes also called a biological neural network), or to a network of artificial neurons or nodes in the case of an artificial neural network. Artificial neural networks are used for solving artificial intelligence (AI) problems; they model connections of biological neurons as weights between nodes. A positive weight reflects an excitatory connection, while negative values mean inhibitory connections. All inputs are modified by a weight and summed.
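The weighted-sum behaviour described above can be sketched as a single artificial neuron. The weights, inputs, and bias below are illustrative values, not taken from any real model:

```python
# Minimal sketch of one artificial neuron: each input is modified by a
# weight and the results are summed. Positive weights act as excitatory
# connections, negative weights as inhibitory ones.

def neuron_output(inputs, weights, bias=0.0):
    """Weighted sum of inputs plus an optional bias term."""
    return sum(x * w for x, w in zip(inputs, weights)) + bias

# One excitatory (+0.5) and one inhibitory (-0.25) connection:
print(neuron_output([2.0, 2.0], [0.5, -0.25]))  # 0.5
```

A real network would pass this sum through a nonlinear activation function; the raw sum is shown here to mirror the description in the text.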
Redshift survey: In astronomy, a redshift survey is a survey of a section of the sky to measure the redshift of astronomical objects: usually galaxies, but sometimes other objects such as galaxy clusters or quasars. Using Hubble's law, the redshift can be used to estimate the distance of an object from Earth. By combining redshift with angular position data, a redshift survey maps the 3D distribution of matter within a field of the sky. These observations are used to measure detailed statistical properties of the large-scale structure of the universe.
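The distance estimate from Hubble's law mentioned above is, at low redshift, d ≈ cz / H0. A minimal sketch, assuming a fiducial Hubble constant of 70 km/s/Mpc (the exact value is a measured quantity, not fixed by the text):

```python
# Hedged sketch: low-redshift distance estimate from Hubble's law, d ~ c*z / H0.
# Valid only for z << 1, where the recession velocity is approximately c*z.

C_KM_S = 299792.458   # speed of light in km/s
H0 = 70.0             # assumed Hubble constant, km/s per Mpc

def hubble_distance_mpc(z):
    """Approximate distance in megaparsecs for a small redshift z."""
    return C_KM_S * z / H0

# A galaxy at z = 0.023 lies roughly 100 Mpc away under these assumptions:
print(round(hubble_distance_mpc(0.023), 1))  # 98.5
```

Combining such a distance with the object's angular position on the sky gives one point in the survey's 3D map.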
Feedforward neural network: A feedforward neural network (FNN) is one of the two broad types of artificial neural network, characterized by the direction of the flow of information between its layers. Its flow is uni-directional, meaning that the information in the model flows in only one direction—forward—from the input nodes, through the hidden nodes (if any) and to the output nodes, without any cycles or loops, in contrast to recurrent neural networks, which have a bi-directional flow.
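The one-way flow described above can be sketched as a forward pass through a single hidden layer. The weights are random illustrative values, and the ReLU/linear activations are assumptions, not part of the definition:

```python
import numpy as np

# Sketch of a feedforward pass: information moves input -> hidden -> output,
# with no cycles or loops. Sizes and weights here are illustrative.

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))   # input layer (3 nodes) -> hidden layer (4 nodes)
W2 = rng.normal(size=(4, 2))   # hidden layer (4 nodes) -> output layer (2 nodes)

def forward(x):
    """Uni-directional pass; nothing feeds back to an earlier layer."""
    h = np.maximum(0.0, x @ W1)   # hidden layer with ReLU activation (assumed)
    return h @ W2                 # linear output layer (assumed)

y = forward(np.array([1.0, -0.5, 0.25]))
print(y.shape)  # (2,)
```

A recurrent network would additionally feed hidden activations back into themselves at the next step; the absence of any such loop is what makes this network feedforward.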
Types of artificial neural networks: There are many types of artificial neural networks (ANN). Artificial neural networks are computational models inspired by biological neural networks, and are used to approximate functions that are generally unknown. Particularly, they are inspired by the behaviour of neurons and the electrical signals they convey between input (such as from the eyes or nerve endings in the hand), processing, and output from the brain (such as reacting to light, touch, or heat). The way neurons semantically communicate is an area of ongoing research.
Residual neural network: A residual neural network (a.k.a. residual network, ResNet) is a deep learning model in which the weight layers learn residual functions with reference to the layer inputs. A residual network is a network with skip connections that perform identity mappings, merged with the layer outputs by addition. It behaves like a Highway Network whose gates are opened through strongly positive bias weights. This enables deep learning models with tens or hundreds of layers to train easily and achieve better accuracy as they grow deeper.
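The skip connection described above can be sketched as a block computing F(x) + x, so the weight layers only have to learn the residual F relative to the identity mapping. The tiny two-layer F below is illustrative, not a real ResNet block:

```python
import numpy as np

# Sketch of a residual block: output = F(x) + x. The skip connection carries
# x through unchanged (an identity mapping) and is merged by addition.

rng = np.random.default_rng(1)
W1 = rng.normal(scale=0.1, size=(4, 4))  # small illustrative weights
W2 = rng.normal(scale=0.1, size=(4, 4))

def residual_block(x):
    f = np.maximum(0.0, x @ W1) @ W2  # F(x): the learned residual function
    return f + x                      # skip connection merged by addition

x = np.ones(4)
# With F near zero (small weights), the block approximates the identity,
# which is what makes very deep stacks of such blocks easy to train:
print(np.allclose(residual_block(x), x, atol=0.5))  # True
```

Stacking many such blocks keeps gradients flowing through the identity paths, which is the mechanism behind training networks hundreds of layers deep.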
Principal component analysis: Principal component analysis (PCA) is a popular technique for analyzing large datasets containing a high number of dimensions/features per observation, increasing the interpretability of data while preserving the maximum amount of information, and enabling the visualization of multidimensional data. Formally, PCA is a statistical technique for reducing the dimensionality of a dataset. This is accomplished by linearly transforming the data into a new coordinate system where (most of) the variation in the data can be described with fewer dimensions than the initial data.
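The linear transformation described above can be sketched as an eigendecomposition of the covariance matrix: center the data, find the orthogonal directions of maximal variance, and project onto the top k of them. The synthetic near-1D dataset below is an assumption for illustration:

```python
import numpy as np

# Sketch of PCA: the new coordinate system is given by the eigenvectors of
# the sample covariance matrix, ordered by the variance they explain.

def pca(X, k):
    Xc = X - X.mean(axis=0)                 # center each feature
    cov = Xc.T @ Xc / (len(X) - 1)          # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigh returns ascending eigenvalues
    top = eigvecs[:, ::-1][:, :k]           # top-k principal directions
    return Xc @ top                         # data expressed in fewer dimensions

# Points near a line in 3D: one component captures nearly all the variation.
t = np.linspace(0, 1, 50)
noise = 0.01 * np.random.default_rng(2).normal(size=(50, 3))
X = np.column_stack([t, 2 * t, 3 * t]) + noise
Z = pca(X, 1)
print(Z.shape)  # (50, 1)
```

Here the 3D data is described almost losslessly by a single coordinate, which is exactly the "fewer dimensions than the initial data" property the text refers to.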
Tired light: Tired light is a class of hypothetical redshift mechanisms that was proposed as an alternative explanation for the redshift-distance relationship. These models have been proposed as alternatives to the models that involve the expansion of the universe. The concept was first proposed in 1929 by Fritz Zwicky, who suggested that if photons lost energy over time through collisions with other particles in a regular way, the more distant objects would appear redder than more nearby ones.
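The "regular" energy loss in the description above is often modelled as a fixed fractional loss per unit distance, giving E(d) = E0·exp(-d/L) and hence a redshift z = E0/E(d) - 1 = exp(d/L) - 1. A hedged sketch of that toy model (the attenuation length L is an assumed free parameter, and tired light is not the accepted explanation for cosmological redshift):

```python
import math

# Toy tired-light model: a photon loses a fixed fraction of its energy per
# unit distance, so E(d) = E0 * exp(-d / L). Since photon energy is inversely
# proportional to wavelength, the implied redshift is z = exp(d / L) - 1.

def tired_light_redshift(d, L):
    """Redshift of a photon after travelling distance d with attenuation length L."""
    return math.exp(d / L) - 1.0

# For d << L the relation is nearly linear in distance (z ~ d / L),
# mimicking the observed low-redshift redshift-distance relationship:
print(round(tired_light_redshift(43.0, 4300.0), 4))  # close to d / L = 0.01
```

The linear small-distance limit is why such models could initially compete with expansion-based explanations, although they fail other observational tests.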
Strong gravitational lensing: Strong gravitational lensing is a gravitational lensing effect that is strong enough to produce multiple images, arcs, or even Einstein rings. Generally, for strong lensing to occur, the projected lens mass density must be greater than the critical surface density, that is, Σ > Σ_cr. For point-like background sources, there will be multiple images; for extended background emissions, there can be arcs or rings. Topologically, multiple image production is governed by the odd number theorem.
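The critical surface density in the criterion above depends only on the lensing geometry: Σ_cr = c² D_S / (4πG · D_L · D_LS), where D_L, D_S, and D_LS are angular-diameter distances to the lens, to the source, and between them. A sketch with assumed illustrative distances (not a real lens system):

```python
import math

# Sketch of the strong-lensing criterion: lensing is "strong" where the
# projected surface mass density Sigma exceeds
#   Sigma_cr = c^2 / (4 * pi * G) * D_S / (D_L * D_LS).

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8     # speed of light, m/s
MPC = 3.086e22  # metres per megaparsec

def critical_density(d_l_mpc, d_s_mpc, d_ls_mpc):
    """Critical surface density in kg/m^2 for given angular-diameter distances."""
    d_l, d_s, d_ls = (d * MPC for d in (d_l_mpc, d_s_mpc, d_ls_mpc))
    return C**2 / (4 * math.pi * G) * d_s / (d_l * d_ls)

# Assumed geometry: lens at 1000 Mpc, source at 2000 Mpc, 1200 Mpc between.
sigma_cr = critical_density(1000.0, 2000.0, 1200.0)
print(round(sigma_cr, 1))  # a few kg per square metre
```

Only where the lens projects more mass per unit area than this threshold do multiple images, arcs, or rings appear.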
Reionization: In the fields of Big Bang theory and cosmology, reionization is the process that caused electrically neutral atoms in the universe to reionize after the lapse of the "dark ages". Reionization is the second of two major phase transitions of gas in the universe (the first is recombination). While the majority of baryonic matter in the universe is in the form of hydrogen and helium, reionization usually refers strictly to the reionization of the element hydrogen.
Deep reinforcement learning: Deep reinforcement learning (deep RL) is a subfield of machine learning that combines reinforcement learning (RL) and deep learning. RL considers the problem of a computational agent learning to make decisions by trial and error. Deep RL incorporates deep learning into the solution, allowing agents to make decisions from unstructured input data without manual engineering of the state space. Deep RL algorithms are able to take in very large inputs (e.g. every pixel rendered to the screen in a video game) and decide what actions to perform to optimize an objective.
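The trial-and-error loop described above can be sketched with Q-learning, where a learned value function stands in for the deep network (here a simple table, for brevity). The two-state toy environment and all hyperparameters are illustrative assumptions:

```python
import numpy as np

# Sketch of the RL trial-and-error loop with a learned action-value function
# Q(s, a). In deep RL the table below would be replaced by a deep network
# mapping raw observations (e.g. pixels) to action values.

rng = np.random.default_rng(3)
n_states, n_actions = 2, 2
Q = np.zeros((n_states, n_actions))

def step(state, action):
    """Toy environment: action 1 in state 1 pays off; everything else doesn't."""
    reward = 1.0 if (state == 1 and action == 1) else 0.0
    return int(rng.integers(n_states)), reward  # random next state

state, alpha, gamma, eps = 0, 0.1, 0.9, 0.2
for _ in range(2000):  # trial and error
    if rng.random() < eps:                 # explore with probability eps
        action = int(rng.integers(n_actions))
    else:                                  # otherwise exploit current estimate
        action = int(Q[state].argmax())
    next_state, reward = step(state, action)
    td_target = reward + gamma * Q[next_state].max()
    Q[state, action] += alpha * (td_target - Q[state, action])  # TD update
    state = next_state

print(int(Q[1].argmax()))  # the agent learns that action 1 is best in state 1
```

Swapping the table for a neural network (plus tricks such as experience replay) is what turns this tabular sketch into deep RL capable of handling very large inputs.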