Indoor positioning system
An indoor positioning system (IPS) is a network of devices used to locate people or objects where GPS and other satellite technologies lack precision or fail entirely, such as inside multistory buildings, airports, alleys, parking garages, and underground locations. A large variety of techniques and devices are used to provide indoor positioning, ranging from reconfigured devices already deployed, such as smartphones, WiFi and Bluetooth antennas, digital cameras, and clocks, to purpose-built installations with relays and beacons strategically placed throughout a defined space.
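One common WiFi/Bluetooth-based technique is to turn received signal strength into an approximate distance from a beacon. The sketch below assumes the widely used log-distance path-loss model; the function name, reference power, and path-loss exponent are illustrative values, not taken from any particular system.

    import math

    def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exponent=2.0):
        """Estimate distance (metres) from a received signal strength reading.

        Assumes the log-distance path-loss model:
            RSSI = tx_power - 10 * n * log10(d)
        where tx_power is the RSSI expected at 1 m and n is the
        environment-dependent path-loss exponent (values here are illustrative).
        """
        return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exponent))

    # Example: a Bluetooth beacon heard at -75 dBm is roughly 6.3 m away
    # under these idealised assumptions.
    print(round(rssi_to_distance(-75.0), 1))

In practice several such distance estimates from beacons at known positions are combined (e.g. by trilateration or fingerprinting) to obtain a position inside the building.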
Geopositioning
Geopositioning, also known as geotracking, geolocalization, geolocating, geolocation, or geoposition fixing, is the process of determining or estimating the geographic position of an object. Geopositioning yields a set of geographic coordinates (such as latitude and longitude) in a given map datum; positions may also be expressed as a bearing and range from a known landmark. In turn, positions can determine a meaningful location, such as a street address.
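Converting a coordinate pair into a bearing and range from a known landmark is a straightforward calculation. The following minimal sketch uses the haversine formula on a spherical Earth model (ellipsoidal datum corrections omitted); the landmark coordinates in the example are approximate and purely illustrative.

    import math

    EARTH_RADIUS_M = 6_371_000  # mean Earth radius; spherical approximation

    def range_and_bearing(lat1, lon1, lat2, lon2):
        """Great-circle distance (m) and initial bearing (deg) from point 1 to point 2."""
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dlat = math.radians(lat2 - lat1)
        dlon = math.radians(lon2 - lon1)

        # Haversine formula for the central angle between the two points.
        a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
        distance = 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

        # Initial bearing, measured clockwise from true north.
        y = math.sin(dlon) * math.cos(p2)
        x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
        bearing = math.degrees(math.atan2(y, x)) % 360
        return distance, bearing

    # Example: the Eiffel Tower expressed as range and bearing from Notre-Dame
    # (approximate coordinates).
    print(range_and_bearing(48.8530, 2.3499, 48.8584, 2.2945))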
Positioning system
A positioning system is a system for determining the position of an object in space. One of the most well-known and commonly used positioning systems is the Global Positioning System (GPS). Positioning system technologies range from worldwide coverage with meter accuracy to workspace coverage with sub-millimeter accuracy. Interplanetary radio communication systems not only communicate with spacecraft but are also used to determine their position.
Global Positioning System
The Global Positioning System (GPS), originally Navstar GPS, is a satellite-based radio navigation system owned by the United States government and operated by the United States Space Force. It is one of the global navigation satellite systems (GNSS) that provides geolocation and time information to a GPS receiver anywhere on or near the Earth where there is an unobstructed line of sight to four or more GPS satellites.
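Four satellites are needed because the receiver must solve for four unknowns: three position coordinates plus its own clock bias. The rough Gauss-Newton sketch below illustrates that idea with synthetic satellite positions and pseudoranges; it assumes numpy, and all geometry and noise-free values are invented for the example rather than drawn from any real receiver.

    import numpy as np

    C = 299_792_458.0  # speed of light, m/s

    def solve_position(sat_positions, pseudoranges, iterations=10):
        """Estimate receiver position (x, y, z) and clock bias from >= 4 pseudoranges.

        Gauss-Newton iteration on the linearised pseudorange equations:
            rho_i = ||sat_i - receiver|| + c * clock_bias
        """
        x = np.zeros(4)  # initial guess: Earth's centre, zero clock bias
        for _ in range(iterations):
            ranges = np.linalg.norm(sat_positions - x[:3], axis=1)
            predicted = ranges + x[3]            # clock bias kept in metres (c * dt)
            residuals = pseudoranges - predicted
            # Jacobian: unit vectors from satellites to receiver, plus the bias column.
            H = np.hstack([-(sat_positions - x[:3]) / ranges[:, None],
                           np.ones((len(ranges), 1))])
            x += np.linalg.lstsq(H, residuals, rcond=None)[0]
        return x[:3], x[3] / C

    # Synthetic example: four satellites at GPS-like radii and a receiver on the surface.
    receiver = np.array([6_371_000.0, 0.0, 0.0])
    sats = np.array([
        [26_600_000.0, 0.0, 0.0],
        [0.0, 26_600_000.0, 0.0],
        [0.0, 0.0, 26_600_000.0],
        [15_000_000.0, 15_000_000.0, 15_000_000.0],
    ])
    true_bias = 1e-3  # 1 ms receiver clock error
    rho = np.linalg.norm(sats - receiver, axis=1) + C * true_bias
    pos, bias = solve_position(sats, rho)
    print(np.round(pos), round(bias, 6))  # recovers the receiver position and ~0.001 s bias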
Computer vision
Computer vision tasks include methods for acquiring, processing, analyzing, and understanding digital images, and extraction of high-dimensional data from the real world in order to produce numerical or symbolic information, e.g. in the form of decisions. Understanding in this context means the transformation of visual images (the input to the retina in the human analog) into descriptions of the world that make sense to thought processes and can elicit appropriate action.
Simultaneous localization and mapping
Simultaneous localization and mapping (SLAM) is the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of an agent's location within it. While this initially appears to be a chicken-and-egg problem, there are several algorithms known to solve it in, at least approximately, tractable time for certain environments. Popular approximate solution methods include the particle filter, extended Kalman filter, covariance intersection, and GraphSLAM.
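As a heavily simplified illustration of the Kalman-filter family of solutions, the sketch below jointly estimates a robot position and a single landmark position in one dimension from noisy odometry and range measurements. It assumes numpy; the scenario, noise levels, and variable names are all invented for the example and are not a full EKF-SLAM implementation.

    import numpy as np

    # State: [robot_x, landmark_x]; estimating both at once is the essence of SLAM.
    x = np.array([0.0, 0.0])          # initial estimate
    P = np.diag([0.01, 100.0])        # robot well known, landmark highly uncertain
    Q = np.diag([0.05, 0.0])          # process noise: only the robot moves
    R = 0.1                           # range measurement noise variance

    def predict(x, P, u):
        """Motion update: the robot moves by the odometry reading u."""
        x = x + np.array([u, 0.0])
        P = P + Q
        return x, P

    def update(x, P, z):
        """Measurement update: z is the measured (signed) range landmark - robot."""
        H = np.array([[-1.0, 1.0]])           # d(landmark - robot)/d(state)
        y = z - (x[1] - x[0])                 # innovation
        S = H @ P @ H.T + R
        K = P @ H.T / S                       # Kalman gain (2x1)
        x = x + (K * y).ravel()
        P = (np.eye(2) - K @ H) @ P
        return x, P

    # Truth: robot starts at 0, landmark sits at 10; drive forward in 1 m steps
    # while observing the landmark with noisy odometry and ranging.
    rng = np.random.default_rng(0)
    true_robot, true_landmark = 0.0, 10.0
    for _ in range(20):
        true_robot += 1.0
        x, P = predict(x, P, 1.0 + rng.normal(0, 0.05))
        z = (true_landmark - true_robot) + rng.normal(0, 0.1)
        x, P = update(x, P, z)
    print(np.round(x, 2))  # close to the true values [20, 10]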
Tone mapping
Tone mapping is a technique used in image processing and computer graphics to map one set of colors to another to approximate the appearance of high-dynamic-range images in a medium that has a more limited dynamic range. Print-outs, CRT or LCD monitors, and projectors all have a limited dynamic range that is inadequate to reproduce the full range of light intensities present in natural scenes. Tone mapping addresses the problem of strong contrast reduction from the scene radiance to the displayable range while preserving the image details and color appearance important to appreciate the original scene content.
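A minimal sketch of one classic global operator (Reinhard's L/(1+L) curve) is shown below; it assumes a linear HDR RGB image stored as a numpy array, and the key value and sample data are illustrative.

    import numpy as np

    def reinhard_tone_map(hdr_rgb, key=0.18, eps=1e-6):
        """Compress a linear HDR image into [0, 1] with Reinhard's global operator."""
        # Per-pixel luminance (Rec. 709 weights).
        lum = (0.2126 * hdr_rgb[..., 0] + 0.7152 * hdr_rgb[..., 1]
               + 0.0722 * hdr_rgb[..., 2])
        # Scale the image so its log-average luminance maps to the chosen 'key'.
        log_avg = np.exp(np.mean(np.log(lum + eps)))
        scaled = key / log_avg * lum
        # The core compression: L_d = L / (1 + L), squeezing bright values toward 1.
        lum_display = scaled / (1.0 + scaled)
        # Re-apply the per-pixel colour ratios, then clip to the displayable range.
        return np.clip(hdr_rgb * (lum_display / (lum + eps))[..., None], 0.0, 1.0)

    # Example: a synthetic 2x2 HDR patch spanning several orders of magnitude.
    hdr = np.array([[[0.05, 0.05, 0.05], [1.0, 0.8, 0.6]],
                    [[20.0, 18.0, 15.0], [500.0, 450.0, 400.0]]])
    print(np.round(reinhard_tone_map(hdr), 3))

The division by 1 + L is what performs the contrast reduction: very bright pixels are compressed strongly while darker pixels are left nearly linear, which preserves detail across the displayable range.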
Machine vision
Machine vision (MV) is the technology and methods used to provide imaging-based automatic inspection and analysis for such applications as automatic inspection, process control, and robot guidance, usually in industry. Machine vision refers to many technologies, software and hardware products, integrated systems, actions, methods and expertise. Machine vision as a systems engineering discipline can be considered distinct from computer vision, a form of computer science.
Photogrammetry
Photogrammetry is the science and technology of obtaining reliable information about physical objects and the environment through the process of recording, measuring and interpreting photographic images and patterns of electromagnetic radiant imagery and other phenomena. The term photogrammetry was coined by the Prussian architect Albrecht Meydenbauer and first appeared in his 1867 article "Die Photometrographie." There are many variants of photogrammetry. One example is the extraction of three-dimensional measurements from two-dimensional data (i.e., images).
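As a very simplified illustration of recovering a three-dimensional measurement from two-dimensional images, the sketch below uses the pinhole stereo relation Z = f·B/d for a calibrated, parallel camera pair; the numbers and function name are made up for the example and do not describe any particular photogrammetric workflow.

    def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
        """Depth of a point seen by two parallel pinhole cameras.

        Z = f * B / d, where f is the focal length in pixels, B the distance
        between the two camera centres, and d the horizontal shift (disparity)
        of the point between the left and right images.
        """
        return focal_length_px * baseline_m / disparity_px

    # Example: a 1200 px focal length, 0.5 m baseline, and 24 px disparity
    # place the point about 25 m from the cameras.
    print(depth_from_disparity(1200.0, 0.5, 24.0))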
Satellite laser ranging
In satellite laser ranging (SLR), a global network of observation stations measures the round-trip time of flight of ultrashort pulses of light to satellites equipped with retroreflectors. This provides instantaneous range measurements of millimeter-level precision, which can be accumulated to provide accurate measurement of orbits and a host of important scientific data. The laser pulse can also be reflected by the surface of a satellite without a retroreflector, which is used for tracking space debris.
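The range follows directly from the measured round-trip time: R = c·t/2, which is why millimeter-level precision demands picosecond-level timing (a 1 mm change in range shifts the round-trip time by only a few picoseconds). A toy illustration with a made-up flight time:

    C = 299_792_458.0  # speed of light in vacuum, m/s

    def range_from_round_trip(t_seconds):
        """One-way range from the measured round-trip time of a laser pulse."""
        return C * t_seconds / 2.0

    # Example (illustrative): a pulse returning after ~5.36 ms corresponds to a
    # satellite about 803 km away along the line of sight.
    print(range_from_round_trip(5.36e-3))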