Summary
In image processing, ridge detection is the attempt, via software, to locate ridges in an image, defined as curves whose points are local maxima of the function, akin to geographical ridges. For a function of N variables, its ridges are a set of curves whose points are local maxima in N − 1 dimensions. In this respect, the notion of ridge points extends the concept of a local maximum. Correspondingly, the notion of valleys for a function can be defined by replacing the condition of a local maximum with the condition of a local minimum. The union of ridge sets and valley sets, together with a related set of points called the connector set, forms a connected set of curves that partition, intersect, or meet at the critical points of the function. This union of sets is called the function's relative critical set. Ridge sets, valley sets, and relative critical sets represent important geometric information intrinsic to a function. In a way, they provide a compact representation of important features of the function, but the extent to which they can be used to determine global features of the function is an open question.

The primary motivation for the creation of ridge detection and valley detection procedures has come from image analysis and computer vision, and is to capture the interior of elongated objects in the image domain. Ridge-related representations in terms of watersheds have been used for image segmentation. There have also been attempts to capture the shapes of objects by graph-based representations that reflect ridges, valleys and critical points in the image domain. Such representations may, however, be highly noise sensitive if computed at a single scale only. Because scale-space theoretic computations involve convolution with the Gaussian (smoothing) kernel, it has been hoped that the use of multi-scale ridges, valleys and critical points in the context of scale-space theory should allow for a more robust representation of objects (or shapes) in the image. In this respect, ridges and valleys can be seen as a complement to natural interest points or local extremal points.
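As an illustration of the idea of ridge points as directional maxima of a smoothed image, the following is a minimal sketch of a single-scale ridge-strength measure based on the eigenvalues of the Hessian of a Gaussian-smoothed image. The function name ridge_strength, the scale parameter sigma, and the toy input are illustrative assumptions, not part of any standard library or a specific published detector.

```python
# Minimal sketch: ridge strength from Hessian eigenvalues of a Gaussian-smoothed
# image. Assumes a grayscale image stored as a 2-D NumPy array.
import numpy as np
from scipy.ndimage import gaussian_filter

def ridge_strength(image, sigma=2.0):
    """Return a map of bright-ridge strength at scale sigma.

    Along a bright ridge, the second derivative across the ridge is strongly
    negative, i.e. the smaller Hessian eigenvalue is large in magnitude.
    """
    # Second derivatives of the smoothed image (entries of the Hessian).
    Lxx = gaussian_filter(image, sigma, order=(0, 2))  # d2/dx2 (columns)
    Lyy = gaussian_filter(image, sigma, order=(2, 0))  # d2/dy2 (rows)
    Lxy = gaussian_filter(image, sigma, order=(1, 1))  # mixed derivative

    # Eigenvalues of the symmetric 2x2 matrix [[Lxx, Lxy], [Lxy, Lyy]].
    trace_half = 0.5 * (Lxx + Lyy)
    delta = np.sqrt(0.25 * (Lxx - Lyy) ** 2 + Lxy ** 2)
    lambda_min = trace_half - delta  # most negative principal curvature

    # Bright ridges give a strongly negative lambda_min; use its magnitude.
    return np.maximum(-lambda_min, 0.0)

if __name__ == "__main__":
    # Toy example: a thin bright diagonal line acts as a ridge.
    img = np.zeros((64, 64))
    np.fill_diagonal(img, 1.0)
    print(ridge_strength(img, sigma=2.0).max())
```

In a multi-scale setting, one would typically evaluate such a measure over a range of sigma values and keep scale-normalized responses, in the spirit of the scale-space approach mentioned above.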
Related courses (6)
EE-623: Perception and learning from multimodal sensors
The course will cover different aspects of multimodal processing (complementarity vs redundancy; alignment and synchrony; fusion), with an emphasis on the analysis of people, behaviors and interaction
CS-413: Computational photography
The students will gain the theoretical knowledge in computational photography, which allows recording and processing a richer visual experience than traditional digital imaging. They will also execute
MICRO-512: Image processing II
Study of advanced image processing; mathematical imaging. Development of image-processing software and prototyping in Jupyter Notebooks; application to real-world examples in industrial vision and bio
Related concepts (7)
Blob detection
In computer vision, blob detection methods are aimed at detecting regions in a digital image that differ in properties, such as brightness or color, compared to surrounding regions. Informally, a blob is a region of an image in which some properties are constant or approximately constant; all the points in a blob can be considered in some sense to be similar to each other. The most common method for blob detection is convolution.
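To make the convolution-based approach concrete, here is a minimal sketch of blob detection by convolving with a Laplacian-of-Gaussian kernel and keeping local maxima of the scale-normalized response. The function name detect_blobs, the list of scales, and the threshold are illustrative assumptions rather than a reference implementation.

```python
# Minimal sketch: Laplacian-of-Gaussian blob detection on a grayscale
# NumPy array. Scales and threshold below are arbitrary illustrative choices.
import numpy as np
from scipy.ndimage import gaussian_laplace, maximum_filter

def detect_blobs(image, sigmas=(2.0, 4.0, 8.0), threshold=0.05):
    """Return (row, col, sigma) triples where the scale-normalized
    Laplacian-of-Gaussian response has a strong local maximum."""
    blobs = []
    for sigma in sigmas:
        # Negate so bright blobs give positive responses;
        # multiply by sigma**2 for scale normalization.
        response = -(sigma ** 2) * gaussian_laplace(image, sigma)
        # Local maxima in a 3x3 neighborhood that exceed the threshold.
        local_max = response == maximum_filter(response, size=3)
        rows, cols = np.nonzero(local_max & (response > threshold))
        blobs.extend((r, c, sigma) for r, c in zip(rows, cols))
    return blobs

if __name__ == "__main__":
    img = np.zeros((64, 64))
    img[30:35, 30:35] = 1.0  # a small bright square acts as a blob
    print(detect_blobs(img)[:5])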
Feature (computer vision)
In computer vision and image processing, a feature is a piece of information about the content of an image, typically about whether a certain region of the image has certain properties. Features may be specific structures in the image such as points, edges or objects. Features may also be the result of a general neighborhood operation or feature detection applied to the image. Other examples of features are related to motion in image sequences, or to shapes defined in terms of curves or boundaries between different image regions.
Corner detection
Corner detection is an approach used within computer vision systems to extract certain kinds of features and infer the contents of an image. Corner detection is frequently used in motion detection, image registration, video tracking, image mosaicing, panorama stitching, 3D reconstruction and object recognition. Corner detection overlaps with the topic of interest point detection. A corner can be defined as the intersection of two edges. A corner can also be defined as a point for which there are two dominant and different edge directions in a local neighbourhood of the point.
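The notion of a point with two dominant edge directions can be illustrated with a Harris-style corner response computed from the structure tensor, sketched below. The function name harris_response and the parameter values (sigma, k) are illustrative assumptions; this is a sketch of the general technique, not a particular system's implementation.

```python
# Minimal sketch: Harris corner response on a grayscale NumPy array.
# A point with two dominant gradient directions has two large eigenvalues
# of the local structure tensor and therefore a high response.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def harris_response(image, sigma=1.5, k=0.04):
    """Return the Harris corner measure det(M) - k * trace(M)^2."""
    # Image gradients.
    Ix = sobel(image, axis=1)
    Iy = sobel(image, axis=0)

    # Structure tensor entries, averaged over a Gaussian window.
    Sxx = gaussian_filter(Ix * Ix, sigma)
    Syy = gaussian_filter(Iy * Iy, sigma)
    Sxy = gaussian_filter(Ix * Iy, sigma)

    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2

if __name__ == "__main__":
    img = np.zeros((64, 64))
    img[20:, 20:] = 1.0  # a step corner near (20, 20)
    resp = harris_response(img)
    print(np.unravel_index(np.argmax(resp), resp.shape))
```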