With the increasing demand for visual information in immersive applications such as Google Street View or 3D movies, the efficient analysis of visual data from cameras has gained importance. This visual data permits the extraction of crucial scene information such as similarity cues for recognition, 3D structure, and the textures and patterns of objects. Multi-camera systems provide detailed visual information about a scene through images captured from different positions and viewing angles. The conventional perspective cameras commonly used in such systems, however, have a limited field of view. They therefore require either the deployment of many cameras or the capture of many images from different viewpoints to gather sufficient detail about a scene, which increases both the amount of data to be processed and the maintenance cost of such systems. Omnidirectional vision systems overcome this problem thanks to their 360-degree field of view and have found wide application in areas such as robotics and surveillance. These systems often rely on special optical elements such as fisheye lenses or mirror-lens systems. The resulting images, however, inherit the specific geometry of these elements, so that analyzing them with methods designed for perspective cameras leads to degraded performance.

In this thesis, we focus on the analysis of multi-view omnidirectional images for efficient scene information extraction. We propose a novel spherical framework for omnidirectional image processing that exploits the property that most omnidirectional images can be uniquely mapped onto the sphere. Within this framework, we propose solutions to three common multi-view image processing problems, namely feature detection, dense depth estimation and super-resolution for omnidirectional images.

We first address the feature extraction problem in omnidirectional images. We develop a scale-invariant feature detection method that carefully handles the geometry of the images by performing the scale-space analysis directly on their native manifolds, such as the parabola or the sphere. We then propose a new descriptor and a matching criterion that take the geometry into account and also eliminate the need for orientation computation. We further demonstrate that the proposed method can match features across images captured by different types of sensors, such as perspective, omnidirectional or spherical cameras.

We then propose a dense depth estimation method to extract 3D scene information from multiple omnidirectional images. We design a graph-cut method adapted to the geometry of these images to minimize an energy function formulated for the dense depth estimation problem. We also propose a parallel graph-cut method that yields a significant speed improvement without a substantial loss in accuracy. We show that the proposed method can be applied to multi-camera depth estimation and depth-based arbitrary view synthesis.

Finally, we consider the multi-view omnidirectional image super-resolution problem.