Concept

Deep image prior

Summary
Deep image prior is a type of convolutional neural network used to enhance a given image with no prior training data other than the image itself. A neural network is randomly initialized and used as a prior to solve inverse problems such as noise reduction, super-resolution, and inpainting. Image statistics are captured by the structure of a convolutional image generator rather than by any previously learned capabilities.

Inverse problems such as noise reduction, super-resolution, and inpainting can be formulated as the optimization task x* = min_x E(x; x_0) + R(x), where x is an image, x_0 a corrupted representation of that image, E(x; x_0) is a task-dependent data term, and R(x) is the regularizer. This forms an energy minimization problem.

Deep neural networks learn a generator/decoder x = f_θ(z) which maps a random code vector z to an image x. The image corruption method used to generate x_0 is selected for the specific application.

In this approach, the prior R(x) is replaced with the implicit prior captured by the neural network (where R(x) = 0 for images that can be produced by a deep neural network and R(x) = +∞ otherwise). This yields the minimizer θ* = argmin_θ E(f_θ(z); x_0) and the result of the optimization process x* = f_{θ*}(z).

The minimizer (typically a gradient descent) starts from randomly initialized parameters and descends into a local best result to yield the restoration function. A parameter θ may be used to recover any image, including its noise. However, the network is reluctant to pick up noise, because noise offers high impedance while useful signal offers low impedance. As a result, θ approaches a good-looking local optimum as long as the number of iterations in the optimization process remains low enough not to overfit the data.

Typically, the deep neural network model for deep image prior uses a U-Net-like model without the skip connections that connect the encoder blocks with the decoder blocks. The authors mention in their paper that "Our findings here (and in other similar comparisons) seem to suggest that having deeper architecture is beneficial, and that having skip-connections that work so well for recognition tasks (such as semantic segmentation) is highly detrimental."
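The optimization described above can be sketched in a few lines of PyTorch: a fixed random code z is fed to a randomly initialized convolutional generator, and only the generator's parameters θ are fitted to the corrupted image x_0, stopping early before the noise is reproduced. This is a minimal illustration, not the authors' reference implementation; the small convolutional stack, the function names make_generator and dip_denoise, and the hyperparameters (iteration count, learning rate, code depth) are illustrative assumptions rather than values from the paper.

```python
import torch
import torch.nn as nn

def make_generator(channels=3, width=64):
    # Toy convolutional generator f_theta; DIP normally uses a much deeper
    # U-Net-like encoder-decoder (illustrative stand-in only).
    return nn.Sequential(
        nn.Conv2d(32, width, 3, padding=1), nn.ReLU(),
        nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
        nn.Conv2d(width, channels, 3, padding=1), nn.Sigmoid(),
    )

def dip_denoise(x0, num_iters=1800, lr=0.01):
    # x0: corrupted image, tensor of shape (1, C, H, W) with values in [0, 1].
    # The data term E(f_theta(z); x0) is plain MSE (denoising); num_iters is
    # the early-stopping point chosen so the network does not overfit the noise.
    _, c, h, w = x0.shape
    z = torch.randn(1, 32, h, w)             # fixed random code vector z
    f = make_generator(channels=c)           # randomly initialized implicit prior
    opt = torch.optim.Adam(f.parameters(), lr=lr)
    for _ in range(num_iters):
        opt.zero_grad()
        loss = ((f(z) - x0) ** 2).mean()     # E(f_theta(z); x0)
        loss.backward()
        opt.step()                           # gradient descent on theta only
    return f(z).detach()                     # x* = f_{theta*}(z)
```

In use, one would call dip_denoise(noisy_image) on a single corrupted image and keep the returned reconstruction; lowering num_iters trades residual blur against the risk of the output re-absorbing the noise, which mirrors the early-stopping behaviour discussed above.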