Publication

Motion Vector Estimation of Textureless Objects Exploiting Reaction-Diffusion Cellular Automata

Abstract

Conventional motion estimation algorithms extract motion vectors from image sequences based on local brightness differences between consecutive images. Motion vectors are therefore extracted along the moving edges formed by moving objects against their background. In the case of "textureless" moving objects, however, motion vectors inside the objects cannot be detected because no brightness (texture) differences exist within the object. This can cause severe issues in motion-related imaging applications, because motion vectors over the vast inner regions of textureless objects cannot be detected even though these regions move together with the object's edges. To solve this problem, we propose an unconventional image-processing algorithm that generates spatial textures from an object's edge information, allowing the motion of these textures to be detected. The model is represented by a 2-D crossbar array of a 1-D reaction-diffusion (RD) model, in which 1-D spatial patterns are created inside objects and aggregated to form textures. Computer simulations confirm the approach, showing the formation of textures over approaching objects, which may open applications in machine vision and automated decision systems.
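As a rough illustration of the idea only (not the authors' actual RD cellular-automaton rule), the sketch below seeds a standard 1-D Gray-Scott reaction-diffusion system at the edge pixels of a textureless object and runs it independently on each image row, stacking the resulting 1-D patterns into a 2-D artificial texture. The reaction rule, parameters (Du, Dv, F, k), seeding width, and boundary handling are assumptions chosen for demonstration.

```python
import numpy as np

def rd_texture_row(mask_row, steps=2000, Du=0.16, Dv=0.08, F=0.035, k=0.060):
    """Grow a 1-D reaction-diffusion pattern inside one row of a textureless object.

    `mask_row` is a boolean array marking the object's interior. The system is
    perturbed near the object's edges so that a spatial pattern can grow inward.
    Illustrative Gray-Scott dynamics; the paper's model and parameters may differ.
    """
    n = mask_row.size
    u = np.ones(n)   # "substrate" concentration, trivial steady state u=1
    v = np.zeros(n)  # "activator" concentration, trivial steady state v=0

    # Seed a small window around each edge pixel of the object.
    edges = np.flatnonzero(np.diff(mask_row.astype(int)) != 0)
    for e in edges:
        lo, hi = max(e - 2, 0), min(e + 3, n)
        u[lo:hi] = 0.5
        v[lo:hi] = 0.25

    for _ in range(steps):
        # 1-D Laplacian with periodic boundaries (adequate for this sketch).
        lap_u = np.roll(u, 1) + np.roll(u, -1) - 2 * u
        lap_v = np.roll(v, 1) + np.roll(v, -1) - 2 * v
        uvv = u * v * v
        u += Du * lap_u - uvv + F * (1 - u)
        v += Dv * lap_v + uvv - (F + k) * v
        # Confine the pattern to the object's interior; background stays flat.
        v *= mask_row

    return v  # 1-D spatial pattern serving as an artificial texture


# Build a 2-D "crossbar" texture by running the 1-D model on every image row.
mask = np.zeros((64, 128), dtype=bool)
mask[16:48, 32:96] = True            # a textureless rectangular object
texture = np.stack([rd_texture_row(row) for row in mask])
print(texture.shape, texture.max())
```

Once such a texture exists inside the object, a conventional block-matching or optical-flow routine applied to the textured frames can, in principle, recover motion vectors in the object's interior rather than only along its edges.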

