Shape representations are critical for the visual analysis of cultural heritage materials. This article studies two types of shape representations in a bag-of-words-based pipeline for recognizing Maya glyphs. The first is a knowledge-driven Histogram of Orientation Shape Context (HOOSC) representation; the second is a data-driven representation obtained by applying an unsupervised Sparse Autoencoder (SA). In addition to the glyph data, the generalization ability of the descriptors is investigated on a larger-scale sketch dataset. The contributions of this article are four-fold: (1) an evaluation of the performance of a data-driven autoencoder approach for shape representation; (2) a comparative study of the hand-designed HOOSC and the data-driven SA; (3) an experimental protocol to assess the effect of the different parameters of both representations; and (4) a bridge between the humanities and computer vision/machine learning for Maya studies, specifically for the visual analysis of glyphs. Our experiments show that the data-driven representation performs overall on par with the hand-designed representation for similar sizes of the locality on which the descriptor is computed. We also observe that a larger number of hidden units, the use of average pooling, and a larger training data size all improve the performance of the SA descriptor. Additionally, the characteristics of the data and the stroke size play an important role in the learned representation.
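To make the data-driven branch concrete, below is a minimal sketch of an unsupervised sparse autoencoder learning patch-level descriptors for a bag-of-words pipeline. All names and hyperparameters (patch size, number of hidden units, the KL-divergence sparsity penalty, the Adam optimizer) are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """One-hidden-layer autoencoder; the hidden activations act as the descriptor."""
    def __init__(self, patch_dim=256, hidden_units=400):
        super().__init__()
        self.encoder = nn.Linear(patch_dim, hidden_units)
        self.decoder = nn.Linear(hidden_units, patch_dim)

    def forward(self, x):
        h = torch.sigmoid(self.encoder(x))   # hidden code in (0, 1)
        return self.decoder(h), h

def kl_sparsity(h, rho=0.05, eps=1e-8):
    # KL divergence between a target mean activation rho and the observed
    # mean activation rho_hat, summed over hidden units (classic SA penalty).
    rho_hat = h.mean(dim=0).clamp(eps, 1 - eps)
    return (rho * torch.log(rho / rho_hat)
            + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()

def train(patches, hidden_units=400, beta=3.0, epochs=50):
    # patches: (N, patch_dim) tensor of flattened local shape patches.
    model = SparseAutoencoder(patches.shape[1], hidden_units)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    mse = nn.MSELoss()
    for _ in range(epochs):
        recon, h = model(patches)
        loss = mse(recon, patches) + beta * kl_sparsity(h)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

# Usage sketch: encode 16x16 patches, then average-pool the hidden codes
# over a local neighbourhood to obtain one descriptor per point of interest.
# model = train(torch.rand(10000, 16 * 16))
# _, codes = model(torch.rand(8, 16 * 16))
# descriptor = codes.mean(dim=0)   # average pooling, as favoured in the study
```

The pooled hidden codes would then be quantized into a visual vocabulary, exactly as the HOOSC descriptors are in the knowledge-driven branch, which is what makes the two representations directly comparable.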