Character-level Neural Machine Translation (NMT) models have recently achieved impressive results on many language pairs. They mainly do well for Indo-European language pairs, where the languages share the same writing system. However, for translating between Chinese and English, the gap between the two different writing systems poses a major challenge because of a lack of systematic correspondence between the individual linguistic units. In this paper, we enable character-level NMT for Chinese by breaking down Chinese characters into linguistic units similar to those of Indo-European languages. We use the Wubi encoding scheme, which preserves the original shape and semantic information of the characters, while also being reversible. We show promising results from training Wubi-based models on the character- and subword-level with recurrent as well as convolutional models.
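The reversibility mentioned above can be illustrated with a minimal sketch: each Chinese character maps to a short Latin keystroke sequence, and a separator between codes makes the mapping invertible. The table below uses hypothetical placeholder codes, not the real Wubi table; only the round-trip property is the point.

```python
# Sketch of a reversible Wubi-style encoding. The character-to-code table is a
# hypothetical placeholder, not the actual Wubi keystroke table.
CHAR_TO_CODE = {
    "中": "khk",   # hypothetical code
    "国": "lgyi",  # hypothetical code
}
CODE_TO_CHAR = {v: k for k, v in CHAR_TO_CODE.items()}

SEP = "_"  # separator between per-character codes keeps decoding unambiguous

def encode(text):
    """Map each known Chinese character to its Latin code; pass others through."""
    return SEP.join(CHAR_TO_CODE.get(ch, ch) for ch in text)

def decode(encoded):
    """Invert the encoding by looking each code up in the reversed table."""
    return "".join(CODE_TO_CHAR.get(tok, tok) for tok in encoded.split(SEP))
```

Because the per-character codes are unique and separated, `decode(encode(text))` recovers the original text, so no information is lost before training the NMT model on the Latin-letter sequences.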