An analysis on the use of autoencoders for representation learning: Fundamentals, learning task case studies, explainability and challenges

Title: An analysis on the use of autoencoders for representation learning: Fundamentals, learning task case studies, explainability and challenges
Publication Type: Journal Article
Year of Publication: 2020
Authors: Charte, David; Charte, Francisco; del Jesus, M. J.; Herrera, F.
Journal: Neurocomputing
Volume: 404
Pagination: 93-107
Keywords: Autoencoders; Deep learning; Feature extraction; Representation learning
Abstract

In many machine learning tasks, learning a good representation of the data can be the key to building a high-performing solution. This is because most learning algorithms operate on the features in order to find models for the data. For instance, classification performance can improve if the data is mapped to a space where classes are easily separated, and regression can be facilitated by finding a manifold of the data in the feature space. As a general rule, features are transformed by means of statistical methods such as principal component analysis, or manifold learning techniques such as Isomap or locally linear embedding. Among the plethora of representation learning methods, the autoencoder is one of the most versatile tools. In this paper we aim to demonstrate how to influence its learned representations to achieve the desired learning behavior. To this end, we present a series of learning tasks: data embedding for visualization, image denoising, semantic hashing, detection of abnormal behaviors and instance generation. We model them from the representation learning perspective, following the state-of-the-art methodologies in each field. A solution is proposed for each task employing autoencoders as the only learning method. The theoretical developments are put into practice using a selection of datasets for the different problems and implementing each solution, followed by a discussion of the results in each case study and a brief explanation of six other learning applications. We also explore the current challenges and approaches to explainability in the context of autoencoders. All of this helps conclude that, thanks to alterations in their structure as well as their objective function, autoencoders may be the core of a possible solution to many problems which can be modeled as a transformation of the feature space.
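To make the abstract's central point concrete, below is a minimal illustrative sketch (not code from the paper) of how changing an autoencoder's structure and objective function steers the representation it learns. It uses TensorFlow/Keras; the layer sizes, noise level, and placeholder data are assumptions chosen only for demonstration, here configured as a denoising autoencoder, one of the case-study tasks listed above.

```python
# Illustrative denoising autoencoder sketch; hyperparameters are arbitrary assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

input_dim = 784   # e.g. flattened 28x28 images (assumed for this sketch)
code_dim = 32     # size of the learned representation

# Encoder: maps a (possibly corrupted) input to a low-dimensional code.
encoder = models.Sequential([
    layers.Input(shape=(input_dim,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(code_dim, activation="relu"),
])

# Decoder: reconstructs the clean input from the code.
decoder = models.Sequential([
    layers.Input(shape=(code_dim,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(input_dim, activation="sigmoid"),
])

autoencoder = models.Sequential([encoder, decoder])

# The objective function determines the learned behavior: training to
# reconstruct the clean signal from noisy inputs yields denoising.
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

# Placeholder training pairs (noisy input, clean target); real data would
# replace these random arrays.
x_clean = np.random.rand(1000, input_dim).astype("float32")
x_noisy = np.clip(x_clean + 0.2 * np.random.randn(*x_clean.shape), 0.0, 1.0).astype("float32")
autoencoder.fit(x_noisy, x_clean, epochs=5, batch_size=64, verbose=0)

# After training, encoder.predict(x) yields the learned feature-space
# transformation, reusable for visualization, hashing, anomaly scores, etc.
```

Swapping the target (e.g. clean inputs for noisy ones), the bottleneck size, or the loss term is what repurposes the same architecture for the other tasks the abstract enumerates.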

Notes

TIN2015-68854-R; TIN2017-89517-P; DeepSCOP Ayudas Fundación BBVA a Equipos de Investigación Científica en Big Data 2018

DOI: 10.1016/j.neucom.2020.04.057