Publication

A Machine Learning-based Framework for Forecasting Sales of New Products With Short Life Cycles Using Deep Neural Networks

Abstract

Demand forecasting is becoming increasingly important as firms launch new products with short life cycles more frequently. This paper provides a framework based on state-of-the-art techniques that enables firms to use quantitative methods to forecast sales of newly launched, short-lived products that are similar to previous products when there is limited availability of historical sales data for the new product. In addition to exploiting historical data using time-series clustering, we perform data augmentation to generate sufficient sales data and consider two quantitative cluster assignment methods. We apply one traditional statistical (ARIMAX) and three machine learning methods based on deep neural networks (DNNs) – long short-term memory, gated recurrent units, and convolutional neural networks. Using two large data sets, we investigate the forecasting methods’ comparative performance and, for the larger data set, show that clustering generally results in substantially lower forecast errors. Our key empirical finding is that simple ARIMAX considerably outperforms the more advanced DNNs, with mean absolute errors up to 21%–24% lower. However, when adding Gaussian white noise in our robustness analysis, we find that ARIMAX’s performance deteriorates dramatically, whereas the considered DNNs display robust performance. Our results provide insights for practitioners on when to use advanced deep learning methods and when to use traditional methods.
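To make the framework's pipeline more concrete, the following minimal sketch (hypothetical, not the authors' code) shows one way to combine time-series clustering, cluster assignment, and an ARIMAX-style forecast in Python. The synthetic data, the 8-week assignment window, the pooling step, and the SARIMAX order (1, 1, 1) are all illustrative assumptions, and the Gaussian-noise refit only mimics the spirit of the robustness analysis.

```python
import numpy as np
from sklearn.cluster import KMeans
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)

# Synthetic stand-in data: 50 past products with 26 weeks of sales each,
# plus one exogenous regressor per week (e.g. a promotion indicator).
past_sales = rng.poisson(lam=100, size=(50, 26)).astype(float)
past_exog = rng.integers(0, 2, size=(50, 26)).astype(float)

# New product: only 8 weeks of observed sales and exogenous data.
new_sales = rng.poisson(lam=90, size=8).astype(float)
new_exog = rng.integers(0, 2, size=8).astype(float)

# 1) Cluster past products on their first 8 weeks, scaled so that the
#    clustering captures the shape of the early life cycle rather than volume.
window = len(new_sales)
scaled = past_sales[:, :window] / past_sales[:, :window].sum(axis=1, keepdims=True)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scaled)

# 2) Assign the new product to a cluster from its observed early weeks.
new_scaled = (new_sales / new_sales.sum()).reshape(1, -1)
cluster_id = int(kmeans.predict(new_scaled)[0])

# 3) Build a training series by pooling the cluster's average history with the
#    new product's early sales (a crude stand-in for the paper's augmentation).
members = kmeans.labels_ == cluster_id
train_y = np.concatenate([past_sales[members].mean(axis=0)[:18], new_sales])
train_x = np.concatenate([past_exog[members].mean(axis=0)[:18], new_exog])

# 4) Fit an ARIMAX-style model (SARIMAX with one exogenous regressor) and
#    forecast the next 6 weeks under an assumed promotion plan.
model = SARIMAX(train_y, exog=train_x, order=(1, 1, 1)).fit(disp=False)
future_exog = np.ones((6, 1))
forecast = model.forecast(steps=6, exog=future_exog)

# 5) Robustness check: refit on the same series perturbed with Gaussian noise.
noisy_y = train_y + rng.normal(0.0, 0.1 * train_y.std(), size=train_y.shape)
noisy_model = SARIMAX(noisy_y, exog=train_x, order=(1, 1, 1)).fit(disp=False)
print(forecast)
print(noisy_model.forecast(steps=6, exog=future_exog))
```

Scaling each early life-cycle curve by its own total before clustering is one simple way to group products by shape rather than volume; the paper's two quantitative cluster-assignment methods and its data-augmentation procedure are not reproduced here.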

Related concepts (32)
Deep learning
Deep learning (also called deep structured learning or hierarchical learning) is a subfield of artificial intelligence that uses neural networks to solve complex tasks through architectures built from multiple non-linear transformations. These techniques have enabled rapid and substantial progress in the analysis of audio and visual signals, notably in facial recognition, speech recognition, computer vision, and natural language processing.
Types of artificial neural networks
There are many types of artificial neural networks (ANN). Artificial neural networks are computational models inspired by biological neural networks, and are used to approximate functions that are generally unknown. Particularly, they are inspired by the behaviour of neurons and the electrical signals they convey between input (such as from the eyes or nerve endings in the hand), processing, and output from the brain (such as reacting to light, touch, or heat). The way neurons semantically communicate is an area of ongoing research.
Convolutional neural network
In machine learning, a convolutional neural network (CNN or ConvNet) is a type of feed-forward (acyclic) artificial neural network in which the connectivity pattern between neurons is inspired by the visual cortex of animals. The neurons in this region of the brain are arranged so that they correspond to overlapping regions that tile the visual field.
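Because the paper applies convolutional networks to one-dimensional sales series rather than images, a minimal, hypothetical 1-D CNN regressor in Keras may help make the idea concrete; the layer sizes, window length, and synthetic data below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
import tensorflow as tf

# Toy data: 200 sales series of 26 weeks each; the target is a noisy
# average of the last few weeks, standing in for next-period demand.
rng = np.random.default_rng(0)
X = rng.poisson(lam=100, size=(200, 26, 1)).astype("float32")
y = X[:, -4:, 0].mean(axis=1) + rng.normal(0, 5, size=200).astype("float32")

# A small 1-D CNN: convolutional filters slide over the time axis,
# mirroring the overlapping receptive fields described above.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(26, 1)),
    tf.keras.layers.Conv1D(filters=16, kernel_size=3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mae")
model.fit(X, y, epochs=5, verbose=0)
print(model.predict(X[:1], verbose=0))
```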
Related publications (121)

Coupling a recurrent neural network to SPAD TCSPC systems for real-time fluorescence lifetime imaging

Edoardo Charbon, Claudio Bruschini, Andrei Ardelean, Paul Mos, Yang Lin

Fluorescence lifetime imaging (FLI) has been receiving increased attention in recent years as a powerful diagnostic technique in biological and medical research. However, existing FLI systems often suffer from a tradeoff between processing speed, accuracy, ...
Berlin, 2024

Robust machine learning for neuroscientific inference

Steffen Schneider

Modern neuroscience research is generating increasingly large datasets, from recording thousands of neurons over long timescales to behavioral recordings of animals spanning weeks, months, or even years. Despite a great variety in recording setups and expe ...
EPFL, 2024

Machine Learning for Modeling Stock Returns

Teng Andrea Xu

Throughout history, the pace of knowledge and information sharing has accelerated to once-unthinkable speeds across new media. At the end of the 17th century, in Europe, the ideas that would shape the "Age of Enlightenment" were slowly being developed in coffeehouses, ...
EPFL, 2024
Related MOOCs (31)
Neuronal Dynamics - Computational Neuroscience of Single Neurons
The activity of neurons in the brain and the code used by these neurons are described by mathematical neuron models at different levels of detail.
