Ask any question about EPFL courses, lectures, exercises, research, news, etc. or try the example questions below.
DISCLAIMER: The Graph Chatbot is not programmed to provide explicit or categorical answers to your questions. Rather, it transforms your questions into API requests that are distributed across the various IT services officially administered by EPFL. Its sole purpose is to collect and recommend references to relevant content that you can explore to help answer your questions.
In a society which produces and consumes an ever increasing amount of information, methods which can make sense out of all this data become of crucial importance. Machine learning tries to develop models which can make the information load accessible. Thre ...
Due to daylight variability, a design cannot be thoroughly assessed using single-moment simulations, which is why we need dynamic performance metrics like Daylight Autonomy and Useful Daylight Illuminance. Going one step further, the annual variation in pe ...
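Both metrics named above are defined over annual, hourly illuminance data rather than a single moment: Daylight Autonomy is commonly taken as the fraction of occupied hours in which a sensor point meets a target illuminance (often 300 lux), and Useful Daylight Illuminance as the fraction of hours falling inside a "useful" band (often 100–3000 lux). The minimal Python sketch below computes both under those assumed thresholds; the thresholds and the sample readings are illustrative, not values taken from the work above.

# Minimal sketch: Daylight Autonomy (DA) and Useful Daylight Illuminance (UDI)
# from hourly illuminance readings at one sensor point. The 300 lux DA target
# and the 100-3000 lux UDI band are common choices assumed here for illustration.

from typing import Sequence

def daylight_autonomy(illuminance_lux: Sequence[float], target_lux: float = 300.0) -> float:
    """Fraction of occupied hours whose illuminance meets the target."""
    hours = list(illuminance_lux)
    if not hours:
        return 0.0
    met = sum(1 for lux in hours if lux >= target_lux)
    return met / len(hours)

def useful_daylight_illuminance(illuminance_lux: Sequence[float],
                                low_lux: float = 100.0,
                                high_lux: float = 3000.0) -> float:
    """Fraction of occupied hours falling inside the 'useful' illuminance band."""
    hours = list(illuminance_lux)
    if not hours:
        return 0.0
    useful = sum(1 for lux in hours if low_lux <= lux <= high_lux)
    return useful / len(hours)

if __name__ == "__main__":
    # Hypothetical readings for a handful of occupied hours.
    readings = [50, 120, 450, 800, 2500, 3500, 90, 310]
    print(f"DA  = {daylight_autonomy(readings):.2f}")
    print(f"UDI = {useful_daylight_illuminance(readings):.2f}")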
As data collections become larger and larger, data loading evolves into a major bottleneck. Many applications, e.g., scientific data analysis and social networks, already avoid using database systems due to the complexity and the increased data-to-query time ...
A growing amount of data is produced daily, resulting in a growing demand for storage solutions. While cloud storage providers offer a virtually infinite storage capacity, data owners seek geographical and provider diversity in data placement, in order to a ...
In recent years, ontology for the Product Lifecycle Management domain has attracted a lot of interest in research communities, both academic and industrial. It has emerged as a convenient method for supporting the concept of a closed lifecycle information loop, ...
Co-clustering has not been much exploited in biomedical informatics, despite its success in other domains. Most of the previous applications were limited to analyzing gene expression data. We performed co-clustering analysis on other types of data and ob ...
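As a rough illustration of the technique, the sketch below runs spectral co-clustering on a synthetic matrix with scikit-learn's SpectralCoclustering. This is only one of several co-clustering algorithms and is not necessarily the method or the data used in the work above.

# Minimal sketch of co-clustering (biclustering) with scikit-learn on
# synthetic data: rows and columns are assigned to clusters simultaneously.

import numpy as np
from sklearn.cluster import SpectralCoclustering
from sklearn.datasets import make_biclusters

# Synthetic matrix with 4 planted row/column blocks (e.g. samples x features).
data, rows, columns = make_biclusters(
    shape=(200, 100), n_clusters=4, noise=5, shuffle=True, random_state=0
)

model = SpectralCoclustering(n_clusters=4, random_state=0)
model.fit(data)

# Each row and each column is assigned to exactly one co-cluster.
print("row cluster sizes:   ", np.bincount(model.row_labels_))
print("column cluster sizes:", np.bincount(model.column_labels_))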
Institute of Electrical and Electronics Engineers, 2007
As data collections become larger and larger, users are faced with increasing bottlenecks in their data analysis. More data means more time to prepare the data, to load the data into the database and to execute the desired queries. Many applications alread ...
The ALICE experiment at CERN LHC is using a PROOF-enabled cluster for fast physics analysis, detector calibration and reconstruction of small data samples. The current system (CAF - CERN Analysis Facility) consists of some 120 CPU cores and about 45 TB of ...
Exploratory data analysis, or EDA for short, is a term coined by John W. Tukey for describing the act of looking at data to see what it seems to say. This article gives a description of some typical EDA procedures and discusses some of the principles of ED ...
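As a concrete illustration of what "looking at the data" can mean in practice, the sketch below runs a few generic EDA steps (data shape, summary statistics, group comparisons) with pandas on a synthetic table; these steps are illustrative and are not the specific procedures described in the article.

# Minimal EDA sketch: inspect size, summary statistics and simple group
# differences before any modelling. The dataset is synthetic.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "height_cm": rng.normal(170, 10, size=500),
    "weight_kg": rng.normal(70, 12, size=500),
    "group": rng.choice(["a", "b", "c"], size=500),
})

print(df.shape)                       # how much data do we have?
print(df.describe())                  # per-column summary statistics
print(df["group"].value_counts())     # balance of the categorical variable
print(df.groupby("group").median(numeric_only=True))  # compare groups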
Geographic Information Science methods and tools are likely to help to extract useful and so far unknown information from large spatially explicit genetic datasets to understand the distribution of diversity among and within sheep and goat breeds. Consider ...