Summary
Denormalization is a strategy applied to a previously normalized database to increase performance. In computing, denormalization is the process of improving the read performance of a database, at the expense of some write performance, by adding redundant copies of data or by grouping data. It is often motivated by performance or scalability in relational database software that must carry out very large numbers of read operations. Denormalization differs from the unnormalized form in that its benefits can only be fully realized on a data model that is otherwise normalized.

A normalized design will often store different but related pieces of information in separate logical tables (called relations). If these relations are stored physically as separate disk files, completing a database query that draws information from several relations (a join operation) can be slow. If many relations are joined, it may be prohibitively slow. There are two strategies for dealing with this.

The first is to keep the logical design normalized but allow the database management system (DBMS) to store additional redundant information on disk to optimize query response. In this case the DBMS software is responsible for keeping the redundant copies consistent. This method is often implemented in SQL as indexed views (Microsoft SQL Server) or materialized views (Oracle, PostgreSQL). A view may, among other things, represent information in a format convenient for querying, and the index ensures that queries against the view are physically optimized.

The second is to denormalize the logical data design. With care this can achieve a similar improvement in query response, but at a cost: it is now the database designer's responsibility to ensure that the denormalized database does not become inconsistent. This is done by creating rules in the database, called constraints, that specify how the redundant copies of information must be kept synchronized; the overhead of maintaining them can easily cancel the gains the denormalization was meant to provide.
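A minimal sketch of the first strategy in PostgreSQL, assuming hypothetical customers and orders tables (all names here are illustrative, not taken from any particular system):

    -- Hypothetical normalized schema.
    CREATE TABLE customers (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL
    );

    CREATE TABLE orders (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customers (customer_id),
        amount      NUMERIC(10, 2) NOT NULL
    );

    -- The logical design stays normalized; the DBMS stores a redundant,
    -- precomputed join that queries can read without joining at run time.
    CREATE MATERIALIZED VIEW customer_order_totals AS
    SELECT c.customer_id,
           c.name,
           SUM(o.amount) AS total_amount
    FROM customers c
    JOIN orders o ON o.customer_id = c.customer_id
    GROUP BY c.customer_id, c.name;

    -- An index makes lookups against the stored view fast.
    CREATE INDEX ON customer_order_totals (customer_id);

    -- In PostgreSQL this copy is refreshed explicitly rather than kept
    -- continuously consistent.
    REFRESH MATERIALIZED VIEW customer_order_totals;

Unlike an indexed view in Microsoft SQL Server, a PostgreSQL materialized view is not maintained automatically on every write; the explicit REFRESH is the price of the precomputed join.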
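And a sketch of the second strategy, reusing the same hypothetical tables: the customer's name is copied into orders so reads need no join, and a trigger (PostgreSQL syntax) stands in for the synchronization rule the designer must now maintain:

    -- Denormalized variant: the customer's name is duplicated into each
    -- order row so reads need no join. Consistency is now the designer's job.
    ALTER TABLE orders ADD COLUMN customer_name TEXT;

    -- One way to keep the redundant copy synchronized: propagate renames
    -- from customers to orders.
    CREATE FUNCTION sync_customer_name() RETURNS trigger AS $$
    BEGIN
        UPDATE orders
        SET customer_name = NEW.name
        WHERE customer_id = NEW.customer_id;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER customers_name_sync
    AFTER UPDATE OF name ON customers
    FOR EACH ROW
    EXECUTE FUNCTION sync_customer_name();

The trigger turns every customer rename into an extra multi-row write against orders, which is exactly the read-for-write trade, and the potential for the synchronization cost to cancel the gains, that the summary describes.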
Related concepts (5)
Third normal form
Third normal form (3NF) is a database schema design approach for relational databases which uses normalizing principles to reduce the duplication of data, avoid data anomalies, ensure referential integrity, and simplify data management. It was defined in 1971 by Edgar F. Codd, an English computer scientist who invented the relational model for database management. A database relation (e.g. a database table) is said to meet third normal form standards if all the attributes (e.g. database columns) are functionally dependent on solely the primary key.
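A small schema sketch of what 3NF rules out, using hypothetical employee and department tables (all names illustrative):

    -- Violates 3NF: dept_name depends on dept_id, which is not a key, so a
    -- non-key attribute is transitively dependent on the key emp_id.
    CREATE TABLE employees_flat (
        emp_id    INTEGER PRIMARY KEY,
        emp_name  TEXT NOT NULL,
        dept_id   INTEGER NOT NULL,
        dept_name TEXT NOT NULL
    );

    -- 3NF decomposition: every non-key attribute now depends directly on
    -- the key of its own table, and the department name is stored once.
    CREATE TABLE departments (
        dept_id   INTEGER PRIMARY KEY,
        dept_name TEXT NOT NULL
    );

    CREATE TABLE employees (
        emp_id   INTEGER PRIMARY KEY,
        emp_name TEXT NOT NULL,
        dept_id  INTEGER NOT NULL REFERENCES departments (dept_id)
    );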
Normal form (relational databases)
In a relational database, a normal form designates a particular kind of relationship between entities. Normalization consists of restructuring a database to satisfy certain normal forms, in order to avoid data redundancy (the same data appearing several times) and to ensure data integrity. The essential goal of normalization is to avoid the transactional anomalies that can result from poor data modeling, and thereby to avoid a number of potential problems such as read anomalies, write anomalies, data redundancy, and poor performance.
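To make those anomalies concrete, a hedged example reusing the hypothetical employees_flat table from the 3NF sketch above: redundant copies of a department name can diverge under update, while the normalized design cannot.

    -- Update anomaly: in the unnormalized table a department rename must
    -- touch every row that repeats the name; if any code path misses the
    -- predicate, two spellings of the same department coexist.
    UPDATE employees_flat
    SET dept_name = 'Data Engineering'
    WHERE dept_id = 7;  -- dept 7 is an illustrative value

    -- In the normalized design the name is stored exactly once, so the
    -- rename is a single-row update and cannot diverge.
    UPDATE departments
    SET dept_name = 'Data Engineering'
    WHERE dept_id = 7;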
Related courses (1)
PHYS-512: Statistical physics of computation
This course covers the statistical physics approach to computer science problems ranging from graph theory and constraint satisfaction to inference and machine learning. In particular, the replica and cavity methods are covered.
Related lectures (10)
Belief Propagation: Key Methods and Analysis
Covers belief propagation, a key method for analysis and algorithms.
OLAP Data Model: Multidimensional and Relational Approaches
Explains OLAP data models, with emphasis on multidimensional and relational approaches.
Information Retrieval Basics: Document Length and Normalization
Explores document length, normalization, bias compensation, and the evaluation of information retrieval models.