Summary
A multiplication algorithm is an algorithm (or method) to multiply two numbers. Depending on the size of the numbers, different algorithms are more efficient than others. Efficient multiplication algorithms have existed since the advent of the decimal system.

If a positional numeral system is used, a natural way of multiplying numbers is taught in schools as long multiplication, sometimes called grade-school multiplication or the Standard Algorithm: multiply the multiplicand by each digit of the multiplier and then add up all the properly shifted results. It requires memorization of the multiplication table for single digits. This is the usual algorithm for multiplying larger numbers by hand in base 10. A person doing long multiplication on paper will write down all the products and then add them together; an abacus user will sum the products as soon as each one is computed.

This example uses long multiplication to multiply 23,958,233 (multiplicand) by 5,830 (multiplier) and arrives at 139,676,498,390 for the result (product).

      23958233
×         5830
———————————————
      00000000 ( = 23,958,233 ×     0)
     71874699  ( = 23,958,233 ×    30)
   191665864   ( = 23,958,233 ×   800)
  119791165    ( = 23,958,233 × 5,000)
———————————————
  139676498390 ( = 139,676,498,390)

In some countries such as Germany, the above multiplication is depicted similarly but with the original product kept horizontal and computation starting with the first digit of the multiplier:

23958233 · 5830
———————————————
   119791165
    191665864
      71874699
       00000000
———————————————
   139676498390

The pseudocode below describes the process of the above multiplication. It keeps only one row to maintain the running sum, which finally becomes the result. Note that the '+=' operator is used to denote addition to an existing value followed by a store (as in languages such as Java and C) for compactness.

multiply(a[1..p], b[1..q], base)   // Operands containing rightmost digits at index 1
  product = [1..p + q]             // Allocate space for result
  ...
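As a concrete companion to the pseudocode above, here is a minimal runnable sketch in Python of the single-row accumulation it describes. The function name multiply_digits, the digit-list convention (least significant digit at index 0), and the worked example are my own illustration, not part of the source.

def multiply_digits(a, b, base=10):
    # a and b are digit lists with the least significant digit at index 0.
    product = [0] * (len(a) + len(b))          # allocate space for the result
    for bi in range(len(b)):                   # for every digit of the multiplier
        carry = 0
        for ai in range(len(a)):               # for every digit of the multiplicand
            total = product[ai + bi] + carry + a[ai] * b[bi]
            carry, product[ai + bi] = divmod(total, base)
        product[bi + len(a)] = carry           # the final carry becomes the next digit
    return product

# Example: 23,958,233 × 5,830 = 139,676,498,390
a = [3, 3, 2, 8, 5, 9, 3, 2]                   # 23,958,233, least significant digit first
b = [0, 3, 8, 5]                               # 5,830
digits = multiply_digits(a, b)
print(int("".join(map(str, reversed(digits)))))   # prints 139676498390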
Related courses (9)
EE-735: Online learning in games
This course provides an overview of recent developments in online learning, game theory, and variational inequalities and their point of intersection, with a focus on algorithmic development.
CS-101: Advanced information, computation, communication I
Discrete mathematics is a discipline with applications to almost all areas of study. It provides a set of indispensable tools to computer science in particular.
MATH-111(e): Linear Algebra
The objective of the course is to introduce the basic notions of linear algebra and its applications.
Related lectures (48)
Datapath Subsystems Design
Covers the design of datapath subsystems, focusing on various types of adders and multiplication algorithms.
Fundamental Properties of Determinant
Explores the fundamental properties of determinants, including left vs. right multiplication and the Sarrus rule for 3x3 matrices.
Solving Poisson Problem: Fourier Transform Approach
Explores using Fourier transform to simplify solving the Poisson problem, transforming convolutions into multiplications for an explicit solution.
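The mechanism alluded to here is the standard convolution theorem applied to the Poisson equation; the following is a routine computation under my own choice of Fourier normalization, \(\hat{u}(\xi) = \int_{\mathbb{R}^n} u(x)\, e^{-i x \cdot \xi}\, dx\), and is not taken from the lecture itself:
\[
\widehat{g * f} = \hat{g}\,\hat{f},
\qquad
-\Delta u = f \;\Longrightarrow\; |\xi|^2\,\hat{u}(\xi) = \hat{f}(\xi)
\;\Longrightarrow\; \hat{u}(\xi) = \frac{\hat{f}(\xi)}{|\xi|^2},
\]
so the convolution with the Green's function in physical space becomes a pointwise multiplication (here a division by \(|\xi|^2\)) in frequency space.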
Related publications (142)
Related concepts (16)
Divide-and-conquer algorithm
In computer science, divide and conquer is an algorithm design paradigm. A divide-and-conquer algorithm recursively breaks down a problem into two or more sub-problems of the same or related type, until these become simple enough to be solved directly. The solutions to the sub-problems are then combined to give a solution to the original problem. The divide-and-conquer technique is the basis of efficient algorithms for many problems, such as sorting (e.g., quicksort, merge sort) and multiplying large numbers.
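As one illustration of the paradigm applied to multiplying large numbers, here is a minimal Python sketch of Karatsuba multiplication, chosen as an example of my own; the concept summary above does not prescribe this particular code:

def karatsuba(x, y):
    # Divide-and-conquer multiplication of non-negative integers.
    if x < 10 or y < 10:                      # base case: single digits multiply directly
        return x * y
    half = max(len(str(x)), len(str(y))) // 2
    high_x, low_x = divmod(x, 10 ** half)     # split each operand into two halves
    high_y, low_y = divmod(y, 10 ** half)
    z0 = karatsuba(low_x, low_y)
    z2 = karatsuba(high_x, high_y)
    z1 = karatsuba(low_x + high_x, low_y + high_y) - z2 - z0   # cross terms via one product
    return z2 * 10 ** (2 * half) + z1 * 10 ** half + z0        # combine the sub-results

print(karatsuba(23958233, 5830))   # 139676498390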
Arbitrary-precision arithmetic
In computer science, arbitrary-precision arithmetic, also called bignum arithmetic, multiple-precision arithmetic, or sometimes infinite-precision arithmetic, indicates that calculations are performed on numbers whose digits of precision are limited only by the available memory of the host system. This contrasts with the faster fixed-precision arithmetic found in most arithmetic logic unit (ALU) hardware, which typically offers between 8 and 64 bits of precision.
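The core idea can be sketched with a toy digit-limb representation. This is a minimal illustration under my own conventions (base-10 digit lists, least significant digit first, function name add_arbitrary), not how any particular bignum library is implemented:

def add_arbitrary(a, b, base=10):
    # a and b are digit lists, least significant digit first; length is bounded only by memory.
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        total = carry + (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
        carry, digit = divmod(total, base)
        result.append(digit)
    if carry:
        result.append(carry)                  # grow the number instead of overflowing
    return result

print(add_arbitrary([9, 9, 9], [1]))          # [0, 0, 0, 1], i.e. 999 + 1 = 1000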
Booth's multiplication algorithm
Booth's multiplication algorithm is a multiplication algorithm that multiplies two signed binary numbers in two's complement notation. The algorithm was invented by Andrew Donald Booth in 1950 while doing research on crystallography at Birkbeck College in Bloomsbury, London. Booth's algorithm is of interest in the study of computer architecture. Booth's algorithm examines adjacent pairs of bits of the N-bit multiplier Y in signed two's complement representation, including an implicit bit below the least significant bit, y₋₁ = 0.
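A minimal algorithm-level sketch in Python of the recoding described above, scanning adjacent bit pairs (yᵢ, yᵢ₋₁) with the implicit y₋₁ = 0. The function name booth_multiply and the use of Python integers in place of hardware registers are my own simplifications, not a hardware description:

def booth_multiply(x, y, n):
    # x: multiplicand, y: n-bit two's-complement multiplier (given as a Python int)
    result = 0
    prev = 0                          # the implicit bit y₋₁ = 0
    for i in range(n):
        bit = (y >> i) & 1            # current bit yᵢ (Python ints sign-extend on shift)
        if (bit, prev) == (0, 1):
            result += x << i          # pair 01: add the shifted multiplicand
        elif (bit, prev) == (1, 0):
            result -= x << i          # pair 10: subtract the shifted multiplicand
        prev = bit                    # pairs 00 and 11 contribute nothing
    return result

print(booth_multiply(7, -3, 4))       # -21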