Perceptrons: an introduction to computational geometry is a book written by Marvin Minsky and Seymour Papert and published in 1969. An edition with handwritten corrections and additions was released in the early 1970s. An expanded edition was published in 1987, containing a chapter dedicated to countering the criticisms made of the book in the 1980s. The main subject of the book is the perceptron, a type of artificial neural network developed in the late 1950s and early 1960s. The book is dedicated to psychologist Frank Rosenblatt, who in 1957 had published the first model of a "Perceptron". Rosenblatt and Minsky had known each other since adolescence, having studied one year apart at the Bronx High School of Science. They became central figures of a debate inside the AI research community and are known to have engaged in loud discussions at conferences, yet remained friendly.

The book is at the center of a long-standing controversy in the study of artificial intelligence. It is claimed that pessimistic predictions made by the authors were responsible for a change in the direction of AI research, concentrating efforts on so-called "symbolic" systems, a line of research that petered out and contributed to the so-called AI winter of the 1980s, when AI's promise was not realized.

The crux of Perceptrons is a number of mathematical proofs which acknowledge some of the perceptrons' strengths while also showing major limitations. The most important of these concerns the computation of certain predicates, such as the XOR function and the connectedness predicate. The problem of connectedness is illustrated on the awkwardly colored cover of the book, intended to show how humans themselves have difficulty computing this predicate. The perceptron is a neural net developed by psychologist Frank Rosenblatt in 1958 and one of the most famous machines of its period. In 1960, Rosenblatt and colleagues were able to show that the perceptron could, in finitely many training cycles, learn any task that its parameters could embody.
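The XOR limitation can be seen directly in code. The following is a minimal sketch, not taken from the book: it applies Rosenblatt's perceptron learning rule to the linearly separable AND function, which the perceptron learns perfectly, and to XOR, which no single-layer perceptron can represent. The function names, learning rate, and epoch count are illustrative assumptions only.

```python
# Minimal sketch of Rosenblatt's perceptron learning rule (illustrative, not from the book).
# It converges on the linearly separable AND function but cannot reach full accuracy on XOR,
# the limitation highlighted by Minsky and Papert.

def train_perceptron(samples, epochs=25, lr=1.0):
    """Train a single-layer perceptron with a bias term; return (weights, bias)."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0
            err = target - pred                 # 0 if correct, +/-1 if wrong
            w[0] += lr * err * x[0]             # Rosenblatt update: w <- w + lr * err * x
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def accuracy(samples, w, b):
    hits = sum(
        (1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0) == t
        for x, t in samples
    )
    return hits / len(samples)

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

for name, data in [("AND", AND), ("XOR", XOR)]:
    w, b = train_perceptron(data)
    print(name, "accuracy:", accuracy(data, w, b))  # AND reaches 1.0; XOR stays below 1.0
```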

Related courses (4)
EE-451: Image analysis and pattern recognition
This course gives an introduction to the main methods of image analysis and pattern recognition.
BIO-322: Introduction to machine learning for bioengineers
Students understand basic concepts and methods of machine learning. They can describe them in mathematical terms and can apply them to data using a high-level programming language (julia/python/R).
CS-456: Deep reinforcement learning
This course provides an overview and introduces modern methods for reinforcement learning (RL). The course starts with the fundamentals of RL, such as Q-learning, and delves into commonly used approaches ...
Related lectures (37)
Committee Machine: Statistical Physics Approach
Explores hidden variables, graphical models, and computational gaps in neural network learning.
Introduction to Learning by Stochastic Gradient Descent: Simple Perceptron
Covers the derivation of the stochastic gradient descent formula for a simple perceptron and explores the geometric interpretation of classification.
Neural Networks: Logic and Applications
Explores the logic of neuronal function, the Perceptron model, deep learning applications, and levels of abstraction in neural models.
Related publications (69)
A new variable shape parameter strategy for RBF approximation using neural networks
Jan Sickmann Hesthaven
The choice of the shape parameter strongly affects the behaviour of radial basis function (RBF) approximations, as it needs to be selected to balance between the ill-conditioning of the interpolation matrix and high accuracy. In this paper, we demonstrate ho ...
Pergamon-Elsevier Science Ltd, 2023
Privacy-preserving federated neural network training and inference
Sinem Sav
Training accurate and robust machine learning models requires a large amount of data that is usually scattered across data silos. Sharing, transferring, and centralizing the data from silos, however, is difficult due to current privacy regulations (e.g., H ...
EPFL, 2023
Deep Learning Detection of GPS Spoofing
Mirjana Stojilovic, Olivia Jullian Parra
Unmanned aerial vehicles (UAVs) are widely deployed in air navigation, where numerous applications use them for safety-of-life and positioning, navigation, and timing tasks. Consequently, GPS spoofing attacks are more and more frequent. The aim of this wor ...
Springer, Cham, 2022
Related concepts (2)
History of artificial intelligence
The history of artificial intelligence (AI) began in antiquity, with myths, stories and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen. The seeds of modern AI were planted by philosophers who attempted to describe the process of human thinking as the mechanical manipulation of symbols. This work culminated in the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning.
Connectionism
Connectionism (a term coined by Edward Thorndike in the 1930s) is the name of an approach to the study of human mental processes and cognition that uses mathematical models known as connectionist networks or artificial neural networks. Connectionism has come in several 'waves' since its beginnings. The first wave appeared in the 1950s with Warren Sturgis McCulloch and Walter Pitts, who focused on understanding neural circuitry through a formal and mathematical approach, and Frank Rosenblatt, who published the 1958 paper “The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain” in Psychological Review while working at the Cornell Aeronautical Laboratory.
