History of Neural Networks — Part 01

1940s to the 1970s.

MadhushaPrasad
4 min read · Aug 13, 2023


In 1943, neurophysiologist Warren McCulloch and mathematician Walter Pitts co-authored a study proposing a possible mechanism by which neurons function. They constructed a basic neural network with electrical circuits to try to explain how neurons in the brain could work.
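The McCulloch-Pitts model reduces a neuron to a simple threshold unit: it fires if enough of its inputs are active. A minimal sketch (modern Python, not their original formalism; the function and gate names are chosen for illustration):

```python
def mcculloch_pitts_neuron(inputs, threshold):
    """Fire (output 1) if the number of active inputs meets the threshold."""
    return 1 if sum(inputs) >= threshold else 0

# Such units can implement basic logic:
def and_gate(a, b):
    return mcculloch_pitts_neuron([a, b], threshold=2)  # both inputs must fire

def or_gate(a, b):
    return mcculloch_pitts_neuron([a, b], threshold=1)  # any one input suffices
```

McCulloch and Pitts' key insight was that networks of such all-or-nothing units could, in principle, compute any logical function.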

In his 1949 book, The Organization of Behavior, Donald Hebb introduced the idea that repetition strengthens the connections between neurons, a principle fundamental to human learning. He argued that nerves that fire in unison form a stronger bond.
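Hebb's principle is often summarized as "neurons that fire together, wire together." In modern terms it can be sketched as a weight update proportional to the product of the two neurons' activities (a later mathematical reading, not Hebb's own notation; the names and learning rate here are illustrative):

```python
def hebbian_update(weight, pre_activity, post_activity, learning_rate=0.1):
    """Strengthen a connection when pre- and post-synaptic neurons are active together."""
    return weight + learning_rate * pre_activity * post_activity

# Repetition strengthens the bond: five co-activations grow the weight from 0.
w = 0.0
for _ in range(5):
    w = hebbian_update(w, pre_activity=1.0, post_activity=1.0)
```

Note that if either neuron is inactive (activity 0), the connection is left unchanged, which captures the "fire in unison" condition.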

The advent of more powerful computers in the 1950s made it possible to simulate a hypothetical neural network. Nathaniel Rochester, working in the IBM labs, made the first move in this direction, though his initial attempt was unsuccessful. In 1959, Bernard Widrow and Marcian Hoff, two Stanford researchers, developed models they named "ADALINE" and "MADALINE." The names derive from their use of ADAptive LINear Elements (MADALINE standing for Multiple ADAptive LINear Elements), showcasing Stanford's penchant for acronyms.

ADALINE was created to recognize binary patterns so that, while reading streaming bits from a phone line, it could anticipate the next bit. MADALINE, which employed a neural network as an adaptive filter, was the first system of its kind to be used in a practical setting. Despite the system's antiquity, it is still in widespread use in the business sector, much like air traffic control systems.

In 1962, Widrow and Hoff devised a learning technique that adjusts each weight using the formula: Weight Change = (Pre-Weight Line Value) × (Error / Number of Inputs). The theory behind it is that if a single active perceptron has a significant error, the error can be spread out across the network by adjusting the weight values of the other nodes. If the line before a weight carries a zero, this rule leaves that weight's share of the error to be corrected in due time. If the error is conserved by being equally distributed over all the weights, it eventually disappears.
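The formula above can be sketched as code. This is a modern reading of the Widrow-Hoff rule (today usually called the delta or LMS rule); the function names and learning rate are illustrative choices, not from the original paper:

```python
def widrow_hoff_update(weights, inputs, target, learning_rate=0.5):
    """One step of the Widrow-Hoff rule:
    weight change = (pre-weight line value) * (error / number of inputs)."""
    output = sum(w * x for w, x in zip(weights, inputs))
    error = target - output
    # Spread the error across the weights in proportion to each input line.
    # A line carrying zero leaves its weight unchanged this step.
    return [w + learning_rate * x * (error / len(inputs))
            for w, x in zip(weights, inputs)]

# Repeated updates drive the error toward zero on the active line.
weights = [0.0, 0.0]
for _ in range(50):
    weights = widrow_hoff_update(weights, inputs=[1.0, 0.0], target=1.0)
```

After these updates, the weight on the active line converges toward the target while the weight on the zero line stays put, illustrating both behaviors the paragraph describes.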

Although neural networks would go on to great success in the future, traditional von Neumann architecture eventually became the standard in computing, leaving neural research in the dust. In a twist of fate, it was John von Neumann who proposed modeling neurological processes with electronic components like telegraph relays and vacuum tubes.

During this same period, a paper was published arguing that the single-layer perceptron could not be extended to a multi-layered neural network. Furthermore, many experts in the field were employing a learning function that was fundamentally flawed because it was not differentiable across the entire line. As a result, funding for research and development dried up almost completely.

Making the situation worse, neural networks' early successes led to an unrealistic overestimation of their potential, given the technology actually available at the time. Unfulfilled promises and looming anxiety over deeper philosophical issues marred the field. Ideas about how so-called "thinking machines" might change human nature have persisted throughout literature.

A self-programming computer is an intriguing concept: Microsoft Windows 2000, for instance, has thousands of problems that could be fixed if the operating system could rewrite its own code. Such ideas held promise, but their implementation was daunting. Meanwhile, von Neumann's architecture continued to gain favor. There were some breakthroughs, but overall, research efforts were minimal.

In 1972, Kohonen and Anderson independently created comparable networks. Without realizing it, each was building an array of analog ADALINE circuits, using matrix algebra to express his ideas. Their neurons were designed to produce several outputs rather than a single one.

In 1975, an unsupervised multilayer network was created.

I hope this blog has taught you something about the history of neural networks from the 1940s to the 1970s. The story is not finished yet: this blog will continue with the history from the 1980s to the present, which I am still studying and writing about. It will be published soon.

Follow me on GitHub: MadhushaPrasad
