History of Neural Networks — Part 02

MadhushaPrasad
3 min read · Aug 18, 2023

In this article, I will continue from the previous article, History of Neural Networks — Part 01, and talk about the history of neural networks from the 1980s to the present.

In 1982, there was renewed interest in the field. John Hopfield of Caltech presented a paper to the National Academy of Sciences. His approach was to create more useful machines by using bidirectional connections between neurons; previously, neuronal connections had been strictly unidirectional.

In the same year, Reilly and Cooper used a “hybrid network” with multiple layers, each layer applying a different problem-solving strategy.

Also in 1982, a joint US-Japan conference on cooperative/competitive neural networks was held. Japan announced its fifth-generation initiative, raising concerns that the U.S. was lagging behind in the field. (Fifth-generation computing refers to artificial intelligence; previous generations were defined by other technological advances.) As a result, increased funding led to increased research efforts.

By 1986, with the focus on multi-layer neural networks, the challenge was how to extend the Widrow-Hoff rule to multiple layers. Three independent groups of researchers, one of which included David Rumelhart, formerly of Stanford’s psychology department, proposed similar ideas, now known as back-propagation networks. These networks distribute pattern-recognition errors backwards throughout the network. Unlike hybrid networks with only two layers, back-propagation networks involve many layers. As a result, back-propagation networks are characterized as “slow learners”, typically requiring thousands of iterations to learn; a minimal sketch of the idea is shown below.
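To make the idea concrete, here is a minimal sketch of back-propagation in Python using NumPy. It is not any of the original 1986 implementations: the two-layer network, the toy XOR data, the learning rate, and the iteration count are all illustrative assumptions, chosen only to show how the output error is propagated backwards to update every layer.

```python
# A minimal back-propagation sketch (illustrative only): a tiny two-layer
# network learns XOR by propagating the output error backwards through
# its layers. Network size, learning rate, and iteration count are
# arbitrary choices for demonstration.
import numpy as np

rng = np.random.default_rng(0)

# Toy XOR data
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Randomly initialized weights for a 2-4-1 network
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))
lr = 0.5

for step in range(10_000):          # "thousands of iterations"
    # Forward pass
    h = sigmoid(X @ W1)             # hidden-layer activations
    out = sigmoid(h @ W2)           # network output

    # Backward pass: distribute the output error across both layers
    err_out = (out - y) * out * (1 - out)      # output-layer error
    err_h = (err_out @ W2.T) * h * (1 - h)     # hidden-layer error

    # Gradient-descent weight updates
    W2 -= lr * h.T @ err_out
    W1 -= lr * X.T @ err_h

print(out.round(3))  # should approach [0, 1, 1, 0]
```

Even on this toy problem the loop runs for thousands of iterations, which illustrates why early back-propagation networks were considered “slow learners”.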

Today, neural networks find applications in many domains, some of which will be discussed later in this series. The underlying idea is that if something works in nature, it can be adapted to computers. However, the future of neural networks depends on hardware development. Just as advanced chess-playing machines such as Deep Blue rely on specialized hardware, fast, efficient neural networks depend on specialized hardware as well.

Advances in neural network research progress relatively slowly: because of processor limitations, neural networks can take weeks to learn. Some companies are trying to develop a “silicon compiler” to generate custom integrated circuits for neural network applications. Different types of chips are being developed: digital, analog, and optical. Analog signals, although often dismissed, resemble the behavior of neurons in the brain more closely than digital signals do; while digital signals have only binary states, analog signals span a continuous range of values. However, integrating optical chips into commercial applications may still take time.

If this article has helped you learn about the history of neural networks from the 1980s to the present, a round of applause would be greatly appreciated. I have attached some references below to help you learn more about the history of neural networks.

References

  1. A Brief History of Neural Networks
  2. A Brief History of Neural Networks
  3. History of AI and Neural Networks
  4. A Brief History of Neural Nets and Deep Learning

Follow me on GitHub: Madhusha Prasad
