In the rapidly evolving landscape of AI, neuromorphic computing is becoming increasingly relevant amid concerns about the cost and capability of AI systems. Inspired by the human brain’s architecture, it is garnering attention from academia and industry alike. But what is neuromorphic computing, why does it matter, and why are big companies beginning to throw their considerable weight behind it?

The origins of AI trace back to the 1950s, with early research into small-scale neural networks modelled on the neurons of the brain. But it was the development of deep neural networks in the 1980s, and their refinement in the 2000s, that shifted neural networks from a fringe research field to an enormously valuable commercial enterprise. These models simulate aspects of how the human brain works: they are built from artificial neurons arranged in layers that communicate with one another, and they have proven highly effective at discovering patterns in vast data sets. In fields such as computer vision and natural language processing, their sophistication has grown rapidly, opening up a host of applications.

However, as models have grown more complex and the data needed to train them has ballooned, the cost of building and running them has grown too. An analysis of neural networks published in 2022, looking at historical data, observed an exponential increase in the cost of training these models, an increase that carries over to the cost of running them.

[Figure: growth in the cost of training neural network models by year]

Concerns about the financial and environmental sustainability of AI systems, should current patterns continue, have led researchers to look again to the human brain. This is nothing new: artificial neural networks are already simplified representations of the brain’s architecture. This time, however, the focus is not just on the unique computation such networks make possible, but on their efficiency. The entire brain runs on roughly ten watts, less than most light bulbs. This metabolic efficiency is owed to a style of computation in which individual neurons communicate via “spikes”, small electrical signals sent between neurons. Spikes are infrequent, so at any given moment very few neurons are expending energy. In contrast, conventional computers constantly update every part of a neural network, whether or not anything has changed, leading to unnecessary energy expenditure. Neuromorphic computing aims to capture the same efficiency by replicating these biological features directly in hardware, rather than simulating them on conventional computers. If accomplished, this would have huge implications for the AI world, where training a state-of-the-art model already costs hundreds of millions of pounds.
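
To make the spiking picture more concrete, here is a minimal Python sketch of a leaky integrate-and-fire neuron, a standard simplified spiking model of the kind neuromorphic chips typically implement. The parameter values are made up for illustration; the point is that the neuron stays silent until its accumulated input crosses a threshold, so most time steps produce no activity at all.

```python
import numpy as np

def lif_neuron(input_current, threshold=1.0, leak=0.95):
    """A single leaky integrate-and-fire neuron, simulated step by step.

    Illustration only: neuromorphic chips implement dynamics like these
    directly in circuitry rather than in software.
    """
    v = 0.0           # membrane potential
    spike_times = []  # time steps at which the neuron fired
    for t, current in enumerate(input_current):
        v = leak * v + current      # integrate the input, with a slow leak
        if v >= threshold:          # threshold crossed: emit a spike...
            spike_times.append(t)
            v = 0.0                 # ...and reset
    return spike_times

# Weak, noisy input: the neuron is silent at most time steps, and on
# event-driven hardware a silent neuron costs essentially no energy.
rng = np.random.default_rng(0)
print(lif_neuron(rng.uniform(0.0, 0.2, size=100)))
```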

This technology has yet to mature, and it faces a number of technical challenges on both the hardware and software side before it can match state-of-the-art performance. For a start, there is the race to find suitable components: materials must be developed that can mimic the brain’s neurons in hardware, in the way that software simulations have done artificially until now. There is also the question of how these networks should be trained. Most modern deep learning systems learn through a process called ‘gradient descent’: trained on data sets, they repeatedly adjust their internal weights in whichever direction reduces their error. However, the type of neural network implemented on this hardware (a ‘spiking neural network’, or SNN) cannot straightforwardly apply the same algorithm, because a spike is an all-or-nothing event whose output does not change smoothly with the weights.
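
To see why, consider a minimal sketch (in Python, with made-up numbers) of a single gradient-descent update. A conventional neuron’s output varies smoothly with its weight, so the error can be nudged downhill; a spiking neuron’s output is a hard threshold whose derivative is zero almost everywhere, leaving that recipe with nothing to work with.

```python
import numpy as np

# One gradient-descent step for a single smooth (sigmoid) neuron.
w, x, target, lr = 0.5, 2.0, 1.0, 0.1   # weight, input, target output, learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

y = sigmoid(w * x)                       # output changes smoothly with w
grad = (y - target) * y * (1.0 - y) * x  # derivative of the squared error w.r.t. w
w -= lr * grad                           # nudge the weight downhill

# A spiking neuron's output, by contrast, is a step function:
def spike(v, threshold=1.0):
    return 1.0 if v >= threshold else 0.0
# Its derivative is zero everywhere except at the threshold (where it is
# undefined), so the same update rule produces no learning signal.
```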

There are a number of promising approaches to how these efficient networks might learn. One straightforward answer is to let a conventional deep neural network learn how to process a data set using gradient descent, and then copy what it has learnt across to a more efficient SNN. This would not solve the cost of training, but it would make AI systems cheaper in their day-to-day operation. Others are developing algorithms that approximate gradient descent within SNNs themselves. A different school of thought argues that gradient descent neglects the unique advantages of more realistic neural networks, and that such systems should instead learn in ways that mimic the brain. It stands to reason that this should work: the brain, after all, achieves very robust computation with the same toolkit. So far, however, attempts to replicate this using local learning rules have fallen short of industry standards.
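
One widely used family of the “approximate gradient descent” methods mentioned above is known as surrogate gradients: the network keeps the hard, all-or-nothing spike when computing its output, but during training it pretends the spike had a smooth derivative. Below is a minimal NumPy sketch of the idea; the steep sigmoid and its steepness parameter are illustrative choices, not a prescription.

```python
import numpy as np

def spike_forward(v, threshold=1.0):
    """Forward pass: the genuine all-or-nothing spike."""
    return (v >= threshold).astype(float)

def spike_backward(v, threshold=1.0, beta=10.0):
    """Backward pass: the derivative of a steep sigmoid centred on the
    threshold, used as a stand-in for the true (zero/undefined) derivative."""
    s = 1.0 / (1.0 + np.exp(-beta * (v - threshold)))
    return beta * s * (1.0 - s)

v = np.array([0.2, 0.9, 1.1, 2.5])   # membrane potentials
print(spike_forward(v))              # [0. 0. 1. 1.]
print(spike_backward(v))             # largest near the threshold
```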

Despite the challenges in production, and the fact that the algorithms are still in their early stages, industry is awake to the opportunity in efficient computational systems. The two main commercial players are Intel and IBM. Intel has produced its Loihi line of chips, the first announced in 2017 and the second in 2021, and the hardware shows real promise. The chips are energy efficient, consuming around a single watt where conventional processors consume several hundred, and they are improving rapidly: the latest iteration packs ten times as many neurons into a single chip. IBM’s TrueNorth, announced in 2015, is more efficient still, and its successor NorthPole, announced in October 2023, is 25 times as energy efficient as comparable GPUs while also outpacing them in speed. This most recent generation has significantly narrowed the gap to commercial viability. Considerable money is also being invested outside these commercial groups: Europe, for example, has put more than a billion euros into a series of projects aimed at producing more biologically faithful neural networks for use in brain research.

If research funding speaks to the need for development in this area, industrial backing speaks to optimism about its potential to generate value in the near future. Energy efficiency will only become more pressing as AI models continue to grow in complexity and their use becomes more widespread. Given that need and the current rate of progress, neuromorphic hardware is a space to watch closely over the next few years. Development is likely to continue apace: with AI training projected to cost potentially hundreds of billions of dollars in the near future, there is little doubt that organisations will keep funding this research. That figure alone makes energy-efficient hardware an emerging industry with enormous potential value, if it can catch up to conventional hardware in capability while maintaining orders-of-magnitude lower running costs.
