In our likeness
Traditional computers like to work in exactly the same way every time, just like teachers in high school who taught us to solve problems in a methodical, step-by-step manner to ensure perfect answers each time — leaving nothing to chance. However, in the case of some very difficult problems, such as an explorer finding his or her way to the North Pole or snagging the cheapest tour that includes all the best spots in Europe or planning a visit to all the key shops in a market, the brain can do a much better job than a computer.
Believe it or not, these tasks belong to a class of problems of legendary difficulty. Say you want to visit five medical shops in a city in the fastest possible time because someone’s life depends on it. You then have to find the one best answer among all the possible itineraries, which in this case number 120 (the factorial of five). A computer asked to do this would simply take up each itinerary in turn and test whether it gives the shortest time. Now, if you wanted to visit 35 shops, the number of itineraries to search through is a whopping 10,000,000,000,000,000,000,000,000,000,000,000,000,000 possible trips (the factorial of 35 is about 10^40). So many searches make any computer dizzy, which is what makes this class of problems, known as NP-hard problems, so daunting!
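To make this explosion concrete, here is a small Python sketch of what a brute-force computer would do: try every itinerary for five shops and pick the fastest. The travel times below are made-up numbers, purely for illustration.

```python
import math
from itertools import permutations

# Hypothetical travel times (in minutes) between five shops, for illustration.
times = [
    [0, 10, 15, 20, 25],
    [10, 0, 35, 25, 30],
    [15, 35, 0, 30, 20],
    [20, 25, 30, 0, 15],
    [25, 30, 20, 15, 0],
]

def route_time(route):
    # Total time to visit the shops in the given order.
    return sum(times[a][b] for a, b in zip(route, route[1:]))

routes = list(permutations(range(5)))
print(len(routes))              # 120 itineraries, i.e. 5!
best = min(routes, key=route_time)
print(best, route_time(best))   # the fastest itinerary and its time
print(math.factorial(35))       # ~10^40 itineraries for 35 shops
```

Checking 120 routes is instant; checking the roughly 10^40 routes for 35 shops would outlast the universe on any conventional machine.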
Unlike a traditional computer, the brain uses “chance” to randomly guess answers and can solve such difficult problems much better. An “educated guess” works better than a brute force search for such large problems. Inspired by nature, our research group working at IIT Bombay has come up with the concept of a computer chip that mimics the “chancy” human neurons.
But how does the brain play the game of chance? It turns out that the brain is composed of neurons that spike somewhat randomly. To be more precise, each neuron spikes randomly, but its probability of spiking depends on the messages it receives from its neighbours. This is much like a gossip session among friends, where a person may listen to everyone but it is not clear when he or she will start to speak. Yet the conversation proceeds and the discussion gets resolved. The mathematical model of such a network of neurons, each receiving signals from other neurons and then firing probabilistically, is called the Boltzmann Machine.
It is named after Ludwig Boltzmann, a statistical physicist who showed that every system spontaneously evolves to reduce its energy while behaving sort of randomly at the level of atoms. For example, water flows downhill, magnets will align to make interesting shapes to reduce energy and so on. Similarly, neurons in a network will settle to a collective pattern of firing such that the network has the lowest energy.
And how does this system perform useful computation? Well, the network connections are specially designed so that the network represents a real problem, and the energy of the network depends on the spiking pattern. Nature then ensures that the network reaches the collective firing pattern with the minimum energy, and this firing pattern represents the solution.
A typical problem is called maximum cut. The task is to draw a line that divides a network of points (or nodes) into two groups such that the line cuts through the maximum number of edges (connections between two nodes). When each node is represented by a neuron, the neurons belonging to one group fire together while those in the other group stay silent, marking out the two distinct groups. This pattern of firing expresses the solution required by maximum cut.
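As a toy illustration of this settling process (not the actual chip or its algorithm), the sketch below solves maximum cut on a small made-up graph. Each neuron’s state (fire = 1, silent = 0) marks its node’s group; states are flipped probabilistically while the randomness is slowly reduced, in the spirit of a cooling Boltzmann Machine, until the network settles into a low-energy pattern that cuts many edges.

```python
import math
import random

# A small example graph: five nodes, six edges.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 4), (3, 4)]
n = 5

def cut_size(state):
    # Number of edges whose two endpoints fall in different groups.
    return sum(state[a] != state[b] for a, b in edges)

random.seed(1)
state = [random.randint(0, 1) for _ in range(n)]  # random start
temperature = 2.0
for step in range(2000):
    i = random.randrange(n)
    flipped = state[:]
    flipped[i] ^= 1
    # Energy is minus the cut size, so a larger cut means lower energy.
    gain = cut_size(flipped) - cut_size(state)
    # Accept the flip probabilistically (the Metropolis rule):
    # always keep improvements, sometimes keep bad moves early on.
    if gain >= 0 or random.random() < math.exp(gain / temperature):
        state = flipped
    temperature = max(0.05, temperature * 0.995)  # cool down gradually

print(state, cut_size(state))  # the two groups and the number of edges cut
```

Early on, high “temperature” lets the network explore freely; as it cools, the chancy flips die down and the firing pattern freezes into a large cut.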
Then there is the travelling salesman problem, a classic algorithmic problem in the field of computer science and operations research. A salesman needs to visit a certain number of cities, each exactly once, in the order that incurs the least cost. The solution of this sort of problem is extremely relevant for optimal delivery scheduling at online businesses such as Amazon and Flipkart. The question of whether such problems can be solved efficiently lies at the heart of the P versus NP question, one of the seven Millennium Prize Problems identified by the Clay Mathematics Institute in the US, which will award a million dollars to anyone who solves any of them. And it turns out, such problems can be mapped to a Boltzmann Machine and solved, if only approximately!
What, then, is stopping computers from emulating the brain? Basically, our inability so far to implement an energy-efficient neuron that fires by chance, or probabilistically. That is where our recent work comes in.
Our team of researchers from IIT Bombay, in collaboration with Intel Labs, has demonstrated such a neuron based on nanoscale resistors with switchable resistance, called memristors (short for memory resistors). The key is to engineer a specific type of randomness into the neuron’s spiking behaviour. The memristors are made of a manganese-based oxide developed at the IIT Bombay Nanofabrication Facility, unlike standard chips, which are made primarily of silicon. These memristors “naturally” exhibit random switching behaviour akin to that of biological neurons.
We have shown that such a network of neurons can solve these difficult optimisation problems with excellent area and power efficiency compared to standard electronics. This demonstration highlights the benefits of such “chancy” strategies implemented in hardware and should excite more researchers to trudge on towards more sophisticated demonstrations.
The writer is a professor of electrical engineering at IIT Bombay