When computers remember.
Memory has long been considered secondary to logic. The supremacy of the pure logic of machines over the random, unpruned memories of the human mind has been an enduring conviction. We aspire towards machine-like reasoning, as emphasised in some school curricula.
Artificial intelligence has recently put a dent in that conviction. How does AI "learn" logic? Learning, it seems, requires a unique confluence of memory and logic: what to remember or forget depends on past memories.
Today's memories are like a hard disk; they store data without understanding what it means. The data has to be brought to the processor for any sort of operation, such as a search or a modification. Earlier, people worked mostly on "modifying" data, which required only limited data to be brought to the processor; the question was how fast the processor could modify it, for example while writing a document in MS Word. The old document is brought over once, updated and returned to the memory. This used to be the main function of a computer.
Nowadays, "search" dominates everyday use. This requires vast amounts of data to be brought to the processor and sifted. The "bringing" causes a bottleneck: the process is slow because the data travels through a few long and slow wires between chips (the memory and the processor). In comparison, the wiring within a chip is fast and dense. This is the classical "interconnect bottleneck". The solution is to perform most of the operations in the memory itself, by adding processing power to it.
Why didn't people do so earlier? Memory and processors use different types of building blocks, and it is simpler and cheaper to make them separately. That, however, was before the "search era". Now it makes more sense to invent a combined memory-plus-processor manufacturing technology even if it is more expensive, because the performance boost will make the user much happier.
Enter in-memory computing (INC). This embodies the idea that memory-based logic is efficient for computing: a large number of simple computations occur within the memory bank itself. Such sifting within the memory produces startling efficiency.
That brings us to the main question. The transistor, or switch, is the critical building block for the logic gates that go on to create microprocessors and microcontrollers. What is the corresponding computing unit for memory? This is where a new invention comes in.
Sandip Lashkare, a graduate student at the Indian Institute of Technology Bombay, has recently led a study demonstrating a memory that can perform "universal" logic operations. Universal logic operations are those from which any digital computer can be built.
Memristor is an amalgamation of the words "memory" and "resistor". A memristor is an electrical component that limits or regulates the flow of electrical current in a circuit and remembers the amount of charge that has previously flowed through it. Memristors are important because they retain their memory without power. The memory is stored in the resistance of the memristor, which can be modified by a high voltage: the resistance increases under a high positive voltage, decreases under a high negative voltage, and can be read at low voltages without being disturbed.
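The voltage-controlled behaviour described above can be sketched as a toy model. This is purely illustrative: the resistance values and the switching threshold below are assumptions for the sketch, not figures from the study, and real devices have far richer continuous dynamics.

```python
class ToyMemristor:
    """Toy two-terminal memristor: the stored bit is its resistance."""
    R_LOW = 1e3        # low-resistance state, ohms (assumed value)
    R_HIGH = 1e6       # high-resistance state, ohms (assumed value)
    V_THRESHOLD = 1.0  # |voltage| above which the state switches (assumed)

    def __init__(self):
        self.resistance = self.R_HIGH  # start in the high-resistance state

    def apply_voltage(self, v):
        """High positive voltage raises the resistance; high negative
        voltage lowers it; low voltages only read the state."""
        if v > self.V_THRESHOLD:
            self.resistance = self.R_HIGH
        elif v < -self.V_THRESHOLD:
            self.resistance = self.R_LOW
        return v / self.resistance  # current that flows (Ohm's law)

m = ToyMemristor()
m.apply_voltage(-2.0)        # program: switch to the low-resistance state
current = m.apply_voltage(0.1)  # read at low voltage; state is unchanged
```

Note that between the two calls nothing "refreshes" the state: the resistance simply persists, which is the sketch's stand-in for non-volatility.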
A typical memristor has two metal terminals, or contacts, like a resistor; the resistance-change layer is a thin oxide film between them. Only these two terminals are used to control and read the resistance state. The study shows that if a third "nanoscale" terminal is added, it can control the resistance change between the two existing terminals, much like a transistor. The only difference is that the transistor, being a purely logic device, "forgets" once the operation is complete. The three-terminal memristor, however, "remembers" the result of the operation, producing "in-memory computing", that is, logic operations in memory. The study (https://bit.ly/3kKexyt), published in IEEE Electron Devices Letters, demonstrates the impact of the three-terminal memristor on computing performance.
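The contrast between a gate that "forgets" and one that "remembers" can be sketched in a few lines. This is a conceptual toy, not the device physics from the study; NAND is used because it is a universal gate, and the `state` attribute stands in for the stored resistance level.

```python
def transistor_nand(a, b):
    """A conventional logic gate: computes and returns, retains nothing."""
    return not (a and b)

class InMemoryGate:
    """Toy in-memory gate: the logic result is written into the stored
    state, so it survives after the operation completes."""
    def __init__(self):
        self.state = None  # stand-in for a resistance level

    def nand(self, a, b):
        self.state = not (a and b)  # the result is stored, not just returned
        return self.state

g = InMemoryGate()
g.nand(True, True)
# g.state still holds the result (False) after the call returns, whereas
# transistor_nand leaves no trace of its computation once it finishes.
```

Because NAND is universal, any digital circuit can in principle be composed from such remembering gates, which is the sense in which the article calls the logic operations "universal".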
So what produces this magical third-terminal control? A big enabler is the oxide material of the memristor. An age-old question is whether resistance changes, such as in capacitor breakdown, where a lightning-like discharge inside the oxide converts the "insulator" of a capacitor into a "short", arise from the formation of a local conducting filament. The nanoscale third terminal is able to resolve the position of the resistance change. It shows that the change is uniform and spread across the volume of the oxide layer, which makes such a memory more dependable and uniform than the erratic filament formation of a lightning strike. This non-localised, spread-out resistance change is the key physical property that enables the critical third-terminal control for "universal" logic gates. This complementary study (https://bit.ly/3iEPI4Z) was published in ACS Applied Electronic Materials.
The work is supported in part by the Semiconductor Research Corporation, an international industry consortium whose members include Intel and TI. The industry is keeping a close eye on the study, which has a co-author from Intel. The research is also supported in part by the Ministry of Electronics and IT and the Department of Science and Technology, which fund electronics research to enable key science and technology development.
We have already applied for a patent on this new circuit element, which heralds the memory-based logic computers of the future.
The writer is a professor in the Department of Electrical Engineering at IIT Bombay