
Biological computers can reduce electricity costs: how it works

Modern computer technology already consumes a significant share of the world's electricity, and with the spread of artificial intelligence this volume could grow dozens of times over. Heiner Linke, professor of nanophysics at Lund University, explains in an article for The Conversation how the looming crisis could be mitigated with the help of “biological” computers. SPEKA publishes an adapted translation with notes.

Fast and expensive: why computer calculations are so energy-intensive

Modern computers are a triumph of technology. A single computer chip contains billions of nanometer-scale transistors that operate extremely reliably and perform millions of operations per second.

However, this high speed and reliability comes at the cost of significant energy consumption: data centers and consumer IT devices such as computers and smartphones use approximately 3% of the world's electricity, and the use of artificial intelligence is likely to increase consumption even more.

Electricity use by data centers is now higher than that of almost every country

(It is difficult to determine exactly how much electricity the growing AI industry consumes. Large AI models differ significantly in how they work, but running a query through a large language model like ChatGPT consumes, according to various estimates, 10-30 times more electricity than a comparable Google search. As AI is adopted in more and more fields, the problem will only grow — ed.)

But what if we could change the way computers work so that they could perform computational tasks as quickly as they do today, while consuming much less energy? Nature can offer us some potential solutions here.

In 1961, IBM scientist Rolf Landauer addressed the question of whether we need to spend so much energy on computational tasks. He came up with the Landauer principle, which states that erasing information from memory is inevitably accompanied by the release of energy in the form of heat.

It follows that a single computational task – for example, setting a bit, the smallest unit of computer information, to zero or one – should consume (at room temperature) at least about 2.85 × 10⁻²¹ joules (J) of energy.
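This figure follows from the formula E = k_B·T·ln 2, where k_B is the Boltzmann constant and T the absolute temperature. A quick check in Python, assuming room temperature of about 300 K:

```python
import math

# Landauer limit: the minimum energy dissipated when erasing one bit
# at absolute temperature T is E = k_B * T * ln(2).
K_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)

def landauer_limit(temperature_kelvin: float) -> float:
    """Minimum energy in joules to erase one bit at the given temperature."""
    return K_B * temperature_kelvin * math.log(2)

# At room temperature (~300 K) this comes out to roughly 2.9e-21 J per bit.
print(f"{landauer_limit(300.0):.2e} J per bit")
```

The exact value shifts slightly with the temperature one assumes for "room temperature", which is why quoted figures vary around 2.8-2.9 × 10⁻²¹ J.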

This is a very small amount. If we could operate computers at such levels, the amount of electricity used to compute and manage waste heat in data centers by means of cooling systems would not be a concern.

However, there is a catch. To perform a bit operation at an energy cost near the Landauer limit, the operation must be performed infinitely slowly: the faster the calculation, the more energy it uses.

More recently, this has been demonstrated by experiments designed to simulate computational processes: energy dissipation begins to increase noticeably when you perform more than one operation per second. Processors operating at a clock speed of a billion cycles per second, typical of modern semiconductors, use about 10⁻¹¹ J per bit – about ten billion times more than the Landauer limit.
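As an order-of-magnitude sanity check on those figures (the 10⁻¹¹ J per bit for modern processors is the article's estimate; the limit value is k_B·T·ln 2 at 300 K):

```python
# Illustrative comparison using the figures quoted in the text:
# a modern processor dissipates about 1e-11 J per bit operation,
# while the Landauer limit at room temperature is about 2.87e-21 J.
energy_per_bit_modern = 1e-11   # J, order-of-magnitude estimate
landauer_limit_300k = 2.87e-21  # J, k_B * 300 K * ln(2)

ratio = energy_per_bit_modern / landauer_limit_300k
print(f"Modern chips dissipate roughly {ratio:.1e} times the Landauer limit")
```

The ratio lands in the billions, consistent with the roughly ten-billion-fold gap quoted above.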

The solution may be to design computers in a fundamentally different way. Traditional computers are fast because they work sequentially, one operation at a time. If we could instead use a very large number of “computers” working in parallel, each of them could run much more slowly.

For example, a “hare” processor that performs a billion operations in one second could be replaced by a billion “tortoise” processors, each of which takes a full second to complete its task, at a much lower energy cost per operation. A 2023 paper showed that a computer could operate near the Landauer limit, using orders of magnitude less energy than current computers.
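The trade-off can be sketched with a toy model. Near thermodynamic equilibrium, the dissipation per operation grows with the speed at which the operation is driven; here we assume a simple linear law E(f) = E_L + c·f, where the constant c is invented purely for illustration and is not a measured device parameter:

```python
# Toy model of the hare-vs-tortoises trade-off (illustrative only).
E_L = 2.87e-21         # Landauer limit at 300 K, J per bit
C_DISSIPATION = 1e-20  # assumed extra J per (op/s) of drive speed

def energy_per_op(ops_per_second: float) -> float:
    """Energy per operation under the assumed linear dissipation law."""
    return E_L + C_DISSIPATION * ops_per_second

throughput = 1e9  # total operations per second, the same in both setups

# One "hare": a single processor doing 1e9 ops/s.
hare_power = energy_per_op(1e9) * throughput
# A billion "tortoises": each does 1 op/s; same total throughput.
tortoise_power = energy_per_op(1.0) * throughput

print(f"hare:      {hare_power:.2e} J/s")
print(f"tortoises: {tortoise_power:.2e} J/s")
```

Under this assumed law the billion slow processors dissipate many orders of magnitude less power for the same total throughput, which is the intuition behind the parallel approach.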

Parallel processing

Is it possible to have billions of independent “computers” operating in parallel? Parallel processing on a smaller scale is already common today: for example, around 10,000 graphics processors work simultaneously to train artificial intelligence models.

However, this is done not to reduce speed or improve energy efficiency, but out of necessity: excess heat makes it impossible to keep increasing the computing power of a single processor, so processors are used in parallel.

An alternative computing system that is much closer to what would be needed to approach the Landauer limit is known as networked biocomputing. It uses biological motor proteins, which are tiny machines that help perform mechanical tasks inside cells.

(Network-based biocomputation is an interdisciplinary approach to computing that uses biological systems – such as molecules, cells, or biological processes – to perform computational tasks. It focuses on exploiting the natural network properties of biological systems, such as protein interactions, neural networks, or genetic regulatory circuits — ed.)

How a biocomputer works

This system involves encoding the computational task into a nanoscale maze of channels with carefully designed cross-sections, typically formed from polymer patterns deposited on silicon wafers. All possible paths through the labyrinth are explored in parallel by a very large number of long thread-like molecules called biofilaments, which are propelled by motor proteins.

Each filament is only a few nanometers in diameter and about a micrometer (1,000 nanometers) long. Each acts as a separate “computer”, encoding information through its spatial position in the labyrinth.

This architecture is particularly well suited to solving so-called combinatorial problems. These are problems with many possible solutions, such as planning problems, which are very computationally demanding for sequential computers. Experiments confirm that such a biocomputer requires 1,000 to 10,000 times less energy to compute than an electronic processor.
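Published demonstrations of network-based biocomputation have encoded the subset-sum problem (given a set of numbers, which totals can be formed from its subsets?) as such a maze. A software sketch of the idea, with the encoding simplified for illustration: at each junction row a filament either adds the current number to its running total or skips it, and tracking the set of reachable totals mimics the filaments exploring every path in parallel.

```python
# Software sketch of the maze idea (simplified encoding, not the real device).
def reachable_sums(numbers):
    """Return every subset sum of `numbers`, mimicking filaments that
    split at each junction into a 'skip' path and an 'add n' path."""
    sums = {0}  # every filament enters the maze with a running total of 0
    for n in numbers:
        # each junction row doubles the set of paths: skip n, or add n
        sums |= {s + n for s in sums}
    return sums

# Exit positions reached by at least one filament for the set {2, 5, 9}.
print(sorted(reachable_sums([2, 5, 9])))
```

In software this enumeration grows exponentially with the input size; the appeal of the physical device is that an enormous number of cheap, slow filaments perform the exploration simultaneously.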

This is possible because biological motor proteins themselves have evolved to use no more energy than is necessary to perform their task at the required speed. Typically, this is a few hundred steps per second, a million times slower than transistors.

So far, researchers have only built small biological computers to prove the concept.

To compete with electronic computers in speed and computational power, and to explore a very large number of possible solutions in parallel, networked biocomputing must be scaled up. A detailed analysis shows that this should be possible with current semiconductor technology.

However, there are numerous obstacles to scaling these machines, including learning to precisely control each of the biofilaments, reducing error rates, and integrating them with modern technology. If such problems can be overcome in the next few years, the resulting processors will be able to solve certain types of complex computational problems with significantly reduced power consumption.

Neuromorphic computing

It is also interesting to compare this with the energy use of the human brain. The brain is often described as very energy efficient: it consumes roughly 20 watts, far less than AI models require, for everything from breathing to thinking.

However, the brain's basic physical elements do not appear to save energy at the level of individual operations. Firing a synapse, which is comparable to a single computational step, uses about the same energy per bit as a transistor.

However, the architecture of the brain is highly interconnected and works fundamentally differently from electronic processors and networked biocomputers. So-called neuromorphic computing attempts to mimic this aspect of brain function, but using new types of computing hardware as opposed to biocomputing.

(Neuromorphic computing, also known as neuromorphic engineering, is an approach to computing that mimics the human brain: it involves developing hardware and software that model the brain's neural and synaptic structures and functions to process information — ed.)

It would be very interesting to compare neuromorphic architectures against the Landauer limit, to see whether the insights gained from biocomputers can be transferred there in the future. If so, it could also be the key to a major leap in computer energy efficiency in the coming years.

Natasha Kumar

