John Hennessy (New York, 1952) and David Patterson (Illinois, United States, 1947) are not just any two figures in computer science. Their work during the 1970s on so-called computer architecture (the way these devices are built) allowed their manufacture to be standardized and their efficiency improved, paving the way for the technological boom of recent decades. Hennessy is also the current chairman of Alphabet, Google's parent company, and a former president of Stanford University. Patterson, meanwhile, was a professor at the University of California, Berkeley, for 40 years until 2016. Both won the Turing Award in 2017 for their contributions to computer science.
This week, the BBVA Foundation announced that it has awarded both the Frontiers of Knowledge Award in Information and Communication Technologies for "founding computer architecture as a new scientific area, the discipline that designs the brain of every computer system, its central processor." Together, in the 1980s, they created RISC, the acronym for reduced instruction set computer, which is still in use in today's computers (about 99% of the processors on the market, according to data from the ACM). "We are in a new golden age of computers," they say in this interview, conducted by videoconference.
Question. When you started working, computer architecture was a mess, with every manufacturer operating on its own. Is that right?
David Patterson. Computers had been designed decades earlier and had developed in a particular way. What was relatively new were microprocessors. Their emergence led us to the idea that things had to be done in a completely different way. Above all because of so-called Moore's Law, formulated in 1965, which states that approximately every two years the number of transistors in a microprocessor doubles. Now, microprocessors are more powerful than mainframe computers.
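The doubling Patterson describes compounds quickly. A back-of-the-envelope sketch (not from the interview; the starting figure of 2,300 transistors is the Intel 4004 from 1971):

```python
def transistors(initial, years, doubling_period=2):
    """Moore's Law as stated above: the transistor count
    doubles roughly every `doubling_period` years."""
    return initial * 2 ** (years / doubling_period)

# Starting from ~2,300 transistors (Intel 4004, 1971),
# twenty years of doubling every two years gives ~2.4 million.
count_1991 = transistors(2300, 20)
```

Ten doublings in twenty years means a factor of 1,024, which is why the exponential ran out of room once transistors approached physical limits.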
Q. How would you explain the system you devised, RISC, to a person who has no idea about computers?
DP Well, John and I have a lot of experience with that question… When a program (software) talks to the machine (hardware), it uses a vocabulary. That vocabulary is called the instruction set. We can imagine a vocabulary made of long words with many syllables. If we read a novel composed of those words, it would take us longer, because it would be harder to understand. The alternative is to have many more, shorter words, which would let you read more quickly even if the novel were longer. The question is: where is the balance between the two approaches that achieves the greatest efficiency? In the end, we found that using a shorter, simpler vocabulary was four times faster. At first it was a controversial, almost philosophical question.
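Patterson's "short words" analogy can be made concrete with a toy sketch (an illustration only, not real RISC assembly): a complex-instruction machine might offer a single memory-to-memory add, while a RISC-style machine composes the same work from a few simple instructions that operate on registers and are easy for hardware to execute quickly.

```python
memory = {"x": 5, "y": 7, "z": 0}

# CISC style: one "long word" that does everything at once.
def add_mem(dst, a, b):
    memory[dst] = memory[a] + memory[b]

# RISC style: many "short words" -- simple loads, a register add, a store.
def run_risc(program):
    regs = {}
    for op, *args in program:
        if op == "LOAD":       # register <- memory
            regs[args[0]] = memory[args[1]]
        elif op == "ADD":      # register <- register + register
            regs[args[0]] = regs[args[1]] + regs[args[2]]
        elif op == "STORE":    # memory <- register
            memory[args[1]] = regs[args[0]]

run_risc([
    ("LOAD", "r1", "x"),
    ("LOAD", "r2", "y"),
    ("ADD", "r3", "r1", "r2"),
    ("STORE", "r3", "z"),
])
# memory["z"] is now 12
```

The RISC program is longer (four instructions instead of one), but each step is so simple that the hardware can run them faster overall, which is the trade-off the interview describes.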
Q. Would you say that your work enabled the technology boom we are experiencing?
JH Well, we were motivated by all the changes that were taking place. And I think it's a good reminder that whenever there is a disruption, like the one happening back then with the jump to microprocessors, you have to look back, review the way you are solving problems, and ask whether that way is still valid.
Q. What did computers look like at that time?
JH They were huge mainframes and central units. The computer we used to develop our work was the VAX-11/780 [a computer marketed in 1977 by Digital Equipment Corporation (DEC), a company that was acquired by Compaq in the 1990s and which was, in turn, acquired by Hewlett-Packard in 2002], which cost between $250,000 and $500,000 and was much slower than any smartphone today.
DP It was the size of a refrigerator. I remember teaching a class and saying, "one day a single chip will be faster than this refrigerator"… It was so big that it took time for electricity to reach all of its components. At the time, the students laughed; they believed that bigger meant faster…
Q. Are we experiencing a golden age of computers, not only traditional ones, but also thanks to mobiles and other devices? Do you dare to forecast where we are headed, given that Moore's Law is coming to an end?
DP Absolutely. In the past, half the development had to do with advances in semiconductors, and the other half with what John and I do, how we put these devices together. Due to the end of Moore's Law and the fact that people continue to want their computers to get faster and faster, the burden will increasingly fall on architecture. This is going to be the decade of computer architecture.
JH This can be seen in the Apple M1 chip [the first the company has installed in its Macs after its break with Intel], which combines specific processors for each task. We are making the leap from general-purpose processors towards component specialization for greater efficiency. The key is specialization: making small computers that do one thing more efficiently.
Q. What do you think of quantum computers? Will they be a viable alternative?
DP I'm curious to hear from John about this. It's an exciting technology, but there will be only 20 or 30 things it can do, and they are large computers too: there will be no quantum phones. These devices will be housed in data centers. It would not be efficient otherwise.
JH I think the area in which we are working is what we call noisy intermediate-scale quantum (NISQ): looking for applications that can work with smaller computers, because in the short or medium term these machines will not be capable of solving big problems. Applications are being sought; it is a hunt that is currently underway, and there is no killer application (a successful application that drives a technology forward) yet.
DP And there is also the issue of the cold in which they have to operate. Nor is it going to be of much use for machine learning, because it is difficult to load data into these devices.
Q. Do you think we depend too much on computers and technology in general nowadays?
JH I think that may be true. But the fix is simple: turn off notifications and don't check your phone every five minutes. That would make for a healthier lifestyle. Mind you, it requires some discipline.
DP I am part of the television generation. I grew up with it, and some parents let their children watch whatever they want, whenever they want. Not mine; they imposed restrictions on me. It is the same with technology: if you allow your children to use the internet and technology in general all the time, you are probably not raising them the right way. You're not going to have a balanced life, especially during the pandemic, if all you do is watch Netflix. Or play video games…
It's one thing to do your job, especially during the pandemic… and thanks to the fact that computers exist… without them, I wouldn't have a job to begin with. Technology is enabling, but it is also addictive and seductive. It is dangerous to use it too much, but I am not sure what the solution is.
DP There is a famous visionary in computer science named Alan Kay. Forty years ago he had a revolutionary idea called the Dynabook. His definition was that "the computer will be so important that if you leave it at home, you will have to turn around to go get it." What he was talking about, we now see, was the mobile phone. It is a critical technology that we use a lot. But I can't fault people who create products that people love to use. Scientists are asked to be careful about what they create because of its consequences, but this is more a problem of popularity.
Q. Do you think we should limit what machines are capable of learning to do?
DP My colleague Stuart J. Russell is one of the great promoters of artificial intelligence and has written some of the most famous works on the subject. He is one of those who has started to define rules about what computers may do before they become sentient. And, although this is not my area, I believe that the danger is still distant: perhaps a century or more away. I am already impressed by the things machines can do. Above all, driverless cars: when this technology is deployed, hundreds of billions will be saved in traffic accidents. It will be as important as the emergence of the internet.
JH That does not mean you have to worry about technology. All technologies have good and bad uses. Artificial intelligence can become a very powerful weapon, and there should be an international agreement that it never be used that way. We have to reach agreements. And, of course, there will also be economic imbalances, just as with the Industrial Revolution. But new opportunities will also be created.