Leonard Adleman: A Master of Many Talents


  Repost note:

      As a young man, Leonard Adleman wanted to be a chemist, then a doctor; he studied physics, later worked as a bank programmer, and earned his Ph.D. in computer science at UC Berkeley. He then went to MIT, where, playing the adversary under Rivest and Shamir as a master codebreaker, he "happened" to become one of the inventors of the RSA public-key cryptosystem. He is the "godfather" of the computer virus (his doctoral student Cohen invented the computer virus), the founder of DNA computing, and a master-level expert in AIDS immunology. His outstanding contributions in mathematics, computer science, molecular biology, and AIDS research are, in every one of these fields, achievements of the first rank.

      From 1983 to 1985 I was a visiting scholar in the Computer Science Department at the University of Southern California (USC). My advisor, S. Ginsburg, held a Salvatory Chair Professorship alongside Adleman (an honorary position carrying a large research fund, granted without application, until retirement). Both are Jewish. Ginsburg was already a world-class elder master and a founder of USC's computer science department; Adleman was a rising star. Adleman is my age, and his mind is remarkably quick. During my time at USC I was fortunate to discuss problems with both of them often. I remember that whenever Adleman had an inspiration, he would go work it out on the blackboard in Ginsburg's office, sharing the joy with the older master. When I worked through my third USC paper at the blackboard, he listened and offered suggestions and questions as I went. My deepest impression was how hard-won their results were; to some extent they were piled up out of sheer time. Both worked more than 70 hours a week. Horowitz, another Jewish computer scientist at USC, explained the phenomenon to me this way: Jewish scientists in America face pressures of every kind and "have to work hard." Because of their fanatical work style and multidisciplinary achievements beyond ordinary reach, people affectionately called them "mad scientists" (see the web page "A picture of The Mad Scientist at work").

     People once thought that the age that produced multidisciplinary scientists like Newton and Lomonosov was gone forever. After reading this interview, perhaps that judgment will seem less absolute.

  This piece also offers three insights: (1) well-rounded, generalist education produces masters; (2) the older masters' support of the young was truly open-minded and far-sighted; (3) doing research without worrying about funding, without spending some n% of one's energy chasing grants, is truly happy research.

   Below is an August 1996 interview by a NetWorker reporter, in which Adleman discusses the RSA algorithm, computer viruses, DNA computing, molecular biology, and AIDS immunology research.   (Reposted by Tang Changjie)

 

Leonard Adleman


For a theoretical mathematician, Leonard Adleman has laid some pretty concrete tracks in such far-flung fields as encryption, AIDS research and molecular biology.

The San Francisco-born USC professor toyed with the idea of becoming a chemist, then a doctor, before finally settling on a major in math. After graduating from UC Berkeley in 1968, Adleman worked as a programmer for Bank of America. For a while, he dabbled in physics at San Francisco State College, before returning to Berkeley for his Ph.D. in computer science. Impressed by "the beauty and power of mathematics," Adleman discovered the first of many callings.

In 1976, he landed a position as an assistant professor in MIT's math department. There, he met colleagues Ronald Rivest and Adi Shamir. The two researchers were determined to build an encryption system using one-way functions--mathematical formulas that are simple to compute, but difficult--if not impossible--to reverse. Rivest and Shamir recruited Adleman, a number theorist, to help find the chinks in each of their proposed functions' armor. Adleman broke roughly 42 encoding schemes devised by the two. Some, he defeated in a few minutes; others took days. Eventually, Rivest came up with a winner--a one-way cryptosystem based on prime numbers. Adleman realized it would take years of computing time to break such a system.

The resulting RSA code--named for Rivest, Shamir and Adleman--became the prototype for future cryptosystems and won the trio instant fame. After patenting the coding scheme, the three researchers founded RSA Data Security Inc., in Redwood City, Calif.
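The prime-based scheme described above can be sketched in a few lines. The tiny primes below are illustrative textbook values chosen so the arithmetic is visible, not parameters anyone should use for real security; real RSA moduli are hundreds of digits long.

```python
# A toy sketch of the RSA scheme, with deliberately tiny primes.

def egcd(a, b):
    # Extended Euclid: returns (g, x, y) with a*x + b*y = g = gcd(a, b).
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

p, q = 61, 53                 # two secret primes
n = p * q                     # public modulus (3233)
phi = (p - 1) * (q - 1)       # Euler's totient of n
e = 17                        # public exponent, coprime to phi
d = egcd(e, phi)[1] % phi     # private exponent: e*d = 1 (mod phi)

m = 65                        # a message, encoded as a number < n
c = pow(m, e, n)              # encrypting is the easy "forward" direction
assert pow(c, d, n) == m      # decrypting with d recovers m
```

The one-way character is in the last two lines: computing `c` from `m` is fast, but recovering `d` from the public pair `(n, e)` alone requires factoring `n` into `p` and `q`, which is what takes years of computing time at realistic sizes.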

Adleman returned to California in 1980 to join USC's computer science faculty. The spotlight followed him. In 1983, his student, Fred Cohen, described a novel concept: a computer program that propagates itself automatically throughout a system. The two tested the first "computer virus" that same year. After much discussion over the ethics of unleashing such a security threat, Adleman and Cohen decided to publish the discovery.

A few years later, while reading about the AIDS virus and its effect on T-cells, Adleman hit upon a mathematical description of the immune deficiency. As certain cells were depleted, he realized, other cells--similar in type but not in function--increased proportionately. Adleman's hypothesis offered not only an explanation for how AIDS destroys the immune system, but pointed toward a method of treatment. If the population of the unaffected cell type (T-8s) could be artificially reduced, he reasoned, the homeostatic forces at work in the immune system would cause an increase in T-4s--the depleted cell types.

Once he had dipped into biology, Adleman was hooked. It was while reading a text on DNA that Adleman had his next revelation. The mathematician was struck by a resemblance between the way polymerase enzymes "read" DNA and the principle behind the Turing machine, a computational model proposed in the 1930s by theorist Alan Turing. Adleman had a stunning idea: he would build a rapid, efficient molecule-based computer. A toy version of the Hamiltonian path or "traveling salesman" problem seemed the perfect test for the DNA computer. The problem called for finding a route through seven cities that visits each exactly once. Strands of DNA were encoded to represent each city and all paths between cities. Adleman's DNA computer reached a solution in seconds, though sorting the correct DNA strands from the trillions of incorrect ones took another week. The toy problem involved only a few variables, but because of DNA's extremely small size and amazing computational speed, Adleman's molecular computer holds the potential to solve problems far beyond the scope of any existing electronic computer.
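The toy problem itself is easy to state in conventional code. The seven-city graph below is a hypothetical instance invented for illustration, not the graph Adleman actually encoded; the brute-force loop mirrors the DNA computer's "generate all candidates, then filter" strategy, except that it tries candidates one at a time instead of all at once.

```python
from itertools import permutations

# A hypothetical directed seven-city graph (cities numbered 0..6).
edges = {(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (2, 4),
         (3, 4), (3, 5), (4, 5), (4, 6), (5, 6)}

def hamiltonian_path(n, edges, start, end):
    # Try every ordering of the n cities; keep the first one that
    # starts and ends correctly and uses only existing edges.
    for perm in permutations(range(n)):
        if perm[0] == start and perm[-1] == end and \
           all((a, b) in edges for a, b in zip(perm, perm[1:])):
            return perm
    return None

print(hamiltonian_path(7, edges, 0, 6))   # -> (0, 1, 2, 3, 4, 5, 6)
```

For seven cities this loop checks at most 7! = 5,040 orderings; the number of orderings grows factorially, which is why a medium that tries them all in parallel is attractive.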

Last year, Adleman opened the Laboratory for Molecular Science at USC, where research on DNA computing continues. The next challenge is to defeat the National Security Agency's Data Encryption Standard. The problem is beyond the scope of most modern supercomputers, but in theory, DNA's huge parallel-computing capacity is ideally suited to complete the task. USC molecular biologist Myron Goodman, an expert in DNA error correction and transcription, is Adleman's collaborator in the new lab.

--Wyman E. Miles



Interview by Diane Krieger August 2, 1996

NW: Your pioneering work on molecular computation has been characterized as the birth of a new science. Do you see it that way?

LA: No. It's not the birth of a new science. It's an exploration of a more intimate connection between two very old sciences: between computer science and mathematics (which are, in some ways, the same thing) and biology. In the second half of the 20th century, these two sciences have sort of exploded, so it's natural for people to start thinking about the relationships that may exist between them. By traveling down this avenue a little while, you are led to consider molecular computers - but it's certainly not in and of itself a new science. It's just one of the interesting paths that come off this main avenue.

NW: They say the most creative and challenging part of research is finding the right question to ask. Do you agree with that?

LA: I wouldn't characterize it as the most challenging thing, but it's of critical importance. Sometimes it's not hard to find the "right question". For example, the mathematics literature is full of very important unanswered questions. In this case, the real issue is: Has that question's time come? Have we reached a point where developments in the appropriate area of science give us some chance of breaking the problem? For example, I worked on a famous, centuries-old math problem called "Fermat's Last Theorem". I was not 'strong' enough to solve it, but I find some solace in the fact that my intuition that its 'time had come' was right. The problem was finally solved two years ago by Andrew Wiles of Princeton. It was one of the major events in the history of mathematics.

The other side is to generate new questions. That's a funny process. The way I seek to generate new questions is to start to look at whole new fields, like biology, immunology or physics. Since I come from a different field, mathematics, I bring an unusual point-of-view that sometimes allows me to generate questions different from the classical questions in those areas. Like the question of DNA computing.

For the young scientist, this question of choosing the right question to spend your valuable limited intellectual resources on is critical. I often sit for months and do no productive work that anybody can see, because I don't feel I have a good enough question to work on. Rather than take on some lesser question, I would prefer to read a mystery novel. The point is, sometimes it's important to lie fallow for a time waiting for the 'right question' to appear, rather than to engage in uninspiring work and miss the important opportunity when it comes.

But in the end, the real challenge of science is to make progress - to succeed, to contribute knowledge.

NW: Of course, in an academic setting, there's that drive to publish or perish...

LA: Yes, that's a problem, because you have to feed your family. But I always tell my students and junior faculty that they are better off following their inspiration and their hearts in what research they do, that they should always try to take on the most interesting and important problems, that they should not waste their time on little problems just to make another line on a vitae.


My philosophy is that it's important, in a curious way, for scientists to be courageous. Not physically courageous, but courageous in an intellectual way. I believe that by working on extremely hard problems, by being courageous, you may succeed. But even if you fail, you fail gloriously. And you will have learned immense amounts, you will have extended the envelope of what you can do. As a byproduct of failing on a great problem, I have always found that I could solve some lesser but still interesting problems - which then fill your vitae.

NW: I'm curious about the origin of biological computers as a concept. You weren't the first one to think about it, but were you the first to make a major stride in that direction?

LA: It has been apparent to a lot of people for quite some time that DNA is a method of storing information - indeed, it's even a digital method of storing information. It stores data not as 0s and 1s, but as four letters: A, T, C and G. A lot of people have played with the idea that it would be neat to get cells to do computations. But until the biology and the biotechnology evolved to a sufficiently high level, and until computer scientists or mathematicians who were thinking about those things learned enough biology, nobody actually started carrying out these experiments. Curiously, it turned out to be better to jump over the cell entirely and go right down to the molecular level, to use DNA directly. But there were vague ideas of this around for some time.

NW: So your achievement was just a case of the right person being there at the right time to pull the pieces together.

LA: I think that's right. I'm a theoretical computer scientist, and because of the vagaries of life, I ended up going into a molecular biology lab. I went in because I had started to do some research in immunology and HIV and wanted to learn hands-on molecular biology. Within days or weeks of studying, reading and actually doing hands-on stuff, it became apparent to me that a lot of ideas I had learned from computer science would be applicable in biology.

NW: You were just looking at it with different eyes?

LA: Yes, absolutely. That's often the way things happen. Every young scientist brings new eyes through which he or she will hopefully see something others have missed.

NW: I read somewhere that you don't look at math journals, and that you have a very basic level of computing equipment in your office and home. You actually use a PC-AT in your office. Is that true?

LA: Well, that's not exactly right; I do read math journals occasionally. When I have no other choice. When the paper involved is too important to skip. It's just that reading a math paper is very hard work - it's not something I do for fun. And I have started to use the Internet: I discovered email a few years ago and it's wonderful. It's again a question of efficient use of limited intellectual resources - I would rather think about science than figure out why my new modem doesn't work.

NW: It's kind of funny. You're a world-famous computer scientist, and you don't do these things. How do you explain your success in light of this self-imposed isolation?

LA: I don't wish to get maudlin, but a big part of any success I have had is due to this wonderful country and the freedom it provides us. This society allows us to follow our desires and do things the way we think is best. It rewards our achievements - and of course punishes our failures. When I was a kid, science was fun. It is just as much fun now. I am amazed how this society has allowed me to continue to have fun my entire life and at the same time provided me with a nice place to live and raise my family.

NW: What is it like to spend days and weeks and months pondering abstract ideas?

LA: It is really something! Yesterday I watched [Olympic gold medalist] Michael Johnson. It was awesome. Sometimes you're just impressed by what humans can do. One of the things that mathematicians - young mathematicians, anyway - can do is what you described: spend months pondering a problem. A young, good mathematician can spend essentially 16 hours a day over many months or years, contemplating a single problem. The concentration can be so intense that if someone walks in the door, you can be shocked by it.

NW: That's something we associate with Enlightenment scientists, Newton or Pascal, but not with the modern era, when you have people compartmentalizing their lives in 15-minute intervals.

LA: Didn't Descartes spend his whole day in bed? They'd bring him food, and he'd just stay in bed and think. To that extent, mathematicians are a throw-back to the past. You don't do great mathematics unless you have time and you spend that time sitting in your room and thinking.

Another thing along that line that I have thought about is the compartmentalization of science itself. Scientists tend to know their own area. I'm not talking about all of biology. Within biology, you're a molecular biologist. Within molecular biology, you're an enzyme specialist. And within the enzyme specialist world, you're a protease specialist. We're extraordinarily specialized. Often, we can't even talk to [the scientist] next door.


Remarkably, though I spent many years working on Fermat's Last Theorem, and Andrew Wiles solved it, I can't read his paper. He used the tool of algebraic geometry, while I was trying to use algebraic number theory. Algebraic geometry isn't a path that I know well enough. I bet on the wrong horse.

NW: By not being able to read his paper, you mean you wouldn't understand it?

LA: I wouldn't understand it. Because it's not what I learned in order to do my mathematics. I know a bit of it, but I can't really understand it in detail. It's extraordinarily specialized.

But I'd like to think that there is the possibility of a return to the "historical scientist," the scientist who was very broad. Gauss, for example, who was perhaps the greatest mathematician who ever lived, also did astronomy. There was a time when that was possible, and I think it's perhaps becoming possible again.

NW: There seems to be a contradiction in what you're saying. Before, you said that Andrew Wiles solved a problem that you had also studied, in an area of math that you don't feel you can understand.

LA: I feel I could understand it. If I had six years to study algebraic geometry right now, I could understand it. Maybe doing science has two parts. One part is, you have to acquire a huge amount of knowledge. For example, you have to read J.D. Watson's 'Molecular Biology of the Gene' if you wish to work on molecular biology. You have to read R. Hartshorne's 'Algebraic Geometry' if that's what you're interested in.

The other part of doing science is to manipulate those facts, to think about them. And that part is the same over all sciences. It doesn't change whether you're doing biology or mathematics or physics. So, if you have learned to do that part - and mathematics seems to me to be the way to learn that - then you have a great deal of mobility, because you can turn to a discipline like molecular biology and acquire the knowledge-base fairly quickly.

It's not that you can know every science now. The knowledge base is just colossal. I don't think a human has any chance of knowing it all. But what you can do is become a sort of generalist. You learn how to think - and then pick up the knowledge base you need from whatever area of science interests you.

NW: You seem to be saying that math is a kind of higher science. In fact, Wired magazine quoted you as saying "math is the ultimate science, weightless, supersonic."

LA: Yes, it is. If you have an idea in molecular biology, it may take months to test it in the lab. If you have an idea in mathematics, you can usually test it instantly in your mind. Physical sciences are awkward in that they involve the unpleasantness of actually manipulating matter.


NW: What first led you to think of DNA as a computational medium? Was there a moment when you said, "Aha!"

LA: Yes, there was. I was sitting in bed reading Watson's Molecular Biology of the Gene. It described the action of polymerase. Polymerase is a protein that produces complementary strands of DNA. DNA is like a long necklace with beads, and the beads can be any of four colors. The polymerase works like a machine. It comes along, it latches on to the DNA, and it slides down the strand. As it moves down, the polymerase creates a new strand.

Now, in the 1930s, logicians had started to investigate the idea of computation. They made computational models, and one of the famous models was Alan Turing's machine.

Turing had described a machine running along a tape of digital information; and now here was this polymerase, which runs down strands of DNA information. In the middle of reading this, I thought: "Wow! This is like a computer. This looks like it could compute." So I got out of bed and started to think about it.
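The mechanical picture Adleman recognized can be made concrete with a minimal Turing-machine simulator: a head sliding along a tape, reading one symbol at a time and writing as it goes. The bit-flipping rule table below is an invented toy example for illustration, not anything from Adleman's work.

```python
# A minimal Turing machine: state + tape + a head that reads, writes,
# and moves -- the same read-and-slide picture as polymerase on DNA.
def run_tm(tape, rules, state="start"):
    tape, pos = dict(enumerate(tape)), 0
    while state != "halt":
        symbol = tape.get(pos, "_")            # "_" is a blank cell
        state, write, move = rules[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Toy rule table: flip every bit, halt at the first blank.
rules = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_tm("0110", rules))   # -> "1001_"
```

Despite its simplicity, this model is computationally universal: with a richer rule table it can compute anything any computer can, which is why the resemblance to polymerase suggested that DNA might compute too.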

NW: Will DNA someday be a better medium for artificial intelligence than conventional computers?

LA: Those are really two questions. What can DNA computers do? The answer is, in theory, it's quite clear that they can do anything. In principle, they can figure out the trajectory of a rocket to the moon, and they can balance your checking account. Six or seven researchers have proven that DNA, at least in principle, is universal - meaning anything that is computable can be computed on DNA machines. It can do anything that any computer can do. In principle.

The second thing is, are they ever going to be practical for anything? That is still an open question. It's one of those questions where I am pessimistic half the time and optimistic the other half of the time. It depends on what day you ask me.

NW: What day is it today?

LA: Today? I think I am a little pessimistic. You see, the progress that's been made in a little less than two years - since my little toy experiment - has been just amazing. We've overcome a lot of hurdles. But when I look into the future, we still have so many big problems. They are not deep philosophical problems, they are real-world problems - just getting DNA to behave the way it should behave if it was perfect. But stuff happens when you put chemicals together, and we have a huge way to go.

The other thing I find problematic is, what do we have to do to succeed? Well, all we have to do is beat the electronic computer - the marvel of our age. Billions, if not trillions, of dollars have been poured into research on electronic computers over the last 50 years. A huge amount of intellectual capital has been poured into it. So, in order to be successful, all we [DNA computer scientists] have to do is come out of nowhere and build a better computer than this incredible thing.

NW: It's a tall order.

LA: A very tall order. And when I think about that order, that's when I become pessimistic. If computers were 20 years behind where they are now, I would probably be optimistic that we'll be able to beat them at certain tasks. We have a certain window of opportunity to overtake them, but it's a short one.

NW: You mean at a certain point the research becomes moot?

LA: I think it does. If electronic computers are getting faster, smaller, cheaper all the time - and they're doubling the speed every 18 months - then in 10 years, they're going to be about 100 times faster than today. That's eating up a significant amount of our advantage.
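The arithmetic behind that estimate is simple compounding: 10 years is 120 months, or about 6.7 doubling periods of 18 months each.

```python
# Speedup after `months` if performance doubles every 18 months.
def speedup(months, doubling_period=18):
    return 2 ** (months / doubling_period)

print(round(speedup(120)))   # 120 months -> roughly 100x
```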

It may be that we'll lose this race. It may be that we'll find special applications where DNA is particularly well suited, because of its low energy consumption and its huge memory capacity. Special applications might still be viable. DNA brings a lot of interesting things to the table, but it's not in isolation. To succeed practically, we pretty much have to beat electronic computers, and that is a very tough task.

But for me, that is not the important question, anyway. It never was. I think it's a scientific question we want to answer, and I am answering it with my colleagues, Myron Goodman and Nickolas Chelyapov, and some very bright students. I see this sort of like Columbus taking off from Spain. He was looking for India, and he smacked into America - the New World. That's what I think we are really doing. We are taking off, trying to build a practical molecular computer. But I think we are really taking off into the unknown world of biology and mathematics and computer science. And if we don't get to the molecular computer - if we don't get to India - that's OK. I'm optimistic that we are going to hit something interesting and important. The journey is the point.

NW: Is there, in fact, a prototype right now of a DNA computer?

LA: There is the little toy example from my experiment, but otherwise, no, there isn't a prototype. The electronic computer has, if you break it down, a chip inside, and that chip does very basic operations. By putting those operations in the right order, you can do amazing things. Putting them in the right order is called "programming." But the chip actually does very, very simple operations. Well, that'll be our molecular computer as well. We're currently trying to perfect those simple operations, not build a full-scale machine or solve a larger problem. We are trying to get the lab conditions right, so that the basic operations are fast and cheap and have low error rates. If we succeed in getting those fundamentals down, then we'll put it together and do some larger computation.

NW: Can you paint a mental picture of what a DNA computer looks like?

LA: That's easy. What it looks like is a desk-sized device. It has a bunch of test tubes, each about the size of a C battery, each holding DNA. This rack of test tubes - maybe 30 by 30 in size - isn't very big. The whole thing is about one meter square. Sitting above the tubes are a bunch of robots. A little robotic arm picks up a tube, reaches over and picks up another tube, picks up a third tube, and brings the tubes into itself. Some sort of chemical process goes on which moves DNA from one tube to another. Then the robot returns the tubes to their spot on the rack, and it picks up new tubes for the next operation. It continues in that fashion, but there are maybe 50 robots all working at once. Next to the robots is a little traditional electronic computer, which keeps track of what's in the tubes and tells the robots what to do next. In the end, you get a tube that has your answer in it, coded in DNA.

NW: It sounds like it's dependent upon electronic computers and robots.

LA: Yes, but its dependence upon electronic computers shouldn't be overestimated. It's almost like saying a jet plane is dependent on electronic computers. Sure, there are all sorts of computers there, but before computers were around and very convenient, you could have just built electronic parts to do the same work. We can do that as well with the DNA computer.

NW: I read somewhere that a pound of DNA in 1,000 quarts of fluid, about three feet square, will hold more memory than all the computers ever made. Is that true?

LA: If you compare the information storage that was used in my little toy experiment and the information stored in a small chip, it's astonishing. Molecules are small! If you do the calculation, you're storing one bit of information in about a cubic nanometre. A nanometre is one billionth of a meter - you can't see it with an optical microscope. Compare that to a video tape. A trillion times more information can be stored in a given chunk of space with DNA than with video tape. I have not done the calculation, but I'm pretty sure that in a quart of water you could have enough DNA to store more information than all the electronic memory humans have ever made. It's colossal. The problem is getting the information out.
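Taking the one-bit-per-cubic-nanometre figure at face value, the quart claim is a short back-of-envelope calculation (the quart-to-litre conversion below is the standard US value, added here for the arithmetic):

```python
# Density claim: one bit per cubic nanometre, applied to a quart of water.
BIT_VOLUME_M3 = (1e-9) ** 3          # 1 cubic nanometre in cubic metres
quart_m3 = 0.946e-3                  # one US quart, ~0.946 litre, in m^3

bits = quart_m3 / BIT_VOLUME_M3      # ~9.5e23 bits
bytes_ = bits / 8
print(f"{bytes_:.2e} bytes")         # on the order of 1e23 bytes
```

That is roughly a hundred thousand million terabytes, which makes the "more than all electronic memory ever made" claim plausible at mid-1990s storage totals.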

NW: Someone likened that process to "fishing for letter-combinations in alphabet soup." In your toy experiment, it took only a second for the genetic computer to run a Hamiltonian path problem. But it took you a week to fish the right answer out of the soup.

LA: It took me a long time to fish it out. In my experiment, one molecule had the right answer, but several trillion molecules had the wrong answer.

NW: That seems kind of wasteful.

LA: Yes, but in fact this waste appears unavoidable for certain kinds of problems, even for electronic computers. The world sometimes poses questions where there is no good way to get to the answer except to try every possibility. You can either do the work electronically, in which case you try each answer sequentially; or you can do it with DNA, in which case you try them all at once.

As for fishing the answers out, yes, it took me a long time to do that - a week of work. And it will always take time to fish answers out of DNA. But with the toy experiment, the reason it took me a week to fish the answer out was because I was doing everything by hand. I think I could fish it out quicker now.

NW: How fast could answers be fished out under optimal conditions?

LA: This gets down to the question of basic operations. When I was doing the basic operations, each operation was taking hours. In some cases, they were taking half a day. We would like to get the operations down so they take seconds. Are we there yet? No. Do we have some hope of being there? Yes. Will we actually achieve it? I don't know. Of course, the quicker we can make it, the more computation we can do, and the larger problems we can solve.

NW: DNA computing has another advantage: it doesn't use any energy.

LA: Very little.

NW: In regard to the work you're doing in the Laboratory for Molecular Science, is it true you hope to decode a scrambled message using the NSA's Data Encryption Standard (DES)?

LA: If you do build a molecular computer, or any kind of computer, it always has certain virtues and certain deficits. So a question that was very important was, with our understanding of what a molecular computer might look like, what kind of problems is it going to be good at solving? And were any of those problems ones that real people cared to solve? This is the question: what is the "killer application"? And we don't know what the killer application is for DNA computers. I don't think we have found the killer application yet. But the best example we have so far is breaking secret codes. So that has become a sort of focal point, a benchmark to consider. If we could break DES, that would be quite an achievement, since electronic computers, until very recently, couldn't have broken it at all. Now there are a few, the best computers in the world, that might be able to do it in a few months. But it is a huge and very difficult problem. So we have considered whether our DNA computer could solve that: what would the requirements be, how fast would it have to run, how much DNA would it use, and so on. The specs come out pretty good, when we consider them theoretically. But we're a long way from actually building it and being able to break DES. So it is another one of the goals, landmarks, that we are looking for.
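The scale of the benchmark is easy to state: DES uses a 56-bit key, so an exhaustive attack must try up to 2^56 candidates. A quick sketch of the numbers (the billion-keys-per-second rate is an illustrative assumption, not a measured figure from any machine):

```python
keys = 2 ** 56                      # DES keyspace: ~7.2e16 keys
print(f"{keys:.1e} candidate keys")

# A hypothetical sequential search at one billion keys per second:
seconds = keys / 1e9
print(f"{seconds / 86400 / 365:.1f} years")   # roughly 2.3 years
```

A DNA computer would attack this differently: rather than testing keys one after another, every candidate key would be represented by its own molecule and tested in parallel in one set of chemical operations.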


NW: You don't like to take credit for inventing RSA: Rivest and Shamir recruited you to work on this. In retrospect, was it, as with DNA computing, a case of being in the right place at the right time?

LA: It's not quite that simple. A couple of comments. One, being at MIT at any time is the right place for something, because it is a wonderful, extremely exciting place with a lot of smart people. The other thing is, in this case, I didn't approach the question. The question sort of hunted me down. It just so happened that Ron [Rivest] and Adi [Shamir] had to use 'number theoretic complexity' - an extraordinarily esoteric area of mathematics - to make their ideas work. This happened to be my specialty. So, I was in the right place at the right time and, accidentally, had the right knowledge.

NW: There wasn't anyone else down the hall with the same knowledge?

LA: No. There probably weren't seven people in the world with that kind of knowledge. It was just some esoteric aspect of mathematics that was my esoteric aspect of mathematics. And that happened to be the road which cryptography ran through. At the time, I thought it was entirely mundane. I also was very interested in purity in mathematics. It's one of those curious accidents in life, because certainly it has brought me many blessings. If anybody knows of my work it's because of RSA, even though I certainly would like to think that it's not the best work I have done.

NW: Are you still interested in encryption as a field of research?

LA: You know, the truth is, it never struck my fancy. I have a great deal of respect for the researchers who do it and I have done a considerable amount of it myself. But my great love has always been number theoretic complexity. The best work I ever did was on the problem of writing an algorithm to distinguish prime numbers from composite numbers. The most esoteric kind of thing. But, for me, there's a great deal of beauty in it.
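The work Adleman refers to is the Adleman-Pomerance-Rumely primality test, which is far too involved to sketch here. The much simpler Miller-Rabin probabilistic test below conveys the flavor of the problem: an algorithm that, given only arithmetic on the number itself, distinguishes primes from composites without factoring.

```python
import random

def is_probable_prime(n, rounds=20):
    # Miller-Rabin probabilistic primality test -- a simpler relative
    # of the deterministic test Adleman co-developed.
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):      # quick trial division
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                 # a witnesses that n is composite
    return True

print(is_probable_prime(2**61 - 1))      # a Mersenne prime -> True
```

The test never errs on primes; a composite slips through a single round with probability at most 1/4, so 20 rounds drive the error probability below one in a trillion.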

NW: That is what you are most proud of, what you'd want on your tombstone?

LA: Yes! That is by far the best work I've ever done. Much better than whatever work I did on RSA.

NW: One of your more dubious honors is being the godfather of the computer virus. You coined the term in 1983, after your student, Fred Cohen, created the very first one. Any thoughts, in retrospect, about the wisdom of unlocking that Pandora's Box?

LA: First of all, let me clarify something. The term "computer virus" existed in science fiction well before Fred Cohen and I came along. Several authors actually used that term in science fiction prior to 1983. I don't recall ever having seen it, perhaps it was just a term whose time had come. So I did not invent the term. I just named what we now consider computer viruses "computer viruses." But it's Fred's invention.

As to the wisdom of unleashing it, Fred and I had several long discussions about the publication of the idea: how much we should publish, whether Fred should write the actual code. In the end, my perspective was, it's very hard to be wise enough to really guess the implications of your work. Would it be good or bad? One could argue that publication would be good, because it would alert people to a new threat. One could argue publication would be bad, because viruses could obviously be used to commit crimes. So in the end, I sort of fell back on the mission of a university, which is to produce and disseminate new knowledge. Looking back on it, having published the paper and brought it to the attention of many people, several things have happened. One is, a mini-industry was spawned: you can buy all these virus protection things. Another thing that happened: according to ABC News, the US unleashed a virus on Iraq's computer system during Desert Storm. And of course, a lot of computer viruses have caused a lot of people a lot of grief. So, you figure it out. Is it good or bad? Who knows.

NW: Do you think there's any value, beyond mere nuisance value, in these kinds of programs?

LA: Fred has made the point that they could do useful things. But I think that something more fundamental is involved here. Computer viruses are an inevitable part of computer networks, because computer networks are 'substrates' for evolution. If things can evolve they will evolve. We're back to biology. Computer viruses are things that 'belong' on computer networks. They are as natural for that environment as biological viruses are for our physical environment.

I think the flora and fauna of the computer networks are going to grow. It's an interesting world, and we are not going to be able to limit it. There's even the argument that these things will replace us.

NW: How do you mean "replace us": run amok and destroy us?

LA: No, more subtle than that. We get into the questions of what life is and what intelligence is. It turns out that silicon is a substrate for information and ideas, just like your brain is a substrate for information and ideas. Ideas can search for substrate - other brains. Take religion. Islam and Christianity are fighting it out for brains. It's the ideas that are competing, evolving and changing. Well, such things can happen on silicon. The 'ideas' the silicon contains can become very sophisticated. Silicon may be less vulnerable than we are, physically. So at some point, why shouldn't silicon just sort of evolve in its own direction? Just do its own thing. Become another living entity?

NW: Are you saying that while it's inevitable that humans will eventually become extinct, there's no reason to assume machines ever will? And in that sense, they might outlive us?

LA: Not exactly. I agree with the geneticists: we don't have an endless future as the dominant creature on earth or in the universe. It's not going to happen. Hopefully, it won't be a catastrophic end to us - there are still apes and dogs around, they're just not the dominant creatures. That's all fine.

But this is a little different. The theory is that we evolved. Now, the cell is not that smart, but it's a whole lot smarter than our current computers. We wouldn't be offended if computers got as smart as cells. That wouldn't bother us. Well, humans are only 3 billion years from the cell, and only a couple million years from our primate predecessors, who didn't have cultures or consciousness. So 10 million years from now - if we're still around - what's a computer going to be like? They have only existed for about 50 years, and look at what they can do. So, there seems to be no reason why 10 million or 100 million or a billion years from now, if humans are still around, computers won't evolve to be the dominant creature.


NW: Are you still active with regard to your HIV work?

LA: Yes. This year, I published another paper on the homeostasis hypothesis with David Wofsy (a UC San Francisco immunologist). I see now that some other people are publishing on it. It still goes on for me, but I don't spend very much time on it. To me, the paper that we published was really the smoking gun for this hypothesis. To me, it's quite clear that my homeostasis hypothesis is correct. But I have been unsuccessful at sufficiently impressing the AIDS research community to give this the thought and attention that, in my opinion, it deserves.

NW: Can you explain the research?

LA: It takes a while to describe the theory. You have T-cells running around in your blood. They come in two flavors: T-4 and T-8. If you were to lose a lot of blood, you would lose some T-cells. If we were to measure your T-cell count a year later, would we expect that your T-cell count would be normal again? Sure. Because we know that blood regenerates itself. The replacement mechanism is called a homeostatic mechanism.

So, we envision that somewhere in the body, something is measuring - like a dip stick - how many T-cells you have. If you don't have enough, it makes new ones. When you have enough, it stops.

But what if I could go inside the body and just kill the T-4 cells? The T-4s and the T-8s look identical. You can't tell them apart under a microscope. The only difference is that one has a molecule called CD-4 on the surface, the other has a molecule called CD-8. So, if I were to kill off some T-4 cells in your body, a year later, what would we expect to find? We would expect them to come back.

We could infer that there's one mechanism that measures T-4s, and one that measures T-8s. If you get too low, they just bring it back up, right? Well, it turns out that's wrong. There's only one dip stick. That is, the body, for whatever reasons, just measures T-cell counts.

It's sort of like M&Ms. If you are low on M&Ms, you go take more M&Ms. But when you buy M&Ms, you can't say: "I'm low on the green ones. Give me some more green ones." The only way you can buy M&Ms is to buy all the colors. That's the way T-cells appear to work. If you are low on T-cells, the body gives you some of both colors. So if you just lose T-4, the body acts just as if you bled. It makes new T-cells - some T-4s and some T-8s. If you lost all the T-4s, you'd still get some T-4s and T-8s.

NW: Over time, you wouldn't have any green M&Ms left?

LA: Exactly right. So as it turns out, HIV only destroys the green M&Ms. After a while, you have just a few green ones left. When you have sufficiently few T-4s, you have clinical symptoms of a failed immune system, and that's called AIDS.
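The single-"dip stick" idea described above is easy to see in a toy simulation. This is purely an illustration of the hypothesis as stated in the interview, not a biological model: the function name, kill rate, and cell counts are all invented numbers. HIV removes only T-4s, but the replacement mechanism watches only the total and refills with both kinds, so the T-4 population drains away while the total count looks normal.

```python
# Toy sketch of the single-"dip stick" homeostasis hypothesis.
# All rates and counts are hypothetical, chosen only for illustration.

def simulate(steps, hiv_kill_rate, t4=500.0, t8=500.0, target=1000.0):
    """Each step: HIV kills a fraction of T-4 cells only; the body then
    tops the *total* T-cell count back up to `target`, replacing losses
    with an even mix of T-4s and T-8s (all 'colors' of M&Ms)."""
    for _ in range(steps):
        t4 -= hiv_kill_rate * t4           # HIV eats only green M&Ms (T-4)
        deficit = target - (t4 + t8)       # one dip stick: total count only
        t4 += deficit / 2                  # refill is blind to color:
        t8 += deficit / 2                  # half new T-4s, half new T-8s
    return t4, t8

t4, t8 = simulate(steps=200, hiv_kill_rate=0.05)
print(round(t4), round(t8))  # T-4s nearly gone; total still ~1000
```

In this sketch the T-4 count shrinks geometrically (by a factor of 1 - rate/2 per step) while the total is pinned at the target, which is exactly the "running out of green M&Ms" picture.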

NW: And if you could reverse the process...

LA: That's right. If you find that you don't have enough green M&Ms in your bowl, then the thing to do is to start eating brown M&Ms. Then, when you start pouring in more M&Ms, you'll get some new greens. After a while, the greens start going up. The exact process that got you in trouble reverses it. So, the idea is that you can intentionally eat the T-8s, and the body's homeostatic mechanism will re-balance them.

NW: As AIDS therapy, it's kind of counterintuitive. You're short on T-cells, so let's kill some more...

LA: Oh yes. Of course, that doesn't help the scientists in the AIDS research world feel comfortable. Those T-8s may be all that's keeping this guy alive, right? But I think they are wrong. My view is that HIV is not this rapacious killer of T-cells. At the end, it's true that it's pretty rapacious. But during the 10 years that patients are somewhat asymptomatic, it's just a casual infection moving very slowly. It is just unbalancing your green M&Ms. So what you want to do is, every time the virus eats a green M&M, remove a brown one too. Then you'll be fine.
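The counterintuitive therapy can be sketched the same way, as a toy illustration of the interview's reasoning rather than any real protocol (names and rates are invented): if a T-8 is removed every time the virus destroys a T-4, the color-blind refill restores an even mix, and the T-4 count holds steady instead of draining away.

```python
# Toy sketch of the "remove a brown M&M for every green one eaten" idea.
# Hypothetical rates and counts, for illustration only.

def simulate_therapy(steps, hiv_kill_rate, t4=500.0, t8=500.0, target=1000.0):
    """Every time HIV eats a green M&M (a T-4), a brown one (a T-8) is
    removed as well; the single-dip-stick refill then adds an even
    T-4/T-8 mix, so the balance between the two is preserved."""
    for _ in range(steps):
        eaten = hiv_kill_rate * t4    # T-4s lost to the virus this step
        t4 -= eaten
        t8 -= eaten                   # therapy: one brown per green
        deficit = target - (t4 + t8)  # the dip stick sees 2 * eaten
        t4 += deficit / 2             # refill half green...
        t8 += deficit / 2             # ...half brown: counts hold steady
    return t4, t8

t4, t8 = simulate_therapy(steps=200, hiv_kill_rate=0.05)
print(round(t4), round(t8))
```

In this sketch the losses and the even refill cancel exactly each step, so both counts sit at their starting values - the "then you'll be fine" outcome.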

NW: Beautiful theory.

LA: Thanks. Talk to your AIDS researchers.


NW: One last question: you are 50 now. That's kind of the peak of your career - you're established, people know your name, and you still have maybe 20 to 30 years of productive science in you. Where do you go now?

LA: I am really happy about the state I find myself in. I am very blessed. I have a wonderful family and I love my work at USC. Physically, being 50 is just not a great thing. But mentally, I consider myself fortunate. I may no longer be capable of the intellectual weight-lifting that I described for young mathematicians: to sit in a room for seven years and think for 16 hours a day about the same problem. Nonetheless, I can still think pretty well for significant stretches.

So at 50 most things have not changed. I'll keep trying to do what I have always loved - discovering new things.

See Also: Sidebar: Computer Evolution: Survival of the Fittest