Friday, November 17, 2006

THES article from last week

The Times Higher Education Supplement published a feature article of mine last week. As we're on a new edition as of today, I can reproduce it below.

If you wish to cite it, please do so as follows (my suggested title was "Synthetic Biology: Where Top-Down Meets Bottom-Up"...):

Martyn Amos, A chip off Mother Nature's own hard-drive, Times Higher Education Supplement, November 9, 2006, pp. 16-17.



Possibly the most unusual reviewing assignment I have ever accepted came in 2002, when Guinness World Records asked me to help validate a claim made by a group of Israeli scientists to have built the "world's smallest computer". What made this machine radically different was not just its incredibly miniaturised state but its basic construction material. Rather than piecing together transistors on a silicon surface, Ehud Shapiro and his team at the Weizmann Institute had fabricated their device out of the very stuff of life itself - DNA.

Three trillion copies of their machine could fit into a single tear drop. This miracle of miniaturisation was achieved not through traditional technology but through a breakthrough in the emerging field of molecular computing. The team used strands of DNA to fuel these nanomachines, their latent energy freed by enzymatic "spark plugs". These were not computers in any traditional sense. Their computational capabilities were rudimentary and, rather than using the familiar zeroes and ones of binary code, their "software" was written in the vocabulary of the genes - strings of As, Gs, Cs and Ts.

One of the main motivations for shrinking traditional computer chips is to extract the maximum amount of computational power from a limited space. By placing ever smaller features on the silicon real estate of modern processors, chip-makers such as Intel continually try to keep in step with Moore's Law - the famous observation that computer power roughly doubles every 18 months.

Shapiro's computer was never going to win any prizes for mathematical muscle. All it could do was analyse a sequence of letters and determine whether or not it contained an even number of a specific character. Nevertheless, it represented the state of the art in a scientific field that had been in practical existence for less than a decade. In 1994, Len Adleman (previously better known as one of the co-inventors of the main Internet encryption scheme, and the man who gave a name to what we now know as computer viruses) stunned the computing world by demonstrating the feasibility of performing computations using molecules of DNA.
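In computational terms, Shapiro's device was a two-state finite automaton, and its entire repertoire can be mimicked in a few lines of ordinary code. A minimal sketch (the function name and test strings are mine, not taken from the Weizmann paper):

```python
def has_even_count(sequence, symbol):
    """Two-state finite automaton: track whether the number of
    occurrences of `symbol` seen so far is even or odd."""
    state_even = True  # start in the "even" state (zero occurrences)
    for ch in sequence:
        if ch == symbol:
            state_even = not state_even  # flip state on each match
    return state_even

print(has_even_count("abbaab", "a"))  # three a's -> False
print(has_even_count("abba", "a"))    # two a's -> True
```

The point is not the triviality of the algorithm but the substrate: in Shapiro's machine, the state transitions were carried out by DNA and enzymes rather than by a processor stepping through instructions.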

Rather than representing information as electronic bits inside a silicon chip, Adleman showed how to solve a problem using data encoded as sequences of bases on DNA molecules. One of his motivations lay in the storage capacity of DNA; nature has data compression down to a fine art. Every living cell in your body contains a copy of your unique 3Gb genome, the data equivalent of 200 copies of the Manhattan telephone directory. Adleman wanted to use the nature of chemical reactions to perform massively parallel computations on this molecular memory.
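The density claim is easy to sanity-check: with a four-letter alphabet, each base carries two bits of information, so a three-billion-base genome holds on the order of 750 megabytes. A back-of-envelope calculation (the figures are round numbers, not precise genome statistics):

```python
# Four possible bases (A, G, C, T) means 2 bits of information per base.
bases = 3_000_000_000          # roughly 3 billion bases in the human genome
bits = bases * 2               # 2 bits per base
megabytes = bits / 8 / 1_000_000
print(round(megabytes))        # -> 750
```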

A single test tube could contain trillions of individual DNA strands, and each molecule could encode a possible answer to a particular problem. The idea was to exploit the fact that enzymes and other biological tools act on every strand in a tube at the same time, quickly weeding out bad solutions and giving the potential for parallel processing on a previously unimagined scale.
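The generate-and-filter strategy can be caricatured in silico. Adleman's actual experiment found Hamiltonian paths in a seven-city graph; the toy graph and function below are my own illustration, with the "strands" generated and checked one at a time rather than all at once:

```python
from itertools import permutations

# Toy directed graph (my example, not Adleman's seven-city instance):
# edges given as (from, to) pairs over vertices 0..3.
edges = {(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)}
n = 4

def hamiltonian_paths(start, end):
    """Brute-force analogue of Adleman's experiment: every candidate
    "strand" (an ordering of the vertices) is examined, and those that
    do not follow the graph's edges are weeded out."""
    paths = []
    for middle in permutations(set(range(n)) - {start, end}):
        path = (start,) + middle + (end,)
        if all((a, b) in edges for a, b in zip(path, path[1:])):
            paths.append(path)
    return paths

print(hamiltonian_paths(0, 3))  # -> [(0, 1, 2, 3)]
```

The crucial difference is that where this loop inspects candidates one after another, the chemistry operates on every strand in the tube simultaneously.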

Adleman's initial paper led to the emergence of a fully fledged field. A rash of papers appeared, describing proposals to use DNA to crack government encryption schemes or build real, "wet" memories more capacious than the human brain. After this flurry of untamed optimism - when some seriously thought that molecular machines could give traditional computers a run for their money - DNA computing matured into a more thoughtful discipline. Scientists no longer talk seriously about taking on silicon machines and are instead seeking out niche markets for their molecular machines, areas such as medical diagnostics and drug delivery, where traditional devices and methods are too large, invasive or prone to error.

Shapiro's simple computer was one example of such an application; a small step towards eventual "on-site" diagnosis and treatment of diseases such as cancer. A later version of his machine was capable (in a test tube, at least) of identifying the molecules that signal the presence of prostate cancer and then releasing a therapeutic molecule to kill the malignant cells. Shapiro and his team have spoken about their aim of creating a "doctor in a cell", a reprogrammed human cell that could roam around the body, sniffing out and destroying disease. As physicist Richard Jones explains in his book Soft Machines, the Fantastic Voyage scenario of humans in a miniaturised submarine is "quite preposterous", but that doesn't rule out serious work into trying to engineer existing living systems to act as "medibots" able to detect and control disease at its source.

A growing band of experts is slowly coming together to form a whole new vanguard at the frontiers of science, where boundaries between biology, chemistry, engineering and computing become fluid and ever-changing. This is the new world of synthetic biology. "We want to do for biology what Intel does for electronics," states George Church, professor of genetics at Harvard University. The Massachusetts Institute of Technology's Tom Knight is even more blunt: "Biology is the nanotechnology that works."

DNA is so much more than an incredibly compact data storage medium. As physicist Richard Feynman explained: "Biology is not simply writing information; it is doing something about it." Floating inside its natural environment - the cell - DNA carries meaning, used to generate signals, make decisions, switch things on and off, like a program that controls its own execution. DNA, and the cellular machinery that operates on it, is the original reprogrammable computer, pre-dating our efforts by billions of years. By re-engineering the code of life, we may finally be able to take full advantage of the biological "wetware" that has evolved over those aeons. We are dismantling living organisms and rebuilding them - this time according to a pre-planned design. It is the ultimate scrap-heap challenge.

As pioneers such as Alan Turing and John von Neumann discovered, there are direct parallels between the operation of computers and the gurglings of living "stuff" - molecules and cells. Of course, the operation of organic, bio-logic is more noisy, messy and complex than the relatively clear-cut execution of computer instructions. But rather than shying away from the complexity of living systems, a new generation of synthetic biologists is seeking to harness the diversity of behaviour that nature offers, rather than trying to control or eliminate it. By building devices that use this richness of behaviour at their very core, we are ushering in a new era, both in practical devices and applications and in how we view the very notion of computation and of life itself.

The questions that drive this research include the following: Does nature "compute" and, if so, how? What does it mean to say that a bacterium is "computing"? Can we rewrite the genetic programs of living cells to make them do our bidding? How can mankind benefit from this potentially revolutionary new technology? What are the dangers? Could building computers with living components put us at risk from our own creations? What are the ethical implications of tinkering with nature's circuits? How do we (indeed, should we) reprogramme the logic of life?

The dominant science of the new millennium may well prove to be at the intersection of biology and computing. As biologist Roger Brent argues: "I think that synthetic biology will be as important to the 21st century as [the] ability to manipulate bits was to the 20th." This isn't tinkering around the edges, it's blue-skies research - the sort of high-risk work that could change the world or crash and burn. I took a huge risk in the 1990s when I gambled on DNA computing as the topic of my PhD research - a field with a literature base, at the time, of a single article.

It is exhilarating stuff, and it has the potential to change forever our definition of a "computer". But most researchers are wary of promising too much, preferring to combine quiet optimism with grounded realism. As researcher Drew Endy explains: "It'll be cool if we can pull it off. We might fail completely. But at least we're trying."

1 comment:

Anonymous said...

Hi Martyn, THES have improved their web content now, so the article which you mention is now available on the THES website without subscription. Unfortunately I missed your talk in Manchester last year, will you be doing similar stuff at the Biological complexity seminar?