I had a wonderful time at the Edinburgh Book Festival over the weekend; a full venue and books to sign afterwards makes for a happy author! Here is a lightly edited version of what I had to say.
In 1959, a great personal hero of mine, the Nobel Prize-winning physicist Richard Feynman, gave a visionary talk entitled “There's Plenty of Room at the Bottom”. In his speech, Feynman outlined the possibility of individual molecules, even individual atoms, making up the component parts of computers in the future. Remember, this was back when computers filled entire rooms and were tended by teams of lab-coated technicians, so the idea that you could compute with individual molecules was pretty outlandish. I was struck by a quotation in Oliver's book, attributed to the microbiologist A. J. Kluyver, who said, over fifty years ago, that “The most fundamental character of the living state is the occurrence in parts of the cell of a continuous and directed movement of electrons.” At their most basic level, computers work in exactly the same way, by funnelling electrons around silicon circuits, so I think this hints at the linkages between biology and computers that are only now coming to fruition.
Indeed, it wasn't until 1994 that someone demonstrated, for the first time, the feasibility of building computers from molecular-scale bits. Feynman's vision had waited, not only for the technology to catch up, but for a person with the required breadth of understanding and the will to try something slightly bizarre. That person was Len Adleman, who won the computer science equivalent of the Nobel Prize for his role in the development of the encryption scheme that protects our financial details whenever we buy something on the Internet. Len has always had an interest in biology; when one of his students showed him a program that could take over other programs and force them to replicate it, Len said “Hmmm.... that looks very much like how a virus behaves.” The student was Fred Cohen, author of the first ever computer virus, and Len's term stuck. (Update, 2/9/07: Cohen made the first reference to a "computer virus" in an academic article, but did not write the first virus).
One night in the early 1990s, Len was lying in bed reading a classic molecular biology textbook. He came across the section describing a particular enzyme inside the cell that reads and copies DNA, and he was struck by its similarity to an abstract device in computer science known as the Turing machine. By bringing together two seemingly disparate concepts, Adleman realised at once that, in his own words, “Geez, these things could compute.”
He found a lab at the University of Southern California, where he is a professor, and got down to building a molecular computer. He knew that DNA, the molecule of life that contains the instructions needed to build every organism on the planet, from a slug to... John Redwood, can be thought of as a series of characters from the set A, G, C and T, each character being the first letter of the name of a particular chemical. The title of the film Gattaca, which depicts a dystopian future in which genetic discrimination defines society, is simply a string of characters from the alphabet A, G, C and T.
As Oliver highlights in his own book, molecular biology has always been about the transformation of information, usually inside the living cell. This information is coded in the AGCT sequences of genes and in the proteins that those genes encode. Adleman immediately saw how this mechanism could be harnessed, not to represent proteins, but to store digital data, just as a computer encodes a file as a long sequence of zeroes and ones.
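For readers who like to see things concretely, here is a minimal sketch, in Python, of one way bits could be mapped onto the four-letter DNA alphabet. The two-bits-per-base mapping is purely my own illustration, not the encoding Adleman actually used (his sequences were chosen for their hybridisation properties, not for coding density).

```python
# Purely illustrative: map two bits onto each nucleotide. Real DNA-computing
# encodings are chosen to avoid unwanted hybridisation, not for coding density.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def bits_to_dna(bits):
    """Encode a bit string (of even length) as a DNA sequence."""
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_to_bits(seq):
    """Decode a DNA sequence back into the original bit string."""
    return "".join(BASE_TO_BITS[base] for base in seq)

message = "0110100011"
strand = bits_to_dna(message)      # "CGGAT"
assert dna_to_bits(strand) == message
```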
Adleman decided to use this fact to solve a small computational problem. Some of you might have heard of the Travelling Salesman Problem, and Adleman's was a variant of that: given a set of cities connected by flights, does there exist a sequence of flights that starts and ends at particular cities, and which visits every other city exactly once? This problem is easy to describe, but fiendishly difficult to solve for even a relatively small number of cities. This inherent difficulty is what made the problem interesting in Adleman's eyes, “interesting” being, to a mathematician, a synonym for “hard”.
Len decided to build his computer using the simplest possible algorithm: generate all possible answers (right or wrong), and then throw away the wrong ones. He would build a molecular haystack of answers, and then discard huge swathes of hay encoding bad answers until he was left with the needle encoding the correct solution (of which there might be just a single copy). For Adleman, the key to his approach was that you can make DNA in the laboratory. A machine the size of a microwave oven sits in a lab connected to four pots, each containing either A, G, C or T. Type in the sequence you require, and the machine gets to work, threading the letters together like molecular beads on a necklace, making trillions of copies of your desired sequence.
Adleman ordered DNA strands representing each city and each flight for his particular problem. Because DNA sticks together to form the double helix in a very well-defined way, he chose his sequences carefully, such that city and flight strands would glue together like Lego blocks to form long chains, each chain encoding a sequence of flights. Because of the sheer numbers involved, he was pretty sure that a chain encoding the single correct answer would self-assemble. The problem then was to get it out. In a way, Len had built a molecular memory, containing a huge file of lines of text. What he then had to do was sort the file, removing lines that were too long or too short, that started or ended with the wrong words, or which contained duplication. He used various standard lab techniques to achieve this, and, after about a week of molecular cutting and sorting, he was left with the correct solution to his problem.
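In silicon, the same generate-and-filter idea looks something like the sketch below. The cities and flights here are invented for illustration (Adleman's real instance used a seven-city map), but the logic is the same: generate every ordering, then discard any whose consecutive pairs are not real flights.

```python
from itertools import permutations

# A made-up instance: directed flights between four cities. Adleman's real
# experiment used seven cities; this smaller map is just for illustration.
flights = {("Atlanta", "Boston"), ("Boston", "Chicago"), ("Atlanta", "Detroit"),
           ("Boston", "Detroit"), ("Detroit", "Chicago"), ("Chicago", "Detroit")}
cities = {"Atlanta", "Boston", "Chicago", "Detroit"}
start, end = "Atlanta", "Chicago"

def find_route(cities, flights, start, end):
    """Generate every candidate ordering, then filter out the bad ones:
    the in-silico analogue of building and sieving the molecular haystack."""
    for order in permutations(cities - {start, end}):
        route = (start,) + order + (end,)
        # keep only routes in which every consecutive pair is an actual flight
        if all((a, b) in flights for a, b in zip(route, route[1:])):
            return route
    return None

print(find_route(cities, flights, start, end))
# ('Atlanta', 'Boston', 'Detroit', 'Chicago')
```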
The example that he solved could be figured out in a minute by a bright 10-year-old using a pen and paper. But that wasn't the point. Adleman had realised, for the first time, Feynman's vision of computing using molecules. After he published his paper, there was a flood of interest in the new field of DNA computing, a tide on which I was personally carried. The potential benefits were huge, since we can fit a vast amount of data into a very small volume of DNA. If you consider that every cell with a nucleus in your body contains a copy of your genome - 3 gigabytes of data, corresponding to 200 copies of the Manhattan phone book – you begin to understand just how advanced nature is in terms of information compression. Suddenly my 4 gig iPod nano doesn't look quite so impressive.
After a few years, though, people began to wonder if molecular computing would ever be used for anything important. They were looking for the “killer application”: the thing that people are willing to pay serious money for, like the spreadsheet, which persuaded small businesses to buy their first ever computer. The fundamental issue with Adleman's approach is tied to the difficulty of the problem: as the number of cities grows only slightly, the amount of DNA required to store all possible sequences of flights grows much more quickly. A small increase in the number of cities quickly leads to a requirement for bathtubs full of DNA, which is enough to induce hysterical laughter in even the sanest biologist. Indeed, it was estimated that if Len's algorithm were to be applied to a map with 200 cities in it, the DNA memory required to store all possible routes would weigh more than the Earth.
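For the curious, the back-of-envelope arithmetic runs roughly as follows. The figures are my own rough assumptions (20 bases per city, one strand per candidate route, and every ordering of intermediate cities counted), so treat it as a sketch of the scale of the problem rather than a reconstruction of the published estimate.

```python
import math

cities = 200
bases_per_city = 20                       # assumed length of each city's DNA "name"
strand_length = cities * bases_per_city   # nucleotides in one candidate route
grams_per_base = 330.0 / 6.022e23         # rough mass of a single nucleotide, in grams
mass_per_strand_g = strand_length * grams_per_base

# Candidate routes: every ordering of the 198 intermediate cities. Work in
# log space, since 198! is far too large for ordinary floating point.
log10_routes = sum(math.log10(k) for k in range(2, cities - 1))
log10_total_kg = log10_routes + math.log10(mass_per_strand_g) - 3.0   # g -> kg

print(f"candidate routes: about 10^{log10_routes:.0f}")
print(f"DNA required: about 10^{log10_total_kg:.0f} kg; the Earth is about 6 x 10^24 kg")
```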
It would appear that DNA computing has reached the end of the line, if we are to insist on applying it to computational problems in a head-to-head battle against traditional silicon-based computers. Let's be straight: you're never going to be able to go into PC World and buy a DNA-based computer any time soon. When DNA computing first emerged as a discipline, I was dismayed to see a rash of papers making claims that within a few years we'd be cracking military codes using DNA computers and building artificial molecular memories vastly larger than the human brain. I was dismayed because I knew what had happened 30 years previously to the embryonic field of artificial intelligence. Again, hubristic claims were made for that discipline by its young Turks, ranging from personal robot butlers to automated international diplomacy. When the promised benefits failed to materialise, AI suffered a savage backlash in terms of credibility and funding, from which it is only just beginning to recover. I was very keen to avoid the same thing happening to molecular computing, but I, like many others, knew that we needed to look beyond simply using DNA as a tiny memory storage device.
The next key breakthrough was in realising that, far from being simply a very small storage medium that can be manipulated in a test tube, within its natural environment – the cell – DNA carries meaning. As the novelist Richard Powers observes in The Gold Bug Variations, “The punched tape running along the inner seam of the double helix is much more than a repository of enzyme stencils. It packs itself with regulators, suppressors, promoters, case-statements, if-thens.” Computational structures, that is. DNA encodes a program that controls its own execution. DNA, and the cellular machinery that operates on it, pre-dates electronic computers by billions of years. By re-programming the code of life, we may finally be able to take full advantage of the wonderful opportunities offered by biological wetware.
As Oliver observes in his book, “The world is not just a set of places. It is also a set of processes.” This nicely illustrates the shift in thinking that has occurred in the few years since the human genome was sequenced. The notion of a human “blueprint” is outdated and useless. A blueprint encodes specific locational information for the various components of whatever it's intended to represent, whether it be a car or a skyscraper. Nowhere in the human genome will you find a section that reads “place two ears, one on either side of head” or “note to self: must fix design for appendix.” Instead, genes talk to one another, turning each other (and often themselves) on and off in a complex molecular dance. The genome is an electrician's worst nightmare, a tangle of wiring and switches, where turning down a dimmer switch in Hull can switch off the Manhattan underground system.
The human genome project (and the many other projects that are sequencing other organisms, from the orang-utan to the onion) is effectively generating a biological “parts catalogue”: a list of well-understood genes whose behaviour we can predict in particular circumstances. This is the reductionist way of doing science: break things down, in a top-down fashion, into smaller and smaller parts, through a series of levels of description (for example, organism, molecule, atom). The epitome of this approach is provided by the very well-funded physicists smashing together bits of nature in their accelerators in an attempt to discover what some call the God Particle.
Of course, smashing together two cats and seeing what flies off is only going to give you a limited understanding of how cats work, and it'll probably annoy the cats, so the reductionist approach is of limited use to biologists. Systems biology has emerged in recent years to address this, by integrating information from many different levels of complexity. By studying how different biological components interact, rather than just looking at their structure, as before, systems biologists try to understand biological systems from the bottom up.
An even more recent extension of systems biology is synthetic biology. When a chemist discovers a new compound, the first thing they do is break it down into bits, and the next thing they do is try to synthesise it. As Richard Feynman said just before his death, “What I cannot build I cannot understand.” Synthetic biologists play, not with chemicals, but with the genetic components being placed daily in the catalogue. It's where top down meets bottom up: break things down into their genetic parts, and then put them back together in new and interesting ways. By stripping down and rebuilding microbial machines, synthetic biologists hope to better understand their basic biology, as well as getting them to do weird and wonderful things. It's the ultimate scrapheap challenge.
If we told someone in the field of nanotechnology that we had a man-made device that doesn't need batteries, can move around, talk to its friends and even make copies of itself – and all this in a case the size of a bacterium – they would sell their grandmother for a glimpse. Of course, we already have such devices available to us, but we know them better as microbes. Biology is the nanotechnology that works. By modelling and building new genetic circuits, synthetic biologists are ushering in a new era of biological engineering, where microbial devices are built to solve very pressing problems.
As Oliver notes towards the end of his book, the planet is facing a very real energy crisis. One team is therefore trying to build a microbe to produce hydrogen. Another massive problem facing the developing world is that of arsenic contamination in drinking water. A team here in Edinburgh, made up mainly of undergraduates, has built a bacterial sensor that can quickly and easily monitor arsenic concentrations in a well sample, to within safe tolerances. Jay Keasling, a colleague in California, has recently been awarded 43 million dollars by the Bill and Melinda Gates Foundation to persuade E. coli to make substances that are alien to it, but which provide the raw ingredients for antimalarial drugs. The drug in question, artemisinin, is found naturally in the wormwood plant, but it's not cheap: providing it to 70 per cent of the malaria victims in Africa would cost $1 billion, and they can be repeatedly infected. It's been estimated that drug companies would need to cover the entire state of Rhode Island in order to grow enough wormwood, so Keasling wants to produce it in vats, eventually at half the cost.
There are, of course, safety issues with synthetic biology, as well as legal and ethical considerations. I worry that people have this idea that the bugs we use are snarling microbes that have to be physically restrained for fear of them erupting from a Petri dish into the face of an unfortunate researcher, like something from the Alien movies. In reality, the bacteria used in synthetic biology experiments are docile creatures, pathetic even, the crack addicts of the microbial world. They have to be nurtured and cosseted, fed a very specific nutrient brew. Like some academics, they wouldn't last two minutes in the real world. Of course, nature has a habit of weeding out the weak and encouraging the fit, so we still have to be very careful and build in as many safeguards as are practical. The potential for using synthetic biology for weaponry is, to my mind, overstated. As one of the leading researchers said to me, “If I were a terrorist looking to commit a bio-based atrocity, there are much cheaper and easier ways to do it than engineering a specific microbe – anthrax, say.” Synthetic biology will not, in the foreseeable future, return many “bangs per buck”.
Many of the legal concerns centre on the patenting of gene sequences. This was going on well before synthetic biology, but it recently hit the headlines when Craig Venter, head of the private corporation that famously raced the publicly funded Human Genome Project to a draw, announced that it intended to patent a synthetic organism.
We must remember that Venter is, first and foremost, a businessman, and it is very much in his interests to keep his company in the public eye. The scientific rationale for some of these patents is not immediately clear. But we should also remember that, for every Craig Venter, there are probably ten or more Jay Keaslings, placing their research in the public domain and working in an open and transparent fashion for the greater good.
On that positive note, I'd like to thank you for listening, and I'll stop there.
Friday, August 31, 2007
Friday, August 24, 2007
My contribution to the synthetic biology debate
You may recall that the Royal Society is soliciting opinions on various aspects of the field of synthetic biology. What follows is a lightly edited version of my own submission, which I sent off today.
In what follows, I highlight some concerns and dangers, speaking as someone who has a definite interest in the field flourishing (and would therefore wish to see these concerns addressed).
1. Terminology
The first concern is over the term “synthetic biology” itself. The two main issues are “what does it mean?” and “what does it cover?” As pointed out at the BBSRC workshop, clinicians have used the term for a while to refer to prosthetic devices. In attempting to offer a fixed definition of the term, the community runs the risk of becoming overly exclusive at a premature stage. However, there is also a risk that “synthetic biology” will become a “catch-all” term that is too loosely applied. The emphasis on the term “biology” may also serve to alienate mathematicians, physicists, computer scientists and others, who may (wrongly) feel that they have no expertise to offer a “biological” discipline. As a counter-example, witness the success of the field of bioinformatics, which would appear to fairly represent the disciplinary expertise in the field (in terms of the general composition of the term, rather than the relative lengths of its components). As a very crude experiment, I searched in Google for both “computational biology” and “bioinformatics”; the first term returned around 1,530,000 hits, the second around 14,000,000.
This leads on to the issue of “language barriers”. This is always an issue in any new field that involves the collision of two or more (often very dissimilar) disciplines. Being seen to publicly ask “stupid questions” is a daunting prospect for most young scientists, and yet many of the major breakthroughs have occurred through just that. This opens up the wider debate on inter-disciplinarity in 21st century science, and how we might best prepare its practitioners. Do we give students a broad, shallow curriculum to allow them to make connections, without necessarily having the background to “drill deeper” if required, or do we stick to the “old model” of “first degree” and subsequent training? My own intuition is that it is far better to intensively train in a single field at the outset, and then offer the opportunity to “cherry pick” topics from a different discipline at a later stage. This educational debate is, however, not one that should be the sole preserve of synthetic biology!
2. Expectation Management
Even when biologists and (say) computer scientists can agree a suitable shared terminology, there is still the risk of a mismatch occurring in terms of expectations of what might be achieved. For example, the notion of “scalability” might mean very different things to a computer scientist and a microbiologist. To the former, it means being able to increase by several orders of magnitude the number of data items processed by an algorithm, or double the (already vast) number of transistors we may place on the surface of a computer chip. To a biologist, the idea of scalability might currently be very different:
“What's needed to make synthetic biology successful, Rabaey said, are the same three elements that made microelectronics successful. These are a scalable, reliable manufacturing process; a scalable design methodology; and a clear understanding of a computational model. "This is not biology, this is not physics, this is hard core engineering," Rabaey said.
In electronics, photolithography provides a scalable, reliable manufacturing process for designs involving millions of elements. Biology has a long way to go. What's needed, Rabaey said, is a way to generate thousands of genes reliably in a very short time period with very few errors. The difference between what's available and what's needed is about a trillion to one.”
3. Conceptual Issues
As the leading nanotechnologist (and FRS) Richard Jones has pointed out, his field was dominated from an early stage by often inappropriate analogies with mechanical engineering (e.g., cogs). It may well be the case that we are in danger of the same thing happening with synthetic biology, where computer scientists impose rigid circuit/software design principles on "softer", more “fuzzy” substrates. Jones quotes, on his blog, an article in the New York Times:
“Most people in synthetic biology are engineers who have invaded genetics. They have brought with them a vocabulary derived from circuit design and software development that they seek to impose on the softer substance of biology. They talk of modules — meaning networks of genes assembled to perform some standard function — and of “booting up” a cell with new DNA-based instructions, much the way someone gets a computer going.”
4. Complexity
The issue of "grey goo" has persistently dogged the field of nanotechnology, and it would be tempting to dismiss similar criticisms of synthetic biology as well-intentioned but ultimately uninformed. However, if synthetic biologists are to avoid the mistake made in GM research (that is, appearing arrogant and dismissive, leading to mass public protest and restrictive legislation), then we should acknowledge and address the very real possibility of the biological systems under study behaving in very unpredictable ways. Anyone who has had any degree of contact with biosystems will understand the notion of complexity: components that are connected in an unknown fashion behave in unpredictable ways, which may include evasion of any control mechanisms that have been put in place. As Douglas Kell and his colleagues have observed, it is perfectly possible to alter parameters of a system on an individual basis and see no effect, only to observe wild variations in behaviour when exactly the same tweak is applied to two or more parameters at the same time. Working in an interdisciplinary fashion may address this issue, at least in part, if modellers work closely with bench scientists in a cycle of cooperation. Once again invoking the issue of scalability, studying the behaviour of complex biosystems through modelling alone will quickly become infeasible, due to the combinatorial explosion in the size of the search space (of parameter values). By actually making or modifying the systems under study in the lab, the problem may be reduced to manageable proportions.
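A toy caricature, entirely my own and far cruder than the systems Kell works with, shows how single-parameter tweaks can be invisible while the same tweaks applied together are not, and why exhaustive parameter sweeps rapidly become hopeless:

```python
# A crude caricature of a genetic AND gate: the output only switches on when
# *both* regulators exceed a threshold. Nudging either parameter alone does
# nothing observable; nudging both together flips the behaviour of the system.
THRESHOLD = 1.2

def output_on(reg_a, reg_b, threshold=THRESHOLD):
    return reg_a > threshold and reg_b > threshold

print(output_on(1.0, 1.0))    # False: baseline, gene off
print(output_on(1.5, 1.0))    # False: tweaking the first parameter alone, no effect
print(output_on(1.0, 1.5))    # False: tweaking the second parameter alone, no effect
print(output_on(1.5, 1.5))    # True: the same tweaks applied together flip the gate

# And the scalability problem in one line: a modest model with 20 parameters,
# each sampled at 10 values, already has 10**20 combinations to explore.
print(10 ** 20)
```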
5. Hype
In my own book, Genesis Machines (Atlantic Books, 2006), I illustrate the risk of promising too much at an early stage by describing the story of the “AI winter”. In the 1960s, researchers in artificial intelligence (AI) had promised human-level intelligence “in a box” within twenty years. By issuing such wild predictions, AI researchers set themselves up for a monumental fall, and, when the promised benefits failed to accrue, funding was slashed and interest dwindled. This AI winter (by analogy with “nuclear winter”) affected the field for over 15 years, and it would be disappointing (to say the least) if the same thing were to happen to synthetic biology.
Hubristic claims for synthetic biology should be avoided wherever possible; without singling out particular groups, I have already seen several predictions (again, often conflated with ambitions) that have absolutely no realistic chance of coming to fruition in any meaningful time-scale (if at all). In this more “media savvy” age, perhaps practitioners in synthetic biology might benefit, as their AI counterparts did not, from media training (I have personally benefited (June 2004) from the course provided by the Royal Society, and perhaps the Society might consider a “mass participation” version for new entrants to the field).
Friday, August 17, 2007
For the love of ants
One book on my Amazon wishlist, to be published next week, is titled The Ants Are My Friends. Students of popular music may recognise the phrase as one of the great misheard lyrics of our time, up there with "Beelzebub had a devil for a sideboard", rather than an expression of insect infatuation (the response being, of course, "blowing in the wind").
But I rather like the idea of ants being my friends. I've always held these misunderstood creatures in high regard, and was charmed by the story, recounted in Surely You're Joking, Mr. Feynman! (p. 91 in the Vintage edition), of how Richard Feynman investigated ant trail-following behaviour in his Princeton accommodation. He eventually used his findings to persuade an ant colony to leave his larder; "No poison, you gotta be humane to the ants!"
Anyone who has ever watched an ant colony at work cannot fail to be entranced by its beauty and efficiency. A single colony can strip an entire moose carcass in under two hours, and their work is coordinated in an inherently decentralised fashion (that is, there is no "head ant" giving out orders). An ant colony can be considered as a class of "super-organism", that is, a "virtual" organism made up of many other single organisms. Other examples include bacterial colonies and (arguably) the Earth itself.
Ants communicate remotely by way of pheromones, chemicals that generate some sort of response amongst members of the same species. When ants forage for food, they lay a particular pheromone on the ground once they've found a source. When this signal is detected by other ants, they follow the trail and reinforce it by laying pheromone themselves. Chemical signals also evaporate over time, which allows colonies to "forget" good solutions (i.e., paths) and construct new solutions if the environment changes (e.g., a stone falls onto an existing path).
By describing this mechanism in abstract terms, computer scientists have managed to harness the power of positive feedback in order to solve difficult computational problems. Perhaps the leading scientist in the field of ant colony optimization (ACO) is Marco Dorigo, and he has described how to use models of artificial ants to solve the problem of how to route text messages through a busy network of mobile base stations. We've also done some initial work on how ants build spatial structures, using an abstract model of pheromone deposition to explain how certain species can construct "bullseye"-like patterns of differently-sized objects.
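For anyone curious about what the abstraction looks like when written down, here is a stripped-down sketch of ant colony optimisation on a tiny four-town tour problem. It is my own minimal version in the spirit of Dorigo's work, not his actual algorithm: artificial ants build tours biased towards short, pheromone-rich edges, good tours deposit more pheromone, and evaporation lets the colony forget.

```python
import random

# Symmetric distances between four towns; a deliberately tiny tour problem.
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
n = len(dist)
pheromone = [[1.0] * n for _ in range(n)]

def build_tour():
    """One artificial ant builds a tour, preferring short, pheromone-rich edges."""
    tour = [random.randrange(n)]
    while len(tour) < n:
        here = tour[-1]
        options = [town for town in range(n) if town not in tour]
        weights = [pheromone[here][town] / dist[here][town] for town in options]
        tour.append(random.choices(options, weights=weights)[0])
    return tour

def tour_length(tour):
    return sum(dist[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

best, best_len = None, float("inf")
for _ in range(100):                              # 100 generations of 10 ants
    tours = [build_tour() for _ in range(10)]
    pheromone = [[0.9 * p for p in row] for row in pheromone]   # evaporation
    for tour in tours:
        length = tour_length(tour)
        if length < best_len:
            best, best_len = tour, length
        for a, b in zip(tour, tour[1:] + tour[:1]):
            pheromone[a][b] += 1.0 / length       # shorter tours reinforce more
            pheromone[b][a] += 1.0 / length

print(best, best_len)
```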
Fundamentally, ongoing work in ACO reflects a wider interest in the notion of decentralised control. Rather than controlling everything from "on high" with global instructions, "bottom up" control emphasises the value of small, local interactions in keeping systems running smoothly. Software packages such as Netlogo have brought so-called agent-based modelling to a wider audience. I've just taken on a Ph.D. student to study the evacuation of tall buildings using this approach, and it's clear that, with ever-increasing computational power being available, the notion of simulating large systems of interacting entities will gain increasing influence.
Genesis Machines in the USA
I'm delighted to report that Atlantic have signed a deal to publish Genesis Machines in the USA. It's slated to appear on April 3rd of next year, and will be published by the Overlook Press (preorder here).
Friday, August 10, 2007
Molecules and Marx
My publisher kindly sends me copies of reviews of Genesis Machines that appear from time to time in the press. I was quite surprised to see the book featured in the June issue of the Marxist Review, the monthly theoretical magazine of the Workers Revolutionary Party. In his article, William Westwell invokes Richard Dawkins as the contemporary cheerleader of arch-reductionism and mechanical materialism. But, by concentrating purely on the first half of the book (which, by its very nature, consists largely of historical background), Westwell ignores one of its fundamental arguments: that 21st century science cannot succeed by insisting on the top-down, reductionist paradigm. Science is still, to a large extent, a reductionist enterprise, but the emerging field of systems biology is providing a complementary approach (in a way, occupying the region where top-down meets bottom up). By arguing for a notion of "quality of computation", Westwell reminded me of conversations I have enjoyed in the past with Brian Goodwin, who has argued that "Biology is returning to notions of space-time organisation as an intrinsic aspect of the living condition... They are now described as complex networks of molecules that somehow read and make sense of genes. These molecular networks have intriguing properties, giving them some of the same characteristics as words in a language. Could it be that biology and culture are not so different after all; that both are based on historical traditions and languages that are used to construct patterns of relationship embodied in communities, either of cells or of individuals?" Unfortunately, Westwell appears to have ignored the later detailed discussion of such matters.
Tuesday, July 03, 2007
Royal Society and Synthetic Biology
I was gratified to see a healthy turn-out for the Cafe Scientifique event yesterday evening, and pleased by the range and depth of questions that were asked. I briefly mentioned at the end of the Q&A session that the Royal Society (the UK's preeminent learned society for the sciences) is soliciting views on the emerging field of synthetic biology.
The field's potential impact is huge, not just in terms of technological developments and scientific understanding, but in terms of social impact, legal issues and ethical concerns. It is vital that scientists engage with the wider community, so that the implications of their work may be considered at an early stage (and, indeed, throughout). The Royal Society is therefore asking for submissions (ranging from brief comments to draft discussion documents) on various aspects of synthetic biology, from potential applications to biosecurity risks and governance.
These exercises offer a real opportunity to help shape future policy, so I would encourage potentially interested parties to visit the call website and consider adding their voice to the debate. The deadline for submissions is August 27th.
Thursday, June 28, 2007
Event in Manchester
If you're in Manchester next Monday and are stuck for something to do in the evening, why not pop along to Cafe Scientifique, where I'll be speaking and (hopefully) generating some discussion? The format is pretty relaxed, with a 30-40 minute presentation from me, followed by a 15 minute break for refreshments, then an open-ended discussion session.
Genesis Machines: Engineering Life
Monday 2nd July 2007 at 6:30pm in Cafe Muse (directions are here).
Although anticipated as early as the 1950s, the idea that we could somehow build working computers from organic components was merely a theoretical notion until November 1994, when a scientist announced that he had built the world's first molecular computer. Emerging from a laboratory in Los Angeles, California, his collection of test tubes, gels and DNA lay at the heart of a totally new and unexplored region of the scientific landscape.
Millions of dollars are now being invested worldwide in molecular computing and synthetic biology research. DNA, the code of life, is right now being used at the heart of experimental computers. Living cells are being integrated with silicon nanotubes to create hybrid machines, as well as being routinely manipulated to add entirely new capabilities. Preparations are being made to build entirely new organisms, never seen before in nature.
This research raises amazing new questions. Does nature 'compute', and, if so, how? Can natural systems inspire entirely new ways of doing computation? How can humanity benefit from this potentially revolutionary new technology? What are the dangers? Could building computers with living components put us at risk from our own creations? What are the ethical implications of tinkering with nature's circuits? In this event we'll examine what it means to reprogram the logic of life.
Thursday, June 14, 2007
Two things
Apologies for the lack of recent activity on the blog; this time of year is always the busiest for academics, what with exam boards and so on. Hopefully things will calm down in the next week or two.
Two items of note: today marks both the publication of Genesis Machines in paperback, and the release of the programme for this year's Edinburgh International Book Festival.
Not much has changed in the paperback, apart from a few additional acknowledgements and some minor changes to the notes and references (which are available online here.)
As for the Book Festival, I'm privileged to be appearing on stage once again with Oliver Morton from Nature, where we'll be discussing "The Future of Nature" as part of the "Genes and Society" Festival theme. Oliver will be talking about "intelligent plants", and I'll be holding forth on biocomputing and synthetic biology (Craig Venter's recent patent swoop has given me lots of nice new discussion material). Anyway, we're appearing on Sunday the 26th of August, and full programme details are available here. The event is sponsored by the ESRC Genomics Policy and Research Forum, and it's one of five that they are supporting.
Friday, May 25, 2007
DNA hash pooling
The draft paper that came out of our trip to Paris has now been lodged with the arXiv e-print server.
DNA Hash Pooling and its Applications
Dennis Shasha (Courant Institute, New York University), Martyn Amos (Computing and Mathematics, Manchester Metropolitan University)
Abstract: In this paper we describe a new technique for the characterisation of populations of DNA strands. Such tools are vital to the study of ecological systems, at both the micro (e.g., individual humans) and macro (e.g., lakes) scales. Existing methods make extensive use of DNA sequencing and cloning, which can prove costly and time consuming. The overall objective is to address questions such as: (i) (Genome detection) Is a known genome sequence present at least in part in an environmental sample? (ii) (Sequence query) Is a specific fragment sequence present in a sample? (iii) (Similarity Discovery) How similar in terms of sequence content are two unsequenced samples?
We propose a method involving multiple filtering criteria that result in "pools" of DNA of high or very high purity. Because our method is similar in spirit to hashing in computer science, we call the method DNA hash pooling. To illustrate this method, we describe examples using pairs of restriction enzymes. The in silico empirical results we present reflect a sensitivity to experimental error. The method requires minimal DNA sequencing and, when sequencing is required, little or no cloning.
Available at http://www.arxiv.org/abs/0705.3597.
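The paper itself describes a wet-lab protocol, but the hashing analogy can be sketched in a few lines of code. What follows is purely my toy illustration of the idea of bucketing strands by a cheap signature (here, the presence or absence of two real restriction sites), and not the method from the paper:

```python
from collections import defaultdict

# Two real restriction enzyme recognition sites (EcoRI cuts at GAATTC, BamHI
# at GGATCC). A strand's "hash" is simply which of the two sites it contains:
# a crude two-bit signature that sorts a mixed sample into four pools.
SITES = {"EcoRI": "GAATTC", "BamHI": "GGATCC"}

def signature(strand):
    return tuple(site in strand for site in SITES.values())

sample = ["ATGAATTCGGA",       # EcoRI site only
          "TTGGATCCAAT",       # BamHI site only
          "GAATTCGGATCC",      # both sites
          "ACGTACGTACGT"]      # neither site

pools = defaultdict(list)
for strand in sample:
    pools[signature(strand)].append(strand)

for sig, strands in pools.items():
    print(sig, strands)
```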
Tuesday, May 01, 2007
Spiked innovation survey
I was recently asked to contribute to the annual innovation survey from spiked. There appear to be many interpretations of the term "innovation", but notable entries (from my own perspective) include those by Scott Aaronson, Paul Rothemund and Jeffrey Shallit.
Here's the blurb from the spiked website:
"The internet, the alphabet, the discovery of nuclear fusion, x-rays, the brick, rockets, the eraser: all of these have been identified as the greatest innovations in history in a new survey.
Over 100 key thinkers and experts from the fields of science, technology and medicine - including six Nobel laureates - participated in the brand new spiked/Pfizer survey 'What's the Greatest Innovation?', which goes live on spiked today.
In his introduction to the survey, spiked's editor-at-large Mick Hume says: 'Some choose "sexy" looking innovations, others apologise for the apparent dullness of their arcane choices. But whatever the appearances, almost all of our respondents exude a sense of certainty about the improvement that innovations in their field are making to our world, and the potential for more of the same."
Wednesday, April 25, 2007
A new kind of firefighting
I managed to miss a potentially interesting edition of Horizon on the BBC after making the mistake of flicking over to watch the second half of the Manchester Utd/Milan match.
Anyway, I caught the last ten minutes, and managed to glean the basic facts: that fewer people would have died when the Twin Towers collapsed on September 11 had the authorities been in possession of a global picture of the state of the building (in terms of both its structure and the movement of its occupants). But having the raw data is not enough: it needs to be provided as input to predictive models that are capable of allowing firefighters to play "what if" games. These models are necessarily computationally complex and resource intensive, which is where Jose Torero comes in. He's in charge of Firegrid, an interdisciplinary project dedicated to using Grid-based computing to model and predict, in real-time, the evolution of fire emergencies.
This work is related to my own on evacuation modelling, and we've recently been awarded a Ph.D. studentship in order to develop our ideas on how crush conditions emerge in situations where people fail to follow a set evacuation plan. This work will be done in collaboration with Dr Steve Gwynne, who has worked for the last ten years on modelling people movement, and who helped develop the influential Exodus system. The position will be advertised shortly, so watch this space.
Monday, April 16, 2007
Warwick victorious!
Congratulations to one of my old institutions, Warwick, on winning the 2007 University Challenge. In a tight match, they eventually fought off the reigning champions, Manchester, both securing Warwick's first ever series win and preventing their opponents from gaining the first ever "back to back" run of titles.
I'm afraid, however, that most neutrals watching will remember it more for Prakash Patel's post-presentation lunge for Ann Widdecombe than anything else.
Friday, April 06, 2007
Edinburgh Science Festival
Just a quick reminder that I'll be appearing at the Edinburgh Science Festival next Sunday (April 15th). Full details of my event (including how to reserve tickets) are here, and I'm told that there will be a book signing afterwards.
An apt observation
Jonathan Hodgkin, a Professor of Biochemistry at the University of Oxford, has published a nice essay in the March 28 edition of the Times Literary Supplement. It's built around a review of both Genesis Machines and Robert Frenay's recent book, Pulse (which I haven't yet had the chance to read, but which has a very nice website). This is the second occasion on which the two have been jointly reviewed (the first being Matt Ridley's examination here).
Anyway, I'm happy with Hodgkin's overall assessment of my own book, and he makes some fair points concerning gaps in topical coverage. I specifically avoided dealing in detail with quantum computing (although, to be fair, I did mention it), as I didn't want the book to turn into a detailed "quantum vs DNA" debate (and I'm not sure I have the expertise to do justice to the quantum "camp" anyway). It's understandable, though, that as a chemist Hodgkin should highlight the omission of aptamer development.
Aptamers are synthetic molecules that can fold up into very detailed three-dimensional shapes, allowing them to bind to other molecules with incredible specificity. They can therefore be used to target other molecules in the same way as antibodies, and offer a wide range of applications in biotechnology and medicine. Because the space of three-dimensional shapes a molecule can adopt is vast, researchers need a smart approach to finding aptamers, rather than a "hit-and-hope" policy. The technique that has been developed, abbreviated to SELEX, uses an evolutionary approach based on an initial molecular population. Interestingly, it may be thought of (rather loosely) as a "wet" version of the genetic algorithm.
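To make that analogy a little more concrete, here is a toy computational sketch of the SELEX-style loop: generate a random pool of sequences, keep the ones that "bind" best, then amplify the survivors with copying errors and repeat. The binding score below is just similarity to a made-up target motif, so the motif, pool size and mutation rate are purely illustrative rather than a model of real chemistry.

    # Toy in-silico analogue of the SELEX loop: select strong "binders",
    # amplify them with errors, and repeat. The binding score is simply
    # similarity to an invented target motif.
    import random

    ALPHABET = "ACGU"
    TARGET = "GGAUCCAAGG"              # hypothetical best-binding motif

    def binding_score(seq):
        return sum(a == b for a, b in zip(seq, TARGET))

    def mutate(seq, rate=0.05):
        return "".join(random.choice(ALPHABET) if random.random() < rate else c
                       for c in seq)

    pool = ["".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
            for _ in range(200)]

    for rnd in range(10):
        # Selection: keep the quarter of the pool that "binds" best
        pool.sort(key=binding_score, reverse=True)
        survivors = pool[: len(pool) // 4]
        # Amplification: copy survivors back up to full size, with copying errors
        pool = [mutate(random.choice(survivors)) for _ in range(200)]
        best = max(pool, key=binding_score)
        print(f"round {rnd}: best {best} score {binding_score(best)}")

The selection-and-amplification loop is what makes the "wet genetic algorithm" description apt, even if the real affinity landscape is nothing like a simple string match.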
One possible hook that I could perhaps have made more of is the fact that Andrew Ellington, one of the founders of aptamer development, was one of the main researchers involved in recently building a bacterial camera (which did merit a mention in the book!)
Sunday, April 01, 2007
"From Rive Gauche to Rochdale..."
...was Justine's remark yesterday, as we drove home through that northern town after a wonderful week in Paris. We visited Dennis Shasha and his family, as he's there on sabbatical from New York University and kindly invited us over. Justine and Alice took in the sights while Dennis and I got down to some work.
We stayed in a marvellous little hotel, just around the corner from the Eglise Saint-Sulpice (which featured as a central location in The Da Vinci Code).
Wednesday was spent walking and talking with Dennis, bouncing around ideas about biocomputing. His wife, Karen, kindly took time out to show Justine and Alice around the Jardin du Luxembourg. Thursday was spent working while Justine wandered up to the Louvre, before we had dinner with the Shashas. We wrapped up on Friday morning, then Justine and I took some time out to revisit Montmartre, where we honeymooned three years ago.
It was a pleasure to spend time with Dennis and his family; both he and his wife are prodigiously talented, Dennis as a scientist, writer and (as Alice was delighted to discover) juggler, and Karen as an artist (and cook!), and we much appreciated their hospitality. Dennis and I are currently working on the draft paper that emerged from our discussions, which will hopefully appear as a preprint in the next few weeks - watch this space.
Thursday, March 22, 2007
The Times Higher recently commissioned an article from me, the subject being the recently announced cuts in UK research funding. They were particularly interested in the views of a "young academic", so I was delighted to see that the resulting piece was made the lead opinion article in today's edition. It's available on the THES website, but I'm reproducing it here with their kind permission. The headline (and accompanying cartoon) appear to have been derived from a rather throwaway remark I made in the final sentence.
Labour's infidelity will not be forgiven easily
Martyn Amos, Times Higher Education Supplement, March 22 2007, p. 12.
Ripples from the collapse of Rover two years ago are apparently lapping at the doors of UK university departments. The decision to dip into research funds was, according to the Department of Trade and Industry, to cover "exceptional" costs. The demise of the car firm was a one-off budgetary burden that should be borne by all.
The image of an administration battling to save jobs in a region blighted by industrial decline is one the DTI is in no hurry to dispel. A closer inspection of the department's figures suggests the exercise was more about fiscal firefighting than industrial or social intervention. But behind the smokescreen of short-term financial juggling lie deeper concerns about fundamental breaches of trust.
Senior academics and industry leaders reacted with dismay to the announcement that about £68 million of funding destined for science would instead be diverted back to the DTI to address these "historic and new" financial pressures. Although the Rover debacle was pushed to the front of the crowd of good causes, other recipients of recalled funds lurked in the background. Jokes about David Cameron's alleged drug use at school have recently filled the corridors of Whitehall, but, to the Government, Weeed is no joke. That's the Waste Electrical and Electronic Equipment Directive to the uninitiated, a European Union edict that requires companies to dispose of obsolete white goods on behalf of consumers. UK implementation of this directive has been put on hold twice, the delay necessitating an additional funding shot to the tune of £27 million - only a couple of million short of the £29 million taken back from the Engineering and Physical Sciences Research Council. Michael Kenward, a former editor of New Scientist magazine, has highlighted the irony of funding the cost of delays in implementing electrical recycling by taking money away from the very agency that has green technologies at the top of its research agenda.
Whatever lies behind the decision to cut research funding, as a relatively junior member of staff I am acutely aware of the impact that these changes may have on the rank and file. The short-term implications are that roughly 100 research council grants will no longer be funded. Many of these would have been supported in "responsive mode", a mechanism designed to offer maximum flexibility in terms of project size and scope. Younger scientists are particularly encouraged to apply within this framework, as are those proposing adventurous or multidisciplinary research - all of which are vital to the long-term development of healthy science and innovation. Since most grants run for between two and five years, the research councils are obliged to cover these future costs from a much diminished purse and must cut back on flexible, short-term activities. These include studentships and fellowships - precisely the mechanisms by which new researchers establish their groups and develop their careers. Long-term, risky research will be sacrificed for the purposes of short-term expediency.
Perhaps more significant, this decision represents a sea change in the relationship between Labour and the scientific community. For the first time since taking power, the Government has reneged on its promises about the funding of science. This had previously enjoyed protected status within the Office of Science and Engineering to encourage medium to long-term research that might extend beyond the lifetime of governments. The dismantling of this ring fence has sent out a signal to some that the DTI can choose to ignore Treasury rules on science funding whenever it sees fit.
Early portents of the cuts came soon after the resignation of Lord Sainsbury from his post of Science Minister, a long tenure that had been greeted with almost universal approval from the research community. According to one insider quoted in The Times Higher in the wake of the announcement of the cuts: "The fact that he has left has made this possible." The formal announcement came while his successor, Malcolm Wicks, was on a trip to an operation funded by the Natural Environment Research Council.
The Government will argue that the cuts amount to less than 1 per cent of the total science budget, and that funding levels will be restored or even improved in future. But, like a cheating partner, the administration must understand that the long-term damage wrought by their breach of trust cannot simply be undone by promises to behave better in future.
Thursday, March 15, 2007
Passport hell
Dennis Shasha, academic, author and the series editor for my first book, has very kindly invited me over to Paris for a week to do some work. Of course, my wife and daughter were also invited, and we thought it would be a chance to introduce Alice to the city where her parents enjoyed their honeymoon (during the heatwave of 2003 - even less romantically, we thought it might also be a chance to get some rather messy but necessary work done on the house in our absence).
As is common in our household, various arrangements had been left until the last minute, the most significant one being passports for my wife and daughter. Luckily, one of the regional centres that deals with fast-track (i.e. within two weeks) applications is just down the road in Liverpool, so I made an appointment to go over there today. Which is where the fun started.
The regulations concerning the acceptability of photographs for use on passports are fairly relaxed for children under five, but they still specify things like "no other person visible in the background". If you've ever tried to get a 12-month-old to sit still, in a photo booth, looking in the vague direction of the camera, whilst remaining invisible yourself, then you'll know what we're up against. A couple of days ago, my wife had the following taken:
Which we thought would be fine. How little we knew. We dropped into our local Post Office on the way to Liverpool, just to double-check that the photo would be acceptable. "No", was the quick response, since Justine's arm is clearly visible in the background. Cue quick dash to Morrisons and frantic changing of notes into pound coins.
Our first effort wasn't too bad, in a moody, My Bloody Valentine album cover sort of way. But nowhere near good enough to satisfy the sticklers at the passport agency. So we tried again.
Away with the fairies. So we tried again.
Too blurred, face in the wrong part of the shot, looking down. By this point, we'd burned through 12 quid, I'd lost all feeling in my legs from kneeling on the floor of the photo booth, and we were in severe danger of missing our pre-booked appointment. So we decided to just get there and then worry about it.
Sure enough, the lovely (and I don't mean that sarcastically, they really were lovely, accommodating and helpful) people in Liverpool told us that none of the photos would be acceptable, but they had their own photo booth for just such an eventuality. They also passed on some wisdom on how to control toddlers whilst remaining invisible, thus sticking to the rules. Which is how we came to get this (acceptable!) shot:
If you look closely, you can see that Alice is actually sitting on my lap. That's me, in the background. Wearing a white T-shirt over my head.
Wednesday, February 21, 2007
Festival time
I feel honoured and delighted to have been asked to contribute to two of the Edinburgh Festivals this year. The programme for the International Science Festival was published today, and features (amongst many others) Marcus du Sautoy, Marcus Chown, Kirsty Wark, Colin Blakemore, Steve Jones and Heinz Wolff. Richard Jones (who blogs here) and I have been scheduled in the Cutting Edge subgroup of the Big Ideas event. The Festival runs from April 2-15, and it promises to be a lot of fun (as well as informative, of course!)
I'll also be appearing at the Edinburgh International Book Festival, which runs from August 11 to 27. Last year's event (featuring three Nobel Laureates) attracted over 220,000 visitors, and it's now the "world's largest celebration of the written word". The programme for this event will be published on June 14.
Wednesday, February 14, 2007
Protect and survive
Those of us old enough to remember Protect and Survive (or its US equivalent, Duck and Cover) will appreciate the humour of Safe Now, which provides alternative interpretations of public information graphics.
The middle of a terrorist attack is not an appropriate time to catch up on your reading or paperwork.
Monday, February 12, 2007
"Dr" Gillian McKeith
Today's Guardian contains a wonderfully detailed demolition of "Dr" Gillian McKeith, Channel 4's very own "clinical nutritionist". Many people have accused her of quackery and charlatanism in the past, often resulting in threats of legal action (as opposed to a reasoned scientific response).
Her claims to hold a Ph.D. certainly seem quite laughable.
Sunday, February 11, 2007
Paperback edition
Just a short note that the paperback edition of Genesis Machines is now available for pre-ordering on Amazon. It's out on June 14.
I spent the end of last week at a BBSRC workshop on synthetic biology, and will post a report later this week.
Monday, January 29, 2007
Another radio interview
Just a quick note that I'm this week's guest on This Week In Science, a radio show run out of the University of California Davis, and which boasts listeners in over 60 countries. You can either listen online tomorrow at 17:30 (UK) via the website, or download the show later as a podcast.
Wednesday, January 17, 2007
New stuff
Two new publications to report, both very different.
The first is a paper that's just been accepted by BioSystems, and is now available online. "Two hybrid compaction algorithms for the layout optimisation problem" was written with two colleagues in China, and deals with the problem of packing circular objects inside an "outer" containing circle. Many people will be familiar with the standard version of this problem, in which the aim is to minimise the size of the container; our version is complicated by the fact that each object also has mass, so we must minimise not only the radius of the container but also the net mass imbalance. This problem has real-world significance in areas such as aerospace and satellite design, where the circles represent pieces of equipment and the body as a whole is rotating or moving. We have developed two algorithms, both inspired by nature, which produce the best known results for this problem.
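To give a flavour of what an optimiser for this kind of problem actually scores (a generic sketch, not the formulation or weights used in the paper), a candidate layout can be evaluated by the container radius it requires, how far the combined centre of mass sits from the spin axis, and a penalty for any circles that overlap:

    # Sketch of a layout objective: each circle is (x, y, radius, mass),
    # container centred at the origin. Weights and the penalty form are
    # illustrative only.
    import math

    def evaluate(layout, w_radius=1.0, w_imbalance=1.0, w_overlap=10.0):
        # Smallest container radius that encloses every circle
        container_r = max(math.hypot(x, y) + r for x, y, r, _ in layout)
        # Net mass imbalance: offset of the centre of mass from the axis
        total_m = sum(m for *_, m in layout)
        cx = sum(x * m for x, _, _, m in layout) / total_m
        cy = sum(y * m for _, y, _, m in layout) / total_m
        imbalance = math.hypot(cx, cy)
        # Penalise any pairwise overlap between circles
        overlap = 0.0
        for i in range(len(layout)):
            for j in range(i + 1, len(layout)):
                xi, yi, ri, _ = layout[i]
                xj, yj, rj, _ = layout[j]
                overlap += max(0.0, (ri + rj) - math.hypot(xi - xj, yi - yj))
        return (w_radius * container_r + w_imbalance * imbalance
                + w_overlap * overlap)

    # Example: three "instruments" of different mass
    print(evaluate([(0.0, 0.0, 1.0, 5.0), (2.2, 0.0, 1.0, 3.0),
                    (-1.1, 1.9, 0.8, 2.0)]))

The search algorithms themselves then move the circles around to drive a score like this one down, which is where the nature-inspired heuristics come in.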
The other piece of writing this month is my first (and quite possibly last) appearance in a men's style magazine (Esquire). As part of their "Hot List 2007", I was asked to write a short piece on "The New Software is...", and chose "Wetware". I'm on page 92 of the February issue, sharing space with George Monbiot.
Monday, January 15, 2007
The next generation
I spent the end of last week and the start of the weekend drifting in and out of the Systems Biology, Bioinformatics and Synthetic Biology conference BioSysBio, which was hosted by the University of Manchester. The event is aimed at post-graduates, post-docs and "young faculty" (I wasn't sure if I qualified for this last descriptor, but they took my money!), and there was certainly a youthful exuberance about the proceedings. Teaching and family commitments meant that I wasn't able to attend as many sessions as I would have liked, although I was able to make both sessions dedicated to synthetic biology.
The first of these was opened by Randy Rettberg, director of the International Genetically Engineered Machine (iGEM) programme. Rettberg had a long and distinguished career as a computer engineer (including serving as the chief technical officer of Sun Microsystems) before turning his attention to biology and dividing his time between iGEM and looking after MIT's Registry of Standard Biological Parts, the community's attempt to do for synthetic biology what the Maplin Catalogue did for electronics.
There then followed three talks by UK-based teams who took part in the most recent iGEM. The Imperial College team described their novel approach to building a cell-based oscillator (a device that gives a signal that goes "up" and "down" on a regular basis). Rather than building their oscillator inside a single cell, as others have done, the Imperial team decided to try to model classical "predator-prey" dynamics, where the population of prey (e.g. rabbits) rises and falls slightly out of step with the rise and fall in the number of predators (e.g. foxes). The students decided to engineer two populations of bacteria, each generating molecules that would cause the net signal between the two to rise and fall periodically. Although they've yet to get it all working together, it's a novel approach to the problem, and their simulations and early experimental characterisations seem to suggest that they're well on the road to success.
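The classical dynamics the team were aiming for can be captured in a few lines of simulation. The sketch below integrates the standard Lotka-Volterra predator-prey equations with arbitrary, purely illustrative parameters; it has nothing to do with the team's actual bacterial design, but it shows the characteristic out-of-phase rise and fall of the two populations:

    # Minimal Lotka-Volterra simulation: prey and predator levels oscillate
    # slightly out of phase. Parameter values are arbitrary.
    a, b, c, d = 1.0, 0.1, 1.5, 0.075   # prey growth, predation, predator death, conversion
    prey, pred = 10.0, 5.0
    dt = 0.001

    for step in range(10000):
        dprey = (a * prey - b * prey * pred) * dt
        dpred = (d * prey * pred - c * pred) * dt
        prey, pred = prey + dprey, pred + dpred
        if step % 1000 == 0:
            print(f"t={step * dt:5.1f}  prey={prey:7.2f}  predators={pred:7.2f}")

Getting two bacterial populations to reproduce behaviour like this in a dish is, of course, a very different matter from printing it out of a simulation, which is what makes the attempt so interesting.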
Another talk was given by a team from Cambridge, who were investigating a subject close to my own heart: self-organisation and pattern formation in bacteria. They've harnessed the ability of bacteria to "swim", combined with an engineered position-dependent genetic "switch", to generate spatial patterns from the "bottom up". The ability to control this process may have significant implications for tissue engineering and biomedicine, as we'll see shortly.
For me, the most inspiring student presentation was given by the group from Edinburgh, about whom I've written briefly in my book. Arsenic contamination in drinking water is a problem that affects tens of millions worldwide, and is particularly acute in Bangladesh. Existing methods for testing samples are expensive and require technical training, so the Edinburgh team have developed a cell-based detection kit that can detect concentrations below the WHO safety threshold, and which produces a simple "yes-no" response that a non-specialist can understand. Their eventual objective is to be in a position to package and sell the kits for around $1 a pop, which will make sustained testing possible for villagers. A fantastic technical achievement as well as an extremely worthy cause.
I felt slightly humbled by the experience of watching these students in action; remember, most of them were undergraduates (albeit the best of the best) when this work was carried out, and yet they were doing work that, only a few years ago, would have been considered the absolute state of the art, attracting Science and Nature papers (although I can't see any reason why the current work should not do the same). If any of them choose to pursue a career in this field (and I sincerely hope that they do) then they have an excellent future ahead of them.
The final plenary was given by my colleague Ron Weiss of Princeton, who is one of the leading figures in synthetic biology (and, again, who features prominently in the final chapter of my book). Ron has been at the forefront of cellular re-engineering for some years now, and has consistently produced first-rate work. Ron is also interested in pattern formation in nature, and his recent work focuses on programming the way that stem cells talk to other cells, in the hope of one day being able to control the way that they "specialise" and form tissue structures. Although it's still very early days, I think this work has the potential to be massively significant.
Monday, January 08, 2007
A good way to start the year
A (belated) Happy New Year to you!
I haven't, up until now, flagged reviews of Genesis Machines on the blog (partly because I assume that most people who come to it have arrived via the link in the book). However, I was delighted by this review in last Saturday's Guardian; apart from saying nice things about the book, its author, Steven Poole (who wrote Unspeak) appears to share my views on Melanie Phillips of the Daily Mail.
A minor point: Poole ends his review by saying that "It is even possible that, when the footnote numbering goes crazy on pages 199-201, it is some sort of joke about genetic mutation. Sadly, I was not able to find meaning in the resulting number series." He assumes that footnote numbers refer only to the first citation of a source; in the example he gives, I cited a report at the start of the chapter, and then again near the end. In both instances I supplied the reference number ("3"), which is why it might have appeared to be out of sequence later on.