Exciting times ahead, as we've just appointed a research assistant to work on our DNA hash pooling project. He'll be starting next month, and I'll post progress reports as we start to test the idea in the lab.
Another two Ph.D. students have started in my group; Ben and Matthew will be working with Andy Nisbet, me and others on hardware-based approaches to novel computation, with specific reference to the CUDA platform. They are both MMU graduates (in fact, they did their Honours projects with me, each gaining a first-class degree), and I hope they'll prove to embody the "grow your own researchers" ethos that we've tried to encourage with NanoInfoBio (no pressure, lads).
I'm currently in the process of moving to a newly-refurbished (and, finally, single-occupancy!) office; this, combined with decorating work at home, means that I feel a bit like the Queen, smelling fresh paint wherever I go.
On the family front, this weekend we're off to visit friends for the New Mills lantern parade, followed by Room on the Broom at Buxton Opera House. Rehearsals are well underway for the "BUZZ OFF! That's MY witch!" moment.
Monday, June 21, 2010
Weeknote #6 (w/e 20/6/10)
We (three colleagues and I) were recently successful in obtaining funding from the NanoInfoBio project to test an idea that's been rattling around for a while. DNA hash pooling is a technique that Dennis Shasha developed, with some assistance from me, while I was visiting him. Dennis is an incredibly sharp and prolific Professor of Computer Science at the Courant Institute of New York University. He was the Series Editor for my first book, and we've kept in touch since its publication. Justine, the little one and I visited Dennis while he was on sabbatical in Paris with his family in the summer of 2007. While Tyler, Dennis and Karen's son, played American football, we walked round and round an athletics track on the edge of the city, knocking around our own particular problem.
The task of analysing large populations of mixed DNA strands is of particular relevance to the emerging field of metagenomics, which is concerned with understanding, in genetic terms, the vast complexity of the planet's biosphere. Methods for looking at environmental samples often require a lot of genetic sequencing; although new technologies are constantly driving down the cost, sequencing large populations can still be expensive and time-consuming. Dennis and I developed a technique that combines computational analysis with simple rounds of laboratory steps, based on the computer science idea of hashing. The idea is to associate "labels" with individual sub-populations of genetic sequences, such that the number of different genomes sharing the same label is relatively low. In this way, each genome (or genomic fragment) is associated with its own "fingerprint", which we can then use to confirm its presence (or otherwise) in a sample. Our hope was that this technique would offer a cheap, quick and simple pre-processing step before any sequencing was required, thus reducing the cost and complexity of analysing a sample.
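To make the hashing analogy a little more concrete, here's a minimal sketch (in Python) of the labelling step as I might explain it to a computer scientist: each fragment's "label" is taken to be the few bases that follow a fixed cut-site motif, and fragments sharing a label fall into the same pool. The motif, the label length and the toy fragments are all illustrative assumptions on my part; in the wet lab the equivalent operations are performed with restriction enzymes and physical pooling, not string matching.

```python
# Hypothetical sketch of the pooling idea described above: each sequence is
# assigned a short "label" (here, the k bases immediately following the first
# occurrence of a fixed cut-site motif), and sequences sharing a label fall
# into the same pool. Motif, k and the example fragments are illustrative only.

from collections import defaultdict

def label(sequence, cut_site="GAATTC", k=4):
    """Return the k bases immediately after the first cut site, or None."""
    pos = sequence.find(cut_site)
    if pos == -1 or pos + len(cut_site) + k > len(sequence):
        return None
    start = pos + len(cut_site)
    return sequence[start:start + k]

def pool_by_label(sequences, cut_site="GAATTC", k=4):
    """Group sequences into pools keyed by their label."""
    pools = defaultdict(list)
    for seq in sequences:
        lab = label(seq, cut_site, k)
        if lab is not None:
            pools[lab].append(seq)
    return pools

# Two fragments that share the cut site but differ in the bases that follow it
# end up in different pools.
fragments = [
    "ACGTGAATTCTTAGGCCA",   # label TTAG
    "GGCAGAATTCTTAGACGT",   # label TTAG (same pool)
    "TTTTGAATTCCCGGAAAA",   # label CCGG (different pool)
]
for lab, members in pool_by_label(fragments).items():
    print(lab, len(members))
```

Repeating this kind of pooling over several independent rounds gives each genome a sequence of labels, which is what plays the role of its "fingerprint" in the scheme.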
We finally published the theoretical paper last year, but have only just obtained the funding to actually test the idea in the lab. I floated the concept at one of the NIB brain-storming meetings, and it was picked up by a talented team of biologists (Trish Linton, Mike Dempsey and Robin Sen). We put together a proposal to NIB for a small amount of support (£25K), and we were fortunate enough to be one of three projects funded in the last round. The nine-month post-doctoral position is currently going through the MMU approval process, so watch this space if you're interested.
Labels: dna, hash pooling, metagenomics, NIB, weeknote