Bench philosophy: Shrinkify your Lab

Microfluidics
by Steven Buckingham, Lab Times 02/2013



Microfluidics holds great promise for life science research. What’s still missing, however, is a killer application.

Small is beautiful, so they say. And nowhere is this truer than in the fast-moving world of microfluidics. Throw away your Eppendorf tubes – who needs those lake-sized vats these days? Waste of resources! Ditch those 96-well plates – those wells are big enough to sink an ocean liner. Picolitre reactions? Positively prodigal. Microfluidics is taking volumes to micro extremes. Whole pipelines are being run on chips the size of microscope slides. You’d need a microscope to watch your PCR now. And it all springs from the technology behind that old inkjet printer you threw away ten years ago.

Just imagine a PCR reaction in six minutes, on a chip small enough to carry in your wallet. Think of the possibilities: in-field diagnostics; hand-held sequencers (not the musical kind, of course); pocket HPLC machines. This is the exciting future that some think will be opened up by current developments in microfluidics.


But will microfluidics deliver on its promise, or will commercial realities kill off its adventure into the big wide world and keep it sealed in academia?

As its name implies, microfluidics is the art and technology of using extremely small fluid volumes to control a reaction. It does in nanolitres what most of us traditionally do in millilitres. In most cases microfluidics comes in the guise of pipes and conduits assembled on an integrated platform – a “chip” – electronically controlled by switches and valves. The chip incorporates all the reaction steps needed for the job at hand.

Many different approaches

A chip designed to detect the flu virus, for instance, will contain a chamber for holding the test sample, another to store the other reagents, a reaction chamber for the PCR steps and even the machinery for an electrophoresis step. The body of the chip itself might be made of various materials, including glass or plastics. It will be mounted on an integrated circuit that not only controls the flow of fluids using an array of switches and pumps but also forms an interface with the off-chip monitoring and control equipment.

The nickname “Lab-on-a-chip” is no exaggeration. The list of advantages afforded by the many microfluidic approaches is truly impressive. Most of them are the pretty obvious consequences you get when you shrink things down. For starters, achieving small volumes is obviously a big plus if you are working with rare reagents, or if they are expensive in large volumes. Think of how DNA microarrays are manufactured, for instance. Each oligonucleotide on that array has to be synthesised, and the traditional way of doing this needs a dedicated synthesising column for each one. Admittedly, the traditional method gives you a large yield, but the raw materials run up quite a bill. And as for synthesising whole genomes...

When you come to think about it, small volumes also mean quick reactions – any reaction run in picolitre volumes means negligible diffusion times, rapid material transfer and so on. So what? Well, for one, this makes a huge difference when you are doing a lot of PCR reactions, where the reagents are repeatedly heated and cooled over a series of cycles. Although the actual Taq polymerase reaction itself is pretty fast, the heating and cooling cycles take up quite a decent chunk of time. Scale down to the volumes used in microfluidics and you cut this loss right down.
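
Why does volume matter so much for thermal cycling? A back-of-the-envelope estimate makes it plain (the numbers here are purely illustrative, not taken from any particular chip). The time for heat to diffuse through a water droplet scales with the square of its linear size,

    \tau \approx \frac{L^2}{\alpha}, \qquad \alpha_{\mathrm{water}} \approx 1.4 \times 10^{-7}\ \mathrm{m^2\,s^{-1}},

so a millimetre-scale reaction volume needs several seconds to equilibrate, while a 100 micrometre droplet gets there a hundred times faster, in tens of milliseconds. Shrink the volume and the temperature ramping, not the enzyme, stops being the bottleneck.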

Lightning fast PCR

Get this: a microfluidic device built by Pavel Neuzil and his colleagues from the Institute of Bioengineering and Nanotechnology in Singapore can run no fewer than 40 PCR cycles in six minutes (Nucleic Acids Res., 34:e77). And they didn’t do anything particularly radical to achieve this – it was just a matter of suspending the reaction mix in droplets of oil and heating/cooling them with tiny integrated heaters. But there is an even faster (cooler?) way of doing the job: instead of turning the heat up and down, you just set up a temperature gradient on a microfluidic chip and run the reaction mix through the gradient – the so-called “flow-through” approach.
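
To make the flow-through idea concrete, here is a minimal sketch in Python of how you might size the temperature zones of such a chip. Every number – flow speed, dwell times – is invented for illustration; none of them come from any published device.

    # Sizing temperature zones for a hypothetical flow-through PCR chip.
    # Zone length = flow speed x desired residence time at that temperature.
    FLOW_SPEED_MM_S = 2.0        # assumed speed of the reaction plug
    DWELL_S = {                  # assumed residence time per zone
        "denature (95 C)": 1.0,
        "anneal (55 C)": 1.5,
        "extend (72 C)": 2.0,
    }
    CYCLES = 40

    per_cycle_mm = sum(FLOW_SPEED_MM_S * t for t in DWELL_S.values())
    total_mm = per_cycle_mm * CYCLES
    print(f"channel length for {CYCLES} cycles: {total_mm:.0f} mm")
    print(f"total run time: {total_mm / FLOW_SPEED_MM_S / 60:.1f} min")

With these made-up figures the whole 40-cycle serpentine is 36 cm of channel and the run takes three minutes – the cycle time is set entirely by geometry and pump speed, with no heating or cooling ramps to wait for.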

And we are forgetting the most obvious advantage of small size – portability. Medical staff unwilling to lug an HPLC machine, fluorescence reader and PCR machine into the field will perhaps be more open to taking a credit-card-sized chip to the “point of care”. Surprisingly, much of the basic technology isn’t really all that new. In fact, you probably had a microfluidics device on your desktop without even knowing it: that old inkjet printer you ditched ages ago provided the basic droplet-control techniques that, admittedly with a few refinements, lie behind modern droplet-based microfluidic technology.

Physical obstacles

But let’s not underestimate the difficulties of miniaturisation. It is more than just moving a decimal point in the design specifications. Ever tried pumping water through a narrow pipe? (And I mean narrow – we’re talking micrometres here.) It takes a fair bit of pushing: the viscous resistance to flow scales with the inverse fourth power of the radius, so halving the bore demands sixteen times the pressure for the same flow. Funny things start happening when you get down to the micrometre scale. Droplets suffer strong surface tension effects as they get smaller, while capillary effects mean liquids get up to some pretty un-intuitive behaviour when they are moved around tiny chambers. Then there is evaporation – anyone who has left a nanolitre droplet sitting while frantically changing the settings on their pipettor will know just how quickly a small volume of liquid can disappear.
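
That fourth-power penalty is nothing exotic – it is just the Hagen–Poiseuille relation for pressure-driven flow through a cylindrical channel:

    \Delta P = \frac{8 \mu L Q}{\pi r^{4}}

where \mu is the fluid viscosity, L the channel length, Q the volumetric flow rate and r the channel radius. Halve the radius and you need sixteen times the pressure to push the same flow; go from a millimetre bore down to ten micrometres and the required pressure rises by a factor of 10^8.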

But once those technical issues are overcome, microfluidics has one property that all powerful tools share – flexibility. Take droplets, for instance. Give them a bit of electrical charge and you have a very convenient way of sorting them. So, if you partition your cell culture into millions of droplets, you’ve got yourself a massively parallel cell culture system, coupled with a lightning-fast cell sorter. And another thing: that infamous reluctance of oil and water to mix actually opens up a whole host of possibilities that has not escaped the notice of microfluidiphiles. Think of segmented flow: use a standard microfluidic plumbing system but fill it with oil instead of water and squirt your aqueous reactions into it. Now you have a lot of very separate reactions that can be pushed around the plumbing like a tiny train set.
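
To see how simple the sorting logic can be, here is a toy simulation in Python – a schematic of the idea, not any real instrument’s control software. Each droplet is screened for fluorescence; bright ones are “charged” and deflected into a collection channel, the rest drift on to waste.

    import random

    # Toy fluorescence-activated droplet sorting. The threshold and the
    # signal distribution are invented purely for illustration.
    THRESHOLD = 0.8
    droplets = [random.random() for _ in range(1_000_000)]  # one signal per droplet

    keep, waste = [], []
    for signal in droplets:
        if signal > THRESHOLD:
            keep.append(signal)   # apply charge -> deflect into collection channel
        else:
            waste.append(signal)  # uncharged -> carried straight on to waste

    print(f"collected {len(keep)} of {len(droplets)} droplets")

A million threshold decisions like this take on the order of a second even in plain Python; in a real device the limiting factors are the optics and the droplet generation rate, not the decision logic.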

This flexibility, and the promise of better-faster-cheaper, means the list of clever uses looks like the inventory of a Victorian explorer’s attic. If you have ever experienced the annoyance of waiting hours for PCR reactions to run, it will come as no surprise that microfluidic approaches to PCR abound. As early as 1998, Andreas Manz and his colleagues, then at Imperial College London, UK, showed how a continuous-flow microfluidic PCR could achieve 20 cycles of amplification in 90 seconds (Kopp et al., Science, 280:1046). More recently, Florian Hollfelder’s group in Cambridge, UK, announced the amazing feat of amplifying a single molecule of DNA per droplet (Schaerli et al., Anal. Chem., 81:302). Oh, and don’t forget: the post-PCR processing can also be integrated into the chip design.

This opens the way for a new kind of oligonucleotide synthesiser. Two years ago, pioneering work by Stephen Quake from Stanford University brought us a programmable microfluidic synthesiser that could make some 100 picomoles of oligos using only 250 nanolitres of reagents per column per cycle – two orders of magnitude less than existing methods (Lee et al., Nucleic Acids Res., 38(8):2514-21). As a proof of concept, they used their gene machine to build 16 oligonucleotides, which they then assembled, without further purification, into a 200 bp DNA construct. The gene machine is a chip in three parts. In the first, a set of chambers holds the reagents, which can be delivered using programmable valves. The second is a “binary tree”, a conduit that leads to a series of two-way T-junctions, which finally deliver material to the third element, the reaction columns. This binary splitting guarantees equal flow to the columns, as the sketch below shows. Suddenly, the path to cheap, programmable genome synthesis looks less steep and rugged.
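
The equal-flow property of the binary tree is easy to convince yourself of with a few lines of hypothetical code, assuming ideal junctions that split their incoming flow exactly in half:

    def binary_tree_split(flow_nl_per_s, levels):
        """Halve the flow at each T-junction; return the per-column flows."""
        flows = [flow_nl_per_s]
        for _ in range(levels):
            flows = [f / 2 for f in flows for _ in (0, 1)]
        return flows

    # Four levels of junctions feed 2**4 = 16 reaction columns,
    # matching the 16 oligonucleotides of the proof of concept.
    print(binary_tree_split(160.0, levels=4))  # 16 equal flows of 10.0 nL/s

However many columns you add, every one sees exactly the same share of the input – provided, of course, that the junctions and columns are fabricated with matched hydraulic resistance.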

Microfluidic sequencing

If making DNA isn’t your thing, microfluidics has something for sequencing, too. There are already a number of microfluidic implementations of classic Sanger sequencing, and they boast the ability to work from as little as one femtomole of substrate, or message concentrations down to 100 attomolar. This makes single-cell genomics approaches more tractable than ever before.
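
A quick conversion puts those figures in perspective (my arithmetic, not from the cited work):

    1\ \mathrm{fmol} = 10^{-15}\ \mathrm{mol} \times 6.022 \times 10^{23}\ \mathrm{mol^{-1}} \approx 6 \times 10^{8}\ \text{molecules}

    100\ \mathrm{aM} \times 1\ \mathrm{\mu L} = 10^{-16}\ \mathrm{mol\,L^{-1}} \times 10^{-6}\ \mathrm{L} \approx 60\ \text{molecules}

A femtomole is still hundreds of millions of molecules, but 100 attomolar in a microlitre is only around 60 copies – genuinely single-cell territory.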

But it is not just genomics that is having a micro field day – proteomics is cashing in, too. Take protein-protein interactions (PPIs), for instance. The number of possible pairwise interactions grows with the square of the number of genes in the pool (20,000 genes already imply some 2×10⁸ candidate pairs), making even “simple” interactomes a major challenge. But a controlled spray of droplets, each containing a FRET (fluorescence resonance energy transfer)-labelled protein, reduces the volume and handling limitations drastically and opens up many possibilities for rapid, automated detection. Then there is the amazing PPI machine invented (again) by Stephen Quake – the “protein interaction network generator” (PING), which fuses a DNA microarray with a microfluidic chip (Nat. Methods, 6(1):71-4). Here, a soup of E. coli extract is applied to the microarray to make the protein and the bait, and bait/prey interactions are trapped using yet another of their inventions, “mechanically induced trapping of molecular interactions” (MITOMI). The entire device is about the size of a coin.

Adverse market forces

But before you start asking for your USB-key genome synthesiser at your local electrical goods store, pause for breath – it is not all clear blue skies, at least when it comes to commercialisation. The idea of a fully-integrated, micro-scale analysis system was being talked about as far back as the late 1990s by Albert van den Berg and Theo Lammerink from the University of Twente in Enschede, the Netherlands (eds. Manz, A. & Becker, H., pp. 21-50, Springer, Berlin, 1999), and yet the graveyard is littered with failed lab-on-a-chip companies. The technical problems we outlined above are significant when it comes to implementing a robust, off-the-shelf chip. And as biochip expert Syed Hashsham from Michigan State University, USA, has pointed out, market forces may simply not favour the manufacture of cheap diagnostic devices when equally robust, if less nifty, solutions already exist (Microbe, 2:531). But even if the biological answer to the microcomputer never leaves academia’s shores, the endless possibilities for more rapid, more trustworthy and, critically, less time- and Euro-expensive research will be sure to make an impact.




