Bench philosophy: Boolean Logic for E. coli

Genetic circuits
by Steven Buckingham, Labtimes 03/2014





While computer programmers try to get the bugs out of their programmes, synthetic biologists are trying to get programmes into their bugs.

If you want to sound with-it and up-to-date, there are certain words and phrases you’ll need to steer clear of. Here’s one – how about “Genetic Engineering”? So 2013, eh? Oh dear, no! Instead, if you really want to show your street cred, the catch-phrase of the moment is “Synthetic Biology”. Okay, you might not see it spray-canned on the walls of run-down railway sidings but it is a hot topic in certain quarters.

We have got used to the idea that we can turn genes on and off using promoters and repressors, and in the mid-1990s it dawned on researchers that the tool-kit nature provides can even be forged into a formal engineering discipline. So, in the decade that followed, we started learning how to hook these transcription factors into more complex control mechanisms, creating functional networks and circuits of steadily increasing complexity.

Many practical uses

We’re not just talking about Craig Venter making bugs out of spare parts here – synthetic biology is a fast-growing area with a lot of serious, practical uses. Take, for example, a particularly exciting development in this area: engineering logic circuits into bacterial genomes to create living logic machines – true biological computers.

Synthetic biology takes advantage of the idea that a genome is very much like a computer. A computer takes inputs, operates on them in a series of steps called a programme and, hopefully, generates a useful output. In the case of synthetic biology, the inputs to the ‘computer’ are environmental signals – pH, sugar concentrations and so on. The programme is the gene expression programme encoded in the DNA, while the processors are the transcriptional regulators acting on the promoters that control their target genes. The output is the changed expression of a protein.

Synthetic biologists like to think in terms of ‘devices’ – sets of genes, transcription factors, RNAs, proteins and so on that together perform a basic computational task. Take the toggle switch, for example, which was one of the earliest synthetic biology devices. It is made of two transcriptional repressor genes (lacI and cI) that inhibit each other’s expression. This creates a bistable system that can be flipped from one state to the other by an environmental stimulus. Heat, for example, disengages cI from its operator. And like any electronic bistable switch, it stays in the same position even after the stimulus is removed. Et voilà: one computer bit of memory.
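
If you like to see these things in numbers, below is a minimal sketch of such a toggle switch in Python – two mutually repressing genes integrated with a plain Euler loop, plus a transient ‘heat’ pulse that knocks out one repressor. The equations and parameter values are illustrative assumptions, not the published model.

def simulate_toggle(steps=6000, dt=0.01, pulse=(2000, 2500)):
    """Two mutually repressing genes; a transient 'heat' pulse inactivates v
    (standing in for a temperature-sensitive cI), flipping the switch."""
    a, n = 10.0, 2.0                  # maximal synthesis rate and Hill coefficient (illustrative)
    u, v = 0.1, 5.0                   # start in the 'v high, u low' state
    trajectory = []
    for step in range(steps):
        heat = pulse[0] <= step < pulse[1]
        v_active = 0.0 if heat else v          # heat removes v's repression of u
        du = a / (1.0 + v_active ** n) - u     # u is repressed by active v
        dv = a / (1.0 + u ** n) - v            # v is repressed by u
        u, v = u + du * dt, v + dv * dt
        trajectory.append((u, v))
    return trajectory

traj = simulate_toggle()
print("before the pulse: u=%.2f, v=%.2f" % traj[1999])
print("long afterwards:  u=%.2f, v=%.2f" % traj[-1])

Run it and the system sits in one state before the pulse and in the other long after the pulse has gone – one bit of memory, just as advertised.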

Clever. But what is it all for? In answer to that, think for a moment of all the ways we have been using marker expression as a reporter for some sort of biological activity or the presence of some environmental factor. Typically, you engineer a gene system so that, if the environmental factor you want to monitor is present, a fluorescent protein is expressed. It is a standard technique but few researchers realise that this is essentially an ON gate. Now imagine what you would do if you wanted the opposite: instead of detecting the presence of a factor, you want to monitor its absence. Put in a repressor, I hear you say. Great – you just made a NOT gate. Boolean logic for yeast and bacteria.

Specific engineering

Now here’s the important part. Think of all the things you could do if, instead of just having a reporter as an output, you actually got the cell to do something based on the information it has been processing. For example, you could engineer bacteria that decide whether to invade a cell depending on the constituents of its outer membrane, giving you an intelligent bacterial drug-delivery system. That was the approach suggested by Chris Voigt and colleagues (Anderson et al., J. Mol. Biol. 355, 619-27, 2006), who showed that bacteria could be engineered to invade cancer cells specifically. Or you could create organisms that release a chemical only under specific environmental conditions – to clear up pollution, for example.

The comparison of synthetic biology with computing was made even more compelling with the availability, three years ago, of a complete set of Boolean logic gates for Escherichia coli (Wang et al., Nature Communications 2, doi:10.1038/ncomms1516). When you consider that computers do all the amazing things they do using nothing other than Boolean logic, you begin to get an idea of the potential that might have just opened up to us.

So just how do you do Boolean logic with bacteria?

Logical gates for bugs

Let’s start with the simplest computational operation of all – the ON gate. Here, the output is ON if, and only if, the input is ON. How do we do this in synthetic biology? Well, that is what you have with a promoter, of course. So what about an OFF gate? No prizes for this one – a repressor.

So far then, we have the basic building blocks of a computer. But we aren’t posing much of a threat to Intel yet. So let’s start making something a bit more complicated – how about an AND gate? This isn’t too hard – all you need is a promoter that requires two activating inputs, such as two activators or an activator plus an inducer. NOR? Two repressors as inputs. You get the idea.
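
To see why such wiring behaves digitally, here is a minimal sketch in Python that treats each promoter as a Hill function: a promoter needing two activators multiplies their effects (an AND), while a promoter carrying two repressor operators is shut off by either repressor alone (a NOR). The Hill parameters and the ‘absent’/‘abundant’ input levels are illustrative assumptions.

def hill_act(x, k=1.0, n=2.0):
    """Fractional activation of a promoter by an activator at concentration x."""
    return x ** n / (k ** n + x ** n)

def hill_rep(x, k=1.0, n=2.0):
    """Fraction of promoter activity remaining under a repressor at concentration x."""
    return k ** n / (k ** n + x ** n)

def and_promoter(act1, act2):
    return hill_act(act1) * hill_act(act2)    # both activators needed

def nor_promoter(rep1, rep2):
    return hill_rep(rep1) * hill_rep(rep2)    # either repressor shuts it down

for x1 in (0.0, 10.0):                        # input absent vs abundant
    for x2 in (0.0, 10.0):
        print("inputs:", x1 > 0, x2 > 0,
              " AND=%.2f  NOR=%.2f" % (and_promoter(x1, x2), nor_promoter(x1, x2)))

The printout is close to 0 or 1 in every case – the analogue chemistry ends up behaving like digital logic.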

Now, this is the point where we should start getting really excited, because there is an interesting and very powerful principle we can now apply. The principle is this: it can be proved that NOR gates (gates that switch ON only when neither of their two inputs is present) are Boolean complete. That means you can build any computational operation you want out of nothing but NOR gates – nothing else is needed. So a set of good-quality NOR gates is like a Lego set for synthetic biologists.
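
If you want to convince yourself of that completeness claim, a few lines of Python will do it. The sketch below builds NOT, OR and AND out of nothing but a NOR function and checks them against Python’s ordinary Boolean operators.

def NOR(a, b):
    return not (a or b)

def NOT(a):
    return NOR(a, a)              # one NOR with its two inputs tied together

def OR(a, b):
    return NOT(NOR(a, b))         # a NOR followed by a NOT

def AND(a, b):
    return NOR(NOT(a), NOT(b))    # De Morgan: a AND b = NOT(a) NOR NOT(b)

for a in (False, True):
    for b in (False, True):
        assert NOT(a) == (not a)
        assert OR(a, b) == (a or b)
        assert AND(a, b) == (a and b)
print("NOT, OR and AND all recovered from NOR alone")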

Whole set of devices

So much for the theory – how about in practice? So far, the circuits that have been built have been limited to rather simple designs. The capabilities of digital programming in synthetic biology fall far short of the flexibility that “real” computer programmers take for granted and, indeed, rely upon. Why? It all comes down to the messiness of biological systems. In a computer, each routine or function is carefully encapsulated – you don’t get the workings of one function interfering with those of another.

But when you put two biological devices together, you run the very real risk that the operation of one will spill over and affect the operation of another. For example, the transcription factor in your gate might also affect a gene in the host cell. And the more devices you hook together, the greater the danger. So, what we need is a set of devices that get on with their own computational job without interfering with each other.

But there is another issue that needs to be addressed and that is the problem of modularity. In other words, you want devices that can be reused in different circuits. Say, for instance, you have engineered a circuit that drives expression of a protein only when the temperature is high and the pH is low. Great. Now, imagine a little later you want to use the same circuit to solve a formally identical problem – such as expressing a marker when the osmotic pressure is high and the oxygen tension is low. What you really want is simply to adapt the device to the two different inputs. But the way devices have been designed up until now doesn’t usually allow this. It is hard work getting a device to work in a given situation in the first place, so researchers are often forced to accept what they can get. The downside is that fitting the device to a new task means tinkering with its internals or, worse, rebuilding it from scratch. If a computer programmer wrote code in that way, he would soon be looking for a new job.

So, we need modular and orthogonal devices – but how are we going to find them?

This was the very problem that Martin Buck and his colleagues at Imperial College London partially solved three years ago (Wang et al., Nature Communications 2, doi:10.1038/ncomms1516). Buck realised that the hunt for modular and orthogonal devices was being held back because there just aren’t that many regulatory components to choose from. It is like building a model of a cathedral with only one type of Lego brick. So his team looked to nature, adapted a naturally-occurring regulatory module from Pseudomonas syringae and found they could make an AND gate with nearly digital logic behaviour. It is modular because the actual computing is done internally by P. syringae’s own regulatory machinery. Two environmentally-sensitive promoters drive the transcription of two regulators, hrpR and hrpS, and the output promoter is only activated when both proteins (HrpR and HrpS) are present. The good bit is, you can adapt the device to any new context simply by changing the input promoters. On top of that, the device contains a ribosome binding site that can be used to tune the dynamic range of the system.
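
To make that concrete, here is a minimal, purely phenomenological sketch of such an AND gate in Python: the two input promoter activities set the HrpR and HrpS levels, and the output promoter only fires when both are present. The product term and the Hill parameters are illustrative assumptions, not values from Wang et al.

def hrp_and_gate(input1, input2, k=0.25, n=4.0):
    """Phenomenological AND gate: input promoter activities (0..1) drive HrpR
    and HrpS; the output promoter needs both proteins to fire."""
    hrpR = input1                     # input promoter 1 drives hrpR
    hrpS = input2                     # input promoter 2 drives hrpS
    together = hrpR * hrpS            # stand-in for joint HrpR/HrpS action
    return together ** n / (k ** n + together ** n)

for p1 in (0.05, 0.95):               # weak vs strong input promoter activity
    for p2 in (0.05, 0.95):
        print("in1=%.2f  in2=%.2f  out=%.2f" % (p1, p2, hrp_and_gate(p1, p2)))

Adapting the device to new inputs means changing only what sets input1 and input2 – the gate itself stays untouched, which is exactly the point.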

Buck and colleagues had to do a lot of tweaking and optimising to get the device working. But the point is that once it is done, it is done forever – you don’t need to play around with the internals when you hook the device up into a new circuit in a new cell type. Just like well-written computer code.

In a paper published at the beginning of this year, Chris Voigt at the Synthetic Biology Center at MIT took a different approach to solving the problem of orthogonality (Stanton et al., Nature Chem Biol 10, 99-105). Now, remember that you can build any arbitrary computational operation out of just NOR gates and that you can make a NOR gate just by adding a second input promoter in series to a NOT gate. Given that all you need to make a NOT gate in the first place is a repressor, Voigt argued that all we need is a set of repressors that each have their own, non-overlapping, specific target operator sequences – a big set, ideally.
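
Here is a small sketch of what such a set buys you: if every NOR gate expresses one dedicated repressor, then ‘wiring’ a circuit is just a matter of naming which repressor feeds which gate. The repressor names below are placeholders, not the actual repressors from Stanton et al.; the example circuit is an equality check (an XNOR) built from four NOR gates.

def run_circuit(gates, inputs):
    """gates: list of (output_repressor, (input_a, input_b)), evaluated in order.
    inputs: dict mapping signal/repressor names to True (present) or False."""
    state = dict(inputs)
    for output, (a, b) in gates:
        state[output] = not (state[a] or state[b])    # every gate is a NOR
    return state

# Four NOR gates wired into an XNOR; repA..repD are hypothetical repressors.
circuit = [
    ("repA", ("in1", "in2")),
    ("repB", ("in1", "repA")),
    ("repC", ("in2", "repA")),
    ("repD", ("repB", "repC")),    # ON only when in1 and in2 agree
]

# Orthogonality bookkeeping: no two gates may produce the same repressor.
outputs = [gate[0] for gate in circuit]
assert len(outputs) == len(set(outputs))

for in1 in (False, True):
    for in2 in (False, True):
        print(in1, in2, "->", run_circuit(circuit, {"in1": in1, "in2": in2})["repD"])

The ‘wires’ here are simply the repressor species themselves – which only works if each one binds nothing but its own operator.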

Endless circuit possibilities

So, Voigt set out to find one. He started by generating a huge library of known repressors, taking them from several species and weeding out the redundant ones. Then, for each one, he identified its DNA binding sequence using array technology. This gave a large set of repressor/target pairs, which he then screened for orthogonality. The result was a set of 16 repressors – in other words, 16 NOT/NOR gates – which, mathematicians would tell us, amounts to some 10⁵⁴ possible circuits!

If you are thinking of trying your own hand at putting programmes into bugs, you could look at MIT’s International Genetically Engineered Machine competition (iGEM) registry at http://parts.igem.org/Main_Page, where you can find a repository of plug-and-play biological devices. Or if you are an undergraduate, it may be worth registering for the iGEM competitions – you have missed this year’s but look out for 2015. They even run a competition for high schools!

So, now the way is open – in theory at least – for us to be able to engineer genetic circuits with practically no constraint on complexity. We wait to see how this works out in practice.








