Bench philosophy: Imaging Flow Cytometry
Best of both Worlds
by Steven Buckingham, Labtimes 03/2017
Imaging flow cytometry is a combination of two very powerful and successful techniques: microscopy and flow cytometry. It is an attempt to get the best of both but, since these two techniques lie at the two extremes of a trade-off between speed and content, joining them into one is like squaring the circle.
Flow cytometry has been around for years. But what does it do? The clue is in the name: cytometry has the root meaning of “cell” (cyto-) “measurement” (-metry), and flow means, well, flow. The basic idea of flow cytometry, then, is very simple – get cells to flow along a stream and measure some aspect of them as they flow past.
Flow cells of classical cytometers allow very high throughput rates – but largely at the expense of spatial information.
In fact, several features can be measured with amazing speed. Typically, a focussed beam of light is passed through the cells and the degree and pattern of scatter in the light is measured. If needed, the cells may be labelled with a fluorescent dye, and the intensity of the dye can be measured for each cell as it passes by. Such fluorescent probes may be linked to some physiological or structural feature of the cell, such as the levels of free calcium ions, or the position of the cell in the cell cycle, providing some potential functional aspect to the data.
Usually, several parameters are measured simultaneously, and the recent evolution of the technique has included increasing the number of coloured markers to allow up to around 20 parameters to be monitored. Yet, even with such high-dimensional data, traditional flow cytometry is very high throughput: from hundreds to hundreds of thousands of cells can be measured per second.
Flow cytometry is used to gather population-level parameters from large numbers of cells – for example, the proportion of cells in S-phase, or the distribution of cytoplasmic granularity. In some cases, flow cytometry forms the basis of Fluorescence-Activated Cell Sorting (FACS) and related methods.
Statistical clustering techniques are used to identify subgroups in the total cell population. Most of us are familiar with the scatter graphs often used to visualise such data, but the statistical analysis is often more sophisticated than just 2D scatterplots. Machine learning methods can be applied to the measured variables in order to find high-dimensional ways of classifying the cells. A common approach is to use standard multi-variate classification methods, which try to find the right combination of variables that distinguish sub-populations of cells from each other. More recently, some labs have applied machine learning methods, such as deep learning, which is commonly used for computer identification of objects in natural images.
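As a toy illustration of this kind of multivariate classification, here is a minimal sketch (assuming Python with NumPy): two invented cell sub-populations, each described by two measured parameters such as forward scatter and fluorescence intensity, are separated with a simple nearest-centroid rule. This is far cruder than the methods used in real cytometry pipelines, but the principle – finding the combination of variables that tells sub-populations apart – is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "flow cytometry" data: two hypothetical sub-populations, each
# described by two measured parameters (e.g. forward scatter and
# fluorescence intensity). All values here are invented for illustration.
pop_a = rng.normal(loc=[2.0, 5.0], scale=0.5, size=(500, 2))
pop_b = rng.normal(loc=[4.0, 2.0], scale=0.5, size=(500, 2))
X = np.vstack([pop_a, pop_b])
y = np.array([0] * 500 + [1] * 500)

# Nearest-centroid classification: assign each cell to the sub-population
# whose mean parameter vector is closest.
centroids = np.array([X[y == k].mean(axis=0) for k in (0, 1)])

def classify(cells):
    # distance of every cell to every centroid; pick the nearest
    d = np.linalg.norm(cells[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

accuracy = (classify(X) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

Real analyses would of course use many more parameters, held-out test cells, and more powerful classifiers, but the scatter-plot "gating" most cytometrists do by eye is essentially a hand-drawn version of this decision rule.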
Claire Lifan Chen and colleagues at UCLA used this approach to show how imaging flow cytometry can be combined with deep learning to classify two different cell types (OT-II hybridoma T lymphocytes and SW-480 colon cancer epithelial cells). They used standard imaging software to analyse the output of an imaging flow cytometry setup and identified a set of biologically-meaningful parameters (Scientific Reports 6, doi:10.1038/srep21471). These parameters, along with a set of “true” values for the cell types, were then fed to a deep learning neural network consisting of several layers of “neurones”, which then tried to learn which combination of parameters predicts the cell type. Using this method, Chen and colleagues achieved a reported 95% accuracy – and that with unlabelled cells.
But there is a problem with flow cytometry: it achieves this high throughput speed largely at the cost of spatial information. As cells pass by the probe, all the information for each cell is pooled together. If the ratio, say, of two fluorescent probes is being measured, there will be only one number per cell. There will be no way of telling, for instance, if that ratio is high near the nucleus but lower further away, or whether it is uniform throughout the cell. Worse, there may be subpopulations of cells that are defined by just such spatial details that are discarded in flow cytometry.
Contrast this situation with that of microscopy, which is fundamentally all about the spatial (and temporal) distribution of the signal. On the other hand, microscopy isn’t in any sense high-throughput. Comparing flow cytometry with microscopy is like comparing two different devices: one rapidly counts cars flowing along a road but has no way of distinguishing American from Japanese cars; the other can identify the car manufacturer, but takes a long time doing it! The fact that flow cytometry and imaging lie on opposite ends of that scale is disappointing, because there would be big advantages if you could marry the two. For instance, imagine being able to rapidly sort thousands of cells per second, based on the pattern of expression of a fluorescently-labelled protein.
But how do you get the speed of flow cytometry and the detail of imaging? Isn’t that like trying to have your cake and eat it?
Let’s think about the problem more closely and we will understand the amazing solutions that are emerging from various labs to tackle this challenge. What we want, if we are to accomplish our goal, is some way of getting good quality images of cells as they zip quickly past the camera.
There are several ways of doing that, and if you have ever tried taking pictures of objects on the move you will probably know some of them. The most obvious is to have a very short exposure time. The problem is that there is a limit to how sensitive a camera can be, especially considering that the fluorescent signals emitted from stained cells are often quite weak. Another option is to make the signal brighter, so that a shorter exposure time suffices. But here we are quite limited by the brightness of available probes and the undesirability of over-expressing exogenous constructs. Alternatively, we could make the cells flow past more slowly, but that would defeat our purpose!
To understand how it is done, we have to think about the way modern imaging cameras work. They are usually either CMOS (complementary metal-oxide-semiconductor) chips or, more commonly, charge-coupled devices (CCDs). In the latter case, the “film” at the back of the camera is an array of photosensitive elements. Photons falling on each element create a charge, which is then read off and converted to a digital signal. If you wait longer before reading off, you get a stronger signal – one way of making the camera more sensitive, but at the cost of blurring the image because of cell movement during the integration time. However, if the cell is moving past in a predictable direction and at a constant speed (something that is easy to guarantee with modern microfluidics), there is a trick you can play, known as time delay and integration. While each photocell is charging in response to the light, instead of being read off, it passes its charge onto the next photocell along, allowing that neighbour to take over the job of integrating the light signal. If the speed and direction at which the charge is passed from photocell to photocell exactly match those of the real biological cell, we get the same effect as panning the camera.
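The charge-shifting trick just described can be sketched in one dimension (a toy NumPy simulation with invented numbers, not a model of any real sensor): a bright spot moves one pixel per frame, and shifting the accumulated charge in step with the motion keeps the integrated image sharp, where a static sensor would smear it.

```python
import numpy as np

# 1-D sketch of charge-shifting integration. A "cell" (a 3-pixel bright
# spot with an invented intensity profile) advances one pixel per frame.
n_pixels, n_frames = 16, 6
cell_profile = np.array([1.0, 3.0, 1.0])

def frame(t):
    img = np.zeros(n_pixels)
    img[t : t + cell_profile.size] = cell_profile  # spot moved t pixels
    return img

# A static sensor integrating over all frames smears the moving spot.
static = sum(frame(t) for t in range(n_frames))

# A sensor that shifts its charge in step with the motion stays sharp.
tdi = np.zeros(n_pixels)
for t in range(n_frames):
    tdi = np.roll(tdi, 1)   # pass charge to the neighbouring photocell
    tdi += frame(t)         # keep integrating the incoming light

print("static width:", np.count_nonzero(static))  # smeared over 8 pixels
print("shifted width:", np.count_nonzero(tdi))    # still 3 pixels wide
print("shifted peak:", tdi.max())                 # 6x brighter than 1 frame
```

The shifted trace collects six frames’ worth of signal yet stays exactly as wide as the cell itself – the “panning camera” effect in miniature.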
Another trick is to take advantage of the number of photocells in the camera compared to the size of the field of view. A camera can contain tens of millions of photocells but, given the size of cells (tens of micrometres in diameter), this is a bit of overkill. It means we have enough photocells for a judicious division of labour – preallocating photocells to work in parallel, either in the time or in the spatial domain.
An alternative to the above approaches is to split the field of view into several complementary views, all presented to the camera at the same time. A set of lenses and mirrors splits the field so that several cells can be processed at once, effectively side-stepping the resolution/exposure-time trade-off.
A method called “temporally coded excitation” is another way out of the blurring/exposure dilemma. Recall that increasing the exposure time would indeed increase sensitivity, but would also cause blurring because of the target’s motion during acquisition. In temporally coded methods, a rotating chopper wheel creates a series of light pulses. This series is not regular but pseudorandom, and it produces a set of superimposed, time-lapse images. But because the timing of the pulses and the point-spread function are known, the images can be combined (giving the same result as increasing the exposure time) and deblurred (because the chopper wheel has separated out the different exposures).
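A toy one-dimensional sketch shows why a pseudorandom pulse code can be inverted (all numbers here are invented, and least squares stands in for the “mathematical magic” of the real deblurring algorithms): the motion-blurred recording is just the object convolved with the known code, and because the code is broadband the convolution can be undone.

```python
import numpy as np

# A hand-picked pseudorandom open/shut pulse code (invented, not from any
# real instrument) and a 1-D "cell" intensity profile to be recovered.
code = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0], dtype=float)
obj = np.zeros(40)
obj[10:14] = [1.0, 2.0, 2.0, 1.0]

# What the camera records: the object smeared by motion during the coded
# exposure, i.e. the object convolved with the pulse code.
blurred = np.convolve(obj, code)

# Write the convolution as a matrix and invert it by least squares.
n = obj.size
A = np.zeros((blurred.size, n))
for i in range(n):
    A[i : i + code.size, i] = code
recovered, *_ = np.linalg.lstsq(A, blurred, rcond=None)

print("max reconstruction error:", np.abs(recovered - obj).max())
```

Had the “code” been a single long open shutter (a run of ones), the same matrix would be nearly singular and the blur irrecoverable – which is exactly why the pulse train is made pseudorandom.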
So far we have talked about imaging flow cytometry using cameras. An alternative is to use photomultiplier tubes (PMTs). PMTs may sound a bit primitive, in that they are effectively just light meters, or perhaps we could think of them as one-pixel cameras. A far cry from the striving towards ever higher resolution we have become used to. But PMTs actually offer some serious advantages. The main ones are their speed and sensitivity. PMTs automatically amplify the signal that arises from each photon. The light enters the tube and hits a photocathode. In response, the photocathode emits a number of electrons, which then bounce off a series of “dynodes”, each of which releases several electrons for each one it receives. PMTs have a huge dynamic range and are incredibly sensitive. But wait a minute – just now we were bewailing the loss of spatial information, and now we are talking about one-pixel cameras! But you can restore spatial information by the way you illuminate the specimen. For one thing, you can scan across the image with a narrow-beam laser, much like the dot used to scan in lines along an old-fashioned cathode ray tube (thereby revealing the author’s age!). As the beam passes from side to side and down, the time-varying pattern of intensities can be folded back to recreate a 2D image.
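Folding the one-dimensional PMT trace back into a picture is just bookkeeping, as this toy NumPy sketch shows (the 4×5 “specimen” is invented; the only assumption is a scanner that sweeps side to side, so every return sweep arrives reversed in the trace):

```python
import numpy as np

# An invented 4x5 "specimen" of intensity values.
specimen = np.arange(20.0).reshape(4, 5)

# The PMT records one intensity per dwell point, in scan order. With a
# side-to-side (boustrophedon) sweep, every other line arrives reversed.
trace = np.concatenate([row if i % 2 == 0 else row[::-1]
                        for i, row in enumerate(specimen)])

# Knowing the scan geometry, fold the 1-D trace back into a 2-D image,
# un-flipping the return sweeps.
image = trace.reshape(4, 5)
image[1::2] = image[1::2, ::-1].copy()

print(np.array_equal(image, specimen))  # True: the image is recovered
```

The point is that the detector itself never sees two dimensions – the geometry of the illumination, known in advance, puts the spatial structure back in.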
Another way to capture spatial information in a 1-pixel device is to use STEAM. Nothing to do with computer games or beautiful railway engines – this is Serial Time-Encoded Amplified Microscopy. STEAM holds the record for the fastest imaging shutter speeds, so it is worth trying to understand it. In what follows, remember that the PMT only sees a one-dimensional signal varying over time. The trick behind STEAM is to create a time-locked laser pulse with a fairly wide bandwidth (that is, there is a wide variety of wavelengths, or “colours”, in it). This pulse is really short, and I mean short. Femtoseconds short. That is, 1/1,000,000,000,000,000 of a second, or one quadrillionth, or one millionth of one billionth, of a second.
The pulse is spread out spatially according to the wavelengths in it, producing a rainbow. In other words, the components of the light are mapped out in space, hence capturing the spatial aspect of the signal. After passing through the target-to-be-imaged, the rainbow is merged again and passed through a fibre that performs a “dispersive Fourier transform”, stretching the signal in time. The rest is mathematical magic, but the point is that the spatial information, first encoded in wavelength, ends up spread out in time, where it can be read by the PMT.
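The chain of mappings – position to wavelength to arrival time – can be caricatured in a few lines of NumPy (all numbers invented, and the dispersion reduced to a simple linear delay, which is a drastic simplification of the real physics):

```python
import numpy as np

# Each point across the object is lit by a different wavelength of the
# broadband pulse (the "rainbow"). Mapping and values are invented.
positions = np.arange(8)                       # points across the object
wavelengths = 800.0 + 2.0 * positions          # nm: assumed rainbow mapping
profile = np.array([1.0, 0.9, 0.2, 0.1, 0.1, 0.3, 0.8, 1.0])  # transmission

# The dispersive fibre delays each wavelength by a different amount, so
# wavelength order (and hence spatial order) becomes arrival order.
delays = 0.5 * (wavelengths - wavelengths.min())   # arbitrary time units
pmt_trace = profile[np.argsort(delays)]

print(np.array_equal(pmt_trace, profile))  # spatial profile, read as time
```

Because the delay grows monotonically with wavelength, the one-pixel detector reads out the spatial transmission profile simply by listening to the pulse arrive, point by point, in time.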
Last Changed: 28.06.2017