Automatic Microscopy: Challenges and Opportunities


Visual microscopic examination is one of the foundational methods for quantitative and qualitative analysis of blood smears, which are invaluable for diagnosing many diseases. However, visual analysis of these smears by a human is tedious, time-consuming, and has limited statistical reliability. Methods that automate visual analysis tasks are therefore key drivers of performance, throughput and accuracy in haematological laboratories.

For the uninitiated, the first step of any digital image analysis system is to “digitise” the physical blood smear, which is itself a challenge. Beyond that, building a solution that enables fully automated examination of blood smears under real laboratory conditions is quite a prodigious task.

Several solutions for whole-slide digitisation have been proposed and are commercially available. Most of them capture a whole slide at a configurable objective magnification and present the digitised image to pathologists or haematologists for analysis. These images may range from several tens to a few hundreds of gigabytes. At such sizes, it is often infeasible to transfer the digitised slides over a network to a centralised server or knowledge base, which creates silos of data, information and knowledge. The need of the hour is a simpler smear digitisation solution that produces much smaller images, while maintaining enough image quality for manual intervention when required.
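To see where those gigabyte figures come from, a back-of-the-envelope calculation helps. The scan area, resolution and bytes-per-pixel below are illustrative assumptions (typical of a 40x objective), not any vendor's specification:

```python
# Rough estimate of an uncompressed whole-slide image size.
# All numbers are illustrative assumptions, not vendor specifications.

def wsi_size_gb(width_mm: float, height_mm: float,
                microns_per_pixel: float, bytes_per_pixel: int = 3) -> float:
    """Uncompressed size, in gigabytes, of a scanned region."""
    px_w = width_mm * 1000 / microns_per_pixel   # pixels across
    px_h = height_mm * 1000 / microns_per_pixel  # pixels down
    return px_w * px_h * bytes_per_pixel / 1e9

# A 15 mm x 15 mm smear area scanned at 0.25 um/pixel (roughly a 40x objective):
print(f"{wsi_size_gb(15, 15, 0.25):.1f} GB")  # ~10.8 GB before compression
```

Even with compression, scanning multiple focal planes or larger areas quickly pushes this into the tens-to-hundreds of gigabytes range quoted above.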

While smear digitisation in itself has many unknowns and challenges, the inherent variability among the images is influenced by many factors: ambient illumination conditions, glass-slide thickness, variability in staining the smear, visual artefacts from dust particles, non-uniform backgrounds, differences in image luminance and colour distribution, and so on. This makes it a super complex problem. Not only does it demand a pre-processing step to normalise the images, it also requires one to define what the “normalised form” is. Needless to say, the steps taken to transform the image are themselves non-trivial and hard!
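As a minimal sketch of what one such normalisation step can look like, here is a gray-world white balance: each colour channel is rescaled so its mean matches the global mean, pulling differently-tinted images toward a common reference. Real smear pipelines use far more involved techniques (e.g. stain normalisation); this only illustrates the idea:

```python
import numpy as np

def gray_world(img: np.ndarray) -> np.ndarray:
    """Gray-world white balance. img: float array (H, W, 3) in [0, 1]."""
    channel_means = img.reshape(-1, 3).mean(axis=0)  # per-channel mean
    gain = channel_means.mean() / channel_means      # scale each channel toward grey
    return np.clip(img * gain, 0.0, 1.0)

# Synthetic image with a strong blue cast, standing in for a tinted smear photo:
rng = np.random.default_rng(0)
img = rng.random((64, 64, 3)) * np.array([0.6, 0.7, 1.0])
balanced = gray_world(img)
print(balanced.reshape(-1, 3).mean(axis=0))  # channel means are now nearly equal
```

The harder, unsolved part is exactly what the article points out: choosing the reference (“normalised form”) that every image should be mapped to.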

So, does addressing the above challenges mean we have solved half the problem? Definitely not! Delving deeper, the next challenge is to segment the cells in these “normalised” digital images. Cell segmentation has been one of the most researched areas in the digital image processing community. The umpteen combinations of microscopy type, staining, intensity, cell type, cell density, clumping, and so on make cell segmentation a hard nut to crack.
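A classic baseline conveys the flavour of the problem: Otsu thresholding to separate cells from background, then connected-component labelling to isolate individual objects. This sketch works on clean synthetic data; the clumping and staining variability described above are precisely what breaks such simple approaches in practice:

```python
import numpy as np
from scipy import ndimage

def otsu_threshold(gray: np.ndarray) -> float:
    """Threshold maximising between-class variance (Otsu's method)."""
    hist, edges = np.histogram(gray, bins=256, range=(0.0, 1.0))
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)                      # background pixel counts
    w1 = hist.sum() - w0                      # foreground pixel counts
    s0 = np.cumsum(hist * centers)
    mu0 = s0 / np.maximum(w0, 1)              # background mean intensity
    mu1 = (s0[-1] - s0) / np.maximum(w1, 1)   # foreground mean intensity
    var_between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(var_between)]

# Synthetic "smear": dark circular blobs on a bright background.
h = w = 128
yy, xx = np.mgrid[:h, :w]
img = np.ones((h, w))
for cy, cx in [(30, 30), (30, 90), (90, 60)]:
    img[(yy - cy) ** 2 + (xx - cx) ** 2 < 12 ** 2] = 0.2

mask = img < otsu_threshold(img)              # cells are darker than background
labels, n_cells = ndimage.label(mask)         # label each connected blob
print(n_cells)  # 3 separated blobs found
```

The moment two cells touch, `ndimage.label` sees one object instead of two, which is why techniques like watershed splitting, active contours and learned segmentation models dominate the literature.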

..and we are not done yet. While segmentation of the cells lets us extract objects of interest, we must scale yet another Everest to classify and bucketise these objects into their subtypes, based on the pattern each cell type exhibits. These patterns may range from a simple colour or morphological distinction to much more complex nitty-gritties involving geometry and finer details like cytoplasm granularity and chromatic variation. While humans can recognise these patterns and geometrical variations with some degree of ease, a computer needs more than a pattern-matching system to decipher them. It demands that the computer not only learn these patterns, but also assimilate the knowledge of how to classify them, just as a human would. To top it all, the computer has to not only match the pattern-recognition accuracy of a human, but also estimate the geometrical measurements of these 3-dimensional objects from 2-dimensional images!
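A toy illustration of this classification step: each segmented object is reduced to a couple of hand-crafted shape features (area and circularity) and assigned to the nearest class centroid. The class names and centroid values are entirely hypothetical, as if learned from labelled examples; real cell classifiers rely on far richer features (texture, chromatin, granularity) or deep networks:

```python
import numpy as np

def features(mask: np.ndarray) -> np.ndarray:
    """(area, circularity) of a binary object mask."""
    area = int(mask.sum())
    padded = np.pad(mask, 1)
    # interior pixels: foreground with all four direct neighbours foreground
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = int((mask & ~interior).sum())          # crude boundary count
    circularity = 4 * np.pi * area / max(perimeter, 1) ** 2
    return np.array([area, circularity])

def classify(mask: np.ndarray, centroids: dict) -> str:
    """Assign the object to the nearest class centroid in feature space."""
    x = features(mask)
    return min(centroids, key=lambda name: np.linalg.norm(x - centroids[name]))

# Hypothetical class centroids, as if learned from labelled training cells:
centroids = {
    "small_cell": np.array([115.0, 1.0]),  # e.g. lymphocyte-like
    "large_cell": np.array([450.0, 1.0]),  # e.g. monocyte-like
}

# A disc of radius 7 as a stand-in for one segmented cell:
yy, xx = np.mgrid[:32, :32]
cell = (yy - 16) ** 2 + (xx - 16) ** 2 < 7 ** 2
print(classify(cell, centroids))  # small_cell
```

Even this toy hints at the deeper problem the article raises: area and circularity measured in pixels are 2-D projections, while the quantities of clinical interest belong to 3-D cells.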

Seems super complex and hard? To add more “hard” to the hardness and more “complex” to the complexity: should we tell you that all of this processing and number crunching needs to be done in a couple of minutes?
