Deep Neural Networks and Image Analysis for Quantitative Microscopy
- Location: Room 2446, ITC, Lägerhyddsvägen 2, Hus 2, Uppsala
- Doctoral student: Sadanandan, Sajith Kecheril
- About the thesis
- Organizer: Division of Visual Information and Interaction
- Contact person: Sadanandan, Sajith Kecheril
This thesis covers computational methods for segmentation and classification of biological samples imaged by microscopy.
Understanding biology paves the way for discovering drugs that target deadly diseases such as cancer, and microscopy imaging is one of the most informative ways to study biology. However, drawing statistically verifiable conclusions often requires analyzing large numbers of samples. Automated approaches to microscopy image analysis make it possible to handle large data sets while reducing the risk of bias. Quantitative microscopy refers to computational methods for extracting measurements from microscopy images, enabling detection and comparison of subtle changes in morphology or behavior induced by varying experimental conditions.
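To make the idea of quantitative microscopy concrete, the sketch below follows the common threshold/label/measure pattern on a synthetic stand-in for a microscopy image; the image, threshold value, and measured quantity (object area) are illustrative assumptions, not taken from the thesis. SciPy is assumed to be available.

```python
# Minimal quantitative-microscopy sketch: segment bright objects by
# global thresholding, label connected components, and extract a
# per-object measurement (area in pixels).
import numpy as np
from scipy import ndimage

# Synthetic "image" with two bright cell-like blobs (hypothetical data).
img = np.zeros((10, 10))
img[1:4, 1:4] = 1.0   # a 3x3 object
img[6:9, 5:10] = 1.0  # a 3x5 object

mask = img > 0.5                  # segmentation by a global threshold
labels, n = ndimage.label(mask)   # connected components = objects
areas = ndimage.sum(mask, labels, index=range(1, n + 1))
# n == 2; areas == [9.0, 15.0] -- per-object measurements that could be
# compared across experimental conditions.
```

The same three-step pattern (segment, label, measure) underlies most quantitative pipelines, with the threshold step replaced by more robust segmentation methods on real images.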
The recent increase in computational power has enabled the development of deep neural networks (DNNs) that perform well on real-world problems. This thesis compares classical image analysis algorithms for segmentation of bacterial cells and introduces a novel method that combines classical image analysis with DNNs to improve cell segmentation and the detection of rare phenotypes. It also demonstrates a novel DNN for segmentation of cell clusters (spheroids) of varying size, shape, and texture imaged by phase-contrast microscopy.

DNNs typically require large amounts of training data. This problem is addressed by an automated approach that creates ground truth by combining multiple imaging modalities with classical image analysis; the resulting DNNs are applied to segment unstained cells in bright-field microscopy images.

In DNNs, it is often difficult to understand which image features have the largest influence on the final classification results. This is addressed in an experiment where DNNs are applied to classify zebrafish embryos based on phenotypic changes induced by drug treatment. The response of the trained DNNs is probed by ablation studies, which reveal that the networks do not necessarily learn the features that are most obvious on visual examination.

Finally, DNNs are explored for classification of cervical and oral cell samples collected for cancer screening; initial results show that they can respond to very subtle malignancy-associated changes. All the presented methods are developed using open-source tools and validated on real microscopy images.
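One common form of ablation study is occlusion sensitivity: masking image regions one at a time and measuring how much the classifier's score drops. The sketch below illustrates the idea with a toy scoring function standing in for a trained network; the model, patch size, and fill value are hypothetical, not the thesis's actual setup.

```python
# Occlusion-based ablation sketch: slide a masking patch over the image
# and record the score drop for each region. Large drops mark regions
# the classifier actually depends on.
import numpy as np

def occlusion_map(image, score_fn, patch=8, fill=0.0):
    """Return a grid of score drops, one cell per occluded patch."""
    h, w = image.shape
    base = score_fn(image)
    drops = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = fill
            drops[i // patch, j // patch] = base - score_fn(masked)
    return drops

# Toy "model": responds only to the top-left quadrant of a 16x16 image.
score = lambda im: im[:8, :8].mean()
img = np.ones((16, 16))
heat = occlusion_map(img, score, patch=8)
# heat[0, 0] == 1.0 while the other cells are 0: the map localizes the
# region the toy model actually uses.
```

In practice `score_fn` would be the trained DNN's output for the class of interest, and the resulting map can reveal that the network relies on features other than those most obvious to a human observer.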