Published in Journal of Digital Imaging
Data augmentation refers to a group of techniques whose goal is to combat the limited amount of available data, improving model generalization and pushing the sample distribution toward the true distribution. While different augmentation strategies and their combinations have been investigated for various computer vision tasks in the context of deep learning, dedicated work in the domain of medical imaging is rare and, to the best of our knowledge, there has been no dedicated work exploring the effects of various augmentation methods on the performance of deep learning models in prostate cancer detection. In this work, we statically applied the five most frequently used augmentation techniques (random rotation, horizontal flip, vertical flip, random crop, and translation) separately to a prostate diffusion-weighted magnetic resonance imaging (DWI) training dataset of 217 patients and evaluated the effect of each method on the accuracy of prostate cancer detection. The augmentation algorithms were applied independently to each data channel, and a shallow and a deep convolutional neural network (CNN) were trained on each of the five augmented sets. We used the area under the receiver operating characteristic (ROC) curve (AUC) to evaluate the performance of the trained CNNs on a separate test set of 95 patients, using a validation set of 102 patients for fine-tuning. The shallow network outperformed the deep network, with the best 2D slice-based AUC of 0.85 obtained by the rotation method.
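The five static augmentations applied independently to each data channel can be sketched as follows. This is a minimal illustration only, not the paper's code: the rotation angle range, crop fraction, and translation range are assumptions.

```python
import numpy as np
from scipy import ndimage

def augment(image, method, rng):
    """Apply one of the five static augmentations to a 2D image channel.
    Parameter ranges here are illustrative assumptions, not the study's settings."""
    h, w = image.shape
    if method == "rotation":
        angle = rng.uniform(-15, 15)           # small random angle (assumed range)
        return ndimage.rotate(image, angle, reshape=False, order=1)
    if method == "hflip":
        return np.fliplr(image)                # mirror left-right
    if method == "vflip":
        return np.flipud(image)                # mirror top-bottom
    if method == "crop":
        ch, cw = int(0.9 * h), int(0.9 * w)    # crop to 90%, then resize back
        top = rng.integers(0, h - ch + 1)
        left = rng.integers(0, w - cw + 1)
        crop = image[top:top + ch, left:left + cw]
        return ndimage.zoom(crop, (h / ch, w / cw), order=1)
    if method == "translation":
        dy, dx = rng.integers(-5, 6, size=2)   # random shift, zero fill (assumed range)
        return ndimage.shift(image, (dy, dx), order=0, cval=0.0)
    raise ValueError(method)

# Each augmentation is applied independently to each data channel:
rng = np.random.default_rng(0)
volume = rng.random((3, 64, 64))               # stand-in for a 3-channel DWI slice
augmented = {m: np.stack([augment(c, m, rng) for c in volume])
             for m in ("rotation", "hflip", "vflip", "crop", "translation")}
```

Each method produces one augmented copy of the training set, on which the shallow and deep CNNs are then trained separately.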
Published in Nature Scientific Reports
COVID-19 spread across the globe at an immense rate and has left healthcare systems unable to diagnose and test patients at the needed rate. Studies have shown promising results for the detection of COVID-19 versus viral and bacterial pneumonia in chest X-rays. Automating COVID-19 testing using medical images can speed up the testing process where healthcare systems lack sufficient numbers of reverse-transcription polymerase chain reaction (RT-PCR) tests. Supervised deep learning models such as convolutional neural networks need enough labeled data for all classes to correctly learn the task of detection. Gathering labeled data is a cumbersome task that requires time and resources, which could further strain healthcare systems and radiologists at the early stages of a pandemic such as COVID-19. In this study, we propose a randomized generative adversarial network (RANDGAN) that detects images of an unknown class (COVID-19) from known and labeled classes (Normal and Viral Pneumonia) without the need for labels or training data from the unknown class of images (COVID-19). We used the largest publicly available COVID-19 chest X-ray dataset, COVIDx, which comprises Normal, Pneumonia, and COVID-19 images from multiple public databases. In this work, we use transfer learning to segment the lungs in the COVIDx dataset. Next, we show why segmentation of the region of interest (lungs) is vital to correctly learning the task of classification, specifically in datasets that contain images from different sources, as is the case for the COVIDx dataset. Finally, we show improved results in the detection of COVID-19 cases using our generative model (RANDGAN) compared to conventional generative adversarial networks for anomaly detection in medical images, improving the area under the ROC curve from 0.71 to 0.77.
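The reported gain (AUC 0.71 to 0.77) is measured over per-image anomaly scores: a higher score should indicate a more likely COVID-19 (unknown-class) image. A minimal sketch of that evaluation, using made-up scores rather than the paper's model outputs:

```python
import numpy as np

def roc_auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: the probability that a randomly
    chosen positive (COVID-19) image scores higher than a negative one,
    counting ties as one half."""
    pos = np.asarray(scores_pos, dtype=float)[:, None]
    neg = np.asarray(scores_neg, dtype=float)[None, :]
    return (pos > neg).mean() + 0.5 * (pos == neg).mean()

# Hypothetical anomaly scores from a generative model (illustrative only):
rng = np.random.default_rng(1)
covid_scores = rng.normal(1.0, 1.0, 200)   # unknown class (positives)
known_scores = rng.normal(0.0, 1.0, 800)   # Normal + Viral Pneumonia (negatives)
auc = roc_auc(covid_scores, known_scores)
```

An AUC of 0.5 corresponds to chance-level separation of the unknown class from the known classes; 1.0 corresponds to perfect separation.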
Published in Nature Scientific Reports
As an analytic pipeline for quantitative imaging feature extraction and analysis, radiomics has grown rapidly in the past decade. Meanwhile, recent advances in deep learning and transfer learning have shown significant potential in quantitative medical imaging, raising the research question of whether deep transfer learning features carry predictive information beyond that of radiomics features. In this study, using CT images from Pancreatic Ductal Adenocarcinoma (PDAC) patients recruited at two independent hospitals, we found that most transfer learning features have weak linear relationships with radiomics features, suggesting a potential complementary relationship between the two feature sets. We also tested the prognostic performance for overall survival of four feature fusion and reduction methods for combining radiomics and transfer learning features and compared the results with our proposed risk score-based feature fusion method. The risk score-based feature fusion method significantly improved prognostic performance for predicting overall survival in PDAC patients compared to the traditional feature reduction methods used in previous radiomics studies (a 40% increase in area under the ROC curve (AUC), yielding an AUC of 0.84).
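The weak-linear-relationship finding amounts to computing pairwise Pearson correlations between every radiomics feature and every transfer learning feature across patients. A minimal sketch with synthetic stand-in matrices (the feature counts and the 0.3 "weak" threshold are assumptions, not the study's values):

```python
import numpy as np

def cross_correlation(a, b):
    """Pearson correlation between every column of `a` and every column of `b`.
    Rows are patients; columns are features. Returns an (a_cols, b_cols) matrix."""
    a = (a - a.mean(axis=0)) / a.std(axis=0)
    b = (b - b.mean(axis=0)) / b.std(axis=0)
    return a.T @ b / a.shape[0]

# Hypothetical feature matrices (illustrative only):
rng = np.random.default_rng(42)
n_patients = 100
radiomics = rng.standard_normal((n_patients, 20))   # e.g., 20 radiomics features
transfer = rng.standard_normal((n_patients, 50))    # e.g., 50 deep transfer features

corr = cross_correlation(radiomics, transfer)       # shape (20, 50)
weak = np.mean(np.abs(corr) < 0.3)                  # fraction of weakly correlated pairs
```

A high fraction of weak pairs is what suggests the two feature sets are complementary rather than redundant, motivating a fusion method rather than discarding one set.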
At The Hospital for Sick Children, we have an opening for a fully funded MSc student (domestic applicants only) in the field of Machine Learning for Medical Imaging and Medicine, for January 2021 admission to the Institute of Medical Science (IMS) at the University of Toronto. The research project is AI in Medicine, with an emphasis on radiomics and deep learning for diagnosis and prognosis of brain tumours, and requires a strong background in statistical analysis and machine learning. The successful candidate may have the option to start as a Research Assistant at SickKids in September 2020 until they transition to the MSc program in January 2021. If interested, please send your CV and transcripts, along with a list of references, to farzad dot khalvati at utoronto.ca before Aug 23, 2020. The successful candidate will be invited to apply to the School of Graduate Studies at the University of Toronto.
Sensors Special Issue: Deep Learning-Based Imaging and Sensing Technologies for Biomedical Applications (Impact Factor: 3.27)
With the advent of deep learning, Artificial Intelligence (AI) models, including convolutional neural networks (CNNs), have delivered promising results for health monitoring and for the detection and prediction of different diseases using biomedical imaging and sensing technologies. These technologies help improve overall patient outcomes by providing personalized diagnostics, prognostics, and treatment, improving patients' quality of life. The unique challenges of developing AI models for health monitoring and disease diagnosis and prognosis using imaging and sensing technologies require customized models that go beyond off-the-shelf, generic AI solutions. These challenges include the high accuracy, reliability, and explainability required of AI results in biomedical applications. To bring together state-of-the-art research, research papers reporting novel AI-driven imaging and/or sensing technologies with clinical applications are invited for submission to this Special Issue. The scope of this Special Issue includes, but is not limited to:
- AI-driven advances in biomedical optical imaging/sensing technologies (e.g., optical imaging, optical coherence tomography, near infrared spectroscopy, diffuse optical spectroscopy) for biomedical applications;
- AI-driven advances in medical image analysis using deep learning for different imaging modalities including X-ray, CT, MRI, PET, ultrasound, etc.;
- Advances in AI-based solutions for disease diagnosis and prognosis using imaging and/or sensing technologies;
- Advances in AI explainability solutions for imaging and/or sensing technologies that address different aspects of AI explainability, including novel attention map generators as well as ways to interpret the results and integrate them into clinical settings.
Dr. Farzad Khalvati