Detection of tuberculosis from chest X-ray images: Boosting the performance with vision transformer and transfer learning
Nguyen Phuong Thanh.
2021-01-01
Abstract
Tuberculosis (TB), caused by Mycobacterium tuberculosis, is a contagious disease and one of the deadliest in the world. Research in medical imaging has been conducted to provide doctors with techniques and tools for the early detection, monitoring, and diagnosis of the disease using Artificial Intelligence. Recently, many attempts have been made to automatically recognize TB from chest X-ray (CXR) images. Still, while the reported performance is encouraging, our investigation shows that many of the existing approaches have been evaluated on small datasets with little diversity. We suspect that such good performance might not hold for heterogeneous data sources originating from real-world scenarios. Our present work aims to fill this gap and improve prediction performance on larger datasets. In particular, we present a practical solution for the detection of tuberculosis from CXR images, making use of cutting-edge Machine Learning and Computer Vision algorithms. We conceive a framework adopting three recent deep neural networks as the main classification engines, namely a modified EfficientNet, a modified original Vision Transformer, and a modified hybrid EfficientNet with Vision Transformer. Moreover, we empower the learning process with various augmentation techniques. We evaluated the proposed approach on a large dataset curated by merging various public datasets, split into training, validation, and testing sets accounting for 80%, 10%, and 10% of the data, respectively. To further study our proposed approach, we compared it with two state-of-the-art systems. The obtained results are encouraging: the maximum accuracy of 97.72% with an AUC of 100% is achieved with ViT_Base_EfficientNet_B1_224. The experimental results demonstrate that our tool outperforms the considered baselines with respect to different quality metrics.
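As an illustration only: the record contains no code, but a minimal PyTorch/timm sketch of a hybrid EfficientNet-B1 + Vision Transformer classifier of the kind the abstract describes (a CNN backbone whose feature map is tokenized and fed to a transformer encoder for binary TB vs. normal classification) might look as follows. The class name, embedding size, encoder depth, and head are assumptions for the sketch, not the authors' implementation.

```python
# Hypothetical sketch, not the paper's code: hybrid EfficientNet-B1 + ViT-style classifier.
import torch
import torch.nn as nn
import timm  # assumes the timm library is installed


class HybridEffNetViT(nn.Module):
    """EfficientNet-B1 feature map tokenized and passed through a transformer
    encoder, with a CLS token classified as TB vs. normal (2 classes)."""

    def __init__(self, num_classes: int = 2, embed_dim: int = 768,
                 depth: int = 4, num_heads: int = 8):
        super().__init__()
        # CNN backbone in features_only mode: keep only the deepest feature map.
        self.backbone = timm.create_model(
            "efficientnet_b1", pretrained=True, features_only=True, out_indices=(4,))
        cnn_channels = self.backbone.feature_info.channels()[-1]

        # 1x1 conv projects CNN channels to the transformer embedding dimension.
        self.proj = nn.Conv2d(cnn_channels, embed_dim, kernel_size=1)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))

        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = self.backbone(x)[-1]                           # (B, C, H', W')
        tokens = self.proj(feat).flatten(2).transpose(1, 2)   # (B, H'*W', D)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        tokens = torch.cat([cls, tokens], dim=1)              # prepend CLS token
        encoded = self.encoder(tokens)                        # positional embeddings omitted in this sketch
        return self.head(encoded[:, 0])                       # classify from the CLS token


if __name__ == "__main__":
    model = HybridEffNetViT()
    logits = model(torch.randn(1, 3, 224, 224))  # 224x224 CXR-sized input
    print(logits.shape)  # torch.Size([1, 2])
```

In practice such a model would be fine-tuned on the merged CXR dataset with the 80/10/10 split and augmentation mentioned in the abstract; the exact training configuration is not specified in this record.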