Document Type : Original Article(s)
Authors
1 Health Information Management, Office of Vice Chancellor for Research, Arak University of Medical Sciences, Arak, Iran
2 Internal Medicine Department, Arak University of Medical Sciences, Arak, Iran
3 Postdoctoral Research Associate in Medical Imaging at Dartmouth College, Hanover, NH 03755 USA
4 Department of Internal Medicine of Amir Al Momenin Hospital, Arak University of Medical Sciences, Arak, Iran
5 Department of Radiology and Neurology; University of Alabama at Birmingham, Birmingham, Alabama, USA
Abstract
Background: Chest computed tomography (CT) plays an essential role in diagnosing coronavirus disease 2019 (COVID-19). However, CT findings are often nonspecific among different viral pneumonia conditions. The differentiation between COVID-19 and influenza can be challenging when seasonal influenza coincides with the COVID-19 pandemic. This study was conducted to test the ability of radiomics combined with artificial intelligence (AI) to perform this task.
Methods: In this retrospective study, chest CT images from 47 patients with COVID-19 (after February 2020) and 19 patients with H1N1 influenza (before September 2019) pneumonia were collected from three hospitals affiliated with Arak University of Medical Sciences, Arak, Iran. All pulmonary lesions were segmented on the CT images. Multiple radiomics features were extracted from the lesions and used to develop support-vector machine (SVM), k-nearest neighbor (k-NN), decision tree, neural network, adaptive boosting (AdaBoost), and random forest classifiers.
Results: The patients with COVID-19 and H1N1 influenza were not significantly different in age and sex (P=0.13 and 0.99, respectively). Nonetheless, the average time between initial symptoms/hospitalization and chest CT was shorter in the patients with COVID-19 (P=0.001 and 0.01, respectively). After the implementation of the inclusion and exclusion criteria, 453 pulmonary lesions were included in this study. On the harmonized features, random forest yielded the highest performance (area under the curve=0.97, sensitivity=89%, precision=90%, F1 score=89%, and classification accuracy=89%).
Conclusion: In our preliminary study, radiomics feature extraction, conjoined with AI, especially random forest and neural network, appeared to yield very promising results in the differentiation between COVID-19 and H1N1 influenza on chest CT.
What’s Known
COVID-19 and influenza overlap in clinical presentation. Seasonal influenza may occur concurrently with the COVID-19 pandemic. The polymerase chain reaction test is robust for influenza but has low sensitivity for COVID-19. Any technique that can differentiate between these infections could improve patient management.
What’s New
Radiomics feature extraction, conjoined with modern artificial intelligence, has high accuracy in differentiating COVID-19 from H1N1 influenza on chest computed tomography. Radiomics-artificial intelligence techniques can improve the accuracy of computed tomography to differentiate COVID-19 from influenza and empower radiologists with limited experience in chest imaging.
Introduction
The coronavirus disease 2019 (COVID-19) pandemic is the first pandemic of the third decade of the 21st century. The first cases of COVID-19 were detected in the Chinese city of Wuhan in December 2019, presenting with fever, cough, pneumonia, and lymphopenia and not responding to the usual antibiotics. The etiology was soon identified as a novel coronavirus, SARS-CoV-2. Despite robust containment measures, the infection became a pandemic in less than three months, with more than 41 million confirmed cases and over 1.1 million deaths by the end of October 2020. 1, 2 Since then, the differentiation between COVID-19 and influenza has remained critical for patient management. Currently, the gold-standard diagnostic test for COVID-19 is considered to be the polymerase chain reaction (PCR) test. Nonetheless, not only is this test unavailable in many settings, even in developed countries, but its accuracy is also questionable. In this context, chest computed tomography (CT) scanning continues to be of vital importance for diagnosis. Chest CT has a high sensitivity of about 95% to 97% in the detection of COVID-19 pneumonia. 3, 4
Despite the very promising sensitivity of chest CT in detecting COVID-19 infection, the major limitation of this modality remains its low specificity. 5 Currently, radiologists with limited chest imaging experience cannot differentiate COVID-19 from other viral or bacterial pneumonia conditions with high accuracy. Hence, any technique that can improve the specificity of chest CT can enhance their performance. The low specificity of chest CT in differentiating these pneumonia conditions can be partially attributed to the inability of the human eye to detect subtle radiologic findings. Generally, the human eye can identify only a few radiologic features, such as the size, density, borders, and enhancement of lesions. In this context, radiomics has emerged as a rapidly evolving research field in radiology. The basic concept behind radiomics is the ability of computers and software to extract many more quantitative features from medical images. In radiomics, the region of interest is generally selected and segmented by a radiologist so that numerous features can be extracted from the segmented area. The extracted features are then analyzed to identify the most diagnostic ones before artificial intelligence (AI) models are developed on them. 6
In this study, we sought to determine whether radiomics in tandem with different AI models could improve the specificity of chest CT to differentiate COVID-19 pneumonia from H1N1 pneumonia.
Patients and Methods
Study Population
This retrospective study was approved by the Ethics Committee of Arak University of Medical Sciences (No. IR.ARAKMU.REC.1398.339). Written informed consent was obtained from each patient upon admission. The entire study population received standard care based on the university and national guidelines. The medical records of 850 patients with acute respiratory symptoms admitted to three hospitals affiliated with Arak University of Medical Sciences, Arak, Iran, were reviewed. Patients with COVID-19 (after February 2020) or H1N1 influenza-induced pneumonia (before September 2019) who underwent chest CT were included in this study. The inclusion criterion for patients with COVID-19 was a positive PCR test. The quantitative reverse transcription-polymerase chain reaction (RT-qPCR) assay was performed using the 2019-nCoV Nucleic Acid Diagnostic Kit (Sansure Biotech, Changsha, China), in keeping with the manufacturer’s protocol, on LightCycler 96 instruments (Roche Diagnostics, Mannheim, Germany). For patients with H1N1 influenza, the inclusion criterion was a positive respiratory viral panel for influenza according to the RT-qPCR assay. For the H1N1 cohort, only patients admitted before September 2019 were selected to avoid any overlap between COVID-19 and influenza. Patients with known chronic lung disease were excluded. Finally, 66 patients, comprising 47 cases with COVID-19 and 19 cases with H1N1 influenza, were included in the final analysis. For the influenza cohort, all PCR-positive inpatient cases in the university data set were selected. The same data set contained a much larger population of patients with COVID-19; however, only 47 of them were included to avoid class imbalance.
CT Images
All the studied patients underwent chest CT with the standard lung protocol (peak kilovoltage [kVp]=100–110, milliampere-seconds [mAs]=24–40, slice thickness <1.5 mm, pitch factor=0.8, and matrix=512×512). CT scanning was performed with Siemens (SOMATOM Emotion 16 Slice [DE], Germany), Toshiba (Aquilion 16 Slice, Japan), and GE (Optima 58, 32 Slice, USA) scanners.
Pulmonary Lesion Segmentation
The chest CT images of the study population were evaluated by a radiologist (HS) with 15 years of clinical imaging experience, who was blinded to the patients’ diagnoses. After the initial assessment, chest CT images with poor quality or motion artifacts were excluded. Pulmonary lesions on the chest CT images in the lung window were then segmented by the same radiologist using 3D Slicer. 7 If part of a lesion was ground-glass opacity and another part was consolidation, it was segmented as two different lesions. Patchy lesions attached to each other were considered a single lesion. Axial CT images were used for segmentation. Each lesion was segmented on multiple axial slices, and the segmented areas were combined to obtain a 3D volume for each lesion. The central one-third of the bronchovascular structures was avoided during the segmentation, whereas the peripheral two-thirds of the bronchovascular structures were included within the segmented lesions if they were encased by parenchymal opacities. Pulmonary fissures were also avoided during the segmentation, and large pulmonary lesions were limited to a single pulmonary lobe. Chronic lesions such as calcifications, fibrosis, and cavities, as well as pleural effusions, lymphadenopathy, and atelectasis, were identified visually by the radiologist at the time of segmentation and were not included in the segmentation.
Feature Extraction
Feature extraction was performed with 3D Slicer 7 and the PyRadiomics library 8 (resampled voxel size=2×2×2 mm and bin width=64). For each lesion, 120 features were extracted (table 1; a minimal extraction sketch is provided after the table). The extracted features comprised shape (2D and 3D), first-order, gray-level dependence matrix (GLDM), gray-level co-occurrence matrix (GLCM), gray-level run-length matrix (GLRLM), gray-level size-zone matrix (GLSZM), and neighboring gray-tone-difference matrix (NGTDM) features. Additionally, minimum redundancy maximum relevance (mRMR), the least absolute shrinkage and selection operator (LASSO), and principal component analysis (PCA) were used for feature selection and reduction.
Feature Classes | Features |
---|---|
First-Order Features | Energy, Total Energy, Entropy, Minimum, 10th Percentile, 90th Percentile, Maximum, Mean, Median, Interquartile Range, Range, Mean Absolute Deviation, Robust Mean Absolute Deviation, Root Mean Squared, Standard Deviation, Skewness, Kurtosis, Variance, and Uniformity |
Shape Features (3D) | Mesh Volume, Voxel Volume, Surface Area, Surface Area to Volume Ratio, Sphericity, Compactness, Spherical Disproportion, Maximum 3D Diameter, Maximum 2D Diameter (Slice), Maximum 2D Diameter (Column), Maximum 2D Diameter (Row), Major Axis Length, Minor Axis Length, Least Axis Length, Elongation, and Flatness |
Shape Features (2D) | Mesh Surface, Pixel Surface, Perimeter, Perimeter to Surface Ratio, Sphericity, Spherical Disproportion, Maximum 2D Diameter, Major Axis Length, Minor Axis Length, and Elongation |
Gray-Level Co-occurrence Matrix Features | Autocorrelation, Joint Average, Cluster Prominence, Cluster Shade, Cluster Tendency, Contrast, Correlation, Difference Average, Difference Entropy, Difference Variance, Joint Energy, Joint Entropy, Informational Measure of Correlation 1, Informational Measure of Correlation 2, Inverse Difference Moment, Maximal Correlation Coefficient, Inverse Difference Moment Normalized, Inverse Difference, Inverse Difference Normalized, Inverse Variance, Maximum Probability, Sum Average, Sum Entropy, and Sum of Squares |
Gray-Level Size-Zone Matrix Features | Small-Area Emphasis, Large-Area Emphasis, Gray-Level Nonuniformity, Gray-Level Nonuniformity Normalized, Size-Zone Nonuniformity, Size-Zone Nonuniformity Normalized, Zone Percentage, Gray-Level Variance, Zone Variance, Zone Entropy, Low Gray-Level Zone Emphasis, High Gray-Level Zone Emphasis, Small-Area Low Gray-Level Emphasis, Small-Area High Gray-Level Emphasis, Large-Area Low Gray-Level Emphasis, and Large-Area High Gray-Level Emphasis |
Gray-Level Run-Length Matrix Features | Short-Run Emphasis, Long-Run Emphasis, Gray-Level Nonuniformity, Gray-Level Nonuniformity Normalized, Run Length Nonuniformity, Run Length Nonuniformity Normalized, Run Percentage, Gray-Level Variance, Run Variance, Run Entropy, Low Gray-Level Run Emphasis, High Gray-Level Run Emphasis, Short-Run Low Gray-Level Emphasis, Short-Run High Gray-Level Emphasis, Long-Run Low Gray-Level Emphasis, and Long-Run High Gray-Level Emphasis |
Neighboring Gray-Tone-Difference Matrix Features | Coarseness, Contrast, Busyness, Complexity, and Strength |
Gray-Level Dependence Matrix Features | Small Dependence Emphasis, Large Dependence Emphasis, Gray-Level Nonuniformity, Dependence Nonuniformity, Dependence Nonuniformity Normalized, Gray-Level Variance, Dependence Variance, Dependence Entropy, Low Gray-Level Emphasis, High Gray-Level Emphasis, Small Dependence Low Gray-Level Emphasis, Small Dependence High Gray-Level Emphasis, Large Dependence Low Gray-Level Emphasis, and Large Dependence High Gray-Level Emphasis |
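To illustrate the extraction step described above, the following is a minimal sketch of PyRadiomics-based feature extraction for one segmented lesion, assuming the CT volume and the lesion mask have been exported from 3D Slicer as NRRD files. The file names are hypothetical, the settings mirror the values reported in the Methods, and this is not the authors' exact pipeline.

```python
# Minimal sketch of radiomics feature extraction with PyRadiomics.
# "patient_ct.nrrd" and "lesion_mask.nrrd" are hypothetical file names
# exported from 3D Slicer; settings follow the values reported in the Methods.
from radiomics import featureextractor

settings = {
    "resampledPixelSpacing": [2, 2, 2],  # resample to 2x2x2 mm voxels
    "binWidth": 64,                      # gray-level discretization bin width
}
extractor = featureextractor.RadiomicsFeatureExtractor(**settings)
# By default, the 3D feature classes (first-order, shape, GLCM, GLRLM,
# GLSZM, NGTDM, and GLDM) are enabled; shape 2D features require 2D input.

features = extractor.execute("patient_ct.nrrd", "lesion_mask.nrrd")
for name, value in features.items():
    if not name.startswith("diagnostics_"):  # skip extraction metadata
        print(name, value)
```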
Machine-Learning Model Development
Different binary classifier machine-learning (ML) models, comprising support-vector machine (SVM), decision tree, k-nearest neighbor (k-NN), Naïve Bayes, adaptive boosting (AdaBoost), random forest, and neural network, were developed using the extracted features to classify each pulmonary lesion into the COVID-19 or H1N1 influenza group.
The performance of the models was assessed via 10-fold cross-validation and leave-one-out cross-validation analyses in Orange: Data Mining Toolbox in Python. 9 Additionally, the confusion matrix of each model was evaluated. ML model training and testing were performed twice: once with the raw extracted features and then with the harmonized features. Feature harmonization was performed to avoid the effect of the different CT scanners on the radiomics results. 10 The features were harmonized by using the combating batch effects (ComBat) harmonization algorithm.
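As a sketch of how this stage could be reproduced outside Orange, the code below applies ComBat harmonization across scanners and then runs 10-fold cross-validation on a few of the listed classifiers with scikit-learn. It assumes a feature matrix X (lesions × features), binary labels y, and a per-lesion scanner identifier, and it uses the neuroCombat Python package distributed with the ComBatHarmonization repository (reference 10); argument names may differ between versions, and hyperparameters are those reported in the Results.

```python
# A sketch under stated assumptions, not the authors' Orange workflow:
# ComBat harmonization of the radiomics feature matrix across the three CT
# scanners, followed by 10-fold cross-validation of a few of the listed
# classifiers. X (lesions x features), y (1=COVID-19, 0=H1N1 influenza), and
# scanner (per-lesion scanner ID) are assumed inputs.
import pandas as pd
from neuroCombat import neuroCombat  # Python implementation from the ComBatHarmonization repository
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import StratifiedKFold, cross_validate


def harmonize(X, scanner):
    """Remove scanner-related batch effects from the feature matrix."""
    covars = pd.DataFrame({"scanner": scanner})
    out = neuroCombat(dat=X.T, covars=covars, batch_col="scanner")  # expects features x samples
    return out["data"].T


def evaluate(model, X, y):
    """10-fold stratified cross-validation with the metrics reported in tables 3 and 4."""
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    scores = cross_validate(model, X, y, cv=cv,
                            scoring=["roc_auc", "accuracy", "f1_weighted", "precision_weighted"])
    return {k: v.mean() for k, v in scores.items() if k.startswith("test_")}


models = {
    "random forest": RandomForestClassifier(n_estimators=11, random_state=0),   # 11 trees, as reported
    "AdaBoost": AdaBoostClassifier(n_estimators=50, random_state=0),            # 50 estimators, as reported
    "neural network": MLPClassifier(hidden_layer_sizes=(100,), max_iter=2000),  # "100" interpreted as hidden units
}

# Hypothetical usage once X, y, and scanner are available:
# X_harmonized = harmonize(X, scanner)
# for name, model in models.items():
#     print(name, evaluate(model, X_harmonized, y))
```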
Statistical Analysis
The differences between the H1N1 influenza and COVID-19 groups in numerical variables were assessed with the independent samples t test. The Chi square test was used to compare the categorical variable (sex) between the two groups. A P value of less than 0.05 was considered significant. The statistical analyses were performed with SPSS, version 21. The methodology of this study is summarized in figure 1.
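For illustration only, the same comparisons could be performed in Python with SciPy rather than SPSS; the age arrays below are hypothetical placeholders, while the sex counts follow table 2.

```python
# Illustrative only: the reported group comparisons with SciPy instead of SPSS.
# The age arrays are hypothetical placeholders; the sex counts follow table 2.
import numpy as np
from scipy import stats

age_covid = np.array([55, 62, 48, 71, 60])   # placeholder values, not study data
age_influenza = np.array([68, 74, 59, 66])   # placeholder values, not study data
sex_table = np.array([[16, 31],              # COVID-19: female, male (table 2)
                      [11, 8]])              # influenza: female, male (table 2)

t_stat, p_age = stats.ttest_ind(age_covid, age_influenza)       # independent samples t test
chi2, p_sex, dof, expected = stats.chi2_contingency(sex_table)  # Chi square test for sex
print(f"age: P={p_age:.3f}  sex: P={p_sex:.3f}")                # P<0.05 considered significant
```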
Results
Seventy-three patients with COVID-19 or H1N1 influenza-induced pneumonia were initially evaluated in this study. After the initial review of their CT images, seven patients were excluded because of motion artifacts or poor image quality. Thus, 66 patients, comprising 47 cases with COVID-19 and 19 cases with H1N1 influenza, were enrolled. The demographic data of these patients are provided in table 2. For the patients with influenza, the Siemens SOMATOM Emotion 16 Slice (DE) scanner and the Toshiba Aquilion 16 Slice CT scanner were used for 16 and three patients, respectively. For the patients with COVID-19, the Siemens SOMATOM Emotion 16 Slice (DE) scanner, the Toshiba Aquilion 16 Slice CT scanner, and the GE Optima 58, 32 Slice scanner were employed for 14, 23, and 10 patients, respectively. The patients in the COVID-19 and H1N1 influenza groups were not significantly different in terms of age and sex (P=0.13 and 0.99, respectively). However, the average time between initial symptoms/hospitalization and chest CT was shorter in the COVID-19 group (P=0.001 and 0.01, respectively) (table 2).
Variable | | Influenza (n=19) | COVID-19 (n=47) | P value |
---|---|---|---|---|
Age (mean±SD, y) | | 65.89±15.50 | 59.30±16.33 | 0.13 |
Sex | Female | 11 (16.7%) | 16 (24.2%) | 0.99 |
 | Male | 8 (12.1%) | 31 (47.0%) | |
Average time between initial symptoms and CT (d) | | 7.5±5.02 | 3.7±3.38 | 0.001 |
Average time between hospitalization and CT (d) | | 1.9±2.34 | 1.1±0.4 | 0.01 |
The independent samples t test was used to evaluate the differences between the H1N1 influenza and COVID-19 groups for numerical variables (age, the average time between initial symptoms and CT, and the average time between hospitalization and CT). The Chi square test was used to compare the categorical variable (sex) between the two groups. A P value of less than 0.05 was considered significant. CT: Computed tomography
In total, 453 pulmonary lesions were segmented: 306 COVID-19 lesions and 147 influenza lesions. In the first step, 120 features were extracted from each pulmonary lesion. All 120 features were utilized to train state-of-the-art ML models: Naïve Bayes, SVM, k-NN (number of neighbors=5), decision tree, neural network (different networks were developed with different numbers of layers: 10, 25, 50, 60, 75, and 100), AdaBoost (number of estimators=50 and learning rate=100,000), and random forest (number of trees=11). The application of LASSO, PCA, and minimum redundancy maximum relevance did not change ML performance, so all the features were used for the final ML development. Nevertheless, the feature selection techniques suggested that the size of the lesion (shape), large dependence emphasis (GLDM), large-area low gray-level emphasis (GLSZM), and gray-level nonuniformity (GLSZM) were the most important features in terms of correlation with the class output.

With the raw features, the neural network (100 layers) had the highest area under the curve (AUC): 0.87 for both the 10-fold cross-validation and leave-one-out cross-validation analyses. These numbers were 0.79 and 0.86 for decision tree, 0.83 and 0.84 for SVM, 0.85 and 0.83 for random forest, 0.85 and 0.79 for AdaBoost, 0.7 and 0.7 for Naïve Bayes, and 0.57 and 0.55 for k-NN. The performance of these models is summarized in table 3.

The training was repeated on the harmonized features to resolve the possible effect of the different scanners on the extracted features and the subsequent classification models. The 10-fold cross-validation and leave-one-out cross-validation analyses demonstrated improved classification with the harmonized features. This time, random forest had the highest performance, with an AUC of 0.97 for 10-fold cross-validation (table 4) and 0.969 for leave-one-out cross-validation. These numbers were 0.91 and 0.93 for neural network (100 layers), 0.91 and 0.9 for AdaBoost, 0.89 and 0.89 for decision tree, 0.85 and 0.85 for Naïve Bayes, 0.8 and 0.78 for SVM, and 0.64 and 0.64 for k-NN. Our best model (random forest) correctly classified 304 of the 306 COVID-19 lesions, with a false-negative rate of less than 1% on the confusion matrix. The classification performance of these models is presented in table 4.
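As a hedged illustration of the feature selection step described above (not the exact Orange implementation, and with the mRMR step omitted because it requires a separate package), LASSO-based selection and PCA reduction could be sketched with scikit-learn as follows; X and y are the assumed feature matrix and lesion labels.

```python
# A hedged sketch of LASSO-based feature selection and PCA reduction with
# scikit-learn; not the exact Orange implementation, and the mRMR step is
# omitted (it requires a separate package). X and y are assumed inputs.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler


def lasso_selected_features(X, y):
    """Return indices of features kept by LASSO (nonzero coefficients)."""
    Xs = StandardScaler().fit_transform(X)
    lasso = LassoCV(cv=5, random_state=0).fit(Xs, y)
    return np.flatnonzero(lasso.coef_)


def pca_reduce(X, n_components=10):
    """Project the standardized feature matrix onto its first principal components."""
    Xs = StandardScaler().fit_transform(X)
    return PCA(n_components=n_components).fit_transform(Xs)
```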
Model | AUC ** | Classification Accuracy (%) | F1 Score (%) | Precision (%) | Sensitivity (%) |
---|---|---|---|---|---|
Neural Network | 0.874 | 83.01 | 82.50 | 82.22 | 83.02 |
Random Forest | 0.858 | 81.80 | 78.32 | 79.80 | 81.80 |
AdaBoost | 0.858 | 90.60 | 90.60 | 90.61 | 90.61 |
SVM | 0.832 | 81.80 | 76.11 | 83.40 | 81.80 |
Decision Tree | 0.798 | 89.01 | 88.70 | 88.61 | 89.02 |
Naive Bayes | 0.702 | 65.10 | 68.21 | 76.61 | 65.10 |
k-NN | 0.571 | 78.62 | 74.32 | 74.01 | 78.60 |
The 10-fold cross-validation was used to measure the performance of the machine-learning models. AdaBoost: Adaptive boosting; SVM: Support-vector machine; k-NN: k-nearest neighbors; AUC: Area under the curve. **AUC ranges in value from 0 to 1 |
Model | AUC ** | Classification Accuracy (%) | F1 Score (%) | Precision (%) | Sensitivity (%) |
---|---|---|---|---|---|
Random Forest | 0.974 | 89.40 | 88.91 | 90.20 | 89.40 |
Neural Network | 0.914 | 84.80 | 84.61 | 84.61 | 84.81 |
AdaBoost | 0.911 | 92.31 | 92.32 | 92.30 | 92.32 |
Decision Tree | 0.894 | 91.42 | 91.40 | 91.41 | 91.40 |
Naive Bayes | 0.851 | 78.20 | 78.50 | 79.12 | 78.21 |
SVM | 0.802 | 76.40 | 76.11 | 76.02 | 76.40 |
k-NN | 0.642 | 65.60 | 64.80 | 64.20 | 65.60 |
The 10-fold cross-validation was used to measure the performance of the machine-learning models. AdaBoost: Adaptive boosting; SVM: Support-vector machine; k-NN: k-nearest neighbors; AUC: Area under the curve. **AUC ranges in value from 0 to 1 |
Discussion
In this study, using radiomics techniques and different ML models, we differentiated pulmonary lesions caused by COVID-19 from those caused by H1N1 influenza with high accuracy. Our study is one of the first investigations of the role of radiomics in infection (COVID-19 in our case). It seems that radiomics, which was first developed mainly for oncologic imaging, could also be useful in diagnosing infectious diseases.
So far, radiomics has had few applications in pneumonia, having mainly been employed to differentiate infection from malignancy. Uthoff and others used radiomics to differentiate between histoplasmosis and non-small cell lung cancer with an AUC of 0.89. 11 Liu and colleagues drew upon radiomics, conjoined with SVM, a feedforward backpropagation neural network (FNN-BP), and random forest, to differentiate silicosis from tuberculosis; they reported the highest performance for random forest, with an AUC of 0.91. 12 Radiomics was also used by Wang and others to differentiate primary progressive pulmonary tuberculosis from community-acquired pneumonia in children; they reported an AUC of 0.97 with an SVM developed on radiomics features. 13 Shi and colleagues utilized radiomics to differentiate lung cancer from opportunistic pulmonary infection in patients with HIV. 14 In a recently published study, radiomics was used to distinguish ground-glass pulmonary opacities between COVID-19 infection and non–COVID-19 pneumonia. The authors extracted radiomics features from pulmonary lesions and then selected the most predictive features via LASSO. Using SVM models on the selected features, they differentiated ground-glass opacities of COVID-19 from non–COVID-19 lesions with an AUC of 0.9. 15 In line with our study, they used ComBat harmonization for feature normalization; our random forest model outperformed their SVM model in classification accuracy. In another recent study similar to ours, radiomics was used to differentiate COVID-19 from influenza A. After feature extraction, the authors selected the most predictive features by LASSO and then used SVM to differentiate COVID-19 from influenza A lesions, achieving an AUC of 0.87. 16 Our random forest, neural network, AdaBoost, and decision tree models outperformed their final model.
The application of radiomics feature extraction across different scanners is challenging. Reports indicate that feature extraction depends on the imaging protocol, reconstruction techniques, and scanner vendor. 17 - 20 In our study, CT images were obtained with the same pulmonary protocol, with similar slice thickness, pitch factor, and matrix, and with almost the same kVp and mAs. Still, the images were obtained with three different CT scanners, which could interfere with the observed results. To avoid the bias induced by the different CT scanners, we repeated the AI model training after harmonizing the extracted features. Harmonization has been proposed to resolve the effect of different scanners, protocols, and image reconstruction techniques on radiomics-based models. In this context, ComBat harmonization has been promising and is especially robust in improving chest CT feature extraction; it has been reported that this technique can reduce the variance of features extracted with different chest CT protocols to nearly zero. 21 The same technique also enhanced our classifier models. This finding again indicates that radiomics feature extraction is scanner and protocol dependent and that all features should be harmonized before AI model training.
PCR is currently regarded as a robust test for diagnosing H1N1 infection. 22 Nonetheless, the same does not hold true for COVID-19. Indeed, PCR can be negative in 38% of patients with COVID-19 on the day of initial symptoms. 23 Having an additional technique with a low false-negative rate for detecting COVID-19 could lessen this weakness of PCR. Our review of the confusion matrix showed that our best model missed very few COVID-19 lesions (false-negative rate <1%). We believe that our model, or similar platforms, could compensate for the low sensitivity of PCR in COVID-19.
We recognize the limitations of our study. This was a proof-of-concept study performed retrospectively. Moreover, only H1N1 influenza was compared with COVID-19; other causes of pneumonia (bacterial and other viral pneumonia conditions) were not evaluated. Further prospective studies with larger patient populations are needed to evaluate radiomics-associated AI techniques for augmenting the accuracy of CT in COVID-19 pneumonia. Additionally, segmentation was performed manually, and every single lesion was segmented individually; consequently, the radiologist's experience might have influenced the final performance.
Conclusion
Radiomics feature extraction, conjoined with ML models, appears to yield promising results in terms of improving the specificity and accuracy of chest CT to differentiate between COVID-19 pneumonia and other causes of pneumonia (H1N1 in our study). Given the known high sensitivity of CT in COVID-19 infection, the application of radiomics-AI techniques on chest CT may confer a state-of-the-art diagnostic method. Radiomics feature extraction in chest CT is scanner-dependent, and features must be harmonized before classifier model development. More extensive studies are required to probe further into the role of radiomics in COVID-19 management.
Conflict of Interest:
None declared.
References
- Lake MA. What we know so far: COVID-19 current clinical knowledge and research. Clin Med (Lond). 2020; 20:124-7.
- Johns Hopkins University [Internet]. COVID-19 Dashboard by the Center for Systems Science and Engineering (CSSE). [Cited 8 September 2020]. https://coronavirus.jhu.edu/map.html.
- Long C, Xu H, Shen Q, Zhang X, Fan B, Wang C, et al. Diagnosis of the Coronavirus disease (COVID-19): rRT-PCR or CT? Eur J Radiol. 2020; 126:108961.
- Ai T, Yang Z, Hou H, Zhan C, Chen C, Lv W, et al. Correlation of Chest CT and RT-PCR Testing for Coronavirus Disease 2019 (COVID-19) in China: A Report of 1014 Cases. Radiology. 2020; 296:E32-E40.
- Kovacs A, Palasti P, Vereb D, Bozsik B, Palko A, Kincses ZT. The sensitivity and specificity of chest CT in the diagnosis of COVID-19. Eur Radiol. 2020.
- Mohajerani P, Sotoudeh H. Essentials of AI Techniques: With a focus on medicine and healthcare, without math or coding. Chicago: Independently published; 2020.
- 3D Slicer [Internet]. Open-source software platform. [Cited 12 September 2020]. https://www.slicer.org/.
- PyRadiomics [Internet]. Open-source radiomics library. [Cited 2 September 2020]. https://www.radiomics.io/pyradiomics.html.
- Demšar J, Curk T, Erjavec A, Gorup Č, Hočevar T, Milutinovič M, et al. Orange: data mining toolbox in Python. The Journal of Machine Learning Research. 2013; 14:2349-53.
- Fortin JP [Internet]. ComBatHarmonization. Open-source software. [Cited 13 September 2020]. https://github.com/Jfortin1/ComBatHarmonization.
- Uthoff J, Nagpal P, Sanchez R, Gross TJ, Lee C, Sieren JC. Differentiation of non-small cell lung cancer and histoplasmosis pulmonary nodules: insights from radiomics model performance compared with clinician observers. Transl Lung Cancer Res. 2019; 8:979-88.
- Liu J, Li M, Liu RR, Zhu Y, Chen GQ, Li XB, et al. Establishment of a CT image radiomics-based prediction model for the differential diagnosis of silicosis and tuberculosis nodules. Zhonghua Lao Dong Wei Sheng Zhi Ye Bing Za Zhi. 2019; 37:707-10.
- Wang B, Li M, Ma H, Han F, Wang Y, Zhao S, et al. Computed tomography-based predictive nomogram for differentiating primary progressive pulmonary tuberculosis from community-acquired pneumonia in children. BMC Med Imaging. 2019; 19:63.
- Shi W, Zhou L, Peng X, Ren H, Wang Q, Shan F, et al. HIV-infected patients with opportunistic pulmonary infections misdiagnosed as lung cancers: the clinicoradiologic features and initial application of CT radiomics. J Thorac Dis. 2019; 11:2274-86.
- Xie C, Ng MY, Ding J, Leung ST, Lo CSY, Wong HYF, et al. Discrimination of pulmonary ground-glass opacity changes in COVID-19 and non-COVID-19 patients using CT radiomics analysis. Eur J Radiol Open. 2020; 7:100271.
- Zeng QQ, Zheng KI, Chen J, Jiang ZH, Tian T, Wang XB, et al. Radiomics-based model for accurately distinguishing between severe acute respiratory syndrome associated coronavirus 2 (SARS-CoV-2) and influenza A infected pneumonia. MedComm (Beijing). 2020.
- Jin H, Kim JH. How dependent are CT radiomic features on CT scan parameters? International Forum on Medical Imaging in Asia 2019. 2019; 11050:110500D.
- Rizzo S, Botta F, Raimondi S, Origgi D, Fanciullo C, Morganti AG, et al. Radiomics: the facts and the challenges of image analysis. Eur Radiol Exp. 2018; 2:36.
- Shafiq-Ul-Hassan M, Zhang GG, Latifi K, Ullah G, Hunt DC, Balagurunathan Y, et al. Intrinsic dependencies of CT radiomic features on voxel size and number of gray levels. Med Phys. 2017; 44:1050-62.
- Ger RB, Zhou S, Chi PM, Lee HJ, Layman RR, Jones AK, et al. Comprehensive Investigation on Controlling for CT Imaging Variabilities in Radiomics Studies. Sci Rep. 2018; 8:13047.
- Mahon RN, Ghita M, Hugo GD, Weiss E. ComBat harmonization for radiomic features in independent phantom and lung cancer patient computed tomography datasets. Phys Med Biol. 2020; 65:015010.
- Maignan M, Viglino D, Hablot M, Termoz Masson N, Lebeugle A, Collomb Muret R, et al. Diagnostic accuracy of a rapid RT-PCR assay for point-of-care detection of influenza A/B virus at emergency department admission: A prospective evaluation during the 2017/2018 influenza season. PLoS One. 2019; 14:e0216308.
- Rong Y, Wang F, Liu J, Zhou Y, Li X, Liang X, et al. Clinical characteristics and risk factors of mild-to-moderate COVID-19 patients with false-negative SARS-CoV-2 nucleic acid. J Med Virol. 2021; 93:448-55.