Clinical Research

Cutting-edge healthtech insights

Effects of a comprehensive brain computed tomography deep learning model on radiologist detection accuracy

European Radiology. Published: 22 August 2023
Authors: Quinlan D. Buchlak, Cyril H. M. Tang, Jarrel C. Y. Seah, Andrew Johnson, Xavier Holt, Georgina M. Bottrell, Jeffrey B. Wardman, Gihan Samarasinghe, Leonardo Dos Santos Pinheiro, Hongze Xia, Hassan K. Ahmad, Hung Pham, Jason I. Chiang, Nalan Ektas, Michael R. Milne, Christopher H. Y. Chiu, Ben Hachey, Melissa K. Ryan, Benjamin P. Johnston, Nazanin Esmaili, Christine Bennett, Tony Goldschlager, Jonathan Hall, Duc Tan Vo, Lauren Oakden-Rayner, Jean-Christophe Leveque, Farrokh Farrokhi, Richard G. Abramson, Catherine M. Jones, Simon Edelstein & Peter Brotchie
Abstract: Non-contrast computed tomography of the brain (NCCTB) is commonly used to detect intracranial pathology but is subject to interpretation errors. Machine learning can augment clinical decision-making and improve NCCTB scan interpretation. This retrospective detection accuracy study assessed the performance of radiologists assisted by a deep learning model and compared the standalone performance of the model with that of unassisted radiologists.
Key Points:
• This study demonstrated that the use of a comprehensive deep learning system assisted radiologists in the detection of a wide range of abnormalities on non-contrast brain computed tomography scans.
• The deep learning model demonstrated an average area under the receiver operating characteristic curve of 0.93 across 144 findings and significantly improved radiologist interpretation performance.
• The assistance of the comprehensive deep learning model significantly reduced the time required for radiologists to interpret computed tomography scans of the brain.

Clinical research Artificial intelligence Annalise CTB

Analysis of Line and Tube Detection Performance of a Chest X-ray Deep Learning Model to Evaluate Hidden Stratification

Diagnostics 2023, 13(14), 2317. Published: 9 July 2023
Authors: Cyril H. M. Tang, Jarrel C. Y. Seah, Hassan K. Ahmad, Michael R. Milne, Jeffrey B. Wardman, Quinlan D. Buchlak, Nazanin Esmaili, John F. Lambert, Catherine M. Jones
Abstract: This retrospective case-control study evaluated the diagnostic performance of a commercially available chest radiography deep convolutional neural network (DCNN) in identifying the presence and position of central venous catheters, enteric tubes, and endotracheal tubes, in addition to a subgroup analysis of different types of lines/tubes. A held-out test dataset of 2568 studies was sourced from community radiology clinics and hospitals in Australia and the USA, and was then ground-truth labelled for the presence, position, and type of line or tube from the consensus of a thoracic specialist radiologist and an intensive care clinician. DCNN model performance for identifying and assessing the positioning of central venous catheters, enteric tubes, and endotracheal tubes over the entire dataset, as well as within each subgroup, was evaluated. The area under the receiver operating characteristic curve (AUC) was assessed. The DCNN algorithm displayed high performance in detecting the presence of lines and tubes in the test dataset with AUCs > 0.99, and good position classification performance over a subpopulation of ground truth positive cases with AUCs of 0.86–0.91. The subgroup analysis showed that model performance was robust across the various subtypes of lines or tubes, although position classification performance of peripherally inserted central catheters was relatively lower. Our findings indicated that the DCNN algorithm performed well in the detection and position classification of lines and tubes, supporting its use as an assistant for clinicians. Further work is required to evaluate performance in rarer scenarios, as well as in less common subgroups.
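The AUC metric reported throughout these studies can be computed directly from model scores and ground-truth labels. As a minimal illustration only (this is not the authors' evaluation code, and the example data are hypothetical), AUC equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative case:

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) formulation.

    labels: 1 for ground-truth positive cases, 0 for negatives.
    scores: model confidence scores, higher = more likely positive.
    Ties between a positive and a negative score count as 0.5.
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative case")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical example: 3 positive and 3 negative cases.
print(roc_auc([1, 1, 1, 0, 0, 0], [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]))  # 8/9, about 0.889
```

A perfect classifier scores every positive above every negative and returns 1.0; the AUCs > 0.99 reported above are close to that ceiling.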

Artificial intelligence Annalise CXR Clinical research

Machine Learning Augmented Interpretation of Chest X-rays: A Systematic Review

Diagnostics 2023, 13(4), 743. Published: 15 February 2023
Authors: Hassan K. Ahmad, Michael R. Milne, Quinlan D. Buchlak, Nalan Ektas, Georgina Sanderson, Hadi Chamtie, Sajith Karunasena, Jason Chiang, Xavier Holt, Cyril H. M. Tang, Jarrel C. Y. Seah, Georgina Bottrell, Nazanin Esmaili, Peter Brotchie, Catherine Jones
Abstract: Limitations of the chest X-ray (CXR) have resulted in attempts to create machine learning systems to assist clinicians and improve interpretation accuracy. An understanding of the capabilities and limitations of modern machine learning systems is necessary for clinicians as these tools begin to permeate practice. This systematic review aimed to provide an overview of machine learning applications designed to facilitate CXR interpretation. A systematic search strategy was executed to identify research into machine learning algorithms capable of detecting >2 radiographic findings on CXRs published between January 2020 and September 2022. Model details and study characteristics, including risk of bias and quality, were summarized. Initially, 2248 articles were retrieved, with 46 included in the final review. Published models demonstrated strong standalone performance and were typically as accurate, or more accurate, than radiologists or non-radiologist clinicians. Multiple studies demonstrated an improvement in the clinical finding classification performance of clinicians when models acted as a diagnostic assistance device. Device performance was compared with that of clinicians in 30% of studies, while effects on clinical perception and diagnosis were evaluated in 19%. Only one study was prospectively run. On average, 128,662 images were used to train and validate models. Most classified less than eight clinical findings, while the three most comprehensive models classified 54, 72, and 124 findings. This review suggests that machine learning devices designed to facilitate CXR interpretation perform strongly, improve the detection performance of clinicians, and improve the efficiency of radiology workflow. Several limitations were identified, and clinician involvement and expertise will be key to driving the safe implementation of quality CXR machine learning systems.

Artificial intelligence Annalise CXR

Charting the potential of brain computed tomography deep learning systems

Journal of Clinical Neuroscience, open access. May 2022. https://doi.org/10.1016/j.jocn.2022.03.014
Authors: Quinlan D. Buchlak, Michael R. Milne, Jarrel Seah, Andrew Johnson, Gihan Samarasinghe, Ben Hachey, Nazanin Esmaili, Aengus Tran, Jean-Christophe Leveque, Farrokh Farrokhi, Tony Goldschlager, Simon Edelstein, Peter Brotchie
Abstract: Brain computed tomography (CTB) scans are widely used to evaluate intracranial pathology. The implementation and adoption of CTB has led to clinical improvements. However, interpretation errors occur and may have substantial morbidity and mortality implications for patients. Deep learning has shown promise for facilitating improved diagnostic accuracy and triage. This research charts the potential of deep learning applied to the analysis of CTB scans. It draws on the experience of practicing clinicians and technologists involved in the development and implementation of deep learning-based clinical decision support systems. We consider the past, present and future of the CTB, along with limitations of existing systems as well as untapped beneficial use cases. Implementing deep learning CTB interpretation systems and effectively navigating development and implementation risks can deliver many benefits to clinicians and patients, ultimately improving efficiency and safety in healthcare.

Thought leadership Annalise CTB

Diagnostic accuracy of a commercially available deep learning algorithm in supine chest radiographs following trauma

BJR. First published online 18 Mar 2022.
Authors: Jacob Gipson, Victor Tang, Jarrel Seah, Helen Kavnoudias, Adil Zia, Robin Lee, Biswadev Mitra and Warren Clements
Abstract:
Objectives: Trauma chest radiographs may contain subtle and time-critical pathology. Artificial intelligence (AI) may aid in accurate reporting, timely identification and worklist prioritisation. However, few AI programs have been externally validated. This study aimed to evaluate the performance of a commercially available deep convolutional neural network, Annalise CXR V1.2 (Annalise.ai), for the detection of traumatic injuries on supine chest radiographs.
Methods: Chest radiographs with a CT performed within 24 h in the setting of trauma were retrospectively identified at a level one adult trauma centre between January 2009 and June 2019. The Annalise.ai assessment of each chest radiograph was compared with the radiologist report of that radiograph; the contemporaneous CT report was taken as the ground truth. Agreement with CT was measured using Cohen's κ, and sensitivity and specificity were calculated for both the AI and the radiologists.
Results: There were 1404 cases identified, with a median age of 52 years (IQR 33–69); 949 were male. AI demonstrated superior performance compared to radiologists in identifying pneumothorax (p = 0.007) and segmental collapse (p = 0.012) on chest radiograph. Radiologists performed better than AI for clavicle fracture (p = 0.002), humerus fracture (p < 0.0015) and scapula fracture (p = 0.014). No statistical difference was found for identification of rib fractures and pneumomediastinum.

Clinical research Annalise CXR

Assessment of the effect of a comprehensive chest radiograph deep learning model on radiologist reports and patient outcomes: a real-world observational study

BMJ Open. First published December 20, 2021
Authors: Catherine M Jones, Luke Danaher, Michael R Milne, Cyril Tang, Jarrel Seah, Luke Oakden-Rayner, Andrew Johnson, Quinlan D Buchlak, Nazanin Esmaili
Abstract:
Objectives: Artificial intelligence (AI) algorithms have been developed to detect imaging features on chest X-rays (CXR); a comprehensive AI model capable of detecting 124 CXR findings was recently developed. The aim of this study was to evaluate the real-world usefulness of the model as a diagnostic assistance device for radiologists.
Design: This prospective real-world multicentre study involved a group of radiologists using the model in their daily reporting workflow to report consecutive CXRs, recording their level of agreement with the model findings and whether the model significantly affected their reporting.
Setting: The study took place at radiology clinics and hospitals within a large radiology network in Australia between November and December 2020.

Clinical research Annalise CXR

Do comprehensive deep learning algorithms suffer from hidden stratification? A retrospective study on pneumothorax detection in chest radiography

BMJ Open. First published December 7, 2021
Authors: Jarrel Seah, Cyril Tang, Quinlan D Buchlak, Michael Robert Milne, Xavier Holt, Hassan Ahmad, John Lambert, Nazanin Esmaili, Luke Oakden-Rayner, Peter Brotchie, Catherine M Jones
Abstract:
Objectives: To evaluate the ability of a commercially available comprehensive chest radiography deep convolutional neural network (DCNN) to detect simple and tension pneumothorax, as stratified by the following subgroups: the presence of an intercostal drain; rib, clavicular, scapular or humeral fractures or rib resections; subcutaneous emphysema; and erect versus non-erect positioning. The hypothesis was that performance would not differ significantly in each of these subgroups when compared with the overall test dataset.
Design: A retrospective case–control study was undertaken.
Setting: Community radiology clinics and hospitals in Australia and the USA.
Participants: A test dataset of 2557 chest radiography studies was ground-truthed by three subspecialty thoracic radiologists for the presence of simple or tension pneumothorax, as well as each subgroup other than positioning. Radiograph positioning was derived from radiographer annotations on the images.

Clinical research

Effect of a comprehensive deep-learning model on the accuracy of chest x-ray interpretation by radiologists: a retrospective, multireader multicase study

Lancet Digital Health, July 1 2021.
Authors: Jarrel C Y Seah, Cyril H M Tang, Quinlan D Buchlak, Xavier G Holt, Jeffrey B Wardman, Anuar Aimoldin, Nazanin Esmaili, Hassan Ahmad, Hung Pham, John F Lambert, Ben Hachey, Stephen J F Hogg, Benjamin P Johnston, Christine Bennett, Luke Oakden-Rayner, Peter Brotchie, Catherine M Jones
Summary:
Background: Chest x-rays are widely used in clinical practice; however, interpretation can be hindered by human error and a lack of experienced thoracic radiologists. Deep learning has the potential to improve the accuracy of chest x-ray interpretation. We therefore aimed to assess the accuracy of radiologists with and without the assistance of a deep-learning model.
Methods: In this retrospective study, a deep-learning model was trained on 821 681 images (284 649 patients) from five data sets from Australia, Europe, and the USA. 2568 enriched chest x-ray cases from adult patients (≥16 years) who had at least one frontal chest x-ray were included in the test dataset; cases were representative of inpatient, outpatient, and emergency settings. 20 radiologists reviewed cases with and without the assistance of the deep-learning model, with a 3-month washout period. We assessed the change in accuracy of chest x-ray interpretation across 127 clinical findings when the deep-learning model was used for decision support, by calculating the area under the receiver operating characteristic curve (AUC) for each radiologist with and without the deep-learning model. We also compared AUCs for the model alone with those of unassisted radiologists. If the lower bound of the adjusted 95% CI of the difference in AUC between the model and the unassisted radiologists was more than –0·05, the model was considered to be non-inferior for that finding. If the lower bound exceeded 0, the model was considered to be superior.
Findings: Unassisted radiologists had a macroaveraged AUC of 0·713 (95% CI 0·645–0·785) across the 127 clinical findings, compared with 0·808 (0·763–0·839) when assisted by the model. The deep-learning model statistically significantly improved the classification accuracy of radiologists for 102 (80%) of 127 clinical findings, was statistically non-inferior for 19 (15%) findings, and no findings showed a decrease in accuracy when radiologists used the deep-learning model. The model alone achieved a macroaveraged AUC of 0·957 (0·954–0·959) across all findings. Model classification alone was significantly more accurate than unassisted radiologists for 117 (94%) of 124 clinical findings predicted by the model and was non-inferior to unassisted radiologists for all other clinical findings.
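The pre-specified non-inferiority rule in this study reduces to a threshold check on the lower bound of the adjusted 95% CI of the per-finding AUC difference. A sketch of that decision logic (illustrative only; the function name, return labels, and example values are hypothetical, not the authors' code):

```python
def classify_model_vs_radiologists(ci_lower_bound, margin=-0.05):
    """Classify one clinical finding under the pre-specified rule.

    ci_lower_bound: lower bound of the adjusted 95% CI of
    (model AUC - unassisted radiologist AUC) for that finding.
    Lower bound > 0       -> model superior
    Lower bound > margin  -> model non-inferior
    Otherwise             -> non-inferiority not demonstrated
    """
    if ci_lower_bound > 0:
        return "superior"
    if ci_lower_bound > margin:
        return "non-inferior"
    return "not demonstrated"

print(classify_model_vs_radiologists(0.02))   # superior
print(classify_model_vs_radiologists(-0.03))  # non-inferior
print(classify_model_vs_radiologists(-0.10))  # not demonstrated
```

Using the CI lower bound rather than the point estimate makes the test conservative: a finding is only called superior or non-inferior when the whole plausible range of the AUC difference clears the relevant threshold.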

Clinical research

Chest radiographs and machine learning – Past, present and future

Journal of Medical Imaging and Radiation Oncology. 25 June 2021.
Authors: CM Jones MBBS, FRCR, FRANZCR; QD Buchlak MD, MPsych, MIS; L Oakden-Rayner MBBS, FRANZCR; M Milne MS; J Seah MBBS; N Esmaili PhD, MBA; B Hachey PhD.
Summary: Despite its simple acquisition technique, the chest X-ray remains the most common first-line imaging tool for chest assessment globally. Recent evidence for image analysis using modern machine learning points to possible improvements in both the efficiency and the accuracy of chest X-ray interpretation. While promising, these machine learning algorithms have not provided comprehensive assessment of findings in an image and do not account for clinical history or other relevant clinical information. However, the rapid evolution in technology and evidence base for its use suggests that the next generation of comprehensive, well-tested machine learning algorithms will be a revolution akin to early advances in X-ray technology. Current use cases, strengths, limitations and applications of chest X-ray machine learning systems are discussed.

Artificial intelligence

Designing Effective Artificial Intelligence Software

Poster presented at the European Society of Radiology Congress 2021. Poster number C-13640. DOI: 10.26044/ecr2021/C-13640
Authors: C. Tang, J. C. Y. Seah, Q. Buchlak, C. Jones; Sydney/AU
Learning objectives: To raise awareness of the importance of usable AI design, provide examples of model interpretability methods, and to summarise clinician reactions to methods of communicating AI model interpretability in a radiological tool.
Background: In the past decade, the number of AI-enabled tools, especially deep learning solutions, has exploded onto the radiological scene with the promise of revolutionising healthcare [1]. However, these data-driven models are often treated as numerical exercises and black boxes, offering little insight into the reasons for their behaviour. Trust in novel technologies is often limited by a lack of understanding of the decision-making processes behind the technology.
Findings and procedure details: “It’s just aggravating to have to move and shuffle all these windows… shuffle between the list and your [Brand Name] dictation software… [or] Google Chrome or Internet Explorer, to search for something on there. Everything’s just opening on top of each other, which is aggravating.” – UX interview with Interventional Radiologist, USA. The design of the entire user experience of our AI tool has involved radiologists and other clinicians at every step.
Conclusion: The inclusion of interpretability techniques has been well-received through testing in multiple rounds of user interviews, reflecting a demand from the broader radiological community to be able to demystify the black box of AI. Future AI work should involve radiologists at all steps of the design process in order to address workflow and UI concerns, especially as regulatory authorities move towards guidelines that aim to ensure a safer and more interpretable AI future.

Clinical research