Clinical Research

Cutting-edge healthtech insights

Charting the potential of brain computed tomography deep learning systems

Journal of Clinical Neuroscience, open access. May 2022. https://doi.org/10.1016/j.jocn.2022.03.014 Authors: Quinlan D. Buchlak, Michael R. Milne, Jarrel Seah, Andrew Johnson, Gihan Samarasinghe, Ben Hachey, Nazanin Esmaili, Aengus Tran, Jean-Christophe Leveque, Farrokh Farrokhi, Tony Goldschlager, Simon Edelstein, Peter Brotchie. Abstract: Brain computed tomography (CTB) scans are widely used to evaluate intracranial pathology. The implementation and adoption of CTB have led to clinical improvements. However, interpretation errors occur and may have substantial morbidity and mortality implications for patients. Deep learning has shown promise for facilitating improved diagnostic accuracy and triage. This research charts the potential of deep learning applied to the analysis of CTB scans. It draws on the experience of practicing clinicians and technologists involved in the development and implementation of deep learning-based clinical decision support systems. We consider the past, present and future of the CTB, along with the limitations of existing systems and untapped beneficial use cases. Implementing deep learning CTB interpretation systems and effectively navigating development and implementation risks can deliver many benefits to clinicians and patients, ultimately improving efficiency and safety in healthcare.

Thought leadership

Diagnostic accuracy of a commercially available deep learning algorithm in supine chest radiographs following trauma

BJR. First published online 18 Mar 2022. Authors: Jacob Gipson, Victor Tang, Jarrel Seah, Helen Kavnoudias, Adil Zia, Robin Lee, Biswadev Mitra and Warren Clements. Abstract: Objectives: Trauma chest radiographs may contain subtle and time-critical pathology. Artificial intelligence (AI) may aid in accurate reporting, timely identification and worklist prioritisation. However, few AI programs have been externally validated. This study aimed to evaluate the performance of a commercially available deep convolutional neural network – Annalise CXR V1.2 (Annalise.ai) – for the detection of traumatic injuries on supine chest radiographs. Methods: Chest radiographs with a CT performed within 24 h in the setting of trauma were retrospectively identified at a level one adult trauma centre between January 2009 and June 2019. The Annalise.ai assessment of each chest radiograph was compared to the radiologist report of the chest radiograph, with the contemporaneous CT report taken as the ground truth. Agreement with CT was measured using Cohen’s κ, and sensitivity and specificity were calculated for both AI and radiologists. Results: There were 1404 cases identified with a median age of 52 (IQR 33–69) years; 949 were male. AI demonstrated superior performance compared to radiologists in identifying pneumothorax (p = 0.007) and segmental collapse (p = 0.012) on chest radiograph. Radiologists performed better than AI for clavicle fracture (p = 0.002), humerus fracture (p < 0.0015) and scapula fracture (p = 0.014). No statistical difference was found for the identification of rib fractures and pneumomediastinum.
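The agreement and accuracy statistics named above can be sketched in a few lines. This is an illustrative example only, not the study's code, and the counts below are hypothetical:

```python
# Cohen's kappa and sensitivity/specificity from a 2x2 contingency table,
# as used to compare AI or radiologist findings against a CT ground truth.

def cohens_kappa(tp, fp, fn, tn):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = tp + fp + fn + tn
    observed = (tp + tn) / n
    # Expected chance agreement, from the raters' marginal totals.
    expected = ((tp + fp) / n) * ((tp + fn) / n) + ((fn + tn) / n) * ((fp + tn) / n)
    return (observed - expected) / (1 - expected)

def sensitivity_specificity(tp, fp, fn, tn):
    """Sensitivity (true-positive rate) and specificity (true-negative rate)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for one finding in a 1404-case cohort:
kappa = cohens_kappa(tp=80, fp=10, fn=20, tn=1294)
sens, spec = sensitivity_specificity(tp=80, fp=10, fn=20, tn=1294)
```

With these made-up counts the detector agrees substantially with CT (κ ≈ 0.83) at 80% sensitivity; the study applied the same statistics per finding for both the AI and the radiologist reports.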

Annalise CXR Clinical research

Assessment of the effect of a comprehensive chest radiograph deep learning model on radiologist reports and patient outcomes: a real-world observational study

BMJ Open. First published December 20, 2021. Authors: Catherine M Jones, Luke Danaher, Michael R Milne, Cyril Tang, Jarrel Seah, Luke Oakden-Rayner, Andrew Johnson, Quinlan D Buchlak, Nazanin Esmaili. Abstract: Objectives: Artificial intelligence (AI) algorithms have been developed to detect imaging features on chest X-ray (CXR), and a comprehensive AI model capable of detecting 124 CXR findings was recently developed. The aim of this study was to evaluate the real-world usefulness of the model as a diagnostic assistance device for radiologists. Design: This prospective real-world multicentre study involved a group of radiologists using the model in their daily reporting workflow to report consecutive CXRs, recording their feedback on their level of agreement with the model findings and whether the model significantly affected their reporting. Setting: The study took place at radiology clinics and hospitals within a large radiology network in Australia between November and December 2020.

Annalise CXR Clinical research

Do comprehensive deep learning algorithms suffer from hidden stratification? A retrospective study on pneumothorax detection in chest radiography

BMJ Open. First published December 7, 2021. Authors: Jarrel Seah, Cyril Tang, Quinlan D Buchlak, Michael Robert Milne, Xavier Holt, Hassan Ahmad, John Lambert, Nazanin Esmaili, Luke Oakden-Rayner, Peter Brotchie, Catherine M Jones. Abstract: Objectives: To evaluate the ability of a commercially available comprehensive chest radiography deep convolutional neural network (DCNN) to detect simple and tension pneumothorax, as stratified by the following subgroups: the presence of an intercostal drain; rib, clavicular, scapular or humeral fractures or rib resections; subcutaneous emphysema; and erect versus non-erect positioning. The hypothesis was that performance would not differ significantly in each of these subgroups when compared with the overall test dataset. Design: A retrospective case–control study was undertaken. Setting: Community radiology clinics and hospitals in Australia and the USA. Participants: A test dataset of 2557 chest radiography studies was ground-truthed by three subspecialty thoracic radiologists for the presence of simple or tension pneumothorax, as well as for each subgroup other than positioning. Radiograph positioning was derived from radiographer annotations on the images.
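The core of a hidden-stratification audit like the one described above is comparing a model's performance within each subgroup against its performance on the whole test set. A minimal sketch, using entirely hypothetical data rather than anything from the study:

```python
# Hidden-stratification check: does sensitivity within a subgroup
# (e.g. cases with an intercostal drain) diverge from overall sensitivity?

def sensitivity(cases):
    """Fraction of positive cases the model flagged as positive."""
    return sum(1 for c in cases if c["predicted"]) / len(cases)

# Hypothetical positive pneumothorax cases tagged with a subgroup label.
cases = [
    {"predicted": True,  "drain": True},
    {"predicted": True,  "drain": True},
    {"predicted": True,  "drain": False},
    {"predicted": False, "drain": False},
]

overall = sensitivity(cases)
with_drain = sensitivity([c for c in cases if c["drain"]])
without_drain = sensitivity([c for c in cases if not c["drain"]])
# A large gap between a subgroup and the overall figure (here, cases
# without a drain underperform) is the signature of hidden stratification;
# the study tested such gaps for statistical significance per subgroup.
```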

Clinical research

Effect of a comprehensive deep-learning model on the accuracy of chest x-ray interpretation by radiologists: a retrospective, multireader multicase study

Lancet Digital Health, July 1, 2021. Authors: Jarrel C Y Seah, Cyril H M Tang, Quinlan D Buchlak, Xavier G Holt, Jeffrey B Wardman, Anuar Aimoldin, Nazanin Esmaili, Hassan Ahmad, Hung Pham, John F Lambert, Ben Hachey, Stephen J F Hogg, Benjamin P Johnston, Christine Bennett, Luke Oakden-Rayner, Peter Brotchie, Catherine M Jones Summary: Background: Chest x-rays are widely used in clinical practice; however, interpretation can be hindered by human error and a lack of experienced thoracic radiologists. Deep learning has the potential to improve the accuracy of chest x-ray interpretation. We therefore aimed to assess the accuracy of radiologists with and without the assistance of a deep-learning model. Methods: In this retrospective study, a deep-learning model was trained on 821 681 images (284 649 patients) from five data sets from Australia, Europe, and the USA. 2568 enriched chest x-ray cases from adult patients (≥16 years) who had at least one frontal chest x-ray were included in the test dataset; cases were representative of inpatient, outpatient, and emergency settings. 20 radiologists reviewed cases with and without the assistance of the deep-learning model with a 3-month washout period. We assessed the change in accuracy of chest x-ray interpretation across 127 clinical findings when the deep-learning model was used as a decision support by calculating area under the receiver operating characteristic curve (AUC) for each radiologist with and without the deep-learning model. We also compared AUCs for the model alone with those of unassisted radiologists. If the lower bound of the adjusted 95% CI of the difference in AUC between the model and the unassisted radiologists was more than –0·05, the model was considered to be non-inferior for that finding. If the lower bound exceeded 0, the model was considered to be superior.
Findings: Unassisted radiologists had a macroaveraged AUC of 0·713 (95% CI 0·645–0·785) across the 127 clinical findings, compared with 0·808 (0·763–0·839) when assisted by the model. The deep-learning model statistically significantly improved the classification accuracy of radiologists for 102 (80%) of 127 clinical findings, was statistically non-inferior for 19 (15%) findings, and no findings showed a decrease in accuracy when radiologists used the deep-learning model. Unassisted radiologists had a macroaveraged mean AUC of 0·713 (0·645–0·785) across all findings, compared with 0·957 (0·954–0·959) for the model alone. Model classification alone was significantly more accurate than unassisted radiologists for 117 (94%) of 124 clinical findings predicted by the model and was non-inferior to unassisted radiologists for all other clinical findings.
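The per-finding decision rule stated in the methods (non-inferior if the lower CI bound of the AUC difference exceeds −0·05, superior if it exceeds 0) can be sketched directly. This is an illustrative helper with assumed inputs, not code from the paper:

```python
# Classify one finding from the lower bound of the adjusted 95% CI of the
# AUC difference (model minus unassisted radiologists), using the study's
# stated non-inferiority margin of -0.05.

def classify_finding(ci_lower, margin=-0.05):
    """Return the study's verdict for a single clinical finding."""
    if ci_lower > 0:
        return "superior"       # whole CI above zero
    if ci_lower > margin:
        return "non-inferior"   # CI excludes a drop larger than the margin
    return "inconclusive"       # CI admits a clinically meaningful drop

print(classify_finding(0.02))   # superior
print(classify_finding(-0.03))  # non-inferior
print(classify_finding(-0.08))  # inconclusive
```

Note the rule operates on the confidence interval, not the point estimate: a model can have a higher mean AUC yet still fail the non-inferiority test if the interval is wide.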

Clinical research

Chest radiographs and machine learning – Past, present and future

Journal of Medical Imaging and Radiation Oncology. 25 June 2021. Authors: CM Jones MBBS, FRCR, FRANZCR; QD Buchlak MD, MPsych, MIS; L Oakden-Rayner MBBS, FRANZCR; M Milne MS; J Seah MBBS; N Esmaili PhD, MBA; B Hachey PhD. Summary: Despite its simple acquisition technique, the chest X-ray remains the most common first-line imaging tool for chest assessment globally. Recent evidence for image analysis using modern machine learning points to possible improvements in both the efficiency and the accuracy of chest X-ray interpretation. While promising, these machine learning algorithms have not provided comprehensive assessment of findings in an image and do not account for clinical history or other relevant clinical information. However, the rapid evolution in technology and evidence base for its use suggests that the next generation of comprehensive, well-tested machine learning algorithms will be a revolution akin to early advances in X-ray technology. Current use cases, strengths, limitations and applications of chest X-ray machine learning systems are discussed.

Artificial intelligence

Designing Effective Artificial Intelligence Software

Poster presented at the European Society of Radiology Congress 2021. Poster number: C-13640. DOI: 10.26044/ecr2021/C-13640. Authors: C. Tang, J. C. Y. Seah, Q. Buchlak, C. Jones; Sydney/AU. Learning objectives: To raise awareness of the importance of usable AI design, provide examples of model interpretability methods, and to summarise clinician reactions to methods of communicating AI model interpretability in a radiological tool. Background: In the past decade, the number of AI-enabled tools, especially deep learning solutions, has exploded onto the radiological scene with the promise of revolutionising healthcare [1]. However, these data-driven models are often treated as numerical exercises and black boxes, offering little insight into the reasons for their behaviour. Trust in novel technologies is often limited by a lack of understanding of the decision-making processes behind the technology. Findings and procedure details: Design cycle: “It’s just aggravating to have to move and shuffle all these windows… shuffle between the list and your [Brand Name] dictation software… [or] Google Chrome or Internet Explorer, to search for something on there. Everything’s just opening on top of each other, which is aggravating.” – UX interview with an interventional radiologist, USA. The design of the entire user experience of our AI tool has involved radiologists and other clinicians at every step. Conclusion: The inclusion of interpretability techniques has been well received through multiple rounds of user interviews, reflecting a demand from the broader radiological community to demystify the black box of AI. Future AI work should involve radiologists at all steps of the design process in order to address workflow and UI concerns, especially as regulatory authorities move towards guidelines that aim to ensure a safer and more interpretable AI future.
