The evolution of artificial intelligence and related technologies has the potential to drastically increase the clinical importance of automated diagnosis tools. Putting these tools into use, however, is challenging: the algorithm outcome will be used to make clinical decisions, and wrong predictions can prevent the most appropriate treatment from being provided to the patient. Models should not only provide accurate predictions but also evidence that supports the outcomes, so they can be audited and their predictions double-checked. Some models are constructed in such a way that they are difficult to interpret, hence the name black-box models. While there are methods that generate explanations for generic black-box classifiers,9 these solutions are usually not tailored to the needs of physicians and do not take any medical background into consideration. Our claim in this work is that explanations must be based on features that are meaningful to physicians. We call those contextual features.
Deep neural networks are relevant examples of black-box models. These models, trained on large real datasets, have demonstrated the ability to provide extremely accurate diagnoses.1,5 However, these large and complex models of stacked transformations usually do not allow easy interpretation of the results. Despite their potential to transform healthcare and clinical practice,3,8 there are still significant challenges that must be addressed. For instance, neural network results are often brittle, either because the network learns to solve the task in unwanted ways or because even small perturbations in the input may have a huge impact on the outcome.2
Cardiovascular diseases are the leading cause of death worldwide7 and the electrocardiogram (ECG) is a major exam for screening cardiovascular diseases (see Figure 1). Our immediate application scenario is the Telehealth Network of Minas Gerais (TNMG), which serves more than 1,000 remote municipalities in six Brazilian states. More than 2,000 ECGs are examined daily and reported by cardiologists using a Web-based system. Our goal is to empower those physicians through not only accurate, automatically generated disease predictions, but also explanations that ease their understanding of the model outcome.
Figure 1. ECG samples for some common diseases.
Classical methods for automated ECG analysis, such as the University of Glasgow ECG analysis program,4 employ a two-step approach: first extracting the main features of the ECG signal using traditional signal-processing techniques and then using these features as inputs to a classifier. Deep learning presents an alternative to this approach, since the raw signal itself is used as an input to the classifier, which learns from examples to extract the features, as presented in our previous work.6 In the classical two-step approach, the models are built on top of measures and features that are known by the physicians, making it easier to verify and to understand the algorithm decisions, as well as to identify sources of algorithmic mistakes. Such transparency is lost in “end-to-end” deep learning approaches.
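The two-step approach can be sketched in a few lines of code. The feature definitions below are illustrative stand-ins (a crude width-above-threshold proxy for QRS duration), not the Glasgow program's actual algorithms; the point is that each step is inspectable on its own.

```python
# Minimal sketch of the classical two-step approach: hand-crafted feature
# extraction followed by a transparent rule-based classifier. The feature
# definitions are illustrative stand-ins, not the Glasgow program's.

def extract_features(signal, fs_hz):
    """Step 1: extract simple hand-crafted features from one ECG lead.

    signal: list of samples (millivolts); fs_hz: sampling frequency (Hz).
    """
    peak = max(signal, key=abs)                    # dominant deflection
    thr = 0.5 * abs(peak)                          # crude width threshold
    width_samples = sum(1 for v in signal if abs(v) >= thr)
    return {
        "peak_mv": peak,
        "qrs_ms": 1000.0 * width_samples / fs_hz,  # crude QRS-width proxy
    }

def two_step_classify(signal, fs_hz):
    """Step 2: an interpretable rule on top of the extracted features."""
    features = extract_features(signal, fs_hz)
    label = "wide QRS" if features["qrs_ms"] > 120 else "normal QRS"
    return label, features
```

Because the classifier operates on named measures such as `qrs_ms`, a physician can audit exactly which measurement drove the decision, which is the transparency the end-to-end approach gives up.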
In order to improve accuracy and transparency in automatic ECG analysis, we propose generating explanations based on contextual features for ECG diagnosis (Figure 2). To the best of our knowledge, this is the first work that generates explanations tailored to physicians’ needs for ECG black-box algorithms, including end-to-end classification models. The proposed method (Figure 3) uses a noise-insertion strategy to quantify the impact of the ECG intervals and segments on the automated classification outcome and to generate features meaningful to the user. These intervals and segments and their impact on the diagnosis are commonplace to cardiologists, and their usage in explanations enables a better understanding of the outcomes as well as the identification of sources of mistakes. We applied our method to generate explanations for the predictions of the deep learning model presented in Ribeiro et al.6 using data from TNMG. Finally, we assessed our approach by analyzing the explanations generated in terms of their interpretability and robustness.
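The core of the noise-insertion strategy can be sketched as follows. This is a simplified, single-score version under assumed interfaces (`predict` returns one class score; segments are given as sample-index ranges); the actual method operates on the intervals and segments of a full 12-lead ECG.

```python
import random

def segment_importance(predict, ecg, segments, noise_sd=0.2, n_runs=30, seed=0):
    """Estimate each segment's impact on a black-box classifier.

    predict:  function mapping a signal (list of floats) to a class score.
    segments: name -> (start, end) sample indices, e.g. the QRS complex.
    Returns name -> (mean, sd) of |score change| under inserted noise.
    """
    rng = random.Random(seed)
    baseline = predict(ecg)
    impact = {}
    for name, (start, end) in segments.items():
        diffs = []
        for _ in range(n_runs):
            noisy = list(ecg)
            for i in range(start, end):          # perturb only this segment
                noisy[i] += rng.gauss(0.0, noise_sd)
            diffs.append(abs(predict(noisy) - baseline))
        mean = sum(diffs) / n_runs
        sd = (sum((d - mean) ** 2 for d in diffs) / n_runs) ** 0.5
        impact[name] = (mean, sd)
    return impact
```

A segment whose perturbation barely moves the score contributes little to the prediction; a segment whose perturbation moves it a lot is what the model is relying on, and that is the quantity reported back to the physician.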
Figure 2. Comparison between methods.
While diagnosing some diseases, cardiologists analyze the ECG (depicted in Figure 4) and apply diagnostic rules. For instance, the criteria for Left Bundle Branch Block (LBBB) are: QRS duration greater than 120 milliseconds; absence of Q wave in leads I, V5 and V6; monomorphic R wave in I, V5 and V6; and ST and T wave displacement opposite to the major deflection of the QRS complex. Our explanation consists of both a textual and a visual component, in order to present the model outcome to cardiologists in terms and criteria familiar to them. In Figure 5, we show an explanation for six classes of diseases based on how much impact the noise has over the different features, quantifying how the different criteria affect the model predictions.
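The LBBB criteria above translate directly into an explicit rule check. The feature-dictionary layout below is an assumption for illustration; a real system would derive these booleans from measured intervals and wave morphologies in each lead.

```python
# Hedged sketch: the LBBB diagnostic criteria as an explicit rule check.
# The layout of the `features` dictionary is a hypothetical interface.

LEADS = ("I", "V5", "V6")

def lbbb_check(features):
    """Return (meets_all_criteria, per-criterion results) for LBBB."""
    checks = {
        "QRS > 120 ms": features["qrs_ms"] > 120,
        "no Q wave in I, V5, V6":
            all(not features["has_q_wave"][lead] for lead in LEADS),
        "monomorphic R in I, V5, V6":
            all(features["monomorphic_r"][lead] for lead in LEADS),
        "ST/T opposite to QRS deflection": features["st_t_opposite_qrs"],
    }
    return all(checks.values()), checks
```

Returning the per-criterion breakdown, rather than only the final verdict, is what allows a cardiologist to see which criterion failed and double-check it against the trace.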
Figure 4. ECG-based diagnosis.
Figure 5. Each explanation has a visual and a textual component. The visual component is a horizontal bar graph where each bar represents a feature. The colored bar is the mean value of the impact of the associated feature on the classifier, and the error bar at the right end is the standard deviation. An explanation is significant when the mean and the standard deviation are above the threshold (vertical dotted line). The textual component is generated automatically.
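Generating the textual component from the per-feature impact samples can be sketched as below. The significance rule used here (mean impact above the threshold) is our simplified reading of the visual component and is an assumption, as is the sentence template.

```python
from statistics import mean, pstdev

def textual_explanation(impacts, threshold):
    """Generate a textual explanation from per-feature impact samples.

    impacts: feature name -> list of impact values from repeated noise
    runs. The significance rule (mean impact above the threshold) is a
    simplifying assumption for this sketch.
    """
    sentences = []
    for name, samples in sorted(impacts.items(),
                                key=lambda kv: -mean(kv[1])):
        m, sd = mean(samples), pstdev(samples)
        if m > threshold:  # feature cleared the significance threshold
            sentences.append(
                f"The {name} has a significant impact on the prediction "
                f"(mean {m:.2f}, sd {sd:.2f}).")
    return " ".join(sentences) or "No feature had a significant impact."
```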
In summary, improving transparency and accountability of deep learning models is an important step toward their practical utilization. Incorporating such models in the TNMG pipeline may improve the quality of its service and have a positive impact on the treatment of many patients. In countries such as Brazil, where the population is spread across large portions of the territory and access to physicians, in particular specialists, is still an issue, we believe our proposal is an example of research-intensive work that opens new opportunities for the massive and responsible adoption of socially impactful initiatives.
Acknowledgment. This work is partially supported by the Brazilian agencies CNPq, CAPES and Fapemig, by the projects MASWEB, INCT-Cyber and Atmosphere, and by the Google Research Awards for Latin America program.