The past few years have seen an incredible rise in the use of smart systems based on artificial neural networks (ANNs), owing to their remarkable classification capability and decision-making comparable to that of humans. Yet, as shown in Figure 1, the addition of even a small amount of noise to the input can cause these networks to produce incorrect results.13 This is an alarming limitation of ANNs, particularly for those deployed in safety-critical applications such as autonomous vehicles, aviation, and healthcare. For instance, consider a self-driving car using an ANN to perceive traffic signs, as shown in Figure 2; correct classification by the ANN in noisy real-world environments is crucial for the safety of humans and objects in the vicinity of the car.
Figure 1. Magnitudes of image input and the noise applied to it. The addition of noise causes the input previously classified correctly to be misclassified by the trained network.
Figure 2. The addition of small, imperceptible noise to traffic signs makes the trained network provide incorrect output classification.5
The standard design cycle of an ANN involves training the network on clean data, tuning the network’s hyperparameters based on clean data, and ensuring acceptable accuracy of the trained network using clean data. However, noise is a ubiquitous component of the real world.
This means a trained ANN deployed in a real-world application is fed noisy data, to which it may respond in unexpected and potentially unacceptable ways. This calls for a better understanding of the impacts of noise (that is, the possibility of such unlikely responses) on a trained ANN before it is deployed in practical applications.12
Toward this end, this article provides an overview of the state of the art, and its limitations, for analyzing the impacts of noise on ANNs. It also presents an effective framework for identifying noise-related vulnerabilities of trained ANNs and discusses the impacts of noise on their performance.
Related Work
The existing literature, comprising both empirical and formal efforts, has been directed at observing the response of trained ANNs to different noise bounds.3,5 However, given their completeness, formal methods3,4,11,14 are of particular interest for providing reliable guarantees about the behavior of ANNs under noise.9,13,14 Among these formal methods, linear programming and satisfiability (SAT) solving prevail for ANN analysis. Linear-programming-based works model the ANN as a set of linear constraints and employ optimization for property verification.7,9,10 SAT-solving-based works, on the other hand, aim to find a satisfying assignment that witnesses erroneous behavior of the ANN, using network models often expressed in conjunctive normal form (CNF).3,14
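For intuition, linear-programming-based verifiers typically handle the piecewise-linear ReLU activation via the standard big-M mixed-integer encoding. The following is a minimal sketch using the PuLP library; the bound M and all variable names are ours, illustrative rather than taken from any of the cited tools:

```python
# Sketch: big-M encoding of a single ReLU, y = max(0, x), as used by
# LP/MILP-based ANN verifiers. M is an assumed upper bound on the
# magnitude of the pre-activation (network-specific in practice).
import pulp

M = 100.0

prob = pulp.LpProblem("relu_encoding", pulp.LpMaximize)
x = pulp.LpVariable("x", lowBound=-1.0, upBound=1.0)  # pre-activation
y = pulp.LpVariable("y", lowBound=0.0)                # post-activation
d = pulp.LpVariable("d", cat="Binary")                # ReLU phase indicator

# d = 1 forces y = x (active phase); d = 0 forces y = 0 (inactive phase).
prob += y >= x
prob += y <= x + M * (1 - d)
prob += y <= M * d

prob += y  # objective: for example, maximize the neuron's output
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.value(y))  # 1.0, attained at x = 1 with d = 1
```

Stacking such constraints layer by layer yields a model of the whole network, over which an optimizer can then search for property violations.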
However, a major limitation of these works is that they consider robustness to be the only impact noise has on trained ANNs. Additionally, even though the continued use of linear programming and SAT solving has significantly advanced the state of the art in the formal analysis of trained ANNs, formal methods like model checking remain scarcely explored for ANN analysis. This work addresses these limitations by leveraging model checking to verify the various impacts of noise on ANNs, stretching beyond robustness.
FANNet: Formal Analysis Framework for Neural Networks
As hinted earlier, the impacts of noise stretch beyond the correct/incorrect response to noisy inputs. Our work aims to explore the wide variety of ANNs' responses to noise, including checking robustness, identifying the noise tolerance of the network, discovering an underlying bias in the network toward certain output classes, and studying the sensitivity of individual input nodes. To achieve this, we propose the use of the formal analysis framework FANNet,11 summarized in Figure 3, which takes a systematic approach to analyzing the various responses of ANNs to noise. It leverages the automated operation of a model checker and, depending on the type of model checker used, can provide qualitative and/or quantitative results for the analysis.
Figure 3. Overview of the formal analysis framework FANNet. The framework writes a formal model for the trained ANN, verifies the ANN’s properties, and identifies the noise tolerance of the ANN.
To achieve its targets, the framework uses the architectural details and the trained parameter values to define a formal ANN model. The model is validated (p0) using seed inputs from the (labeled) testing dataset; that is, the results from the formal ANN model and the actual trained network must be identical. Predefined noise bounds are provided to the framework, from which the model checker non-deterministically selects incident noise vectors for the formal analysis. Properties defined in (probabilistic) temporal logic are then verified for the validated ANN model on seed inputs under the incidence of bounded noise. These properties include the following (an illustrative sketch of the checks follows the list):
- Robustness (p1)
Given an ANN f: X → L(X), the addition of small noise n to a correctly classified input x ∈ X with classification label L(x) should not change the output classification of x.
- Training bias (p2)
Given an ANN f: X → L(X), f is said to exhibit training bias if, under the addition of small noise n, a correctly classified input x_b ∈ X_B from output class B is more likely to be misclassified by f than a correctly classified input x_a ∈ X_A from output class A.
- Sensitivity of individual input nodes (p3)
Given an ANN f: X → L(X) and an arbitrary input node x_i of the network, the node is said to be insensitive if the addition of small noise η to that node alone, for a correctly classified input x ∈ X with classification label L(x), that is, f(x\x_i, x_i + η), does not make the ANN misclassify the input x.
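To make these definitions concrete, the following Python sketch approximates the three checks by sampling noise vectors within the bound. It is only an illustration: FANNet's model checker explores all noise vectors within the bound exhaustively, whereas sampling can miss violations. The classifier f (mapping an input vector to a label), the seed inputs, and the noise bound are assumed to be given.

```python
# Sampling-based stand-ins for properties p1-p3; a pass here is NOT a
# formal guarantee, unlike the exhaustive model-checking runs in FANNet.
import numpy as np

rng = np.random.default_rng(0)

def robust(f, x, label, bound, trials=1000):
    """p1: no sampled noise within the bound flips the classification."""
    for _ in range(trials):
        noise = rng.uniform(-bound, bound, size=x.shape)
        if f(x + noise) != label:
            return False
    return True

def misclass_rate(f, xs, labels, bound, trials=200):
    """Fraction of noisy samples misclassified (used for p2)."""
    errs = sum(f(x + rng.uniform(-bound, bound, size=x.shape)) != y
               for x, y in zip(xs, labels) for _ in range(trials))
    return errs / (len(xs) * trials)

def biased(f, xs_a, ys_a, xs_b, ys_b, bound):
    """p2: inputs of class B degrade under noise more than class A."""
    return (misclass_rate(f, xs_b, ys_b, bound) >
            misclass_rate(f, xs_a, ys_a, bound))

def insensitive(f, x, label, i, bound, trials=1000):
    """p3: noise applied only to input node i never flips the label."""
    for _ in range(trials):
        z = np.array(x, dtype=float)
        z[i] += rng.uniform(-bound, bound)
        if f(z) != label:
            return False
    return True
```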
Additionally, an iterative noise-reduction loop is deployed in FANNet, which identifies the maximum noise n_max in the presence of which the trained ANN does not exhibit any misclassification. This provides the noise tolerance bounds of the network.
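A minimal sketch of such a tolerance search is shown below. Bisection over the noise bound is our illustrative stand-in for FANNet's iterative noise-reduction loop, and the verify oracle abstracts the model checker's exhaustive check:

```python
def noise_tolerance(verify, upper, tol=1e-3):
    """Approximate n_max by bisection on the noise bound.

    `verify(bound)` must return True iff no seed input is misclassified
    under any noise within `bound`. In FANNet this role is played by the
    model checker; here it is a placeholder oracle.
    """
    if verify(upper):
        return upper            # the full noise budget is tolerated
    lo, hi = 0.0, upper         # invariant: verify(lo) holds, verify(hi) fails
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if verify(mid) else (lo, mid)
    return lo

# Usage (with the sampling-based `robust` check above standing in for
# exhaustive verification over all seed inputs):
# n_max = noise_tolerance(
#     lambda b: all(robust(f, x, y, b) for x, y in zip(seeds, labels)),
#     upper=0.5)
```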
Results and Discussion
To demonstrate the impact of noise on a real-world dataset, we trained a single-hidden-layer, ReLU-based, fully connected binary classifier on the Leukemia dataset2 to training and testing accuracies of 100% and 94.12%, respectively. The following results were obtained using FANNet with the Storm model checker,1 on AMD Ryzen Threadripper 2990WX processors running the Ubuntu 18.04 LTS operating system. Note that FANNet is independent of the choice of model checker. A qualitative model checker like nuXmv, as used in our prior work,11 essentially provides a binary (that is, SAT or UNSAT) result. A quantitative tool like the Storm model checker, on the other hand, provides more precise results (that is, the exact probability that a property holds for the given network).
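As an illustration of the quantitative setting, the sketch below estimates the probability of correct classification under a given noise bound by sampling. Storm computes such probabilities exactly on the formal model, so the sampled estimate only approximates the reported results; f, seeds, and labels are assumed to come from the trained classifier.

```python
# Sketch: Monte Carlo estimate of P(correct classification) under
# bounded noise. This approximates the exact probabilities a
# quantitative model checker such as Storm returns.
import numpy as np

def p_correct(f, seeds, labels, bound, trials=1000):
    rng = np.random.default_rng(1)
    hits = sum(f(x + rng.uniform(-bound, bound, size=x.shape)) == y
               for x, y in zip(seeds, labels) for _ in range(trials))
    return hits / (len(seeds) * trials)

# Sweeping the bound traces a curve of the kind shown in Figure 4:
# for bound in (0.0, 0.05, 0.1, 0.2, 0.4):
#     print(bound, p_correct(f, seeds, labels, bound))
```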
Figure 4 shows the probability of correct classification by the ANN under the influence of noise. As expected, the probability of correct classification decreases as the noise added to the seed inputs increases, indicating the decreasing robustness of the network. The change in probability becomes apparent, however, only once the incident noise exceeds the noise tolerance of the network. Figure 4 indicates this decrease can be attributed to the low classification accuracy of Class A under noise; even considerably large noise does not trigger incorrect classification for Class B, depicting a training bias in the network. This phenomenon can also be observed visually in Figure 5, where the addition of noise (that is, the z-axis) causes misclassification of inputs from Class A (blue points on the graph) but not from Class B (red points on the graph). In addition, Figure 6 demonstrates the varying sensitivities of individual input nodes, since increasing noise reduces the correct-classification probability for certain (sensitive) input nodes more than for others.
Figure 4. The effects of increasing noise on the probability of correct classification of inputs by the trained ANN.
Figure 5. Application of noise (z-axis) to inputs from Class A and B (x– and y-axis, respectively) results in inputs from Class A being misclassified significantly more compared to Class B, indicating a bias in the trained network.
Figure 6. The effects of increasing noise on the probability of correct classification of individual input nodes: different nodes may have different sensitivities to incident noise.
As observed in the aforementioned results, noise impacts trained ANNs in numerous ways. The robustness of a network generally decreases as its inputs become noisier. Noise tolerance, on the other hand, is an essential (constant) attribute of a trained ANN that can aid system design by providing the acceptable noise level that ensures the robustness of the ANN. Training bias is a stealthy vulnerability of the ANN, which may grow under noisy input and may lead the ANN to provide incorrect responses in real-world applications. In addition, the analysis of individual input nodes may provide useful insights into the working of a trained ANN and pave the way for new research on the interpretability of trained ANNs.
The findings from the framework can also be used to improve the training of ANNs. Knowledge of an existing training bias in the dataset could be used for dataset resampling and bias-aware training.6 Likewise, the noise tolerance and the sensitivity of input nodes may be leveraged by multi-objective training algorithms to provide robust ANNs, in addition to providing highly accurate ANNs with optimal network architecture.8
Conclusion
Despite being an essential component of real-world environments, noise is often absent from the design cycle of ANNs trained for practical systems. A vast literature covers the impact of such noise on the robustness of ANNs; however, the impacts of noise beyond robustness are rarely studied. We propose the use of the model-checking-based framework FANNet to fill this gap in the state of the art. Such targeted analysis of the impacts of noise on trained ANNs will likely improve the reliability and safety of future smart systems.
Acknowledgment. This work was partially supported by Doctoral College Resilient Embedded Systems, which is run jointly by TU Wien’s Faculty of Informatics and FH-Technikum Wien.