I Introduction
Recently there has been significant research interest in cyber-physical systems (CPS) that connect the cyber world and the physical world via the integration of sensing, control, communication, computation and learning. Popular CPS applications include networked monitoring of industry, disaster management, smart grids, intelligent transportation systems, networked surveillance, etc. One important component of future intelligent transportation systems is the autonomous vehicle. It is envisioned that future autonomous vehicles will be equipped with high-quality cameras, whose images will be classified by a DNN-based classifier for object detection and recognition, in order to facilitate an informed maneuvering decision by the controller or autopilot. Clearly, vehicular safety in such cases is highly sensitive to image classification; any mistake in object detection or classification can lead to accidents. In the context of surveillance or security systems, adversarial images can greatly endanger human and system security.
Over the last few years, several studies have suggested that DNN-based image classifiers are highly vulnerable to deception attacks [akhtar2018threat, eykholt2017robust]. In fact, with the emergence of the internet-of-things (IoT) providing an IP address to all gadgets including cameras, autonomous vehicles will become more vulnerable to such attacks [chernikova2019self]. Hackers can easily tamper with the pixel values (see Figure 1) or the image data sent by the camera to the classifier. In a similar way, networked surveillance cameras will also become vulnerable to such malicious attacks.
In order to address the above challenge, we propose a new class of algorithms for adversarial image detection. Our first perturbation-based algorithm, PERT, performs PCA (Principal Component Analysis) on a clean image data set, and detects an adversary by perturbing a test image in the spectral domain along certain carefully chosen coordinates obtained from PCA. Next, its adaptive version, APERT, chooses the number of perturbations adaptively in order to minimize the expected number of perturbations subject to constraints on the false alarm and missed detection probabilities. Numerical results demonstrate the efficacy of these two algorithms.
I-A Related work
The existing research on adversarial images can be divided into two categories: attack design and attack mitigation.
I-A1 Attack design
While there have been numerous attempts to tackle deception attacks in sensor-based remote estimation systems [chattopadhyay2019security, chattopadhyay2018secure, chattopadhyay2018attack], the problem of design and mitigation of adversarial attacks on images to cause misclassification is relatively new. The first paper on adversarial image generation was reported in [szegedy2013intriguing]. Since then, there has been significant research on attack design in this setting. All these attack schemes can be divided into two categories:
White box attack: Here the attacker knows the architecture, parameters, cost functions, etc., of the classifier; hence, such attacks are easier to design. Examples of such attacks are given in [goodfellow2014explaining], [szegedy2013intriguing], [carlini2017towards], [madry2017towards], [papernot2016limitations], [kurakin2016adversarial].

Black box attack: Here the adversary has access only to the output (e.g., logits or probabilities) of the classifier against a test input. Hence, the attacker has to probe the classifier with many test input images in order to estimate the sensitivity of the output with respect to the input. One black box attack is reported in [brendel2017decision].
On the other hand, depending on attack goals, the attack schemes can be divided into two categories:

Targeted attack: Such attacks seek to misclassify a particular class as another predefined class. For example, a fruit classifier is made to classify all apple images as bananas. Such attacks are reported in [carlini2018audio] and [brendel2017decision].

Reliability Attack: Such attacks only seek to increase the classification error. Such attacks have been reported in [yuan2019adversarial], [brendel2017decision], [goodfellow2014explaining], [madry2017towards], [szegedy2013intriguing].
Some popular adversarial attacks are summarized below:

L-BFGS Attack [szegedy2013intriguing]: This white box attack tries to find a perturbation r to an image x such that the perturbed image x + r minimizes a cost function J(x + r, l) of the classifier (where l is the target label), while r remains within some small set around the origin to ensure a small perturbation. A Lagrange multiplier c is used to relax the constraint on r, and c is found via line search.

Fast Gradient Sign Method (FGSM) [goodfellow2014explaining]: Here the perturbation is computed as eta = epsilon * sign(grad_x J(x, l)), where J is the classifier's cost function, x is the image, l is the label, and epsilon is the magnitude of the perturbation. This perturbation can be computed via backpropagation.
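As a concrete illustration, the sign-gradient step can be sketched for a toy linear classifier with logistic loss, where the input gradient is available in closed form (the model, weights and epsilon value below are our own illustrative choices, not from the paper):

```python
import numpy as np

def fgsm_perturbation(x, w, y, eps):
    """Toy FGSM sketch: for a linear 'classifier' with logistic loss
    J(x) = log(1 + exp(-y * w.x)), the input gradient is
    dJ/dx = -y * sigmoid(-y * w.x) * w, and the FGSM perturbation is
    eta = eps * sign(dJ/dx)."""
    margin = y * np.dot(w, x)
    grad = -y * (1.0 / (1.0 + np.exp(margin))) * w  # dJ/dx in closed form
    return eps * np.sign(grad)

# usage: a 4-pixel "image" attacked with eps = 0.1
x = np.array([0.2, -0.1, 0.4, 0.3])
w = np.array([1.0, -2.0, 0.5, 0.0])
eta = fgsm_perturbation(x, w, y=1, eps=0.1)
x_adv = np.clip(x + eta, 0.0, 1.0)  # keep pixel values in a valid range
```

In a DNN the gradient grad_x J is not available in closed form and is instead obtained via backpropagation, but the sign-and-scale step is identical.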
Basic Iterative Method (BIM) [kurakin2016adversarial]: This is an iterative variant of FGSM.

Carlini-Wagner (CW) Attack [carlini2017towards]: This is similar to [szegedy2013intriguing] except that: (i) [carlini2017towards] uses a cost function that is different from the classifier's cost function J, and (ii) the optimal Lagrange multiplier is found via binary search.

Projected Gradient Descent (PGD) [madry2017towards]: This involves applying FGSM iteratively and clipping the iterate images to ensure that they remain close to the original image.

Jacobian Saliency Map Attack (JSMA) [papernot2016limitations]: This is a greedy attack algorithm which selects the most important pixels by calculating a Jacobian-based saliency map, and modifies those pixels iteratively.

Boundary Attack [brendel2017decision]: This is a black box attack which starts from an adversarial point and then performs a random walk along the decision boundary between the adversarial and the nonadversarial regions, such that the iterate image stays in the adversarial region but the distance between the iterate image and the target image is progressively minimized. This is done via rejection sampling using a suitable proposal distribution, in order to find progressively smaller adversarial perturbations.
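The accept/reject random walk underlying the boundary attack can be sketched on a toy two-dimensional "classifier" (this is our simplified illustration with a Gaussian proposal and a trivial linear decision boundary, not the exact proposal distribution of [brendel2017decision]):

```python
import numpy as np

def boundary_attack_sketch(x_target, is_adversarial, n_steps, rng, step=0.1):
    """Toy boundary-attack sketch: start from an adversarial point and
    take random steps, accepting a step only if the candidate stays
    adversarial AND moves closer to the target image, so the adversarial
    perturbation shrinks over time."""
    x = x_target + np.array([3.0, 0.0])  # assumed adversarial starting point
    assert is_adversarial(x)
    for _ in range(n_steps):
        cand = x + step * rng.standard_normal(2)  # random proposal
        if is_adversarial(cand) and \
           np.linalg.norm(cand - x_target) < np.linalg.norm(x - x_target):
            x = cand  # accept: still adversarial, smaller perturbation
    return x

# toy decision boundary: points with first coordinate > 1 are "adversarial"
rng = np.random.default_rng(0)
x_target = np.array([0.0, 0.0])
x_adv = boundary_attack_sketch(x_target, lambda p: p[0] > 1.0, 500, rng)
```

The accepted iterates drift toward the decision boundary at distance 1 from the target while always remaining on the adversarial side, mirroring the rejection-sampling idea described above.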
IA2 Attack mitigation
There are two possible approaches for defence against adversarial attack:

Robustness based defense: These methods seek to classify adversarial images correctly, e.g., [xie2019feature], [papernot2016distillation].

Detection based defense: These methods seek to just distinguish between adversarial and clean images; e.g., [feinman2017detecting], [song2017pixeldefend].
Here we describe some popular attack mitigation schemes. The authors of [xie2019feature] proposed feature denoising to improve the robustness of CNNs against adversarial images. They found that certain architectures helped robustness even though they were not sufficient for accuracy improvement; however, when combined with adversarial training, these designs could be made more robust. The authors of [feinman2017detecting] put forth a Bayesian view of detecting adversarial samples, claiming that the uncertainty associated with adversarial examples is higher than that of clean ones. They used a Bayesian neural network to distinguish between adversarial and clean images on the basis of uncertainty estimation.
The authors of [song2017pixeldefend] trained a PixelCNN network [salimans2017pixelcnn++] to differentiate between clean and adversarial examples, and rejected adversarial samples using p-value based ranking from the PixelCNN. This scheme was able to detect several attacks such as DeepFool and BIM. The paper [wang2018detecting] observed that there is a significant difference between the percentage of label changes due to perturbation in adversarial samples as compared to clean ones. They designed a statistical adversary detection algorithm called nMutant, inspired by mutation testing from the software engineering community.
The authors of [papernot2016distillation] designed a method called network distillation to defend DNNs against adversarial examples. The original purpose of network distillation was to reduce the size of DNNs by transferring knowledge from a bigger network to a smaller one [ba2014deep], [hinton2015distilling]. The authors discovered that using a high-temperature softmax reduces the model's sensitivity towards small perturbations. This defense was tested on the MNIST and CIFAR10 data sets, and it was observed that network distillation substantially reduces the success rate of the JSMA attack [papernot2016limitations] on both data sets. However, many new attacks have been proposed since then which defeat defensive distillation (e.g., [carlini2016defensive]). The paper [goodfellow2014explaining] tried training an MNIST classifier with adversarial examples (the adversarial retraining approach). A comprehensive analysis of this method on the ImageNet data set found it to be effective against one-step attacks (e.g., FGSM), but ineffective against iterative attacks (e.g., BIM [kurakin2016adversarial]). After evaluating network distillation with adversarially trained networks on MNIST and ImageNet, [tramer2017ensemble] found it to be robust against white box attacks but not against black box ones.

I-B Our Contributions
In this paper, we make the following contributions:

We propose a novel detection algorithm PERT for adversarial attack detection. The algorithm performs PCA on a clean image data set to obtain a set of orthonormal bases. The projections of a test image along some least significant principal components are randomly perturbed to detect proximity to a decision boundary, which is used for detection. This combination of PCA and image perturbation in the spectral domain, motivated by the empirical findings in [hendrycks2016early], is new to the literature.^1

^1 The paper [liang2017deep] uses PCA but throws away the least significant components, thereby removing useful information along those components, possibly leading to a high false alarm rate. The paper [carlini2017adversarial] showed that their attack can break a simple PCA-based defence, while our algorithm performs well against the attack of [carlini2017adversarial], as seen later in the numerical results.

PERT has low computational complexity; PCA is performed only once offline.

We also propose an adaptive version of PERT called APERT. The APERT algorithm declares an image to be adversarial by checking whether a specific sequential probability ratio exceeds an upper or a lower threshold. The problem of minimizing the expected number of perturbations per test image, subject to constraints on the false alarm and missed detection probabilities, is relaxed via a pair of Lagrange multipliers. The relaxed problem is solved via simultaneous perturbation stochastic approximation (SPSA; see [spall1992multivariate]) to obtain the optimal threshold values, and the optimal Lagrange multipliers are learnt via two-timescale stochastic approximation [borkar2009stochastic] in order to meet the constraints. The use of stochastic approximation and SPSA to optimize the threshold values is new to the signal processing literature, to the best of our knowledge. Also, the APERT algorithm has a sound theoretical motivation, which is rare among papers on adversarial image detection.

PERT and APERT are agnostic to attacker and classifier models, which makes them attractive to many practical applications.

Numerical results demonstrate a high attack detection probability and a low false alarm probability for PERT and APERT compared with a competing algorithm, as well as reasonably low computational complexity for APERT.
I-C Organization
II Static perturbation-based algorithm
In this section, we propose an adversarial image detection algorithm, called PERT, based on random perturbation of an image in the spectral domain. This algorithm is motivated by two key observations:

The authors of [hendrycks2016early] found that the injected adversarial noise mainly resides in the least significant principal components. Intuitively, this makes sense, since injecting noise into the most significant principal components would lead to detection by the human eye. We applied PCA on the CIFAR10 training data set to learn its principal components, sorted by decreasing eigenvalues; the ones with higher eigenvalues are the most significant principal components. The CIFAR10 data set consists of 3072-dimensional images, so applying PCA on the entire data set yields 3072 principal components. The cumulative explained variance ratio as a function of the number of components (in decreasing order of the eigenvalues) is shown in Figure 2; this figure shows that most of the variance is concentrated along the first few principal components. Hence, the least significant components do not provide much additional information, and adversarial perturbation of these components should not change the image significantly.
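The cumulative explained variance curve of Figure 2 can be reproduced in miniature with scikit-learn (the toy data set below, with variance concentrated in two directions, is our stand-in for CIFAR10):

```python
import numpy as np
from sklearn.decomposition import PCA

# Toy data set whose variance is concentrated in a few directions,
# mimicking the behavior of natural images under PCA.
rng = np.random.default_rng(0)
n, d = 500, 32
scales = np.array([10.0, 5.0] + [0.1] * (d - 2))  # 2 dominant directions
X = rng.standard_normal((n, d)) * scales

pca = PCA(n_components=d).fit(X)
cumvar = np.cumsum(pca.explained_variance_ratio_)  # curve as in Figure 2
# The first few components carry almost all the variance, so perturbing
# only the trailing (least significant) components barely changes the data.
```

For CIFAR10 one would fit `PCA(n_components=3072)` on the flattened training images; the shape of the resulting cumulative-variance curve is qualitatively the same.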
Several attackers intend to push the image close to the decision boundary in order to fool a classifier [brendel2017decision]. Thus, it is possible to detect an adversarial image if we can check whether it is close to a decision boundary. Hence, we propose a new scheme for exploring the neighborhood of a given image in the spectral domain.
Hence, our algorithm performs PCA on a training data set and finds the principal components. When a new test image (potentially adversarial) arrives, the algorithm projects that image along these principal components, randomly perturbs the projections along a given number of least significant components, and then obtains another image from this perturbed spectrum. If the classifier yields the same label for this new image and the original test image, then it is concluded that the original image is most likely not near a decision boundary and hence not adversarial; else, an alarm is raised for an adversarial attack. In fact, multiple perturbed images can be generated by this process, and if the label of the original test image differs from that of at least one perturbed image, an alarm is raised. The intuition is that if an image is adversarial, it will lie close to a decision boundary, and perturbation should push it to another region, thus changing the label generated by the classifier.
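The detection loop above can be sketched as follows (this is our illustrative reconstruction, with a hypothetical linear two-class classifier standing in for the DNN; the function and parameter names are ours, not the authors'):

```python
import numpy as np
from sklearn.decomposition import PCA

def pert_detect(x, classifier, components, n_least, n_samples, noise_std, rng):
    """Sketch of the PERT idea: project x onto the PCA basis, randomly
    perturb the projections along the n_least least significant
    components, reconstruct, and flag x as adversarial if any perturbed
    copy changes the classifier's label."""
    label = classifier(x)
    coeffs = components @ x                   # spectral representation
    for _ in range(n_samples):
        noisy = coeffs.copy()
        noisy[-n_least:] += rng.normal(0.0, noise_std, n_least)
        x_pert = components.T @ noisy         # back to the pixel domain
        if classifier(x_pert) != label:
            return True                       # likely near a decision boundary
    return False

# toy usage with a hypothetical linear two-class "classifier"
rng = np.random.default_rng(1)
X_train = rng.standard_normal((200, 16))
components = PCA(n_components=16).fit(X_train).components_  # rows = PCs
w = rng.standard_normal(16)
classifier = lambda x: int(np.dot(w, x) > 0)
flag = pert_detect(rng.standard_normal(16), classifier, components,
                   n_least=8, n_samples=10, noise_std=0.5, rng=rng)
```

Note that `components_` from a full PCA is an orthonormal basis, so with zero noise the reconstruction is exact and no alarm can be raised; detections come only from perturbations along the least significant coordinates.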
Discussion: PERT has several advantages over most algorithms in the literature:

PERT is basically a preprocessing algorithm for the test image, and hence it is agnostic to the attacker and classifier models.

The online part of PERT involves computing simple dot products and perturbations, which have very low complexity. PCA can be performed once offline and reused thereafter.
However, one should remember that PERT perturbs the least significant components randomly, and hence there is no guarantee that a perturbation will be in the right direction to ensure a crossover of the decision boundary. This issue can be resolved by developing more sophisticated perturbation methods using direction search, specifically in case some knowledge of the decision boundaries is available to the detector. Another option is to create many perturbations of a test image, at the expense of more computational complexity. However, in the next section, we will formulate the sequential version of PERT, which will minimize the mean number of image perturbations per image, under a budget on the missed detection probability and false alarm probability.
III Adaptive perturbation-based algorithm
In Section II, the PERT algorithm used up to a constant number of perturbations of the test image in the spectral domain. However, the major drawback of PERT is that it might be wasteful in terms of computation. If an adversarial image is very close to the decision boundary, then a small number of perturbations might be sufficient for detection. On the other hand, if the adversarial image is far away from a decision boundary, then more perturbations will be required to cross the decision boundary with high probability. Also, the PERT algorithm only checks for a decision boundary crossover (hard decision), while many DNNs yield a belief probability vector for the class of a test image (soft output); this soft output can be used to improve detector performance and reduce its complexity.
In this section, we propose an adaptive version of PERT called APERT. The APERT algorithm sequentially perturbs the test image in the spectral domain. A stopping rule is used by the preprocessing unit to decide when to stop perturbing a test image and declare a decision (adversarial or non-adversarial); this stopping rule is a two-threshold rule motivated by the sequential probability ratio test (SPRT [poor2013introduction]), on top of the decision boundary crossover check. The threshold values are optimized using the theory of stochastic approximation [borkar2009stochastic] and SPSA [spall1992multivariate].
III-A Mathematical formulation
Let us denote by N the random number of perturbations used in any adaptive technique based on random perturbation, and let the probabilities of false alarm and missed detection of any randomly chosen test image under this technique be denoted by P_F and P_M respectively. We seek to solve the following constrained problem:

(CP): minimize E[N] subject to P_F <= alpha, P_M <= beta,

where alpha and beta are two constraint values. However, (CP) can be relaxed by using two Lagrange multipliers lambda_F >= 0 and lambda_M >= 0 to obtain the following unconstrained problem:

(UP): minimize E[N] + lambda_F * P_F + lambda_M * P_M.

Let the optimal decision rule for (UP) under (lambda_F, lambda_M) be denoted by mu*(lambda_F, lambda_M). It is well known that, if there exist lambda_F* > 0 and lambda_M* > 0 such that the constraints in (CP) are met with equality under mu*(lambda_F*, lambda_M*), then mu*(lambda_F*, lambda_M*) is an optimal solution for (CP) as well.
Finding out mu*(lambda_F, lambda_M) for a pair (lambda_F, lambda_M) is very challenging. Hence, we focus on the class of SPRT-type algorithms instead. Let us assume that the DNN-based classifier generates a probability value against an input image; this probability is the belief of the classifier that the image under consideration is adversarial. Now, suppose that we sequentially perturb an image in the spectral domain as in PERT, and feed these perturbed images one by one to the DNN, which acts as our classifier. Let the DNN return a category-wise probability distribution for the image in the form of a vector. We use these vectors to determine a quantity p_t which indicates the likelihood (not necessarily a probability) of the t-th perturbed image being adversarial. Motivated by SPRT, the proposed APERT algorithm checks whether a ratio Lambda_t, computed from p_1, ..., p_t, crosses an upper threshold A or a lower threshold B after the t-th perturbation; an adversarial image is declared if Lambda_t >= A, a non-adversarial image is declared if Lambda_t <= B, and the algorithm continues perturbing the image if B < Lambda_t < A. In case t exceeds a predetermined maximum number of perturbations without any threshold crossing, the image is declared to be non-adversarial.
Clearly, for given (lambda_F, lambda_M), the algorithm needs to compute the optimal threshold values A and B to minimize the cost in (UP). Also, lambda_F and lambda_M need to be computed to meet the constraints in (CP) with equality. APERT uses two-timescale stochastic approximation and SPSA for updating the Lagrange multipliers and the threshold values in the training phase, learns the optimal parameter values, and uses these parameter values in the test phase.
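The two-threshold stopping rule described above can be sketched as follows (the notation p_t, A, B and the log-ratio accumulation are our reconstruction of the SPRT-style statistic; the actual p_t values in APERT come from the DNN outputs):

```python
import numpy as np

def sprt_style_detect(p_seq, A, B):
    """Two-threshold sequential test sketch: accumulate a log-ratio
    statistic from per-perturbation scores p_t and stop as soon as it
    crosses the upper threshold A (declare adversarial) or falls below
    the lower threshold B (declare clean)."""
    log_ratio = 0.0
    for t, p in enumerate(p_seq, start=1):
        p = min(max(p, 1e-6), 1 - 1e-6)       # keep the ratio finite
        log_ratio += np.log(p / (1.0 - p))    # SPRT-style update
        if log_ratio >= np.log(A):
            return "adversarial", t
        if log_ratio <= np.log(B):
            return "clean", t
    return "clean", len(p_seq)                # perturbation budget exhausted

# scores near 1 push the statistic up quickly; few perturbations needed
decision, n_used = sprt_style_detect([0.9, 0.9, 0.9], A=5.0, B=0.2)
```

Images whose scores are consistently extreme terminate after very few perturbations, which is exactly why the adaptive rule needs far fewer perturbations on average than the fixed-budget PERT.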
III-B The SRT algorithm for image classification
Here we describe an SPRT-based algorithm, called the sequential ratio test or SRT, for classifying an image x. The algorithm takes x, the PCA eigenvectors, and a binary variable Q as input, and classifies x as adversarial or non-adversarial. This algorithm is used as one important component of the APERT algorithm described later. SRT blends ideas from PERT and the standard SPRT algorithm. However, as seen in the pseudocode of SRT, we use a quantity p_t in the threshold testing, where p_t cannot be interpreted as a probability. Instead, p_t is the normalized value of the norm of the difference between the outputs of the DNN against the input x and its t-th perturbation. The binary variable Q is used as a switch; if Q = 1 and if the belief probability vectors of x and a perturbed image lead to two different predicted categories, then SRT directly declares x to be adversarial. It has been observed numerically that this results in a better adversarial image detection probability, and hence any test image in the proposed APERT scheme later is classified via SRT with Q = 1.

III-C The APERT algorithm
III-C1 The training phases
The APERT algorithm, designed for (CP), consists of two training phases and a testing phase. The first training phase simply runs the PCA algorithm. The second training phase runs stochastic approximation iterations to find the Lagrange multipliers and threshold values so that the false alarm and missed detection probability constraints are satisfied with equality.
The second training phase of APERT requires three nonnegative sequences {a(t)}, {b(t)} and {d(t)} chosen such that: (i) sum_t a(t) = sum_t b(t) = infinity, (ii) sum_t a(t)^2 < infinity, sum_t b(t)^2 < infinity, (iii) lim_t d(t) = 0, (iv) sum_t (a(t)/d(t))^2 < infinity, (v) lim_t b(t)/a(t) = 0. The first two conditions are standard requirements for stochastic approximation. The third and fourth conditions are required for convergence of SPSA, and the fifth condition maintains the necessary timescale separation explained later.
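One concrete choice of step-size sequences satisfying conditions of this type is sketched below (the names a(t), b(t), d(t) for the fast, slow and SPSA-width sequences, and the specific exponents, are our illustration; the paper's actual choices are not shown):

```python
# a(t): faster timescale (threshold updates via SPSA)
# b(t): slower timescale (Lagrange multiplier updates)
# d(t): SPSA perturbation width, d(t) -> 0
#
# With these exponents: sum a = sum b = infinity, sum a^2 < infinity,
# sum b^2 < infinity, sum (a/d)^2 = sum t^(-1.2) < infinity, and
# b(t)/a(t) = t^(-0.3) -> 0, giving the required timescale separation.
def a(t): return t ** -0.7
def b(t): return t ** -1.0
def d(t): return t ** -0.1
```

Any exponents (p, q, r) with 0.5 < p < q <= 1 and small r > 0 such that 2(p - r) > 1 work analogously; the essential point is only that b decays strictly faster than a while both sum to infinity.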
The APERT algorithm also requires the percentage of adversarial images among all image samples used in training phase II. It also maintains two iterates, n_c(t) and n_a(t), representing the numbers of clean and adversarial images encountered up to the t-th training image; i.e., n_c(t) + n_a(t) = t.
Certain steps of APERT correspond to SPSA, which is basically a stochastic gradient descent scheme with a noisy estimate of the gradient, used for minimizing the objective of (UP) over the thresholds A and B for the current Lagrange multiplier iterates. SPSA allows us to compute a noisy gradient of the objective by randomly and simultaneously perturbing (A, B) in two opposite directions and obtaining the noisy gradient estimate from the difference in the objective function evaluated at these two perturbed values; this avoids coordinate-wise perturbation in gradient estimation. It has to be noted that the cost to be optimized by SPSA has to be obtained from SRT. The A and B iterates are projected onto non-overlapping compact intervals to ensure boundedness. The remaining steps are used to find lambda_F and lambda_M via stochastic approximation on a slower timescale. It has to be noted that, since b(t)/a(t) -> 0, we have a two-timescale stochastic approximation [borkar2009stochastic] where the Lagrange multipliers are updated on a slower timescale and the threshold values are updated via SPSA on a faster timescale. The faster timescale iterates view the slower timescale iterates as quasi-static, while the slower timescale iterates view the faster timescale iterates as almost equilibriated; as if the slower timescale iterates vary in an outer loop and the faster timescale iterates vary in an inner loop. It has to be noted that, though standard two-timescale stochastic approximation theory guarantees convergence under suitable conditions [borkar2009stochastic], here we cannot provide any convergence guarantee for the iterates due to the lack of established statistical properties of the images. It is also noted that lambda_F and lambda_M are updated at different time instants; this corresponds to asynchronous stochastic approximation [borkar2009stochastic]. The lambda_F and lambda_M iterates are projected onto [0, infinity) to ensure nonnegativity. Intuitively, if a false alarm is observed, the cost of a false alarm, lambda_F, is increased. Similarly, if a missed detection is observed, then the cost of a missed detection, lambda_M, is increased; otherwise it is decreased.
Ideally, the goal is to reach a pair (lambda_F*, lambda_M*) such that the constraints in (CP) are met with equality, though we do not have any formal convergence proof.
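The overall two-timescale update structure can be sketched on a toy problem as follows (the quadratic cost, step-size exponents, projection box and the simplified multiplier feedback are all our illustrative stand-ins; in APERT the cost of each evaluation comes from SRT runs and the multiplier feedback comes from observed false alarms and missed detections):

```python
import numpy as np

def spsa_two_timescale(cost, n_iter, rng, box=(0.1, 10.0)):
    """Two-timescale sketch: thresholds theta = (A, B) follow an SPSA
    gradient step on the fast timescale a(t), while the Lagrange
    multipliers follow the slow timescale b(t)."""
    theta = np.array([5.0, 0.5])   # (upper, lower) threshold iterates
    lam = np.zeros(2)              # (lambda_F, lambda_M) iterates
    for t in range(1, n_iter + 1):
        a_t, b_t, d_t = t ** -0.7, t ** -1.0, t ** -0.1
        delta = rng.choice([-1.0, 1.0], size=2)   # Rademacher directions
        # simultaneous two-sided perturbation -> noisy gradient estimate
        g = (cost(theta + d_t * delta, lam) -
             cost(theta - d_t * delta, lam)) / (2.0 * d_t * delta)
        theta = np.clip(theta - a_t * g, box[0], box[1])  # fast timescale
        # slow timescale: crude stand-in for the constraint feedback,
        # projected onto [0, infinity) for nonnegativity
        lam = np.maximum(lam + b_t * (cost(theta, lam) - 1.0), 0.0)
    return theta, lam

# toy quadratic cost with minimum at theta = (3, 1)
cost = lambda th, lam: float((th[0] - 3.0) ** 2 + (th[1] - 1.0) ** 2)
theta, lam = spsa_two_timescale(cost, 2000, np.random.default_rng(0))
```

The key design point is visible in the step sizes: a single pair of opposite-direction cost evaluations yields a gradient estimate for both thresholds at once, and the multipliers drift slowly enough that the threshold iterates effectively equilibrate between multiplier updates.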
III-C2 Testing phase
The testing phase just uses SRT with Q = 1 for any test image. Since Q = 1, a test image bypasses the threshold testing and is declared adversarial in case the random perturbation results in a predicted category change of the test image. It has been numerically observed (see Section IV) that this results in a small increase in the false alarm rate but a large increase in the adversarial image detection rate compared to Q = 0. However, one has the liberty to avoid this and use only the threshold test in SRT by setting Q = 0. Alternatively, one can set a slightly smaller value of the false alarm constraint in APERT with Q = 1 in order to compensate for the increase in false alarm.
IV Experiments
IV-A Performance of PERT
We evaluated our proposed algorithm on the CIFAR10 data set and the classifier of [madry2017towards], implemented in a challenge to explore adversarial robustness of neural networks (see [MadryLabCifar10]).^2 We used the Foolbox library [rauber2017foolbox] for generating adversarial images. PCA was performed using the Scikit-learn [scikitlearn] library in Python; this library allows us to trade off computational complexity and accuracy in PCA. Each image in CIFAR10 has 32 x 32 pixels, where each pixel has three channels: red, green, blue. Hence, PCA provides 3072 orthonormal basis vectors. Out of all CIFAR10 images, a subset was used for PCA-based training, and the rest of the images were used for evaluating the performance of the algorithm.

^2 Codes for our numerical experiments are available in [PCA_detection] and [SPRT_detection].
Table I shows the variation of the detection probability (percentage of detected adversarial images) for adversarial images generated using various attacks, for a fixed number of perturbed components and various values of the maximum possible number of samples N (number of perturbations for a given image). Due to the huge computational requirement of generating adversarial images via black box attacks, we have considered only four white box attacks. It is evident that the attack detection probability (percentage) increases with N; this is intuitive, since a larger N results in a higher probability of decision boundary crossover if an adversarial image is perturbed. The second column of Table I denotes the percentage of clean images that were declared adversarial by our algorithm, i.e., it contains the false alarm probabilities, which also increase with N. However, we observe that our preprocessing algorithm achieves a very low false alarm probability and a high attack detection probability under these four popular white box attacks. This conclusion is further reinforced in Table II, which shows the variation in detection performance with a varying number of perturbed coefficients, for N = 10 and N = 20. It is to be noted that our detection algorithm outperforms the detection algorithm of [wang2018detecting] for the CW(L_2) and FGSM attacks while having low computation. The last column of Table II suggests that there is an optimal number of perturbed coefficients, since perturbation along more principal components may increase the decision boundary crossover probability, but at the same time can modify the information along some of the most significant components as well.
Table I: Percentage detection (%).

No. of Samples (N) | Clean* | FGSM  | LBFGS | PGD   | CW(L_2)
5                  | 1.2    | 50.02 | 89.16 | 55.03 | 96.47
10                 | 1.5    | 63.53 | 92.50 | 65.08 | 98.23
15                 | 1.7    | 69.41 | 93.33 | 67.45 | 99.41
20                 | 1.9    | 73.53 | 95.03 | 71.01 | 99.41
25                 | 1.9    | 75.29 | 95.03 | 75.14 | 100.00

*Percentage of clean images that are detected as adversarial.
Table II: Percentage detection (%) for a varying number of perturbed coefficients.

No. of Coefficients | Clean* | FGSM  | LBFGS | PGD   | CW(L_2)
No. of Samples (N) = 10:
500                 | 1.20   | 58.23 | 90.83 | 57.40 | 95.90
1000                | 1.50   | 69.41 | 93.33 | 60.95 | 95.45
1500                | 2.10   | 64.11 | 91.67 | 61.53 | 95.00
No. of Samples (N) = 20:
500                 | 1.20   | 68.23 | 93.33 | 68.05 | 95.90
1000                | 1.90   | 74.11 | 94.16 | 70.41 | 95.90
1500                | 2.50   | 71.18 | 95.00 | 71.00 | 95.00

*Percentage of clean images that are detected as adversarial.
IV-B Performance of APERT
For APERT, we initialize the thresholds and Lagrange multipliers, and choose step sizes satisfying the conditions of Section III-C1. The Foolbox library was used to craft adversarial examples, and the classification neural network is taken from [MadryLabCifar10]. A vector norm of the difference between the DNN outputs is used to obtain the p_t values; this norm was observed to outperform the alternative norm considered. In the training process, a fixed fraction of the training images were clean and the remaining images were adversarial.
Though there is no theoretical convergence guarantee for APERT, we have numerically observed convergence of the threshold and Lagrange multiplier iterates.
IV-B1 Computational complexity of PERT and APERT
We note that a major source of computational complexity in PERT and APERT is perturbing an image and passing it through a classifier. In Table III and Table IV, we numerically compare the mean number of perturbations required for PERT and APERT. The classification neural network was taken from [MadryLabCifar10].
Table III and Table IV show that the APERT algorithm requires much fewer perturbations compared to PERT for almost the same detection performance, for various attack algorithms and various test images that result in false alarm, adversarial image detection, missed detection and (correctly) clean image detection. It is also noted that, for the images resulting in missed detection and clean image detection, PERT has to exhaust all perturbation options before stopping. As a result, the mean number of perturbations in APERT becomes significantly smaller than in PERT; see Table V. The key reason behind the smaller number of perturbations in APERT is the fact that APERT uses a double-threshold stopping rule motivated by the popular SPRT algorithm in detection theory. It is also observed that APERT with Q = 1 in the testing phase has slightly lower computational complexity than APERT with Q = 0, since APERT with Q = 1 has the additional flexibility of stopping the perturbation if there is a change in predicted category.
Table III: Mean number of samples generated.

Attack  | PERT  | APERT (Q = 1) | APERT (Q = 0)
CW(L_2) | 13.3  | 1.85          | 1.92
LBFGS   | 13.5  | 2.19          | 2.19
FGSM    | 15.5  | 2.14          | 2.56
PGD     | 14.48 | 2.20          | 2.57
We also implemented a Gaussian process regression based detector (GPRBD) from [lee2019adversarial] (not sequential in nature) which uses the neural network classifier of [MadryLabCifar10], tested it against our adversarial examples, and compared its runtime against those of PERT and APERT equipped with the neural network classifier of [MadryLabCifar10]. These experiments were run under the same Colab runtime environment, in a single session. The runtime specifications are: CPU model name: Intel(R) Xeon(R) CPU @ 2.30GHz, Socket(s): 1, Core(s) per socket: 1, Thread(s) per core: 2, L3 cache: 46080K, CPU MHz: 2300.000, RAM available: 12.4 GB, disk space available: 71 GB. Table VI shows that APERT has a significantly smaller runtime than PERT, as expected, and a slightly larger runtime than GPRBD. Also, APERT with Q = 1 has a smaller runtime than APERT with Q = 0.
Table VI: Average time taken per image (seconds).

Attack  | GPRBD  | APERT (Q = 1) | APERT (Q = 0) | PERT
CW(L_2) | 0.2829 | 0.6074        | 0.6398        | 4.1257
LBFGS   | 0.2560 | 0.6982        | 0.7059        | 4.7895
FGSM    | 0.2728 | 0.6372        | 0.7801        | 4.6421
PGD     | 0.2694 | 0.6475        | 0.7789        | 4.4216
IV-B2 Performance of PERT and APERT
In Figure 3, we compare the ROC (receiver operating characteristic) plots of the PERT, APERT and GPRBD algorithms, all implemented with the same neural network classifier of [MadryLabCifar10]. The Gaussian model used for GPRBD was implemented using [gpy2014] with the kernel parameters set as follows: input dimensions = 10, variance = 1 and length scale = 0.01, as in [lee2019adversarial]. The Gaussian model parameter optimization was done using L-BFGS with maximum iterations = 1000. It is evident from Figure 3 that, for the same false alarm probability, APERT has a higher or almost equal attack detection rate compared to PERT. Also, APERT and PERT significantly outperform GPRBD. Hence, APERT yields a good compromise between ROC performance and computational complexity. It is also observed that APERT with Q = 1 always has a better ROC curve than APERT with Q = 0 in the testing phase.
Table VII and Table VIII show that the false alarm probability and attack detection probability of APERT increase with the number of perturbed coefficients for a fixed N, for both Q = 1 and Q = 0. As the number of perturbed coefficients increases, more least significant components are perturbed in the spectral domain, resulting in a higher probability of decision boundary crossover.
V Conclusion
In this paper, we have proposed two novel preprocessing schemes for the detection of adversarial images, via a combination of PCA-based spectral decomposition, random perturbation, SPSA and two-timescale stochastic approximation. The proposed schemes have reasonably low computational complexity and are agnostic to attacker and classifier models. Numerical results on detection and false alarm probabilities demonstrate the efficacy of the proposed algorithms. We will extend this work to the detection of black box attacks in our future research.