In this work, we investigate whether adversarial perturbations—a subordinate signal in the image—influence individuals in the same way as they influence ANNs. We are not exploring here whether predominant and subordinate signals may have separation in human cognition, but rather that both signals ma...
Conversely, late-stage perturbations provide complete observational data but limit the potential impact (action space). This nuanced challenge forms the core of our investigation into real-time adversarial attack detection in VANETs [11]. Adversarial attacks in VANETs can be specifically crafted to ...
The results show that whether the input sample is a signal sequence or a converted image, the DNN is vulnerable to the threat of adversarial examples. Among the selected methods, across different perturbation magnitudes and signal-to-noise ratios (SNRs), the momentum iteration method has...
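As a rough illustration of the momentum iterative method (MI-FGSM): a velocity of L1-normalized gradients is accumulated and the input is stepped by its sign inside an epsilon ball. This is a minimal sketch, assuming a caller-supplied `grad_fn` (hypothetical) that returns the loss gradient at the current input:

```python
import numpy as np

def mi_fgsm(x, grad_fn, eps=0.1, steps=10, mu=1.0):
    """Momentum iterative FGSM sketch: accumulate an L1-normalized
    gradient velocity, step by its sign, clip to the eps-ball around x."""
    alpha = eps / steps
    g = np.zeros_like(x)
    x_adv = x.copy()
    for _ in range(steps):
        grad = grad_fn(x_adv)
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)  # momentum update
        x_adv = x_adv + alpha * np.sign(g)
        x_adv = np.clip(x_adv, x - eps, x + eps)          # stay in eps-ball
    return x_adv

# Toy usage with the gradient of 0.5*||z||^2 (i.e. z itself):
adv = mi_fgsm(np.ones(3), lambda z: z, eps=0.1, steps=10)
```

The momentum term `mu * g` is what distinguishes this from plain iterative FGSM: it damps oscillations across steps, which is what gives the method its transferability in the cited comparisons.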
To keep the original malicious semantics uncontaminated, DeepMal generates adversarial perturbations at specific positions without disrupting the surrounding context instructions. In this stage, we mark these specific positions with a series of zeros between separate instructions, because zeros will not affect ...
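A hypothetical sketch of such a zero-gap scheme, writing perturbation bytes only into zero-filled runs between instructions so the real opcode bytes stay untouched. The byte layout, `gap` width, and function name are our assumptions, not DeepMal's actual implementation:

```python
import numpy as np

def perturb_zero_gaps(code, payload, gap=4):
    """Write perturbation bytes only into zero runs of length `gap`
    that separate instructions; never overwrite non-zero (real) bytes."""
    out = code.copy()
    i, p = 0, 0
    while i + gap <= len(out) and p < len(payload):
        if np.all(out[i:i + gap] == 0):           # a marked zero gap
            take = min(gap, len(payload) - p)
            out[i:i + take] = payload[p:p + take]
            p += take
            i += gap
        else:
            i += 1                                # skip real instruction bytes
    return out
```

Because only the zero padding is rewritten, a disassembler would still see the original instruction stream, which is the property the passage above is after.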
We first apply a high-frequency component filter to extract the HFC of benign and adversarial samples, and maximize the Euclidean distance between them so as to influence the model's decision. We then design a generative attack framework to construct adversarial perturbations or patches with ...
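One simple way such an HFC filter could be realized is an FFT-based high-pass: zero the low-frequency coefficients and transform back. The `cutoff` parameter and helper names below are our assumptions, a sketch rather than the paper's implementation:

```python
import numpy as np

def high_freq_component(img, cutoff=0.25):
    """Zero out the central (low-frequency) block of the shifted 2-D FFT
    and return the spatial-domain high-frequency component."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    cy, cx = h // 2, w // 2
    ry, rx = int(h * cutoff / 2), int(w * cutoff / 2)
    f[cy - ry:cy + ry, cx - rx:cx + rx] = 0      # remove low frequencies
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))

def hfc_distance(a, b):
    """Euclidean distance between the HFCs of two images."""
    return np.linalg.norm(high_freq_component(a) - high_freq_component(b))
```

An attack in this spirit would then maximize `hfc_distance(benign, adversarial)` as part of its objective, matching the "pull away their Euclidean distance" description above.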
Automatic speech recognition (ASR) systems are vulnerable to audio adversarial examples that attempt to deceive ASR systems by adding perturbations to benign speech signals. Although an adversarial example and the original benign wave ar... — N. Park, S. Ji, J. Kim, 2021
Despite its great success, deep learning severely suffers from a lack of robustness; i.e., deep neural networks are very vulnerable to adversarial attacks, even the
However, training on local clients also increases the susceptibility of the global model in federated learning to adversarial attacks [4,5,6]. For instance, attackers can successfully trick the global model with a high success rate by adding minor adversarial perturbations to evaluation samples during...
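The classic one-step recipe for such minor evaluation-time perturbations is the fast gradient sign method (FGSM); a minimal numpy sketch (the toy gradient in the usage line is hypothetical, standing in for the loss gradient returned by the attacked model):

```python
import numpy as np

def fgsm(x, grad, eps=0.03):
    """One-step FGSM: move each input coordinate by eps in the direction
    of the sign of the loss gradient, then clip back to valid pixel range."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

# Toy usage: a mid-gray input and a hypothetical alternating gradient.
x_adv = fgsm(np.full(4, 0.5), np.array([1.0, -1.0, 1.0, -1.0]), eps=0.03)
```

With `eps` small (here 0.03 in a [0, 1] pixel range) the perturbation is visually minor, which is exactly why such attacks transfer so easily to a federated global model.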
For example, Tabacof et al. [37] studied the impact of Gaussian noise with different intensities and distributions on adversarial examples. Raff et al. [38] randomly combined several weak transformation methods, including color precision reduction, JPEG noise, swirl, and FFT perturbations, to ...
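Two of the weak transforms in Raff et al.'s random ensemble can be sketched in plain numpy (the swirl and JPEG steps are omitted here; all function names and parameters are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def color_quantize(img, bits=4):
    """Color precision reduction: snap values to 2**bits levels."""
    levels = 2 ** bits - 1
    return np.round(img * levels) / levels

def fft_perturb(img, sigma=0.01):
    """FFT perturbation: add small Gaussian noise in the frequency domain."""
    f = np.fft.fft2(img)
    f += rng.normal(0, sigma * np.abs(f).mean(), f.shape)
    return np.real(np.fft.ifft2(f))

TRANSFORMS = [color_quantize, fft_perturb]

def random_defense(img, k=2):
    """Apply a random sequence of k weak transforms, in the spirit of
    Raff et al.'s randomized-ensemble preprocessing defense."""
    for t in rng.choice(TRANSFORMS, size=k, replace=True):
        img = t(img)
    return img
```

The defensive value comes from the randomness: an attacker optimizing against one fixed transform chain faces a different chain at inference time.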
Based on DL and ML, adversarial perturbations can impact IDS. However, anomaly detection approaches are hampered by skewed and missing data points, which makes IDS training challenging. The study suggests that, by addressing imbalanced data and the lack of specific class instances, conditional ...