Furthermore, we design an adaptive defense strategy to mitigate gradient inversion attacks in spatiotemporal federated learning. By dynamically adjusting perturbation levels, we can offer protection tailored to different training rounds, thereby achieving a better trade-off between privacy and...
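The snippet does not specify the perturbation mechanism; a minimal sketch of the general idea, assuming the perturbation is Gaussian noise whose scale decays with the training round (the function name and exponential schedule are illustrative, not the paper's actual design):

```python
import math
import torch

def perturb_gradients(grads, round_idx, sigma0=1e-2, decay=0.05):
    """Round-adaptive Gaussian perturbation (illustrative sketch).

    Early rounds, where gradients tend to leak more about raw inputs,
    receive stronger noise; later rounds receive weaker noise to
    preserve model utility.
    """
    sigma = sigma0 * math.exp(-decay * round_idx)
    return [g + sigma * torch.randn_like(g) for g in grads]
```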
However, federated learning has recently been shown to be susceptible to gradient inversion attacks, where an adversary can compromise privacy by recreating the data that led to a particular client's update. In this paper, we propose a new algorithm, SecAdam, to mitigate such emerging gradient ...
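For context, the attack being defended against can be sketched as an optimization problem: the adversary fits dummy data so that its gradients match the client's observed update. A minimal DLG-style sketch in PyTorch (the model, shapes, and step count are placeholders):

```python
import torch
import torch.nn.functional as F

def invert_gradients(model, observed_grads, input_shape, num_classes, steps=300):
    """DLG-style reconstruction: optimize dummy data and labels so
    their gradients match the gradients a client actually shared."""
    dummy_x = torch.randn(1, *input_shape, requires_grad=True)
    dummy_y = torch.randn(1, num_classes, requires_grad=True)
    opt = torch.optim.LBFGS([dummy_x, dummy_y])

    def closure():
        opt.zero_grad()
        pred = model(dummy_x)
        loss = F.cross_entropy(pred, F.softmax(dummy_y, dim=-1))
        grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        # Distance between dummy gradients and the observed client update.
        grad_diff = sum(((g - og) ** 2).sum() for g, og in zip(grads, observed_grads))
        grad_diff.backward()
        return grad_diff

    for _ in range(steps):
        opt.step(closure)
    return dummy_x.detach(), dummy_y.detach()
```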
The implementation of "A New Federated Learning Framework Against Gradient Inversion Attacks" [AAAI 2025]. Pengxin Guo*, Shuang Zeng*, Wenhao Chen, Xiaodan Zhang, Weihong Ren, Yuyin Zhou, and Liangqiong Qu. Figure 1. Left. Existing methods mainly explore defense mechanisms on the shared gradient...
Federated learning (FL) facilitates collaborative model training among multiple clients without raw data exposure. However, recent studies have shown that clients' private training data can be reconstructed from shared gradients in FL, a vulnerability exploited by gradient inversion attacks (GIAs). While ...
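To make the attack surface concrete, here is a minimal FedSGD-style sketch of the exchange GIAs target; the function names and single-batch local step are illustrative, not any specific system:

```python
import torch
import torch.nn.functional as F

def client_update(model, data_loader):
    """One local step: the client computes gradients on private data
    and shares ONLY those gradients; raw data never leaves the device.
    GIAs show the shared gradients can still reveal the data."""
    x, y = next(iter(data_loader))
    loss = F.cross_entropy(model(x), y)
    grads = torch.autograd.grad(loss, model.parameters())
    return [g.detach() for g in grads]

def server_aggregate(model, client_grads, lr=0.01):
    """Server averages the clients' gradients and applies them."""
    with torch.no_grad():
        for p, *gs in zip(model.parameters(), *client_grads):
            p -= lr * torch.stack(gs).mean(dim=0)
```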
While it addresses communication overhead, compressed SGD introduces trustworthiness concerns, as gradient exchanges among nodes are vulnerable to attacks like gradient inversion (GradInv) and membership inference attacks (MIA). The trustworthiness of compressed SGD remains underexplored, leaving important ...
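The snippet does not name the compression scheme; top-k sparsification is a common choice, sketched below for illustration:

```python
import torch

def topk_compress(grad, ratio=0.01):
    """Top-k gradient sparsification (a common compressed-SGD scheme):
    keep only the largest-magnitude entries. The sparse result is
    cheaper to exchange, but it is still a function of the private
    data, so it remains a target for GradInv/MIA-style attacks."""
    flat = grad.flatten()
    k = max(1, int(flat.numel() * ratio))
    _, idx = flat.abs().topk(k)
    sparse = torch.zeros_like(flat)
    sparse[idx] = flat[idx]
    return sparse.view_as(grad)
```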
Such attacks, known as gradient inversion attacks, include techniques like Deep Leakage from Gradients (DLG). In this work, we explore the implications of gradient inversion attacks in FL and propose a novel defence mechanism, called Pruned Frequency-based Gradient Defence (pFGD), to mitigate these risks. Our defence strategy combines frequency transformation using techniques such as the Discrete Cosine Transform...
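The snippet does not give pFGD's exact procedure; one plausible minimal reading, combining a DCT with coefficient pruning (this is an assumption-laden sketch, not the paper's algorithm), looks like this:

```python
import numpy as np
from scipy.fft import dct, idct

def frequency_prune(grad, keep_ratio=0.1):
    """Illustrative frequency-domain pruning (NOT the exact pFGD):
    DCT the flattened gradient, zero all but the largest-magnitude
    coefficients, then transform back."""
    flat = grad.ravel().astype(np.float64)
    coeffs = dct(flat, norm="ortho")
    k = max(1, int(coeffs.size * keep_ratio))
    thresh = np.sort(np.abs(coeffs))[-k]  # k-th largest magnitude
    coeffs[np.abs(coeffs) < thresh] = 0.0
    return idct(coeffs, norm="ortho").reshape(grad.shape)
```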
In addition, a thorough understanding of gradient leakage attacks benefits the study of model inversion attacks. Furthermore, gradient leakage attacks can be performed covertly, without hampering training performance, which makes studying them in depth all the more important. In this paper, a ...
With the trend toward sharing pretrained models, the risk of stealing training datasets through membership inference attacks and model inversion attacks is further heightened. To tackle privacy concerns in deep learning tasks, we propose an improved Differential Privacy Stochastic Gradient ...
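The snippet's improved variant is not shown; for reference, standard DP-SGD (Abadi et al., 2016) clips each example's gradient and adds Gaussian noise. A minimal, deliberately explicit sketch:

```python
import torch

def dp_sgd_step(model, batch, loss_fn, lr=0.1, clip=1.0, sigma=1.0):
    """Standard DP-SGD step: per-example gradient clipping followed by
    Gaussian noise on the summed gradients (baseline, not the paper's
    improved variant)."""
    xs, ys = batch
    summed = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in zip(xs, ys):  # per-example gradients (slow but explicit)
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, model.parameters())
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip / (norm + 1e-12)).clamp(max=1.0)  # clip to norm <= clip
        for s, g in zip(summed, grads):
            s += g * scale
    n = len(xs)
    with torch.no_grad():
        for p, s in zip(model.parameters(), summed):
            noise = sigma * clip * torch.randn_like(s)
            p -= lr * (s + noise) / n
```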