📝 Selected Publications
(* indicates equal contribution; # indicates corresponding authorship.)

Towards Efficient Data-Free Black-box Adversarial Attack
Jie Zhang*, Bo Li*, Jianghe Xu, Shuang Wu, Shouhong Ding, Chao Wu#. (CVPR 2022) code
- In this paper, by rethinking the collaborative relationship between the generator and the substitute model, we design a novel data-free black-box attack framework that efficiently imitates the target model with a small number of queries and achieves a high attack success rate.
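
For illustration only, here is a minimal sketch of such a generator–substitute training loop. The generator, substitute architecture, query oracle, and losses are placeholder assumptions for the sketch, not the design used in the paper.

```python
# Minimal sketch of a data-free substitute-training loop for black-box attacks.
# All architectures, losses, and the victim stub below are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    def __init__(self, nz=100, img=32):
        super().__init__()
        self.img = img
        self.net = nn.Sequential(
            nn.Linear(nz, 256), nn.ReLU(),
            nn.Linear(256, 3 * img * img), nn.Tanh())

    def forward(self, z):
        return self.net(z).view(-1, 3, self.img, self.img)

substitute = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
generator = Generator()
opt_s = torch.optim.Adam(substitute.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)

def query_target(x):
    """Black-box oracle: returns labels only (random stub standing in for the victim)."""
    with torch.no_grad():
        return torch.randint(0, 10, (x.size(0),))

for step in range(100):                       # query budget
    z = torch.randn(64, 100)

    # 1) Generator step: synthesize queries on which the substitute is uncertain,
    #    so each (expensive) query to the target is informative.
    x = generator(z)
    logits = substitute(x)
    entropy = -(F.log_softmax(logits, 1) * F.softmax(logits, 1)).sum(1).mean()
    opt_g.zero_grad(); (-entropy).backward(); opt_g.step()

    # 2) Substitute step: imitate the target's answers on the synthetic queries.
    with torch.no_grad():
        x = generator(z)
        y = query_target(x)
    loss = F.cross_entropy(substitute(x), y)
    opt_s.zero_grad(); loss.backward(); opt_s.step()

# Adversarial examples would then be crafted on the substitute (e.g., with PGD)
# and transferred to the black-box target.
```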

Federated Learning with Label Distribution Skew via Logits Calibration
Jie Zhang, Zhiqi Li, Bo Li, Jianghe Xu, Shuang Wu, Shouhong Ding, Chao Wu#. (ICML 2022)
- In this work, we investigate label distribution skew from a statistical view. We demonstrate both theoretically and empirically that previous methods based on softmax cross-entropy are not suitable, as they can result in local models heavily overfitting to minority classes and missing classes. We then propose FedLC (Federated Learning via Logits Calibration), which calibrates the logits before the softmax cross-entropy according to the probability of occurrence of each class.
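
As a rough illustration of calibrating logits with the local label distribution before the softmax cross-entropy: the margin below is a generic log-prior adjustment, used here as a stand-in rather than the exact calibration term of FedLC.

```python
# Illustrative sketch: add a per-class margin derived from the local label
# distribution to the logits before softmax cross-entropy. The tau * log(prior)
# margin is a stand-in, not FedLC's exact calibration term.
import torch
import torch.nn.functional as F

def calibrated_cross_entropy(logits, targets, class_counts, tau=1.0):
    """Frequent classes receive a larger offset during training, forcing the model
    to reserve more margin for rare and missing classes."""
    counts = class_counts.float().clamp(min=1)   # avoid log(0) for missing classes
    prior = counts / counts.sum()
    margin = tau * prior.log()                   # larger (less negative) for majority classes
    return F.cross_entropy(logits + margin, targets)

# Example: a client whose local data contains almost only classes 0 and 1.
logits = torch.randn(8, 10, requires_grad=True)
targets = torch.randint(0, 10, (8,))
counts = torch.tensor([500, 480, 3, 2, 1, 0, 0, 1, 1, 1])
loss = calibrated_cross_entropy(logits, targets, counts)
loss.backward()
```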

Accelerating Dataset Distillation via Model Augmentation
Lei Zhang*, Jie Zhang*, Bowen Lei, Subhabrata Mukherjee, Xiang Pan, Bo Zhao, Caiwen Ding, Yao Li, Dongkuan Xu. (CVPR 2023) code
- In this paper, we hypothesize that training the synthetic data against diverse models leads to better generalization. We therefore propose two model augmentation techniques, i.e., using early-stage models and weight perturbation, to learn an informative synthetic set at significantly reduced training cost. Extensive experiments demonstrate that our method achieves up to a 20× speedup while performing on par with state-of-the-art baseline methods.
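
A toy sketch of the two augmentations (sampling an early-stage checkpoint and perturbing its weights). The network and the update objective for the synthetic set are simple placeholders; the actual distillation loss (e.g., a matching objective) is not shown.

```python
# Sketch of the two model-augmentation ideas: draw an early-stage checkpoint and
# perturb its weights before using it to update the synthetic set.
import copy
import random
import torch
import torch.nn as nn

def augment_model(checkpoints, noise_std=0.01):
    """Sample an early-stage model and add Gaussian noise to its weights."""
    model = copy.deepcopy(random.choice(checkpoints))
    with torch.no_grad():
        for p in model.parameters():
            p.add_(noise_std * torch.randn_like(p))
    return model

# Pool of lightly trained ("early-stage") models; untrained stand-ins here.
checkpoints = [nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
               for _ in range(5)]

# Learnable synthetic set (images + fixed labels), updated against many
# augmented models so it does not overfit any single teacher.
syn_x = torch.randn(100, 3, 32, 32, requires_grad=True)
syn_y = torch.arange(100) % 10
opt = torch.optim.SGD([syn_x], lr=0.1)

for it in range(50):
    model = augment_model(checkpoints)
    loss = nn.functional.cross_entropy(model(syn_x), syn_y)  # placeholder objective
    opt.zero_grad(); loss.backward(); opt.step()
```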

DENSE: Data-Free One-Shot Federated Learning
Jie Zhang*, Chen Chen*, Bo Li, Lingjuan Lyu, Shuang Wu, Shouhong Ding, Chunhua Shen, Chao Wu#. (NeurIPS 2022) code
- This paper focuses on one-shot federated learning, i.e., the server learns a model within a single communication round. The proposed DENSE method has two stages: first, training a generator from the ensemble of client models; second, distilling the knowledge of the ensemble into a global model with the synthetic data. We validate the efficacy of DENSE through extensive experiments on 6 datasets under various non-IID settings generated from Dirichlet distributions; the results show that the proposed method consistently outperforms all baselines.
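
An illustrative sketch of the two stages (generator training against the client ensemble, then data-free distillation into a global student). Architectures and loss functions here are stand-ins, not the paper's exact choices.

```python
# Sketch of the two-stage idea: (1) train a generator against the ensemble of
# client models, (2) distill the ensemble into a global student on synthetic data.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_model():
    return nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

client_models = [make_model() for _ in range(5)]      # received after one round
student = make_model()

generator = nn.Sequential(nn.Linear(100, 256), nn.ReLU(),
                          nn.Linear(256, 3 * 32 * 32), nn.Tanh())
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_s = torch.optim.Adam(student.parameters(), lr=1e-3)

def ensemble_logits(x):
    return torch.stack([m(x) for m in client_models]).mean(0)

for step in range(100):
    z = torch.randn(64, 100)
    x = generator(z).view(-1, 3, 32, 32)

    # Stage 1: generator step -- produce samples the ensemble labels confidently
    # (here: minimize the ensemble's cross-entropy against its own predictions).
    logits = ensemble_logits(x)
    g_loss = F.cross_entropy(logits, logits.argmax(1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    # Stage 2: distillation step -- match the student to the ensemble's outputs
    # on the synthetic data (no real client data is ever shared).
    with torch.no_grad():
        x = generator(z).view(-1, 3, 32, 32)
        teacher = F.softmax(ensemble_logits(x), 1)
    s_loss = F.kl_div(F.log_softmax(student(x), 1), teacher, reduction='batchmean')
    opt_s.zero_grad(); s_loss.backward(); opt_s.step()
```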

Delving into Adversarial Robustness of Federated Learning
Jie Zhang*, Bo Li*, Chen Chen, Lingjuan Lyu, Shuang Wu, Shouhong Ding, Chao Wu#. (AAAI 2023)
- To facilitate a better understanding of the adversarial vulnerability of existing FL methods, we conduct comprehensive robustness evaluations across various attacks and adversarial training methods. Moreover, we reveal the negative impact of directly adopting adversarial training in FL, which seriously hurts test accuracy, especially in non-IID settings. We then propose a novel algorithm called Decision Boundary based Federated Adversarial Training (DBFAT), which consists of two components, local re-weighting and global regularization, to improve both the accuracy and robustness of FL systems.
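
A very rough sketch of the two named components using generic stand-ins: per-example re-weighting by prediction margin and an L2 pull toward the global model. The specific weighting and regularizer below are illustrative assumptions, not the exact DBFAT formulation.

```python
# Rough sketch of a local adversarial-training step with (a) per-example
# re-weighting and (b) a proximal pull toward the global model. Both terms
# are generic stand-ins, not DBFAT's exact components.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def local_adversarial_step(model, global_model, x_adv, y, opt, mu=0.01):
    logits = model(x_adv)
    # Local re-weighting: emphasize examples close to the decision boundary,
    # approximated here by a small gap between the top-2 predicted probabilities.
    probs = F.softmax(logits, 1).detach()
    top2 = probs.topk(2, dim=1).values
    weight = 1.0 - (top2[:, 0] - top2[:, 1])          # small margin -> high weight
    loss = (weight * F.cross_entropy(logits, y, reduction='none')).mean()

    # Global regularization: keep the local model near the global one
    # (a simple L2 proximal term here).
    for p, g in zip(model.parameters(), global_model.parameters()):
        loss = loss + (mu / 2) * (p - g.detach()).pow(2).sum()

    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

global_model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
local_model = copy.deepcopy(global_model)
opt = torch.optim.SGD(local_model.parameters(), lr=0.01)
x_adv = torch.randn(16, 3, 32, 32)   # adversarial examples from any attack (e.g., PGD)
y = torch.randint(0, 10, (16,))
local_adversarial_step(local_model, global_model, x_adv, y, opt)
```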

- Rethinking Data Distillation: Do Not Overlook Calibration. D. Zhu, B. Lei, Jie Zhang, Y. Fang, Y. Xie, R. Zhang, D. Xu. (ICCV 2023)
- TARGET: Federated Class-Continual Learning via Exemplar-Free Distillation. Jie Zhang, Chen Chen, Weiming Zhuang, Lingjuan Lyu. (ICCV 2023)
- IDEAL: Query-Efficient Data-Free Learning from Black-Box Models. Jie Zhang*, Chen Chen*, Lingjuan Lyu. (ICLR 2023) code
- GEAR: A Margin-based Federated Adversarial Training Approach. Chen Chen*, Jie Zhang*, Lingjuan Lyu. (AAAI 2022 FL Workshop, Best Student Paper Award)