Neil Zhenqiang Gong (Neil Gong 0001)
Person information
- affiliation: Duke University, Durham, NC, USA
2020 – today
- 2024
- [j16] Wei Sun, Tingjun Chen, Neil Gong: SoK: Secure Human-centered Wireless Sensing. Proc. Priv. Enhancing Technol. 2024(2): 313-329 (2024)
- [j15] Yixin Wu, Xinlei He, Pascal Berrang, Mathias Humbert, Michael Backes, Neil Zhenqiang Gong, Yang Zhang: Link Stealing Attacks Against Inductive Graph Neural Networks. Proc. Priv. Enhancing Technol. 2024(4): 818-839 (2024)
- [c107] Yueqi Xie, Minghong Fang, Renjie Pi, Neil Gong: GradSafe: Detecting Jailbreak Prompts for LLMs via Safety-Critical Gradient Analysis. ACL (1) 2024: 507-518
- [c106] Wen Huang, Hongbin Liu, Minxin Guo, Neil Gong: Visual Hallucinations of Multi-modal Large Language Models. ACL (Findings) 2024: 9614-9631
- [c105] Jiawen Shi, Zenghui Yuan, Yinuo Liu, Yue Huang, Pan Zhou, Lichao Sun, Neil Zhenqiang Gong: Optimization-based Prompt Injection Attack to LLM-as-a-Judge. CCS 2024: 660-674
- [c104] Zonghao Huang, Neil Zhenqiang Gong, Michael K. Reiter: A General Framework for Data-Use Auditing of ML Models. CCS 2024: 1300-1314
- [c103] Minghong Fang, Zifan Zhang, Hairi, Prashant Khanduri, Jia Liu, Songtao Lu, Yuchen Liu, Neil Gong: Byzantine-Robust Decentralized Federated Learning. CCS 2024: 2874-2888
- [c102] Bo Hui, Haolin Yuan, Neil Gong, Philippe Burlina, Yinzhi Cao: PLeak: Prompt Leaking Attacks against Large Language Model Applications. CCS 2024: 3600-3614
- [c101] Neil Gong, Qi Li, Xiaoli Zhang: AACD '24: 11th ACM Workshop on Adaptive and Autonomous Cyber Defense. CCS 2024: 4884-4885
- [c100] Jinghuai Zhang, Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong: Data Poisoning Based Backdoor Attacks to Contrastive Learning. CVPR 2024: 24357-24366
- [c99] Yuqi Jia, Saeed Vahidian, Jingwei Sun, Jianyi Zhang, Vyacheslav Kungurtsev, Neil Zhenqiang Gong, Yiran Chen: Unlocking the Potential of Federated Learning: The Symphony of Dataset Distillation via Deep Generative Latents. ECCV (78) 2024: 18-33
- [c98] Zhengyuan Jiang, Moyang Guo, Yuepeng Hu, Jinyuan Jia, Neil Zhenqiang Gong: Certifiably Robust Image Watermark. ECCV (77) 2024: 427-443
- [c97] Roy Xie, Junlin Wang, Ruomin Huang, Minxing Zhang, Rong Ge, Jian Pei, Neil Gong, Bhuwan Dhingra: ReCaLL: Membership Inference via Relative Conditional Log-Likelihoods. EMNLP 2024: 8671-8689
- [c96] Yue Huang, Jiawen Shi, Yuan Li, Chenrui Fan, Siyuan Wu, Qihui Zhang, Yixin Liu, Pan Zhou, Yao Wan, Neil Zhenqiang Gong, Lichao Sun: MetaTool Benchmark for Large Language Models: Deciding Whether to Use Tools and Which to Use. ICLR 2024
- [c95] Kaijie Zhu, Jiaao Chen, Jindong Wang, Neil Zhenqiang Gong, Diyi Yang, Xing Xie: DyVal: Dynamic Evaluation of Large Language Models for Reasoning Tasks. ICLR 2024
- [c94] Yue Huang, Lichao Sun, Haoran Wang, Siyuan Wu, Qihui Zhang, Yuan Li, Chujie Gao, Yixin Huang, Wenhan Lyu, Yixuan Zhang, Xiner Li, Hanchi Sun, Zhengliang Liu, Yixin Liu, Yijue Wang, Zhikun Zhang, Bertie Vidgen, Bhavya Kailkhura, Caiming Xiong, Chaowei Xiao, Chunyuan Li, Eric P. Xing, Furong Huang, Hao Liu, Heng Ji, Hongyi Wang, Huan Zhang, Huaxiu Yao, Manolis Kellis, Marinka Zitnik, Meng Jiang, Mohit Bansal, James Zou, Jian Pei, Jian Liu, Jianfeng Gao, Jiawei Han, Jieyu Zhao, Jiliang Tang, Jindong Wang, Joaquin Vanschoren, John C. Mitchell, Kai Shu, Kaidi Xu, Kai-Wei Chang, Lifang He, Lifu Huang, Michael Backes, Neil Zhenqiang Gong, Philip S. Yu, Pin-Yu Chen, Quanquan Gu, Ran Xu, Rex Ying, Shuiwang Ji, Suman Jana, Tianlong Chen, Tianming Liu, Tianyi Zhou, William Wang, Xiang Li, Xiangliang Zhang, Xiao Wang, Xing Xie, Xun Chen, Xuyu Wang, Yan Liu, Yanfang Ye, Yinzhi Cao, Yong Chen, Yue Zhao: Position: TrustLLM: Trustworthiness in Large Language Models. ICML 2024
- [c93] Yueqi Xie, Minghong Fang, Neil Zhenqiang Gong: FedREDefense: Defending against Model Poisoning Attacks for Federated Learning using Model Update Reconstruction Error. ICML 2024
- [c92] Kaijie Zhu, Jindong Wang, Jiaheng Zhou, Zichen Wang, Hao Chen, Yidong Wang, Linyi Yang, Wei Ye, Yue Zhang, Neil Gong, Xing Xie: PromptRobust: Towards Evaluating the Robustness of Large Language Models on Adversarial Prompts. LAMPS@CCS 2024: 57-68
- [c91] Hongbin Liu, Moyang Guo, Zhengyuan Jiang, Lun Wang, Neil Gong: AudioMarkBench: Benchmarking Robustness of Audio Watermarking. NeurIPS 2024
- [c90] Hongbin Liu, Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong: Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning. SP (Workshops) 2024: 144-156
- [c89] Yuchen Yang, Bo Hui, Haolin Yuan, Neil Gong, Yinzhi Cao: SneakyPrompt: Jailbreaking Text-to-image Generative Models. SP 2024: 897-912
- [c88] Hongbin Liu, Michael K. Reiter, Neil Zhenqiang Gong: Mudjacking: Patching Backdoor Vulnerabilities in Foundation Models. USENIX Security Symposium 2024
- [c87] Yupei Liu, Yuqi Jia, Runpeng Geng, Jinyuan Jia, Neil Zhenqiang Gong: Formalizing and Benchmarking Prompt Injection Attacks and Defenses. USENIX Security Symposium 2024
- [c86] Minxue Tang, Anna Dai, Louis DiValentin, Aolin Ding, Amin Hass, Neil Zhenqiang Gong, Yiran Chen, Hai (Helen) Li: ModelGuard: Information-Theoretic Defense Against Model Extraction Attacks. USENIX Security Symposium 2024
- [c85] Yichang Xu, Ming Yin, Minghong Fang, Neil Zhenqiang Gong: Robust Federated Learning Mitigates Client-side Training Data Distribution Inference Attacks. WWW (Companion Volume) 2024: 798-801
- [c84] Ming Yin, Yichang Xu, Minghong Fang, Neil Zhenqiang Gong: Poisoning Federated Recommender Systems with Fake Users. WWW 2024: 3555-3565
- [i112] Lichao Sun, Yue Huang, Haoran Wang, Siyuan Wu, Qihui Zhang, Chujie Gao, Yixin Huang, Wenhan Lyu, Yixuan Zhang, Xiner Li, Zhengliang Liu, Yixin Liu, Yijue Wang, Zhikun Zhang, Bhavya Kailkhura, Caiming Xiong, Chaowei Xiao, Chunyuan Li, Eric P. Xing, Furong Huang, Hao Liu, Heng Ji, Hongyi Wang, Huan Zhang, Huaxiu Yao, Manolis Kellis, Marinka Zitnik, Meng Jiang, Mohit Bansal, James Zou, Jian Pei, Jian Liu, Jianfeng Gao, Jiawei Han, Jieyu Zhao, Jiliang Tang, Jindong Wang, John C. Mitchell, Kai Shu, Kaidi Xu, Kai-Wei Chang, Lifang He, Lifu Huang, Michael Backes, Neil Zhenqiang Gong, Philip S. Yu, Pin-Yu Chen, Quanquan Gu, Ran Xu, Rex Ying, Shuiwang Ji, Suman Jana, Tianlong Chen, Tianming Liu, Tianyi Zhou, William Wang, Xiang Li, Xiangliang Zhang, Xiao Wang, Xing Xie, Xun Chen, Xuyu Wang, Yan Liu, Yanfang Ye, Yinzhi Cao, Yue Zhao: TrustLLM: Trustworthiness in Large Language Models. CoRR abs/2401.05561 (2024)
- [i111] Ming Yin, Yichang Xu, Minghong Fang, Neil Zhenqiang Gong: Poisoning Federated Recommender Systems with Fake Users. CoRR abs/2402.11637 (2024)
- [i110] Yueqi Xie, Minghong Fang, Renjie Pi, Neil Zhenqiang Gong: GradSafe: Detecting Unsafe Prompts for LLMs via Safety-Critical Gradient Analysis. CoRR abs/2402.13494 (2024)
- [i109] Wen Huang, Hongbin Liu, Minxin Guo, Neil Zhenqiang Gong: Visual Hallucinations of Multi-modal Large Language Models. CoRR abs/2402.14683 (2024)
- [i108] Hongbin Liu, Michael K. Reiter, Neil Zhenqiang Gong: Mudjacking: Patching Backdoor Vulnerabilities in Foundation Models. CoRR abs/2402.14977 (2024)
- [i107] Yichang Xu, Ming Yin, Minghong Fang, Neil Zhenqiang Gong: Robust Federated Learning Mitigates Client-side Training Data Distribution Inference Attacks. CoRR abs/2403.03149 (2024)
- [i106] Yuepeng Hu, Zhengyuan Jiang, Moyang Guo, Neil Zhenqiang Gong: A Transfer Attack to Image Watermarks. CoRR abs/2403.15365 (2024)
- [i105] Jiawen Shi, Zenghui Yuan, Yinuo Liu, Yue Huang, Pan Zhou, Lichao Sun, Neil Zhenqiang Gong: Optimization-based Prompt Injection Attack to LLM-as-a-Judge. CoRR abs/2403.17710 (2024)
- [i104] Zhengyuan Jiang, Moyang Guo, Yuepeng Hu, Neil Zhenqiang Gong: Watermark-based Detection and Attribution of AI-Generated Content. CoRR abs/2404.04254 (2024)
- [i103] Jiacheng Du, Jiahui Hu, Zhibo Wang, Peng Sun, Neil Zhenqiang Gong, Kui Ren: SoK: Gradient Leakage in Federated Learning. CoRR abs/2404.05403 (2024)
- [i102] Yueqi Xie, Minghong Fang, Neil Zhenqiang Gong: PoisonedFL: Model Poisoning Attacks to Federated Learning via Multi-Round Consistency. CoRR abs/2404.15611 (2024)
- [i101] Yixin Wu, Xinlei He, Pascal Berrang, Mathias Humbert, Michael Backes, Neil Zhenqiang Gong, Yang Zhang: Link Stealing Attacks Against Inductive Graph Neural Networks. CoRR abs/2405.05784 (2024)
- [i100] Yujie Zhang, Neil Gong, Michael K. Reiter: Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning. CoRR abs/2405.06206 (2024)
- [i99] Bo Hui, Haolin Yuan, Neil Zhenqiang Gong, Philippe Burlina, Yinzhi Cao: PLeak: Prompt Leaking Attacks against Large Language Model Applications. CoRR abs/2405.06823 (2024)
- [i98] Yuepeng Hu, Zhengyuan Jiang, Moyang Guo, Neil Zhenqiang Gong: Stable Signature is Unstable: Removing Image Watermark from Diffusion Models. CoRR abs/2405.07145 (2024)
- [i97] Hongbin Liu, Moyang Guo, Zhengyuan Jiang, Lun Wang, Neil Zhenqiang Gong: AudioMarkBench: Benchmarking Robustness of Audio Watermarking. CoRR abs/2406.06979 (2024)
- [i96] Minghong Fang, Zifan Zhang, Hairi, Prashant Khanduri, Jia Liu, Songtao Lu, Yuchen Liu, Neil Zhenqiang Gong: Byzantine-Robust Decentralized Federated Learning. CoRR abs/2406.10416 (2024)
- [i95] Roy Xie, Junlin Wang, Ruomin Huang, Minxing Zhang, Rong Ge, Jian Pei, Neil Zhenqiang Gong, Bhuwan Dhingra: ReCaLL: Membership Inference via Relative Conditional Log-Likelihoods. CoRR abs/2406.15968 (2024)
- [i94] Dongping Chen, Jiawen Shi, Yao Wan, Pan Zhou, Neil Zhenqiang Gong, Lichao Sun: Self-Cognition in Large Language Models: An Exploratory Study. CoRR abs/2407.01505 (2024)
- [i93] Zhengyuan Jiang, Moyang Guo, Yuepeng Hu, Jinyuan Jia, Neil Zhenqiang Gong: Certifiably Robust Image Watermark. CoRR abs/2407.04086 (2024)
- [i92] Yuqi Jia, Minghong Fang, Hongbin Liu, Jinghuai Zhang, Neil Zhenqiang Gong: Tracing Back the Malicious Clients in Poisoning Attacks to Federated Learning. CoRR abs/2407.07221 (2024)
- [i91] Zedian Shao, Hongbin Liu, Yuepeng Hu, Neil Zhenqiang Gong: Refusing Safe Prompts for Multi-modal Large Language Models. CoRR abs/2407.09050 (2024)
- [i90] Mihai Christodorescu, Ryan Craven, Soheil Feizi, Neil Gong, Mia Hoffmann, Somesh Jha, Zhengyuan Jiang, Mehrdad Saberi Kamarposhti, John C. Mitchell, Jessica Newman, Emelia Probasco, Yanjun Qi, Khawaja Shams, Matthew Turek: Securing the Future of GenAI: Policy and Technology. CoRR abs/2407.12999 (2024)
- [i89] Zonghao Huang, Neil Zhenqiang Gong, Michael K. Reiter: A General Framework for Data-Use Auditing of ML Models. CoRR abs/2407.15100 (2024)
- [i88] Yupei Liu, Yuqi Jia, Jinyuan Jia, Neil Zhenqiang Gong: Evaluating Large Language Model based Personal Information Extraction and Countermeasures. CoRR abs/2408.07291 (2024)
- [i87] Xilong Wang, Hao Fu, Jindong Wang, Neil Zhenqiang Gong: StringLLM: Understanding the String Processing Capability of Large Language Models. CoRR abs/2410.01208 (2024)
- [i86] Zhongye Liu, Hongbin Liu, Yuepeng Hu, Zedian Shao, Neil Zhenqiang Gong: Automatically Generating Visual Hallucination Test Cases for Multimodal Large Language Models. CoRR abs/2410.11242 (2024)
- [i85] Zedian Shao, Hongbin Liu, Jaden Mu, Neil Zhenqiang Gong: Making LLMs Vulnerable to Prompt Injection via Poisoning Alignment. CoRR abs/2410.14827 (2024)
- [i84] Moyang Guo, Yuepeng Hu, Zhengyuan Jiang, Zeyu Li, Amir Sadovnik, Arka Daw, Neil Gong: AI-generated Image Detection: Passive or Watermark? CoRR abs/2411.13553 (2024)
- [i83] Mihai Christodorescu, Ryan Craven, Soheil Feizi, Neil Zhenqiang Gong, Mia Hoffmann, Somesh Jha, Zhengyuan Jiang, Mehrdad Saberi Kamarposhti, John C. Mitchell, Jessica Newman, Emelia Probasco, Yanjun Qi, Khawaja Shams, Matthew Turek: Securing the Future of GenAI: Policy and Technology. IACR Cryptol. ePrint Arch. 2024: 855 (2024)
- 2023
- [j14] Chengbin Pang, Hongbin Liu, Yifan Wang, Neil Zhenqiang Gong, Bing Mao, Jun Xu: Generation-based fuzzing? Don't build a new generator, reuse! Comput. Secur. 129: 103178 (2023)
- [c83] Zhengyuan Jiang, Jinghuai Zhang, Neil Zhenqiang Gong: Evading Watermark based Detection of AI-Generated Content. CCS 2023: 1168-1181
- [c82] Jinghuai Zhang, Jinyuan Jia, Hongbin Liu, Neil Zhenqiang Gong: PointCert: Point Cloud Classification with Deterministic Certified Robustness Guarantees. CVPR 2023: 9496-9505
- [c81] Yuchen Yang, Haolin Yuan, Bo Hui, Neil Zhenqiang Gong, Neil Fendley, Philippe Burlina, Yinzhi Cao: Fortifying Federated Learning against Membership Inference Attacks via Client-level Input Perturbation. DSN 2023: 288-301
- [c80] Zhengyuan Jiang, Minghong Fang, Neil Zhenqiang Gong: IPCert: Provably Robust Intellectual Property Protection for Machine Learning. ICCV (Workshops) 2023: 3614-3623
- [c79] Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong: REaaS: Enabling Adversarially Robust Downstream Classifiers via Robust Encoder as a Service. NDSS 2023
- [c78] Xiaoyu Cao, Jinyuan Jia, Zaixi Zhang, Neil Zhenqiang Gong: FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information. SP 2023: 1366-1383
- [c77] Yuchen Yang, Bo Hui, Haolin Yuan, Neil Zhenqiang Gong, Yinzhi Cao: PrivateFL: Accurate, Differentially Private Federated Learning via Personalized Data Transformation. USENIX Security Symposium 2023: 1595-1612
- [c76] Jinyuan Jia, Yupei Liu, Yuepeng Hu, Neil Zhenqiang Gong: PORE: Provably Robust Recommender Systems against Data Poisoning Attacks. USENIX Security Symposium 2023: 1703-1720
- [c75] Xiaoguang Li, Ninghui Li, Wenhai Sun, Neil Zhenqiang Gong, Hui Li: Fine-grained Poisoning Attack to Local Differential Privacy Protocols for Mean and Variance Estimation. USENIX Security Symposium 2023: 1739-1756
- [i82] Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong: REaaS: Enabling Adversarially Robust Downstream Classifiers via Robust Encoder as a Service. CoRR abs/2301.02905 (2023)
- [i81] Jinghuai Zhang, Jinyuan Jia, Hongbin Liu, Neil Zhenqiang Gong: PointCert: Point Cloud Classification with Deterministic Certified Robustness Guarantees. CoRR abs/2303.01959 (2023)
- [i80] Jinyuan Jia, Yupei Liu, Yuepeng Hu, Neil Zhenqiang Gong: PORE: Provably Robust Recommender Systems against Data Poisoning Attacks. CoRR abs/2303.14601 (2023)
- [i79] Zhengyuan Jiang, Jinghuai Zhang, Neil Zhenqiang Gong: Evading Watermark based Detection of AI-Generated Content. CoRR abs/2305.03807 (2023)
- [i78] Yuchen Yang, Bo Hui, Haolin Yuan, Neil Zhenqiang Gong, Yinzhi Cao: SneakyPrompt: Evaluating Robustness of Text-to-image Generative Models' Safety Filters. CoRR abs/2305.12082 (2023)
- [i77] Kaijie Zhu, Jindong Wang, Jiaheng Zhou, Zichen Wang, Hao Chen, Yidong Wang, Linyi Yang, Wei Ye, Neil Zhenqiang Gong, Yue Zhang, Xing Xie: PromptBench: Towards Evaluating the Robustness of Large Language Models on Adversarial Prompts. CoRR abs/2306.04528 (2023)
- [i76] Minglei Yin, Bin Liu, Neil Zhenqiang Gong, Xin Li: Securing Visually-Aware Recommender Systems: An Adversarial Image Reconstruction and Detection Framework. CoRR abs/2306.07992 (2023)
- [i75] Kaijie Zhu, Jiaao Chen, Jindong Wang, Neil Zhenqiang Gong, Diyi Yang, Xing Xie: DyVal: Graph-informed Dynamic Evaluation of Large Language Models. CoRR abs/2309.17167 (2023)
- [i74] Yue Huang, Jiawen Shi, Yuan Li, Chenrui Fan, Siyuan Wu, Qihui Zhang, Yixin Liu, Pan Zhou, Yao Wan, Neil Zhenqiang Gong, Lichao Sun: MetaTool Benchmark for Large Language Models: Deciding Whether to Use Tools and Which to Use. CoRR abs/2310.03128 (2023)
- [i73] Yupei Liu, Yuqi Jia, Runpeng Geng, Jinyuan Jia, Neil Zhenqiang Gong: Prompt Injection Attacks and Defenses in LLM-Integrated Applications. CoRR abs/2310.12815 (2023)
- [i72] Yuqi Jia, Minghong Fang, Neil Zhenqiang Gong: Competitive Advantage Attacks to Decentralized Federated Learning. CoRR abs/2310.13862 (2023)
- [i71] Zonghao Huang, Neil Gong, Michael K. Reiter: Mendata: A Framework to Purify Manipulated Training Data. CoRR abs/2312.01281 (2023)
- [i70] Yuqi Jia, Saeed Vahidian, Jingwei Sun, Jianyi Zhang, Vyacheslav Kungurtsev, Neil Zhenqiang Gong, Yiran Chen: Unlocking the Potential of Federated Learning: The Symphony of Dataset Distillation via Deep Generative Latents. CoRR abs/2312.01537 (2023)
- 2022
- [j13] Jia Lu, Ryan Tsoi, Nan Luo, Yuanchi Ha, Shangying Wang, Minjun Kwak, Yasa Baig, Nicole Moiseyev, Shari Tian, Alison Zhang, Neil Zhenqiang Gong, Lingchong You: Distributed information encoding and decoding using self-organized spatial patterns. Patterns 3(10): 100590 (2022)
- [j12] Xiaoyu Cao, Zaixi Zhang, Jinyuan Jia, Neil Zhenqiang Gong: FLCert: Provably Secure Federated Learning Against Poisoning Attacks. IEEE Trans. Inf. Forensics Secur. 17: 3691-3705 (2022)
- [c74] Jinyuan Jia, Yupei Liu, Xiaoyu Cao, Neil Zhenqiang Gong: Certified Robustness of Nearest Neighbors against Data Poisoning and Backdoor Attacks. AAAI 2022: 9575-9583
- [c73] Minghong Fang, Jia Liu, Neil Zhenqiang Gong, Elizabeth S. Bentley: AFLGuard: Byzantine-robust Asynchronous Federated Learning. ACSAC 2022: 632-646
- [c72] Binghui Wang, Tianchen Zhou, Song Li, Yinzhi Cao, Neil Zhenqiang Gong: GraphTrack: A Graph-based Cross-Device Tracking Framework. AsiaCCS 2022: 82-96
- [c71] Da Zhong, Haipei Sun, Jun Xu, Neil Zhenqiang Gong, Wendy Hui Wang: Understanding Disparate Effects of Membership Inference Attacks and their Countermeasures. AsiaCCS 2022: 959-974
- [c70] Yupei Liu, Jinyuan Jia, Hongbin Liu, Neil Zhenqiang Gong: StolenEncoder: Stealing Pre-trained Encoders in Self-supervised Learning. CCS 2022: 2115-2128
- [c69] Jiyu Chen, Yiwen Guo, Hao Chen, Neil Gong: Membership Inference Attack in Face of Data Transformations. CNS 2022: 299-307
- [c68] Xiaoyu Cao, Neil Zhenqiang Gong: MPAF: Model Poisoning Attacks to Federated Learning based on Fake Clients. CVPR Workshops 2022: 3395-3403
- [c67] Huanrui Yang, Xiaoxuan Yang, Neil Zhenqiang Gong, Yiran Chen: HERO: hessian-enhanced robust optimization for unifying and improving generalization and quantization performance. DAC 2022: 25-30
- [c66] Haolin Yuan, Bo Hui, Yuchen Yang, Philippe Burlina, Neil Zhenqiang Gong, Yinzhi Cao: Addressing Heterogeneity in Federated Learning via Distributional Transformation. ECCV (38) 2022: 179-195
- [c65] Xinlei He, Hongbin Liu, Neil Zhenqiang Gong, Yang Zhang: Semi-Leak: Membership Inference Attacks Against Semi-supervised Learning. ECCV (31) 2022: 365-381
- [c64] Jinyuan Jia, Binghui Wang, Xiaoyu Cao, Hongbin Liu, Neil Zhenqiang Gong: Almost Tight L0-norm Certified Robustness of Top-k Predictions against Adversarial Perturbations. ICLR 2022
- [c63] Aritra Ray, Jinyuan Jia, Sohini Saha, Jayeeta Chaudhuri, Neil Zhenqiang Gong, Krishnendu Chakrabarty: Deep Neural Network Piration without Accuracy Loss. ICMLA 2022: 1032-1038
- [c62] Zaixi Zhang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong: FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients. KDD 2022: 2545-2555
- [c61] Jinyuan Jia, Wenjie Qu, Neil Zhenqiang Gong: MultiGuard: Provably Robust Multi-label Classification against Adversarial Examples. NeurIPS 2022
- [c60] Jinyuan Jia, Yupei Liu, Neil Zhenqiang Gong: BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning. SP 2022: 2043-2059
- [c59] Yongji Wu, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong: Poisoning Attacks to Local Differential Privacy Protocols for Key-Value Data. USENIX Security Symposium 2022: 519-536
- [c58] Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong: PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning. USENIX Security Symposium 2022: 3629-3645
- [i69] Yupei Liu, Jinyuan Jia, Hongbin Liu, Neil Zhenqiang Gong: StolenEncoder: Stealing Pre-trained Encoders. CoRR abs/2201.05889 (2022)
- [i68] Binghui Wang, Tianchen Zhou, Song Li, Yinzhi Cao, Neil Zhenqiang Gong: GraphTrack: A Graph-based Cross-Device Tracking Framework. CoRR abs/2203.06833 (2022)
- [i67] Xiaoyu Cao, Neil Zhenqiang Gong: MPAF: Model Poisoning Attacks to Federated Learning based on Fake Clients. CoRR abs/2203.08669 (2022)
- [i66] Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong: PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning. CoRR abs/2205.06401 (2022)
- [i65] Xiaoguang Li, Neil Zhenqiang Gong, Ninghui Li, Wenhai Sun, Hui Li: Fine-grained Poisoning Attacks to Local Differential Privacy Protocols for Mean and Variance Estimation. CoRR abs/2205.11782 (2022)
- [i64] Zaixi Zhang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong: FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients. CoRR abs/2207.09209 (2022)
- [i63] Xinlei He, Hongbin Liu, Neil Zhenqiang Gong, Yang Zhang: Semi-Leak: Membership Inference Attacks Against Semi-supervised Learning. CoRR abs/2207.12535 (2022)
- [i62] Xiaoyu Cao, Zaixi Zhang, Jinyuan Jia, Neil Zhenqiang Gong: FLCert: Provably Secure Federated Learning against Poisoning Attacks. CoRR abs/2210.00584 (2022)
- [i61] Jinyuan Jia, Wenjie Qu, Neil Zhenqiang Gong: MultiGuard: Provably Robust Multi-label Classification against Adversarial Examples. CoRR abs/2210.01111 (2022)
- [i60] Xiaoyu Cao, Jinyuan Jia, Zaixi Zhang, Neil Zhenqiang Gong: FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information. CoRR abs/2210.10936 (2022)
- [i59] Haolin Yuan, Bo Hui, Yuchen Yang, Philippe Burlina, Neil Zhenqiang Gong, Yinzhi Cao: Addressing Heterogeneity in Federated Learning via Distributional Transformation. CoRR abs/2210.15025 (2022)
- [i58] Jinghuai Zhang, Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong: CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning. CoRR abs/2211.08229 (2022)
- [i57] Wei Sun, Tingjun Chen, Neil Gong: SoK: Inference Attacks and Defenses in Human-Centered Wireless Sensing. CoRR abs/2211.12087 (2022)
- [i56] Hongbin Liu, Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong: Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning. CoRR abs/2212.03334 (2022)
- [i55] Minghong Fang, Jia Liu, Neil Zhenqiang Gong, Elizabeth S. Bentley: AFLGuard: Byzantine-robust Asynchronous Federated Learning. CoRR abs/2212.06325 (2022)
- 2021
- [j11] Chris Chao-Chun Cheng, Chen Shi, Neil Zhenqiang Gong, Yong Guan: LogExtractor: Extracting digital evidence from android log messages via string and taint analysis. Digit. Investig. 37 Supplement: 301193 (2021)
- [c57] Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong: Provably Secure Federated Learning against Malicious Clients. AAAI 2021: 6885-6893
- [c56]