Publications
(Irregularly updated! See the full publication list on Google Scholar.)
Preprints
* represents equal contribution
S. Liu, Y. Yao, J. Jia, S. Casper, N. Baracaldo, P. Hase, X. Xu, Y. Yao, H. Li, K. R. Varshney, M. Bansal, S. Koyejo, Y. Liu, Rethinking Machine Unlearning for Large Language Models
Selected Conference Papers
* represents equal contribution
Y. Yao*, J. Liu*, Y. Gong*, X. Liu, Y. Wang, X. Lin, S. Liu, Can Adversarial Examples Be Parsed to Reveal Victim Model Information?, WACV’25
Y. Zhang, C. Fan, Y. Zhang, Y. Yao, J. Jia, J. Liu, G. Zhang, G. Liu, R. R. Kompella, X. Liu, S. Liu, UnlearnCanvas: Stylized Image Dataset for Enhanced Machine Unlearning Evaluation in Diffusion Models, NeurIPS’24 Datasets and Benchmarks Track
Z. Pan, Y. Yao, G. Liu, B. Shen, H. V. Zhao, R. R. Kompella, S. Liu, From Trojan Horses to Castle Walls: Unveiling Bilateral Backdoor Effects in Diffusion Models, NeurIPS’24
J. Jia, J. Liu, Y. Zhang, P. Ram, N. Baracaldo, S. Liu, WAGLE: Strategic Weight Attribution for Effective and Modular Unlearning in Large Language Models, NeurIPS’24
Y. Zhang, X. Chen, J. Jia, Y. Zhang, C. Fan, J. Liu, M. Hong, K. Ding, S. Liu, Defensive Unlearning with Adversarial Training for Robust Concept Erasure in Diffusion Models, NeurIPS’24
J. Jia, Y. Zhang, Y. Zhang, J. Liu, B. Runwal, J. Diffenderfer, B. Kailkhura, S. Liu, SOUL: Unlocking the Power of Second-order Optimization for LLM Unlearning, EMNLP’24
Y. Zhang*, J. Jia*, X. Chen, A. Chen, Y. Zhang, J. Liu, K. Ding, S. Liu, To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Unsafe Images … For Now, ECCV’24
C. Fan, J. Liu, A. Hero, S. Liu, Challenging Forgets: Unveiling the Worst-Case Forget Sets in Machine Unlearning, ECCV’24
Y. Zhang*, P. Li*, J. Hong*, J. Li, Y. Zhang, W. Zheng, P.-Y. Chen, J. D. Lee, W. Yin, M. Hong, Z. Wang, S. Liu, T. Chen, Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark, ICML’24
C. Fan, J. Liu, Y. Zhang, D. Wei, E. Wong, S. Liu, SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation, ICLR’24 (spotlight)
A. Chen*, Y. Zhang*, J. Jia, J. Diffenderfer, J. Liu, K. Parasyris, Y. Zhang, Z. Zhang, B. Kailkhura, S. Liu, DeepZero: Scaling up Zeroth-Order Optimization for Deep Model Training, ICLR’24
J. Jia*, J. Liu*, P. Ram, Y. Yao, G. Liu, Y. Liu, P. Sharma, S. Liu, Model Sparsity Can Simplify Machine Unlearning, NeurIPS’23 (spotlight)
Y. Zhang*, Y. Zhang*, A. Chen*, J. Jia, J. Liu, G. Liu, M. Hong, S. Chang, S. Liu, Selectivity Drives Productivity: Efficient Dataset Pruning for Enhanced Transfer Learning, NeurIPS’23
S. Zhang, M. Wang, H. Li, M. Liu, P.-Y. Chen, S. Lu, S. Liu, K. Murugesan, S. Chaudhury, On the Convergence and Sample Complexity Analysis of Deep Q-Networks with Greedy Exploration, NeurIPS’23
Y. Zhang, R. Cai, T. Chen, G. Zhang, H. Zhang, P.-Y. Chen, S. Chang, Z. Wang, S. Liu, Robust Mixture-of-Expert Training for Convolutional Neural Networks, ICCV’23
P. Khanduri, I. Tsaknakis, Y. Zhang, J. Liu, S. Liu, J. Zhang, M. Hong, Linearly Constrained Bilevel Optimization: A Smoothed Implicit Gradient Approach, ICML’23
M. N. R. Chowdhury, S. Zhang, M. Wang, S. Liu, P.-Y. Chen, Patch-level Routing in Mixture-of-Experts is Provably Sample-efficient for Convolutional Neural Networks, ICML’23
A. Chen, Y. Yao, P.-Y. Chen, Y. Zhang, S. Liu, Understanding and Improving Visual Prompting: A Label-Mapping Perspective, CVPR’23
Y. Zhang, X. Chen, J. Jia, S. Liu, K. Ding, Text-Visual Prompting for Efficient 2D Temporal Video Grounding, CVPR’23
Y. Zhang, P. Sharma, P. Ram, M. Hong, K.R. Varshney, S. Liu, What Is Missing in IRM Training and Evaluation? Challenges and Solutions, ICLR’23
S. Zhang, M. Wang, P.-Y. Chen, S. Liu, S. Lu, M. Liu, Joint Edge-Model Sparse Learning is Provably Efficient for Graph Neural Networks, ICLR’23
H. Li, M. Wang, S. Liu, P.-Y. Chen, A Theoretical Understanding of Shallow Vision Transformers: Learning, Generalization, and Sample Complexity, ICLR’23
B. Hou, J. Jia, Y. Zhang, G. Zhang, Y. Zhang, S. Liu, S. Chang, TextGrad: Advancing Robustness Evaluation in NLP by Gradient-Driven Optimization, ICLR’23
Y. Zhang*, A.K. Kamath*, Q. Wu*, Z. Fan*, W. Chen, Z. Wang, S. Chang, S. Liu, C. Hao, Data-Model-Circuit Tri-Design for Ultra-Light Video Intelligence on Edge Devices, ASP-DAC’23
J. Jia, S. Srikant, T. Mitrovska, C. Gan, S. Chang, S. Liu, U.-M. O'Reilly, CLAWSAT: Towards Both Robust and Accurate Code Models, SANER’23
G. Zhang*, Y. Zhang*, Y. Zhang, W. Fan, Q. Li, S. Liu, S. Chang, Fairness Reprogramming, NeurIPS’22
Y. Zhang*, Y. Yao*, P. Ram, P. Zhao, T. Chen, M. Hong, Y. Wang, S. Liu, Advancing Model Pruning via Bi-level Optimization, NeurIPS’22
G. Zhang*, S. Lu*, Y. Zhang, X. Chen, P.-Y. Chen, Q. Fan, L. Martie, L. Horesh, M. Hong, S. Liu, Distributed Adversarial Training to Robustify Deep Neural Networks at Scale, UAI’22 (oral, acceptance rate 5%)
Y. Zhang*, G. Zhang*, P. Khanduri, M. Hong, S. Chang, S. Liu, Revisiting and Advancing Fast Adversarial Training Through the Lens of Bi-Level Optimization, ICML’22
T. Chen, Z. Zhang, S. Liu, Y. Zhang, S. Chang, Z. Wang, Data-Efficient Double-Win Lottery Tickets from Robust Pre-training, ICML’22
T. Chen*, H. Zhang*, Z. Zhang, S. Chang, S. Liu, P.-Y. Chen, Z. Wang, Linearity Grafting: How Neuron Pruning Helps Certifiable Robustness, ICML’22
C.-Y. Ko, J. Mohapatra, S. Liu, P.-Y. Chen, L. Daniel, L. Weng, Revisiting Contrastive Learning through the Lens of Neighborhood Component Analysis: An Integrated Framework, ICML’22
H. Li, M. Wang, S. Liu, P.-Y. Chen, J. Xiong, Generalization Guarantee of Training Graph Convolutional Networks with Graph Topology Sampling, ICML’22
Y. Xie, D. Wang, P.-Y. Chen, J. Xiong, S. Liu, S. Koyejo, A Word is Worth A Thousand Dollars: Adversarial Attack on Tweets Fools Stock Prediction, NAACL’22
P. Zhao, P. Ram, S. Lu, Y. Yao, D. Bouneffouf, X. Lin, S. Liu, Learning to Generate Image Source-Agnostic Universal Adversarial Perturbations, IJCAI’22
T. Chen*, Z. Zhang*, Y. Zhang*, S. Chang, S. Liu, Z. Wang, Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free, CVPR’22
V. Asnani, X. Yin, T. Hassner, S. Liu, X. Liu, Proactive Image Manipulation Detection, CVPR’22
Y. Gong*, Y. Yao*, Y. Li, Y. Zhang, X. Liu, X. Lin, S. Liu, Reverse Engineering of Imperceptible Adversarial Image Perturbations, ICLR’22
S. Zhang, M. Wang, S. Liu, P.-Y. Chen, J. Xiong, How unlabeled data improve generalization in self-training? A one-hidden-layer theoretical analysis, ICLR’22
Y. Zhang, Y. Yao, J. Jia, J. Yi, M. Hong, S. Chang, S. Liu, How to Robustify Black-Box ML Models? A Zeroth-Order Optimization Perspective, ICLR’22 (spotlight, acceptance rate 5%)
T. Huang, T. Chen, S. Liu, S. Chang, L. Amini, Z. Wang, Optimizer Amalgamation, ICLR’22
P. Khanduri, H. Yang, M. Hong, J. Liu, H.T. Wai, S. Liu, Decentralized Learning for Overparameterized Problems: A Multi-Agent Kernel Approximation Approach, ICLR’22
C. Fan, P. Ram, S. Liu, Sign-MAML: Efficient Model-Agnostic Meta-Learning by SignSGD, NeurIPS’21 Workshop on Meta-Learning (MetaLearn)
X. Ma, G. Yuan, X. Shen, T. Chen, X. Chen, X. Chen, N. Liu, M. Qin, S. Liu, Z. Wang, Y. Wang, Sanity Checks for Lottery Tickets: Does Your Winning Ticket Really Win the Jackpot?, NeurIPS’21
L. Fan, S. Liu, P.-Y. Chen, G. Zhang, C. Gan, When Does Contrastive Learning Preserve Adversarial Robustness from Pretraining to Finetuning?, NeurIPS’21
G. Yuan, X. Ma, W. Niu, Z. Li, Z. Kong, N. Liu, Y. Gong, Z. Zhan, C. He, Q. Jin, S. Wang, M. Qin, B. Ren, Y. Wang, S. Liu, X. Lin, MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge, NeurIPS’21 (spotlight)
J. Wang, T. Zhang, S. Liu, P.-Y. Chen, J. Xu, M. Fardad, B. Li, Adversarial Attack Generation Empowered by Min-Max Optimization, NeurIPS’21
S. Zhang, M. Wang, S. Liu, P.-Y. Chen, J. Xiong, Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Sparse Neural Networks, NeurIPS’21
N. Liu, G. Yuan, Z. Che, X. Shen, X. Ma, Q. Jin, J. Ren, J. Tang, S. Liu, Y. Wang, Lottery Ticket Preserves Weight Correlation: Is It Desirable or Not?, ICML’21
Z. Li, G. Yuan, W. Niu, Y. Li, P. Zhao, Y. Cai, X. Shen, Z. Zhan, Z. Kong, Q. Jin, Z. Chen, S. Liu, K. Yang, Y. Wang, B. Ren, X. Lin, NPAS: A Compiler-Aware Framework of Unified Network Pruning and Architecture Search for Beyond Real-Time Mobile Acceleration, CVPR’21 (oral)
T. Chen, J. Frankle, S. Chang, S. Liu, Y. Zhang, M. Carbin, Z. Wang, The Lottery Tickets Hypothesis for Supervised and Self-supervised Pre-training in Computer Vision Models, CVPR’21
J. Mohapatra, C.-Y. Ko, L. Weng, P.-Y. Chen, S. Liu, L. Daniel, Hidden Cost of Randomized Smoothing, AISTATS’21
Z. Li, P.-Y. Chen, S. Liu, S. Lu, Y. Xu, Rate-Improved Inexact Augmented Lagrangian Method for Constrained Nonconvex Optimization, AISTATS’21
R. Wang, K. Xu, S. Liu, P.-Y. Chen, T.-W. Weng, C. Gan, M. Wang, On Fast Adversarial Robustness Adaptation in Model-Agnostic Meta-Learning, ICLR’21
T. Chen, Z. Zhang, S. Liu, S. Chang, Z. Wang, Robust Overfitting May be Mitigated by Properly Learned Smoothening, ICLR’21
T. Chen, Z. Zhang, S. Liu, S. Chang, Z. Wang, Long Live the Lottery: The Existence of Winning Tickets in Lifelong Learning, ICLR’21
S. Srikant, S. Liu, T. Mitrovska, S. Chang, Q. Fan, G. Zhang, U.-M. O'Reilly, Generating Adversarial Computer Programs using Optimized Obfuscations, ICLR’21
A. Boopathy, L. Weng, S. Liu, P.-Y. Chen, G. Zhang, L. Daniel, Fast Training of Provably Robust Neural Networks by SingleProp, AAAI’21
M. Cheng, P.-Y. Chen, S. Liu, S. Chang, C.-J. Hsieh, P. Das, Self-Progressing Robust Training, AAAI’21
W. Niu, M. Sun, Z. Li, J.-A. Chen, J. Guan, X. Shen, Y. Wang, S. Liu, X. Lin, B. Ren, RT3D: Achieving Real-Time Execution of 3D Convolutional Neural Networks on Mobile Devices, AAAI’21
T. Chen, J. Frankle, S. Chang, S. Liu, Y. Zhang, Z. Wang, M. Carbin, The Lottery Ticket Hypothesis for the Pre-trained BERT Networks, NeurIPS’20
T. Chen, W. Zhang, J. Zhou, S. Chang, S. Liu, L. Amini, Z. Wang, Training Stronger Baselines for Learning to Optimize, NeurIPS’20 (spotlight, acceptance rate 3%)
J. Mohapatra, C.-Y. Ko, L. Weng, P.-Y. Chen, S. Liu, L. Daniel, Higher-Order Certification For Randomized Smoothing, NeurIPS’20 (spotlight, acceptance rate 3%)
K. Xu, G. Zhang, S. Liu, Q. Fan, M. Sun, H. Chen, P.-Y. Chen, Y. Wang, X. Lin, Adversarial T-shirt! Evading Person Detectors in A Physical World, ECCV’20 (spotlight, acceptance rate 5%)
R. Wang, G. Zhang, S. Liu, P.-Y. Chen, J. Xiong, M. Wang, Practical Detection of Trojan Neural Networks: Data-Limited and Data-Free Cases, ECCV’20
X. Ma, W. Niu, T. Zhang, S. Liu, S. Lin, H. Li, W. Wen, X. Chen, J. Tang, K. Ma, B. Ren, Y. Wang, An Image Enhancing Pattern-based Sparsity for Real-time Inference on Mobile Devices, ECCV’20
S. Zhang, M. Wang, S. Liu, P.-Y. Chen, J. Xiong, Fast Learning of Graph Neural Networks with Guaranteed Generalizability: One-hidden-layer Case, ICML’20
S. Dutta, D. Wei, H. Yueksel, P.-Y. Chen, S. Liu, K. R. Varshney, Is There a Trade-Off Between Fairness and Accuracy? A Perspective Using Mismatched Hypothesis Testing, ICML’20
A. Boopathy, S. Liu, G. Zhang, C. Liu, P.-Y. Chen, S. Chang, L. Daniel, Proper Network Interpretability Helps Adversarial Robustness in Classification, ICML’20
S. Liu*, S. Lu*, X. Chen*, Y. Feng*, K. Xu*, A. Al-Dujaili*, M. Hong, U.-M. O'Reilly, Min-Max Optimization without Gradients: Convergence and Applications to Adversarial ML, ICML’20
T. Chen, S. Liu, S. Chang, Y. Cheng, L. Amini, Z. Wang, Adversarial Robustness: From Self-Supervised Pretraining to Fine-Tuning, CVPR’20
J. Mohapatra, L. Weng, P.-Y. Chen, S. Liu, L. Daniel, Towards Verifying Robustness of Neural Networks against Semantic Perturbations, CVPR’20 (oral)
M. Cheng, S. Singh, P.-Y. Chen, S. Liu, C.-J. Hsieh, Sign-OPT: A Query-Efficient Hard-label Adversarial Attack, ICLR’20
S. Liu*, P. Ram*, D. Vijaykeerthy, D. Bouneffouf, G. Bramble, H. Samulowitz, D. Wang, A. Conn, A. Gray, An ADMM Based Framework for AutoML Pipeline Configuration, AAAI’20
L. Weng*, P. Zhao*, S. Liu, P.-Y. Chen, X. Lin, L. Daniel, Towards Certificated Model Robustness Against Weight Perturbations, AAAI’20
X. Chen*, S. Liu*, K. Xu*, X. Li*, X. Lin, M. Hong, D. Cox, ZO-AdaMM: Zeroth-Order Adaptive Momentum Method for Black-Box Optimization, NeurIPS’19
T. Zhang, S. Liu, Y. Wang, M. Fardad, Generation of Low Distortion Adversarial Attacks via Convex Programming, ICDM’19
P. Zhao, S. Liu, P.-Y. Chen, N. Hoang, K. Xu, B. Kailkhura, X. Lin, On the Design of Black-box Adversarial Examples by Leveraging Gradient-free Optimization and Operator Splitting Method, ICCV’19
S. Ye*, K. Xu*, S. Liu, H. Cheng, J.-H. Lambrechts, H. Zhang, A. Zhou, K. Ma, Y. Wang, X. Lin, Adversarial Robustness vs. Model Compression, or Both?, ICCV’19
K. Xu*, H. Chen*, S. Liu, P.-Y. Chen, T.-W. Weng, M. Hong, X. Lin, Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective, IJCAI’19
P.-Y. Chen, L. Wu, S. Liu, I. Rajapakse, Fast Incremental von Neumann Graph Entropy Computation: Theory, Algorithm, and Applications, ICML’19
S. Liu, P.-Y. Chen, X. Chen, M. Hong, signSGD via Zeroth-Order Oracle, ICLR’19
K. Xu*, S. Liu*, P. Zhao, P.-Y. Chen, H. Zhang, D. Erdogmus, Y. Wang, X. Lin, Structured Adversarial Attack: Towards General Implementation and Better Interpretability, ICLR’19
X. Chen, S. Liu, R. Sun, M. Hong, On the Convergence of A Class of Adam-Type Algorithms for Non-Convex Optimization, ICLR’19
A. Boopathy, L. Weng, P.-Y. Chen, S. Liu, L. Daniel, CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Networks, AAAI’19
C.-C. Tu*, P. Ting*, P.-Y. Chen*, S. Liu, H. Zhang, J. Yi, C.-J. Hsieh, S.-M. Cheng, AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks, AAAI’19
S. Liu, B. Kailkhura, P.-Y. Chen, P. Ting, S. Chang, L. Amini, Zeroth-Order Stochastic Variance Reduction for Nonconvex Optimization, NeurIPS’18
S. Liu, A. Ren, Y. Wang, P. K. Varshney, Ultra-Fast Robust Compressive Sensing Based on Memristor Crossbars, ICASSP’17 (Best Student Paper Award, Third Place)
Journal Papers
Y. Zhang, P. Khanduri, I. Tsaknakis, Y. Yao, M. Hong, S. Liu, An Introduction to Bilevel Optimization: Foundations and Applications in Signal Processing and Machine Learning, IEEE Signal Processing Magazine, vol. 41, no. 1, pp. 38-59, Jan. 2024
Y. Yao, X. Guo, V. Asnani, Y. Gong, J. Liu, X. Lin, X. Liu, S. Liu, Reverse Engineering of Deceptions on Machine- and Human-Centric Attacks, Foundations and Trends® in Privacy and Security: Vol. 6: No. 2, pp 53-152, 2024
S. Zhang, M. Wang, J. Xiong, S. Liu, P.-Y. Chen, Improved Linear Convergence of Training CNNs With Generalizability Guarantees: A One-Hidden-Layer Case, IEEE Transactions on Neural Networks and Learning Systems, 2020
S. Liu, P.-Y. Chen, B. Kailkhura, G. Zhang, A. O. Hero, P. K. Varshney, A Primer on Zeroth-Order Optimization in Signal Processing and Machine Learning, IEEE Signal Processing Magazine, 2020
F. Harirchi, D. Kim, O. Khalil, S. Liu, P. Elvati, M. Baranwal, A. Hero, A. Violi, On Sparse Identification of Complex Dynamical Systems: A Study on Discovering Influential Reactions in Chemical Reaction Networks, Fuel, Elsevier, 2020
S. Liu, H. Chen, S. Ronquist, L. Seaman, N. Ceglia, W. Meixner, L. A. Muir, P.-Y. Chen, G. Higgins, P. Baldi, S. Smale, A. O. Hero, I. Rajapakse, Genome Architecture Leads a Bifurcation in Cell Identity, iScience, Cell Press, 2018
S. Zhang, S. Liu, V. Sharma, P. K. Varshney, Optimal Sensor Collaboration for Parameter Tracking Using Energy Harvesting Sensors, IEEE Transactions on Signal Processing, 2018
S. Liu, Y. Wang, M. Fardad, and P. K. Varshney, Memristor-Based Optimization Framework for AI Applications, IEEE Circuits and Systems Magazine, 2018
S. Liu, P.-Y. Chen, and A. O. Hero, Distributed Dual Averaging over Evolving Networks of Growing Connectivity, IEEE Transactions on Signal Processing, 2018
P.-Y. Chen, S. Liu, Tradeoff of Graph Laplacian Smoothing Regularizer, IEEE Signal Processing Letters, 2017
H. Chen, L. Seaman, S. Liu, T. Ried, I. Rajapakse, Chromosome Conformation and Gene Expression Patterns Differ Profoundly in Human Fibroblasts Grown in Spheroids versus Monolayers, Nucleus, 2017
S. Liu, S. Kar, M. Fardad, P. K. Varshney, Optimized Sensor Collaboration for Estimation of Temporally Correlated Parameters, IEEE Transactions on Signal Processing, 2017
B. Kailkhura, S. Liu, T. Wimalajeewa, P. K. Varshney, Measurement Matrix Design for Compressive Detection with Secrecy Guarantees, IEEE Wireless Communications Letters, 2016
S. Liu, S. P. Chepuri, M. Fardad, E. Masazade, G. Leus, P. K. Varshney, Sensor Selection for Estimation with Correlated Measurement Noise, IEEE Transactions on Signal Processing, 2016
S. Liu, S. Kar, M. Fardad, and P. K. Varshney, Sparsity-Aware Sensor Collaboration for Linear Coherent Estimation, IEEE Transactions on Signal Processing, 2015
S. Liu, A. Vempaty, M. Fardad, E. Masazade, and P. K. Varshney, Energy-Aware Sensor Selection in Field Reconstruction, IEEE Signal Processing Letters, 2014
X. Shen, S. Liu, and P. K. Varshney, Sensor Selection for Nonlinear Systems in Large Sensor Networks, IEEE Transactions on Aerospace and Electronic Systems, 2014
S. Liu, M. Fardad, E. Masazade, and P. K. Varshney, Optimal Periodic Sensor Scheduling in Networks of Dynamical Systems, IEEE Transactions on Signal Processing, 2014