(Updated irregularly; see the full publication list on Google Scholar.)

Selected Conference Papers

* denotes equal contribution

  1. Y. Zhang, R. Cai, T. Chen, G. Zhang, H. Zhang, P.-Y. Chen, S. Chang, Z. Wang, S. Liu, Robust Mixture-of-Expert Training for Convolutional Neural Networks, ICCV’23

  2. P. Khanduri, I. Tsaknakis, Y. Zhang, J. Liu, S. Liu, J. Zhang, M. Hong, Linearly Constrained Bilevel Optimization: A Smoothed Implicit Gradient Approach, ICML’23

  3. M.N.R. Chowdhury, S. Zhang, M. Wang, S. Liu, P.-Y. Chen, Patch-level Routing in Mixture-of-Experts is Provably Sample-efficient for Convolutional Neural Networks, ICML’23

  4. A. Chen, Y. Yao, P.-Y. Chen, Y. Zhang, S. Liu, Understanding and Improving Visual Prompting: A Label-Mapping Perspective, CVPR’23

  5. Y. Zhang, X. Chen, J. Jia, S. Liu, K. Ding, Text-Visual Prompting for Efficient 2D Temporal Video Grounding, CVPR’23

  6. Y. Zhang, P. Sharma, P. Ram, M. Hong, K.R. Varshney, S. Liu, What Is Missing in IRM Training and Evaluation? Challenges and Solutions, ICLR’23

  7. S. Zhang, M. Wang, P.-Y. Chen, S. Liu, S. Lu, M. Liu, Joint Edge-Model Sparse Learning is Provably Efficient for Graph Neural Networks, ICLR’23

  8. H. Li, M. Wang, S. Liu, P.-Y. Chen, A Theoretical Understanding of Shallow Vision Transformers: Learning, Generalization, and Sample Complexity, ICLR’23

  9. B. Hou, J. Jia, Y. Zhang, G. Zhang, Y. Zhang, S. Liu, S. Chang, TextGrad: Advancing Robustness Evaluation in NLP by Gradient-Driven Optimization, ICLR’23

  10. Y. Zhang*, A.K. Kamath*, Q. Wu*, Z. Fan*, W. Chen, Z. Wang, S. Chang, S. Liu, C. Hao, Data-Model-Circuit Tri-Design for Ultra-Light Video Intelligence on Edge Devices, ASP-DAC’23

  11. J. Jia, S. Srikant, T. Mitrovska, C. Gan, S. Chang, S. Liu, U.-M. O'Reilly, CLAWSAT: Towards Both Robust and Accurate Code Models, SANER’23

  12. G. Zhang*, Y. Zhang*, Y. Zhang, W. Fan, Q. Li, S. Liu, S. Chang, Fairness Reprogramming, NeurIPS’22

  13. Y. Zhang*, Y. Yao*, P. Ram, P. Zhao, T. Chen, M. Hong, Y. Wang, S. Liu, Advancing Model Pruning via Bi-level Optimization, NeurIPS’22

  14. G. Zhang*, S. Lu*, Y. Zhang, X. Chen, P.-Y. Chen, Q. Fan, L. Martie, L. Horesh, M. Hong, S. Liu, Distributed Adversarial Training to Robustify Deep Neural Networks at Scale, UAI’22 (Oral, 5% acceptance rate)

  15. Y. Zhang*, G. Zhang*, P. Khanduri, M. Hong, S. Chang, S. Liu, Revisiting and Advancing Fast Adversarial Training Through the Lens of Bi-Level Optimization, ICML’22

  16. T. Chen, Z. Zhang, S. Liu, Y. Zhang, S. Chang, Z. Wang, Data-Efficient Double-Win Lottery Tickets from Robust Pre-training, ICML’22

  17. T. Chen*, H. Zhang*, Z. Zhang, S. Chang, S. Liu, P.-Y. Chen, Z. Wang, Linearity Grafting: How Neuron Pruning Helps Certifiable Robustness, ICML’22

  18. C.-Y. Ko, J. Mohapatra, S. Liu, P.-Y. Chen, L. Daniel, L. Weng, Revisiting Contrastive Learning through the Lens of Neighborhood Component Analysis: an Integrated Framework, ICML’22

  19. H. Li, M. Wang, S. Liu, P.-Y. Chen, J. Xiong, Generalization Guarantee of Training Graph Convolutional Networks with Graph Topology Sampling, ICML’22

  20. Y. Xie, D. Wang, P.-Y. Chen, J. Xiong, S. Liu, S. Koyejo, A Word is Worth A Thousand Dollars: Adversarial Attack on Tweets Fools Stock Prediction, NAACL’22

  21. P. Zhao, P. Ram, S. Lu, Y. Yao, D. Bouneffouf, X. Lin, S. Liu, Learning to Generate Image Source-Agnostic Universal Adversarial Perturbations, IJCAI’22

  22. T. Chen*, Z. Zhang*, Y. Zhang*, S. Chang, S. Liu, Z. Wang, Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free, CVPR’22

  23. V. Asnani, X. Yin, T. Hassner, S. Liu, X. Liu, Proactive Image Manipulation Detection, CVPR’22

  24. Y. Gong*, Y. Yao*, Y. Li, Y. Zhang, X. Liu, X. Lin, S. Liu, Reverse Engineering of Imperceptible Adversarial Image Perturbations, ICLR’22

  25. S. Zhang, M. Wang, S. Liu, P.-Y. Chen, J. Xiong, How Unlabeled Data Improve Generalization in Self-Training? A One-Hidden-Layer Theoretical Analysis, ICLR’22

  26. Y. Zhang, Y. Yao, J. Jia, J. Yi, M. Hong, S. Chang, S. Liu, How to Robustify Black-Box ML Models? A Zeroth-Order Optimization Perspective, ICLR’22 (spotlight, acceptance rate 5%)

  27. T. Huang, T. Chen, S. Liu, S. Chang, L. Amini, Z. Wang, Optimizer Amalgamation, ICLR’22

  28. P. Khanduri, H. Yang, M. Hong, J. Liu, H.T. Wai, S. Liu, Decentralized Learning for Overparameterized Problems: A Multi-Agent Kernel Approximation Approach, ICLR’22

  29. C. Fan, P. Ram, S. Liu, Sign-MAML: Efficient Model-Agnostic Meta-Learning by SignSGD, NeurIPS Workshop MetaLearn, 2021

  30. X. Ma, G. Yuan, X. Shen, T. Chen, X. Chen, X. Chen, N. Liu, M. Qin, S. Liu, Z. Wang, Y. Wang. Sanity Checks for Lottery Tickets: Does Your Winning Ticket Really Win the Jackpot?, NeurIPS’21

  31. L. Fan, S. Liu, P.-Y. Chen, G. Zhang, C. Gan. When does Contrastive Learning Preserve Adversarial Robustness from Pretraining to Finetuning?, NeurIPS’21

  32. G. Yuan, X. Ma, W. Niu, Z. Li, Z. Kong, N. Liu, Y. Gong, Z. Zhan, C. He, Q. Jin, S. Wang, M. Qin, B. Ren, Y. Wang, S. Liu, X. Lin. MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge, NeurIPS’21 (Spotlight)

  33. J. Wang, T. Zhang, S. Liu, P.-Y. Chen, J. Xu, M. Fardad, B. Li. Adversarial Attack Generation Empowered by Min-Max Optimization, NeurIPS’21

  34. S. Zhang, M. Wang, S. Liu, P.-Y. Chen, J. Xiong. Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Sparse Neural Networks, NeurIPS’21

  35. N. Liu, G. Yuan, Z. Che, X. Shen, X. Ma, Q. Jin, J. Ren, J. Tang, S. Liu, Y. Wang, Lottery Ticket Preserves Weight Correlation: Is It Desirable or Not?, ICML’21

  36. Z. Li, G. Yuan, W. Niu, Y. Li, P. Zhao, Y. Cai, X. Shen, Z. Zhan, Z. Kong, Q. Jin, Z. Chen, S. Liu, K. Yang, Y. Wang, B. Ren, and X. Lin. NPAS: A compiler-aware framework of unified network pruning and architecture search for beyond real-time mobile acceleration, CVPR’21 (Oral)

  37. T. Chen, J. Frankle, S. Chang, S. Liu, Y. Zhang, M. Carbin, and Z. Wang. The Lottery Tickets Hypothesis for Supervised and Self-supervised Pre-training in Computer Vision Models, CVPR’21

  38. J. Mohapatra, C.-Y. Ko, L. Weng, P.-Y. Chen, S. Liu, L. Daniel, Hidden Cost of Randomized Smoothing, AISTATS’21

  39. Z. Li, P.-Y. Chen, S. Liu, S. Lu, Y. Xu, Rate-Improved Inexact Augmented Lagrangian Method for Constrained Nonconvex Optimization, AISTATS’21

  40. R. Wang, K. Xu, S. Liu, P.-Y. Chen, T.-W. Weng, C. Gan, M. Wang, On Fast Adversarial Robustness Adaptation in Model-Agnostic Meta-Learning, ICLR’21

  41. T. Chen, Z. Zhang, S. Liu, S. Chang, and Z. Wang, Robust Overfitting May be Mitigated by Properly Learned Smoothening, ICLR’21

  42. T. Chen, Z. Zhang, S. Liu, S. Chang, and Z. Wang, Long Live the Lottery: The Existence of Winning Tickets in Lifelong Learning, ICLR’21

  43. S. Srikant, S. Liu, T. Mitrovska, S. Chang, Q. Fan, G. Zhang, U.-M. O'Reilly, Generating Adversarial Computer Programs using Optimized Obfuscations, ICLR’21

  44. A. Boopathy, L. Weng, S. Liu, P.-Y. Chen, G. Zhang, L. Daniel, Fast Training of Provably Robust Neural Networks by SingleProp, AAAI’21

  45. M. Cheng, P.-Y. Chen, S. Liu, S. Chang, C.-J. Hsieh, P. Das, Self-Progressing Robust Training, AAAI’21

  46. W. Niu, M. Sun, Z. Li, J.-A. Chen, J. Guan, X. Shen, Y. Wang, S. Liu, X. Lin, B. Ren, RT3D: Achieving Real-Time Execution of 3D Convolutional Neural Networks on Mobile Devices, AAAI’21

  47. T. Chen, J. Frankle, S. Chang, S. Liu, Y. Zhang, Z. Wang, M. Carbin, The Lottery Ticket Hypothesis for the Pre-trained BERT Networks, NeurIPS’20 (MIT News)

  48. T. Chen, W. Zhang, J. Zhou, S. Chang, S. Liu, L. Amini, Z. Wang, Training Stronger Baselines for Learning to Optimize, NeurIPS’20 (spotlight, acceptance rate 3%)

  49. J. Mohapatra, C.-Y. Ko, L. Weng, P.-Y. Chen, S. Liu, L. Daniel, Higher-Order Certification For Randomized Smoothing, NeurIPS’20 (spotlight, acceptance rate 3%)

  50. K. Xu, G. Zhang, S. Liu, Q. Fan, M. Sun, H. Chen, P.-Y. Chen, Y. Wang, X. Lin, Adversarial T-shirt! Evading Person Detectors in A Physical World, ECCV’20 (spotlight, acceptance rate 5%)

  51. R. Wang, G. Zhang, S. Liu, P.-Y. Chen, J. Xiong, M. Wang, Practical Detection of Trojan Neural Networks: Data-Limited and Data-Free Cases, ECCV’20

  52. X. Ma, W. Niu, T. Zhang, S. Liu, S. Lin, H. Li, W. Wen, X. Chen, J. Tang, K. Ma, B. Ren, Y. Wang, An Image Enhancing Pattern-based Sparsity for Real-time Inference on Mobile Devices, ECCV’20

  53. S. Zhang, M. Wang, S. Liu, P.-Y. Chen, J. Xiong, Fast Learning of Graph Neural Networks with Guaranteed Generalizability: One-hidden-layer Case, ICML’20

  54. S. Dutta, D. Wei, H. Yueksel, P.-Y. Chen, S. Liu, K. R. Varshney, Is There a Trade-Off Between Fairness and Accuracy? A Perspective Using Mismatched Hypothesis Testing, ICML’20

  55. A. Boopathy, S. Liu, G. Zhang, C. Liu, P.-Y. Chen, S. Chang, L. Daniel, Proper Network Interpretability Helps Adversarial Robustness in Classification, ICML’20

  56. S. Liu*, S. Lu*, X. Chen*, Y. Feng*, K. Xu*, A. Al-Dujaili*, M. Hong, U.-M. O'Reilly, Min-Max Optimization without Gradients: Convergence and Applications to Adversarial ML, ICML’20

  57. T. Chen, S. Liu, S. Chang, Y. Cheng, L. Amini, Z. Wang, Adversarial Robustness: From Self-Supervised Pretraining to Fine-Tuning, CVPR’20

  58. J. Mohapatra, L. Weng, P.-Y. Chen, S. Liu, L. Daniel, Towards Verifying Robustness of Neural Networks against Semantic Perturbations, CVPR’20 (Oral)

  59. M. Cheng, S. Singh, P.-Y. Chen, S. Liu, C.-J. Hsieh, Sign-OPT: A Query-Efficient Hard-label Adversarial Attack, ICLR’20

  60. S. Liu*, P. Ram*, D. Vijaykeerthy, D. Bouneffouf, G. Bramble, H. Samulowitz, D. Wang, A. Conn, A. Gray, An ADMM Based Framework for AutoML Pipeline Configuration, AAAI’20

  61. L. Weng*, P. Zhao*, S. Liu, P.-Y. Chen, X. Lin, L. Daniel, Towards Certificated Model Robustness Against Weight Perturbations, AAAI’20

  62. X. Chen*, S. Liu*, K. Xu*, X. Li*, X. Lin, M. Hong, D. Cox, ZO-AdaMM: Zeroth-Order Adaptive Momentum Method for Black-Box Optimization, NeurIPS’19

  63. T. Zhang, S. Liu, Y. Wang, M. Fardad, Generation of Low Distortion Adversarial Attacks via Convex Programming, ICDM’19

  64. P. Zhao, S. Liu, P.-Y. Chen, N. Hoang, K. Xu, B. Kailkhura, X. Lin, On the Design of Black-box Adversarial Examples by Leveraging Gradient-free Optimization and Operator Splitting Method, ICCV’19

  65. S. Ye*, K. Xu*, S. Liu, H. Cheng, J.-H. Lambrechts, H. Zhang, A. Zhou, K. Ma, Y. Wang, X. Lin, Adversarial Robustness vs. Model Compression, or Both?, ICCV’19

  66. K. Xu*, H. Chen*, S. Liu, P.-Y. Chen, T.-W. Weng, M. Hong, X. Lin, Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective, IJCAI’19

  67. P.-Y. Chen, L. Wu, S. Liu, I. Rajapakse, Fast Incremental von Neumann Graph Entropy Computation: Theory, Algorithm, and Applications, ICML’19

  68. S. Liu, P.-Y. Chen, X. Chen, M. Hong, signSGD via Zeroth-Order Oracle, ICLR’19

  69. K. Xu*, S. Liu*, P. Zhao, P.-Y. Chen, H. Zhang, D. Erdogmus, Y. Wang, X. Lin, Structured Adversarial Attack: Towards General Implementation and Better Interpretability, ICLR’19

  70. X. Chen, S. Liu, R. Sun, M. Hong, On the Convergence of A Class of Adam-Type Algorithms for Non-Convex Optimization, ICLR’19

  71. A. Boopathy, L. Weng, P.-Y. Chen, S. Liu, L. Daniel, CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Networks, AAAI’19

  72. C.-C. Tu*, P. Ting*, P.-Y. Chen*, S. Liu, H. Zhang, J. Yi, C.-J. Hsieh, S.-M. Cheng, AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks, AAAI’19

  73. S. Liu, B. Kailkhura, P.-Y. Chen, P. Ting, S. Chang, L. Amini, Zeroth-Order Stochastic Variance Reduction for Nonconvex Optimization, NeurIPS’18

  74. S. Liu, A. Ren, Y. Wang, P. K. Varshney, Ultra-Fast Robust Compressive Sensing Based on Memristor Crossbars, ICASSP’17 (Best Student Paper Award, Third Place)

Journal Papers

  1. S. Zhang, M. Wang, J. Xiong, S. Liu, P.-Y. Chen, Improved Linear Convergence of Training CNNs With Generalizability Guarantees: A One-Hidden-Layer Case, IEEE Transactions on Neural Networks and Learning Systems, 2020

  2. S. Liu, P.-Y. Chen, B. Kailkhura, G. Zhang, A. O. Hero, P. K. Varshney, A Primer on Zeroth-Order Optimization in Signal Processing and Machine Learning, IEEE Signal Processing Magazine, 2020

  3. F. Harirchi, D. Kim, O. Khalil, S. Liu, P. Elvati, M. Baranwal, A. Hero, A. Violi, On Sparse Identification of Complex Dynamical Systems: A Study on Discovering Influential Reactions in Chemical Reaction Networks, Fuel, Elsevier, 2020

  4. S. Liu, H. Chen, S. Ronquist, L. Seaman, N. Ceglia, W. Meixner, L. A. Muir, P.-Y. Chen, G. Higgins, P. Baldi, S. Smale, A. O. Hero, I. Rajapakse, Genome Architecture Leads a Bifurcation in Cell Identity, iScience, Cell, 2018

  5. S. Zhang, S. Liu, V. Sharma, P. K. Varshney, Optimal Sensor Collaboration for Parameter Tracking Using Energy Harvesting Sensors, IEEE Transactions on Signal Processing, 2018

  6. S. Liu, Y. Wang, M. Fardad, and P. K. Varshney, Memristor-Based Optimization Framework for AI Applications, IEEE Circuits and Systems Magazine, 2018

  7. S. Liu, P.-Y. Chen, and A. O. Hero, Distributed Dual Averaging over Evolving Networks of Growing Connectivity, IEEE Transactions on Signal Processing, 2018

  8. P.-Y. Chen, S. Liu, Tradeoff of Graph Laplacian Smoothing Regularizer, IEEE Signal Process. Lett., 2017

  9. H. Chen, L. Seaman, S. Liu, T. Ried, I. Rajapakse, Chromosome Conformation and Gene Expression Patterns Differ Profoundly in Human Fibroblasts Grown in Spheroids versus Monolayers, Nucleus, 2017

  10. S. Liu, S. Kar, M. Fardad, P. K. Varshney, Optimized Sensor Collaboration for Estimation of Temporally Correlated Parameters, IEEE Transactions on Signal Processing, 2017

  11. B. Kailkhura, S. Liu, T. Wimalajeewa, P. K. Varshney, Measurement Matrix Design for Compressive Detection with Secrecy Guarantees, IEEE Wireless Communications Letters, 2016

  12. S. Liu, S. P. Chepuri, M. Fardad, E. Masazade, G. Leus, P. K. Varshney, Sensor Selection for Estimation with Correlated Measurement Noise, IEEE Transactions on Signal Processing, 2016

  13. S. Liu, S. Kar, M. Fardad, and P. K. Varshney, Sparsity-Aware Sensor Collaboration for Linear Coherent Estimation, IEEE Transactions on Signal Processing, 2015

  14. S. Liu, A. Vempaty, M. Fardad, E. Masazade, and P. K. Varshney, Energy-Aware Sensor Selection in Field Reconstruction, IEEE Signal Processing Letters, 2014

  15. X. Shen, S. Liu, and P. K. Varshney, Sensor Selection for Nonlinear Systems in Large Sensor Networks, IEEE Transactions on Aerospace and Electronic Systems, 2014

  16. S. Liu, M. Fardad, E. Masazade, and P. K. Varshney, Optimal Periodic Sensor Scheduling in Networks of Dynamical Systems, IEEE Transactions on Signal Processing, 2014