Sijia Liu - CSE@MSU

Assistant Professor, Department of Computer Science and Engineering,
Michigan State University, East Lansing, MI 48824
Affiliated Professor, MIT-IBM Watson AI Lab, Cambridge, MA 02142
Email: liusiji5@msu.edu
Twitter: @sijialiu17
Google Scholar

Prospective Students

I am always looking for highly motivated students (RA, TA, externship, internship, or visiting students). Interested candidates are strongly encouraged to contact me by email with a resume and transcripts.

Short Bio

Sijia Liu received the Ph.D. degree (with the All-University Doctoral Prize) in Electrical and Computer Engineering from Syracuse University, NY, USA, in 2016. He was a Postdoctoral Research Fellow at the University of Michigan, Ann Arbor, in 2016-2017, and a Research Staff Member at the MIT-IBM Watson AI Lab in 2018-2020. His research interests include scalable and trustworthy AI, e.g., adversarial deep learning, optimization theory and methods, computer vision, and computational biology. He received the Best Student Paper Award at the 42nd IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP’17) and the Best Paper Runner-Up Award at the 38th Conference on Uncertainty in Artificial Intelligence (UAI’22). He has published over 50 papers at top-tier ML/CV conferences such as NeurIPS, ICML, ICLR, CVPR, ICCV, ECCV, AISTATS, and AAAI (please refer to CSRankings).

He is currently a Senior Member of the IEEE, a member of the Machine Learning for Signal Processing (MLSP) Technical Committee of the IEEE Signal Processing Society, and affiliated faculty at the MIT-IBM Watson AI Lab, IBM Research. He has organized a series of adversarial ML workshops at ICML’22 and KDD’19-’22, and has given tutorials on trustworthy and scalable ML at AAAI’23, NeurIPS’22, and CVPR’20.

Research Interests

My research spans machine learning (ML)/deep learning (DL), optimization, computer vision, security, signal processing, and data science, with a focus on developing learning algorithms and theory for robust and explainable artificial intelligence (AI). These research themes provide a solid foundation for my long-term research objective: making AI systems safe and scalable. As AI moves from the lab into the real world (e.g., autonomous vehicles), ensuring its safety becomes a paramount requirement prior to deployment. Moreover, as datasets, ML/DL models, and learning tasks become increasingly complex, getting ML/DL to scale calls for new advances in learning algorithm design. Thus, robustness and scalability underscore my current and future research, and these two goals are intertwined. More broadly, the study of robust and scalable AI could make a significant impact on machine learning theory and enable promising applications in, e.g., automated ML, meta-learning, privacy and security, hardware design, and big data analysis. I aim to seek new learning frontiers where current algorithms become infeasible, and to formalize the foundations of secure learning.

Please refer to Projects and our OPTML group for some research highlights.

Representative Publications

  • Trustworthy AI: Robustness, fairness, and model explanation

  1. Robust Mixture-of-Expert Training for Convolutional Neural Networks
    Y. Zhang, R. Cai, T. Chen, G. Zhang, H. Zhang, P.-Y. Chen, S. Chang, Z. Wang, S. Liu
    ICCV’23

  2. Model Sparsity Can Simplify Machine Unlearning
    J. Jia*, J. Liu*, P. Ram, Y. Yao, G. Liu, Y. Liu, P. Sharma, S. Liu (* Equal contribution)
    NeurIPS’23

  3. Understanding and Improving Visual Prompting: A Label-Mapping Perspective
    A. Chen, Y. Yao, P.-Y. Chen, Y. Zhang, S. Liu
    CVPR’23

  4. Revisiting and Advancing Fast Adversarial Training Through the Lens of Bi-level Optimization
    Y. Zhang*, G. Zhang*, P. Khanduri, M. Hong, S. Chang, S. Liu (* Equal contribution)
    ICML’22

  5. How to Robustify Black-Box ML Models? A Zeroth-Order Optimization Perspective
    Y. Zhang, Y. Yao, J. Jia, J. Yi, M. Hong, S. Chang, S. Liu
    ICLR’22

  • Scalable AI: Model compression, distributed learning, black-box optimization, and automated ML

  1. Selectivity Drives Productivity: Efficient Dataset Pruning for Enhanced Transfer Learning
    Y. Zhang*, Y. Zhang*, A. Chen*, J. Jia, J. Liu, G. Liu, M. Hong, S. Chang, S. Liu (* Equal contribution)
    NeurIPS’23

  2. Distributed Adversarial Training to Robustify Deep Neural Networks at Scale
    G. Zhang*, S. Lu*, Y. Zhang, X. Chen, P.-Y. Chen, Q. Fan, L. Martie, L. Horesh, M. Hong, S. Liu (* Equal contribution)
    UAI’22 (Best Paper Runner-Up Award)

  3. Advancing Model Pruning via Bi-level Optimization
    Y. Zhang*, Y. Yao*, P. Ram, P. Zhao, T. Chen, M. Hong, Y. Wang, S. Liu (* Equal contribution)
    NeurIPS’22

  4. Min-Max Optimization without Gradients: Convergence and Applications to Adversarial ML
    S. Liu, S. Lu, X. Chen, Y. Feng, K. Xu, A. Al-Dujaili, M. Hong, U.-M. O'Reilly
    ICML’20

  5. A Primer on Zeroth-Order Optimization in Signal Processing and Machine Learning
    S. Liu, P.-Y. Chen, B. Kailkhura, G. Zhang, A. O. Hero, P. K. Varshney
    IEEE Signal Processing Magazine, 2020
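Several of the publications above build on zeroth-order (ZO) optimization, which replaces gradients with function-query-based estimates so that black-box models can still be optimized or attacked. As a flavor of this technique (an illustrative sketch of the standard two-point Gaussian-smoothing gradient estimator, not code from any of the papers; the function names, step size, and query budget are my own choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def zo_gradient(f, x, mu=1e-3, n_samples=20):
    # Two-point Gaussian-smoothing estimator of grad f(x):
    #   g ≈ (1/n) * sum_i [(f(x + mu*u_i) - f(x)) / mu] * u_i,  u_i ~ N(0, I)
    # Uses only function evaluations -- no gradient access required.
    d = x.size
    fx = f(x)
    g = np.zeros(d)
    for _ in range(n_samples):
        u = rng.standard_normal(d)
        g += (f(x + mu * u) - fx) / mu * u
    return g / n_samples

# Query-only minimization of a toy quadratic with minimizer at x = 1
f = lambda x: float(np.sum((x - 1.0) ** 2))
x = np.zeros(5)
for _ in range(300):
    x -= 0.05 * zo_gradient(f, x)
```

With only black-box evaluations of f, the iterates approach the minimizer; this kind of query-based gradient estimate is the basic building block behind the ZO methods surveyed in the primer above.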

News

* [Feature Article@IEEE SPM] We are thrilled to share that our tutorial article titled “An Introduction to Bilevel Optimization: Foundations and Applications in Signal Processing and Machine Learning” has been published in the IEEE Signal Processing Magazine as a Feature Article.

* [New Preprints] We are pleased to announce the release of the following papers on arXiv:
[1] Rethinking Machine Unlearning for Large Language Models;
[2] Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark;
[3] UnlearnCanvas: A Stylized Image Dataset to Benchmark Machine Unlearning for Diffusion Models;
[4] Challenging Forgets: Unveiling the Worst-Case Forget Sets in Machine Unlearning.

* We are thrilled to share that our research paper titled “Reverse Engineering Deceptions in Machine- and Human-Centric Attacks” has been officially published in Foundations and Trends® in Privacy and Security.

* [Launch of the MSU-UM-ARO Project Website] The “Lifelong Multimodal Fusion by Cross Layer Distributed Optimization” project receives funding from the Army Research Office (ARO).

* Tutorial “Machine Unlearning in Computer Vision: Foundations and Applications” is accepted for presentation by CVPR 2024. See you in Seattle!

* Four papers in ICLR’24: (1) Machine unlearning for safe image generation; (2) DeepZero: Training neural networks from scratch using only forward passes; (3) Backdoor data sifting; (4) Visual prompting automation

* [New Preprints] We are pleased to announce the release of the following papers on arXiv:
[1] To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Unsafe Images … For Now;
[2] From Trojan Horses to Castle Walls: Unveiling Bilateral Backdoor Effects in Diffusion Models.

* Tutorial on “Zeroth-Order Machine Learning: Fundamental Principles and Emerging Applications in Foundation Models” is accepted by ICASSP’24 and AAAI’24.

* NeurIPS 2023: three papers accepted (one spotlight and two posters). Congratulations to Jinghan, Jiancheng, and Yuguang on the spotlight acceptance of “Model Sparsity Can Simplify Machine Unlearning,” and kudos to Yihua, Yimeng, Aochuan, Jinghan, and Jiancheng on the poster acceptance of “Selectivity Drives Productivity: Efficient Dataset Pruning for Enhanced Transfer Learning.”

* Grateful to receive a grant from the Army Research Office (ARO) as the PI.

* Our paper on Adversarial Training for MoE has been chosen for an Oral Presentation at ICCV’23.

* Grateful to receive gift funding from Cisco Research as the PI.

* Call for participation in 2nd AdvML-Frontiers Workshop@ICML’23.

* One paper in ICCV’23 on Adversarial Robustness of Mixture-of-Experts.

* Grateful to receive a CPS Medium Grant Award from NSF as a co-PI.

* Slides for our CVPR’23 tutorial on Reverse Engineering of Deceptions (RED) are now available at the tutorial page. [link]

* Our paper “Visual Prompting for Adversarial Robustness” received the Top 3% Paper Recognition at ICASSP 2023. Congrats to Aochuan, Peter (internship at OPTML in 2022), Yuguang, and Pin-Yu (IBM Research)!

* Grateful to be appointed as an Associate Editor of the IEEE Transactions on Aerospace and Electronic Systems.

* Two papers in ICML’23 and CFP for 2nd AdvML-Frontiers Workshop.

* A new arXiv paper is released: Model Sparsity Can Simplify Machine Unlearning! [Paper] [Code]

* Grateful to receive a grant from Lawrence Livermore National Laboratory.

* Call for Papers and AdvML Rising Star Award Applications in the workshop AdvML-Frontiers, ICML’23

* A new arXiv paper is released: Adversarial attacks can be parsed to reveal victim model information! [Paper]

* The 2nd Workshop on New Frontiers in Adversarial Machine Learning has been accepted by ICML’23.

* Grateful to receive a grant from DSO National Laboratories.

* Two papers in CVPR’23.

* Three papers in ICASSP’23.

* CVPR’23 tutorial on Reverse Engineering of Deception: Foundations and Applications is accepted and will be given with Xiaoming Liu (MSU) and Xue Lin (Northeastern).

* AAAI’23 tutorial on Bi-level Optimization in ML: Foundations and Applications is now available at link.

* Four papers in ICLR’23: Issues and Fixes in IRM, TextGrad: Differentiable Solution to NLP Attack Generation, Provable Benefits of Sparse GNN, Sample Complexity Analysis of ViT.

* One paper in ASP-DAC’23.

* One paper in SANER 2023: Towards Both Robust and Accurate Code Models; Equally contributed by Jinghan Jia (MSU) and Shashank Srikant (MIT).

* Grateful to be selected as a presenter of the AAAI 2023 New Faculty Highlight Program.

* Tutorial on Foundational Robustness of Foundation Models will be given in NeurIPS’22.

* Tutorial on Bi-level Machine Learning will be given in AAAI’23.

* Two papers in NeurIPS’22.

* Grateful to receive a Robust Intelligence (RI) Core Small Grant Award from NSF as the PI.

* Grateful to receive the Best Paper Runner-Up Award at UAI’22 in recognition of our work “Distributed Adversarial Training to Robustify Deep Neural Networks at Scale”.

* One paper in UAI’22 (Oral presentation).

* Five papers in ICML’22: Bi-level adversarial training; Winning lottery tickets from robust pretraining; Pruning helps certified robustness; Contrastive learning theory; Generalization theory of GCN.

* One paper in NAACL’22.

* One paper in IJCAI’22.

* CFP: 1st Workshop on New Frontiers in Adversarial Machine Learning at ICML’22 (AdvML-Frontiers@ICML’22).

* Grateful to receive gift funding from Cisco Research as the PI.

* Congratulations to Yihua Zhang for his first CVPR paper.

* Two papers in CVPR 2022.

* Congratulations to Yimeng Zhang, Yuguang Yao, and Jinghan Jia for their first ICLR papers.

* Five papers in ICLR 2022: Reverse Engineering of Adversaries, Black-Box Defense (spotlight), Learning to Optimize, Self-Training Theory, Distributed Learning.

* Our work on interpreting and advancing adversarial training via bi-level optimization is now available on arXiv; equally contributed by Yihua Zhang (MSU) and Guanhua Zhang (UCSB).

* Grateful to receive a DARPA IP2 AIE Grant as a Co-PI.

* Five papers in NeurIPS 2021.

* Our MSU-NEU team (with PI Xiaoming Liu and co-PI Xue Lin) entered the Phase 2 of DARPA AIE RED.

* One paper in ICML 2021.

* MIT news ‘Toward deep-learning models that can reason about code more like humans’ on our ICLR’21 work ‘Adversarial Programs’ [paper, code].

* Two papers in CVPR 2021.

* Two papers in AISTATS 2021.

* Four papers in ICLR 2021.

* Three papers in AAAI 2021.

* Grateful to receive a DARPA RED AIE Grant as a Co-PI.