Sijia Liu - CSE@MSU


Assistant Professor, Department of Computer Science and Engineering,
Michigan State University, East Lansing, MI 48824
Affiliated Professor, MIT-IBM Watson AI Lab, Cambridge, MA 02142
Twitter: @sijialiu17
Google Scholar

Prospective Students

I am always looking for highly motivated students for RA/TA/externship/internship/visiting positions. Interested candidates are strongly encouraged to contact me by email, together with a resume and transcripts.

Short Bio

Sijia Liu received the Ph.D. degree (with All-University Doctoral Prize) in Electrical and Computer Engineering from Syracuse University, NY, USA, in 2016. He was a Postdoctoral Research Fellow at the University of Michigan, Ann Arbor, from 2016 to 2017, and a Research Staff Member at the MIT-IBM Watson AI Lab from 2018 to 2020. His research interests include scalable and trustworthy AI, e.g., adversarial deep learning, optimization theory and methods, computer vision, and computational biology. He received the Best Student Paper Award at the 42nd IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). His work has been published in top-tier ML/CV conferences such as NeurIPS, ICML, ICLR, CVPR, ICCV, ECCV, AISTATS, AAAI, and IJCAI.

Research Interests

My research spans machine learning (ML)/deep learning (DL), optimization, computer vision, security, signal processing, and data science, with a focus on developing learning algorithms and theory for robust and explainable artificial intelligence (AI). These research themes provide a solid foundation for my long-term research objective: making AI systems safe and scalable. As AI moves from the lab into the real world (e.g., autonomous vehicles), ensuring its safety becomes a paramount requirement prior to deployment. Moreover, as datasets, ML/DL models, and learning tasks become increasingly complex, getting ML/DL to scale calls for new advances in learning algorithm design. Robustness and scalability therefore underscore my current and future research, and the two goals are intertwined. More broadly, the study of robust and scalable AI can make a significant impact on machine learning theory and enable promising applications in, e.g., automated ML, meta-learning, privacy and security, hardware design, and big data analysis. I aim to push the frontier of learning where current algorithms become infeasible, and to formalize the foundations of secure learning.

Please refer to Projects for some research highlights.

News

* Grateful to be selected as a presenter in the AAAI 2023 New Faculty Highlights program.

* Tutorial on Foundational Robustness of Foundation Models to be given at NeurIPS’22.

* Tutorial on Bi-level Machine Learning to be given at AAAI’23.

* Two papers in NeurIPS’22.

* Grateful to receive a Robust Intelligence (RI) Core Small Grant Award from NSF as the PI.

* Grateful to receive the Best Paper Runner-Up Award at UAI’22 in recognition of our work “Distributed Adversarial Training to Robustify Deep Neural Networks at Scale”.

* One paper in UAI’22 (Oral presentation).

* Five papers in ICML’22: Bi-level adversarial training; Winning lottery tickets from robust pretraining; Pruning helps certified robustness; Contrastive learning theory; Generalization theory of GCN.

* One paper in NAACL’22.

* One paper in IJCAI’22.

* CFP: 1st Workshop on New Frontiers in Adversarial Machine Learning at ICML’22 (AdvML-Frontiers@ICML’22).

* Grateful to receive gift funding from Cisco Research as the PI.

* Congratulations to Yihua Zhang for his first CVPR paper.

* Two papers in CVPR 2022.

* Congratulations to Yimeng Zhang, Yuguang Yao, and Jinghan Jia for their first ICLR papers.

* Five papers in ICLR 2022: Reverse Engineering of Adversaries, Black-Box Defense (spotlight), Learning to Optimize, Self-Training Theory, Distributed Learning.

* Our work on interpreting and advancing adversarial training via bi-level optimization is now available on arXiv; contributed equally by Yihua Zhang (MSU) and Guanhua Zhang (UCSB).

* Grateful to receive a DARPA IP2 AIE Grant as a Co-PI.

* Five papers in NeurIPS 2021.

* Our MSU-NEU team (with PI Xiaoming Liu and co-PI Xue Lin) entered Phase 2 of the DARPA AIE RED program.

* One paper in ICML 2021.

* MIT News covered our ICLR’21 work ‘Adversarial Programs’ in ‘Toward deep-learning models that can reason about code more like humans’ [paper, code].

* Two papers in CVPR 2021.

* Two papers in AISTATS 2021.

* Four papers in ICLR 2021.

* Three papers in AAAI 2021.

* Grateful to receive a DARPA RED AIE Grant as a Co-PI.