Songtao Lu
Senior Research Scientist
Mathematics and Theoretical Computer Science Department
Thomas J. Watson Research Center & MIT-IBM Watson AI Lab
IBM Research, Yorktown Heights, New York 10598, USA
E-mail: songtaoibm@gmail.com
Google Scholar
About me
I am a senior research scientist at the Thomas J. Watson Research Center, an IBM principal investigator at the MIT-IBM Watson AI Lab, and an IBM PI of the RPI-IBM AI Research Collaboration program. I also serve as senior personnel at the NSF AI Institute for Future Edge Networks and Distributed Intelligence (AI-EDGE).
Recent news
05/01/2024 Five papers were accepted by ICML 2024.
04/03/2024 Our work on "Toward Byzantine-robust decentralized federated learning" is accepted by ACM CCS 2024.
03/27/2024 I will serve as an Area Chair for NeurIPS 2024.
01/16/2024 Our work on min-max optimization for policy evaluation with nonlinear function approximation was selected as a spotlight presentation at ICLR 2024.
12/13/2023 I am thrilled to have received an IBM Research Accomplishment Award for my contributions to advancing optimization techniques for next-generation distributed intelligence.
12/13/2023 Four papers were accepted by ICASSP 2024.
11/02/2023 It is my great pleasure to have received an IBM Plateau Invention Achievement Award.
08/21/2023 I have been elevated to IEEE Senior Member.
07/03/2023 I am truly honored to receive an IBM Entrepreneur Award.
12/02/2022 Our work on conditional moment alignment for improved generalization in federated learning received the FL-NeurIPS Outstanding Paper Award!
07/29/2022 Our work on distributed adversarial training to robustify deep neural networks at scale received the UAI 2022 Best Paper Runner-Up Award!
Recent Representative Works
SLM: A Smoothed First-Order Lagrangian Method for Structured Constrained Nonconvex Optimization
Songtao Lu
NeurIPS, 2023.
Bilevel Optimization with Coupled Decision-Dependent Distributions
Songtao Lu
ICML, 2023.
A Stochastic Linearized Augmented Lagrangian Method for Decentralized Bilevel Optimization
Songtao Lu, Siliang Zeng, Xiaodong Cui, Mark S. Squillante, Lior Horesh, Brian Kingsbury, Jia Liu, Mingyi Hong
NeurIPS, 2022.
A Single-Loop Gradient Descent and Perturbed Ascent Algorithm for Nonconvex Functional Constrained Optimization
Songtao Lu
ICML, 2022.
Decentralized Policy Gradient Descent Ascent for Safe Multi-Agent Reinforcement Learning
Songtao Lu, Kaiqing Zhang, Tianyi Chen, Tamer Basar, and Lior Horesh
AAAI, 2021.
Linearized ADMM Converges to Second-Order Stationary Points for Non-Convex Problems
Songtao Lu, Jason Lee, Meisam Razaviyayn, Mingyi Hong
IEEE Transactions on Signal Processing, 2021.
Finding Second-Order Stationary Points Efficiently in Smooth Nonconvex Linearly Constrained Optimization Problems
Songtao Lu, Meisam Razaviyayn, Bo Yang, Kejun Huang, Mingyi Hong
NeurIPS, 2020.
Hybrid Block Successive Approximation for One-Sided Non-Convex Min-Max Problems: Algorithms and Applications
Songtao Lu, Ioannis Tsaknakis, Mingyi Hong, Yongxin Chen
IEEE Transactions on Signal Processing, 2020.
PA-GD: On the Convergence of Perturbed Alternating Gradient Descent to Second-Order Stationary Points for Structured Nonconvex Optimization
Songtao Lu, Mingyi Hong, Zhengdao Wang
ICML, 2019.