News
The following are some recent news items about me, both important and miscellaneous.
25.01 New paper on federated learning for multiplayer games
Our new paper Multiplayer Federated Learning: Reaching Equilibrium with Less Communication
has been announced.
We propose Multiplayer Federated Learning (MpFL), a new federated learning framework that models the clients as rational players in a multiplayer game, aiming to reach an equilibrium.
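For context, here is a generic sketch of the equilibrium notion involved (not necessarily the paper's exact formulation): each client $i$ controls only its own action $x_i$ and has its own cost $f_i$, and a point $x^\star = (x_1^\star, \dots, x_n^\star)$ is an equilibrium if
$$x_i^\star \in \operatorname*{argmin}_{x_i} f_i(x_i, x_{-i}^\star) \quad \text{for all } i = 1, \dots, n,$$
where $x_{-i}^\star$ denotes the actions of all other players; MpFL aims to reach such a point with less communication.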
25.01 Paper published in SIAM Journal on Optimization
Our paper Accelerated Minimax Algorithms Flock Together
has finally been published!
25.01 Postdoc appointment at JHU
I have started my postdoc at Johns Hopkins. I am looking forward to the new opportunities ahead in this enriching environment. Good morning Baltimore!
24.07 ICML 2024 spotlight paper
Excited to announce that our paper Optimal Acceleration for Minimax and Fixed-Point Problems is Not Unique
has been accepted for publication at ICML 2024 as a spotlight paper (top 335/9473=3.5% of papers)!
I will visit Vienna, Austria to present our paper.
24.06 Talk at EUROPT 2024
I will give a talk at EUROPT 2024 held in Lund, Sweden.
24.05 Paper accepted to SIAM Journal on Optimization
Excited to announce that our paper Accelerated Minimax Algorithms Flock Together
has been accepted to SIAM Journal on Optimization!
24.05 New paper on minimax optimization and fixed-point problems
Our new paper Optimal Acceleration for Minimax and Fixed-Point Problems is Not Unique
has been announced.
We discover novel algorithms for fixed-point problems (equivalent to proximal algorithms for monotone inclusion) achieving the exact optimal complexity. We establish a new duality theory between fixed-point algorithms (the H-duality) and exhibit an analogous phenomenon in minimax optimization, together with the continuous-time analysis bridging between the two setups.
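As a reminder of the equivalence mentioned above (standard facts, not specific to this paper): for a maximal monotone operator $A$ and step size $\alpha > 0$, the zeros of $A$ are exactly the fixed points of the resolvent $J_{\alpha A} = (I + \alpha A)^{-1}$,
$$0 \in A(x^\star) \iff x^\star = J_{\alpha A}(x^\star),$$
so fixed-point algorithms applied to resolvents and proximal-type algorithms for monotone inclusions are two views of the same problem.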
23.12 NeurIPS 2023
I am attending NeurIPS to present our paper Censored Sampling of Diffusion Models Using 3 Minutes of Human Feedback.
23.08 Talk at WYMK 2023
I will give an invited talk at the Workshop for Young Mathematicians in Korea (WYMK 2023) on sampling algorithms and diffusion models.
23.07 New paper on diffusion models with KRAFTON
Our new paper Censored Sampling of Diffusion Models Using 3 Minutes of Human Feedback
has been announced.
This is a joint work with KRAFTON AI Research Center. The paper proposes a methodology to align diffusion models’ sample generation with specified practical requirements, and, notably, it is my most empirical paper to date.
23.06 New ICML Workshop paper
Our paper Diffusion Probabilistic Models Generalize when They Fail to Memorize
has been accepted to ICML 2023 Workshop on Structured Probabilistic Inference & Generative Modeling.
23.06 Talk at SIAM OP23
I will give a talk at the 2023 SIAM Conference on Optimization on my paper Accelerated Minimax Algorithms Flock Together.
22.10 Talk at INFORMS
I will give a talk at the 2022 INFORMS Annual Meeting on my papers on minimax optimization.
22.07 Youlchon Fellowship
I have been selected as a recipient of the Youlchon AI Star Fellowship. Many thanks!
22.05 New paper on minimax optimization
Our new paper Accelerated Minimax Algorithms Flock Together
has been announced.
It establishes a new connection, the merging path property, among order-optimal algorithms for minimax optimization and order-optimal algorithms for fixed-point problems, all of which are based on the mechanism of anchoring ($\approx$ Halpern iteration), sketched below.
The paper is currently under revision at SIAM Journal on Optimization.
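To illustrate the anchoring mechanism mentioned above (a textbook form of the Halpern iteration, not the exact algorithms analyzed in the paper): for a nonexpansive operator $T$ and anchor $x_0$,
$$x_{k+1} = \beta_k x_0 + (1 - \beta_k)\, T(x_k), \qquad \text{e.g. } \beta_k = \tfrac{1}{k+2},$$
so every iterate is pulled back toward the starting point $x_0$ with a slowly vanishing weight.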
22.02 AISTATS paper
Our paper Robust Probabilistic Time Series Forecasting
has been accepted for publication at AISTATS 2022.
This work is based on my internship project at AWS AI Labs, where I worked with Youngsuk Park and Bernie Wang.
21.05 ~ 21.08 Internship: AWS AI Labs
I will be working at AWS AI Labs as an applied scientist intern.
21.02 ICML Papers
Two papers have been accepted for publication at ICML 2021.
Accelerated Algorithms for Smooth Convex-Concave Minimax Problems with $\mathcal{O}(1/k^2)$ Rate on Squared Gradient Norm
has been selected as a long talk (top 166/5513=3% of papers).
It introduces the Extra Anchored Gradient (EAG) algorithm, which reduces the gradient norm of smooth convex-concave objectives with optimal complexity; see the sketch at the end of this item.
WGAN with an Infinitely Wide Generator Has No Spurious Stationary Points
analyzes setups in which a WGAN objective is favorable to training via alternating gradient ascent-descent.
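For the first paper, the Extra Anchored Gradient step combines an extragradient-style correction with anchoring toward the initial point $z_0$. Schematically, writing $G = (\nabla_x L, -\nabla_y L)$ for the saddle operator (coefficients simplified; see the paper for the exact step sizes),
$$z_{k+1/2} = z_k + \beta_k (z_0 - z_k) - \alpha\, G(z_k), \qquad z_{k+1} = z_k + \beta_k (z_0 - z_k) - \alpha\, G(z_{k+1/2}),$$
and this anchored extragradient template is what yields the $\mathcal{O}(1/k^2)$ rate on the squared gradient norm in the title.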