Fine-Tuning Without Forgetting In-Context Learning: A Theoretical Analysis of Linear Attention Models

Chungpa Lee, Jy-yong Sohn, Kangwook Lee

Theoretical ICML 2026

Zeroth-Order Optimization at the Edge of Stability

Minhak Song, Liang Zhang, Bingcong Li, Niao He, Michael Muehlebach, Sewoong Oh

Language Model ICML 2026

Learnable Logit Adjustment for Imbalanced Semi-Supervised Learning under Class Distribution Mismatch

Hyuck Lee, Taemin Park, Heeyoung Kim

Theoretical ICCV 2025

SuFP: Piecewise Bit Allocation Floating-Point for Robust Neural Network Quantization

Geonwoo Ko, Sungyeob Yoo, Seri Ham, Seeyeon Kim, Minkyu Kim, Joo-Young Kim

Theoretical TMLR 2025
