A Note on Hybrid Online Reinforcement and Imitation Learning for LLMs: Formulations and Algorithms
Abstract
We present a unified framework for Large Language Model (LLM) fine-tuning that integrates Imitation Learning and Reinforcement Learning. By analyzing the gradient of a composite objective combining trajectory-level KL divergence with task rewards, we derive a natural decomposition into two components: (1) an analytically computable Dense Gradient for token-level imitation, and (2) a Monte Carlo-estimated Sparse Gradient for long-horizon reward optimization. The Dense Gradient admits a closed-form logit-level formula, enabling efficient GPU implementation.
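The two-part decomposition in the abstract maps directly onto a single training loss. Below is a minimal PyTorch sketch, assuming a composite objective of the form J(θ) = E[R(y)] − β · KL(π_θ ‖ π_ref) over sampled responses y; all names here (hybrid_loss, beta, the tensor shapes) are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the hybrid gradient described in the abstract, under the
# assumed objective  J(theta) = E[R(y)] - beta * KL(pi_theta || pi_ref).
# All names and shapes are illustrative, not the paper's API.
import torch
import torch.nn.functional as F

def hybrid_loss(policy_logits, ref_logits, response_ids, reward, beta=0.1):
    """policy_logits: (T, V) logits from the trainable policy at each step.
    ref_logits:      (T, V) logits from a frozen reference model.
    response_ids:    (T,)   sampled token ids (torch.long) of the response.
    reward:          scalar task reward for the full response.
    """
    log_probs = F.log_softmax(policy_logits, dim=-1)             # (T, V)
    ref_log_probs = F.log_softmax(ref_logits.detach(), dim=-1)   # frozen

    # Sparse Gradient: REINFORCE-style Monte Carlo estimator. Only the
    # sampled tokens carry signal, and the scalar reward acts as a constant.
    token_log_probs = log_probs.gather(-1, response_ids.unsqueeze(-1)).squeeze(-1)
    sparse_loss = -(reward * token_log_probs.sum())

    # Dense Gradient: token-level KL(pi_theta || pi_ref) summed over the
    # entire vocabulary at every step. This is computable in closed form
    # from the logits, so every logit receives an exact, noise-free gradient.
    dense_kl = (log_probs.exp() * (log_probs - ref_log_probs)).sum(-1).sum()

    return sparse_loss + beta * dense_kl
```

Calling `.backward()` on this loss produces both gradient components in one pass: the Monte Carlo reward term through the sampled tokens, and the analytic KL term through the full logit tensor.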
Authors
Yingru Li, Ziniu Li, and Jiacai Liu
Cite This Paper
Li, Y., Li, Z., & Liu, J. (2025). A Note on Hybrid Online Reinforcement and Imitation Learning for LLMs: Formulations and Algorithms. arXiv preprint arXiv:2512.23097.
Yingru Li, Ziniu Li, and Jiacai Liu. "A Note on Hybrid Online Reinforcement and Imitation Learning for LLMs: Formulations and Algorithms." arXiv preprint arXiv:2512.23097 (2025).