Systems for LLM Inference

Members: Zeyu Hao (Master Student), Rubing Yang (Master Student), Lingkun Long (Master Student)

Publication:
SlimInfer: Accelerating Long-Context LLM Inference via Dynamic Token Pruning. Lingkun Long, Ruibin Yang, Yushi Huang, Desheng Hui, Ao Zhou, Jianlei Yang. AAAI Conference on Artificial Intelligence (AAAI), 2026.