"Layer-Condensed KV Cache for Efficient Inference of Large Language Models."

Haoyi Wu, Kewei Tu (2024)


DOI: 10.18653/V1/2024.ACL-LONG.602

access: open

type: Conference or Workshop Paper

metadata version: 2025-01-19