This repo includes papers and blogs about Efficient Transformers, Length Extrapolation, Long-Term Memory, Retrieval-Augmented Generation (RAG), and Evaluation for Long Context Modeling.
🔥 Must-read papers for LLM-based Long Context Modeling.
Thanks to all the great contributors on GitHub! 🔥⚡🔥
- 1. Survey Papers
- 2. Efficient Attention
- 3. Recurrent Transformers
- 4. State Space Models
- 5. Length Extrapolation 🔥RoPE🔥
- 6. Long Term Memory
- 7. RAG and ICL
- 8. Agent
- 9. Compress
- 10. Long Video and Image
- 11. Benchmark and Evaluation
- 12. Long Text Generation
- 13. Blogs
- Acknowledgements
- [2024.12.18]
- Paper: An Efficient Recipe for Long Context Extension via Middle-Focused Positional Encoding NeurIPS 2024
- Paper: More Tokens, Lower Precision: Towards the Optimal Token-Precision Trade-off in KV Cache Compression
- Paper: Boosting Long-Context Information Seeking via Query-Guided Activation Refilling
- Paper: Core Context Aware Attention for Long Context Language Modeling
- Paper: EXIT: Context-Aware Extractive Compression for Enhancing Retrieval-Augmented Generation
- [2024.12.17]
- Paper: SepLLM: Accelerate Large Language Models by Compressing One Segment into One Separator
- Paper: SpeechPrune: Context-aware Token Pruning for Speech Information Retrieval
- Paper: FTP: A Fine-grained Token-wise Pruner for Large Language Models via Token Routing
- Paper: VisDoM: Multi-Document QA with Visually Rich Elements Using Multimodal Retrieval-Augmented Generation
- Paper: CSR: Achieving 1 Bit Key-Value Cache via Sparse Representation AAAI 2025
- Paper: VCA: Video Curious Agent for Long Video Understanding
- [2024.12.16]
- [2024.12.13]
- [2024.12.12]
- [2024.12.11]
- [2024.12.10]
- [2024.12.09]
- [2024.12.06]
- [2024.12.05]
- [2024.12.04]
- [2024.12.03]
- [2024.12.02]
- Paper: Transformers Can Do Arithmetic with the Right Embeddings NeurIPS 2024
- Paper: Arithmetic Transformers Can Length-Generalize in Both Operand Length and Count
- Paper: DENIAHL: In-Context Features Influence LLM Needle-In-A-Haystack Abilities
- Paper: T2Vid: Translating Long Text into Multi-Image is the Catalyst for Video-LLMs
- Paper: LongVALE: Vision-Audio-Language-Event Benchmark Towards Time-Aware Omni-Modal Perception of Long Videos
- [2024.11.28]
- [2024.11.27]
- [2024.11.25]
- [2024.11.21]
- [2024.11.18]
- Paper: A Benchmark for Long-Form Medical Question Answering NeurIPS 2024
- [2024.11.15]
- [2024.11.14]
You can click on a paper title to jump directly to its PDF link.
## 1. Survey Papers

- Efficient Transformers: A Survey. Yi Tay, Mostafa Dehghani, Dara Bahri, Donald Metzler. Arxiv 2022.
-
A Survey on Long Text Modeling with Transformers. Zican Dong, Tianyi Tang, Lunyi Li, Wayne Xin Zhao. Arxiv 2023.
-
Neural Natural Language Processing for Long Texts: A Survey of the State-of-the-Art. Dimitrios Tsirmpas, Ioannis Gkionis, Ioannis Mademlis, Georgios Papadopoulos. Arxiv 2023.
-
Advancing Transformer Architecture in Long-Context Large Language Models: A Comprehensive Survey. Yunpeng Huang, Jingwei Xu, Zixu Jiang, Junyu Lai, Zenan Li, Yuan Yao, Taolue Chen, Lijuan Yang, Zhou Xin, Xiaoxing Ma. Arxiv 2023.
-
Length Extrapolation of Transformers: A Survey from the Perspective of Position Encoding. Liang Zhao, Xiaocheng Feng, Xiachong Feng, Bing Qin, Ting Liu. Arxiv 2024.
-
The What, Why, and How of Context Length Extension Techniques in Large Language Models -- A Detailed Survey. Saurav Pawar, S.M Towhidul Islam Tonmoy, S M Mehedi Zaman, Vinija Jain, Aman Chadha, Amitava Das. Arxiv 2024.
-
State Space Model for New-Generation Network Alternative to Transformers: A Survey. Xiao Wang, Shiao Wang, Yuhe Ding, Yuehang Li, Wentao Wu, Yao Rong, Weizhe Kong, Ju Huang, Shihao Li, Haoxiang Yang, Ziwen Wang, Bo Jiang, Chenglong Li, Yaowei Wang, Yonghong Tian, Jin Tang. Arxiv 2024.
-
A Survey on Efficient Inference for Large Language Models. Zixuan Zhou, Xuefei Ning, Ke Hong, Tianyu Fu, Jiaming Xu, Shiyao Li, Yuming Lou, Luning Wang, Zhihang Yuan, Xiuhong Li, Shengen Yan, Guohao Dai, Xiao-Ping Zhang, Yuhan Dong, Yu Wang. Arxiv 2024.
-
A Survey on RAG Meets LLMs: Towards Retrieval-Augmented Large Language Models. Yujuan Ding, Wenqi Fan, Liangbo Ning, Shijie Wang, Hengyun Li, Dawei Yin, Tat-Seng Chua, Qing Li. Arxiv 2024.
-
Evaluation of Retrieval-Augmented Generation: A Survey. Hao Yu, Aoran Gan, Kai Zhang, Shiwei Tong, Qi Liu, Zhaofeng Liu. Arxiv 2024.
-
The CAP Principle for LLM Serving: A Survey of Long-Context Large Language Model Serving. Pai Zeng, Zhenyu Ning, Jieru Zhao, Weihao Cui, Mengwei Xu, Liwei Guo, Xusheng Chen, Yizhou Shan. Arxiv 2024.
- Keep the Cost Down: A Review on Methods to Optimize LLM's KV-Cache Consumption. Luohe Shi, Hongyi Zhang, Yao Yao, Zuchao Li, Hai Zhao. Arxiv 2024.
- Contextual Compression in Retrieval-Augmented Generation for Large Language Models: A Survey. Sourav Verma. Arxiv 2024.
-
Retrieval Augmented Generation (RAG) and Beyond: A Comprehensive Survey on How to Make your LLMs use External Data More Wisely. Siyun Zhao, Yuqing Yang, Zilong Wang, Zhiyuan He, Luna K. Qiu, Lili Qiu. Arxiv 2024.
-
Prompt Compression for Large Language Models: A Survey. Zongqian Li, Yinhong Liu, Yixuan Su, Nigel Collier. Arxiv 2024.
## 2. Efficient Attention

### 2.1 Sparse Attention

- Generating Long Sequences with Sparse Transformers. Rewon Child, Scott Gray, Alec Radford, Ilya Sutskever. Arxiv 2019.
- Blockwise Self-Attention for Long Document Understanding. Jiezhong Qiu, Hao Ma, Omer Levy, Wen-tau Yih, Sinong Wang, Jie Tang. EMNLP 2020.
- Longformer: The Long-Document Transformer. Iz Beltagy, Matthew E. Peters, Arman Cohan. Arxiv 2020.
-
ETC: Encoding Long and Structured Inputs in Transformers. Joshua Ainslie, Santiago Ontanon, Chris Alberti, Vaclav Cvicek, Zachary Fisher, Philip Pham, Anirudh Ravula, Sumit Sanghai, Qifan Wang, Li Yang. EMNLP 2020.
-
Big Bird: Transformers for Longer Sequences. Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. NeurIPS 2020.
- Reformer: The efficient transformer. Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya. ICLR 2020.
- Sparse Sinkhorn Attention. Yi Tay, Dara Bahri, Liu Yang, Donald Metzler, Da-Cheng Juan. ICML 2020.
- Sparse and Continuous Attention Mechanisms. André F. T. Martins, António Farinhas, Marcos Treviso, Vlad Niculae, Pedro M. Q. Aguiar, Mário A. T. Figueiredo. NeurIPS 2020.
-
Efficient Content-Based Sparse Attention with Routing Transformers. Aurko Roy, Mohammad Saffar, Ashish Vaswani, David Grangier. TACL 2021.
- LongT5: Efficient text-to-text transformer for long sequences. Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang. NAACL 2022.
- Efficient Long-Text Understanding with Short-Text Models. Maor Ivgi, Uri Shaham, Jonathan Berant. TACL 2023.
- Parallel Context Windows for Large Language Models. Nir Ratner, Yoav Levine, Yonatan Belinkov, Ori Ram, Inbal Magar, Omri Abend, Ehud Karpas, Amnon Shashua, Kevin Leyton-Brown, Yoav Shoham. ACL 2023.
- Unlimiformer: Long-Range Transformers with Unlimited Length Input. Amanda Bertsch, Uri Alon, Graham Neubig, Matthew R. Gormley. Arxiv 2023.
- Landmark Attention: Random-Access Infinite Context Length for Transformers. Amirkeivan Mohtashami, Martin Jaggi. Arxiv 2023.
- LONGNET: Scaling Transformers to 1,000,000,000 Tokens. Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei. Arxiv 2023.
- Adapting Language Models to Compress Contexts. Alexis Chevalier, Alexander Wettig, Anirudh Ajith, Danqi Chen. Arxiv 2023.
- Blockwise Parallel Transformer for Long Context Large Models. Hao Liu, Pieter Abbeel. Arxiv 2023.
- MEGABYTE: Predicting Million-byte Sequences with Multiscale Transformers. Lili Yu, Dániel Simig, Colin Flaherty, Armen Aghajanyan, Luke Zettlemoyer, Mike Lewis. Arxiv 2023.
-
Dynamic Context Pruning for Efficient and Interpretable Autoregressive Transformers. Sotiris Anagnostidis, Dario Pavllo, Luca Biggio, Lorenzo Noci, Aurelien Lucchi, Thomas Hofmann. Arxiv 2023.
-
Long-range Language Modeling with Self-retrieval. Ohad Rubin, Jonathan Berant. Arxiv 2023.
-
Max-Margin Token Selection in Attention Mechanism. Davoud Ataee Tarzanagh, Yingcong Li, Xuechen Zhang, Samet Oymak. Arxiv 2023.
-
Chunk, Align, Select: A Simple Long-sequence Processing Method for Transformers. Jiawen Xie, Pengyu Cheng, Xiao Liang, Yong Dai, Nan Du. Arxiv 2023.
-
Sparse Token Transformer with Attention Back Tracking. Heejun Lee, Minki Kang, Youngwan Lee, Sung Ju Hwang. ICLR 2023.
-
Empower Your Model with Longer and Better Context Comprehension. YiFei Gao, Lei Wang, Jun Fang, Longhua Hu, Jun Cheng. Arxiv 2023.
-
Ring Attention with Blockwise Transformers for Near-Infinite Context. Hao Liu, Matei Zaharia, Pieter Abbeel. Arxiv 2023.
-
Efficient Streaming Language Models with Attention Sinks. Guangxuan Xiao, Yuandong Tian, Beidi Chen, Song Han, Mike Lewis. Arxiv 2023.
-
HyperAttention: Long-context Attention in Near-Linear Time. Insu Han, Rajesh Jayaram, Amin Karbasi, Vahab Mirrokni, David P. Woodruff, Amir Zandieh. Arxiv 2023.
- Fovea Transformer: Efficient Long-Context Modeling with Structured Fine-to-Coarse Attention. Ziwei He, Jian Yuan, Le Zhou, Jingwen Leng, Bo Jiang. Arxiv 2023.
-
ChunkAttention: Efficient Self-Attention with Prefix-Aware KV Cache and Two-Phase Partition. Lu Ye, Ze Tao, Yong Huang, Yang Li. Arxiv 2024.
-
Training-Free Long-Context Scaling of Large Language Models. Chenxin An, Fei Huang, Jun Zhang, Shansan Gong, Xipeng Qiu, Chang Zhou, Lingpeng Kong. Arxiv 2024.
-
LongHeads: Multi-Head Attention is Secretly a Long Context Processor. Yi Lu, Xin Zhou, Wei He, Jun Zhao, Tao Ji, Tao Gui, Qi Zhang, Xuanjing Huang. Arxiv 2024.
-
Zebra: Extending Context Window with Layerwise Grouped Local-Global Attention. Kaiqiang Song, Xiaoyang Wang, Sangwoo Cho, Xiaoman Pan, Dong Yu. Arxiv 2023.
-
SnapKV: LLM Knows What You are Looking for Before Generation. Yuhong Li, Yingbing Huang, Bowen Yang, Bharat Venkitesh, Acyr Locatelli, Hanchen Ye, Tianle Cai, Patrick Lewis, Deming Chen. Arxiv 2024.
-
Sequence can Secretly Tell You What to Discard. Jincheng Dai, Zhuowei Huang, Haiyun Jiang, Chen Chen, Deng Cai, Wei Bi, Shuming Shi. Arxiv 2024.
-
SinkLoRA: Enhanced Efficiency and Chat Capabilities for Long-Context Large Language Models. Hengyu Zhang. Arxiv 2024.
-
HiP Attention: Sparse Sub-Quadratic Attention with Hierarchical Attention Pruning. Heejun Lee, Geon Park, Youngwan Lee, Jina Kim, Wonyoung Jeong, Myeongjae Jeon, Sung Ju Hwang. Arxiv 2024.
-
Taking a Deep Breath: Enhancing Language Modeling of Large Language Models with Sentinel Tokens. Weiyao Luo, Suncong Zheng, Heming Xia, Weikang Wang, Yan Lei, Tianyu Liu, Shuang Chen, Zhifang Sui. Arxiv 2024.
-
MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression. Weiyao Luo, Suncong Zheng, Heming Xia, Weikang Wang, Yan Lei, Tianyu Liu, Shuang Chen, Zhifang Sui. Arxiv 2024.
-
Sparser is Faster and Less is More: Efficient Sparse Attention for Long-Range Transformers. Chao Lou, Zixia Jia, Zilong Zheng, Kewei Tu. Arxiv 2024.
-
Near-Lossless Acceleration of Long Context LLM Inference with Adaptive Structured Sparse Attention. Qianchao Zhu, Jiangfei Duan, Chang Chen, Siran Liu, Xiuhong Li, Guanyu Feng, Xin Lv, Huanqi Cao, Xiao Chuanfu, Xingcheng Zhang, Dahua Lin, Chao Yang. Arxiv 2024.
-
Neurocache: Efficient Vector Retrieval for Long-range Language Modeling. Ali Safaya, Deniz Yuret. Arxiv 2024.
-
Weighted Grouped Query Attention in Transformers. Sai Sena Chinnakonduru, Astarag Mohapatra. Arxiv 2024.
-
Selective Attention Improves Transformer. Yaniv Leviathan, Matan Kalman, Yossi Matias. Arxiv 2024.
-
TidalDecode: Fast and Accurate LLM Decoding with Position Persistent Sparse Attention. Lijie Yang, Zhihao Zhang, Zhuofu Chen, Zikun Li, Zhihao Jia. Arxiv 2024.
- FltLM: An Integrated Long-Context Large Language Model for Effective Context Filtering and Understanding. Jingyang Deng, Zhengyang Shen, Boyang Wang, Lixin Su, Suqi Cheng, Ying Nie, Junfeng Wang, Dawei Yin, Jinwen Ma. Arxiv 2024.
-
Beyond Linear Approximations: A Novel Pruning Approach for Attention Matrix. Yingyu Liang, Jiangxuan Long, Zhenmei Shi, Zhao Song, Yufa Zhou. Arxiv 2024.
-
Extra Global Attention Designation Using Keyword Detection in Sparse Transformer Architectures. Evan Lucas, Dylan Kangas, Timothy C Havens. Arxiv 2024.
-
SeerAttention: Learning Intrinsic Sparse Attention in Your LLMs. Yizhao Gao, Zhichen Zeng, Dayou Du, Shijie Cao, Hayden Kwok-Hay So, Ting Cao, Fan Yang, Mao Yang. Arxiv 2024.
-
Selective Attention: Enhancing Transformer through Principled Context Control. Xuechen Zhang, Xiangyu Chang, Mingchen Li, Amit Roy-Chowdhury, Jiasi Chen, Samet Oymak. NeurIPS 2024.
- Core Context Aware Attention for Long Context Language Modeling. Yaofo Chen, Zeng You, Shuhai Zhang, Haokun Li, Yirui Li, Yaowei Wang, Mingkui Tan. Arxiv 2024.
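Most of the sparse-attention papers above restrict each query to a local sliding window plus a handful of always-visible global or "sink" tokens. The snippet below is a minimal, illustrative sketch of that masking pattern only (it still materializes the full score matrix, so it shows the pattern rather than the speed-up); the function names and window sizes are placeholders, not any specific paper's implementation.

```python
import torch

def sliding_window_causal_mask(seq_len: int, window: int, n_sinks: int = 0) -> torch.Tensor:
    """Boolean (seq_len, seq_len) mask; True means the key may be attended to."""
    i = torch.arange(seq_len).unsqueeze(1)  # query positions
    j = torch.arange(seq_len).unsqueeze(0)  # key positions
    causal = j <= i                         # never attend to future tokens
    local = (i - j) < window                # keep only the last `window` keys
    sinks = j < n_sinks                     # always keep a few initial "sink" tokens
    return causal & (local | sinks)

def sparse_attention(q, k, v, window: int = 256, n_sinks: int = 4):
    # q, k, v: (batch, heads, seq_len, head_dim)
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
    mask = sliding_window_causal_mask(q.shape[-2], window, n_sinks).to(q.device)
    scores = scores.masked_fill(~mask, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v

q = k = v = torch.randn(1, 2, 512, 64)
print(sparse_attention(q, k, v, window=128).shape)  # torch.Size([1, 2, 512, 64])
```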
### 2.2 Linear Attention

- Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention. Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, François Fleuret. ICML 2020.
- Learning Fast Algorithms for Linear Transforms Using Butterfly Factorizations. Tri Dao, Albert Gu, Matthew Eichhorn, Atri Rudra, Christopher Ré. Arxiv 2019.
-
Masked language modeling for proteins via linearly scalable long-context transformers. Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, David Belanger, Lucy Colwell, Adrian Weller. Arxiv 2020.
-
Rethinking attention with performers. Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, David Belanger, Lucy Colwell, Adrian Weller. Arxiv 2020.
- Linformer: Self-attention with linear complexity. Sinong Wang, Belinda Z. Li, Madian Khabsa, Han Fang, Hao Ma. Arxiv 2020.
- Random Feature Attention. Hao Peng, Nikolaos Pappas, Dani Yogatama, Roy Schwartz, Noah A. Smith, Lingpeng Kong. Arxiv 2021.
- Luna: Linear unified nested attention. Xuezhe Ma, Xiang Kong, Sinong Wang, Chunting Zhou, Jonathan May, Hao Ma, Luke Zettlemoyer. Arxiv 2021.
- Fnet: Mixing tokens with fourier transforms. James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon. Arxiv 2021.
- Gated Linear Attention Transformers with Hardware-Efficient Training. Songlin Yang, Bailin Wang, Yikang Shen, Rameswar Panda, Yoon Kim. Arxiv 2023.
-
Latent Attention for Linear Time Transformers. Rares Dolga, Marius Cobzarenco, David Barber. Arxiv 2024.
-
Simple linear attention language models balance the recall-throughput tradeoff. Simran Arora, Sabri Eyuboglu, Michael Zhang, Aman Timalsina, Silas Alberti, Dylan Zinsley, James Zou, Atri Rudra, Christopher Ré. Arxiv 2024.
- Linear Attention Sequence Parallelism. Weigao Sun, Zhen Qin, Dong Li, Xuyang Shen, Yu Qiao, Yiran Zhong. Arxiv 2024.
- Softmax Attention with Constant Cost per Token. Franz A. Heinsen. Arxiv 2024.
- Megalodon: Efficient LLM Pretraining and Inference with Unlimited Context Length. Xuezhe Ma, Xiaomeng Yang, Wenhan Xiong, Beidi Chen, Lili Yu, Hao Zhang, Jonathan May, Luke Zettlemoyer, Omer Levy, Chunting Zhou. Arxiv 2024.
-
Various Lengths, Constant Speed: Efficient Language Modeling with Lightning Attention. Zhen Qin, Weigao Sun, Dong Li, Xuyang Shen, Weixuan Sun, Yiran Zhong. Arxiv 2024.
-
Unlocking the Secrets of Linear Complexity Sequence Model from A Unified Perspective. Zhen Qin, Xuyang Shen, Weigao Sun, Dong Li, Stan Birchfield, Richard Hartley, Yiran Zhong. Arxiv 2024.
-
Attention as an RNN. Leo Feng, Frederick Tung, Hossein Hajimirsadeghi, Mohamed Osama Ahmed, Yoshua Bengio, Greg Mori. Arxiv 2024.
-
You Only Scan Once: Efficient Multi-dimension Sequential Modeling with LightNet. Zhen Qin, Yuxin Mao, Xuyang Shen, Dong Li, Jing Zhang, Yuchao Dai, Yiran Zhong. Arxiv 2024.
- When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models. Haoran You, Yichao Fu, Zheng Wang, Amir Yazdanbakhsh, Yingyan (Celine)Lin. Arxiv 2024.
- Learning to (Learn at Test Time): RNNs with Expressive Hidden States. Yu Sun, Xinhao Li, Karan Dalal, Jiarui Xu, Arjun Vikram, Genghan Zhang, Yann Dubois, Xinlei Chen, Xiaolong Wang, Sanmi Koyejo, Tatsunori Hashimoto, Carlos Guestrin. Arxiv 2024.
- Gated Slot Attention for Efficient Linear-Time Sequence Modeling. Yu Zhang, Songlin Yang, Ruijie Zhu, Yue Zhang, Leyang Cui, Yiqiao Wang, Bolun Wang, Freda Shi, Bailin Wang, Wei Bi, Peng Zhou, Guohong Fu. Arxiv 2024.
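Linear-attention methods like the ones above replace the softmax with a kernel feature map so causal attention can be computed as a running state in O(N) time and O(1) memory per step. Below is a minimal sketch of that recurrence with the feature map phi(x) = elu(x) + 1 (as popularized by "Transformers are RNNs"); it is illustrative only, and the per-token Python loop would be chunked or fused in a real implementation.

```python
import torch
import torch.nn.functional as F

def causal_linear_attention(q, k, v, eps: float = 1e-6):
    # q, k, v: (batch, heads, seq_len, head_dim)
    q, k = F.elu(q) + 1, F.elu(k) + 1  # positive feature map phi
    b, h, n, d = q.shape
    kv_state = torch.zeros(b, h, d, v.shape[-1], device=q.device)  # running sum of phi(k) v^T
    k_state = torch.zeros(b, h, d, device=q.device)                # running sum of phi(k)
    out = []
    for t in range(n):  # recurrent form: constant state size regardless of context length
        kv_state = kv_state + k[:, :, t].unsqueeze(-1) * v[:, :, t].unsqueeze(-2)
        k_state = k_state + k[:, :, t]
        num = torch.einsum("bhd,bhde->bhe", q[:, :, t], kv_state)
        den = torch.einsum("bhd,bhd->bh", q[:, :, t], k_state).unsqueeze(-1) + eps
        out.append(num / den)
    return torch.stack(out, dim=2)  # (batch, heads, seq_len, head_dim)

q, k, v = (torch.randn(1, 2, 256, 64) for _ in range(3))
print(causal_linear_attention(q, k, v).shape)  # torch.Size([1, 2, 256, 64])
```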
### 2.3 Hierarchical Attention

- Neural Legal Judgment Prediction in English. Ilias Chalkidis, Ion Androutsopoulos, Nikolaos Aletras. ACL 2019.
-
Hierarchical Neural Network Approaches for Long Document Classification. Snehal Khandve, Vedangi Wagh, Apurva Wani, Isha Joshi, Raviraj Joshi. ICML 2022.
- Hi-Transformer: Hierarchical Interactive Transformer for Efficient and Effective Long Document Modeling. Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang. ACL-IJCNLP 2021.
- ERNIE-Sparse: Learning Hierarchical Efficient Transformer Through Regularized Self-Attention. Yang Liu, Jiaxiang Liu, Li Chen, Yuxiang Lu, Shikun Feng, Zhida Feng, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang. Arxiv 2022.
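The hierarchical models above share a two-level pattern: encode fixed-size chunks independently, then attend over per-chunk summaries. A minimal sketch of that pattern, with generic stand-in encoders rather than any specific paper's architecture:

```python
import torch
import torch.nn as nn

class HierarchicalEncoder(nn.Module):
    def __init__(self, dim: int = 256, chunk_len: int = 128, nhead: int = 4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(dim, nhead, batch_first=True)
        self.chunk_encoder = nn.TransformerEncoder(layer, num_layers=2)  # within-chunk attention
        self.doc_encoder = nn.TransformerEncoder(layer, num_layers=2)    # attention over chunk summaries
        self.chunk_len = chunk_len

    def forward(self, x):  # x: (batch, seq_len, dim)
        b, n, d = x.shape
        n_chunks = n // self.chunk_len
        chunks = x[:, : n_chunks * self.chunk_len].reshape(b * n_chunks, self.chunk_len, d)
        encoded = self.chunk_encoder(chunks)                     # local attention inside each chunk
        summaries = encoded.mean(dim=1).reshape(b, n_chunks, d)  # one summary vector per chunk
        return self.doc_encoder(summaries)                       # global attention across chunks

model = HierarchicalEncoder()
print(model(torch.randn(2, 1024, 256)).shape)  # torch.Size([2, 8, 256])
```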
### 2.4 IO-Aware Attention

- Self-attention Does Not Need O(n^2) Memory. Markus N. Rabe, Charles Staats. Arxiv 2021.
-
Faster Causal Attention Over Large Sequences Through Sparse Flash Attention. Matteo Pagliardini, Daniele Paliotta, Martin Jaggi, François Fleuret. Arxiv 2023.
-
FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness. Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, Christopher Ré. Arxiv 2022.
- FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning. Tri Dao. Arxiv 2023.
- Efficient Memory Management for Large Language Model Serving with PagedAttention. Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, Ion Stoica. Arxiv 2023.
- TransNormerLLM: A Faster and Better Large Language Model with Improved TransNormer. Zhen Qin, Dong Li, Weigao Sun, Weixuan Sun, Xuyang Shen, Xiaodong Han, Yunshen Wei, Baohong Lv, Xiao Luo, Yu Qiao, Yiran Zhong. Arxiv 2023.
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models. Zhen Qin, Weigao Sun, Dong Li, Xuyang Shen, Weixuan Sun, Yiran Zhong. Arxiv 2024.
-
ChunkAttention: Efficient Self-Attention with Prefix-Aware KV Cache and Two-Phase Partition. Lu Ye, Ze Tao, Yong Huang, Yang Li. Arxiv 2024.
-
SnapKV: LLM Knows What You are Looking for Before Generation. Yuhong Li, Yingbing Huang, Bowen Yang, Bharat Venkitesh, Acyr Locatelli, Hanchen Ye, Tianle Cai, Patrick Lewis, Deming Chen. Arxiv 2024.
-
Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs. Suyu Ge, Yunan Zhang, Liyuan Liu, Minjia Zhang, Jiawei Han, Jianfeng Gao. ICLR 2024 Oral.
-
Keyformer: KV Cache Reduction through Key Tokens Selection for Efficient Generative Inference. Muhammad Adnan, Akhil Arunkumar, Gaurav Jain, Prashant J. Nair, Ilya Soloveychik, Purushotham Kamath. Arxiv 2024.
-
Efficient LLM Inference with Kcache. Qiaozhi He, Zhihua Wu. Arxiv 2024.
-
You Only Cache Once: Decoder-Decoder Architectures for Language Models. Yutao Sun, Li Dong, Yi Zhu, Shaohan Huang, Wenhui Wang, Shuming Ma, Quanlu Zhang, Jianyong Wang, Furu Wei. Arxiv 2024.
-
Fast Transformer Decoding: One Write-Head is All You Need. Noam Shazeer. Arxiv 2019.
-
GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints. Joshua Ainslie, James Lee-Thorp, Michiel de Jong, Yury Zemlyanskiy, Federico Lebrón, Sumit Sanghai. Arxiv 2023.
-
DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model. DeepSeek-AI. Arxiv 2024.
- Layer-Condensed KV Cache for Efficient Inference of Large Language Models. Haoyi Wu, Kewei Tu. ACL 2024.
-
Reducing Transformer Key-Value Cache Size with Cross-Layer Attention. William Brandon, Mayank Mishra, Aniruddha Nrusimha, Rameswar Panda, Jonathan Ragan Kelly. Arxiv 2024.
- PyramidInfer: Pyramid KV Cache Compression for High-throughput LLM Inference. Dongjie Yang, XiaoDong Han, Yan Gao, Yao Hu, Shilin Zhou, Hai Zhao. Arxiv 2024.
-
Unlocking Data-free Low-bit Quantization with Matrix Decomposition for KV Cache Compression. Peiyu Liu, Ze-Feng Gao, Wayne Xin Zhao, Yipeng Ma, Tao Wang, Ji-Rong Wen. Arxiv 2024.
-
MiniCache: KV Cache Compression in Depth Dimension for Large Language Models. Akide Liu, Jing Liu, Zizheng Pan, Yefei He, Gholamreza Haffari, Bohan Zhuang. NeurIPS 2024.
- PyramidKV: Dynamic KV Cache Compression based on Pyramidal Information Funneling. Zefan Cai, Yichi Zhang, Bofei Gao, Tianyu Liu, Keming Lu, Wayne Xiong, Yue Dong, Baobao Chang, Junjie Hu, Wen Xiao. Arxiv 2024.
-
Effectively Compress KV Heads for LLM. Hao Yu, Zelan Yang, Shen Li, Yong Li, Jianxin Wu. Arxiv 2024.
-
A Simple and Effective L2 Norm-Based Strategy for KV Cache Compression. Alessio Devoto, Yu Zhao, Simone Scardapane, Pasquale Minervini. Arxiv 2024.
-
Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference. Jiaming Tang, Yilong Zhao, Kan Zhu, Guangxuan Xiao, Baris Kasikci, Song Han. ICML 2024.
-
Attention Score is not All You Need for Token Importance Indicator in KV Cache Reduction: Value Also Matters. Zhiyu Guo, Hidetaka Kamigaito, Taro Watanabe. Arxiv 2024.
-
CItruS: Chunked Instruction-aware State Eviction for Long Sequence Modeling. Yu Bai, Xiyuan Zou, Heyan Huang, Sanxing Chen, Marc-Antoine Rondeau, Yang Gao, Jackie Chi Kit Cheung. Arxiv 2024.
-
D2O: Dynamic Discriminative Operations for Efficient Generative Inference of Large Language Models. Zhongwei Wan, Xinjian Wu, Yu Zhang, Yi Xin, Chaofan Tao, Zhihong Zhu, Xin Wang, Siqi Luo, Jing Xiong, Mi Zhang. Arxiv 2024.
-
MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression. Weiyao Luo, Suncong Zheng, Heming Xia, Weikang Wang, Yan Lei, Tianyu Liu, Shuang Chen, Zhifang Sui. Arxiv 2024.
-
LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context Inference. Zhongwei Wan, Ziang Wu, Che Liu, Jinfa Huang, Zhihong Zhu, Peng Jin, Longyue Wang, Li Yuan. Arxiv 2024.
-
Training-Free Exponential Extension of Sliding Window Context with Cascading KV Cache. Jeffrey Willette, Heejun Lee, Youngwan Lee, Myeongjae Jeon, Sung Ju Hwang. Arxiv 2024.
-
QuickLLaMA: Query-aware Inference Acceleration for Large Language Models. Jingyao Li, Han Shi, Xin Jiang, Zhenguo Li, Hong Xu, Jiaya Jia. Arxiv 2024.
- MInference 1.0: Accelerating Pre-filling for Long-Context LLMs via Dynamic Sparse Attention. Huiqiang Jiang, Yucheng Li, Chengruidong Zhang, Qianhui Wu, Xufang Luo, Surin Ahn, Zhenhua Han, Amir H. Abdi, Dongsheng Li, Chin-Yew Lin, Yuqing Yang, Lili Qiu. Arxiv 2024.
-
Model Tells You Where to Merge: Adaptive KV Cache Merging for LLMs on Long-Context Tasks. Zheng Wang, Boxiao Jin, Zhongzhi Yu, Minjia Zhang. Arxiv 2024.
-
Optimizing KV Cache Eviction in LLMs: Adaptive Allocation for Enhanced Budget Utilization. Yuan Feng, Junlin Lv, Yukun Cao, Xike Xie, S. Kevin Zhou. Arxiv 2024.
-
Beyond KV Caching: Shared Attention for Efficient LLMs. Bingli Liao, Danilo Vasconcellos Vargas. Arxiv 2024.
-
PQCache: Product Quantization-based KVCache for Long Context LLM Inference. Hailin Zhang, Xiaodong Ji, Yilin Chen, Fangcheng Fu, Xupeng Miao, Xiaonan Nie, Weipeng Chen, Bin Cui. Arxiv 2024.
-
LazyLLM: Dynamic Token Pruning for Efficient Long Context LLM Inference. Qichen Fu, Minsik Cho, Thomas Merth, Sachin Mehta, Mohammad Rastegari, Mahyar Najibi. Arxiv 2024.
-
Farewell to Length Extrapolation, a Training-Free Infinite Context with Finite Attention Scope. Xiaoran Liu, Qipeng Guo, Yuerong Song, Zhigeng Liu, Kai Lv, Hang Yan, Linlin Li, Qun Liu, Xipeng Qiu. Arxiv 2024.
-
RazorAttention: Efficient KV Cache Compression Through Retrieval Heads. Hanlin Tang, Yang Lin, Jing Lin, Qingsen Han, Shikuan Hong, Yiwu Yao, Gongyi Wang. Arxiv 2024.
-
FlashAttention-3: Fast and Accurate Attention with Asynchrony and Low-precision. Jay Shah, Ganesh Bikshandi, Ying Zhang, Vijay Thakkar, Pradeep Ramani, Tri Dao. Arxiv 2024.
-
ThinK: Thinner Key Cache by Query-Driven Pruning. Yuhui Xu, Zhanming Jie, Hanze Dong, Lei Wang, Xudong Lu, Aojun Zhou, Amrita Saha, Caiming Xiong, Doyen Sahoo. Arxiv 2024.
-
A2SF: Accumulative Attention Scoring with Forgetting Factor for Token Pruning in Transformer Decoder. Hyun-rae Jo, Dongkun Shin. Arxiv 2024.
-
Cross-layer Attention Sharing for Large Language Models. Yongyu Mu, Yuzhang Wu, Yuchun Fan, Chenglong Wang, Hengyu Li, Qiaozhi He, Murun Yang, Tong Xiao, Jingbo Zhu. Arxiv 2024.
-
NACL: A General and Effective KV Cache Eviction Framework for LLMs at Inference Time. Yilong Chen, Guoxia Wang, Junyuan Shang, Shiyao Cui, Zhenyu Zhang, Tingwen Liu, Shuohuan Wang, Yu Sun, Dianhai Yu, Hua Wu. ACL 2024.
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters. Vasudev Shyam, Jonathan Pilault, Emily Shepperd, Quentin Anthony, Beren Millidge. Arxiv 2024.
- MagicDec: Breaking the Latency-Throughput Tradeoff for Long Context Generation with Speculative Decoding. Jian Chen, Vashisth Tiwari, Ranajoy Sadhukhan, Zhuoming Chen, Jinyuan Shi, Ian En-Hsu Yen, Beidi Chen. Arxiv 2024.
- CSKV: Training-Efficient Channel Shrinking for KV Cache in Long-Context Scenarios. Luning Wang, Shiyao Li, Xuefei Ning, Zhihang Yuan, Shengen Yan, Guohao Dai, Yu Wang. Arxiv 2024.
-
RetrievalAttention: Accelerating Long-Context LLM Inference via Vector Retrieval. Di Liu, Meng Chen, Baotong Lu, Huiqiang Jiang, Zhenhua Han, Qianxi Zhang, Qi Chen, Chengruidong Zhang, Bailu Ding, Kai Zhang, Chen Chen, Fan Yang, Yuqing Yang, Lili Qiu. Arxiv 2024.
-
InstInfer: In-Storage Attention Offloading for Cost-Effective Long-Context LLM Inference. Xiurui Pan, Endian Li, Qiao Li, Shengwen Liang, Yizhou Shan, Ke Zhou, Yingwei Luo, Xiaolin Wang, Jie Zhang. Arxiv 2024.
-
CritiPrefill: A Segment-wise Criticality-based Approach for Prefilling Acceleration in LLMs. Junlin Lv, Yuan Feng, Xike Xie, Xin Jia, Qirong Peng, Guiming Xie. Arxiv 2024.
-
Discovering the Gems in Early Layers: Accelerating Long-Context LLMs with 1000x Input Token Reduction. Zhenmei Shi, Yifei Ming, Xuan-Phi Nguyen, Yingyu Liang, Shafiq Joty. Arxiv 2024.
-
Inference-Friendly Models With MixAttention. Shashank Rajput, Ying Sheng, Sean Owen, Vitaliy Chiley. Arxiv 2024.
-
KV-Compress: Paged KV-Cache Compression with Variable Compression Rates per Attention Head. Isaac Rehg. Arxiv 2024.
- Locret: Enhancing Eviction in Long-Context LLM Inference with Trained Retaining Heads. Yuxiang Huang, Binhang Yuan, Xu Han, Chaojun Xiao, Zhiyuan Liu. Arxiv 2024.
-
InfiniPot: Infinite Context Processing on Memory-Constrained LLMs. Minsoo Kim, Kyuhong Shim, Jungwook Choi, Simyung Chang. Arxiv 2024.
-
UNComp: Uncertainty-Aware Long-Context Compressor for Efficient Large Language Model Inference. Jing Xiong, Jianghan Shen, Fanghua Ye, Chaofan Tao, Zhongwei Wan, Jianqiao Lu, Xun Wu, Chuanyang Zheng, Zhijiang Guo, Lingpeng Kong, Ngai Wong. Arxiv 2024.
-
LoRC: Low-Rank Compression for LLMs KV Cache with a Progressive Compression Strategy. Rongzhi Zhang, Kuang Wang, Liyuan Liu, Shuohang Wang, Hao Cheng, Chao Zhang, Yelong Shen. Arxiv 2024.
-
DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads. Guangxuan Xiao, Jiaming Tang, Jingwei Zuo, Junxian Guo, Shang Yang, Haotian Tang, Yao Fu, Song Han. Arxiv 2024.
-
In-context KV-Cache Eviction for LLMs via Attention-Gate. Zihao Zeng, Bokai Lin, Tianqi Hou, Hao Zhang, Zhijie Deng. Arxiv 2024.
-
SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction. Xuan Zhang, Cunxiao Du, Chao Du, Tianyu Pang, Wei Gao, Min Lin. Arxiv 2024.
- A Systematic Study of Cross-Layer KV Sharing for Efficient LLM Inference. You Wu, Haoyi Wu, Kewei Tu. Arxiv 2024.
- KVSharer: Efficient Inference via Layer-Wise Dissimilar KV Cache Sharing. Yifei Yang, Zouying Cao, Qiguang Chen, Libo Qin, Dongjie Yang, Hai Zhao, Zhi Chen. Arxiv 2024.
-
Lossless KV Cache Compression to 2%. Zhen Yang, J.N.Han, Kan Wu, Ruobing Xie, An Wang, Xingwu Sun, Zhanhui Kang. Arxiv 2024.
-
MatryoshkaKV: Adaptive KV Compression via Trainable Orthogonal Projection. Bokai Lin, Zihao Zeng, Zipeng Xiao, Siqi Kou, Tianqi Hou, Xiaofeng Gao, Hao Zhang, Zhijie Deng. Arxiv 2024.
-
EPIC: Efficient Position-Independent Context Caching for Serving Large Language Models. Junhao Hu, Wenrui Huang, Haoyi Wang, Weidong Wang, Tiancheng Hu, Qin Zhang, Hao Feng, Xusheng Chen, Yizhou Shan, Tao Xie. Arxiv 2024.
-
MagicPIG: LSH Sampling for Efficient LLM Generation. Zhuoming Chen, Ranajoy Sadhukhan, Zihao Ye, Yang Zhou, Jianyu Zhang, Niklas Nolte, Yuandong Tian, Matthijs Douze, Leon Bottou, Zhihao Jia, Beidi Chen. Arxiv 2024.
-
Not All Heads Matter: A Head-Level KV Cache Compression Method with Integrated Retrieval and Reasoning. Yu Fu, Zefan Cai, Abedelkadir Asi, Wayne Xiong, Yue Dong, Wen Xiao. Arxiv 2024.
-
Long Sequence Modeling with Attention Tensorization: From Sequence to Tensor Learning. Aosong Feng, Rex Ying, Leandros Tassiulas. Arxiv 2024.
-
ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference. Hanshi Sun, Li-Wen Chang, Wenlei Bao, Size Zheng, Ningxin Zheng, Xin Liu, Harry Dong, Yuejie Chi, Beidi Chen. Arxiv 2024.
- BUZZ: Beehive-structured Sparse KV Cache with Segmented Heavy Hitters for Efficient LLM Inference. Junqi Zhao, Zhijin Fang, Shu Li, Shaohui Yang, Shichao He. Arxiv 2024.
-
VL-Cache: Sparsity and Modality-Aware KV Cache Compression for Vision-Language Model Inference Acceleration. Dezhan Tu, Danylo Vashchilenko, Yuzhe Lu, Panpan Xu. Arxiv 2024.
-
TokenSelect: Efficient Long-Context Inference and Length Extrapolation for LLMs via Dynamic Token-Level KV Cache Selection. Wei Wu, Zhuoshi Pan, Chao Wang, Liyi Chen, Yunchu Bai, Kun Fu, Zheng Wang, Hui Xiong. Arxiv 2024.
-
Recycled Attention: Efficient inference for long-context language models. Fangyuan Xu, Tanya Goyal, Eunsol Choi. Arxiv 2024.
- Squeezed Attention: Accelerating Long Context Length LLM Inference. Coleman Hooper, Sehoon Kim, Hiva Mohammadzadeh, Monishwaran Maheswaran, June Paik, Michael W. Mahoney, Kurt Keutzer, Amir Gholami. Arxiv 2024.
- When Precision Meets Position: BFloat16 Breaks Down RoPE in Long-Context Training. Haonan Wang, Qian Liu, Chao Du, Tongyao Zhu, Cunxiao Du, Kenji Kawaguchi, Tianyu Pang. Arxiv 2024.
- Star Attention: Efficient LLM Inference over Long Sequences. Shantanu Acharya, Fei Jia, Boris Ginsburg. Arxiv 2024.
-
Pushing the Limits of LLM Inference via 2-Bit Layer-Discriminative KV Cache. Akshat Sharma, Hangliang Ding, Jianping Li, Neel Dani, Minjia Zhang. Arxiv 2024.
-
Dynamic-LLaVA: Efficient Multimodal Large Language Models via Dynamic Vision-language Context Sparsification. Wenxuan Huang, Zijie Zhai, Yunhang Shen, Shaoshen Cao, Fei Zhao, Xiangfeng Xu, Zheyu Ye, Shaohui Lin. Arxiv 2024.
-
Compressing KV Cache for Long-Context LLM Inference with Inter-Layer Attention Similarity. Da Ma, Lu Chen, Situo Zhang, Yuxun Miao, Su Zhu, Zhi Chen, Hongshen Xu, Hanqi Li, Shuai Fan, Lei Pan, Kai Yu. Arxiv 2024.
-
AIM: Adaptive Inference of Multi-Modal LLMs via Token Merging and Pruning. Yiwu Zhong, Zhuoming Liu, Yin Li, Liwei Wang. Arxiv 2024.
-
ClusterKV: Manipulating LLM KV Cache in Semantic Space for Recallable Compression. Guangda Liu, Chengwei Li, Jieru Zhao, Chenqi Zhang, Minyi Guo. Arxiv 2024.
-
BatchLLM: Optimizing Large Batched LLM Inference with Global Prefix Sharing and Throughput-oriented Token Batching. Zhen Zheng, Xin Ji, Taosong Fang, Fanghao Zhou, Chuanjie Liu, Gang Peng. Arxiv 2024.
-
Cross-Self KV Cache Pruning for Efficient Vision-Language Inference. Xiaohuan Pei, Tao Huang, Chang Xu. Arxiv 2024.
-
Ltri-LLM: Streaming Long Context Inference for LLMs with Training-Free Dynamic Triangular Attention Pattern. Hongyin Tang, Di Xiu, Lanrui Wang, Xiurui Geng, Jingang Wang, Xunliang Cai. Arxiv 2024.
-
XKV: Personalized KV Cache Memory Reduction for Long-Context LLM Inference. Weizhuo Li, Zhigang Wang, Yu Gu, Ge Yu. Arxiv 2024.
-
SparseAccelerate: Efficient Long-Context Inference for Mid-Range GPUs. James Vo. Arxiv 2024.
-
EMS: Adaptive Evict-then-Merge Strategy for Head-wise KV Cache Compression Based on Global-Local Importance. Yingxin Li, Ye Li, Yuan Meng, Xinzhu Ma, Zihan Geng, Shutao Xia, Zhi Wang. Arxiv 2024.
-
ZigZagkv: Dynamic KV Cache Compression for Long-context Modeling based on Layer Uncertainty. Meizhi Zhong, Xikai Liu, Chen Zhang, Yikun Lei, Yan Gao, Yao Hu, Kehai Chen, Min Zhang. Arxiv 2024.
-
SepLLM: Accelerate Large Language Models by Compressing One Segment into One Separator. Guoxuan Chen, Han Shi, Jiawei Li, Yihang Gao, Xiaozhe Ren, Yimeng Chen, Xin Jiang, Zhenguo Li, Weiyang Liu, Chao Huang. Arxiv 2024.
-
More Tokens, Lower Precision: Towards the Optimal Token-Precision Trade-off in KV Cache Compression. Jiebin Zhang, Dawei Zhu, Yifan Song, Wenhao Wu, Chuqiao Kuang, Xiaoguang Li, Lifeng Shang, Qun Liu, Sujian Li. Arxiv 2024.
- Boosting Long-Context Information Seeking via Query-Guided Activation Refilling. Hongjin Qian, Zheng Liu, Peitian Zhang, Zhicheng Dou, Defu Lian. Arxiv 2024.
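Many of the KV-cache papers in this subsection decide which cached keys and values to keep under a fixed memory budget during decoding. The sketch below shows one of the simplest policies that recurs above (retain a few initial "attention sink" tokens plus a recent window and evict the middle); the class, parameter names, and sizes are made up for illustration and do not reproduce any particular method.

```python
import torch

class SlidingKVCache:
    """Keep `n_sink` initial tokens plus the most recent `window` tokens; evict the middle."""

    def __init__(self, n_sink: int = 4, window: int = 1024):
        self.n_sink, self.window = n_sink, window
        self.keys = self.values = None  # (batch, heads, cached_len, head_dim)

    def append(self, k, v):
        if self.keys is None:
            self.keys, self.values = k, v
        else:
            self.keys = torch.cat([self.keys, k], dim=2)
            self.values = torch.cat([self.values, v], dim=2)
        length = self.keys.shape[2]
        if length > self.n_sink + self.window:
            keep = torch.cat([
                torch.arange(self.n_sink),                   # attention-sink prefix
                torch.arange(length - self.window, length),  # recent window
            ])
            self.keys, self.values = self.keys[:, :, keep], self.values[:, :, keep]
        return self.keys, self.values

cache = SlidingKVCache(n_sink=2, window=8)
for _ in range(20):  # simulate decoding 20 tokens
    ks, vs = cache.append(torch.randn(1, 2, 1, 64), torch.randn(1, 2, 1, 64))
print(ks.shape)  # torch.Size([1, 2, 10, 64])
```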
## 3. Recurrent Transformers

- Transformer-XL: Attentive language models beyond a fixed-length context. Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov. ACL 2019.
- Compressive Transformers for Long-Range Sequence Modelling. Jack W. Rae, Anna Potapenko, Siddhant M. Jayakumar, Timothy P. Lillicrap. Arxiv 2019.
- Memformer: The memory-augmented transformer. Qingyang Wu, Zhenzhong Lan, Kun Qian, Jing Gu, Alborz Geramifard, Zhou Yu. Arxiv 2020.
-
ERNIE-Doc: A Retrospective Long-Document Modeling Transformer. SiYu Ding, Junyuan Shang, Shuohuan Wang, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang. ACL-IJCNLP 2021.
-
Memorizing Transformers. Yuhuai Wu, Markus N. Rabe, DeLesley Hutchins, Christian Szegedy. Arxiv 2022.
- Recurrent Attention Networks for Long-text Modeling. Xianming Li, Zongxi Li, Xiaotian Luo, Haoran Xie, Xing Lee, Yingbin Zhao, Fu Lee Wang, Qing Li. ACL 2023.
- RWKV: Reinventing RNNs for the Transformer Era. Bo Peng, Eric Alcaide, Quentin Anthony, Alon Albalak, Samuel Arcadinho, Huanqi Cao, Xin Cheng, Michael Chung, Matteo Grella, Kranthi Kiran GV, Xuzheng He, Haowen Hou, Przemyslaw Kazienko, Jan Kocon, Jiaming Kong, Bartlomiej Koptyra, Hayden Lau, Krishna Sri Ipsit Mantri, Ferdinand Mom, Atsushi Saito, Xiangru Tang, Bolun Wang, Johan S. Wind, Stansilaw Wozniak, Ruichong Zhang, Zhenyuan Zhang, Qihang Zhao, Peng Zhou, Jian Zhu, Rui-Jie Zhu. Arxiv 2023.
-
Segmented Recurrent Transformer: An Efficient Sequence-to-Sequence Model. Yinghan Long, Sayeed Shafayet Chowdhury, Kaushik Roy. Arxiv 2023.
-
Scaling Transformer to 1M tokens and beyond with RMT. Aydar Bulatov, Yuri Kuratov, Mikhail S. Burtsev. Arxiv 2023.
-
Block-Recurrent Transformers. DeLesley Hutchins, Imanol Schlag, Yuhuai Wu, Ethan Dyer, Behnam Neyshabur. Arxiv 2023.
- TRAMS: Training-free Memory Selection for Long-range Language Modeling. Haofei Yu, Cunxiang Wang, Yue Zhang, Wei Bi. Arxiv 2023.
-
Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models. Soham De, Samuel L. Smith, Anushan Fernando, Aleksandar Botev, George Cristian-Muraru, Albert Gu, Ruba Haroun, Leonard Berrada, Yutian Chen, Srivatsan Srinivasan, Guillaume Desjardins, Arnaud Doucet, David Budden, Yee Whye Teh, Razvan Pascanu, Nando De Freitas, Caglar Gulcehre. Arxiv 2024.
-
Extensible Embedding: A Flexible Multipler For LLM's Context Length. Ninglu Shao, Shitao Xiao, Zheng Liu, Peitian Zhang. Arxiv 2024.
- Eagle and Finch: RWKV with Matrix-Valued States and Dynamic Recurrence. Bo Peng, Daniel Goldstein, Quentin Anthony, Alon Albalak, Eric Alcaide, Stella Biderman, Eugene Cheah, Teddy Ferdinan, Haowen Hou, Przemysław Kazienko, Kranthi Kiran GV, Jan Kocoń, Bartłomiej Koptyra, Satyapriya Krishna, Ronald McClelland Jr., Niklas Muennighoff, Fares Obeid, Atsushi Saito, Guangyu Song, Haoqin Tu, Stanisław Woźniak, Ruichong Zhang, Bingchen Zhao, Qihang Zhao, Peng Zhou, Jian Zhu, Rui-Jie Zhu. Arxiv 2024.
-
Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention. Tsendsuren Munkhdalai, Manaal Faruqui, Siddharth Gopal. Arxiv 2024.
-
RecurrentGemma: Moving Past Transformers for Efficient Open Language Models. Aleksandar Botev, Soham De, Samuel L Smith, Anushan Fernando, George-Cristian Muraru, Ruba Haroun, Leonard Berrada, Razvan Pascanu, Pier Giuseppe Sessa, Robert Dadashi, Léonard Hussenot, Johan Ferret, Sertan Girgin, Olivier Bachem, Alek Andreev, Kathleen Kenealy, Thomas Mesnard, Cassidy Hardin, Surya Bhupatiraju, Shreya Pathak, Laurent Sifre, Morgane Rivière, Mihir Sanjay Kale, Juliette Love, Pouya Tafti, Armand Joulin, Noah Fiedel, Evan Senter, Yutian Chen, Srivatsan Srinivasan, Guillaume Desjardins, David Budden, Arnaud Doucet, Sharad Vikram, Adam Paszke, Trevor Gale, Sebastian Borgeaud, Charlie Chen, Andy Brock, Antonia Paterson, Jenny Brennan, Meg Risdal, Raj Gundluru, Nesh Devanathan, Paul Mooney, Nilay Chauhan, Phil Culliton, Luiz GUStavo Martins, Elisa Bandy, David Huntsperger, Glenn Cameron, Arthur Zucker, Tris Warkentin, Ludovic Peran, Minh Giang, Zoubin Ghahramani, Clément Farabet, Koray Kavukcuoglu, Demis Hassabis, Raia Hadsell, Yee Whye Teh, Nando de Frietas. Arxiv 2024.
-
Linearizing Large Language Models. Jean Mercat, Igor Vasiljevic, Sedrick Keh, Kushal Arora, Achal Dave, Adrien Gaidon, Thomas Kollar. Arxiv 2024.
- VisualRWKV: Exploring Recurrent Neural Networks for Visual Language Models. Haowen Hou, Peigen Zeng, Fei Ma, Fei Richard Yu. Arxiv 2024.
- Just read twice: closing the recall gap for recurrent language models. Simran Arora, Aman Timalsina, Aaryan Singhal, Benjamin Spector, Sabri Eyuboglu, Xinyi Zhao, Ashish Rao, Atri Rudra, Christopher Ré. Arxiv 2024.
- Associative Recurrent Memory Transformer. Ivan Rodkin, Yuri Kuratov, Aydar Bulatov, Mikhail Burtsev. ICML 2024 Workshop.
- GoldFinch: High Performance RWKV/Transformer Hybrid with Linear Pre-Fill and Extreme KV-Cache Compression. Daniel Goldstein, Fares Obeid, Eric Alcaide, Guangyu Song, Eugene Cheah. Arxiv 2024.
- Analysis of Argument Structure Constructions in a Deep Recurrent Language Model. Pegah Ramezani, Achim Schilling, Patrick Krauss. Arxiv 2024.
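The common idea in this section is segment-level recurrence: hidden states from the previous segment are cached with gradients stopped and reused as extra context for the next one. Below is a minimal sketch in the spirit of Transformer-XL, using a single attention layer as a stand-in for a full model; it is illustrative only and omits relative position handling.

```python
import torch

def forward_with_recurrence(segments, attn, memory=None):
    """segments: list of (batch, seg_len, dim) tensors processed left to right."""
    outputs = []
    for seg in segments:
        context = seg if memory is None else torch.cat([memory, seg], dim=1)
        # attend from the current segment to [cached previous hidden states + current segment]
        out, _ = attn(query=seg, key=context, value=context, need_weights=False)
        outputs.append(out)
        memory = out.detach()  # cache this segment's hidden states, but stop gradients
    return torch.cat(outputs, dim=1), memory

attn = torch.nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
segments = list(torch.randn(1, 512, 64).split(128, dim=1))
out, memory = forward_with_recurrence(segments, attn)
print(out.shape, memory.shape)  # torch.Size([1, 512, 64]) torch.Size([1, 128, 64])
```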
## 4. State Space Models

- Mamba: Linear-Time Sequence Modeling with Selective State Spaces. Albert Gu, Tri Dao. Arxiv 2023.
-
MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts. Maciej Pióro, Kamil Ciebiera, Krystian Król, Jan Ludziejewski, Sebastian Jaszczur. Arxiv 2024.
-
MambaByte: Token-free Selective State Space Model. Junxiong Wang, Tushaar Gangavarapu, Jing Nathan Yan, Alexander M Rush. Arxiv 2024.
-
LOCOST: State-Space Models for Long Document Abstractive Summarization. Florian Le Bronnec, Song Duong, Mathieu Ravaut, Alexandre Allauzen, Nancy F. Chen, Vincent Guigue, Alberto Lumbreras, Laure Soulier, Patrick Gallinari. Arxiv 2024.
-
State Space Models as Foundation Models: A Control Theoretic Overview. Carmen Amo Alonso, Jerome Sieber, Melanie N. Zeilinger. Arxiv 2024.
-
Jamba: A Hybrid Transformer-Mamba Language Model. Opher Lieber, Barak Lenz, Hofit Bata, Gal Cohen, Jhonathan Osin, Itay Dalmedigos, Erez Safahi, Shaked Meirom, Yonatan Belinkov, Shai Shalev-Shwartz, Omri Abend, Raz Alon, Tomer Asida, Amir Bergman, Roman Glozman, Michael Gokhman, Avashalom Manevich, Nir Ratner, Noam Rozen, Erez Shwartz, Mor Zusman, Yoav Shoham. Arxiv 2024.
-
Robustifying State-space Models for Long Sequences via Approximate Diagonalization. Annan Yu, Arnur Nigmetov, Dmitriy Morozov, Michael W. Mahoney, N. Benjamin Erichson. ICLR 2024 Spotlight.
-
Zamba: A Compact 7B SSM Hybrid Model. Paolo Glorioso, Quentin Anthony, Yury Tokpanov, James Whittington, Jonathan Pilault, Adam Ibrahim, Beren Millidge. Arxiv 2024.
-
Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality. Tri Dao, Albert Gu. Arxiv 2024.
- Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling. Liliang Ren, Yang Liu, Yadong Lu, Yelong Shen, Chen Liang, Weizhu Chen. Arxiv 2024.
- An Empirical Study of Mamba-based Language Models. Roger Waleffe, Wonmin Byeon, Duncan Riach, Brandon Norick, Vijay Korthikanti, Tri Dao, Albert Gu, Ali Hatamizadeh, Sudhakar Singh, Deepak Narayanan, Garvit Kulshreshtha, Vartika Singh, Jared Casper, Jan Kautz, Mohammad Shoeybi, Bryan Catanzaro. Arxiv 2024.
-
B'MOJO: Hybrid State Space Realizations of Foundation Models with Eidetic and Fading Memory. Luca Zancato, Arjun Seshadri, Yonatan Dukler, Aditya Golatkar, Yantao Shen, Benjamin Bowman, Matthew Trager, Alessandro Achille, Stefano Soatto. Arxiv 2024.
-
MambaForGCN: Enhancing Long-Range Dependency with State Space Model and Kolmogorov-Arnold Networks for Aspect-Based Sentiment Analysis. Adamu Lawan, Juhua Pu, Haruna Yunusa, Aliyu Umar, Muhammad Lawan. Arxiv 2024.
-
Discrete Diffusion Language Model for Long Text Summarization. Do Huu Dat, Do Duc Anh, Anh Tuan Luu, Wray Buntine. Arxiv 2024.
-
ML-Mamba: Efficient Multi-Modal Large Language Model Utilizing Mamba-2. Wenjun Huang, Jianguo Hu. Arxiv 2024.
-
Jamba-1.5: Hybrid Transformer-Mamba Models at Scale. Jamba Team: Barak Lenz, Alan Arazi, Amir Bergman, Avshalom Manevich, Barak Peleg, Ben Aviram, Chen Almagor, Clara Fridman, Dan Padnos, Daniel Gissin, Daniel Jannai, Dor Muhlgay, Dor Zimberg, Edden M Gerber, Elad Dolev, Eran Krakovsky, Erez Safahi, Erez Schwartz, Gal Cohen, Gal Shachaf, Haim Rozenblum, Hofit Bata, Ido Blass, Inbal Magar, Itay Dalmedigos, Jhonathan Osin, Julie Fadlon, Maria Rozman, Matan Danos, Michael Gokhman, Mor Zusman, Naama Gidron, Nir Ratner, Noam Gat, Noam Rozen, Oded Fried, Ohad Leshno, Omer Antverg, Omri Abend, Opher Lieber, Or Dagan, Orit Cohavi, Raz Alon, Ro'i Belson, Roi Cohen, Rom Gilad, Roman Glozman, Shahar Lev, Shaked Meirom, Tal Delbari, Tal Ness, Tomer Asida, Tom Ben Gal, Tom Braude, Uriya Pumerantz, Yehoshua Cohen, Yonatan Belinkov, Yuval Globerson, Yuval Peleg Levy, Yoav Shoham. Arxiv 2024.
-
SpikingSSMs: Learning Long Sequences with Sparse and Parallel Spiking State Space Models. Shuaijie Shen, Chao Wang, Renzhuo Huang, Yan Zhong, Qinghai Guo, Zhichao Lu, Jianguo Zhang, Luziwei Leng. Arxiv 2024.
-
ReMamba: Equip Mamba with Effective Long-Sequence Modeling. Danlong Yuan, Jiahao Liu, Bei Li, Huishuai Zhang, Jingang Wang, Xunliang Cai, Dongyan Zhao. Arxiv 2024.
-
Stuffed Mamba: State Collapse and State Capacity of RNN-Based Long-Context Modeling. Yingfa Chen, Xinrong Zhang, Shengding Hu, Xu Han, Zhiyuan Liu, Maosong Sun. Arxiv 2024.
-
Taipan: Efficient and Expressive State Space Language Models with Selective Attention. Chien Van Nguyen, Huy Huu Nguyen, Thang M. Pham, Ruiyi Zhang, Hanieh Deilamsalehy, Puneet Mathur, Ryan A. Rossi, Trung Bui, Viet Dac Lai, Franck Dernoncourt, Thien Huu Nguyen. Arxiv 2024.
-
Rethinking Token Reduction for State Space Models. Zheng Zhan, Yushu Wu, Zhenglun Kong, Changdi Yang, Yifan Gong, Xuan Shen, Xue Lin, Pu Zhao, Yanzhi Wang. EMNLP 2024.
- Attamba: Attending To Multi-Token States. Yash Akhauri, Safeen Huda, Mohamed S. Abdelfattah. Arxiv 2024.
- Gated Delta Networks: Improving Mamba2 with Delta Rule. Songlin Yang, Jan Kautz, Ali Hatamizadeh. Arxiv 2024.
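State space models replace attention with a linear recurrence h_t = A h_{t-1} + B x_t, y_t = C h_t, so decoding cost is constant per token. The sketch below shows only the plain diagonal recurrence; the selective (input-dependent) parameters and hardware-aware parallel scans that make S4/Mamba-style models fast are deliberately omitted, and all parameter shapes are simplified placeholders.

```python
import torch

def diagonal_ssm(x, A, B, C):
    # x: (batch, seq_len, dim); A, B, C: (dim, state_size), with A acting diagonally
    batch, seq_len, dim = x.shape
    h = torch.zeros(batch, dim, A.shape[-1], device=x.device)  # one state vector per channel
    ys = []
    for t in range(seq_len):                   # sequential scan (parallel scan omitted)
        h = A * h + B * x[:, t].unsqueeze(-1)  # h_t = A h_{t-1} + B x_t (elementwise, diagonal A)
        ys.append((h * C).sum(-1))             # y_t = C h_t
    return torch.stack(ys, dim=1)              # (batch, seq_len, dim)

dim, state_size = 8, 16
A = torch.rand(dim, state_size) * 0.99         # decay factors < 1 keep the recurrence stable
B, C = torch.randn(dim, state_size), torch.randn(dim, state_size)
print(diagonal_ssm(torch.randn(2, 100, dim), A, B, C).shape)  # torch.Size([2, 100, 8])
```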
## 5. Length Extrapolation

- RoFormer: Enhanced Transformer with Rotary Position Embedding. Jianlin Su, Yu Lu, Shengfeng Pan, Ahmed Murtadha, Bo Wen, Yunfeng Liu. Arxiv 2021.
- Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation. Ofir Press, Noah A. Smith, Mike Lewis. ICLR 2022.
-
KERPLE: Kernelized Relative Positional Embedding for Length Extrapolation. Ta-Chung Chi, Ting-Han Fan, Peter J. Ramadge, Alexander I. Rudnicky. Arxiv 2022.
-
Dissecting Transformer Length Extrapolation via the Lens of Receptive Field Analysis. Ta-Chung Chi, Ting-Han Fan, Alexander I. Rudnicky, Peter J. Ramadge. ACL 2023.
-
A Length-Extrapolatable Transformer. Yutao Sun, Li Dong, Barun Patra, Shuming Ma, Shaohan Huang, Alon Benhaim, Vishrav Chaudhary, Xia Song, Furu Wei. ACL 2023.
- Randomized Positional Encodings Boost Length Generalization of Transformers. Anian Ruoss, Grégoire Delétang, Tim Genewein, Jordi Grau-Moya, Róbert Csordás, Mehdi Bennani, Shane Legg, Joel Veness. ACL 2023.
- The Impact of Positional Encoding on Length Generalization in Transformers. Amirhossein Kazemnejad, Inkit Padhi, Karthikeyan Natesan Ramamurthy, Payel Das, Siva Reddy. Arxiv 2023.
- Focused Transformer: Contrastive Training for Context Scaling. Szymon Tworkowski, Konrad Staniszewski, Mikołaj Pacek, Yuhuai Wu, Henryk Michalewski, Piotr Miłoś. Arxiv 2023.
-
Extending Context Window of Large Language Models via Positional Interpolation. Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian. Arxiv 2023.
-
Exploring Transformer Extrapolation. Zhen Qin, Yiran Zhong, Hui Deng. Arxiv 2023.
- LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models. Chi Han, Qifan Wang, Wenhan Xiong, Yu Chen, Heng Ji, Sinong Wang. Arxiv 2023.
- YaRN: Efficient Context Window Extension of Large Language Models. Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole. Arxiv 2023.
- PoSE: Efficient Context Window Extension of LLMs via Positional Skip-wise Training. Dawei Zhu, Nan Yang, Liang Wang, Yifan Song, Wenhao Wu, Furu Wei, Sujian Li. Arxiv 2023.
- LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models. Yukang Chen, Shengju Qian, Haotian Tang, Xin Lai, Zhijian Liu, Song Han, Jiaya Jia. ICLR 2024 Oral.
-
Scaling Laws of RoPE-based Extrapolation. Xiaoran Liu, Hang Yan, Shuo Zhang, Chenxin An, Xipeng Qiu, Dahua Lin. Arxiv 2023.
- Attention Alignment and Flexible Positional Embeddings Improve Transformer Length Extrapolation. Ta-Chung Chi, Ting-Han Fan, Alexander I. Rudnicky. Arxiv 2023.
- CoCA: Fusing position embedding with Collinear Constrained Attention for fine-tuning free context window extending. Shiyi Zhu, Jing Ye, Wei Jiang, Qi Zhang, Yifan Wu, Jianguo Li. Arxiv 2023.
-
Structured Packing in LLM Training Improves Long Context Utilization. Konrad Staniszewski, Szymon Tworkowski, Sebastian Jaszczur, Henryk Michalewski, Łukasz Kuciński, Piotr Miłoś. Arxiv 2024.
-
LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning. Hongye Jin, Xiaotian Han, Jingfeng Yang, Zhimeng Jiang, Zirui Liu, Chia-Yuan Chang, Huiyuan Chen, Xia Hu. Arxiv 2024.
-
Infinite-LLM: Efficient LLM Service for Long Context with DistAttention and Distributed KVCache. Bin Lin, Tao Peng, Chen Zhang, Minmin Sun, Lanbo Li, Hanyu Zhao, Wencong Xiao, Qi Xu, Xiafei Qiu, Shen Li, Zhigang Ji, Yong Li, Wei Lin. Arxiv 2024.
-
Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models. Zhen Qin, Weigao Sun, Dong Li, Xuyang Shen, Weixuan Sun, Yiran Zhong. Arxiv 2024.
- Extending LLMs' Context Window with 100 Samples. Yikai Zhang, Junlong Li, Pengfei Liu. Arxiv 2024.
-
E^2-LLM: Efficient and Extreme Length Extension of Large Language Models. Jiaheng Liu, Zhiqi Bai, Yuanxing Zhang, Chenchen Zhang, Yu Zhang, Ge Zhang, Jiakai Wang, Haoran Que, Yukang Chen, Wenbo Su, Tiezheng Ge, Jie Fu, Wenhu Chen, Bo Zheng. Arxiv 2024.
-
With Greater Text Comes Greater Necessity: Inference-Time Training Helps Long Text Generation. Y. Wang, D. Ma, D. Cai. Arxiv 2024.
- Two Stones Hit One Bird: Bilevel Positional Encoding for Better Length Extrapolation. Zhenyu He, Guhao Feng, Shengjie Luo, Kai Yang, Di He, Jingjing Xu, Zhi Zhang, Hongxia Yang, Liwei Wang. ICML 2024.
- Infini-gram: Scaling Unbounded n-gram Language Models to a Trillion Tokens. Jiacheng Liu, Sewon Min, Luke Zettlemoyer, Yejin Choi, Hannaneh Hajishirzi. Arxiv 2024.
- LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens. Yiran Ding, Li Lyna Zhang, Chengruidong Zhang, Yuanyuan Xu, Ning Shang, Jiahang Xu, Fan Yang, Mao Yang. Arxiv 2024.
-
Data Engineering for Scaling Language Models to 128K Context. Yao Fu, Rameswar Panda, Xinyao Niu, Xiang Yue, Hannaneh Hajishirzi, Yoon Kim, Hao Peng. Arxiv 2024.
-
Transformers Can Achieve Length Generalization But Not Robustly. Yongchao Zhou, Uri Alon, Xinyun Chen, Xuezhi Wang, Rishabh Agarwal, Denny Zhou. Arxiv 2024.
-
Long-Context Language Modeling with Parallel Context Encoding. Howard Yen, Tianyu Gao, Danqi Chen. ACL 2024.
- CLEX: Continuous Length Extrapolation for Large Language Models. Guanzheng Chen, Xin Li, Zaiqiao Meng, Shangsong Liang, Lidong Bing. Arxiv 2023.
- Resonance RoPE: Improving Context Length Generalization of Large Language Models. Suyuchen Wang, Ivan Kobyzev, Peng Lu, Mehdi Rezagholizadeh, Bang Liu. Arxiv 2024.
- Can't Remember Details in Long Documents? You Need Some R&R. Devanshu Agrawal, Shang Gao, Martin Gajek. Arxiv 2024.
- Found in the Middle: How Language Models Use Long Contexts Better via Plug-and-Play Positional Encoding. Zhenyu Zhang, Runjin Chen, Shiwei Liu, Zhewei Yao, Olatunji Ruwase, Beidi Chen, Xiaoxia Wu, Zhangyang Wang. Arxiv 2024.
-
InfLLM: Unveiling the Intrinsic Capacity of LLMs for Understanding Extremely Long Sequences with Training-Free Memory. Chaojun Xiao, Pengle Zhang, Xu Han, Guangxuan Xiao, Yankai Lin, Zhengyan Zhang, Zhiyuan Liu, Song Han, Maosong Sun. Arxiv 2024.
-
Naive Bayes-based Context Extension for Large Language Models. Jianlin Su, Murtadha Ahmed, Wenbo, Luo Ao, Mingren Zhu, Yunfeng Liu. Arxiv 2024.
-
Keyformer: KV Cache Reduction through Key Tokens Selection for Efficient Generative Inference. Muhammad Adnan, Akhil Arunkumar, Gaurav Jain, Prashant J. Nair, Ilya Soloveychik, Purushotham Kamath. Arxiv 2024.
-
In-Context Pretraining: Language Modeling Beyond Document Boundaries. Weijia Shi, Sewon Min, Maria Lomeli, Chunting Zhou, Margaret Li, Xi Victoria Lin, Noah A. Smith, Luke Zettlemoyer, Wen-tau Yih, Mike Lewis. ICLR 2024 Spotlight.
-
Effective Long-Context Scaling of Foundation Models. Wenhan Xiong, Jingyu Liu, Igor Molybog, Hejia Zhang, Prajjwal Bhargava, Rui Hou, Louis Martin, Rashi Rungta, Karthik Abinav Sankararaman, Barlas Oguz, Madian Khabsa, Han Fang, Yashar Mehdad, Sharan Narang, Kshitiz Malik, Angela Fan, Shruti Bhosale, Sergey Edunov, Mike Lewis, Sinong Wang, Hao Ma. Arxiv 2023.
-
Fewer Truncations Improve Language Modeling. Hantian Ding, Zijian Wang, Giovanni Paolini, Varun Kumar, Anoop Deoras, Dan Roth, Stefano Soatto. Arxiv 2024.
-
Length Generalization of Causal Transformers without Position Encoding. Jie Wang, Tao Ji, Yuanbin Wu, Hang Yan, Tao Gui, Qi Zhang, Xuanjing Huang, Xiaoling Wang. Arxiv 2024.
- Extending Llama-3's Context Ten-Fold Overnight. Peitian Zhang, Ninglu Shao, Zheng Liu, Shitao Xiao, Hongjin Qian, Qiwei Ye, Zhicheng Dou. Arxiv 2024.
- Long Context Alignment with Short Instructions and Synthesized Positions. Wenhao Wu, Yizhong Wang, Yao Fu, Xiang Yue, Dawei Zhu, Sujian Li. Arxiv 2024.
-
xLSTM: Extended Long Short-Term Memory. Maximilian Beck, Korbinian Pöppel, Markus Spanring, Andreas Auer, Oleksandra Prudnikova, Michael Kopp, Günter Klambauer, Johannes Brandstetter, Sepp Hochreiter. Arxiv 2024.
-
DAPE: Data-Adaptive Positional Encoding for Length Extrapolation. Chuanyang Zheng, Yihang Gao, Han Shi, Minbin Huang, Jingyao Li, Jing Xiong, Xiaozhe Ren, Michael Ng, Xin Jiang, Zhenguo Li, Yu Li. NeurIPS 2024.
-
Contextual Position Encoding: Learning to Count What's Important. Olga Golovneva, Tianlu Wang, Jason Weston, Sainbayar Sukhbaatar. Arxiv 2024.
-
Quest: Query-centric Data Synthesis Approach for Long-context Scaling of Large Language Model. Chaochen Gao, Xing Wu, Qi Fu, Songlin Hu. Arxiv 2024.
-
Position Coupling: Improving Length Generalization of Arithmetic Transformers Using Task Structure. Hanseul Cho, Jaeyoung Cha, Pranjal Awasthi, Srinadh Bhojanapalli, Anupam Gupta, Chulhee Yun. NeurIPS 2024.
-
LongSkywork: A Training Recipe for Efficiently Extending Context Length in Large Language Models. Liang Zhao, Tianwen Wei, Liang Zeng, Cheng Cheng, Liu Yang, Peng Cheng, Lijie Wang, Chenxia Li, Xuejie Wu, Bo Zhu, Yimeng Gan, Rui Hu, Shuicheng Yan, Han Fang, Yahui Zhou. Arxiv 2024.
-
Explicitly Encoding Structural Symmetry is Key to Length Generalization in Arithmetic Tasks. Mahdi Sabbaghi, George Pappas, Hamed Hassani, Surbhi Goel. Arxiv 2024.
-
An Efficient Recipe for Long Context Extension via Middle-Focused Positional Encoding. Tong Wu, Yanpeng Zhao, Zilong Zheng. NeurIPS 2024.
-
3D-RPE: Enhancing Long-Context Modeling Through 3D Rotary Position Encoding. Xindian Ma, Wenyuan Liu, Peng Zhang, Nan Xu. Arxiv 2024.
-
Mixture of In-Context Experts Enhance LLMs' Long Context Awareness. Hongzhan Lin, Ang Lv, Yuhan Chen, Chen Zhu, Yang Song, Hengshu Zhu, Rui Yan. Arxiv 2024.
-
Human-like Episodic Memory for Infinite Context LLMs. Zafeirios Fountas, Martin A Benfeghoul, Adnan Oomerjee, Fenia Christopoulou, Gerasimos Lampouras, Haitham Bou-Ammar, Jun Wang. Arxiv 2024.
-
Scaling Granite Code Models to 128K Context. Matt Stallone, Vaibhav Saxena, Leonid Karlinsky, Bridget McGinn, Tim Bula, Mayank Mishra, Adriana Meza Soria, Gaoyuan Zhang, Aditya Prasad, Yikang Shen, Saptha Surendran, Shanmukha Guttula, Hima Patel, Parameswaran Selvam, Xuan-Hong Dang, Yan Koyfman, Atin Sood, Rogerio Feris, Nirmit Desai, David D. Cox, Ruchir Puri, Rameswar Panda. Arxiv 2024.
-
ChatQA 2: Bridging the Gap to Proprietary LLMs in Long Context and RAG Capabilities. Peng Xu, Wei Ping, Xianchao Wu, Zihan Liu, Mohammad Shoeybi, Bryan Catanzaro. Arxiv 2024.
-
Efficient Solutions For An Intriguing Failure of LLMs: Long Context Window Does Not Mean LLMs Can Analyze Long Sequences Flawlessly. Peyman Hosseini, Ignacio Castro, Iacopo Ghinassi, Matthew Purver. Arxiv 2024.
-
FocusLLM: Scaling LLM's Context by Parallel Decoding. Zhenyu Li, Yike Zhang, Tengyu Pan, Yutao Sun, Zhichao Duan, Junjie Fang, Rong Han, Zixuan Wang, Jianyong Wang. Arxiv 2024.
- LongRecipe: Recipe for Efficient Long Context Generalization in Large Language Models. Zhiyuan Hu, Yuliang Liu, Jinman Zhao, Suyuchen Wang, Yan Wang, Wei Shen, Qing Gu, Anh Tuan Luu, See-Kiong Ng, Zhiwei Jiang, Bryan Hooi. Arxiv 2024.
-
E2LLM: Encoder Elongated Large Language Models for Long-Context Understanding and Reasoning. Zihan Liao, Jun Wang, Hang Yu, Lingxiao Wei, Jianguo Li, Jun Wang, Wei Zhang. Arxiv 2024.
-
Untie the Knots: An Efficient Data Augmentation Strategy for Long-Context Pre-Training in Language Models. Junfeng Tian, Da Zheng, Yang Cheng, Rui Wang, Colin Zhang, Debing Zhang. Arxiv 2024.
- PEAR: Position-Embedding-Agnostic Attention Re-weighting Enhances Retrieval-Augmented Generation with Zero Inference Overhead. Tao Tan, Yining Qian, Ang Lv, Hongzhan Lin, Songhao Wu, Yongbo Wang, Feng Wang, Jingtong Wu, Xin Lu, Rui Yan. Arxiv 2024.
-
Efficient Long-range Language Modeling with Self-supervised Causal Retrieval. Xiang Hu, Zhihao Teng, Wei Wu, Kewei Tu. Arxiv 2024.
-
A Little Goes a Long Way: Efficient Long Context Training and Inference with Partial Contexts. Suyu Ge, Xihui Lin, Yunan Zhang, Jiawei Han, Hao Peng. Arxiv 2024.
-
Extending Context Window of Large Language Models from a Distributional Perspective. Yingsheng Wu, Yuxuan Gu, Xiaocheng Feng, Weihong Zhong, Dongliang Xu, Qing Yang, Hongtao Liu, Bing Qin. Arxiv 2024.
- How to Train Long-Context Language Models (Effectively). Tianyu Gao, Alexander Wettig, Howard Yen, Danqi Chen. Arxiv 2024.
-
Differential Transformer. Tianzhu Ye, Li Dong, Yuqing Xia, Yutao Sun, Yi Zhu, Gao Huang, Furu Wei. Arxiv 2024.
-
DAPE V2: Process Attention Score as Feature Map for Length Extrapolation. Chuanyang Zheng, Yihang Gao, Han Shi, Jing Xiong, Jiankai Sun, Jingyao Li, Minbin Huang, Xiaozhe Ren, Michael Ng, Xin Jiang, Zhenguo Li, Yu Li. Arxiv 2024.
-
Why Does the Effective Context Length of LLMs Fall Short?. Chenxin An, Jun Zhang, Ming Zhong, Lei Li, Shansan Gong, Yao Luo, Jingjing Xu, Lingpeng Kong. Arxiv 2024.
-
LOGO -- Long cOntext aliGnment via efficient preference Optimization. Zecheng Tang, Zechen Sun, Juntao Li, Qiaoming Zhu, Min Zhang. Arxiv 2024.
-
Selecting Influential Samples for Long Context Alignment via Homologous Models' Guidance and Contextual Awareness Measurement. Shuzheng Si, Haozhe Zhao, Gang Chen, Yunshui Li, Kangyang Luo, Chuancheng Lv, Kaikai An, Fanchao Qi, Baobao Chang, Maosong Sun. Arxiv 2024.
-
Two are better than one: Context window extension with multi-grained self-injection. Wei Han, Pan Zhou, Soujanya Poria, Shuicheng Yan. Arxiv 2024.
- LongReward: Improving Long-context Large Language Models with AI Feedback. Jiajie Zhang, Zhongni Hou, Xin Lv, Shulin Cao, Zhenyu Hou, Yilin Niu, Lei Hou, Yuxiao Dong, Ling Feng, Juanzi Li. Arxiv 2024.
-
HoPE: A Novel Positional Encoding Without Long-Term Decay for Enhanced Context Awareness and Extrapolation. Yuhan Chen, Ang Lv, Jian Luan, Bin Wang, Wei Liu. Arxiv 2024.
-
What is Wrong with Perplexity for Long-context Language Modeling?. Lizhe Fang, Yifei Wang, Zhaoyang Liu, Chenheng Zhang, Stefanie Jegelka, Jinyang Gao, Bolin Ding, Yisen Wang. Arxiv 2024.
-
Circuit Complexity Bounds for RoPE-based Transformer Architecture. Bo Chen, Xiaoyu Li, Yingyu Liang, Jiangxuan Long, Zhenmei Shi, Zhao Song. Arxiv 2024.
-
Large Language Models Can Self-Improve in Long-context Reasoning. Siheng Li, Cheng Yang, Zesen Cheng, Lemao Liu, Mo Yu, Yujiu Yang, Wai Lam. Arxiv 2024.
-
Transformers Can Do Arithmetic with the Right Embeddings. Sean Michael McLeish, Arpit Bansal, Alex Stein, Neel Jain, John Kirchenbauer, Brian R. Bartoldson, Bhavya Kailkhura, Abhinav Bhatele, Jonas Geiping, Avi Schwarzschild, Tom Goldstein. NeurIPS 2024.
-
Arithmetic Transformers Can Length-Generalize in Both Operand Length and Count. Hanseul Cho, Jaeyoung Cha, Srinadh Bhojanapalli, Chulhee Yun. Arxiv 2024.
- Breaking the Stage Barrier: A Novel Single-Stage Approach to Long Context Extension for Large Language Models. Haoran Lian, Junmin Chen, Wei Huang, Yizhe Xiong, Wenping Hu, Guiguang Ding, Hui Chen, Jianwei Niu, Zijia Lin, Fuzheng Zhang, Di Zhang. Arxiv 2024.
- Unleashing Infinite-Length Input Capacity for Large-scale Language Models with Self-Controlled Memory System. Xinnian Liang, Bing Wang, Hui Huang, Shuangzhi Wu, Peihao Wu, Lu Lu, Zejun Ma, Zhoujun Li. Arxiv 2023.
- MemoryBank: Enhancing Large Language Models with Long-Term Memory. Wanjun Zhong, Lianghong Guo, Qiqi Gao, He Ye, Yanlin Wang. Arxiv 2023.
-
Improve Long-term Memory Learning Through Rescaling the Error Temporally. Shida Wang, Zhanglu Yan. Arxiv 2023.
-
Recursively Summarizing Enables Long-Term Dialogue Memory in Large Language Models. Qingyue Wang, Liang Ding, Yanan Cao, Zhiliang Tian, Shi Wang, Dacheng Tao, Li Guo. Arxiv 2023.
-
Empowering Working Memory for Large Language Model Agents. Jing Guo, Nan Li, Jianchuan Qi, Hang Yang, Ruiqiao Li, Yuzhen Feng, Si Zhang, Ming Xu. Arxiv 2024.
-
Evolving Large Language Model Assistant with Long-Term Conditional Memory. Ruifeng Yuan, Shichao Sun, Zili Wang, Ziqiang Cao, Wenjie Li. Arxiv 2024.
-
Commonsense-augmented Memory Construction and Management in Long-term Conversations via Context-aware Persona Refinement. Hana Kim, Kai Tzu-iunn Ong, Seoyeon Kim, Dongha Lee, Jinyoung Yeo. Arxiv 2024.
-
A Human-Inspired Reading Agent with Gist Memory of Very Long Contexts. Kuang-Huei Lee, Xinyun Chen, Hiroki Furuta, John Canny, Ian Fischer. Arxiv 2024.
-
Steering Conversational Large Language Models for Long Emotional Support Conversations. Navid Madani, Sougata Saha, Rohini Srihari. Arxiv 2024.
-
SPAR: Personalized Content-Based Recommendation via Long Engagement Attention. Chiyu Zhang, Yifei Sun, Jun Chen, Jie Lei, Muhammad Abdul-Mageed, Sinong Wang, Rong Jin, Sem Park, Ning Yao, Bo Long. Arxiv 2024.
-
Compress to Impress: Unleashing the Potential of Compressive Memory in Real-World Long-Term Conversations. Nuo Chen, Hongguang Li, Juhua Huang, Baoyuan Wang, Jia Li. Arxiv 2024.
-
StreamingDialogue: Prolonged Dialogue Learning via Long Context Compression with Minimal Losses. Jia-Nan Li, Quan Tu, Cunli Mao, Zhengtao Yu, Ji-Rong Wen, Rui Yan. Arxiv 2024.
-
Prompts As Programs: A Structure-Aware Approach to Efficient Compile-Time Prompt Optimization. Tobias Schnabel, Jennifer Neville. Arxiv 2024.
- HMT: Hierarchical Memory Transformer for Long Context Language Processing. Zifan He, Zongyue Qin, Neha Prakriya, Yizhou Sun, Jason Cong. Arxiv 2024.
- SirLLM: Streaming Infinite Retentive LLM. Yao Yao, Zuchao Li, Hai Zhao. Arxiv 2024.
- Toward Conversational Agents with Context and Time Sensitive Long-term Memory. Nick Alonso, Tomás Figliolia, Anthony Ndirango, Beren Millidge. Arxiv 2024.
-
Position Debiasing Fine-Tuning for Causal Perception in Long-Term Dialogue. Shixuan Fan, Wei Wei, Wendi Li, Xian-Ling Mao, Wenfeng Xie, Dangyang Chen. Arxiv 2024.
-
Enhancing Long-Term Memory using Hierarchical Aggregate Tree for Retrieval Augmented Generation. Aadharsh Aadhithya A, Sachin Kumar S, Soman K.P. Arxiv 2024.
-
Suri: Multi-constraint Instruction Following for Long-form Text Generation. Chau Minh Pham, Simeng Sun, Mohit Iyyer. Arxiv 2024.
- HiAgent: Hierarchical Working Memory Management for Solving Long-Horizon Agent Tasks with Large Language Model. Mengkang Hu, Tianxing Chen, Qiguang Chen, Yao Mu, Wenqi Shao, Ping Luo. Arxiv 2024.
- CreDes: Causal Reasoning Enhancement and Dual-End Searching for Solving Long-Range Reasoning Problems using LLMs. Kangsheng Wang, Xiao Zhang, Hao Liu, Songde Han, Huimin Ma, Tianyu Hu. Arxiv 2024.
-
Walking Down the Memory Maze: Beyond Context Limit through Interactive Reading. Howard Chen, Ramakanth Pasunuru, Jason Weston, Asli Celikyilmaz. Arxiv 2023.
-
Attendre: Wait To Attend By Retrieval With Evicted Queries in Memory-Based Transformers for Long Context Processing. Zi Yang, Nan Hua. Arxiv 2024.
-
BGE Landmark Embedding: A Chunking-Free Embedding Method For Retrieval Augmented Long-Context Large Language Models. Kun Luo, Zheng Liu, Shitao Xiao, Kang Liu. Arxiv 2024.
- Adaptive-RAG: Learning to Adapt Retrieval-Augmented Large Language Models through Question Complexity. Soyeong Jeong, Jinheon Baek, Sukmin Cho, Sung Ju Hwang, Jong C. Park. Arxiv 2024.
- RQ-RAG: Learning to Refine Queries for Retrieval Augmented Generation. Chi-Min Chan, Chunpu Xu, Ruibin Yuan, Hongyin Luo, Wei Xue, Yike Guo, Jie Fu. Arxiv 2024.
-
Improving Retrieval Augmented Open-Domain Question-Answering with Vectorized Contexts. Zhuo Chen, Xinyu Wang, Yong Jiang, Pengjun Xie, Fei Huang, Kewei Tu. Arxiv 2024.
-
Superposition Prompting: Improving and Accelerating Retrieval-Augmented Generation. Thomas Merth, Qichen Fu, Mohammad Rastegari, Mahyar Najibi. Arxiv 2024.
-
Multi-view Content-aware Indexing for Long Document Retrieval. Kuicai Dong, Derrick Goh Xin Deik, Yi Quan Lee, Hao Zhang, Xiangyang Li, Cong Zhang, Yong Liu. Arxiv 2024.
-
Retrieval Head Mechanistically Explains Long-Context Factuality. Wenhao Wu, Yizhong Wang, Guangxuan Xiao, Hao Peng, Yao Fu. Arxiv 2024.
-
FlashBack: Efficient Retrieval-Augmented Language Modeling for Long Context Inference. Runheng Liu, Xingchen Xiao, Heyan Huang, Zewen Chi, Zhijing Wu. Arxiv 2024.
-
Feature-Adaptive and Data-Scalable In-Context Learning. Jiahao Li, Quan Wang, Licheng Zhang, Guoqing Jin, Zhendong Mao. Arxiv 2024.
- KG-RAG: Bridging the Gap Between Knowledge and Creativity. Diego Sanmartin. Arxiv 2024.
- HippoRAG: Neurobiologically Inspired Long-Term Memory for Large Language Models. Bernal Jiménez Gutiérrez, Yiheng Shu, Yu Gu, Michihiro Yasunaga, Yu Su. Arxiv 2024.
- Implicit In-context Learning. Zhuowei Li, Zihao Xu, Ligong Han, Yunhe Gao, Song Wen, Di Liu, Hao Wang, Dimitris N. Metaxas. Arxiv 2024.
-
Are Long-LLMs A Necessity For Long-Context Tasks?. Hongjin Qian, Zheng Liu, Peitian Zhang, Kelong Mao, Yujia Zhou, Xu Chen, Zhicheng Dou. Arxiv 2024.
-
Accelerating Inference of Retrieval-Augmented Generation via Sparse Context Selection. Yun Zhu, Jia-Chen Gu, Caitlin Sikora, Ho Ko, Yinxiao Liu, Chu-Cheng Lin, Lei Shu, Liangchen Luo, Lei Meng, Bang Liu, Jindong Chen. Arxiv 2024.
-
Is In-Context Learning Sufficient for Instruction Following in LLMs?. Hao Zhao, Maksym Andriushchenko, Francesco Croce, Nicolas Flammarion. Arxiv 2024.
-
FragRel: Exploiting Fragment-level Relations in the External Memory of Large Language Models. Xihang Yue, Linchao Zhu, Yi Yang. Arxiv 2024.
-
Multi-Head RAG: Solving Multi-Aspect Problems with LLMs. Maciej Besta, Ales Kubicek, Roman Niggli, Robert Gerstenberger, Lucas Weitzendorf, Mingyuan Chi, Patrick Iff, Joanna Gajda, Piotr Nyczyk, Jürgen Müller, Hubert Niewiadomski, Marcin Chrapek, Michał Podstawski, Torsten Hoefler. Arxiv 2024.
-
Demonstration Notebook: Finding the Most Suited In-Context Learning Example from Interactions. Yiming Tang, Bin Dong. Arxiv 2024.
-
Retrieval Meets Reasoning: Dynamic In-Context Editing for Long-Text Understanding. Weizhi Fei, Xueyan Niu, Guoqing Xie, Yanhua Zhang, Bo Bai, Lei Deng, Wei Han. Arxiv 2024.
-
FoRAG: Factuality-optimized Retrieval Augmented Generation for Web-enhanced Long-form Question Answering. Tianchi Cai, Zhiwen Tan, Xierui Song, Tao Sun, Jiyan Jiang, Yunqi Xu, Yinger Zhang, Jinjie Gu. Arxiv 2024.
-
Can Few-shot Work in Long-Context? Recycling the Context to Generate Demonstrations. Arie Cattan, Alon Jacovi, Alex Fabrikant, Jonathan Herzig, Roee Aharoni, Hannah Rashkin, Dror Marcus, Avinatan Hassidim, Yossi Matias, Idan Szpektor, Avi Caciularu. Arxiv 2024.
-
LongRAG: Enhancing Retrieval-Augmented Generation with Long-context LLMs. Ziyan Jiang, Xueguang Ma, Wenhu Chen. Arxiv 2024.
-
Multimodal Task Vectors Enable Many-Shot Multimodal In-Context Learning. Brandon Huang, Chancharik Mitra, Assaf Arbelle, Leonid Karlinsky, Trevor Darrell, Roei Herzig. Arxiv 2024.
-
From Artificial Needles to Real Haystacks: Improving Retrieval Capabilities in LLMs by Finetuning on Synthetic Data. Zheyang Xiong, Vasilis Papageorgiou, Kangwook Lee, Dimitris Papailiopoulos. Arxiv 2024.
-
Memory3: Language Modeling with Explicit Memory. Hongkang Yang, Zehao Lin, Wenjin Wang, Hao Wu, Zhiyu Li, Bo Tang, Wenqiang Wei, Jinbo Wang, Zeyun Tang, Shichao Song, Chenyang Xi, Yu Yu, Kai Chen, Feiyu Xiong, Linpeng Tang, Weinan E. Arxiv 2024.
-
Speculative RAG: Enhancing Retrieval Augmented Generation through Drafting. Zilong Wang, Zifeng Wang, Long Le, Huaixiu Steven Zheng, Swaroop Mishra, Vincent Perot, Yuwei Zhang, Anush Mattapalli, Ankur Taly, Jingbo Shang, Chen-Yu Lee, Tomas Pfister. Arxiv 2024.
-
Retrieve, Summarize, Plan: Advancing Multi-hop Question Answering with an Iterative Approach. Zhouyu Jiang, Mengshu Sun, Lei Liang, Zhiqiang Zhang. Arxiv 2024.
-
R^2AG: Incorporating Retrieval Information into Retrieval Augmented Generation. Fuda Ye, Shuangyin Li, Yongqi Zhang, Lei Chen. Arxiv 2024.
- Making Long-Context Language Models Better Multi-Hop Reasoners. Yanyang Li, Shuo Liang, Michael R. Lyu, Liwei Wang. ACL 2024.
- Large Language Models Know What Makes Exemplary Contexts. Quanyu Long, Jianda Chen, Wenya Wang, Sinno Jialin Pan. Arxiv 2024.
- RAGChecker: A Fine-grained Framework for Diagnosing Retrieval-Augmented Generation. Dongyu Ru, Lin Qiu, Xiangkun Hu, Tianhang Zhang, Peng Shi, Shuaichen Chang, Cheng Jiayang, Cunxiang Wang, Shichao Sun, Huanyu Li, Zizhao Zhang, Binjie Wang, Jiarong Jiang, Tong He, Zhiguo Wang, Pengfei Liu, Yue Zhang, Zheng Zhang. Arxiv 2024.
- Writing in the Margins: Better Inference Pattern for Long Context Retrieval. Melisa Russak, Umar Jamil, Christopher Bryant, Kiran Kamble, Axel Magnuson, Mateusz Russak, Waseem AlShikh. Arxiv 2024.
- MemLong: Memory-Augmented Retrieval for Long Text Modeling. Weijie Liu, Zecheng Tang, Juntao Li, Kehai Chen, Min Zhang. Arxiv 2024.
-
In Defense of RAG in the Era of Long-Context Language Models. Tan Yu, Anbang Xu, Rama Akkiraju. Arxiv 2024.
-
MemoRAG: Moving towards Next-Gen RAG Via Memory-Inspired Knowledge Discovery. Hongjin Qian, Peitian Zhang, Zheng Liu, Kelong Mao, Zhicheng Dou. Arxiv 2024.
-
You Only Use Reactive Attention Slice For Long Context Retrieval. Yun Joon Soh, Hanxian Huang, Yuandong Tian, Jishen Zhao. Arxiv 2024.
-
SMART-RAG: Selection using Determinantal Matrices for Augmented Retrieval. Jiatao Li, Xinyu Hu, Xiaojun Wan. Arxiv 2024.
-
Lighter And Better: Towards Flexible Context Adaptation For Retrieval Augmented Generation. Zheng Liu, Chenyuan Wu, Ninglu Shao, Shitao Xiao, Chaozhuo Li, Defu Lian. CIKM 2024.
-
Bridging Context Gaps: Leveraging Coreference Resolution for Long Contextual Understanding. Yanming Liu, Xinyue Peng, Jiannan Cao, Shi Bo, Yanxin Shen, Xuhong Zhang, Sheng Cheng, Xun Wang, Jianwei Yin, Tianyu Du. Arxiv 2024.
-
ALR2: A Retrieve-then-Reason Framework for Long-context Question Answering. Huayang Li, Pat Verga, Priyanka Sen, Bowen Yang, Vijay Viswanathan, Patrick Lewis, Taro Watanabe, Yixuan Su. Arxiv 2024.
-
Inference Scaling for Long-Context Retrieval Augmented Generation. Zhenrui Yue, Honglei Zhuang, Aijun Bai, Kai Hui, Rolf Jagerman, Hansi Zeng, Zhen Qin, Dong Wang, Xuanhui Wang, Michael Bendersky. Arxiv 2024.
-
GARLIC: LLM-Guided Dynamic Progress Control with Hierarchical Weighted Graph for Long Document QA. Xinyu Wang, Yanzheng Xiang, Lin Gui, Yulan He. Arxiv 2024.
-
Long-Context LLMs Meet RAG: Overcoming Challenges for Long Inputs in RAG. Bowen Jin, Jinsung Yoon, Jiawei Han, Sercan O. Arik. Arxiv 2024.
-
Astute RAG: Overcoming Imperfect Retrieval Augmentation and Knowledge Conflicts for Large Language Models. Fei Wang, Xingchen Wan, Ruoxi Sun, Jiefeng Chen, Sercan Ö. Arık. Arxiv 2024.
-
SEGMENT+: Long Text Processing with Short-Context Language Models. Wei Shi, Shuang Li, Kerun Yu, Jinglei Chen, Zujie Liang, Xinhui Wu, Yuxi Qian, Feng Wei, Bo Zheng, Jiaqing Liang, Jiangjie Chen, Yanghua Xiao. Arxiv 2024.
- Graph of Records: Boosting Retrieval Augmented Generation for Long-context Summarization with Graphs. Haozhen Zhang, Tao Feng, Jiaxuan You. Arxiv 2024.
-
ChuLo: Chunk-Level Key Information Representation for Long Document Processing. Yan Li, Caren Han, Yue Dai, Feiqi Cao. Arxiv 2024.
-
TurboRAG: Accelerating Retrieval-Augmented Generation with Precomputed KV Caches for Chunked Text. Songshuo Lu, Hua Wang, Yutian Rong, Zhi Chen, Yaohua Tang. Arxiv 2024.
-
LLM×MapReduce: Simplified Long-Sequence Processing using Large Language Models. Zihan Zhou, Chong Li, Xinyi Chen, Shuo Wang, Yu Chao, Zhili Li, Haoyu Wang, Rongqiao An, Qi Shi, Zhixing Tan, Xu Han, Xiaodong Shi, Zhiyuan Liu, Maosong Sun. Arxiv 2024.
-
Enhancing Long Context Performance in LLMs Through Inner Loop Query Mechanism. Yimin Tang, Yurong Xu, Ning Yan, Masood Mortazavi. NeurIPS 2024.
-
LongRAG: A Dual-Perspective Retrieval-Augmented Generation Paradigm for Long-Context Question Answering. Qingfei Zhao, Ruobing Wang, Yukuo Cen, Daren Zha, Shicheng Tan, Yuxiao Dong, Jie Tang. Arxiv 2024.
- Reducing Distraction in Long-Context Language Models by Focused Learning. Zijun Wu, Bingyuan Liu, Ran Yan, Lei Chen, Thomas Delteil. Arxiv 2024.
-
LongAgent: Scaling Language Models to 128k Context through Multi-Agent Collaboration. Jun Zhao, Can Zu, Hao Xu, Yi Lu, Wei He, Yiwen Ding, Tao Gui, Qi Zhang, Xuanjing Huang. Arxiv 2024.
-
A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis. Izzeddin Gur, Hiroki Furuta, Austin V Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, Aleksandra Faust. ICLR 2024 Oral.
-
PEARL: Prompting Large Language Models to Plan and Execute Actions Over Long Documents. Simeng Sun, Yang Liu, Shuohang Wang, Dan Iter, Chenguang Zhu, Mohit Iyyer. EACL 2024.
- AMAGO: Scalable In-Context Reinforcement Learning for Adaptive Agents. Jake Grigsby, Linxi Fan, Yuke Zhu. ICLR 2024 Spotlight.
-
Chain of Agents: Large Language Models Collaborating on Long-Context Tasks. Yusen Zhang, Ruoxi Sun, Yanfei Chen, Tomas Pfister, Rui Zhang, Sercan Ö. Arik. Arxiv 2024.
-
GraphReader: Building Graph-based Agent to Enhance Long-Context Abilities of Large Language Models. Shilong Li, Yancheng He, Hangyu Guo, Xingyuan Bu, Ge Bai, Jie Liu, Jiaheng Liu, Xingwei Qu, Yangguang Li, Wanli Ouyang, Wenbo Su, Bo Zheng. Arxiv 2024.
-
Synergistic Multi-Agent Framework with Trajectory Learning for Knowledge-Intensive Tasks. Shengbin Yue, Siyuan Wang, Wei Chen, Xuanjing Huang, Zhongyu Wei. Arxiv 2024.
-
Optimus-1: Hybrid Multimodal Memory Empowered Agents Excel in Long-Horizon Tasks. Zaijing Li, Yuquan Xie, Rui Shao, Gongwei Chen, Dongmei Jiang, Liqiang Nie. Arxiv 2024.
- Adapting Language Models to Compress Contexts. Alexis Chevalier, Alexander Wettig, Anirudh Ajith, Danqi Chen. Arxiv 2023.
- Compressing Context to Enhance Inference Efficiency of Large Language Models. Yucheng Li, Bo Dong, Chenghua Lin, Frank Guerin. Arxiv 2023.
- LLMLingua: Compressing Prompts for Accelerated Inference of Large Language Models. Huiqiang Jiang, Qianhui Wu, Chin-Yew Lin, Yuqing Yang, Lili Qiu. Arxiv 2023.
- LongLLMLingua: Accelerating and Enhancing LLMs in Long Context Scenarios via Prompt Compression. Huiqiang Jiang, Qianhui Wu, Xufang Luo, Dongsheng Li, Chin-Yew Lin, Yuqing Yang, Lili Qiu. Arxiv 2023.
-
System 2 Attention (is something you might need too). Jason Weston, Sainbayar Sukhbaatar. Arxiv 2023.
-
DSFormer: Effective Compression of Text-Transformers by Dense-Sparse Weight Factorization. Rahul Chand, Yashoteja Prabhu, Pratyush Kumar. Arxiv 2023.
-
Soaring from 4K to 400K: Extending LLM's Context with Activation Beacon. Peitian Zhang, Zheng Liu, Shitao Xiao, Ninglu Shao, Qiwei Ye, Zhicheng Dou. Arxiv 2024.
- Flexibly Scaling Large Language Models Contexts Through Extensible Tokenization. Ninglu Shao, Shitao Xiao, Zheng Liu, Peitian Zhang. Arxiv 2024.
- Say More with Less: Understanding Prompt Learning Behaviors through Gist Compression. Xinze Li, Zhenghao Liu, Chenyan Xiong, Shi Yu, Yukun Yan, Shuo Wang, Ge Yu. Arxiv 2024.
-
Learning to Compress Prompt in Natural Language Formats. Yu-Neng Chuang, Tianwei Xing, Chia-Yuan Chang, Zirui Liu, Xun Chen, Xia Hu. Arxiv 2024.
-
Dynamic Memory Compression: Retrofitting LLMs for Accelerated Inference. Piotr Nawrot, Adrian Łańcucki, Marcin Chochowski, David Tarjan, Edoardo M. Ponti. Arxiv 2024.
-
LLMLingua-2: Data Distillation for Efficient and Faithful Task-Agnostic Prompt Compression. Zhuoshi Pan, Qianhui Wu, Huiqiang Jiang, Menglin Xia, Xufang Luo, Jue Zhang, Qingwei Lin, Victor Rühle, Yuqing Yang, Chin-Yew Lin, H. Vicky Zhao, Lili Qiu, Dongmei Zhang. Arxiv 2024.
- PCToolkit: A Unified Plug-and-Play Prompt Compression Toolkit of Large Language Models. Jinyi Li, Yihuai Lan, Lei Wang, Hao Wang. Arxiv 2024.
- Compressed Context Memory for Online Language Model Interaction. Jang-Hyun Kim, Junyoung Yeom, Sangdoo Yun, Hyun Oh Song. ICLR 2024.
-
Compressing Large Language Models by Streamlining the Unimportant Layer. Xiaodong Chen, Yuxuan Hu, Jing Zhang. Arxiv 2024.
-
PROMPT-SAW: Leveraging Relation-Aware Graphs for Textual Prompt Compression. Muhammad Asif Ali, Zhengping Li, Shu Yang, Keyuan Cheng, Yang Cao, Tianhao Huang, Lijie Hu, Lu Yu, Di Wang. Arxiv 2024.
-
Training LLMs over Neurally Compressed Text. Brian Lester, Jaehoon Lee, Alex Alemi, Jeffrey Pennington, Adam Roberts, Jascha Sohl-Dickstein, Noah Constant. Arxiv 2024.
-
Rethinking Kullback-Leibler Divergence in Knowledge Distillation for Large Language Models. Taiqiang Wu, Chaofan Tao, Jiahao Wang, Zhe Zhao, Ngai Wong. Arxiv 2024.
-
Adapting LLMs for Efficient Context Processing through Soft Prompt Compression. Cangqing Wang, Yutian Yang, Ruisi Li, Dan Sun, Ruicong Cai, Yuzhu Zhang, Chengqian Fu, Lillian Floyd. Arxiv 2024.
-
Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs. Suyu Ge, Yunan Zhang, Liyuan Liu, Minjia Zhang, Jiawei Han, Jianfeng Gao. ICLR 2024 Oral.
-
LLoCO: Learning Long Contexts Offline. Sijun Tan, Xiuyu Li, Shishir Patil, Ziyang Wu, Tianjun Zhang, Kurt Keutzer, Joseph E. Gonzalez, Raluca Ada Popa. Arxiv 2024.
- In-Context Learning State Vector with Inner and Momentum Optimization. Dongfang Li, Zhenyu Liu, Xinshuo Hu, Zetian Sun, Baotian Hu, Min Zhang. Arxiv 2024.
-
Compressing Long Context for Enhancing RAG with AMR-based Concept Distillation. Kaize Shi, Xueyao Sun, Qing Li, Guandong Xu. Arxiv 2024.
-
Improving Long Text Understanding with Knowledge Distilled from Summarization Model. Yan Liu, Yazheng Yang, Xiaokang Chen. Arxiv 2024.
-
OpenBA-V2: Reaching 77.3% High Compression Ratio with Fast Multi-Stage Pruning. Dan Qiao, Yi Su, Pinzheng Wang, Jing Ye, Wenjing Xie, Yuechi Zhou, Yuyang Ding, Zecheng Tang, Jikai Wang, Yixin Ji, Yue Wang, Pei Guo, Zechen Sun, Zikang Zhang, Juntao Li, Pingfu Chao, Wenliang Chen, Guohong Fu, Guodong Zhou, Qiaoming Zhu, Min Zhang. Arxiv 2024.
- Feature-based Low-Rank Compression of Large Language Models via Bayesian Optimization. Yixin Ji, Yang Xiang, Juntao Li, Wei Chen, Zhongyi Liu, Kehai Chen, Min Zhang. Arxiv 2024.
- Imagination Augmented Generation: Learning to Imagine Richer Context for Question Answering over Large Language Models. Huanxuan Liao, Shizhu He, Yao Xu, Yuanzhe Zhang, Kang Liu, Shengping Liu, Jun Zhao. Arxiv 2024.
- Your Transformer is Secretly Linear. Anton Razzhigaev, Matvey Mikhalchuk, Elizaveta Goncharova, Nikolai Gerasimenko, Ivan Oseledets, Denis Dimitrov, Andrey Kuznetsov. Arxiv 2024.
- xRAG: Extreme Context Compression for Retrieval-augmented Generation with One Token. Xin Cheng, Xun Wang, Xingxing Zhang, Tao Ge, Si-Qing Chen, Furu Wei, Huishuai Zhang, Dongyan Zhao. Arxiv 2024.
-
SelfCP: Compressing Long Prompt to 1/12 Using the Frozen Large Language Model Itself. Jun Gao. Arxiv 2024.
-
Compressing Lengthy Context With UltraGist. Peitian Zhang, Zheng Liu, Shitao Xiao, Ninglu Shao, Qiwei Ye, Zhicheng Dou. Arxiv 2024.
-
XL3M: A Training-free Framework for LLM Length Extension Based on Segment-wise Inference. Shengnan Wang, Youhui Bai, Lin Zhang, Pingyi Zhou, Shixiong Zhao, Gong Zhang, Sen Wang, Renhai Chen, Hua Xu, Hongwei Sun. Arxiv 2024.
-
In-context Autoencoder for Context Compression in a Large Language Model. Tao Ge, Hu Jing, Lei Wang, Xun Wang, Si-Qing Chen, Furu Wei. ICLR 2024.
- Retaining Key Information under High Compression Ratios: Query-Guided Compressor for LLMs. Zhiwei Cao, Qian Cao, Yu Lu, Ningxin Peng, Luyang Huang, Shanbo Cheng, Jinsong Su. Arxiv 2024.
- Recurrent Context Compression: Efficiently Expanding the Context Window of LLM. Chensen Huang, Guibo Zhu, Xuepeng Wang, Yifei Luo, Guojing Ge, Haoran Chen, Dong Yi, Jinqiao Wang. Arxiv 2024.
- LoCoCo: Dropping In Convolutions for Long Context Compression. Ruisi Cai, Yuandong Tian, Zhangyang Wang, Beidi Chen. Arxiv 2024.
-
Evaluating Zero-Shot Long-Context LLM Compression. Chenyu Wang, Yihan Wang. Arxiv 2024.
-
InstructCMP: Length Control in Sentence Compression through Instruction-based Large Language Models. Juseon-Do, Jingun Kwon, Hidetaka Kamigaito, Manabu Okumura. Arxiv 2024.
- AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration. Ji Lin, Jiaming Tang, Haotian Tang, Shang Yang, Wei-Ming Chen, Wei-Chen Wang, Guangxuan Xiao, Xingyu Dang, Chuang Gan, Song Han. MLSys 2024 Best Paper Award.
-
In-Context Former: Lightning-fast Compressing Context for Large Language Model. Xiangfeng Wang, Zaiyi Chen, Zheyong Xie, Tong Xu, Yongyi He, Enhong Chen. Arxiv 2024.
-
UIO-LLMs: Unbiased Incremental Optimization for Long-Context LLMs. Wenhao Li, Mingbao Lin, Yunshan Zhong, Shuicheng Yan, Rongrong Ji. Arxiv 2024.
-
PromptIntern: Saving Inference Costs by Internalizing Recurrent Prompt during Large Language Model Fine-tuning. Jiaru Zou, Mengyu Zhou, Tao Li, Shi Han, Dongmei Zhang. Arxiv 2024.
-
Concise and Precise Context Compression for Tool-Using Language Models. Yang Xu, Yunlong Feng, Honglin Mu, Yutai Hou, Yitong Li, Xinghao Wang, Wanjun Zhong, Zhongyang Li, Dandan Tu, Qingfu Zhu, Min Zhang, Wanxiang Che. Arxiv 2024.
-
Context Embeddings for Efficient Answer Generation in RAG. David Rau, Shuai Wang, Hervé Déjean, Stéphane Clinchant. Arxiv 2024.
-
Characterizing Prompt Compression Methods for Long Context Inference. Siddharth Jha, Lutfi Eren Erdogan, Sehoon Kim, Kurt Keutzer, Amir Gholami. Arxiv 2024.
-
Fundamental Limits of Prompt Compression: A Rate-Distortion Framework for Black-Box Language Models. Adway Girish, Alliot Nagle, Marco Bondaschi, Michael Gastpar, Ashok Vardhan Makkuva, Hyeji Kim. Arxiv 2024.
-
QUITO: Accelerating Long-Context Reasoning through Query-Guided Context Compression. Wenshan Wang, Yihang Wang, Yixing Fan, Huaming Liao, Jiafeng Guo. Arxiv 2024.
-
SentenceVAE: Faster, Longer and More Accurate Inference with Next-sentence Prediction for Large Language Models. Hongjun An, Yifan Chen, Xiaozhen Qiao, Zhe Sun, Xuelong Li. Arxiv 2024.
-
QUITO-X: An Information Bottleneck-based Compression Algorithm with Cross-Attention. Yihang Wang, Xu Huang, Bowen Tian, Yixing Fan, Jiafeng Guo. Arxiv 2024.
-
AdaComp: Extractive Context Compression with Adaptive Predictor for Retrieval-Augmented Large Language Models. Qianchi Zhang, Hainan Zhang, Liang Pang, Hongwei Zheng, Zhiming Zheng. Arxiv 2024.
-
Prompt Compression with Context-Aware Sentence Encoding for Fast and Improved LLM Inference. Barys Liskavets, Maxim Ushakov, Shuvendu Roy, Mark Klibanov, Ali Etemad, Shane Luke. Arxiv 2024.
- Familiarity-aware Evidence Compression for Retrieval Augmented Generation. Dongwon Jung, Qin Liu, Tenghao Huang, Ben Zhou, Muhao Chen. Arxiv 2024.
-
TACO-RL: Task Aware Prompt Compression Optimization with Reinforcement Learning. Shivam Shandilya, Menglin Xia, Supriyo Ghosh, Huiqiang Jiang, Jue Zhang, Qianhui Wu, Victor Rühle. Arxiv 2024.
-
Parse Trees Guided LLM Prompt Compression. Wenhao Mao, Chengbin Hou, Tianyu Zhang, Xinyu Lin, Ke Tang, Hairong Lv. Arxiv 2024.
-
FineZip: Pushing the Limits of Large Language Models for Practical Lossless Text Compression. Fazal Mittu, Yihuan Bu, Akshat Gupta, Ashok Devireddy, Alp Eren Ozdarendeli, Anant Singh, Gopala Anumanchipalli. Arxiv 2024.
-
Perception Compressor: A training-free prompt compression method in long context scenarios. Jiwei Tang, Jin Xu, Tingwei Lu, Hai Lin, Yiming Zhao, Hai-Tao Zheng. Arxiv 2024.
-
From Reading to Compressing: Exploring the Multi-document Reader for Prompt Compression. Eunseong Choi, Sunkyung Lee, Minjin Choi, June Park, Jongwuk Lee. EMNLP 2024.
-
Selection-p: Self-Supervised Task-Agnostic Prompt Compression for Faithfulness and Transferability. Tsz Ting Chung, Leyang Cui, Lemao Liu, Xinting Huang, Shuming Shi, Dit-Yan Yeung. EMNLP 2024.
-
Style-Compress: An LLM-Based Prompt Compression Framework Considering Task-Specific Styles. Xiao Pu, Tianxing He, Xiaojun Wan. EMNLP 2024.
-
SpeechPrune: Context-aware Token Pruning for Speech Information Retrieval. Yueqian Lin, Yuzhe Fu, Jingyang Zhang, Yudong Liu, Jianyi Zhang, Jingwei Sun, Hai "Helen" Li, Yiran Chen. Arxiv 2024.
-
FTP: A Fine-grained Token-wise Pruner for Large Language Models via Token Routing. Zekai Li, Jintu Zheng, Ji Liu, Han Liu, Haowei Zhu, Zeping Li, Fuwei Yang, Haiduo Huang, Jinzhang Peng, Dong Li, Lu Tian, Emad Barsoum. Arxiv 2024.
-
CSR: Achieving 1 Bit Key-Value Cache via Sparse Representation. Hongxuan Zhang, Yao Zhao, Jiaqi Zheng, Chenyi Zhuang, Jinjie Gu, Guihai Chen. AAAI 2025.
-
EXIT: Context-Aware Extractive Compression for Enhancing Retrieval-Augmented Generation. Taeho Hwang, Sukmin Cho, Soyeong Jeong, Hoyun Song, SeungYoon Han, Jong C. Park. Arxiv 2024.
- EasyAnimate: A High-Performance Long Video Generation Method based on Transformer Architecture. Jiaqi Xu, Xinyi Zou, Kunzhe Huang, Yunkuo Chen, Bo Liu, MengLi Cheng, Xing Shi, Jun Huang. Arxiv 2024.
-
VideoTree: Adaptive Tree-based Video Representation for LLM Reasoning on Long Videos. Ziyang Wang, Shoubin Yu, Elias Stengel-Eskin, Jaehong Yoon, Feng Cheng, Gedas Bertasius, Mohit Bansal. Arxiv 2024.
-
PostDoc: Generating Poster from a Long Multimodal Document Using Deep Submodular Optimization. Vijay Jaisankar, Sambaran Bandyopadhyay, Kalp Vyas, Varre Chaitanya, Shwetha Somasundaram. Arxiv 2024.
-
Investigating Video Reasoning Capability of Large Language Models with Tropes in Movies. Hung-Ting Su, Chun-Tong Chao, Ya-Ching Hsu, Xudong Lin, Yulei Niu, Hung-Yi Lee, Winston H. Hsu. Arxiv 2024.
- Towards Event-oriented Long Video Understanding. Yifan Du, Kun Zhou, Yuqi Huo, Yifan Li, Wayne Xin Zhao, Haoyu Lu, Zijia Zhao, Bingning Wang, Weipeng Chen, Ji-Rong Wen. Arxiv 2024.
-
An End-to-End Speech Summarization Using Large Language Model. Hengchao Shang, Zongyao Li, Jiaxin Guo, Shaojun Li, Zhiqiang Rao, Yuanchang Luo, Daimeng Wei, Hao Yang. Arxiv 2024.
-
KeyVideoLLM: Towards Large-scale Video Keyframe Selection. Hao Liang, Jiapeng Li, Tianyi Bai, Chong Chen, Conghui He, Bin Cui, Wentao Zhang. Arxiv 2024.
-
OmChat: A Recipe to Train Multimodal Language Models with Strong Long Context and Video Understanding. Tiancheng Zhao, Qianqian Zhang, Kyusong Lee, Peng Liu, Lu Zhang, Chunxin Fang, Jiajia Liao, Kelei Jiang, Yibo Ma, Ruochen Xu. Arxiv 2024.
-
MATE: Meet At The Embedding -- Connecting Images with Long Texts. Young Kyun Jang, Junmo Kang, Yong Jae Lee, Donghyun Kim. Arxiv 2024.
-
mPLUG-Owl3: Towards Long Image-Sequence Understanding in Multi-Modal Large Language Models. Jiabo Ye, Haiyang Xu, Haowei Liu, Anwen Hu, Ming Yan, Qi Qian, Ji Zhang, Fei Huang, Jingren Zhou. Arxiv 2024.
- LongVILA: Scaling Long-Context Visual Language Models for Long Videos. Fuzhao Xue, Yukang Chen, Dacheng Li, Qinghao Hu, Ligeng Zhu, Xiuyu Li, Yunhao Fang, Haotian Tang, Shang Yang, Zhijian Liu, Ethan He, Hongxu Yin, Pavlo Molchanov, Jan Kautz, Linxi Fan, Yuke Zhu, Yao Lu, Song Han. Arxiv 2024.
-
DreamFactory: Pioneering Multi-Scene Long Video Generation with a Multi-Agent Framework. Zhifei Xie, Daniel Tang, Dingwei Tan, Jacques Klein, Tegawendé F. Bissyandé, Saad Ezzini. Arxiv 2024.
-
Bridging Episodes and Semantics: A Novel Framework for Long-Form Video Understanding. Gueter Josmy Faure, Jia-Fong Yeh, Min-Hung Chen, Hung-Ting Su, Winston H. Hsu, Shang-Hong Lai. ECCV 2024 Workshop.
- VideoLLaMB: Long-context Video Understanding with Recurrent Memory Bridges. Yuxuan Wang, Cihang Xie, Yang Liu, Zilong Zheng. Arxiv 2024.
-
Longer is (Not Necessarily) Stronger: Punctuated Long-Sequence Training for Enhanced Speech Recognition and Translation. Nithin Rao Koluguri, Travis Bartley, Hainan Xu, Oleksii Hrinchuk, Jagadeesh Balam, Boris Ginsburg, Georg Kucsko. Arxiv 2024.
-
LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture. Xidong Wang, Dingjie Song, Shunian Chen, Chen Zhang, Benyou Wang. Arxiv 2024.
-
VideoCLIP-XL: Advancing Long Description Understanding for Video CLIP Models. Jiapeng Wang, Chengyu Wang, Kunzhe Huang, Jun Huang, Lianwen Jin. Arxiv 2024.
-
Rethinking Visual Dependency in Long-Context Reasoning for Large Vision-Language Models. Yucheng Zhou, Zhi Rao, Jun Wan, Jianbing Shen. Arxiv 2024.
-
SlowFast-VGen: Slow-Fast Learning for Action-Driven Long Video Generation. Yining Hong, Beide Liu, Maxine Wu, Yuanhao Zhai, Kai-Wei Chang, Lingjie Li, Kevin Lin, Chung-Ching Lin, Jianfeng Wang, Zhengyuan Yang, Yingnian Wu, Lijuan Wang. Arxiv 2024.
- LLM2CLIP: Powerful Language Model Unlock Richer Visual Representation. Weiquan Huang, Aoqi Wu, Yifan Yang, Xufang Luo, Yuqing Yang, Liang Hu, Qi Dai, Xiyang Dai, Dongdong Chen, Chong Luo, Lili Qiu. NeurIPS 2024.
- ReVisionLLM: Recursive Vision-Language Model for Temporal Grounding in Hour-Long Videos. Tanveer Hannan, Md Mohaiminul Islam, Jindong Gu, Thomas Seidl, Gedas Bertasius. Arxiv 2024.
- T2Vid: Translating Long Text into Multi-Image is the Catalyst for Video-LLMs. Shukang Yin, Chaoyou Fu, Sirui Zhao, Yunhang Shen, Chunjiang Ge, Yan Yang, Zuwei Long, Yuhan Dai, Tong Xu, Xing Sun, Ran He, Caifeng Shan, Enhong Chen. Arxiv 2024.
- Owl-1: Omni World Model for Consistent Long Video Generation. Yuanhui Huang, Wenzhao Zheng, Yuan Gao, Xin Tao, Pengfei Wan, Di Zhang, Jie Zhou, Jiwen Lu. Arxiv 2024.
- VCA: Video Curious Agent for Long Video Understanding. Zeyuan Yang, Delin Chen, Xueyang Yu, Maohao Shen, Chuang Gan. Arxiv 2024.
- Long Range Arena: A Benchmark for Efficient Transformers. Yi Tay, Mostafa Dehghani, Samira Abnar, Yikang Shen, Dara Bahri, Philip Pham, Jinfeng Rao, Liu Yang, Sebastian Ruder, Donald Metzler. ICLR 2021.
- LOT: A Story-Centric Benchmark for Evaluating Chinese Long Text Understanding and Generation. Jian Guan, Zhuoer Feng, Yamei Chen, Ruilin He, Xiaoxi Mao, Changjie Fan, Minlie Huang. TACL 2022.
- SCROLLS: Standardized CompaRison Over Long Language Sequences. Uri Shaham, Elad Segal, Maor Ivgi, Avia Efrat, Ori Yoran, Adi Haviv, Ankit Gupta, Wenhan Xiong, Mor Geva, Jonathan Berant, Omer Levy. EMNLP 2022.
- MuLD: The Multitask Long Document Benchmark. George Hudson, Noura Al Moubayed. LREC 2022.
- Lost in the Middle: How Language Models Use Long Contexts. Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang. Arxiv 2023.
- L-Eval: Instituting Standardized Evaluation for Long Context Language Models. Chenxin An, Shansan Gong, Ming Zhong, Mukai Li, Jun Zhang, Lingpeng Kong, Xipeng Qiu. Arxiv 2023.
- LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding. Yushi Bai, Xin Lv, Jiajie Zhang, Hongchang Lyu, Jiankai Tang, Zhidian Huang, Zhengxiao Du, Xiao Liu, Aohan Zeng, Lei Hou, Yuxiao Dong, Jie Tang, Juanzi Li. Arxiv 2023.
-
Content Reduction, Surprisal and Information Density Estimation for Long Documents. Shaoxiong Ji, Wei Sun, Pekka Marttinen. Arxiv 2023.
-
BAMBOO: A Comprehensive Benchmark for Evaluating Long Text Modeling Capacities of Large Language Models. Zican Dong, Tianyi Tang, Junyi Li, Wayne Xin Zhao, Ji-Rong Wen. Arxiv 2023.
-
Retrieval meets Long Context Large Language Models. Peng Xu, Wei Ping, Xianchao Wu, Lawrence McAfee, Chen Zhu, Zihan Liu, Sandeep Subramanian, Evelina Bakhturina, Mohammad Shoeybi, Bryan Catanzaro. Arxiv 2023.
-
LooGLE: Long Context Evaluation for Long-Context Language Models. Jiaqi Li, Mengmeng Wang, Zilong Zheng, Muhan Zhang. Arxiv 2023.
-
The Impact of Reasoning Step Length on Large Language Models. Mingyu Jin, Qinkai Yu, Dong shu, Haiyan Zhao, Wenyue Hua, Yanda Meng, Yongfeng Zhang, Mengnan Du. Arxiv 2024.
-
DocFinQA: A Long-Context Financial Reasoning Dataset. Varshini Reddy, Rik Koncel-Kedziorski, Viet Dac Lai, Chris Tanner. Arxiv 2024.
-
LongFin: A Multimodal Document Understanding Model for Long Financial Domain Documents. Ahmed Masry, Amir Hajian. Arxiv 2024.
-
PROXYQA: An Alternative Framework for Evaluating Long-Form Text Generation with Large Language Models. Haochen Tan, Zhijiang Guo, Zhan Shi, Lu Xu, Zhili Liu, Xiaoguang Li, Yasheng Wang, Lifeng Shang, Qun Liu, Linqi Song. Arxiv 2024.
-
LongHealth: A Question Answering Benchmark with Long Clinical Documents. Lisa Adams, Felix Busch, Tianyu Han, Jean-Baptiste Excoffier, Matthieu Ortala, Alexander Löser, Hugo JWL. Aerts, Jakob Nikolas Kather, Daniel Truhn, Keno Bressem. Arxiv 2024.
-
Long-form evaluation of model editing. Domenic Rosati, Robie Gonzales, Jinkun Chen, Xuemin Yu, Melis Erkan, Yahya Kayani, Satya Deepika Chavatapalli, Frank Rudzicz, Hassan Sajjad. Arxiv 2024.
-
In Search of Needles in a 10M Haystack: Recurrent Memory Finds What LLMs Miss. Yuri Kuratov, Aydar Bulatov, Petr Anokhin, Dmitry Sorokin, Artyom Sorokin, Mikhail Burtsev. Arxiv 2024.
-
∞Bench: Extending Long Context Evaluation Beyond 100K Tokens. Xinrong Zhang, Yingfa Chen, Shengding Hu, Zihang Xu, Junhao Chen, Moo Khai Hao, Xu Han, Zhen Leng Thai, Shuo Wang, Zhiyuan Liu, Maosong Sun. Arxiv 2024.
-
Same Task, More Tokens: the Impact of Input Length on the Reasoning Performance of Large Language Models. Mosh Levy, Alon Jacoby, Yoav Goldberg. Arxiv 2024.
- Evaluating Very Long-Term Conversational Memory of LLM Agents. Adyasha Maharana, Dong-Ho Lee, Sergey Tulyakov, Mohit Bansal, Francesco Barbieri, Yuwei Fang. Arxiv 2024.
- Language Models as Science Tutors. Alexis Chevalier, Jiayi Geng, Alexander Wettig, Howard Chen, Sebastian Mizera, Toni Annala, Max Jameson Aragon, Arturo Rodríguez Fanlo, Simon Frieder, Simon Machado, Akshara Prabhakar, Ellie Thieu, Jiachen T. Wang, Zirui Wang, Xindi Wu, Mengzhou Xia, Wenhan Jia, Jiatong Yu, Jun-Jie Zhu, Zhiyong Jason Ren, Sanjeev Arora, Danqi Chen. Arxiv 2024.
- Needle in a Haystack - Pressure Testing LLMs. Kamradt, G. GitHub 2024.
- In Search of Needles in a 11M Haystack: Recurrent Memory Finds What LLMs Miss. Yuri Kuratov, Aydar Bulatov, Petr Anokhin, Dmitry Sorokin, Artyom Sorokin, Mikhail Burtsev. Arxiv 2024.
- LV-Eval: A Balanced Long-Context Benchmark with 5 Length Levels Up to 256K. Tao Yuan, Xuefei Ning, Dong Zhou, Zhijie Yang, Shiyao Li, Minghui Zhuang, Zheyue Tan, Zhuyu Yao, Dahua Lin, Boxun Li, Guohao Dai, Shengen Yan, Yu Wang. Arxiv 2024.
- Counting-Stars: A Simple, Efficient, and Reasonable Strategy for Evaluating Long-Context Large Language Models. Mingyang Song, Mao Zheng, Xuan Luo. Arxiv 2024.
- NovelQA: A Benchmark for Long-Range Novel Question Answering. Cunxiang Wang, Ruoxi Ning, Boqi Pan, Tonghui Wu, Qipeng Guo, Cheng Deng, Guangsheng Bao, Qian Wang, Yue Zhang. Arxiv 2024.
- Long-form factuality in large language models. Jerry Wei, Chengrun Yang, Xinying Song, Yifeng Lu, Nathan Hu, Dustin Tran, Daiyi Peng, Ruibo Liu, Da Huang, Cosmo Du, Quoc V. Le. Arxiv 2024.
-
LUQ: Long-text Uncertainty Quantification for LLMs. Caiqi Zhang, Fangyu Liu, Marco Basaldella, Nigel Collier. Arxiv 2024.
-
CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models. Zexuan Qiu, Jingjing Li, Shijue Huang, Wanjun Zhong, Irwin King. Arxiv 2024.
- Long-context LLMs Struggle with Long In-context Learning. Tianle Li, Ge Zhang, Quy Duc Do, Xiang Yue, Wenhu Chen. Arxiv 2024.
- CLAPNQ: Cohesive Long-form Answers from Passages in Natural Questions for RAG systems. Sara Rosenthal, Avirup Sil, Radu Florian, Salim Roukos. Arxiv 2024.
- XL2Bench: A Benchmark for Extremely Long Context Understanding with Long-range Dependencies. Xuanfan Ni, Hengyi Cai, Xiaochi Wei, Shuaiqiang Wang, Dawei Yin, Piji Li. Arxiv 2024.
-
Never Train from Scratch: Fair Comparison of Long-Sequence Models Requires Data-Driven Priors. Ido Amos, Jonathan Berant, Ankit Gupta. ICLR 2024 Oral.
-
Ada-LEval: Evaluating long-context LLMs with length-adaptable benchmarks. Chonghua Wang, Haodong Duan, Songyang Zhang, Dahua Lin, Kai Chen. Arxiv 2024.
- RULER: What's the Real Context Size of Your Long-Context Language Models?. Cheng-Ping Hsieh, Simeng Sun, Samuel Kriman, Shantanu Acharya, Dima Rekesh, Fei Jia, Boris Ginsburg. Arxiv 2024.
- LongEmbed: Extending Embedding Models for Long Context Retrieval. Dawei Zhu, Liang Wang, Nan Yang, Yifan Song, Wenhao Wu, Furu Wei, Sujian Li. Arxiv 2024.
- Make Your LLM Fully Utilize the Context. Shengnan An, Zexiong Ma, Zeqi Lin, Nanning Zheng, Jian-Guang Lou. Arxiv 2024.
- S3Eval: A Synthetic, Scalable, Systematic Evaluation Suite for Large Language Models. Fangyu Lei, Qian Liu, Yiming Huang, Shizhu He, Jun Zhao, Kang Liu. NAACL 2024.
- In-Context Learning with Long-Context Models: An In-Depth Exploration. Amanda Bertsch, Maor Ivgi, Uri Alon, Jonathan Berant, Matthew R. Gormley, Graham Neubig. Arxiv 2024.
-
Many-shot Jailbreaking. Anthropic 2024.
-
DOLOMITES: Domain-Specific Long-Form Methodical Tasks. Chaitanya Malaviya, Priyanka Agrawal, Kuzman Ganchev, Pranesh Srinivasan, Fantine Huot, Jonathan Berant, Mark Yatskar, Dipanjan Das, Mirella Lapata, Chris Alberti. Arxiv 2024.
-
Challenges in Deploying Long-Context Transformers: A Theoretical Peak Performance Analysis. Yao Fu. Arxiv 2024.
-
FinTextQA: A Dataset for Long-form Financial Question Answering. Jian Chen, Peilin Zhou, Yining Hua, Yingxin Loh, Kehui Chen, Ziyuan Li, Bing Zhu, Junwei Liang. Arxiv 2024.
-
A Multi-Perspective Analysis of Memorization in Large Language Models. Bowen Chen, Namgi Han, Yusuke Miyao. Arxiv 2024.
-
OLAPH: Improving Factuality in Biomedical Long-form Question Answering. Minbyul Jeong, Hyeon Hwang, Chanwoong Yoon, Taewhoo Lee, Jaewoo Kang. Arxiv 2024.
- Can LLMs Solve longer Math Word Problems Better?. Xin Xu, Tong Xiao, Zitong Chao, Zhenya Huang, Can Yang, Yang Wang. Arxiv 2024.
-
Base of RoPE Bounds Context Length. Xin Men, Mingyu Xu, Bingning Wang, Qingyu Zhang, Hongyu Lin, Xianpei Han, Weipeng Chen. Arxiv 2024.
-
Many-shot In-Context Learning. Rishabh Agarwal, Avi Singh, Lei M. Zhang, Bernd Bohnet, Luis Rosias, Stephanie Chan, Biao Zhang, Ankesh Anand, Zaheer Abbas, Azade Nova, John D. Co-Reyes, Eric Chu, Feryal Behbahani, Aleksandra Faust, Hugo Larochelle. Arxiv 2024.
-
Long Context is Not Long at All: A Prospector of Long-Dependency Data for Large Language Models. Longze Chen, Ziqiang Liu, Wanwei He, Yunshui Li, Run Luo, Min Yang. Arxiv 2024.
- Language Models Need Inductive Biases to Count Inductively. Yingshan Chang, Yonatan Bisk. Arxiv 2024.
-
Analyzing Temporal Complex Events with Large Language Models? A Benchmark towards Temporal, Long Context Understanding. Zhihan Zhang, Yixin Cao, Chenchen Ye, Yunshan Ma, Lizi Liao, Tat-Seng Chua. Arxiv 2024.
-
CRAG -- Comprehensive RAG Benchmark. Xiao Yang, Kai Sun, Hao Xin, Yushi Sun, Nikita Bhalla, Xiangsen Chen, Sajal Choudhary, Rongze Daniel Gui, Ziran Will Jiang, Ziyu Jiang, Lingkun Kong, Brian Moran, Jiaqi Wang, Yifan Ethan Xu, An Yan, Chenyu Yang, Eting Yuan, Hanwen Zha, Nan Tang, Lei Chen, Nicolas Scheffer, Yue Liu, Nirav Shah, Rakesh Wanga, Anuj Kumar, Wen-tau Yih, Xin Luna Dong. Arxiv 2024.
- An Empirical Study of Mamba-based Language Models. Roger Waleffe, Wonmin Byeon, Duncan Riach, Brandon Norick, Vijay Korthikanti, Tri Dao, Albert Gu, Ali Hatamizadeh, Sudhakar Singh, Deepak Narayanan, Garvit Kulshreshtha, Vartika Singh, Jared Casper, Jan Kautz, Mohammad Shoeybi, Bryan Catanzaro. Arxiv 2024.
- BABILong: Testing the Limits of LLMs with Long Context Reasoning-in-a-Haystack. Yuri Kuratov, Aydar Bulatov, Petr Anokhin, Ivan Rodkin, Dmitry Sorokin, Artyom Sorokin, Mikhail Burtsev. Arxiv 2024.
- Can Many-Shot In-Context Learning Help Long-Context LLM Judges? See More, Judge Better!. Mingyang Song, Mao Zheng, Xuan Luo. Arxiv 2024.
-
What Kinds of Tokens Benefit from Distant Text? An Analysis on Long Context Language Modeling. Yutong Hu, Quzhe Huang, Kangcheng Luo, Yansong Feng. Arxiv 2024.
-
Understanding the RoPE Extensions of Long-Context LLMs: An Attention Perspective. Meizhi Zhong, Chen Zhang, Yikun Lei, Xikai Liu, Yan Gao, Yao Hu, Kehai Chen, Min Zhang. Arxiv 2024.
-
Can Long-Context Language Models Subsume Retrieval, RAG, SQL, and More?. Jinhyuk Lee, Anthony Chen, Zhuyun Dai, Dheeru Dua, Devendra Singh Sachan, Michael Boratko, Yi Luan, Sébastien M. R. Arnold, Vincent Perot, Siddharth Dalmia, Hexiang Hu, Xudong Lin, Panupong Pasupat, Aida Amini, Jeremy R. Cole, Sebastian Riedel, Iftekhar Naim, Ming-Wei Chang, Kelvin Guu. Arxiv 2024.
- Insights into LLM Long-Context Failures: When Transformers Know but Don't Tell. Taiming Lu, Muhan Gao, Kuai Yu, Adam Byerly, Daniel Khashabi. Arxiv 2024.
- MedOdyssey: A Medical Domain Benchmark for Long Context Evaluation Up to 200K Tokens. Yongqi Fan, Hongli Sun, Kui Xue, Xiaofan Zhang, Shaoting Zhang, Tong Ruan. Arxiv 2024.
-
USDC: A Dataset of User Stance and Dogmatism in Long Conversations. Mounika Marreddy, Subba Reddy Oota, Venkata Charan Chinni, Manish Gupta, Lucie Flek. Arxiv 2024.
-
Found in the Middle: Calibrating Positional Attention Bias Improves Long Context Utilization. Cheng-Yu Hsieh, Yung-Sung Chuang, Chun-Liang Li, Zifeng Wang, Long T. Le, Abhishek Kumar, James Glass, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, Tomas Pfister. Arxiv 2024.
-
One Thousand and One Pairs: A "novel" challenge for long-context language models. Marzena Karpinska, Katherine Thai, Kyle Lo, Tanya Goyal, Mohit Iyyer. Arxiv 2024.
-
LongIns: A Challenging Long-context Instruction-based Exam for LLMs. Shawn Gavin, Tuney Zheng, Jiaheng Liu, Quehry Que, Noah Wang, Jian Yang, Chenchen Zhang, Wenhao Huang, Wenhu Chen, Ge Zhang. Arxiv 2024.
-
Leave No Document Behind: Benchmarking Long-Context LLMs with Extended Multi-Doc QA. Minzheng Wang, Longze Chen, Cheng Fu, Shengyi Liao, Xinghua Zhang, Bingli Wu, Haiyang Yu, Nan Xu, Lei Zhang, Run Luo, Yunshui Li, Min Yang, Fei Huang, Yongbin Li. Arxiv 2024.
- VERISCORE: Evaluating the factuality of verifiable claims in long-form text generation. Yixiao Song, Yekyung Kim, Mohit Iyyer. Arxiv 2024.
- ToolBeHonest: A Multi-level Hallucination Diagnostic Benchmark for Tool-Augmented Large Language Models. Yuxiang Zhang, Jing Chen, Junjie Wang, Yaxin Liu, Cheng Yang, Chufan Shi, Xinyu Zhu, Zihao Lin, Hanwen Wan, Yujiu Yang, Tetsuya Sakai, Tian Feng, Hayato Yamana. Arxiv 2024.
- KV Cache Compression, But What Must We Give in Return? A Comprehensive Benchmark of Long Context Capable Approaches. Jiayi Yuan, Hongyi Liu, Shaochen (Henry) Zhong, Yu-Neng Chuang, Songchen Li, Guanchu Wang, Duy Le, Hongye Jin, Vipin Chaudhary, Zhaozhuo Xu, Zirui Liu, Xia Hu. Arxiv 2024.
-
Is It Really Long Context if All You Need Is Retrieval? Towards Genuinely Difficult Long Context NLP. Omer Goldman, Alon Jacovi, Aviv Slobodkin, Aviya Maimon, Ido Dagan, Reut Tsarfaty. Arxiv 2024.
-
Summary of a Haystack: A Challenge to Long-Context LLMs and RAG Systems. Philippe Laban, Alexander R. Fabbri, Caiming Xiong, Chien-Sheng Wu. Arxiv 2024.
-
Entity-Level Sentiment: More than the Sum of Its Parts. Egil Rønningstad, Roman Klinger, Erik Velldal, Lilja Øvrelid. Arxiv 2024.
-
Evaluating Language Model Context Windows: A "Working Memory" Test and Inference-time Correction. Amanda Dsouza, Christopher Glaze, Changho Shin, Frederic Sala. Arxiv 2024.
-
RAG vs. Long Context: Examining Frontier Large Language Models for Environmental Review Document Comprehension. Hung Phan, Anurag Acharya, Sarthak Chaturvedi, Shivam Sharma, Mike Parker, Dan Nally, Ali Jannesari, Karl Pazdernik, Mahantesh Halappanavar, Sai Munikoti, Sameera Horawalavithana. Arxiv 2024.
-
Attribute or Abstain: Large Language Models as Long Document Assistants. Jan Buchmann, Xiao Liu, Iryna Gurevych. Arxiv 2024.
-
How Well Can a Long Sequence Model Model Long Sequences? Comparing Architechtural Inductive Biases on Long-Context Abilities. Jerry Huang. Arxiv 2024.
-
DOCBENCH: A Benchmark for Evaluating LLM-based Document Reading Systems. Anni Zou, Wenhao Yu, Hongming Zhang, Kaixin Ma, Deng Cai, Zhuosheng Zhang, Hai Zhao, Dong Yu. Arxiv 2024.
- NeedleBench: Can LLMs Do Retrieval and Reasoning in 1 Million Context Window?. Mo Li, Songyang Zhang, Yunxin Liu, Kai Chen. Arxiv 2024.
- LongLaMP: A Benchmark for Personalized Long-form Text Generation. Ishita Kumar, Snigdha Viswanathan, Sushrita Yerra, Alireza Salemi, Ryan A. Rossi, Franck Dernoncourt, Hanieh Deilamsalehy, Xiang Chen, Ruiyi Zhang, Shubham Agarwal, Nedim Lipka, Hamed Zamani. Arxiv 2024.
- RAG-QA Arena: Evaluating Domain Robustness for Long-form Retrieval Augmented Question Answering. Rujun Han, Yuhao Zhang, Peng Qi, Yumo Xu, Jenyuan Wang, Lan Liu, William Yang Wang, Bonan Min, Vittorio Castelli. Arxiv 2024.
-
Attention Is All You Need But You Don't Need All Of It For Inference of Large Language Models. Georgy Tyukin, Gbetondji J-S Dovonon, Jean Kaddour, Pasquale Minervini. ICML 2024 TF2M workshop.
-
Stress-Testing Long-Context Language Models with Lifelong ICL and Task Haystack. Xiaoyue Xu, Qinyuan Ye, Xiang Ren. Arxiv 2024.
-
WildHallucinations: Evaluating Long-form Factuality in LLMs with Real-World Entity Queries. Wenting Zhao, Tanya Goyal, Yu Ying Chiu, Liwei Jiang, Benjamin Newman, Abhilasha Ravichander, Khyathi Chandu, Ronan Le Bras, Claire Cardie, Yuntian Deng, Yejin Choi. Arxiv 2024.
-
Retrieval Augmented Generation or Long-Context LLMs? A Comprehensive Study and Hybrid Approach. Zhuowan Li, Cheng Li, Mingyang Zhang, Qiaozhu Mei, Michael Bendersky. Arxiv 2024.
-
Evaluating Long Range Dependency Handling in Code Generation Models using Multi-Step Key Retrieval. Yannick Assogba, Donghao Ren. Arxiv 2024.
-
Long Input Benchmark for Russian Analysis. Igor Churin, Murat Apishev, Maria Tikhonova, Denis Shevelev, Aydar Bulatov, Yuri Kuratov, Sergej Averkiev, Alena Fenogenova. Arxiv 2024.
-
CoverBench: A Challenging Benchmark for Complex Claim Verification. Alon Jacovi, Moran Ambar, Eyal Ben-David, Uri Shaham, Amir Feder, Mor Geva, Dror Marcus, Avi Caciularu. Arxiv 2024.
- LongWriter: Unleashing 10,000+ Word Generation from Long Context LLMs. Yushi Bai, Jiajie Zhang, Xin Lv, Linzhi Zheng, Siqi Zhu, Lei Hou, Yuxiao Dong, Jie Tang, Juanzi Li. Arxiv 2024.
- Multilingual Needle in a Haystack: Investigating Long-Context Behavior of Multilingual Large Language Models. Amey Hengle, Prasoon Bajpai, Soham Dan, Tanmoy Chakraborty. Arxiv 2024.
- LongGenBench: Benchmarking Long-Form Generation in Long Context LLMs. Yuhao Wu, Ming Shan Hee, Zhiqing Hu, Roy Ka-Wei Lee. Arxiv 2024.
- What are the Essential Factors in Crafting Effective Long Context Multi-Hop Instruction Datasets? Insights and Best Practices. Zhi Chen, Qiguang Chen, Libo Qin, Qipeng Guo, Haijun Lv, Yicheng Zou, Wanxiang Che, Hang Yan, Kai Chen, Dahua Lin. Arxiv 2024.
-
Retrieval Or Holistic Understanding? Dolce: Differentiate Our Long Context Evaluation Tasks. Zi Yang. Arxiv 2024.
-
A Controlled Study on Long Context Extension and Generalization in LLMs. Yi Lu, Jing Nathan Yan, Songlin Yang, Justin T. Chiu, Siyu Ren, Fei Yuan, Wenting Zhao, Zhiyong Wu, Alexander M. Rush. Arxiv 2024.
- RAD-Bench: Evaluating Large Language Models Capabilities in Retrieval Augmented Dialogues. Tzu-Lin Kuo, Feng-Ting Liao, Mu-Wei Hsieh, Fu-Chieh Chang, Po-Chun Hsu, Da-Shan Shiu. Arxiv 2024.
- Fact, Fetch, and Reason: A Unified Evaluation of Retrieval-Augmented Generation. Satyapriya Krishna, Kalpesh Krishna, Anhad Mohananey, Steven Schwarcz, Adam Stambler, Shyam Upadhyay, Manaal Faruqui. Arxiv 2024.
-
Michelangelo: Long Context Evaluations Beyond Haystacks via Latent Structure Queries. Kiran Vodrahalli, Santiago Ontanon, Nilesh Tripuraneni, Kelvin Xu, Sanil Jain, Rakesh Shivanna, Jeffrey Hui, Nishanth Dikkala, Mehran Kazemi, Bahare Fatemi, Rohan Anil, Ethan Dyer, Siamak Shakeri, Roopali Vij, Harsh Mehta, Vinay Ramasesh, Quoc Le, Ed Chi, Yifeng Lu, Orhan Firat, Angeliki Lazaridou, Jean-Baptiste Lespiau, Nithya Attaluri, Kate Olszewska. Arxiv 2024.
-
DetectiveQA: Evaluating Long-Context Reasoning on Detective Novels. Zhe Xu, Jiasheng Ye, Xiangyang Liu, Tianxiang Sun, Xiaoran Liu, Qipeng Guo, Linlin Li, Qun Liu, Xuanjing Huang, Xipeng Qiu. Arxiv 2024.
-
LongCite: Enabling LLMs to Generate Fine-grained Citations in Long-context QA. Jiajie Zhang, Yushi Bai, Xin Lv, Wanjun Gu, Danqing Liu, Minhao Zou, Shulin Cao, Lei Hou, Yuxiao Dong, Ling Feng, Juanzi Li. Arxiv 2024.
- HelloBench: Evaluating Long Text Generation Capabilities of Large Language Models. Haoran Que, Feiyu Duan, Liqun He, Yutao Mou, Wangchunshu Zhou, Jiaheng Liu, Wenge Rong, Zekun Moore Wang, Jian Yang, Ge Zhang, Junran Peng, Zhaoxiang Zhang, Songyang Zhang, Kai Chen. Arxiv 2024.
-
Multilingual Evaluation of Long Context Retrieval and Reasoning. Ameeta Agrawal, Andy Dang, Sina Bagheri Nezhad, Rhitabrat Pokharel, Russell Scheinberg. Arxiv 2024.
-
L-CiteEval: Do Long-Context Models Truly Leverage Context for Responding? Zecheng Tang, Keyan Zhou, Juntao Li, Baibei Ji, Jianye Hou, Min Zhang. Arxiv 2024.
- HELMET: How to Evaluate Long-Context Language Models Effectively and Thoroughly. Howard Yen, Tianyu Gao, Minmin Hou, Ke Ding, Daniel Fleischer, Peter Izasak, Moshe Wasserblat, Danqi Chen. Arxiv 2024.
-
MathHay: An Automated Benchmark for Long-Context Mathematical Reasoning in LLMs. Lei Wang, Shan Dong, Yuhui Xu, Hanze Dong, Yalu Wang, Amrita Saha, Ee-Peng Lim, Caiming Xiong, Doyen Sahoo. Arxiv 2024.
-
LongGenBench: Long-context Generation Benchmark. Xiang Liu, Peijie Dong, Xuming Hu, Xiaowen Chu. EMNLP 2024.
-
Hyper-multi-step: The Truth Behind Difficult Long-context Tasks. Yijiong Yu. Arxiv 2024.
- Holistic Reasoning with Long-Context LMs: A Benchmark for Database Operations on Massive Textual Data. Seiji Maekawa, Hayate Iso, Nikita Bhutani. Arxiv 2024.
-
How much do contextualized representations encode long-range context?. Simeng Sun, Cheng-Ping Hsieh. Arxiv 2024.
-
LongMemEval: Benchmarking Chat Assistants on Long-Term Interactive Memory. Di Wu, Hongwei Wang, Wenhao Yu, Yuwei Zhang, Kai-Wei Chang, Dong Yu. Arxiv 2024.
- When Attention Sink Emerges in Language Models: An Empirical View. Xiangming Gu, Tianyu Pang, Chao Du, Qian Liu, Fengzhuo Zhang, Cunxiao Du, Ye Wang, Min Lin. Arxiv 2024.
-
Minimum Tuning to Unlock Long Output from LLMs with High Quality Data as the Key. Yingda Chen, Xingjun Wang, Jintao Huang, Yunlin Mao, Daoze Zhang, Yuze Zhao. Arxiv 2024.
-
Distance between Relevant Information Pieces Causes Bias in Long-Context LLMs. Runchu Tian, Yanghao Li, Yuepeng Fu, Siyang Deng, Qinyu Luo, Cheng Qian, Shuo Wang, Xin Cong, Zhong Zhang, Yesai Wu, Yankai Lin, Huadong Wang, Xiaojiang Liu. Arxiv 2024.
- ETHIC: Evaluating Large Language Models on Long-Context Tasks with High Information Coverage. Taewhoo Lee, Chanwoong Yoon, Kyochul Jang, Donghyeon Lee, Minju Song, Hyunjae Kim, Jaewoo Kang. Arxiv 2024.
-
Long2RAG: Evaluating Long-Context & Long-Form Retrieval-Augmented Generation with Key Point Recall. Zehan Qi, Rongwu Xu, Zhijiang Guo, Cunxiang Wang, Hao Zhang, Wei Xu. EMNLP 2024.
-
Needle Threading: Can LLMs Follow Threads through Near-Million-Scale Haystacks?. Jonathan Roberts, Kai Han, Samuel Albanie. Arxiv 2024.
- Retrieval or Global Context Understanding? On Many-Shot In-Context Learning for Long-Context Evaluation. Kaijian Zou, Muhammad Khalifa, Lu Wang. Arxiv 2024.
-
LIFBench: Evaluating the Instruction Following Performance and Stability of Large Language Models in Long-Context Scenarios. Xiaodong Wu, Minhao Wang, Yichen Liu, Xiaoming Shi, He Yan, Xiangju Lu, Junmin Zhu, Wei Zhang. Arxiv 2024.
-
Spider 2.0: Evaluating Language Models on Real-World Enterprise Text-to-SQL Workflows. Fangyu Lei, Jixuan Chen, Yuxiao Ye, Ruisheng Cao, Dongchan Shin, Hongjin Su, Zhaoqing Suo, Hongcheng Gao, Wenjing Hu, Pengcheng Yin, Victor Zhong, Caiming Xiong, Ruoxi Sun, Qian Liu, Sida Wang, Tao Yu. Arxiv 2024.
- A Benchmark for Long-Form Medical Question Answering. Pedram Hosseini, Jessica M. Sin, Bing Ren, Bryceton G. Thomas, Elnaz Nouri, Ali Farahanchi, Saeed Hassanpour. NeurIPS 2024.
- DENIAHL: In-Context Features Influence LLM Needle-In-A-Haystack Abilities. Hui Dai, Dan Pechi, Xinyi Yang, Garvit Banga, Raghav Mantri. Arxiv 2024.
-
LCFO: Long Context and Long Form Output Dataset and Benchmarking. Marta R. Costa-jussà, Pierre Andrews, Mariano Coria Meglioli, Joy Chen, Joe Chuang, David Dale, Christophe Ropers, Alexandre Mourachko, Eduardo Sánchez, Holger Schwenk, Tuan Tran, Arina Turkatenko, Carleigh Wood. Arxiv 2024.
-
SCBench: A KV Cache-Centric Analysis of Long-Context Methods. Yucheng Li, Huiqiang Jiang, Qianhui Wu, Xufang Luo, Surin Ahn, Chengruidong Zhang, Amir H. Abdi, Dongsheng Li, Jianfeng Gao, Yuqing Yang, Lili Qiu. Arxiv 2024.
- MileBench: Benchmarking MLLMs in Long Context. Dingjie Song, Shunian Chen, Guiming Hardy Chen, Fei Yu, Xiang Wan, Benyou Wang. Arxiv 2024.
- Many-Shot In-Context Learning in Multimodal Foundation Models. Yixing Jiang, Jeremy Irvin, Ji Hun Wang, Muhammad Ahmed Chaudhry, Jonathan H. Chen, Andrew Y. Ng. Arxiv 2024.
- MLVU: A Comprehensive Benchmark for Multi-Task Long Video Understanding. Junjie Zhou, Yan Shu, Bo Zhao, Boya Wu, Shitao Xiao, Xi Yang, Yongping Xiong, Bo Zhang, Tiejun Huang, Zheng Liu. Arxiv 2024.
- RepoQA: Evaluating Long Context Code Understanding. Jiawei Liu, Jia Le Tian, Vijay Daita, Yuxiang Wei, Yifeng Ding, Yuhan Katherine Wang, Jun Yang, Lingming Zhang. Arxiv 2024.
- Short Film Dataset (SFD): A Benchmark for Story-Level Video Understanding. Ridouane Ghermi, Xi Wang, Vicky Kalogeiton, Ivan Laptev. Arxiv 2024.
- Multimodal Needle in a Haystack: Benchmarking Long-Context Capability of Multimodal Large Language Models. Hengyi Wang, Haizhou Shi, Shiwei Tan, Weiyi Qin, Wenyuan Wang, Tunyu Zhang, Akshay Nambi, Tanuja Ganu, Hao Wang. Arxiv 2024.
- Losing Visual Needles in Image Haystacks: Vision Language Models are Easily Distracted in Short and Long Contexts. Aditya Sharma, Michael Saxon, William Yang Wang. Arxiv 2024.
- MMLongBench-Doc: Benchmarking Long-context Document Understanding with Visualizations. Yubo Ma, Yuhang Zang, Liangyu Chen, Meiqi Chen, Yizhu Jiao, Xinze Li, Xinyuan Lu, Ziyu Liu, Yan Ma, Xiaoyi Dong, Pan Zhang, Liangming Pan, Yu-Gang Jiang, Jiaqi Wang, Yixin Cao, Aixin Sun. Arxiv 2024.
- InternLM-XComposer-2.5: A Versatile Large Vision Language Model Supporting Long-Contextual Input and Output. Pan Zhang, Xiaoyi Dong, Yuhang Zang, Yuhang Cao, Rui Qian, Lin Chen, Qipeng Guo, Haodong Duan, Bin Wang, Linke Ouyang, Songyang Zhang, Wenwei Zhang, Yining Li, Yang Gao, Peng Sun, Xinyue Zhang, Wei Li, Jingwen Li, Wenhai Wang, Hang Yan, Conghui He, Xingcheng Zhang, Kai Chen, Jifeng Dai, Yu Qiao, Dahua Lin, Jiaqi Wang. Arxiv 2024.
- Stark: Social Long-Term Multi-Modal Conversation with Persona Commonsense Knowledge. Young-Jun Lee, Dokyong Lee, Junyoung Youn, Kyeongjin Oh, Byungsoo Ko, Jonghwan Hyeon, Ho-Jin Choi. Arxiv 2024.
-
SPIQA: A Dataset for Multimodal Question Answering on Scientific Papers. Shraman Pramanick, Rama Chellappa, Subhashini Venugopalan. Arxiv 2024.
-
LongVideoBench: A Benchmark for Long-context Interleaved Video-Language Understanding. Haoning Wu, Dongxu Li, Bei Chen, Junnan Li. Arxiv 2024.
- mGTE: Generalized Long-Context Text Representation and Reranking Models for Multilingual Text Retrieval. Xin Zhang, Yanzhao Zhang, Dingkun Long, Wen Xie, Ziqi Dai, Jialong Tang, Huan Lin, Baosong Yang, Pengjun Xie, Fei Huang, Meishan Zhang, Wenjie Li, Min Zhang. Arxiv 2024.
- MovieSum: An Abstractive Summarization Dataset for Movie Screenplays. Rohit Saxena, Frank Keller. Arxiv 2024.
- SEED-Story: Multimodal Long Story Generation with Large Language Model. Shuai Yang, Yuying Ge, Yang Li, Yukang Chen, Yixiao Ge, Ying Shan, Yingcong Chen. Arxiv 2024.
- M-Longdoc: A Benchmark For Multimodal Super-Long Document Understanding And A Retrieval-Aware Tuning Framework. Yew Ken Chia, Liying Cheng, Hou Pong Chan, Chaoqun Liu, Maojia Song, Sharifah Mahani Aljunied, Soujanya Poria, Lidong Bing. Arxiv 2024.
- LongVALE: Vision-Audio-Language-Event Benchmark Towards Time-Aware Omni-Modal Perception of Long Videos. Tiantian Geng, Jinrui Zhang, Qingni Wang, Teng Wang, Jinming Duan, Feng Zheng. Arxiv 2024.
- LMAct: A Benchmark for In-Context Imitation Learning with Long Multimodal Demonstrations. Anian Ruoss, Fabio Pardo, Harris Chan, Bonnie Li, Volodymyr Mnih, Tim Genewein. Arxiv 2024.
- Neptune: The Long Orbit to Benchmarking Long Video Understanding. Arsha Nagrani, Mingda Zhang, Ramin Mehran, Rachel Hornung, Nitesh Bharadwaj Gundavarapu, Nilpa Jha, Austin Myers, Xingyi Zhou, Boqing Gong, Cordelia Schmid, Mikhail Sirotenko, Yukun Zhu, Tobias Weyand. Arxiv 2024.
- VisDoM: Multi-Document QA with Visually Rich Elements Using Multimodal Retrieval-Augmented Generation. Manan Suri, Puneet Mathur, Franck Dernoncourt, Kanika Goswami, Ryan A. Rossi, Dinesh Manocha. Arxiv 2024.
- Integrating Planning into Single-Turn Long-Form Text Generation. Yi Liang, You Wu, Honglei Zhuang, Li Chen, Jiaming Shen, Yiling Jia, Zhen Qin, Sumit Sanghai, Xuanhui Wang, Carl Yang, Michael Bendersky. Arxiv 2024.
- Minimum Tuning to Unlock Long Output from LLMs with High Quality Data as the Key. Yingda Chen, Xingjun Wang, Jintao Huang, Yunlin Mao, Daoze Zhang, Yuze Zhao. Arxiv 2024.
- LongGenBench: Long-context Generation Benchmark. Xiang Liu, Peijie Dong, Xuming Hu, Xiaowen Chu. EMNLP 2024.
- LoGU: Long-form Generation with Uncertainty Expressions. Ruihan Yang, Caiqi Zhang, Zhisong Zhang, Xinting Huang, Sen Yang, Nigel Collier, Dong Yu, Deqing Yang. Arxiv 2024.
- Large Language Models Still Exhibit Bias in Long Text. Wonje Jeung, Dongjae Jeon, Ashkan Yousefpour, Jonghyun Choi. Arxiv 2024.
- Suri: Multi-constraint Instruction Following for Long-form Text Generation. Chau Minh Pham, Simeng Sun, Mohit Iyyer. Arxiv 2024.
- LongWriter: Unleashing 10,000+ Word Generation from Long Context LLMs. Yushi Bai, Jiajie Zhang, Xin Lv, Linzhi Zheng, Siqi Zhu, Lei Hou, Yuxiao Dong, Jie Tang, Juanzi Li. Arxiv 2024.
- Language Models can Self-Lengthen to Generate Long Texts. Shanghaoran Quan, Tianyi Tang, Bowen Yu, An Yang, Dayiheng Liu, Bofei Gao, Jianhong Tu, Yichang Zhang, Jingren Zhou, Junyang Lin. Arxiv 2024.
- Extending Context is Hard…but not Impossible†. kaiokendev. 2023.
- NTK-Aware Scaled RoPE. u/bloc97. 2023.
- The Secret Sauce behind 100K context window in LLMs: all tricks in one place. Galina Alperovich. 2023.
- Transformer Upgrade Path 7: Length Extrapolation and Local Attention (Transformer升级之路:7、长度外推性与局部注意力). 苏剑林(Jianlin Su). 2023.
- Transformer Upgrade Path 9: A New Approach to Global Length Extrapolation (Transformer升级之路:9、一种全局长度外推的新思路). 苏剑林(Jianlin Su). 2023.
- Transformer Upgrade Path 12: ReRoPE for Unbounded Extrapolation (Transformer升级之路:12、无限外推的ReRoPE). 苏剑林(Jianlin Su). 2023.
- Transformer Upgrade Path 14: When HWFA Meets ReRoPE (Transformer升级之路:14、当HWFA遇见ReRoPE). 苏剑林(Jianlin Su). 2023.
- Transformer Upgrade Path 15: Key Normalization Helps Length Extrapolation (Transformer升级之路:15、Key归一化助力长度外推). 苏剑林(Jianlin Su). 2023.
- Transformer Upgrade Path 16: A Retrospective on Length Extrapolation Techniques (Transformer升级之路:16、“复盘”长度外推技术). 苏剑林(Jianlin Su). 2024.
- Introducing RAG 2.0. Contextual AI Team. 2024.
- How Do Language Models put Attention Weights over Long Context?. Yao Fu. 2024.
- An open-source and open-access RAG platform. Yunfan Gao. 2024.
- Many-shot Jailbreaking. Anthropic. 2024.
- Full Stack Transformer Inference Optimization Season 2: Deploying Long-Context Models. Yao Fu. 2024.
- The Ultimate Tug-of-War Between Cache and Performance: From MHA, MQA, and GQA to MLA (缓存与效果的极限拉扯:从MHA、MQA、GQA到MLA). 苏剑林(Jianlin Su). 2024.
- Towards 100x Speedup: Full Stack Transformer Inference Optimization. Yao Fu. 2024.
- 2024.5 A Side-by-Side Comparison of the Long Context of Various LLMs (128k articles). SomeoneKong. 2024.
- 2024.5 A Side-by-Side Comparison of the Long Context of Various LLMs (32k articles). SomeoneKong. 2024.
- Transformer Upgrade Path 18: Design Principles for the RoPE Base (Transformer升级之路:18、RoPE的底数设计原则). 苏剑林(Jianlin Su). 2024.
- Generalizing an LLM from 8k to 1M Context using Qwen-Agent. Qwen Team. 2024.
- FlashAttention-3: Fast and Accurate Attention with Asynchrony and Low-precision. Jay Shah, Ganesh Bikshandi, Ying Zhang, Vijay Thakkar, Pradeep Ramani, Tri Dao. 2024.
Please contact me if your name is missing from the list, and I will add you back ASAP!