Local LLM-assisted text completion.
- Auto-suggest on cursor movement in `Insert` mode
- Toggle the suggestion manually by pressing `Ctrl+F`
- Accept a suggestion with `Tab`
- Accept the first line of a suggestion with `Shift+Tab`
- Control max text generation time (see the configuration sketch after this list)
- Configure scope of context around the cursor
- Ring context with chunks from open and edited files and yanked text
- Supports very large contexts even on low-end hardware via smart context reuse
- Speculative FIM support
- Speculative Decoding support
- Display performance stats
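Both the generation-time limit and the context scope are controlled through the `g:llama_config` variable. The option names in the sketch below are assumptions based on the plugin's defaults rather than a documented reference, and the values are illustrative; confirm them with `:help llama_config`:

```vim
" assumed option names and illustrative values - verify with :help llama_config
let g:llama_config = {
    \ 't_max_prompt_ms':  500,
    \ 't_max_predict_ms': 1000,
    \ 'n_prefix':         256,
    \ 'n_suffix':         64
    \ }
```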
- vim-plug

  ```vim
  Plug 'ggml-org/llama.vim'
  ```

- Vundle

  ```sh
  cd ~/.vim/bundle
  git clone https://github.com/ggml-org/llama.vim
  ```

  Then add `Plugin 'llama.vim'` to your `.vimrc` in the `vundle#begin()` section.

- lazy.nvim

  ```lua
  { 'ggml-org/llama.vim', }
  ```
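For example, with vim-plug the plugin line goes inside your plug block (a minimal sketch; merge it into an existing `plug#begin()`/`plug#end()` section if you already have one):

```vim
call plug#begin()
Plug 'ggml-org/llama.vim'
call plug#end()
```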
You can customize llama.vim by setting the `g:llama_config` variable.
Examples:
- Disable the inline info:

  ```vim
  " put before llama.vim loads
  let g:llama_config = { 'show_info': 0 }
  ```

- Same thing, but setting the option directly:

  ```vim
  let g:llama_config.show_info = v:false
  ```

- Disable auto FIM (Fill-In-the-Middle) completion with lazy.nvim:

  ```lua
  {
      'ggml-org/llama.vim',
      init = function()
          vim.g.llama_config = {
              auto_fim = false,
          }
      end,
  }
  ```

- Change the keymap for accepting the full suggestion:

  ```vim
  let g:llama_config.keymap_accept_full = "<C-S>"
  ```
Please refer to `:help llama_config` or the source for the full list of options.
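The settings above can also be combined into a single dictionary. A sketch using only the options already shown in this document (the values are illustrative):

```vim
" put before llama.vim loads
let g:llama_config = {
    \ 'show_info':          0,
    \ 'auto_fim':           v:false,
    \ 'keymap_accept_full': "<C-S>"
    \ }
```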
The plugin requires a `llama.cpp` server instance to be running at `g:llama_config.endpoint`.
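If your server listens somewhere other than the plugin's default address, point the plugin at it explicitly. A sketch assuming a locally running server; the URL below is the commonly used local default and may need adjusting:

```vim
" adjust host and port to match your llama-server instance
let g:llama_config.endpoint = "http://127.0.0.1:8012/infill"
```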
Install `llama.cpp` for your platform:

- macOS:

  ```sh
  brew install llama.cpp
  ```

- Windows:

  ```sh
  winget install llama.cpp
  ```

- Any other OS: either build from source or use the latest binaries from https://github.com/ggml-org/llama.cpp/releases
Here are recommended settings, depending on the amount of VRAM that you have:
- More than 64GB VRAM:

  ```sh
  llama-server --fim-qwen-30b-default
  ```

- More than 16GB VRAM:

  ```sh
  llama-server --fim-qwen-7b-default
  ```

- Less than 16GB VRAM:

  ```sh
  llama-server --fim-qwen-3b-default
  ```

- Less than 8GB VRAM:

  ```sh
  llama-server --fim-qwen-1.5b-default
  ```
Use `:help llama` for more details.
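To verify that the server is up and reachable before opening the editor, you can query its health endpoint. A sketch assuming the server listens on port 8012, which is the port the plugin expects by default:

```sh
# expects an OK status once the model has finished loading
curl http://127.0.0.1:8012/health
```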
The plugin requires FIM-compatible models: see the HF collection.
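If you prefer to serve a specific model instead of using one of the presets above, any FIM-compatible GGUF can be loaded directly from Hugging Face. A sketch; the repository name is illustrative and the port matches the plugin's default expectation:

```sh
# example model repository - substitute any FIM-compatible GGUF
llama-server -hf ggml-org/Qwen2.5-Coder-1.5B-Q8_0-GGUF --port 8012
```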
The orange text is the generated suggestion. The green text contains performance stats for the FIM request: the currently used context is 15186 tokens and the maximum is 32768. There are 30 chunks in the ring buffer with extra context (out of 64). So far, 1 chunk has been evicted in the current session and there are 0 chunks in queue. The newly computed prompt tokens for this request were 260 and the generated tokens were 24. It took 1245 ms to generate this suggestion after entering the letter c on the current line.
Demo video (`llama.vim-0-lq.mp4`): demonstrates that the global context is accumulated and maintained across different files, and showcases the overall latency when working in a large codebase.
The plugin aims to be very simple and lightweight, while providing high-quality, performant local FIM completions even on consumer-grade hardware. Read more about how this is achieved in the following links:
- Initial implementation and technical description: ggml-org/llama.cpp#9787
- Classic Vim support: ggml-org/llama.cpp#9995

