
Supercharging Neovim with LLM Integration

I wanted LLM integration in my new Neovim setup. Until now I was still using old-fashioned LLM chat in my web browser, which is so 2024. Thinking of all those people vibe coding all day in Cursor, Windsurf (the $3B editor), or VS Code while I’m still copy-pasting made me eager to modernize my development environment.

I’m lucky that my company provides an Open WebUI interface connected to the most recent large language models, like Claude 3.7 Sonnet, Gemini 2.5, and OpenAI o3.

In addition to a nice browser interface very similar to what everybody experiences with ChatGPT, Open WebUI also exposes the same API as OpenAI.

With this powerful API available, the next logical step was clear: Let’s integrate with Neovim.

At minimum, I expect the ability to chat with my LLM and ask it to write new code or alter some code I have selected.

Plugin selection

There are numerous LLM integration plugins for Neovim.

I actually used an LLM to quickly write github-stars-fetcher, a tool that allowed me to list the most popular plugins with their star counts. Here they are, along with brief comments on each.

I decided to try CodeCompanion, which appears to be the most actively developed.

Setup CodeCompanion

My current environment:

At the time I’m writing this article, CodeCompanion version is 14.9.1.

I added this line to my Lazy plugins:

require("lazy").setup({
  ...
  { "olimorris/codecompanion.nvim", dependencies = { "nvim-lua/plenary.nvim", "nvim-treesitter/nvim-treesitter"}},
}, {})

As well as this setup block, which defines an adapter named llm_dev of type openai_compatible (as a reminder, the Open WebUI API is OpenAI-compatible).

require("codecompanion").setup({
  adapters = {
    llm_dev = function()
      return require("codecompanion.adapters").extend("openai_compatible", {
        env = {
          url = "http://localhost:8124/api",
          models_endpoint = "/models",
          chat_url = "/chat/completions",
        }
      })
    end,
  },
  strategies = {
    chat = {
      adapter = "llm_dev",
    },
    inline = {
      adapter = "llm_dev",
    },
    cmd = {
      adapter = "llm_dev",
    }
  }
})

As you can see, my API endpoint is http://localhost:8124/api. It points to a local reverse proxy running on my machine, which is responsible for injecting the JWT token I extracted from the HTTP requests my browser makes to Open WebUI.

You could also target the Open WebUI API directly from your configuration and use the api_key setting. I need the reverse proxy because I have to pass through additional security measures specific to my company.
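For reference, a direct configuration might look like the sketch below. I don't run this myself: the URL reuses the openwebui.local host from my Caddyfile, and I'm assuming the adapter resolves api_key as the name of an environment variable holding your token — check the CodeCompanion adapter documentation for the version you're running.

```lua
-- Hypothetical direct setup without the reverse proxy.
-- "OPENWEBUI_API_KEY" is assumed to name an environment variable
-- containing your Open WebUI API key.
llm_dev = function()
  return require("codecompanion.adapters").extend("openai_compatible", {
    env = {
      url = "https://openwebui.local/api",
      api_key = "OPENWEBUI_API_KEY",
      models_endpoint = "/models",
      chat_url = "/chat/completions",
    },
  })
end,
```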

This is the Caddyfile I use (thanks to its original author):

http://localhost:8124 {
	reverse_proxy https://openwebui.local {
		header_up Authorization "Bearer <token>"
		# Additional security measures
	}
}

With this configuration in place, CodeCompanion is ready to use with my local Open WebUI instance.
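Before pointing CodeCompanion at the proxy, you can sanity-check it from a terminal. This assumes Caddy is running locally and the token you configured is still valid:

```shell
# Should return the JSON list of models Open WebUI exposes,
# confirming the proxy injects the Authorization header correctly.
curl -s http://localhost:8124/api/models
```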

Usage

Now that CodeCompanion is set up, there are three main things you can achieve.

Chat

Accessible through :CodeCompanionChat.

I encourage you to set up a keyboard shortcut; I personally chose <leader>c.
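As a sketch, such a mapping could look like this. The Toggle argument matches the plugin commands in the version I'm running; adjust if yours differs, and <leader>c is of course just my personal choice:

```lua
-- Toggle the CodeCompanion chat buffer from normal or visual mode.
vim.keymap.set({ "n", "v" }, "<leader>c", "<cmd>CodeCompanionChat Toggle<cr>",
  { desc = "Toggle CodeCompanion chat" })
```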

It will open a new buffer where you can type your prompt, then send it with Ctrl+S (while still in insert mode).

There is no indicator that your request has been received, so wait a moment and hope it works.

CodeCompanion chat mode

Inline

This is where you can interact directly with your code.

Option 1: go somewhere in a source file, run :CodeCompanion <prompt> to ask for something, and let it insert new code at that location.

CodeCompanion inline mode

In the screenshot above, you can see the prompt I used on the last row.

The command opens a diff view with the proposed changes on the left and the previous state on the right. At this point you can simply close the side you don’t want to keep.

Option 2: you can visually select a part of the file (with the V key) and execute :CodeCompanion <prompt> to ask for a change on this specific part.

Cmd

Finally, :CodeCompanionCmd lets you generate an actual Vim command from a prompt.

For example, :CodeCompanionCmd prefix XXX to each line of the file would translate to :%s/^/XXX /g

So almost 50 years after the initial release of vi, anyone will be able to exit it 🎉

:CodeCompanionCmd exit vi would translate to :q

Conclusion

This LLM integration actually works very well, and I find myself using it more and more.

However, remember not to rely fully on AI, and always treat generated code as coming from a junior developer whose work needs careful review 🙂