Connect LLMs to tracelane
To use AI features in a self-hosted tracelane installation, you need to connect a supported large language model (LLM) provider. Tracelane currently supports the following LLM providers:

- OpenAI
- Ollama
We highly recommend using Ollama! It allows you to run LLMs locally on your own hardware without relying on external services, so your data never leaves your infrastructure.
You can enable AI features by setting the LLM_PROVIDER environment variable to the provider you want to use.
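For example, in a dotenv-style configuration this might look like the snippet below. The provider identifiers shown here (`openai`, `ollama`) are assumed values; check the configuration page for the exact identifiers your installation accepts.

```sh
# Select the LLM provider used for AI features.
# "ollama" is an assumed identifier; verify against the configuration page.
LLM_PROVIDER=ollama
```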
OpenAI
If you choose to use OpenAI as your LLM provider, you need to set the OPENAI_API_KEY environment variable with your OpenAI API key.
You can obtain an API key by creating an account on the OpenAI platform and generating a new key in the API section.
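Put together, a minimal OpenAI setup might look like the following sketch. The provider identifier and the key value are placeholders; use your own API key.

```sh
# Minimal OpenAI configuration (sketch).
# "openai" as provider identifier is an assumption; the key below is a placeholder.
LLM_PROVIDER=openai
OPENAI_API_KEY=sk-your-key-here
```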
Tracelane will automatically use the following models for inference:
| Model | Use case |
|---|---|
| GPT-4.1 | Improve single requirement |
| GPT-4.1 or GPT-5.2 | AI Assist Chat |
| GPT-5 Nano | Generate Chat Title |
Ollama
If you choose to use Ollama as your LLM provider, you need to set the OLLAMA_BASE_URL environment variable with the URL of your Ollama server.
By default, Ollama runs on port 11434.
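For example, if Ollama runs on the same host as tracelane, the setting might look like this sketch. Adjust the hostname to your deployment; if tracelane runs in a container, the host's address will usually differ from localhost.

```sh
# Point tracelane at your Ollama server; adjust host and port to your setup.
OLLAMA_BASE_URL=http://localhost:11434
```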
Ollama can run all kinds of LLMs; the choice of model is completely up to you.
Tracelane will automatically discover all models which are installed in your Ollama instance and will let the user choose which model to use for inference in the AI Assist Chat feature.
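To check which models your Ollama instance exposes, you can list them directly on the Ollama server, for example:

```sh
# List locally installed models via the Ollama CLI ...
ollama list

# ... or via Ollama's HTTP API (default port 11434).
curl http://localhost:11434/api/tags
```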
However, you need to tell tracelane which model to use for improving single requirements and for generating chat titles. You can do this by setting the following environment variables (see the example below the table):
| Env variable name | Use case |
|---|---|
| OLLAMA_MODEL_IMPROVE | Improve single requirement |
| OLLAMA_MODEL_CHAT_TITLE | Generate Chat Title |
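A complete Ollama configuration might then look like the sketch below. The model names are placeholders only; use models that are actually installed in your Ollama instance.

```sh
# Example Ollama configuration (sketch); provider identifier and model names
# are assumptions, not fixed values.
LLM_PROVIDER=ollama
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL_IMPROVE=llama3.1
OLLAMA_MODEL_CHAT_TITLE=llama3.2
```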
For a full overview of all environment variables, see the configuration page.