LLM Providers
SQLumAI can summarize profiles via a local or remote LLM. Configure it with the following environment variables:
- `LLM_PROVIDER`: `ollama` or `openai` (generic OpenAI-compatible). Default: none (disabled).
- `LLM_MODEL`: model name (e.g., `llama3.2`, `gpt-4o-mini`). Default: `llama3.2`.
- `LLM_ENDPOINT`:
  - Ollama: `http://ollama:11434` (in Compose) or `http://localhost:11434`
  - OpenAI-compatible: e.g., `https://api.openai.com/v1/chat/completions`
- `LLM_SEND_EXTERNAL`: `true` or `false`. Choose carefully before sending data off-host (default `false` in `.env.example`).
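For example, a minimal `.env` for local-only inference might look like the sketch below; it only uses the values documented above, so adjust the endpoint to match your deployment.

```bash
# Sketch of a local-only configuration (values taken from the defaults above).
LLM_PROVIDER=ollama
LLM_MODEL=llama3.2
LLM_ENDPOINT=http://ollama:11434   # or http://localhost:11434 outside Compose
LLM_SEND_EXTERNAL=false
```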
With Ollama (Docker Compose), Compose launches an `ollama` service and sets the relevant environment variables for the proxy. The first model pull can take a while, so warm up the model:

```bash
make integration-up
make llm-pull MODEL=llama3.2
```
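As a quick sanity check (not one of the project's Make targets), you can confirm the Ollama service is reachable and that the model was pulled by listing its local models:

```bash
# Ollama's REST API lists pulled models; llama3.2 should appear after the warm-up.
curl http://localhost:11434/api/tags
```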
Generate a summary:

```bash
docker exec proxy python scripts/llm_summarize_profiles.py
```

With an OpenAI-compatible endpoint, set the following in `.env` or in the environment:
```bash
export LLM_PROVIDER=openai
export LLM_MODEL=gpt-4o-mini
export LLM_ENDPOINT=https://api.openai.com/v1/chat/completions
export OPENAI_API_KEY=...  # if your endpoint requires an Authorization header
```

Adjust `scripts/llm_summarize_profiles.py` if your endpoint needs auth headers (see the request sketch below).
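For reference, an authenticated call to an OpenAI-compatible chat completions endpoint has roughly this shape; the model and prompt here are placeholders, not what the script actually sends:

```bash
# Bearer-token auth against an OpenAI-compatible /v1/chat/completions endpoint.
curl -sS "$LLM_ENDPOINT" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "Summarize the collected profiles."}]}'
```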
Privacy notes:

- Keep `LLM_SEND_EXTERNAL=false` by default.
- Use Ollama for on-host inference to avoid sending data externally.