Ollama
Run LLM tests locally — no API key, no cost.
```bash
pip install pytest-llmtest[ollama]
ollama pull llama3.2
ollama serve
```

Usage
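With the server started via `ollama serve`, a quick reachability check can save confusing test failures later. A minimal sketch, assuming a default local install (`ollama_is_up` is a hypothetical helper; `GET /api/version` is part of Ollama's REST API):

```python
import json
from urllib.error import URLError
from urllib.request import urlopen


def ollama_is_up(host: str = "http://localhost:11434") -> bool:
    """Return True if an Ollama server answers at `host`.

    Probes Ollama's GET /api/version endpoint; any connection
    or parse error is treated as "server not running".
    """
    try:
        with urlopen(f"{host}/api/version", timeout=2) as resp:
            return "version" in json.load(resp)
    except (URLError, OSError, ValueError):
        return False
```

You could call this from a pytest fixture and `pytest.skip` the module when it returns `False`, so suites degrade gracefully on machines without Ollama.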
```python
@llm_test(
    expect.is_not_empty(),
    provider="ollama",
    model="llama3.2",
)
def test_local(llm):
    output = llm("What is 2+2?")
    assert "4" in output.content
```

Supported models
Any model in your local Ollama: llama3.2, mistral, codellama, phi3, gemma2, deepseek-r1
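The `ollama list` command below also has a programmatic equivalent: Ollama's `GET /api/tags` endpoint returns the locally pulled models as JSON. A hedged sketch (`model_names` and `local_models` are hypothetical helpers, not part of the plugin):

```python
import json
from urllib.request import urlopen


def model_names(tags_response: dict) -> list[str]:
    # /api/tags responds with {"models": [{"name": "llama3.2:latest", ...}, ...]}
    return [m["name"] for m in tags_response.get("models", [])]


def local_models(host: str = "http://localhost:11434") -> list[str]:
    # Fetch the tag list from a running Ollama server and extract the names.
    with urlopen(f"{host}/api/tags", timeout=5) as resp:
        return model_names(json.load(resp))
```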
```bash
ollama list  # see available models
```

Custom host
```bash
export OLLAMA_HOST=http://my-server:11434
```

When to use
- Local development — no API costs, no rate limits
- CI/CD — run tests without API keys
- Privacy — data never leaves your machine
- Offline — no internet needed
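If your own tooling needs to honour the `OLLAMA_HOST` variable from the Custom host section, a simplified sketch of the lookup (Ollama's real clients normalise schemes and ports more thoroughly; `ollama_host` is a hypothetical helper):

```python
import os


def ollama_host(default: str = "http://localhost:11434") -> str:
    # Use OLLAMA_HOST when set and non-empty, else the standard local default.
    host = os.environ.get("OLLAMA_HOST", "").strip()
    return host or default
```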