
Ollama

Run LLM tests locally — no API key, no cost.

```shell
pip install pytest-llmtest[ollama]
ollama pull llama3.2
ollama serve
```

Usage

```python
@llm_test(
    expect.is_not_empty(),
    provider="ollama",
    model="llama3.2",
)
def test_local(llm):
    output = llm("What is 2+2?")
    assert "4" in output.content
```

Supported models

Any model pulled into your local Ollama install works, for example: `llama3.2`, `mistral`, `codellama`, `phi3`, `gemma2`, `deepseek-r1`.

```shell
ollama list  # see available models
```

Custom host

```shell
export OLLAMA_HOST=http://my-server:11434
```
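Presumably the provider resolves the host the same way the Ollama CLI and client libraries do: the `OLLAMA_HOST` environment variable wins, otherwise the local default applies. A minimal sketch of that lookup (the `resolve_ollama_host` helper is illustrative, not part of pytest-llmtest):

```python
import os

def resolve_ollama_host() -> str:
    # OLLAMA_HOST overrides the default; 11434 is Ollama's standard port.
    return os.environ.get("OLLAMA_HOST", "http://localhost:11434")

os.environ["OLLAMA_HOST"] = "http://my-server:11434"
print(resolve_ollama_host())  # → http://my-server:11434
```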

When to use

  • Local development — no API costs, no rate limits
  • CI/CD — run tests without API keys
  • Privacy — data never leaves your machine
  • Offline — no internet needed
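For the CI/CD case, you may want the suite to skip gracefully on machines where no Ollama server is running rather than fail with connection errors. A sketch using a plain TCP probe (the `ollama_reachable` helper is hypothetical, not provided by pytest-llmtest):

```python
import socket

def ollama_reachable(host: str = "localhost", port: int = 11434,
                     timeout: float = 0.5) -> bool:
    """Return True if something is listening on the Ollama port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# With pytest, gate model tests on a live server:
#   requires_ollama = pytest.mark.skipif(
#       not ollama_reachable(), reason="Ollama server not running")
```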