
llm Fixture

Standard pytest fixture for calling LLMs without the decorator.

Basic usage

```python
def test_with_fixture(llm):
    output = llm("Say hello", model="gpt-5-mini")
    assert "hello" in output.content.lower()
    assert output.latency_ms < 5000
    assert output.cost_estimate_usd < 0.01
```

Parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `prompt` | `str` | required | The prompt to send |
| `model` | `str` | `None` | Model name |
| `system_prompt` | `str` | `None` | System prompt |
| `temperature` | `float` | `None` | Sampling temperature |
| `max_tokens` | `int` | `None` | Max output tokens |
| `tools` | `list[dict]` | `None` | Tool/function definitions |
| `retries` | `int` | `0` | Retry count |
| `retry_delay` | `float` | `1.0` | Seconds between retries |
| `retry_if` | `Callable` | `None` | Condition to trigger retry |

Retry with condition

```python
def test_with_retry(llm):
    output = llm(
        "Name a European capital",
        model="gpt-5-mini",
        retries=3,
        retry_if=lambda out: "Paris" not in out.content,
    )
    assert "Paris" in output.content
```
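Read together, the retry parameters presumably mean: make one initial call, then up to `retries` further calls while `retry_if(output)` returns true, waiting `retry_delay` seconds between attempts. A self-contained sketch of that assumed behavior (`call_with_retries` and `fake_call` are hypothetical stand-ins, not fixture internals):

```python
import time
from typing import Callable, Optional

def call_with_retries(call, retries=0, retry_delay=1.0,
                      retry_if: Optional[Callable] = None):
    # One initial attempt, plus up to `retries` extra attempts
    # while the retry condition holds.
    output = call()
    for _ in range(retries):
        if retry_if is None or not retry_if(output):
            break
        time.sleep(retry_delay)
        output = call()
    return output

# Stub "model" that returns the wanted answer on the second attempt.
attempts = []
def fake_call():
    attempts.append(1)
    return "Paris" if len(attempts) >= 2 else "Lyon"

result = call_with_retries(fake_call, retries=3, retry_delay=0,
                           retry_if=lambda out: "Paris" not in out)
```

With this stub, the first attempt fails the condition, one retry is made, and the loop stops as soon as the condition passes.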

Tool calling

```python
def test_tools(llm):
    tools = [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
            },
        },
    }]
    output = llm("Weather in Paris?", model="gpt-5-mini", tools=tools)
    assert output.tool_calls
```
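Beyond asserting that a tool call happened, a test may want to check that the returned arguments match the declared schema. A minimal standalone sketch, assuming the arguments arrive as a JSON string (`validate_args` is a hypothetical helper, not part of the fixture):

```python
import json

# Parameter schema matching the get_weather tool above.
tool_schema = {
    "type": "object",
    "properties": {"city": {"type": "string"}},
}

def validate_args(raw_args: str, schema: dict) -> bool:
    # Minimal check: every provided key is declared in the schema
    # and its value has the matching JSON type.
    args = json.loads(raw_args)
    type_map = {"string": str, "number": (int, float), "object": dict}
    for key, value in args.items():
        prop = schema["properties"].get(key)
        if prop is None or not isinstance(value, type_map[prop["type"]]):
            return False
    return True

ok = validate_args('{"city": "Paris"}', tool_schema)
```

For stricter validation (required fields, nested objects, formats), a full JSON Schema validator is the usual choice.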