# llm Fixture

Standard pytest fixture for calling LLMs without the decorator.
## Basic usage

```python
def test_with_fixture(llm):
    output = llm("Say hello", model="gpt-5-mini")
    assert "hello" in output.content.lower()
    assert output.latency_ms < 5000
    assert output.cost_estimate_usd < 0.01
```

## Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| prompt | str | — | The prompt to send |
| model | str | None | Model name |
| system_prompt | str | None | System prompt |
| temperature | float | None | Sampling temperature |
| max_tokens | int | None | Max output tokens |
| tools | list[dict] | None | Tool/function definitions |
| retries | int | 0 | Retry count |
| retry_delay | float | 1.0 | Seconds between retries |
| retry_if | Callable | None | Condition to trigger retry |
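The three retry parameters work together: the call is attempted up to `retries` additional times, waiting `retry_delay` seconds between attempts, and a retry fires only while `retry_if(output)` returns true. A minimal plain-Python sketch of that loop (an illustration of the documented semantics, not the fixture's actual implementation):

```python
import time


def call_with_retry(fn, retries=0, retry_delay=1.0, retry_if=None):
    """Call fn() up to retries + 1 times.

    A retry happens only when retry_if is given and returns True for
    the output; otherwise the first result is returned unchanged.
    """
    output = fn()
    for _ in range(retries):
        if retry_if is None or not retry_if(output):
            break
        time.sleep(retry_delay)
        output = fn()
    return output


# A flaky callable that only succeeds on its third invocation.
attempts = []

def flaky():
    attempts.append(1)
    return "Paris" if len(attempts) >= 3 else "Berlin"

result = call_with_retry(
    flaky,
    retries=3,
    retry_delay=0.0,  # skip the sleep in this example
    retry_if=lambda out: "Paris" not in out,
)
# result == "Paris" after 3 attempts; the 4th retry is never used.
```

Note that with `retries=0` (the default) the output is returned as-is, even if `retry_if` would have triggered.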
## Retry with condition

```python
def test_with_retry(llm):
    output = llm(
        "Name a European capital",
        model="gpt-5-mini",
        retries=3,
        retry_if=lambda out: "Paris" not in out.content,
    )
    assert "Paris" in output.content
```

## Tool calling
```python
def test_tools(llm):
    tools = [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
            },
        },
    }]
    output = llm("Weather in Paris?", model="gpt-5-mini", tools=tools)
    assert output.tool_calls
```
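Beyond asserting that `output.tool_calls` is truthy, tests often need to check which tool was called and with what arguments. Assuming the entries follow the OpenAI-style shape — each with a `function.name` and a JSON-encoded `function.arguments` string (an assumption; verify against the fixture's actual output type) — they can be decoded like this:

```python
import json


def decode_tool_calls(tool_calls):
    """Map each tool call to a (name, parsed-arguments) pair.

    Assumes OpenAI-style entries of the form
    {"function": {"name": ..., "arguments": "<json string>"}}.
    """
    return [
        (call["function"]["name"], json.loads(call["function"]["arguments"]))
        for call in tool_calls
    ]


# Example with a hand-written tool call in that shape.
sample = [{
    "function": {
        "name": "get_weather",
        "arguments": '{"city": "Paris"}',
    },
}]
decoded = decode_tool_calls(sample)
# decoded == [("get_weather", {"city": "Paris"})]
```

In a test this lets you assert on the requested city rather than just on the presence of a tool call.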