
LLM Integration

PyRapide's LLMEventAdapter captures large language model interactions as causally linked events. Every request is connected to its response, every streaming chunk is linked to its parent request, and errors are traced back to the prompt that triggered them.
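PyRapide's internal event representation isn't shown in this section; as a rough mental model, a causally linked event can be sketched as a record carrying its own ID plus a pointer to the event that caused it. All field names below are illustrative, not PyRapide's actual schema:

```python
from dataclasses import dataclass, field
from typing import Optional
import uuid

@dataclass
class CausalEvent:
    # Hypothetical event shape; PyRapide's real fields may differ.
    event_type: str
    payload: dict
    event_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    parent_id: Optional[str] = None  # links a response/chunk/error to its cause

# A streaming chunk points back at the request that produced it
request = CausalEvent("llm.request", {"prompt": "Hello"})
chunk = CausalEvent("llm.stream_chunk", {"delta": "Hi"}, parent_id=request.event_id)
```

Following `parent_id` pointers upward is what lets an error be traced back to the original prompt.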


LLMEventTypes

The adapter emits four event types:

  • LLMEventTypes.REQUEST: an outgoing prompt or completion request
  • LLMEventTypes.RESPONSE: the full response from the model
  • LLMEventTypes.STREAM_CHUNK: an individual chunk during streaming responses
  • LLMEventTypes.ERROR: a model-level or API-level error
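The four types split naturally into one trigger (a request) and events that follow from it, with exactly one terminal outcome per request. A stand-in enum (illustrative names and values, not PyRapide's actual definitions) makes the partition explicit:

```python
from enum import Enum

class LLMEventTypes(Enum):
    # Illustrative stand-in for pyrapide.LLMEventTypes; actual values may differ.
    REQUEST = "llm.request"            # outgoing prompt / completion request
    RESPONSE = "llm.response"          # full model response
    STREAM_CHUNK = "llm.stream_chunk"  # one chunk of a streaming response
    ERROR = "llm.error"                # model-level or API-level error

# Each REQUEST terminates in exactly one of these; STREAM_CHUNKs may
# arrive in between but never end the exchange on their own.
TERMINAL = {LLMEventTypes.RESPONSE, LLMEventTypes.ERROR}
```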

LLMEventAdapter

Wrap any LLM client to emit causal events:

llm_adapter.py

```python
from pyrapide import LLMEventAdapter

# Wrap an LLM client in the adapter
adapter = LLMEventAdapter("openai-gpt4", openai_client)

# Register the adapter as a source on a StreamProcessor
processor.add_source("llm", adapter)
```

Tracing LLM Chains

When an LLM response triggers a tool call (via MCP or otherwise), and that tool call triggers another LLM request, the full chain is preserved causally:

llm_chain.py

```python
from pyrapide import LLMEventAdapter, LLMEventTypes, MCPEventAdapter
from pyrapide import StreamProcessor, must_match

processor = StreamProcessor()
processor.add_source("llm", LLMEventAdapter("gpt4", llm_client))
processor.add_source("tools", MCPEventAdapter("tools", mcp_client))

# Every LLM request must produce either a response or an error
processor.enforce(must_match(
    trigger=LLMEventTypes.REQUEST,
    response=(LLMEventTypes.RESPONSE, LLMEventTypes.ERROR),
    name="llm_must_respond"
))

await processor.run()
```
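The semantics a `must_match` rule enforces can be sketched without PyRapide: scan the event stream and flag every trigger event that never receives a matching terminal event via its parent link. The function and event dicts below are hypothetical, not PyRapide's implementation:

```python
def unmatched_requests(events, trigger, responses):
    """Return the ids of trigger events with no matching response/error.

    Each event is a dict with "type", "id", and an optional "parent"
    pointing at the event that caused it (illustrative schema).
    """
    open_ids = set()
    for ev in events:
        if ev["type"] == trigger:
            open_ids.add(ev["id"])
        elif ev["type"] in responses and ev.get("parent") in open_ids:
            open_ids.discard(ev["parent"])
    return open_ids

events = [
    {"type": "request", "id": "r1"},
    {"type": "response", "id": "a1", "parent": "r1"},
    {"type": "request", "id": "r2"},  # never answered: violates the invariant
]
```

A non-empty result corresponds to an `llm_must_respond` violation.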
Note: The LLMEventAdapter works alongside the MCPEventAdapter. When an LLM response causes a tool call, PyRapide automatically links the response event to the tool call event, creating a cross-adapter causal chain.
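Because cross-adapter links use the same parent mechanism, a full chain (request → response → tool call → follow-up request) can be recovered by walking parent pointers back to the root. A minimal sketch, assuming a dict-of-events keyed by id (illustrative, not PyRapide's API):

```python
def trace_to_root(events_by_id, event_id):
    """Walk parent links back to the root event, e.g. the original prompt."""
    chain = []
    current = events_by_id.get(event_id)
    while current is not None:
        chain.append(current["id"])
        current = events_by_id.get(current.get("parent"))
    return list(reversed(chain))  # root first

events = {
    "req1": {"id": "req1"},                       # initial LLM request
    "resp1": {"id": "resp1", "parent": "req1"},   # LLM response
    "tool1": {"id": "tool1", "parent": "resp1"},  # tool call it triggered
    "req2": {"id": "req2", "parent": "tool1"},    # follow-up LLM request
}
```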

Next Steps