Real-World Use Cases

The framework adapters above get you connected in three lines. But the real payoff is what you can ask once events are flowing through a causal DAG. These four use cases show the full picture: a real problem, the right framework, the PyRapide integration, and the questions that become answerable.

LangGraph

1. Personal Finance Agent

The problem

You want an AI agent that categorizes your expenses, detects unusual spending, and suggests budget adjustments — but when it flags a transaction as “suspicious,” you need to know exactly which prior transactions, rules, and reasoning steps led to that conclusion. A black-box classifier is not enough when your money is on the line.

Why LangGraph

Expense categorization is a stateful, multi-step graph — ingest transactions, apply rules, detect anomalies, suggest actions. LangGraph’s explicit node/edge structure maps cleanly to a causal DAG.

How PyRapide adds value

Every categorization decision, anomaly flag, and budget suggestion is a node in the causal graph. When the agent flags a $47 charge as suspicious, you trace backward through the exact sequence: which rule fired, what historical pattern it compared against, and which transactions formed the baseline.

finance_agent.py python
from pyrapide.agent_templates.langgraph import LangGraphAdapter
from pyrapide import EventProcessor, queries, Pattern
from pyrapide.constraints import must_match

adapter = LangGraphAdapter("personal-finance")
processor = EventProcessor()
processor.add_source("langgraph", adapter)

# Run the finance graph with tracing (`graph` and `this_months_transactions`
# are the LangGraph StateGraph and input data built elsewhere in your app)
result = graph.invoke(
    {"transactions": this_months_transactions},
    config={"callbacks": [adapter.callback_handler]}
)

# Constraint: every anomaly flag must trace to a real transaction
must_match(
    Pattern.match("transaction_ingested") >> Pattern.match("anomaly_flagged"),
    name="anomaly_provenance"
).check(processor.poset)

# Query: what caused this specific flag?
flag = processor.get_events(name="anomaly_flagged")[0]
cause_chain = queries.backward_slice(processor.poset, flag)
print("Decision trail:")
for e in cause_chain:
    print(f"  [{e.timestamp}] {e.name}: {e.payload}")

What you can now ask

  • “Why was my $47 Uber Eats charge flagged as suspicious?” — backward_slice from the flag event reveals the comparison baseline and the threshold that triggered it.
  • “What downstream actions did this anomaly cause?” — impact_set from the flag shows whether it triggered a budget alert, a category reassignment, or both.
  • “Did any categorizations happen without seeing the original transaction?” — The must_match constraint catches orphaned decisions automatically.
  • “Show me every decision that depended on my dining spending history.” — forward_slice from dining-category events reveals the full downstream impact.

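All of these questions reduce to reachability over the causal DAG. As a conceptual illustration of what backward_slice computes — a minimal plain-Python sketch, not PyRapide's actual implementation, with a hypothetical miniature of the finance trace above:

```python
# Causal DAG stored as event -> direct causes (hypothetical event names
# mirroring the finance example; PyRapide builds this graph for you).
causes = {
    "anomaly_flagged": ["rule_fired", "baseline_computed"],
    "rule_fired": ["transaction_ingested"],
    "baseline_computed": ["transaction_ingested"],
    "transaction_ingested": [],
}

def backward_slice(causes, event):
    """Every event the given event causally depends on: its decision trail."""
    seen, stack = set(), [event]
    while stack:
        for parent in causes.get(stack.pop(), []):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

# The flag traces back through the rule and the baseline to the raw transaction.
print(backward_slice(causes, "anomaly_flagged"))
```

The same traversal run over reversed edges gives the forward slice, which is why "what caused this?" and "what did this affect?" are symmetric queries.
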
CrewAI

2. Smart Calendar Assistant

The problem

You have a CrewAI crew where one agent handles scheduling requests and another checks for conflicts. When a meeting gets double-booked, you need to know: did the scheduler ignore a conflict warning, or did the conflict checker fail to flag it? Finger-pointing between agents is not useful — you need the causal record.

Why CrewAI

Calendar management involves distinct roles (scheduling, conflict detection, notification) that map naturally to CrewAI’s agent-and-task model with step-level callbacks.

How PyRapide adds value

Every scheduling decision, conflict check, and notification is causally linked. When a double-booking occurs, the DAG shows whether the conflict checker ran before or after the booking, whether its output reached the scheduler, and whether the scheduler’s decision was causally downstream of the warning.

calendar_assistant.py python
from crewai import Crew

from pyrapide.agent_templates.crewai import CrewAIAdapter
from pyrapide import EventProcessor, queries, Pattern
from pyrapide.constraints import never

adapter = CrewAIAdapter("calendar-assistant")
processor = EventProcessor()
processor.add_source("crewai", adapter)

# The agents (scheduler, conflict_checker, notifier) and tasks are the
# CrewAI objects defined elsewhere in your app
crew = Crew(
    agents=[scheduler, conflict_checker, notifier],
    tasks=[intake_task, check_task, book_task, notify_task],
    step_callback=adapter.on_step,
    task_callback=adapter.on_task
)
crew.kickoff()

# Constraint: a booking must never follow an unresolved conflict warning
never(
    Pattern.match("conflict_detected") >> Pattern.match("slot_booked").where(
        lambda m: m.events[0].payload["slot"] == m.events[-1].payload["slot"]
    ),
    name="no_booking_after_conflict"
).check(processor.poset)

# Find all parallel scheduling decisions (race conditions)
parallel_bookings = queries.parallel_events(
    processor.poset, lambda e: e.name == "slot_booked"
)
for a, b in parallel_bookings:
    print(f"Race condition: '{a.payload['meeting']}' and '{b.payload['meeting']}' "
          f"both booked {a.payload['slot']}")

What you can now ask

  • “Why did Tuesday 2pm get double-booked?” — parallel_events reveals the two booking decisions were causally independent (neither knew about the other).
  • “Did the conflict checker run before or after the booking?” — The partial order in the DAG gives a definitive answer, even if wall-clock timestamps are ambiguous.
  • “Which meetings are affected if I cancel this one?” — impact_set from the booking event shows downstream notifications, reminders, and dependent meetings.
  • “Show me every booking that bypassed conflict checking.” — The never constraint violation report lists them all.

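Why can parallel_events find the race? In a partial order, two events are concurrent exactly when neither is reachable from the other. A minimal sketch of that check in plain Python — not PyRapide's API, and the event graph is hypothetical:

```python
# Edges point cause -> effect; two booking branches that never saw each other.
effects = {
    "request_a": ["booked_a"],
    "request_b": ["booked_b"],
    "booked_a": [],
    "booked_b": [],
}

def reaches(effects, src, dst):
    """True if a causal path leads from src to dst."""
    seen, stack = set(), [src]
    while stack:
        e = stack.pop()
        if e == dst:
            return True
        for nxt in effects.get(e, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return False

def concurrent(effects, a, b):
    """Concurrent = neither event causally precedes the other."""
    return not reaches(effects, a, b) and not reaches(effects, b, a)

print(concurrent(effects, "booked_a", "booked_b"))  # the race: neither saw the other
```

This is the property wall-clock timestamps cannot give you: two bookings a millisecond apart may still be causally ordered, and two an hour apart may be concurrent.
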
AutoGen

3. Professional Research Hub

The problem

A consulting firm uses an AutoGen multi-agent system to research client industries: one agent gathers data, another analyzes trends, a third writes the report. When a client challenges a claim like “market share grew 12% in Q3,” the firm needs to produce the full provenance chain — from the final sentence in the report, back through the analysis that produced the number, to the raw data source. “The AI said so” is not a defensible answer.

Why AutoGen

Multi-agent research involves dynamic conversations — the analyst might ask the researcher follow-up questions, the writer might request clarifications. AutoGen’s conversational runtime handles this naturally.

How PyRapide adds value

Every message, tool call, and data retrieval in the AutoGen conversation becomes a causally linked event. The provenance chain is not reconstructed after the fact — it is built in real time as the agents work.

research_hub.py python
from pyrapide.agent_templates.autogen import AutoGenAdapter
from pyrapide import EventProcessor, queries, Pattern
from pyrapide.constraints import must_match

# `runtime` is your AutoGen agent runtime, created elsewhere in your app
adapter = AutoGenAdapter("research-hub", runtime=runtime)
processor = EventProcessor()
processor.add_source("autogen", adapter)

# After the research conversation completes...
# Constraint: every claim in the report must trace to a data retrieval
must_match(
    Pattern.match("data_retrieved") >> Pattern.match("claim_written"),
    name="claim_provenance"
).check(processor.poset)

# Client asks: "Where did the 12% figure come from?"
claim_event = next(
    e for e in processor.get_events(name="claim_written")
    if "12%" in e.payload.get("text", "")
)
provenance = queries.backward_slice(processor.poset, claim_event)
for step in provenance:
    print(f"  {step.name}: {step.payload.get('summary', str(step.payload))[:100]}")
# claim_written <- trend_analyzed <- data_retrieved <- search_query_issued

What you can now ask

  • “What is the full provenance of the '12% growth' claim?” — backward_slice produces the complete chain from report sentence to raw data source.
  • “Which data sources influenced the executive summary?” — backward_slice from the summary event, filtered to data_retrieved events, lists every source.
  • “If this data source turns out to be wrong, which claims are affected?” — forward_slice from the data event shows every downstream analysis and claim.
  • “Did any claims get written without supporting data?” — The must_match constraint catches unsupported claims at runtime.

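The third question — "which claims are affected?" — is a forward slice: follow edges cause-to-effect instead of effect-to-cause. A conceptual sketch in plain Python, with hypothetical event names from the research trace above (not PyRapide's implementation):

```python
# cause -> effects edges for a miniature research trace
effects = {
    "data_retrieved": ["trend_analyzed"],
    "trend_analyzed": ["claim_written", "summary_written"],
    "claim_written": [],
    "summary_written": [],
}

def forward_slice(effects, event):
    """Every event causally downstream of the given one: its impact set."""
    seen, stack = set(), [event]
    while stack:
        for nxt in effects.get(stack.pop(), []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# If the retrieved data turns out to be wrong, these outputs are affected:
print(forward_slice(effects, "data_retrieved"))
```

In the real system the slice is computed over the poset that the adapter built during the conversation, so a retraction can be scoped to exactly the claims that depended on the bad source.
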
LlamaIndex

4. Scientific Literature Review

The problem

A research team uses a RAG pipeline to synthesize findings across hundreds of papers. When the synthesis says “recent studies suggest X,” the team needs to know exactly which papers, which passages, and how the retrieval scores compared — not just a footnote, but the full causal chain from query to answer.

Why LlamaIndex

LlamaIndex’s query engine, retrieval pipeline, and synthesis components are purpose-built for RAG over document corpora, with fine-grained events at every stage.

How PyRapide adds value

The adapter captures every query decomposition, chunk retrieval (with similarity scores and source metadata), and synthesis step. The causal DAG links each sentence in the synthesized answer to the specific retrieved passages that informed it.

lit_review.py python
from pyrapide.agent_templates.llamaindex import LlamaIndexAdapter
from pyrapide import EventProcessor, queries, Pattern
from pyrapide.constraints import must_match

adapter = LlamaIndexAdapter("lit-review")
processor = EventProcessor()
processor.add_source("llamaindex", adapter)
adapter.attach()

# Run a research query (`query_engine` is your LlamaIndex query engine,
# built over the paper corpus elsewhere in your app)
response = query_engine.query(
    "What are the mechanisms of action for GLP-1 receptor agonists?"
)

# Constraint: every synthesis must cite at least one retrieved chunk
must_match(
    Pattern.match("chunk_retrieved") >> Pattern.match("synthesis_completed"),
    name="citation_required"
).check(processor.poset)

# Full citation chain for the synthesized answer
synthesis = processor.get_events(name="synthesis_completed")[-1]
chain = queries.backward_slice(processor.poset, synthesis)
chunks = [e for e in chain if e.name == "chunk_retrieved"]
sub_qs = [e for e in chain if e.name == "sub_question_generated"]

print(f"Synthesis drew from {len(chunks)} passages via {len(sub_qs)} sub-questions:")
for chunk in chunks:
    print(f"  [{chunk.payload['source']}] score={chunk.payload['similarity']:.3f}")
    print(f"    \"{chunk.payload['text'][:120]}...\"")

What you can now ask

  • “Which papers contributed to the answer about GLP-1 mechanisms?” — The retrieved chunk events in the backward_slice carry full source metadata.
  • “What was the retrieval score for each cited passage?” — Every chunk_retrieved event includes the similarity score, so you can assess retrieval confidence.
  • “If I add a new paper to the corpus, which past answers might change?” — forward_slice from the ingestion event shows which queries and syntheses depend on overlapping topics.
  • “Did the synthesis use any passage with a similarity score below 0.7?” — Filter the causal chain to find low-confidence retrievals that still influenced the final answer.
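That last question is just a filter over the backward slice. A sketch, assuming chunk events carry the similarity payload shown in the code above — the events, sources, and 0.7 threshold here are illustrative, represented as plain dicts rather than PyRapide event objects:

```python
# Hypothetical slice of chunk_retrieved events behind a synthesized answer
chain = [
    {"name": "chunk_retrieved", "payload": {"source": "smith2023.pdf", "similarity": 0.91}},
    {"name": "chunk_retrieved", "payload": {"source": "lee2021.pdf", "similarity": 0.64}},
    {"name": "sub_question_generated", "payload": {"text": "What is GLP-1?"}},
]

# Low-confidence retrievals that still influenced the final answer
low_confidence = [
    e for e in chain
    if e["name"] == "chunk_retrieved" and e["payload"]["similarity"] < 0.7
]
for e in low_confidence:
    print(f"{e['payload']['source']}: score={e['payload']['similarity']:.2f}")
```

The same pattern works on the real event objects returned by backward_slice: filter by event name, then by any payload field the adapter recorded.
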