Vibe Coders Quick Start
Already building with an agentic framework? You don't need to learn PyRapide from scratch. Pick your framework, copy the code, and start seeing why your agents do what they do.
AutoGen
Install
$ pip install "pyrapide[autogen]"
Integration
from pyrapide import EventProcessor
from pyrapide.agent_templates.autogen import AutoGenAdapter
# `runtime` is your existing AutoGen runtime
adapter = AutoGenAdapter("my-app", runtime=runtime)
processor = EventProcessor()
processor.add_source("autogen", adapter)
Events Captured
- Agent messages
- Tool calls
- Function returns
- Group-chat routing decisions
- Nested conversation spawns
Causal Links
- Message-to-response chains
- Tool-call-to-result edges
- Delegation paths across agents
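If causal links are new to you, the idea behind a backward slice can be sketched in a few lines of plain Python. This is a conceptual illustration only, not PyRapide's internals: each event records its causal parents, and a backward slice is the transitive closure of those edges.

```python
# Conceptual sketch -- PyRapide's poset and queries handle this for you.
# Each event maps to the IDs of the events that causally preceded it.
events = {
    "query_issued":      [],
    "data_retrieved":    ["query_issued"],
    "insight_generated": ["data_retrieved"],
    "report_drafted":    ["insight_generated"],
    "unrelated_ping":    [],
}

def backward_slice(events, event_id):
    """Collect `event_id` plus every event that causally led to it."""
    seen = set()
    stack = [event_id]
    while stack:
        current = stack.pop()
        if current in seen:
            continue
        seen.add(current)
        stack.extend(events[current])
    return seen

# The slice contains the event's full causal history and nothing else:
# "unrelated_ping" has no path to "report_drafted", so it is excluded.
print(sorted(backward_slice(events, "report_drafted")))
```

The same shape underlies the framework adapters: each captured event (message, tool call, delegation) becomes a node, and the causal links above become the edges the slice walks.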
Full integration guide: pyrapide/agent_templates/autogen/INTEGRATION.md
Real-World Mini-Examples
These four examples show how a vibe coder would wire up PyRapide in a real project -- not just "hello world," but the kind of thing you'd actually ship.
Personal Accounting
LangGraph
A LangGraph agent that categorizes bank transactions into budget categories. The constraint: every categorization must trace back to the original transaction event, so you can always explain why a charge was labeled 'Dining' instead of 'Groceries.'
from pyrapide.agent_templates.langgraph import LangGraphAdapter
from pyrapide import EventProcessor, queries
adapter = LangGraphAdapter("expense-tracker")
processor = EventProcessor()
processor.add_source("langgraph", adapter)
# Run the categorization graph
result = graph.invoke(
    {"transactions": monthly_statement},
    config={"callbacks": [adapter.callback_handler]},
)
# For any categorized expense, trace the full causal chain
for event in processor.get_events(name="categorized"):
    chain = queries.backward_slice(processor.poset, event)
    assert any(e.name == "transaction_ingested" for e in chain), \
        f"Categorization {event.id} has no originating transaction!"
Calendar Management
CrewAI
A CrewAI crew with a scheduler agent and a conflict-checker agent. When two meetings end up double-booked, queries.parallel_events() pinpoints the exact moment both agents made conflicting decisions simultaneously.
from crewai import Crew
from pyrapide.agent_templates.crewai import CrewAIAdapter
from pyrapide import EventProcessor, queries
adapter = CrewAIAdapter("calendar-crew")
processor = EventProcessor()
processor.add_source("crewai", adapter)
crew = Crew(
    agents=[scheduler_agent, conflict_checker_agent],
    tasks=[schedule_task, verify_task],
    step_callback=adapter.on_step,
    task_callback=adapter.on_task,
)
crew.kickoff()
# Find scheduling decisions that happened in parallel (potential conflicts)
conflicts = queries.parallel_events(
    processor.poset,
    lambda e: e.name == "slot_booked",
)
for pair in conflicts:
    print(f"Conflict: {pair[0].payload['time']} booked by two agents simultaneously")
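As a rough mental model for what parallel_events finds (again a conceptual sketch, not PyRapide's actual algorithm): two events are "parallel" when neither one appears in the other's causal history, which is exactly the situation where two agents acted without seeing each other's decision.

```python
# Conceptual sketch: a pair of events is parallel (a potential race)
# when neither event causally precedes the other.
from itertools import combinations

parents = {
    "request":  [],
    "slot_a":   ["request"],            # booked by the scheduler agent
    "slot_b":   ["request"],            # booked concurrently by the checker
    "notified": ["slot_a", "slot_b"],
}

def ancestors(parents, event_id):
    """All events in `event_id`'s causal history (excluding itself)."""
    seen, stack = set(), list(parents[event_id])
    while stack:
        current = stack.pop()
        if current not in seen:
            seen.add(current)
            stack.extend(parents[current])
    return seen

def parallel_pairs(parents):
    """Yield pairs where neither event is an ancestor of the other."""
    for a, b in combinations(parents, 2):
        if a not in ancestors(parents, b) and b not in ancestors(parents, a):
            yield (a, b)

# Only the two concurrent bookings are incomparable here.
print(list(parallel_pairs(parents)))
```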
Professional Services
AutoGen
An AutoGen multi-agent system where a researcher agent gathers client data, an analyst agent produces insights, and a writer agent drafts a report. When a client questions a claim in the report, queries.backward_slice() provides the full decision provenance.
from pyrapide.agent_templates.autogen import AutoGenAdapter
from pyrapide import EventProcessor, queries
adapter = AutoGenAdapter("client-research", runtime=runtime)
processor = EventProcessor()
processor.add_source("autogen", adapter)
# After the multi-agent conversation completes...
report_event = processor.get_events(name="report_drafted")[-1]
# Trace the provenance of any claim in the report
provenance = queries.backward_slice(processor.poset, report_event)
for event in provenance:
    print(f"  {event.name}: {event.payload.get('summary', '')[:80]}")
# Output: report_drafted <- insight_generated <- data_retrieved <- query_issued
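If you want the arrow-style chain shown in that comment as a single string (handy for audit logs or client-facing explanations), a tiny hypothetical helper does it. The ordering assumption is ours: this assumes the chain is already sorted from effect back to root cause, so sort by your events' timestamps first if PyRapide returns a different order.

```python
# Hypothetical helper, not part of PyRapide: render a causal chain
# as "effect <- cause <- ...". Input must already be ordered from
# the final effect back to the root cause.
def format_provenance(names):
    return " <- ".join(names)

chain_names = ["report_drafted", "insight_generated", "data_retrieved", "query_issued"]
print(format_provenance(chain_names))
# -> report_drafted <- insight_generated <- data_retrieved <- query_issued
```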
Science & Technology Research
LlamaIndex
A LlamaIndex RAG agent that queries a corpus of academic papers. The causal chain traces from the user's question, through the retrieval of specific paper chunks, to the final synthesized answer -- so every claim in the synthesis cites its source passage.
from pyrapide.agent_templates.llamaindex import LlamaIndexAdapter
from pyrapide import EventProcessor, queries
adapter = LlamaIndexAdapter("lit-review")
processor = EventProcessor()
processor.add_source("llamaindex", adapter)
adapter.attach()
# Query the research corpus
response = query_engine.query("What are the latest advances in protein folding?")
# Trace from synthesis back through retrieval to the original query
synthesis_event = processor.get_events(name="synthesis_completed")[-1]
chain = queries.backward_slice(processor.poset, synthesis_event)
retrieved = [e for e in chain if e.name == "chunk_retrieved"]
print(f"Answer synthesized from {len(retrieved)} source passages:")
for r in retrieved:
    print(f"  - {r.payload['source']}, score={r.payload['similarity']:.3f}")