React to what’s happening in your agent — participants joining, transcriptions, LLM responses, errors, and more. Subscribe to events using the `@agent.events.subscribe` decorator.

## Subscribing to Events

Use the `@agent.events.subscribe` decorator with a type hint to specify which event you want. Handlers must be async functions:
```python
from vision_agents.core.events import CallSessionParticipantJoinedEvent

@agent.events.subscribe
async def handle_participant_joined(event: CallSessionParticipantJoinedEvent):
    if event.participant.user.id == "agent":
        return  # Skip the agent's own join event
    await agent.simple_response(f"Hello {event.participant.user.name}!")
```
## Common Events
| Event | When | Import |
|---|---|---|
| `CallSessionParticipantJoinedEvent` | User joins call | `vision_agents.core.events` |
| `CallSessionParticipantLeftEvent` | User leaves call | `vision_agents.core.events` |
| `STTTranscriptEvent` | Speech transcribed | `vision_agents.core.stt.events` |
| `LLMResponseCompletedEvent` | LLM finishes response | `vision_agents.core.llm.events` |
| `TurnStartedEvent` / `TurnEndedEvent` | Speaker turn changes | `vision_agents.core.turn_detection.events` |
| `ToolStartEvent` / `ToolEndEvent` | Function calling | `vision_agents.core.llm.events` |
## Example: Greeting Participants
```python
from vision_agents.core import Agent, User
from vision_agents.core.events import (
    CallSessionParticipantJoinedEvent,
    CallSessionParticipantLeftEvent,
)
from vision_agents.plugins import openai, getstream, deepgram, elevenlabs

agent = Agent(
    edge=getstream.Edge(),
    agent_user=User(name="Assistant", id="agent"),
    instructions="You're a helpful voice assistant.",
    llm=openai.LLM(model="gpt-4o-mini"),
    tts=elevenlabs.TTS(),
    stt=deepgram.STT(),
)

@agent.events.subscribe
async def on_join(event: CallSessionParticipantJoinedEvent):
    if event.participant.user.id != "agent":
        await agent.simple_response(f"Welcome, {event.participant.user.name}!")

@agent.events.subscribe
async def on_leave(event: CallSessionParticipantLeftEvent):
    if event.participant.user.id != "agent":
        await agent.simple_response(f"Goodbye, {event.participant.user.name}!")
```
## Component Events

Subscribe to events from specific components. Each component (LLM, STT, TTS, etc.) emits events as it processes data:
```python
from vision_agents.core.stt.events import STTTranscriptEvent
from vision_agents.core.llm.events import LLMResponseCompletedEvent

@agent.events.subscribe
async def on_transcript(event: STTTranscriptEvent):
    print(f"User said: {event.text}")
    print(f"Confidence: {event.confidence}")
    print(f"Language: {event.language}")

@agent.events.subscribe
async def on_response(event: LLMResponseCompletedEvent):
    print(f"Agent said: {event.text}")
    print(f"Tokens used: {event.total_tokens}")
```
## Realtime LLM Events
For Realtime LLMs (like OpenAI Realtime), use transcription events to capture what was said:
```python
from vision_agents.core.llm.events import (
    RealtimeUserSpeechTranscriptionEvent,
    RealtimeAgentSpeechTranscriptionEvent,
)

@agent.events.subscribe
async def on_user_speech(event: RealtimeUserSpeechTranscriptionEvent):
    print(f"User: {event.text}")

@agent.events.subscribe
async def on_agent_speech(event: RealtimeAgentSpeechTranscriptionEvent):
    print(f"Agent: {event.text}")
```
## Turn Detection Events
Track when speakers start and finish talking:
```python
from vision_agents.core.turn_detection.events import TurnStartedEvent, TurnEndedEvent

@agent.events.subscribe
async def on_turn_started(event: TurnStartedEvent):
    print(f"Speaker started talking (confidence: {event.confidence})")

@agent.events.subscribe
async def on_turn_ended(event: TurnEndedEvent):
    print(f"Speaker finished (duration: {event.duration_ms}ms)")
```
## Tool Events

Monitor function calling with tool events:
```python
from vision_agents.core.llm.events import ToolStartEvent, ToolEndEvent

@agent.events.subscribe
async def on_tool_start(event: ToolStartEvent):
    print(f"Calling tool: {event.tool_name}")
    print(f"Arguments: {event.arguments}")

@agent.events.subscribe
async def on_tool_end(event: ToolEndEvent):
    if event.success:
        print(f"Tool {event.tool_name} completed in {event.execution_time_ms}ms")
    else:
        print(f"Tool {event.tool_name} failed: {event.error}")
```
## Error Handling
Each component has its own error event type:
```python
from vision_agents.core.stt.events import STTErrorEvent
from vision_agents.core.tts.events import TTSErrorEvent
from vision_agents.core.llm.events import LLMErrorEvent, RealtimeErrorEvent

@agent.events.subscribe
async def on_stt_error(event: STTErrorEvent):
    print(f"STT error: {event.error_message}")
    if event.is_recoverable:
        print(f"Retry count: {event.retry_count}")

@agent.events.subscribe
async def on_llm_error(event: LLMErrorEvent):
    print(f"LLM error: {event.error_message}")
    print(f"Context: {event.context}")
```
## Multiple Event Types
Handle related events in one handler using union types:
```python
@agent.events.subscribe
async def on_participant_change(
    event: CallSessionParticipantJoinedEvent | CallSessionParticipantLeftEvent,
):
    action = "joined" if isinstance(event, CallSessionParticipantJoinedEvent) else "left"
    print(f"{event.participant.user.name} {action}")
```
Use the `|` operator (Python 3.10+) or `typing.Union` on older versions.
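Either spelling carries the same member types. As a library-free sketch (the stand-in event classes here are illustrative, not the vision_agents internals), this is how a dispatcher could read the subscribed event types back out of a handler's annotation:

```python
import typing

# Stand-in event classes; the real ones live in vision_agents.core.events
class JoinedEvent: ...
class LeftEvent: ...

async def on_change(event: typing.Union[JoinedEvent, LeftEvent]):
    ...

# Recover the union members from the type hint on the event parameter
hints = typing.get_type_hints(on_change)
event_types = typing.get_args(hints["event"])
print(event_types)  # (JoinedEvent, LeftEvent)
```

`typing.get_args` returns the same tuple whether the annotation was written with `Union[...]` or `|`, which is why both spellings subscribe to the same events.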
## Best Practices
**Filter agent events** — Avoid loops by checking the event source:

```python
if event.participant.user.id == "agent":
    return
```
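If several handlers need that check, it can be factored into a small wrapper. This `skip_agent` helper is hypothetical (not part of vision_agents); it assumes the `event.participant.user.id` field shown in the examples above:

```python
import asyncio
import functools
from types import SimpleNamespace

AGENT_ID = "agent"  # must match the id given to User(...) when building the agent

def skip_agent(handler):
    # Hypothetical decorator: drop events triggered by the agent itself
    @functools.wraps(handler)
    async def wrapper(event):
        if event.participant.user.id == AGENT_ID:
            return None
        return await handler(event)
    return wrapper

@skip_agent
async def greet(event):
    return f"Hello {event.participant.user.name}!"

# Quick check with stand-in event objects
human = SimpleNamespace(participant=SimpleNamespace(user=SimpleNamespace(id="u1", name="Ada")))
robot = SimpleNamespace(participant=SimpleNamespace(user=SimpleNamespace(id="agent", name="Assistant")))
print(asyncio.run(greet(human)))  # Hello Ada!
print(asyncio.run(greet(robot)))  # None
```

In a real agent the helper would be stacked beneath `@agent.events.subscribe` so the filter runs before the handler body.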
**Keep handlers focused** — One handler per concern:

```python
@agent.events.subscribe
async def log_transcripts(event: STTTranscriptEvent):
    logger.info(f"Transcript: {event.text}")

@agent.events.subscribe
async def detect_keywords(event: STTTranscriptEvent):
    if "help" in event.text.lower():
        await agent.simple_response("How can I help?")
```
**Use async handlers** — Event handlers must be async functions. Non-async handlers will raise an error.
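That rule is typically enforced when the handler is registered. A minimal sketch of such a check (illustrative only, not the actual vision_agents implementation):

```python
import inspect

def subscribe(handler):
    # Reject plain functions early, mirroring the async-only rule above
    if not inspect.iscoroutinefunction(handler):
        raise TypeError(f"{handler.__name__} must be an async function")
    return handler

@subscribe
async def ok_handler(event):
    return event

try:
    @subscribe
    def bad_handler(event):
        return event
except TypeError as err:
    msg = str(err)
    print(msg)  # bad_handler must be an async function
```

Failing at registration time surfaces the mistake immediately, rather than when the first event arrives.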
**Access common event fields** — All events have these base fields:

- `event.type` — Event type identifier (e.g., `"plugin.stt_transcript"`)
- `event.event_id` — Unique ID for this event instance
- `event.timestamp` — When the event was created (UTC)
- `event.session_id` — Current session identifier
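As a rough illustration of that shared shape (a dataclass sketch with assumed defaults, not the actual vision_agents base class):

```python
import uuid
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch of the base fields every event carries
@dataclass
class BaseEvent:
    type: str
    session_id: str
    event_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

event = BaseEvent(type="plugin.stt_transcript", session_id="session-123")
print(event.type, event.session_id)
```

Because every event shares these fields, a single generic audit handler can log any event type without knowing its concrete class.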
## Next Steps