For the base class API reference, see Processors Class.
## How Video Processing Works
When a participant publishes video, the agent creates a `VideoForwarder` that reads frames from the track. This forwarder is shared with all video processors via the `process_video()` method. Each processor registers a frame handler at its desired FPS.
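To illustrate the fan-out model, here is a toy forwarder — not the real `VideoForwarder` — showing how one incoming stream can feed multiple handlers, each throttled to its own FPS. All names and internals are illustrative:

```python
import time


class MiniForwarder:
    """Toy sketch of frame fan-out at per-handler FPS (not the real VideoForwarder)."""

    def __init__(self):
        self.handlers = []

    def add_frame_handler(self, callback, fps, name):
        # Each handler records its minimum interval between dispatches.
        self.handlers.append(
            {"cb": callback, "interval": 1.0 / fps, "last": float("-inf"), "name": name}
        )

    def on_frame(self, frame, now=None):
        now = time.monotonic() if now is None else now
        for h in self.handlers:
            # A handler only receives frames at (or below) its requested FPS.
            if now - h["last"] >= h["interval"]:
                h["last"] = now
                h["cb"](frame)
```

Two handlers registered at different FPS values therefore see different subsets of the same stream, which is why heavyweight processors (e.g. model inference) can run at a low FPS while lightweight ones run at full rate.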
## Building a Custom Processor
A minimal video processor that logs frames:

- Set `name` as a class attribute
- Store a reference to `shared_forwarder` to remove handlers later
- Register handlers with `add_frame_handler(callback, fps, name)`
- Clean up in `stop_processing()` (called when tracks are removed) and `close()`
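Following those steps, a minimal logging processor might look like this. This is a sketch: the base class is omitted, and `remove_frame_handler()` is an assumed counterpart to `add_frame_handler()` — check the Processors Class reference for the exact API:

```python
import logging

logger = logging.getLogger(__name__)


class FrameLoggerProcessor:
    # `name` is a class attribute identifying this processor's handler registrations.
    name = "frame_logger"

    def __init__(self, fps: int = 1):
        self.fps = fps
        self.shared_forwarder = None

    def process_video(self, shared_forwarder):
        # Keep a reference so stop_processing() can remove the handler later.
        self.shared_forwarder = shared_forwarder
        shared_forwarder.add_frame_handler(self._on_frame, fps=self.fps, name=self.name)

    def _on_frame(self, frame):
        logger.info("%s received frame: %r", self.name, frame)

    def stop_processing(self):
        # Called when tracks are removed.
        if self.shared_forwarder is not None:
            # remove_frame_handler() is an assumed API; consult the base class docs.
            self.shared_forwarder.remove_frame_handler(self._on_frame)
            self.shared_forwarder = None

    def close(self):
        self.stop_processing()
```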
## Publishing Transformed Video
Use `VideoProcessorPublisher` with `QueuedVideoTrack` to publish transformed video back to the call:
- `QueuedVideoTrack` — Writable video track. Call `add_frame()` to queue frames for publishing.
- `VideoForwarder` — Distributes incoming frames to handlers at independent FPS rates.
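The consume-transform-publish loop can be sketched as follows. `add_frame()` on the output track matches the `QueuedVideoTrack` description above; the constructor, the placeholder transform, and the FPS value are illustrative assumptions:

```python
class TransformingPublisher:
    """Sketch: consume frames, transform them, and queue the result for publishing."""

    name = "transforming_publisher"

    def __init__(self, output_track, fps=15):
        self.output_track = output_track  # e.g. a QueuedVideoTrack
        self.fps = fps

    def process_video(self, shared_forwarder):
        # Receive frames from the shared forwarder at our desired FPS.
        shared_forwarder.add_frame_handler(self._on_frame, fps=self.fps, name=self.name)

    def _on_frame(self, frame):
        transformed = self.transform(frame)
        self.output_track.add_frame(transformed)  # queue for publishing

    def transform(self, frame):
        # Placeholder: a real processor would draw overlays, run a model, resize, etc.
        return frame
```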
## Emitting Custom Events
Use `attach_agent()` to register custom events that other parts of your application can subscribe to:
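The pattern might look like the following sketch. `attach_agent()` comes from the docs above; the `emit()` method and the event name/payload shape are assumptions — see the Processors Class reference for the actual event API:

```python
class MotionEventProcessor:
    """Sketch: a processor that emits a custom event when it sees a frame."""

    name = "motion_events"

    def __init__(self):
        self.agent = None

    def attach_agent(self, agent):
        # Keep a reference so frame handlers can emit events later.
        self.agent = agent

    def _on_frame(self, frame):
        # emit() and the event name are hypothetical; substitute the real API.
        if self.agent is not None:
            self.agent.emit("custom_frame_seen", {"size": len(frame)})
```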
## YOLO Pose Detection
The Ultralytics plugin provides `YOLOPoseProcessor` for real-time pose detection with skeleton overlays:
## HeyGen Avatars
The HeyGen plugin provides `AvatarPublisher` for lip-syncing AI avatars that speak agent responses.
Use cases: Virtual presenters, customer service avatars, interactive tutors.
See HeyGen for setup and examples.