Decart provides real-time AI video transformation with style transfer and virtual try-on. Transform video streams into animated styles, apply costumes from reference images, or drive custom visual effects from a text prompt using models like Lucy.
Vision Agents requires a Stream account for real-time transport. Most providers offer free tiers to get started.
Installation
```shell
uv add "vision-agents[decart]"
```
Quick start
```python
from vision_agents.core import Agent, User
from vision_agents.plugins import decart, gemini, deepgram, elevenlabs, getstream

processor = decart.RestylingProcessor(
    initial_prompt="Studio Ghibli animation style",
    model="lucy_2_rt",
)

agent = Agent(
    edge=getstream.Edge(),
    agent_user=User(name="Styled AI"),
    instructions="Be helpful",
    llm=gemini.Realtime(),
    stt=deepgram.STT(),
    tts=elevenlabs.TTS(),
    processors=[processor],
)
```
Set DECART_API_KEY in your environment or pass api_key directly.
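For example, in a shell (the key value here is a placeholder, not a real credential):

```shell
# Make the key available to the agent process;
# the processor falls back to DECART_API_KEY when api_key is not passed.
export DECART_API_KEY="your-decart-api-key"
```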
Parameters
| Name | Type | Default | Description |
|------|------|---------|-------------|
| model | str | "lucy_2_rt" | Decart model |
| initial_prompt | str | "Cyberpunk city" | Style prompt for visual transformation |
| initial_image | bytes \| str \| Path | None | Optional reference image for first connect (bytes, file path, http(s) URL, data URI, or raw base64) |
| enhance | bool | True | Whether to enhance the prompt |
| mirror | bool | True | Mirror mode for front-facing cameras |
| width | int | 1280 | Output video width |
| height | int | 720 | Output video height |
| api_key | str | None | API key (defaults to DECART_API_KEY env var) |
Dynamic style changes
Update the video style during a call via function calling:
```python
@llm.register_function(description="Changes the video style")
async def change_style(prompt: str) -> str:
    await processor.update_prompt(prompt)
    return f"Style changed to: {prompt}"
```
Reference images
For models like Lucy that accept a reference image, pass it at construction time and/or swap it atomically with a prompt using update_state:
```python
processor = decart.RestylingProcessor(
    model="lucy_2_rt",
    initial_prompt="A person wearing a superhero costume",
    initial_image="./costumes/superhero.png",
)

# Atomically change prompt + reference image
await processor.update_state(
    prompt="A person wearing a wizard robe",
    image="./costumes/wizard.png",
)

# Image-only update
await processor.update_state(image=b"<raw image bytes>")
```
`initial_image` and `update_state(image=...)` accept bytes, a local file path, an http(s) URL, a `data:` URI, or a raw base64 string.
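All of those input forms reduce to raw image bytes. A self-contained sketch of that normalization using only the standard library (the helper name and exact rules are illustrative, not the plugin's actual code; a real implementation would also fetch http(s) URLs):

```python
import base64
import binascii
from pathlib import Path

def to_image_bytes(image) -> bytes:
    """Normalize a reference image to raw bytes.

    Handles bytes, a data: URI, a raw base64 string, or a local file path.
    """
    if isinstance(image, bytes):
        return image
    if isinstance(image, Path):
        return image.read_bytes()
    if isinstance(image, str):
        if image.startswith("data:"):
            # data:[<mediatype>][;base64],<payload>
            _, _, payload = image.partition(",")
            return base64.b64decode(payload)
        try:
            # Raw base64 string; validate=True rejects non-alphabet characters
            return base64.b64decode(image, validate=True)
        except binascii.Error:
            # Otherwise treat it as a file path
            return Path(image).read_bytes()
    raise TypeError(f"Unsupported image type: {type(image)!r}")

print(to_image_bytes("data:image/png;base64,aGVsbG8="))  # b'hello'
```

A path such as "./costumes/wizard.png" falls through to the file branch because "." is not a base64 character; an ambiguous bare string could decode either way, which is why a production version would distinguish the forms more carefully.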
Next steps
- Build a Voice Agent: get started with voice
- Build a Video Agent: add video processing