Creating an AI human that speaks directly to users is powerful. But creating one that can seamlessly blend into any product demo, presentation, or interface?
That’s next-level.
In this guide, we’ll show you how to build a conversational video AI with a transparent background using Tavus’s Conversational Video Interface (CVI) and our open-source example repo. This setup lets you overlay your replica anywhere—no video editing, no green screen studio, just clean, real-time compositing inside the browser.
Whether you're building product tours, interactive coaching overlays, or just want your AI human floating in a branded UI, this is the fastest way to bring it to life.
Why go transparent?
A replica on a green‑screen (chroma‑key) background lets you overlay your AI human on slides, product demos, or any branded scene without post‑production. The result feels more like a live newscast than a static video frame — perfect for coaching overlays, product walk‑throughs, or AR‑style widgets.
Prerequisites
Clone the official example
git clone https://github.com/Tavus-Engineering/tavus-examples.git
cd tavus-examples/examples/cvi-transparent-background
npm install
The README in the repo lists the same steps and includes a one‑click StackBlitz link if you prefer the cloud sandbox.
How the example works
Step‑by‑step guide
1. Add your API key
On first run the app prompts for a key. You can generate one in your Tavus dashboard → Settings → API Keys.
2. Start the dev server
npm run dev
# Vite dev server on http://localhost:5173 (default)
3. Create the conversation
The UI calls:
fetch("https://tavusapi.com/v2/conversations", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "x-api-key": YOUR_KEY,
  },
  body: JSON.stringify({
    persona_id: "p48fdf065d6b", // demo persona
    properties: {
      apply_greenscreen: true, // 👈 enables transparent BG
    },
  }),
});
The API returns a conversation_url; Daily joins that room, and your replica appears against a solid key color.
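As a sketch (assuming the response body contains a conversation_url string, as in the example repo), you might validate and extract that URL before handing it off to Daily. The helper name and the mock response below are illustrative, not part of the repo:

```javascript
// Hypothetical helper: pull the Daily room URL out of the
// create-conversation response, failing fast if it's missing.
function getConversationUrl(responseJson) {
  const url = responseJson?.conversation_url;
  if (typeof url !== "string" || !url.startsWith("https://")) {
    throw new Error("No conversation_url in Tavus response");
  }
  return url;
}

// Example with a mock response shaped like the API's:
const mockResponse = {
  conversation_id: "c123", // placeholder id
  conversation_url: "https://tavus.daily.co/c123",
};
console.log(getConversationUrl(mockResponse)); // → https://tavus.daily.co/c123
```

Failing fast here keeps a malformed or error response from silently producing a broken Daily join later.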
4. Chroma‑key the feed
The shader in src/App.tsx sets the key color to rgb(3,255,156) and the threshold to 0.3. Tweak these two uniforms to fine‑tune spill suppression or match a different background hue.
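The real keying runs per‑pixel in GLSL inside src/App.tsx; the plain‑JavaScript sketch below only illustrates the thresholding idea behind those two uniforms (the actual shader may compare colors differently, e.g. in another color space):

```javascript
// Sketch of the chroma-key decision: pixels within THRESHOLD of the
// key color become fully transparent, everything else stays opaque.
// Colors are normalized to the 0..1 range, as in a fragment shader.
const KEY = [3 / 255, 255 / 255, 156 / 255]; // rgb(3,255,156)
const THRESHOLD = 0.3;

function keyedAlpha([r, g, b], key = KEY, threshold = THRESHOLD) {
  // Euclidean distance in RGB space
  const dist = Math.hypot(r - key[0], g - key[1], b - key[2]);
  return dist < threshold ? 0.0 : 1.0;
}

console.log(keyedAlpha([3 / 255, 1.0, 156 / 255])); // exact key color → 0
console.log(keyedAlpha([1.0, 0.0, 0.0]));           // red pixel → 1
```

Raising the threshold eats more green spill around the replica's edges but risks punching holes in similarly colored clothing.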
5. Overlay the canvas
Because the WebGL canvas is created with premultipliedAlpha: false and the fragment shader writes transparent pixels, you can place it over any DOM element or background video.
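A minimal sketch of the context attributes this relies on (the canvas id below is illustrative, not taken from the repo):

```javascript
// alpha: true lets the canvas itself be see-through;
// premultipliedAlpha: false makes the browser composite the shader's
// alpha values as written instead of treating them as premultiplied.
const glContextOptions = { alpha: true, premultipliedAlpha: false };

// In the browser (guarded so this sketch also runs outside a DOM):
if (typeof document !== "undefined") {
  const canvas = document.getElementById("replica-canvas"); // hypothetical id
  const gl = canvas.getContext("webgl", glContextOptions);
}
```

With those attributes set, positioning the canvas over other content is just CSS (absolute positioning and z-index); no extra compositing code is needed.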
Customising beyond the starter
Production considerations
Bandwidth
Using WebGL chroma‑keying directly in the browser offloads processing from your backend and keeps server costs minimal. However, since the shader runs on the client side, it’s important to test performance on a range of devices—especially lower-end laptops or mobile browsers. While modern hardware handles it easily, integrated GPUs can struggle with sustained 60 fps if too many effects stack up.
Latency
Tavus uses Daily's Selective Forwarding Unit (SFU) to handle real-time video streaming with sub-200 ms latency in most regions. This makes the conversational video experience feel responsive and natural. To preserve that snappy feel, aim to keep your shader work minimal and avoid heavy compositing or animation in the render loop.
Fallback
Some users will have WebGL disabled, whether due to device limitations, browser settings, or privacy restrictions. To ensure graceful degradation, you can offer a non‑transparent version of the conversation by setting apply_greenscreen: false. This renders the replica against a solid background and avoids client‑side chroma keying altogether.
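One way to wire that up is to feature‑detect WebGL and toggle the flag in the request body. This is a sketch under the assumption that the rest of the create‑conversation call stays as shown earlier; the helper name is hypothetical:

```javascript
// Hypothetical helper: build the conversation payload, keying only
// when the client can actually run the WebGL shader.
function conversationPayload(webglAvailable) {
  return {
    persona_id: "p48fdf065d6b", // demo persona from the example
    properties: { apply_greenscreen: webglAvailable },
  };
}

// Feature-detect in the browser (false outside a DOM, e.g. in Node):
const webglOk =
  typeof document !== "undefined" &&
  !!document.createElement("canvas").getContext("webgl");

const payload = conversationPayload(webglOk);
```

The non‑transparent path then skips the shader entirely and just renders the Daily video element as‑is.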
Next steps
Swap the green screen for a custom video background
If you’d rather place your AI human in a specific environment instead of overlaying them manually, set the background_video_url property when creating the conversation. This renders your replica directly in front of any MP4 or HLS stream: think branded motion backgrounds, product UI footage, or dynamic ambient loops.
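A sketch of that request body, assuming background_video_url sits alongside the other properties shown earlier (the video URL here is a placeholder, not a real asset):

```javascript
// Create-conversation body with a custom video backdrop instead of
// client-side chroma keying.
const body = {
  persona_id: "p48fdf065d6b", // demo persona from the example
  properties: {
    background_video_url: "https://example.com/brand-loop.mp4", // placeholder
  },
};

console.log(JSON.stringify(body, null, 2));
```

Since the compositing happens on Tavus's side in this case, the client just plays the resulting stream and needs no WebGL at all.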
Chain tool-calling with Persona
To go beyond conversation and introduce interactivity, you can enable tool-calling by adding a Persona layer to your replica. This allows your AI human to trigger functions—like submitting a form, clicking a CTA, or pulling external data—in response to the conversation flow. The result feels more like a dynamic agent than a static avatar.
Deploy to Vercel or Netlify
The example app in the repo is built on Vite, so it’s deployment‑ready out of the box. Just connect your repo, set VITE_TAVUS_API_KEY as an environment variable in the dashboard, and push to main. Both Vercel and Netlify support zero‑config CI/CD for this kind of project, so you’ll be live in minutes.