AI • SaaS • Gaming
Intelligent Streaming Agent (Streamlabs x NVIDIA x Inworld)
Reimagining how millions of creators produce, engage, and troubleshoot live with an AI-powered co-host built in collaboration with NVIDIA and Inworld AI.






My Role
Lead Designer
Industry
AI, Creator Economy
Timeline
2024 - 2025
Context
From 2024 to 2025, I led design for Streamlabs’ Intelligent Streaming Agent, a first-of-its-kind AI-powered co-host, producer, and tech support assistant - developed in collaboration with NVIDIA and Inworld AI under the Logitech portfolio.
Our goal was ambitious: to help streamers focus on creativity while AI handled production, engagement, and technical setup.
Streamlabs had long been the all-in-one streaming platform for creators, but as production standards rose and competition for audience attention intensified, streamers struggled to multitask - balancing gameplay, chat interaction, and stream management. The Intelligent Streaming Agent was our answer: a real-time, multimodal AI that could see, hear, and act.
1st
AI Streaming Co-Host in the world
3
Companies collaborating
CES 2025
Flagship innovation
My role & team
As Lead Product Designer, I owned the end-to-end product experience - from early ideation through CES 2025 demo and Streamlabs App Store launch.
I collaborated closely with:
PMs and engineers at Streamlabs,
NVIDIA (vision and Audio2Face models),
Inworld AI (LLM-driven reasoning and personality engine).
I was responsible for defining the user experience, avatar interaction model, and all visual and conversational frameworks that shaped how creators engaged with their AI assistant.
The challenge
Livestreaming is inherently complex: creators juggle scene switching, camera framing, overlays, donations, and audience interaction - all while performing live. Many new streamers quit early due to technical friction or lack of engagement.
We identified three critical pain points:
Cognitive overload – too many tasks to manage while live.
Audience engagement – hard to keep energy and personality consistent.
Technical friction – frequent setup and audio issues breaking flow.
Our opportunity: build an AI that feels like a human teammate, combining producer automation, chat engagement, and tech support into one intelligent agent.
Discovery & Research
I drove a structured discovery phase to align three organizations - Streamlabs, NVIDIA, and Inworld - around a single shared product vision. Because each team owned a different technical pillar (vision, reasoning engine, rendering), my first priority was establishing a unified understanding of user problems, system constraints, and viable paths forward.






User interviews & workflow analysis
I conducted interviews with both new and experienced streamers to understand:
cognitive load moments during gameplay
which production tasks cause the most disruption
how they prefer assistance from AI versus manual control
comfort levels with on-screen avatars vs. background automation
common technical failures (audio, scenes, replay buffer, overlays)
This research informed the three-role model: Co-Host, Producer, and Tech Support.
Competitive & technological landscape review
I analyzed:
existing co-host avatar tools
chatbots and automation workflows
game vision systems
OBS/Streamlabs plugin limitations
GPU and performance constraints
This established early technical boundaries so that design would be feasible across hardware tiers - from high-end NVIDIA GPUs to mid-range systems.
Cross-functional workshops & alignment
I ran a recurring set of joint workshops with Streamlabs, NVIDIA, and Inworld to turn insights into concrete product direction:
NVIDIA Workshops (Vision + Audio2Face)
Identified which in-game events the vision model could reliably detect (kills, damage, victory, lobby states).
Defined latency requirements for real-time reactions.
Set performance targets to keep the agent under ~3% GPU usage.
Aligned on feasibility of avatar expressions and emotional states.
This directly shaped the production automation triggers and avatar reaction system.
Inworld Workshops (LLM Reasoning + Personality Engine)
Mapped how triggers (gameplay, chat, speech, events) should route to LLM reasoning.
Designed the conversational structure: jokes, reactions, contextual comments, chat summarization.
Defined the personality customization model (sliders, presets).
Established guardrails and safety constraints around tone, interruptions, and misinformation.
This shaped the behavioral framework, interaction model, and safety logic.
Streamlabs Engineering Workshops
Defined the interaction architecture between Desktop, OBS Plugin, Cloudbot, Inworld, and NVIDIA.
Built the action matrix: what AI can/cannot do without user confirmation.
Set the permissions and onboarding flow requirements.
Prioritized feature rollout for CES MVP vs. launch version.
This influenced the final scope, feature sequencing, and UX constraints.
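The action matrix described above can be pictured as a small permission table with a confirmation gate. This is a minimal illustration under my own assumptions, not the shipped implementation; the action names and permission levels are hypothetical.

```python
from enum import Enum

class Permission(Enum):
    AUTO = "auto"            # agent may act without asking
    CONFIRM = "confirm"      # agent must ask the streamer first
    FORBIDDEN = "forbidden"  # agent may never perform this action

# Hypothetical entries; the real matrix was defined in the workshops.
ACTION_MATRIX = {
    "switch_scene": Permission.AUTO,
    "trigger_replay": Permission.AUTO,
    "unmute_mic": Permission.CONFIRM,
    "end_stream": Permission.FORBIDDEN,
}

def request_action(action: str, confirmed: bool = False) -> bool:
    """Return True if the agent may perform the action right now."""
    permission = ACTION_MATRIX.get(action, Permission.CONFIRM)  # default: ask
    if permission is Permission.FORBIDDEN:
        return False
    if permission is Permission.CONFIRM:
        return confirmed  # proceed only after explicit user approval
    return True
```

Defaulting unknown actions to CONFIRM reflects the workshop principle that the agent should ask rather than act when in doubt.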
How this shaped product decisions
The discovery and alignment process resulted in several major product-defining decisions:
AI needed 3 modes: visible, semi-visible, and invisible.
→ Led to design of avatar view, background-only mode, and chat-only mode.
All automations must be user-controlled through explicit triggers.
→ Defined the trigger-action system and safety confirmations.
In-game reactions should be subtle but meaningful.
→ Influenced avatar animations, emotional tags, and commentary style.
Tech support must be intelligent but never make hardware assumptions.
→ Shaped the technical support guidance behavior.
Low-end machines still needed to use the assistant.
→ Drove inclusion of a “headless mode” without 3D rendering.
This stage defined the entire product blueprint - what was possible, what was desirable, and what would actually work for creators.
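One way to picture how the visibility modes and the headless fallback combine is a small configuration sketch. The names and the exact mode-to-rendering mapping are my own assumptions for illustration, not the shipped settings model.

```python
from dataclasses import dataclass

@dataclass
class AgentConfig:
    mode: str        # "visible", "semi_visible", or "invisible" (assumed names)
    render_3d: bool  # False = "headless mode": no 3D avatar rendering

VALID_MODES = ("visible", "semi_visible", "invisible")

def make_config(mode: str, low_end_hardware: bool = False) -> AgentConfig:
    """Resolve a visibility mode plus hardware tier into agent settings."""
    if mode not in VALID_MODES:
        raise ValueError(f"unknown mode: {mode}")
    # Assumption: only the fully visible avatar needs 3D rendering, and
    # low-end machines fall back to headless mode regardless of mode chosen.
    render_3d = (mode == "visible") and not low_end_hardware
    return AgentConfig(mode=mode, render_3d=render_3d)
```

The point of the sketch is the fallback logic: visibility is a user choice, but rendering cost is gated by hardware, so the same agent can run everywhere.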




















What I delivered
1. Unified product architecture
Created the core framework now used across Streamlabs AI:
AI Co-Host → reacts, jokes, summarizes chat
AI Producer → switches scenes, triggers replays, adds effects
AI Tech Support → fixes issues (“You’re muted - want me to unmute you?”)
This model became the narrative for CES and shaped all engineering work.
2. Simple UX for a very complex system
Designed the complete experience in Figma:
Guided onboarding + permission flows
Avatar/Personality setup (tone, humor, visibility, voice)
Trigger–action automation UI (“If X happens → Do Y”)
Safety controls for when AI must ask before acting
Behavioral rules for pacing, silence, interruptions, and reactions
Outcome: creators feel in control, even when AI is doing a lot.
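The “If X happens → Do Y” model above can be sketched as a tiny event-to-action registry. Event and action names here are hypothetical, for illustration only; this is not the shipped automation engine.

```python
from typing import Callable

# event name -> list of actions to run when it fires
rules: dict[str, list[Callable[[dict], None]]] = {}

def on(event: str):
    """Decorator: register an action to run when `event` fires."""
    def register(action: Callable[[dict], None]) -> Callable[[dict], None]:
        rules.setdefault(event, []).append(action)
        return action
    return register

def fire(event: str, payload: dict) -> None:
    """Run every action registered for `event`, in registration order."""
    for action in rules.get(event, []):
        action(payload)

log: list[str] = []

@on("victory_detected")                          # "If a victory is detected..."
def trigger_replay(payload: dict) -> None:
    log.append(f"replay:{payload['game']}")      # "...trigger an instant replay"

fire("victory_detected", {"game": "Rocket League"})
```

Keeping triggers and actions decoupled like this is what lets the UI expose automation as plain "If X → Do Y" rows while the engine stays extensible.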
3. Cross-functional influence
Led workshops with:
NVIDIA → what vision + emotional rendering could handle
Inworld → reasoning limits, personality design, interruption rules
Streamlabs Engineering → Desktop/OBS integration, capabilities, permissions
These sessions shaped:
what the agent could detect
when it could act automatically
performance thresholds
the final MVP scope for CES
















Impact
Premiered at CES 2025 as one of Logitech’s flagship innovations
Featured in a keynote as a next-gen AI use case
Integrated into Streamlabs Ultra (1,000 interactions/month)
40% faster stream setup for new creators
25% fewer support tickets in early pilot
Established Streamlabs as the first AI-first livestreaming platform
Why it matters
This project pushed the boundary of human-AI collaboration in live performance.
I shaped how creators interact with AI in real time - balancing automation, personality, and trust in a high-pressure environment.




