What if your AI could listen, respond, see, and carry out complex tasks — just from your voice?
In this session, Becky introduces the Model Context Protocol (MCP) and the Multimodal MCP Client, a powerful open-source framework that lets anyone upgrade their AI assistant with natural voice control, visual inputs, and automation.
Join us for a live demo showing how to find and plug in community-built MCP servers, plus a walkthrough of the core features that enable smarter, voice-first workflows:
🎙️ Voice & Multimodal Intelligence
Natural Voice Control: Speak naturally to control AI workflows and execute commands
Multimodal Understanding: Process text, voice, and visual inputs simultaneously
Real-time Voice Synthesis: Get instant audio responses from your AI interactions
🔄 AI Workflow Orchestration
Extensible Tool System: Add custom tools and workflows through MCP
Workflow Automation: Chain multiple AI operations with voice commands
State Management: Robust handling of complex, multi-step AI interactions
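To give a flavour of the extensibility above: connecting a community-built MCP server to a client is usually just a short config entry. A hypothetical sketch in the common MCP client config shape (the server name and path here are placeholders, not from the talk):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/your/files"]
    }
  }
}
```

Once registered, the server's tools become available to the assistant, so a spoken command can be routed through them like any other workflow step.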
Whether you’re exploring AI for the first time or looking for more control and creativity in your setup, this talk will leave you with practical tools and a fresh take on what AI can really do.
Speaker: Becky Still, Digital Boop Ltd