RONIN Desktop
A local-first meeting copilot for macOS. Real-time transcription, AI-powered suggestions, and post-meeting summaries—audio never leaves your Mac.
The floating 3-panel overlay sits on top of your video call — transcript, response suggestions, and contextual guidance in real time.
Most meeting tools record everything and send it to the cloud. Your conversations, your negotiation strategies, your half-formed ideas—all uploaded to servers you don't control. RONIN takes the opposite approach. Audio stays on your device. Transcription runs on your GPU. The cloud is optional, never required.
Instead of passively recording, RONIN acts as an active copilot. It listens in real time and surfaces suggested responses, follow-up questions, risk flags, and relevant facts from your prep notes while the conversation is still happening.
Live Transcription
MLX Whisper runs natively on Apple Silicon with Metal GPU acceleration. No cloud API calls, no network latency.
Suggested Responses
Four tone-varied replies—direct, diplomatic, analytical, empathetic—generated in real time as the conversation unfolds.
Contextual Guidance
Follow-up questions, risk flags when discussion conflicts with your goals, and relevant facts surfaced from your prep notes.
Post-Meeting Summary
Executive summary, key decisions, action items, and open questions—generated the moment the meeting ends.
Four LLM Providers
Apple Intelligence (fully on-device, macOS 26+), LM Studio (local), OpenAI, or Anthropic. Your choice, your control.
Floating Overlay
Resizable 3-panel window sits on top of Teams, Zoom, or any video call. Drag to resize panels. Compact mode when you need minimal footprint.
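The pluggable provider design can be sketched as a small Python protocol. This is a minimal illustration, not RONIN's actual API: the names `LLMClient`, `Suggestions`, `make_client`, and the placeholder text are all assumptions.

```python
from dataclasses import dataclass
from typing import Protocol

# Hypothetical sketch of a pluggable LLM client with the four tones
# described above. Names are illustrative, not RONIN's real interfaces.

@dataclass
class Suggestions:
    direct: str
    diplomatic: str
    analytical: str
    empathetic: str

class LLMClient(Protocol):
    def suggest_responses(self, transcript: str) -> Suggestions: ...

class NoneClient:
    """'none' provider: transcription only, no copilot suggestions."""
    def suggest_responses(self, transcript: str) -> Suggestions:
        empty = "(suggestions disabled)"
        return Suggestions(empty, empty, empty, empty)

def make_client(provider: str) -> LLMClient:
    # Real implementations would wrap Apple Intelligence, LM Studio,
    # OpenAI, or Anthropic behind the same protocol.
    registry: dict[str, type] = {"none": NoneClient}
    return registry[provider]()
```

Because every provider satisfies the same protocol, the copilot loop never needs to know which backend is answering—swapping providers is a one-line settings change.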
Every operator works differently. RONIN ships with six distinct themes: the iconic Matrix green, a clean Modern dark, warm Amber CRT, blue Tactical HUD, earth-tone Field Manual, and high-alert Defcon red. Switch instantly from Settings.
Matrix
Modern
Amber
Tactical
Field
Defcon
Two paths, one principle. The backend path routes audio through a Python FastAPI server running MLX Whisper for transcription and a pluggable LLM client for copilot intelligence. The Apple Intelligence path keeps everything on-device.
SwiftUI Mac App
|- MeetingPrep -> LiveCopilot -> PostMeeting
|- AudioCaptureService (AVCaptureSession)
|- NativeCopilotService (Apple Intelligence path)
'- WebSocket connection
|
Python Backend (FastAPI)
|- MLX Whisper (transcription)
|- LLM Client (pluggable)
| |- LM Studio (local)
| |- OpenAI (cloud)
| |- Anthropic (cloud)
| '- none (transcription only)
'- Meeting State Manager
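The WebSocket link in the diagram carries small JSON events between the Mac app and the backend. A minimal sketch of what those messages might look like—field names and event types here are assumptions, not RONIN's actual wire protocol:

```python
import json

# Hypothetical event builders for the app <-> backend WebSocket link.
# The "type" field lets the SwiftUI client route each message to the
# right overlay panel (transcript, responses, or guidance).

def transcript_event(text: str, start: float, end: float) -> str:
    """Backend -> app: one finalized transcript segment from MLX Whisper."""
    return json.dumps(
        {"type": "transcript", "text": text, "start": start, "end": end}
    )

def suggestion_event(tone: str, reply: str) -> str:
    """Backend -> app: one tone-varied suggested response from the LLM client."""
    return json.dumps({"type": "suggestion", "tone": tone, "reply": reply})
```

A typed event stream like this keeps the Swift side simple: each incoming frame is decoded once and dispatched by its `type` field.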