The interview is for everyone. This page is for people who want to know what actually happens between a caller dialling your number and an outcome landing in your portal, and who want to wire the underlying pieces themselves.
Every Flickki assistant is a real-time loop. A message arrives on a channel, the runtime spins up or joins an assistant, the LLM decides what to say and which tools to call, and the result lands on one of three outcomes: booked, escalated, or noted.
Tool calls are first-class. The LLM doesn't just talk: it books a slot, sends an SMS, pokes a webhook, transfers a call. Every invocation is logged, typed, and replayable.
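"Logged, typed, and replayable" implies a record per invocation. A minimal sketch of what such a record could look like, assuming a JSON-lines log (the field names are hypothetical):

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ToolCall:
    """Hypothetical shape of one logged tool invocation."""
    tool: str    # e.g. "calendar.book"
    args: dict   # typed inputs the LLM supplied
    result: dict # what the executor returned
    ts: str      # ISO-8601 timestamp

def to_log_line(call: ToolCall) -> str:
    # One JSON object per line keeps the log greppable and replayable.
    return json.dumps(asdict(call))
```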
A starter gives you a style, useful skills, recommended tools, and sane boundaries. Once you pick one, it's yours to fork, customise, version, and rewire however you want.
Swap the LLM. Add a skill. Attach your own tools. Paste in your domain glossary. Wire a webhook to your internal API. Check the whole thing into git and run it through a code review. Flickki doesn't care how fancy you get; it just runs the file.
```yaml
---
name: Sam - front desk assistant
business: Delaney Plumbing
starter: library/front-desk-assistant
voice: elevenlabs/nicole
llm: claude-sonnet-4-6
skills:
  - message.take
  - urgency.triage
  - appointment.book
glossary:
  - hydrojet
  - backflow preventer
  - Outer Sunset
tools:
  - calendar.book
  - webhook.post  # our CRM
escalate_when:
  - caller says "flooding"
  - caller asks for the owner by name
---
```
A quick walkthrough of the runtime path for any inbound conversation, whether a phone call, a WhatsApp message, a web chat session, or an SMS. The same loop handles all of them.
A call, message, or chat arrives on one of your connected channels. The channel adapter normalises it into a single event shape.
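"A single event shape" means every adapter maps its raw payload onto one structure. A sketch of what that normalisation could look like; the field names, and the Twilio-style SMS keys in the example adapter, are assumptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InboundEvent:
    """Hypothetical normalised event: phone, SMS, WhatsApp, and web chat
    all collapse into this one shape before the runtime sees them."""
    channel: str          # "voice" | "sms" | "whatsapp" | "webchat"
    conversation_id: str  # call SID, thread id, chat session id...
    sender: str           # caller number or chat user id
    payload: str          # transcribed speech or raw message text

def normalise_sms(raw: dict) -> InboundEvent:
    # Example adapter: map a raw SMS webhook body onto the event shape.
    return InboundEvent("sms", raw["MessageSid"], raw["From"], raw["Body"])
```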
The runtime creates or joins a session for that conversation and attaches the compiled assistant that matches the rule you set.
The assistant loads its voice, tone, glossary, and tool list, then plays or posts its opening line on the channel.
For calls, speech streams through speech-to-text. For chat, text flows straight in. The LLM sees a rolling transcript and the tool schema.
When the LLM decides to call a tool, the runtime invokes the server-side executor, captures the result, and feeds it back.
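This step is a lookup-invoke-record cycle. A minimal sketch under the assumption that tools live in a name-to-callable registry (the shapes here are illustrative, not Flickki's executor):

```python
def execute_tool_call(call: dict, registry: dict, log: list) -> dict:
    """Hypothetical server-side executor: look up the tool, invoke it,
    record the invocation, and return the result for the LLM's context."""
    fn = registry.get(call["name"])
    if fn is None:
        result = {"error": f"unknown tool: {call['name']}"}
    else:
        result = fn(**call.get("args", {}))
    log.append({"name": call["name"], "args": call.get("args", {}), "result": result})
    return result  # fed back to the LLM as the tool's output
```

Running the tool server-side (rather than in the LLM's sandbox) is what makes every invocation loggable and replayable in the first place.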
Voice replies stream out as audio through TTS, interruptible at word level. Chat replies post back as rich messages.
On hangup or thread close, minutes and message units are reconciled against your plan and your telco cost.
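As a worked example of what "reconciled" could mean, here is one plausible scheme: round the call up to whole minutes, draw down plan minutes first, and price any overage at telco cost. The rounding rule and drawdown order are assumptions, not Flickki's published billing logic.

```python
def reconcile(seconds_used: int, plan_minutes_left: int, telco_rate_per_min: float) -> dict:
    """Hypothetical end-of-call reconciliation."""
    minutes = -(-seconds_used // 60)          # ceiling division: 125 s -> 3 min
    covered = min(minutes, plan_minutes_left)  # plan minutes absorb what they can
    overage = minutes - covered                # the rest is billed at telco cost
    return {
        "minutes_billed": minutes,
        "plan_minutes_left": plan_minutes_left - covered,
        "overage_cost": round(overage * telco_rate_per_min, 4),
    }
```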
Full transcript, structured collected fields, and the tool-call log land in your portal. The run is searchable and exportable.
The interview is for normal people. If you already know what you're doing, Flickki gets out of your way and gives you a clean, versioned, pasteable source of truth.
Assistants are defined in Markdown with YAML front-matter. Paste one in, edit it, diff it, check it into git. The interview just writes the same file.
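The file format described here, YAML front-matter between `---` fences followed by a Markdown body, is easy to split mechanically. A minimal sketch (Flickki's real parser may differ):

```python
import re

def split_assistant_file(text: str) -> tuple[str, str]:
    """Split a Markdown assistant file into (front_matter, body)."""
    m = re.match(r"^---\n(.*?)\n---\n?(.*)$", text, re.DOTALL)
    if not m:
        raise ValueError("missing YAML front-matter")
    return m.group(1), m.group(2)
```

Because the whole assistant is one text file, `git diff` on it is a meaningful diff of the assistant's behaviour, which is the point of making the interview write the same format.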
```markdown
---
name: After-hours receptionist
voice: elevenlabs/rachel
llm: claude-sonnet-4-6
tools: [sms.send, calendar.find_slot]
escalate_to: +1 415 555 0199
---
Warm, professional, patient…
```
The webhook tool is a full escape hatch: point it at your Supabase function, your Zapier scenario, or your internal API. Structured inputs and outputs mean the LLM knows exactly what it can do.
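"Structured inputs and outputs" usually means the tool is described to the LLM by a schema, and arguments are validated against it before anything fires. A sketch of what that contract could look like; the schema fields and the `webhook.post` shape here are assumptions for illustration:

```python
# Hypothetical structured contract for the webhook tool. The LLM only
# sees this schema, so it knows exactly which fields it may send.
WEBHOOK_TOOL = {
    "name": "webhook.post",
    "description": "POST a JSON payload to the configured endpoint",
    "input_schema": {
        "type": "object",
        "properties": {
            "event": {"type": "string"},   # e.g. "lead.created"
            "fields": {"type": "object"},  # structured collected data
        },
        "required": ["event"],
    },
}

def validate_args(args: dict) -> bool:
    """Tiny sketch of input validation: reject calls missing required keys
    before the webhook ever fires."""
    required = WEBHOOK_TOOL["input_schema"]["required"]
    return all(k in args for k in required)
```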
Every assistant keeps a history. Roll back, diff, replay a transcript against a new version. Catch regressions before they hit production.
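"Replay a transcript against a new version" can be sketched as: feed the saved caller turns to the new version and diff its replies against the recorded ones. The helper names here are hypothetical:

```python
def replay(transcript: list, assistant) -> list:
    """Hypothetical regression check: re-run a saved transcript's caller
    turns through an assistant version and collect its replies."""
    replies = []
    for role, text in transcript:
        if role == "caller":
            replies.append(assistant(text))
    return replies

def diff_runs(old: list, new: list) -> list:
    # Indices of caller turns where the new version answered differently.
    return [i for i, (a, b) in enumerate(zip(old, new)) if a != b]
```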
SIP Connectors let you point at Twilio, Telnyx, or a self-hosted Asterisk you already run. Rotate credentials in one place. Observe per-connector health.
Flickki runs the file. Pick a channel, drop a Markdown assistant into the editor, and watch the runtime spin up a real Room with real tools attached.
Free sign up →