Vercel Zero-to-Agent · 2026
Inside TamaGo
A durable, memory-rich AI companion that learns how you want to be talked to, and lets you forget anything. Built for the Vercel Zero-to-Agent hackathon, every layer powered by Vercel.
A hackathon submission
TamaGo was built for Vercel's Zero-to-Agent hackathon, with a dual ambition: prove out two of the three official tracks in a single product that's actually fun to live with. Both tracks are claimed in earnest: the entire agent loop runs on Vercel WDK, the chat reaches into MCP servers we host on Vercel Functions, and the first landing mockup was sketched in v0 before being exported and iterated locally.
Track 1 · Vercel Workflow (WDK)
Four durable workflows run TamaGo's background life: a welcome flow on signup, a companion-tick that sleeps for 10 minutes to 6 hours before deciding whether to message, an idle-checkin that warms up after silence, and a memory consolidator that merges raw chat into clean journal entries. Each step uses real `await sleep('3h')` semantics, surviving crashes, redeploys, and cold starts, and self-reschedules on every termination path so the loop heals from skips.
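In sketch form, a tick could look like the following, using WDK's directives and durable `sleep`. The step name `maybeMessage` is ours for illustration, and the endless loop stands in for TamaGo's actual reschedule-on-every-termination-path wiring:

```ts
import { sleep } from 'workflow';

// Hypothetical step: consult cooldowns + the intensity profile,
// then send a proactive message if it is warranted.
async function maybeMessage(userId: string): Promise<void> {
  'use step';
  // ...read the Redis profile, maybe fire a push notification...
}

// Durable workflow: each await below survives crashes, redeploys,
// and cold starts because WDK checkpoints every step.
export async function companionTick(userId: string) {
  'use workflow';
  while (true) {
    await sleep('3h'); // in TamaGo the window varies: 10 min to 6 h
    await maybeMessage(userId);
  }
}
```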
Track 2 · v0 + MCPs
The first landing page was generated with v0 and exported for local iteration. The chat consumes two MCP servers (`/api/mcp/weather`, `/api/mcp/time`) hosted as plain Vercel Functions. Skills are wired natively as AI SDK tools: the model decides when to call them, the UI renders a live `consulting Weather…` chip while a tool runs, and a persistent skill chip stays on the completed turn for auditability.
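For flavor, a weather server of this shape can be a single route handler. This sketch assumes the `mcp-handler` package and an illustrative `get_weather` tool; TamaGo's actual implementation may differ:

```ts
// app/api/mcp/weather/route.ts
import { createMcpHandler } from 'mcp-handler';
import { z } from 'zod';

const handler = createMcpHandler((server) => {
  server.tool(
    'get_weather',
    'Current conditions for a city',
    { city: z.string() },
    async ({ city }) => ({
      // Real lookup elided; MCP tools return typed content blocks.
      content: [{ type: 'text', text: `Weather for ${city}: …` }],
    }),
  );
});

// The same handler serves the whole MCP exchange over HTTP.
export { handler as GET, handler as POST };
```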
What it is
TamaGo is a tamagotchi-style AI agent that remembers you across days, devices, and reboots. It asks gentle questions to learn who you are, sends proactive check-ins when too much silence builds up, and consults real-world tools (weather, time) mid-conversation, visibly, so you can audit every external lookup.
How it works
A two-layer memory keeps stable user-profile facts (your name, your pet, your city) separate from episodic concerns that resolve over time. A per-user intensity profile in Redis tracks how often you want to be reached out to and which topics to leave alone, and quietly decays old signals if they're not reinforced. Durable workflows on Vercel WDK schedule themselves in the background: sleep for hours, wake up, decide whether to message, respect your cooldowns, reschedule. When the chat needs a real-world signal, the AI SDK calls our MCP servers as native tools and the UI renders a live chip while the call runs. A small content classifier (the same model as the chat, no extra provider) gates age-sensitive topics, asking for your age bracket the first time one comes up.
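As a concrete illustration of the decay idea (field names and the half-life are assumptions, not TamaGo's actual schema):

```ts
// Signals halve in weight every HALF_LIFE_MS unless reinforced.
const HALF_LIFE_MS = 7 * 24 * 60 * 60 * 1000; // one week, assumed

interface IntensitySignal {
  weight: number;    // 0..1, how strongly the user signaled this
  updatedAt: number; // epoch ms of the last reinforcement
}

function decayed(s: IntensitySignal, now = Date.now()): number {
  return s.weight * Math.pow(0.5, (now - s.updatedAt) / HALF_LIFE_MS);
}

function reinforce(s: IntensitySignal, boost = 0.2): IntensitySignal {
  // Decay first, then add the fresh evidence, clamped to 1.
  return {
    weight: Math.min(1, decayed(s) + boost),
    updatedAt: Date.now(),
  };
}
```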
▲ Powered end-to-end by Vercel
AI Gateway routes the LLM (xAI Grok via a single provider/model string). Vercel WDK runs the agent's durable workflow loop with real `await sleep('3h')` semantics. Vercel Functions host both the chat API and the MCP servers (weather, time) the agent calls into. Marketplace storage holds the per-user snapshot. v0 sketched the first mockup before the project moved to local development. One platform, one git push, every preview deploy already production-grade.
The stack
Vercel AI Gateway
One key, one provider/model string. Routes to xAI Grok 4.1 fast for chat + meta extraction.
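In practice that means the model reference in code is just a string; the exact Gateway slug for Grok 4.1 fast is assumed here:

```ts
import { streamText } from 'ai';

// No xAI SDK, no per-provider key: AI Gateway resolves the string.
const result = streamText({
  model: 'xai/grok-4.1-fast', // assumed slug
  prompt: 'Greet the user in TamaGo voice.',
});
// result.textStream feeds the UI.
```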
Vercel WDK
Durable workflows with `use workflow` + `use step`. The companion-tick + idle-checkin loops self-heal across skips.
Vercel Functions
The chat API streams NDJSON of typed events. The MCP servers (weather, time) are themselves Functions on the same deployment.
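A minimal sketch of that shape (the event names are illustrative, not TamaGo's actual wire format):

```ts
type ChatEvent =
  | { type: 'token'; text: string }
  | { type: 'skill-start'; skill: string }
  | { type: 'skill-end'; skill: string };

export async function POST(_req: Request) {
  const encoder = new TextEncoder();
  const stream = new ReadableStream<Uint8Array>({
    start(controller) {
      // NDJSON: one JSON object per line, flushed as it happens.
      const emit = (e: ChatEvent) =>
        controller.enqueue(encoder.encode(JSON.stringify(e) + '\n'));
      emit({ type: 'skill-start', skill: 'weather' });
      emit({ type: 'skill-end', skill: 'weather' });
      emit({ type: 'token', text: 'Hi!' });
      controller.close();
    },
  });
  return new Response(stream, {
    headers: { 'content-type': 'application/x-ndjson' },
  });
}
```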
AI SDK 6 + @ai-sdk/mcp
streamText with `tools` from createMCPClient. The model decides when to call. The UI shows a live chip while each tool runs.
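Wired up, that could look like the following; the transport shape, URL, and model slug are assumptions:

```ts
import { streamText } from 'ai';
import { createMCPClient } from '@ai-sdk/mcp';

// Connect to one of our MCP Functions (illustrative URL).
const weather = await createMCPClient({
  transport: { type: 'sse', url: 'https://tamago.example/api/mcp/weather' },
});

const result = streamText({
  model: 'xai/grok-4.1-fast',   // assumed slug
  tools: await weather.tools(), // the model decides when to call
  messages: [{ role: 'user', content: 'Do I need an umbrella today?' }],
});
```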
Marketplace storage
Upstash Redis (provisioned via Vercel Marketplace) holds the per-user snapshot: chat turns, memories, facts, state.
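One plausible shape for that snapshot (the field names are ours, not the real schema):

```ts
import { Redis } from '@upstash/redis';

interface UserSnapshot {
  turns: { role: 'user' | 'assistant'; text: string; at: number }[];
  profileFacts: Record<string, string>; // stable: name, pet, city
  episodic: { concern: string; resolvedAt?: number }[];
  intensity: Record<string, { weight: number; updatedAt: number }>;
  state: 'idle' | 'awaiting-checkin' | 'cooldown';
}

const redis = Redis.fromEnv();
const userId = 'demo-user'; // illustrative
const snapshot = await redis.get<UserSnapshot>(`snapshot:${userId}`);
```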
v0
The first landing mockup was sketched in v0, then exported and iterated locally.
dcouplr
The architectural kernel. Every capability (storage, LLM calls, workflow triggers, event bus) is registered as a typed semantic intent that callers resolve at runtime. Built by a friend; deserves more eyes.
Web Push
Service-worker-backed push notifications fire when the agent has something proactive to say (idle check-in, companion tick).
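The service-worker half is the standard Web Push pattern; the payload fields here are illustrative:

```ts
// public/sw.js (runs as a classic service-worker script)
self.addEventListener('push', (event) => {
  const data = event.data?.json() ?? {};
  event.waitUntil(
    self.registration.showNotification(data.title ?? 'TamaGo', {
      body: data.body ?? 'Your companion has something to say.',
      tag: data.tag, // collapses duplicate check-ins
    }),
  );
});
```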
Next.js 16 App Router
React 19, Turbopack dev server, Server Actions for the auth flow, Edge-friendly runtime.
Built by
Designed, written, and shipped end-to-end by Pedro Knigge. If you want to follow along or say hi, find me on X.
@PedroKnigge