

Zonko Graveyard
Last updated on 8th April 2026
Experiments we built, learned from, and consciously stopped pursuing.
Our philosophy: if the ecosystem is moving fast, we want to stay close to the frontier, know who is building what, and understand what is changing in real time. We then bring those insights together into holistic products instead of optimizing for one isolated feature. We do not always build to maximize users immediately. Often, we build because we are genuinely curious and want to understand a space deeply.
1. Agentic AI image editing tool
What we built: A system that could understand a prompt and auto-compose contextually accurate visuals, combining real-world references with generated content. Think: "Generate Ganpati Bappa celebrations on real Mumbai roads" or "Place a company banner on Hiranandani Gardens using real building context."
Why we built it: To get our hands dirty with image and video generation models across open-source and closed-source stacks. We wanted to understand how to route context on the fly.
What we actually learned: Long, detailed prompts often perform worse than short, generic ones. The bigger surprise was that the tool was capable, but without clear use cases surfaced, most users couldn't figure out what to do with it. This became one of our core principles: in consumer AI, discovery of use cases is a bigger problem than capability itself. Just being powerful isn't enough. You have to show people what's possible.
Where it went: The technical depth from this made building our on-brand image generator (below) dramatically faster.
2. AI-native messaging + assistant + network app
What we built: A product that tried to collapse messaging, assistant behavior, and a social layer into one place. In 1:1 chats, the agent could do real work inside the conversation itself, tracking lists and tasks, booking appointments, remembering context, and helping you move things forward. As an assistant, it could sit on top of your life and be useful across workflows. In group chats, it could summarize, coordinate, nudge, and act as a shared layer for the room.
Why we built it: Messaging is already where people spend time. We wanted to see what happens if the assistant does not live in a separate box, but inside the actual social graph and conversation flow where decisions, coordination, and relationships already exist.
What we actually learned: A lot was possible. The product could be genuinely useful in 1:1 conversations, assistant flows, and group contexts. But we did not see one large, unmistakable wow moment that made the whole thing feel inevitable. It was interesting in many ways, but not sharp enough in one way.
Where it went: It gave us much better taste around where agents feel natural, where they feel forced, and what kind of consumer behavior can actually compound.
3. On-brand asset generator
What we built: A tool that generates branded creatives from plain prompts by inferring a brand's visual and textual language. "Create a Republic Day visual for Zomato", and it just works.
Why it is in the graveyard: A few people tried it and liked it, but we chose to drop it.
4. StudyAnything.ai
This was the hardest one to kill.
What we built: You give it something you want to learn; it understands your intent, generates a course so you can go deep, and lets you chat and learn with a voice teacher.
Why we built it: We keep learning different things inside the team. We wanted to build the tool we wished existed.
Why it is in the graveyard: People who tried it liked it, but repeat usage stayed low. We also saw an upstream problem: most users froze on the blank "what do you want to learn?" prompt. It reinforced that discovery often matters more than capability.
Where it went: Live @ studyanything.ai.
5. AI-native dating matchmaking MVP
What we built: An MVP where AI understands people and preferences to do meaningfully better matchmaking.
Why it is in the graveyard: The concept worked, but the market size didn't feel attractive enough for a company-level bet.
6. AI influencer video pipeline
What we built: A tool where creating a character and generating videos for that character was a one-click workflow. Built purely for internal use, to solve our own workflow problem of making video content at scale without manual production.
Frontier explorations
Alongside product bets, we regularly run focused sprints to build deep intuition across AI modalities. These are not products. They are deliberate investments in understanding what is possible, what is changing, and where the real levers are.
Voice stack: End-to-end voice pipelines, tool calling inside voice flows, hosting tradeoffs, and latency and cost optimization across the full stack (including 100x+ cost improvements from self-hosting).
Vibe coding platform: Explored what a fast, opinionated coding tool looks like built from scratch — UI, workflow, and how AI fits naturally into how people actually code.
Internal agent platform: Went deep on sandboxing, massively parallel subagents, and orchestrator systems for an internal background coding platform. The interesting direction was recursive language models — one model spawning, supervising, and synthesizing the work of many others inside isolated environments. Gave us real intuition for what breaks, and what compounds, when you go from one agent to actual multi-agent systems.
Music generation: Evaluated open-source music models in real workflows. Mapped what is possible and what is still broken.
Generative UI SDK: Built a developer SDK for generating UI from structured model outputs. Paused when Vercel launched JSON Renderer.
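The orchestrator pattern from the internal agent platform above, one model fanning work out to many subagents and synthesizing their results, can be sketched in a few lines. This is an illustrative toy, not our actual system: `run_subagent` is a hypothetical stand-in for a model call inside a sandbox, and the fan-out uses plain threads rather than isolated environments.

```python
from concurrent.futures import ThreadPoolExecutor

def run_subagent(task: str) -> str:
    # Hypothetical stand-in: a real system would call an LLM inside an
    # isolated sandbox here and return its output.
    return f"result for: {task}"

def orchestrate(goal: str, subtasks: list[str]) -> str:
    # Orchestrator fans the subtasks out to parallel subagents...
    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(run_subagent, subtasks))
    # ...then synthesizes their outputs into one combined answer.
    return f"{goal}: " + "; ".join(results)

print(orchestrate("summarize repo", ["read docs", "scan tests"]))
```

Even at this toy scale, the interesting questions are the ones the sprint surfaced: what breaks when subagents fail or disagree, and what compounds when the orchestrator can supervise and respawn them.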
Two other ideas we're excited about, but not actively working on right now:
- AI-native cloud: Make building AI products radically simpler, spanning context management, memory, tool calling, and model routing, as well as VMs and cloud infrastructure primitives rather than just compute.
- AI-native hedge fund: A fund that is built from day one around AI-native workflows for research, execution, and iteration.
Why this page exists
Sometimes an experiment turns into a product. Sometimes it turns into a capability we use later. Sometimes it just turns into conviction about what's real and what isn't.
Three principles we've learned the hard way:
Discovery > Capability. In consumer AI, it doesn't matter how powerful your product is if people can't figure out what to do with it. Solving discovery is harder and more important than adding features.
Best model ≠ best strategy. In consumer apps where monetization takes time, cost structure is existential. The difference between the best closed-source model and a well-deployed open-source alternative can be 100-300x in cost. That can make or break a product.
Nothing compounds if you don't let it. Every experiment here deposited a technical capability: voice, tool calling, generative UI, cost optimization, agent architecture. We consciously design experiments so the learnings flow into what we're building next. And we kill things fast: most experiments on this page lasted weeks (sometimes less), not months.
Active bets
- Daily status app.
- Rumi — assistant and companion for India, where we are running tens of experiments to figure out what it takes to bring AI to the masses.
- Layer — a new anytime-anywhere-available multiplayer coding tool, currently under development.
- Harbor — a product for OpenClaw, Hermes, and people working with multiple agents.
The throughline is the same: bring AI into places where people already spend time, and build products that feel obviously useful instead of artificially impressive.
Built by the Zonko team since mid-December 2025.
We're hiring. If this is how you want to work, join us.