Çelik here.

Eleven days. That's the gap between Anthropic shipping Claude Design on April 17 and the nexu-io team publishing Open Design, the same product, open-source, local-first, running on whichever coding agent is already on your laptop. Around 20,000 GitHub stars in its first week. The frontier-lab playbook used to give vendors a six-month moat. This week three independent moves told us the moat is now eleven days wide, and the durable thing isn't the tool. It's the skill library and the workflow.

If you ship a tool that wraps a frontier model, the OpenAI Symphony piece is the one to read first. If you build product or design at a CEE company, the Tallinn story is for you. If you sell into government, the Pentagon update will tell you why the buyer just stopped paying for any single vendor's badge.

The OSS Layer Just Caught Up

On April 17 Anthropic shipped Claude Design, its bundled prototype-and-deck builder, locked to Opus 4.7 and gated behind Pro at $20, Max at $100 to $200, Team at $25 to $30 a seat. On April 28 nexu-io published Open Design: same outputs (web/desktop/mobile prototypes, slides, images, video, decks, exports to HTML, PDF, PPTX, MP4, ZIP), local-first, agent-agnostic, AGPL. Eleven days.

The repo ships zero models of its own. It auto-detects 13 coding agents already on your laptop (Claude Code, Codex, Cursor Agent, Gemini CLI, Copilot CLI, Hermes, Kimi, Pi, Kiro, OpenCode, Qwen, Devin, Mistral Vibe) and wires whichever it finds into a skill-driven design loop. The neura.market writeup reads it as the first cleanly cloned Claude Labs product, and the repo backs that read with 19 composable skills and 71 brand-grade design systems stitched together inside the same eleven-day window.
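For the curious: "auto-detects the agents on your laptop" is less magic than it sounds. A minimal sketch of the pattern, in Python, might look like the following. The binary names and the detection logic here are my illustration, not Open Design's actual code, which the writeup doesn't show.

```python
import shutil

# Hypothetical mapping of display names to CLI binaries. Open Design's
# real list and binary names are not published in the coverage cited here.
KNOWN_AGENTS = {
    "Claude Code": "claude",
    "Codex": "codex",
    "Gemini CLI": "gemini",
    "Cursor Agent": "cursor-agent",
}

def detect_agents() -> list[str]:
    """Return display names of agent CLIs found on the user's PATH."""
    return [
        name
        for name, binary in KNOWN_AGENTS.items()
        if shutil.which(binary) is not None
    ]
```

Whichever names come back, the tool routes its design loop through that agent instead of shipping a model of its own. That's the whole trick, and it's why the repo can stay model-free.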

Don't read this as "Anthropic versus a free clone." Read it as a position change. The unit of work that earned a vendor pricing power last year was the polished UI layered on a frontier model. This week that unit got commoditized in the time it takes a team to plan a sprint. What didn't get commoditized: the skill library (Open Design's 19 + 71 stack is bigger than what Anthropic shipped on day one), the agent that drives the workflow (yours, not the vendor's), and the local data you don't have to upload. The product is now the wrapper-of-wrappers, and the wrappers are getting freer every Friday.

The reader move this week: clone the Open Design repo, run it against your own agent on one real brief (a deck, a prototype, an internal one-pager), and time the loop end to end. Do that before May 18. If your existing budget line for "AI design tooling" costs more than you'd pay an intern to maintain a forked skill library, you have a budget conversation, not a product comparison. And if you're shipping a closed product on top of an Anthropic or OpenAI API, write down which of your features survives an eleven-day clone and which doesn't. That's the only roadmap that matters next quarter.

Bottom line: The unit of value just moved from the polished UI on top of a frontier model to the skill library and the agent that drives it, and it moved in the time it takes a team to plan a sprint.

You don't need to be technical. Just informed.

Most AI newsletters are written for engineers. This one isn't.

The AI Report is read by 400,000+ executives, operators, and business leaders who want to know what's happening in AI — without wading through code, jargon, or hype.

Every weekday, we break down the AI stories that matter to your business: what's being deployed, what's actually working, and what it means for your team.

Free. 5 minutes. Straight to the point.

Join 400,000+ business leaders staying ahead of AI — without the technical overwhelm.

Tallinn Forks the State

While the global press argued about Open Design, Stefano Amorelli was committing to estonia-ai-kit from Tallinn: a single-developer, AGPL-licensed SDK that exposes Estonia's tax authority (EMTA), business register, statistics service, legal corpus, and LHV Bank to Claude, GPT, and any MCP-compatible agent. It's the most CEE thing I've read all month. One engineer, a country's worth of public APIs, an agent layer, no committee. Estonia's government committed €20M to open-source AI earlier this spring and approved 15 high-impact OSS projects at its April 8 council. Amorelli's repo is the kind of thing that fund makes possible at scale.
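What does "exposing a tax authority to an agent" actually look like? MCP servers describe each capability as a named tool with a JSON input schema the agent can read. The sketch below is illustrative only; the endpoint, field names, and helper are my assumptions, not estonia-ai-kit's actual interfaces.

```python
import json

def make_tool(name: str, description: str, params: dict) -> dict:
    """Build a tool descriptor in the JSON-schema style MCP servers use.

    Hypothetical helper for illustration; the real SDK's API is not
    shown in the newsletter.
    """
    return {
        "name": name,
        "description": description,
        "inputSchema": {
            "type": "object",
            "properties": params,
            "required": list(params),
        },
    }

# Illustrative descriptor: look up a company in Estonia's business register.
business_register_lookup = make_tool(
    "business_register_lookup",
    "Look up an Estonian company by registry code.",
    {"registry_code": {"type": "string"}},
)

print(json.dumps(business_register_lookup, indent=2))
```

One descriptor like this per public API, a server that executes the calls, and any MCP-compatible agent can use the registry without custom integration work. That's why one engineer can cover a country's worth of APIs.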

The framing matters. Eesti.ai's CEO Kirke Maar told Estonia Digital on April 29 that the country isn't trying to build a frontier model. It's building the workflow layer above whoever wins. That's Mirror Stack at the policy level. Bigger countries spent the past two years pouring capital into national champions and national models. Estonia is spending two years getting agents to talk fluently to the Tax Board. The first approach gives you a press release. The second one gives you a state that runs on agents in 2027.

If you're a CEE founder building anything that touches government data, this is the template. Fork estonia-ai-kit, swap in your country's APIs (Bulgaria's NRA, Poland's GUS, Romania's ONRC, Serbia's APR), publish the MCP servers under a permissive license, and pitch your local digital ministry in June. The window between "OSS hobby project" and "official MCP toolkit" is about six months in this regulatory climate. Estonia is six weeks in. The next country to land a comparable kit gets a free conversation with the Commission about CADA-aligned procurement.

Bottom line: While bigger countries spent two years funding national models, Estonia is spending six weeks giving its digital state agent-readable interfaces, and the small-country, single-developer template is the one that travels.

OpenAI Made Symphony Free

On April 28 OpenAI open-sourced Symphony, a reference implementation that maps every open ticket in Linear, Trello, or Jira to its own Codex agent workspace, runs them in parallel, and returns CI status, PR review feedback, and walkthrough videos as proof-of-work. Internal OpenAI teams reported a 5x increase in landed PRs in the first three weeks running it. OpenAI also said the part nobody expects from a frontier lab anymore: they will not maintain Symphony as a product. It's a SPEC.md you fork.
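The fan-out pattern Symphony describes is simple enough to sketch: one workspace per open ticket, run in parallel, each returning a proof-of-work record. The stand-in function below is my illustration of the shape, not the SPEC.md itself or anything backed by Codex.

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent(ticket_id: str) -> dict:
    """Stand-in for one agent workspace handling one ticket.

    A real implementation would spin up a workspace, run the coding
    agent, and collect CI status, PR feedback, and a walkthrough video.
    """
    return {"ticket": ticket_id, "ci": "passed", "pr": f"PR for {ticket_id}"}

def run_all(tickets: list[str], max_parallel: int = 4) -> list[dict]:
    """Fan out tickets to parallel workspaces, collect proof-of-work."""
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        return list(pool.map(run_agent, tickets))

results = run_all(["LIN-101", "LIN-102", "LIN-103"])
```

The orchestration is a few dozen lines of glue; the expensive part is the tokens each workspace burns. Which is exactly why OpenAI gives the glue away and charges for the tokens.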

Read the strategic move underneath that release and the Mirror Stack thesis lands from the vendor's own seat. OpenAI looked at how fast Anthropic's Claude Design got cloned (eleven days, the lead story above) and concluded that any product they shipped on top of Codex would have the same shelf life. So they released the spec, removed the cloning incentive, and turned the orchestration layer into distribution for Codex API consumption instead of a paid wrapper. The implicit pricing argument: when the wrapper layer commoditizes inside two weeks, charging $20-30 a seat for it is a losing trade. Charging for the underlying tokens is not. That is the same conclusion the capital markets reached this week when Cursor's $1B revenue ran into ~$1.15B in API costs and Cursor responded by shipping its own Composer 2 model.

If you sell a tool that wraps a frontier model, treat Symphony as a market signal you cannot ignore. Audit which of your features are "orchestration that connects an existing system to an LLM," because that's the layer OpenAI just told the market is free now. Move your pricing onto whatever sits above the spec: a curated skill library, a workflow your customers can't fork in a weekend, or a data path that solves a compliance problem the spec doesn't. Run that audit before your next pricing review. By the time you walk into it, three of your competitors will have started the same conversation, and the question on the table will be whether your orchestration layer is your product or your customer-acquisition cost.

Bottom line: When the wrapper layer commoditizes inside two weeks, charging for orchestration is a losing trade, and OpenAI just confirmed it by giving its own implementation away.

Pentagon Picks Eight. Anthropic Out.

On May 1 the U.S. Department of War authorized eight firms to deploy AI on Impact Level 6/7 classified networks: AWS, Google, Microsoft, NVIDIA, OpenAI, SpaceX, Reflection, and Oracle (added hours later). Anthropic, the first company to put models on Pentagon classified systems, was excluded under the supply-chain risk designation Defense Secretary Hegseth formalized in March, even though a federal judge blocked enforcement of that designation in late March. The biggest classified buyer in the world told the market that on the highest-stakes networks, eight vendors are interchangeable, and the ninth, the one with the strongest safety reputation, is out for politics.

The template this sets is the load-bearing part. When a sovereign customer treats AI vendors as substitutable line items rather than a single moat-defended product, the buying logic flips. Procurement starts pricing the workflow, the data residency, and the integration cost, and stops paying a premium for a brand. That logic copies itself to NATO IT procurement, to EU GovTech, to every defense ministry that buys what the Pentagon buys. The shift is the same one Stories 1 and 3 told, now in policy: the tool is replaceable, the workflow is the moat, and the vendor with the prettiest model has zero pricing power when eight others can run the same skill library.

If you sell into government, directly or via a prime, your messaging next quarter has to reach a buyer who already assumed your model is interchangeable. Lead with the skill library (what your stack does that another vendor's stack doesn't), the data path (where it sits, who can audit it), and the workflow you've already shipped to a comparable customer. Lead with your model name and your demo and you'll lose the procurement conversation in the second meeting. CEE founders pitching ReArm-funded buyers: pull the May 1 Breaking Defense story into your deck before your next meeting. It's the cleanest one-page argument for skill-library-first selling that you can put in a slide.

Bottom line: When the biggest classified buyer treats AI vendors as substitutable line items, the brand premium evaporates and the workflow becomes the only thing left to charge for.

Short Signals

Four picks worth your attention, each readable in under a minute.

Cursor SDK ships, Composer 2 and the Security Reviewer with it. Cursor opened its agent runtime on May 1: same harness, same Composer 2 model, accessible in a few lines of TypeScript. Cross-repo edits, durable interactive canvases inside the Agents Window, and an always-on Security Reviewer that flags prompt injection on every PR. Useful if you're tired of paying for Cursor and want to rebuild it on top of Cursor.

Codex CLI 0.128.0 added /goal. Set a goal, set a token budget, walk away. The autonomous loop runs until either the agent judges the goal complete or the budget burns. Simon Willison's notes walk through the mechanics. This is what cloud autonomy looked like six months ago, now shipping in a CLI on a four-day cadence.
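The goal-plus-budget loop is a pattern worth internalizing even if you never touch the CLI. A toy version, where step() is a stand-in for one agent turn rather than anything from Codex's internals:

```python
def step() -> tuple[int, bool]:
    """Stand-in agent turn: burns a fixed token count, never finishes,
    so the loop below exits on budget alone."""
    return 1000, False

def run_goal(budget: int) -> str:
    """Run agent turns until the goal is judged complete or the budget burns."""
    spent = 0
    while spent < budget:
        tokens_used, done = step()
        spent += tokens_used
        if done:
            return "goal complete"
    return "budget exhausted"

print(run_goal(5000))  # prints "budget exhausted"
```

Two exit conditions, one of them financial. That's the whole safety model of walk-away autonomy.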

VoltAgent's awesome-agent-skills crossed 1,400 skills. A single registry of agent skills installable across Claude Code, Cursor, Codex CLI, Gemini CLI, and Antigravity. Skills are the cross-vendor package format now. If you're building agent infrastructure for your team, start here before writing one from scratch.

Mistral shipped Workflows and Medium 3.5 open-weight. On April 28 Mistral released its agent orchestration product alongside a 128B dense open-weight model on Hugging Face. Self-hosters now have a serious open-weight option for production agent workloads that doesn't route through OpenAI's or Anthropic's servers.

That’s it for today

Thanks for making it to the end! I put hard work and dedication into every email I send, and I hope you're enjoying it.

By the way: if you want to get your brand in front of a fast-growing audience of founders, investors, innovators, and tech professionals from South-East Europe all the way to Europe and the US, Signal connects the dots between local and global opportunities, and your message can be part of the story. Send an email to [email protected].

See you in the next edition,
Çelik
