Issue no. 1: Checking in on the vibe coders

Issue no. 1: Checking in on the vibe coders
Loveable's most-upvoted community project this month

Welcome to issue no. 1 of the Implausible AI newsletter!

I’m Andrew Baker. I’m a software engineer, founder, and executive. These days I wear lots of hats, but one thread runs through all of them: making sense of AI, and the ways it will change how software is built and sold (and what “software” will mean, for that matter).

Refining my own point of view on AI has been critical for me as a founder and a consultant. My goal with this newsletter is to help you refine yours.

Each weekly issue will start off with a handful of recent links to reporting / tools / data I found insightful. Then I’ll dive deep into a specific AI topic, bringing you my own analysis for where we are today and where we might be headed next.

I’d love your thoughts on issue no. 1. Reply to this email or shoot me a note at andrew@implausible.ai.

Issue no. 1 is all about vibe coding, the vibe coders, and the companies in the cutthroat race to serve them. Let’s dive in!

Andrew’s picks

A handful of insightful links which earned a spot in my notes this week:

A prompt injection attack written in invisible ink

I loved this report from two security researchers who delivered malicious prompts to AI chatbots by hiding them inside seemingly-normal images.

How it works: the prompts are invisible in the image’s original resolution but reveal themselves when the image is scaled down to make it ready for the LLM. A fun new reason learn about steganography.

OpenAI launches AGENTS.md to standardize end-user instructions for coding agents

As someone who tries a lot of AI coding tools, I welcome this attempt at a vendor-agnostic standard for engineers to provide project-wide instructions to their coding agents.

Despite the growing list of logos, don’t hold your breath waiting for Anthropic to get onboard. Claude Code still has the mindshare lead, perhaps best illustrated by a quick GitHub search: 104k public repos have a CLAUDE.md vs. 13k for AGENTS.md. But thankfully, it looks like AGENTS.md has enough other brands onboard to avoid the fate foretold in this xkcd classic.

Inference ain’t easy!

Crafting next-gen AI models is no small feat. These past few weeks gave us good reminders that operating them at scale is no cakewalk either.

If you tried OpenAI’s much-anticipated gpt-oss model on launch day last month, it turned out your choice of inference provider mattered a lot. Not long after, Anthopic’s top models quietly got a little dumber for a few days because of a bug in their own inference stack.

Both these examples were resolved quickly. But it’s a good reminder that operating these models is still just plain hard, even for the pros. One of many reasons I advise against running your own inference stack unless you’ve got bona fide AI engineering talent on your team.

Open models reason less efficiently than their closed counterparts

I really liked this post from Nous Research last month examining how open vs. closed models compare in their reasoning efficiency.

Besides the technical details, it’s interesting to consider the incentives here: when open model providers ship a model which reasons inefficiently they don’t pay for those extra tokens, you do. Closed model providers, who operate their own inference infrastructure, have more incentive to strike the right balance and keep you a happy customer. Open model providers, on the other hand, probably care more about eking out a few additional points in the benchmarks via extended reasoning.

“Happiness = smiles minus frowns”

Apple co-founder Steve Wozniak chimed in on Slashdot with his answer to life, the universe, and everything. A refreshing take in an era increasingly defined by founder paranoia.

Checking in on the vibe coders

What’s the most economically valuable task someone can use an LLM for today?

Consumers’ answers might vary, but ask 100 people inside the tech industry and I think you would see a clear majority respond with the same answer: writing code.

2025 has marked a turning point for AI-assisted coding tools. After trying so many myself these past few years, I have certainly witnessed it firsthand. When I used last year’s tools and models, it often felt like pair programming with an eager, novice engineer. I wondered sometimes how much I should attribute the modest productivity gains to my new tool’s capabilities vs. its role as my rubber duck.

Fast forward to 2025. When I use Claude Code or Cursor with today’s models, I feel less like I’m babysitting an intern and more like I’m managing a professional engineer.

There remain a ton of caveats, to be sure. The tools still thrive in greenfield projects vs. larger codebases. They can all search the web and reference up-to-date technical documentation, but remain biased towards their dated, pre-trained knowledge of a particular language, framework, or dependency. And they’re prone to implementing hacky workarounds when things don’t go according to plan, a quality tied to LLMs’ inherent difficulty in saying “I don’t know.”

Since it’s well-established, however, that writing code can be pretty lucrative, LLMs’ newfound competence this year has shed new light on a fascinating question:

If an AI coding agent can author reasonable-quality code without close supervision, what’s the best way to put it work?

From where I stand, we seem to be heading down two parallel paths when it comes to authoring software with AI coding agents:

  • “vibe coding” - Do the work synchronously and iteratively. Closely supervised by a human, but not necessarily a professional engineer
  • “wide coding” (my term) - Do the work asynchronously and in parallel. Orchestrated by an engineer, but each agent is not closely monitored

Next week’s issue will be about “wide coding.” Today, we’re checking in on the state of “vibe coding.”

The vibes today

One obvious way to get value out of AI coding tools that don’t need close supervision from a professional engineer is… to use them without a professional engineer. And if that’s your goal, boy do you have options!

Andrej Karpathy was using Cursor when he coined the term “vibe coding” in a February tweet, describing the technique of authoring software primarily through natural language. And while professional engineers can and still do “vibe code”, today the term is most often used to describe building software with tools like Lovable, Replit, Bolt, or Vercel’s v0. Services like these aim to be a one-stop shop which anyone can use to build and deploy software.

With the latest models at their disposal and their own tooling refinements, these services can now make a stronger claim to their users that, with some effort, you’ll be able to bring your product vision to life. But how close is that horizon? And for what kind of product?

Based on my experience I’ve seen these tools do best at prototyping an idea, which becomes valuable as a high-fidelity mockup. I ran a half-day AI hackathon for a Product team which had Lovable licenses. One PM was a Lovable early adopter and had already implemented the company’s brand guidelines as a Lovable component library. That helped the whole team create more interesting and valuable prototypes that day.

At that same hackathon, I offered to pair program with any participant curious to try to make a contribution to their production codebase. None of them took me up on it, not even to make a website copy change. Maybe that’s because they (wisely) knew it would still be an uphill climb, or maybe it’s because vibe coding a prototype is just more fun. But it was my first personal data point for an important question:

How much of this market will be employees building prototypes and small internal tools vs. entrepreneurs building software for their customers?

So how are the vibe coders doing?

For the latter group of aspiring entrepreneurs, all it takes is one visit to r/vibecoding to see they’re not having a great time. This specific post recently made the rounds. When it comes to equipping a layperson to build valuable software, it doesn’t seem like we’re close.

Most aspiring builders seem to get stuck building their MVP. For those who do make it that far, there are new curveballs involved in operating your product. I have to imagine the absence of “My vibe-coded app got hacked!” headlines is because there are few vibe-coded apps worth hacking, and not because they rigorously implement security best practices.

What about the ones selling the vibes?

Meanwhile, operating one of these vibe coding services seems to be a tough business.

The Information reported this month that Replit’s gross margins ranged from 36% to negative 14% this year due to LLM inference costs — in Replit’s case, for Claude and Gemini. Lovable and StackBlitz (creator of Bolt) had margins in the 35-40% range. Perhaps the whitelabeled infrastructure providers like Supabase and Neon are the players doing the best shovel-selling in this space.

But the bigger issue might be the intense competition. Not just with each other, but from ChatGPT, Gemini, and Claude, all of which now support a similar prototyping use case through their increasingly capable built-in tools for generating code (not to mention Figma’s new entrant). It’s not impossible to imagine employees one day sharing prototypes and internal tools with each other which were built and deployed without ever leaving ChatGPT.

So where does vibe coding go from here?

The dream of “equip anyone on Earth to build valuable software” may still be beyond our grasp, and as a founder it hurts to see aspiring entrepreneurs struggle as they try anyway.

But I wanted to focus on this topic for my inaugural newsletter because, to me, that dream remains one of the ways things can “go right” for AI. How many products didn’t get built in the past decade because an aspiring founder didn’t have the skills to get started? And within companies, how many times did a great idea die because all an employee could do is file a ticket on a backlog?

I’d love to see what the industry looks like when the power to create valuable software is in many more hands than it is today.

Next week

Thanks for reading this week’s issue. What do you think? Just hit reply or send me a note at andrew@implausible.ai.

Next week’s issue will be about how professional engineers are using AI coding assistants to scale their work. Subscribe here to get it delivered straight to your inbox.

Subscribe to Implausible AI

Updates and analysis on where AI is headed next, delivered to your inbox every week.
jamie@example.com
Subscribe