Vibing Jigs

The superpower hiding in old-school know-how.

Vibecoding Jigs

Vibecoding can be extremely powerful for getting a working prototype up quickly, or even for managing day-to-day maintenance tasks in a simple environment. Claude is really good at helping me update my website. It’s a simple task—I write some text, Claude adds it to a file, we review it, and I push it to GitHub—and then a GitHub Action (Claude wrote it) publishes the material.

So it’s just me, writing… sometimes for the LLM, sometimes for other people… and we have a “ritual” we go through, a simple one, to make changes to my website. A blog is an easy enough system to get this bootstrapped with—GitHub Actions are pretty quick to build, and as long as I don’t add complexity to the site, I can keep adding to the list of articles indefinitely until I run out of disk space.

I’m not going to talk about the disk problem today—there are plenty of DevOps blogs that can show how to solve that. It’s not a big concern with a blog or application code anyway. What I am going to talk about is managing complexity, and how to get the most out of the LLM’s “complexity floor.”

So what about web apps, or games?

These things turn into interrelated balls of spaghetti very fast with undisciplined vibecoding. Why? Because the LLM is unconstrained. It doesn’t know where the boundaries are, so it creates dependencies everywhere, reaches into files it shouldn’t touch, and builds brittle, tightly coupled parts.

This can be controlled through context management to an extent—careful prompting, semantic boundaries, compressed intent. But context management only scales so far. Eventually you need physical isolation: separate projects, separate repos, hard boundaries the LLM literally cannot cross. From there, human intent weaves these isolated pieces back together into something coherent.

This is the discipline. Knowing how to compress your intent. Knowing when to stop with a tool. Knowing when to generate one-time, few-time, or constrained-scope applications that stay small enough to remain trustworthy in the presence of LLM variance.

This is one side of the coin—but what about the other side? The tools I need to build all of this?

The LLM can write a lot of tools very quickly. If the process calls for a transpilation step, a working transpiler is often only hours away instead of days or weeks. This means something new: for starters, I’m no longer constrained to a platform’s choice of language when I design my systems.

It also means the jigs we use to code, the project-specific or specialized tools we use in the background while we build, have a new cost matrix.

What’s a Jig?

In machining, a jig is a device that holds a workpiece in place and guides the tool. It doesn’t do the cutting—the mill or the drill does that. The jig’s job is to make the cut repeatable. The stock gets clamped, the jig constrains the degrees of freedom, and every piece comes out identical. The jig turns a skilled operation into a mechanical one.

In software, a jig is anything that makes a process repeatable. A code generator. A transpiler. A schema-to-boilerplate tool. A test harness that exercises edge cases. The jig doesn’t write the application—but it holds the work steady so the cuts come out clean.
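To make that concrete, a jig can be tiny. Here is a sketch of a schema-to-boilerplate jig, with the schema format (a plain field-to-type mapping) and the names invented for illustration, not taken from any real project:

```python
# A minimal schema-to-boilerplate jig: given a {field: type} mapping,
# emit the source code for a Python dataclass. The jig doesn't write the
# application; it just makes one repetitive cut repeatable.

def emit_dataclass(name: str, fields: dict[str, str]) -> str:
    """Generate dataclass source code from a field-to-type mapping."""
    lines = [
        "from dataclasses import dataclass",
        "",
        "@dataclass",
        f"class {name}:",
    ]
    for field, type_name in fields.items():
        lines.append(f"    {field}: {type_name}")
    return "\n".join(lines)

source = emit_dataclass("Article", {"title": "str", "slug": "str", "published": "bool"})
print(source)
```

Once a tool like this exists, the boilerplate comes out identical every time it runs, which is exactly the property a clamped workpiece has.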

Jigs are cheap.

A small throwaway module, or a generic module with constrained scope, can be thought of as a jig. Tools used to make tools. Sometimes they are ad hoc, temporary. They may exist for the lifetime of a project, a process, or just to build something larger. And this is where vibing really shines.

Anyone who knows how to build something that generates code, or objects from constraints, knows how to build a jig. Ask an LLM to do it, and the jig appears.

Jigs can generate synthetic datasets for testing, produce boilerplate code from schemas, validate configurations against specifications, translate between languages or formats, balance game systems through simulation, or scaffold entire project structures from templates.

A simulation jig runs thousands of scenarios to surface edge cases. A translation jig converts Lua scripts to C++ or migrates an API from REST to GraphQL. A generator jig takes a database schema and emits typed models, validators, and API routes. These aren’t applications—they’re fixtures that keep the process intact in the face of uncertainty.
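A simulation jig of the kind described above can be a page of code. This sketch runs a toy game formula (invented here for illustration, not from any real project) through thousands of random scenarios and collects the inputs that break an invariant:

```python
# A sketch of a simulation jig: run a function across thousands of random
# scenarios and collect every input that violates an invariant. The damage
# formula is a toy with a deliberate edge case -- it goes negative when
# defense exceeds twice the attack -- which the jig surfaces mechanically.
import random

def damage(attack: int, defense: int) -> int:
    return attack * 2 - defense

def simulate(fn, invariant, runs: int = 10_000, seed: int = 0):
    """Return every (attack, defense) pair whose result breaks the invariant."""
    rng = random.Random(seed)
    failures = []
    for _ in range(runs):
        a, d = rng.randint(0, 100), rng.randint(0, 100)
        if not invariant(fn(a, d)):
            failures.append((a, d))
    return failures

edge_cases = simulate(damage, lambda result: result >= 0)
print(f"{len(edge_cases)} scenarios broke the non-negative invariant")
```

The seed makes the run reproducible, so the same edge cases surface on every execution: the jig holds the work steady while you fix the formula.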

I used to write these kinds of things by hand. Back then I had to ask myself: am I going to use a Lua to C++ transpiler often enough to make it worth the 6-18 months of research and implementation time?

Usually, I didn’t know. What I’ve found now is that I can use an LLM to do this kind of work in the space of hours. These tools are available, nearly for free, within the complexity window offered by modern LLMs.

The Failure Mode

The primary mistake is letting LLMs do repeatable work—work that can be done reliably by a machine. Every time an LLM is asked to generate the same boilerplate, format the same output, apply the same transformation, tokens burn on something a jig could do for free. Worse, variance gets introduced where consistency is needed.

The discipline is knowing when to crystallize. A solution gets vibed into existence, validated, and then the question arises: will this be needed again? If yes, freeze it. Turn the vibe session into a jig. Now that operation is repeatable, and the LLM is freed up to do what it’s actually good at—novel synthesis, not mechanical reproduction.
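As a concrete (and assumed, not taken from the author’s workflow) example of crystallizing: suppose the LLM kept being asked to turn article titles into URL slugs for the blog. That transformation can be frozen into a jig once, after which it costs no tokens and introduces no variance:

```python
# Crystallizing a repeated transformation into a jig: deterministic
# title-to-slug conversion. The slug rules here are an assumed convention
# for illustration; once frozen, every run produces identical output.
import re
import unicodedata

def slugify(title: str) -> str:
    """Deterministic title -> URL slug."""
    # Strip accents, lowercase, then collapse runs of non-alphanumerics
    # into single hyphens and trim hyphens from the ends.
    ascii_title = (
        unicodedata.normalize("NFKD", title)
        .encode("ascii", "ignore")
        .decode()
    )
    slug = re.sub(r"[^a-z0-9]+", "-", ascii_title.lower())
    return slug.strip("-")

print(slugify("Vibing Jigs: The Superpower!"))  # vibing-jigs-the-superpower
```

The vibe session that produced the rules happens once; the jig reproduces them forever.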

This points toward something larger. Build enough jigs, crystallize enough process into repeatable tooling, and formal reliability starts to accumulate. Schema-driven development. Monoliths that scale horizontally. Configuration over code. The old deployment patterns work again—but now with LLM-assisted jig fabrication to get there faster.

It’s a superpower, and it comes from old-school know-how.