Two projects. Same tech sensibility. Radically different timelines.
| Project | Timeline | Lines of Code | Stack |
|---|---|---|---|
| Beef Arena | 2 years | ~18,000 C# | Unity |
| DomRTS | 1 week | ~30,000 Lua | Love2D |
Beef Arena shipped to Steam. DomRTS is an open-source RTS template ready for anyone to build on.
The difference wasn’t the engine. It wasn’t caffeine. It was methodology.
The first time I envisioned a career other than “baseball player” was when I brought home Mega Man 2 from Toys R Us. The bright colors, the music, the way each robot master had a whole world built around a single idea—it captured something in my young mind. I wanted to make games. That was the late 80s, and it led me into math and programming as core skills.
I made small games as a teenager. In college, I built a complete engine in vanilla Java: PNG sprites rendered to a canvas, a stable RK4 physics integrator, a level editor, modular enemies. I wanted to turn it into a product, but got hung up on things like consistent gamepad support in Java. The project stalled.
Years later, while working in networking, I attended a Unity hackathon and rebuilt Super Crate Box in 48 hours. I kept tinkering with Unity but never developed fluency with the engine. No big social circle pushing me forward. Other priorities.
Then in 2018, I made a small browser game called “beefarena” and deployed it to beefarena.com, hosted on S3 and Lambda, with a leaderboard to show how much dynamic functionality you could get for cheap. I demoed it at work. It was a hit with coworkers.
So I decided to turn it into something bigger.
Beef Arena Supreme started in December 2023. 166 commits later, it shipped in late 2025.
I took the original gameplay loop and wrapped it in a typical roguelike structure—procedural stage generation from hand-crafted, tested single-screen segments. That’s what took 2 years.
The numbers tell part of the story:
| Metric | Value |
|---|---|
| Enemy types | 77 |
| Scenes built | 57 (39 archived/scrapped) |
| Boss fights | 7 |
| Audio files | 376 |
| Animations | 186 |
Those 39 archived scenes are the real metric. That’s iteration. That’s finding out what doesn’t work by building it.
The hardest system wasn’t technical. It was the human-presentation layer—design. I spent a lot of time thinking about how to make the game look attractive as a product. Gameplay worked early. Making it look like something people would want to buy took much longer.
The git history has gaps. Long ones.
Those gaps weren’t idle time. That was me learning everything the LLM couldn’t help with yet:
Pixel art. I’d dabbled when I was younger—learned some rules for simple shapes that didn’t look too janky. This time I took a more disciplined approach: color palettes, contrast, readability. It greatly improved the curb appeal of the game.
Music composition. This is where I still like being human. AI-generated music is genuinely good now—production-ready background tracks that fit themes and stay out of the way. But I enjoy playing music, and I have a style that isn’t caught in the models. So I wrote all of Beef Arena’s music myself: improvise at the keyboard until something clicked, record MIDI into Ableton, quantize, then feed it back into my Prophet 12 to shape the soundscape. 4-12 hours per song to get something with that 80s John Carpenter vibe I wanted. A lot of people appreciated it. One streamer gave me shit about it being weird—but that’s the point. For future games, I can use LLMs to be more genre-versatile when I need to. But this was mine.
Shader programming. My pixel art style wasn’t authentic enough to pass as NES/SNES. But shaders let me add gradients, lighting effects, flashing sprites for hit feedback. Way easier than doing it by hand, and it sold the visual style.
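The hit-flash is a good example of how little shader code this takes in Love2D: a pixel shader that pushes the sprite’s color toward white by a uniform, plus a decay function on the CPU side. This is a generic illustration of the technique, not a shader from Beef Arena.

```lua
-- Hit-flash in Love2D: lerp the sprite toward white while `flash` > 0.
-- Generic sketch of the technique, not code from Beef Arena.

flash_shader_src = [[
  extern number flash;  // 0.0 = normal sprite, 1.0 = solid white

  vec4 effect(vec4 color, Image tex, vec2 uv, vec2 screen_coords) {
    vec4 px = Texel(tex, uv) * color;
    // keep the sprite's alpha, push RGB toward white while flashing
    return vec4(mix(px.rgb, vec3(1.0), flash), px.a);
  }
]]

-- Flash intensity decays back to zero over `duration` seconds after a hit.
function flash_amount(time_since_hit, duration)
  return math.max(0, 1 - time_since_hit / duration)
end

-- In love.load():  shader = love.graphics.newShader(flash_shader_src)
-- Each frame:      shader:send("flash", flash_amount(t, 0.2))
```

The whole effect is one uniform and three lines of GLSL, which is why it’s so much easier than redrawing flash frames by hand.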
LLM tooling itself. In early 2024, I built a Rust CLI for ChatGPT. It worked well—managed context, ran on the command line. But it didn’t do much beyond what the web interface offered. I went down a rabbit hole and eventually climbed back out to focus on the game again.
This is the part most “AI productivity” content ignores. You can’t spec what you don’t understand. The multidisciplinary investment wasn’t separate from the coding work. It was prerequisite to it.
Fast forward to the sprint.
Same brain. Same domain knowledge. But now I was writing specifications, not implementations.
The difference:
| Before | After |
|---|---|
| “Write a function that moves the unit” | “Units use A* pathfinding on tile coordinates” |
| “The enemy should attack” | “Enemies follow state machine: Idle → Patrol → Aggro → Attack → Disengage” |
| “Add a health bar” | “UI elements use the damage palette for threats, restoration palette for healing” |
The second column is what I call intent density. The LLM doesn’t just transcribe—it derives. One constraint generates dozens of correct implementation decisions.
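To make “derives” concrete: the state-machine spec line above reads almost directly as a transition function. Here’s a minimal Lua sketch of what that one constraint expands into; all the range thresholds are my own illustrative numbers, not DomRTS source.

```lua
-- Minimal enemy state machine derived from the spec line
-- "Enemies follow state machine: Idle -> Patrol -> Aggro -> Attack -> Disengage".
-- Thresholds are illustrative, not DomRTS source.

AGGRO_RANGE  = 6   -- tiles: start chasing inside this distance
ATTACK_RANGE = 1   -- tiles: start attacking inside this distance
LEASH_RANGE  = 10  -- tiles: give up and disengage beyond this

function next_state(state, dist_to_player)
  if state == "Idle" or state == "Patrol" then
    if dist_to_player <= AGGRO_RANGE then return "Aggro" end
    return "Patrol"
  elseif state == "Aggro" then
    if dist_to_player <= ATTACK_RANGE then return "Attack" end
    if dist_to_player > LEASH_RANGE then return "Disengage" end
    return "Aggro"
  elseif state == "Attack" then
    if dist_to_player > ATTACK_RANGE then return "Aggro" end
    return "Attack"
  elseif state == "Disengage" then
    -- once the threat is shaken, settle back down
    return "Idle"
  end
  return state
end
```

One spec sentence fixed the state names, the transition topology, and the fact that ranges (not timers) drive the transitions; the LLM only had to fill in numbers.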
I asked Claude to create a “teapot demo.”
If you know computer graphics, you know what I was really asking for—the Utah teapot, the “hello world” of 3D rendering since 1975. Claude understood the reference and generated working implementations in React and Love2D simultaneously. Two completely different stacks, same cultural shorthand.
That session didn’t stop at a spinning teapot. It went: basic teapot → flight controls → environment with floor, sky, mountains → lane-based movement → refactor for cleanliness. A full design session, not a code generation task. (Here’s that session.)
That’s when I realized the LLM wasn’t just a code generator. It was a peer who knew the canon. A lifetime of ideas—games I wanted to build as a teenager, systems I’d designed in my head during my networking career, architectural patterns I’d seen work and fail—suddenly had a path out.
The breakthrough came with Dreamreach. I’d noticed that without guardrails, the LLM would eagerly plan to rewrite code it couldn’t see—code that was already written and working. I needed to give it more structure as context so it could “see the walls” and know where not to go.
I took a 6-7k line spec and got a full working 3D game with lighting and shaders in four prompts. One big prompt generated a solid build. Three more cleaned up common implementation bugs.
That was the moment I felt free. Or thought I did—I’m still refining this. But the idea is clear: express features as iterations on the spec, and the LLM is consistent enough now to deliver. It didn’t feel like this even six months ago.
Here’s what surprised me: Love2D is a minimal framework. No visual editor. No asset pipeline. No entity system. Just Lua and a render loop.
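For readers who haven’t seen it: the entire Love2D contract really is a handful of callbacks you fill in while the engine runs the loop. A generic hello-world skeleton (not DomRTS code; the `love = love or {}` guard is just so the file also loads outside the LÖVE runtime):

```lua
-- The whole Love2D "framework": you define callbacks, the engine runs the loop.
-- Generic skeleton, not DomRTS code.
love = love or {}  -- real table exists inside the LÖVE runtime; stub elsewhere

speed = 120  -- pixels per second

function love.load()
  x = 0
end

function love.update(dt)  -- called every frame with the time delta in seconds
  x = x + speed * dt
end

function love.draw()      -- called every frame after update
  love.graphics.rectangle("fill", x, 100, 32, 32)
end
```

No editor, no scene graph, no inspector. Everything else—entities, scenes, asset loading—you (or the LLM) build from Lua tables.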
Unity has all of that. Years of tooling. Massive ecosystem.
And yet.
| | Beef Arena (Unity) | DomRTS (Love2D) |
|---|---|---|
| Time | 2 years | 1 week |
| Lines | 18,000 | 30,000 |
| Commits | 166 | 80+ |
| Iteration style | 39 scrapped scenes | 6 benchmark retrospectives |
The LLM didn’t care about the stack. It cared about the specification.
The bottleneck was never the engine’s lack of features. The bottleneck was the bandwidth between my brain and the codebase. AI solved the bandwidth problem—but only because I’d already solved the knowledge problem.
“But is the code any good?”
Fair question. Here’s my answer: DomRTS has 6 retrospective documents that read like lab notes.
The code isn’t just generated. It’s measured, profiled, and deliberately optimized.
This only works because of the control group. Two years of manual implementation built the architectural intuition to know what “good” looks like. The LLM is a force multiplier. If your skills are zero, 100x zero is still zero.
The pathfinding bugs taught me this clearly. Thirty-five years of hobby and professional programming, several implementations of pathfinding and collision detection under my belt—I could constrain the AI. But I had to be specific. Sometimes I was essentially dictating what lines of code to write.
That’s when I realized: if I’m just telling it what to type, my abstraction isn’t working. The value isn’t in having the AI write code. It’s in having the AI derive correct code from good constraints.
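“Units use A* pathfinding on tile coordinates” is a constraint I could only write because I’d implemented the algorithm myself. For reference, here is a minimal grid A* in Lua—4-directional, Manhattan heuristic, linear-scan open list for brevity where a real game would use a binary heap. The grid format and names are my own illustration, not DomRTS source.

```lua
-- Minimal 4-directional A* on a tile grid (grid[y][x]: 0 = walkable, 1 = wall).
-- Linear-scan open list for brevity; a real game would use a binary heap.
-- Illustrative sketch, not DomRTS source.

local function key(x, y) return x .. "," .. y end

function find_path(grid, sx, sy, gx, gy)
  local h = function(x, y) return math.abs(x - gx) + math.abs(y - gy) end
  local open = { { x = sx, y = sy, g = 0, f = h(sx, sy) } }
  local came, gscore, closed = {}, { [key(sx, sy)] = 0 }, {}

  while #open > 0 do
    -- pop the node with the lowest f = g + heuristic
    local bi = 1
    for i = 2, #open do if open[i].f < open[bi].f then bi = i end end
    local cur = table.remove(open, bi)
    local ck = key(cur.x, cur.y)
    if cur.x == gx and cur.y == gy then
      -- walk parent links back to the start
      local path = { { cur.x, cur.y } }
      while came[ck] do
        local p = came[ck]
        table.insert(path, 1, { p.x, p.y })
        ck = key(p.x, p.y)
      end
      return path
    end
    closed[ck] = true
    for _, d in ipairs({ {1,0}, {-1,0}, {0,1}, {0,-1} }) do
      local nx, ny = cur.x + d[1], cur.y + d[2]
      local row = grid[ny]
      if row and row[nx] == 0 and not closed[key(nx, ny)] then
        local ng = cur.g + 1
        local nk = key(nx, ny)
        if gscore[nk] == nil or ng < gscore[nk] then
          gscore[nk] = ng
          came[nk] = { x = cur.x, y = cur.y }
          table.insert(open, { x = nx, y = ny, g = ng, f = ng + h(nx, ny) })
        end
      end
    end
  end
  return nil  -- no path exists
end
```

Knowing this shape cold is what let me spec the behavior in one line and then recognize, immediately, when the generated version got the closed-set or tie-breaking logic wrong.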
The 2 years weren’t wasted. They were the compiler.
Every hour learning pixel art was an hour learning to see what I was asking for. Every hour debugging Unity’s physics was an hour learning what questions to ask about collision. Every hour writing music was an hour understanding why game feel matters.
The sprint didn’t replace that work. It leveraged it.
If you’re starting this path today: learn data structures. Learn automata. Learn some discrete math. The fundamentals matter more now, not less—because they’re what let you constrain the AI correctly.
More development. More games. A prolific portfolio of architectural and design work. That’s the goal.
This article is part of a series on AI-assisted development. See also: Intent Density, 10 Days, 75,000 Lines, Spec-Driven Development.