
Tank Tracts: What I Learned Vibe Coding with Lovable


The term "vibe coding" has been tossed around for a while now. It basically means using AI to create an app without needing to know how to code; that is, coding by "feel" rather than by skill. Depending on which article you read, vibe coding will allow anyone to create their own perfectly serviceable apps.

But will it though?

Being able to write code without knowing how to write code is one of the chief threats AI poses to my line of work. What's the point of having decades of experience creating software if AI can do it on its own in just a few minutes?

I decided it was time to see what all the fuss has been about.

It is common for coders, when trying out a new language or framework, to start with a simple project. It seems that most coders have created their own todo app or project management app at some point. I decided to try out Lovable with something I've never tried creating before: a multiplayer game.

When I was in high school there was a networked game my friends and I would sometimes play on the computers in the lab (because we didn't all have tiny computers in our pockets back then). The game was called Bolo, and its unfriendly mechanics, terrain manipulation, and friendly fire on teammates were the cause of much enjoyment and frustration back in the day. I figured if I was going to try letting AI do all the work, I might as well have some fun with it.

So I used Lovable to build Tank Tracts, my interpretation of the game I played all those years ago.

What Lovable is

For those unfamiliar, Lovable is a browser-based platform that generates full-stack web applications from natural language descriptions. You type what you want, and it builds it all together: frontend, backend, the works. Under the hood, it's powered by AI models (including Claude) that interpret your prompts and produce working code. You can see a live preview of your app as it's being built, iterate by describing changes, and deploy with a click. No local development environment required.

What worked surprisingly well

The initial scaffolding was genuinely impressive. I described the basic concept (a top-down tank game where multiple players connect via browser, move around a map, and shoot at each other), and Lovable produced a playable prototype faster than I could have set up a project from scratch. The canvas rendering, the basic game loop, the player input handling: it set all of this up on the first attempt.
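Lovable's actual generated code isn't reproduced here, but the heart of that kind of scaffold is a dt-scaled update step driven each frame. The sketch below is my own simplified version with hypothetical names and constants, not Lovable's output:

```typescript
// A tank's position and facing in the top-down world.
interface TankState {
  x: number;
  y: number;
  angle: number; // radians
}

// What the player is currently pressing.
interface PlayerInput {
  forward: boolean; // accelerate in the facing direction
  turn: number;     // -1 (left), 0, or 1 (right)
}

const SPEED = 100;     // pixels per second (hypothetical tuning value)
const TURN_RATE = 2.5; // radians per second

// Pure update: advances one tank by dt seconds of simulated time.
// In the browser, a requestAnimationFrame loop would call this every
// frame with the elapsed time, then hand the state to a canvas renderer.
function updateTank(tank: TankState, input: PlayerInput, dt: number): TankState {
  const angle = tank.angle + input.turn * TURN_RATE * dt;
  const dist = input.forward ? SPEED * dt : 0;
  return {
    x: tank.x + Math.cos(angle) * dist,
    y: tank.y + Math.sin(angle) * dist,
    angle,
  };
}
```

Keeping the update pure like this (state in, new state out) is also what makes the simulation easy to test and, later, to reconcile across the network.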

Iterating on visual elements was fast and intuitive. "Make the tanks bigger." "Add a scoreboard in the top right." "Change the background to a dark gray." These kinds of prompts worked exactly as you'd hope. For UI and presentation work, vibe coding feels almost magical. Lovable was even able to generate sounds for me!

The platform also handled the WebSocket setup for multiplayer communication with less friction than I expected. Describing the basic flow of "when a player moves, broadcast their position to all other connected players" produced a reasonable implementation that actually worked. It wasn't production-grade, but it was functional, and it got there in minutes rather than hours.

What surprised me, and honestly concerns me a little

The speed at which Lovable generates working code is remarkable, and that's exactly what concerns me. It's easy to look at a working prototype and assume the underlying code is solid. In my experience, it often wasn't. The AI would produce code that worked but was structured in ways that would become problematic at scale or over time. Tightly coupled components, duplicated logic, and inconsistent patterns across different parts of the codebase were some of the issues I found once I inspected under the hood.

For someone without engineering experience, these issues would be invisible. The app runs, the features work, everything looks fine until you need to change something fundamental, and the whole thing resists modification because there's no coherent architecture underneath. This is the part that gives me pause about the "anyone can build an app" narrative. You can build it, yes. But can you maintain it? Can you debug it when something breaks in a way the AI didn't anticipate? That's a different question entirely.

I also noticed that Lovable would sometimes make changes I didn't ask for while implementing something I did ask for. A prompt to fix the scoring system might subtly alter the collision detection. Keeping track of what changed and why, something a developer does naturally through version control and code review, becomes harder when you're steering an AI that touches multiple files with each prompt.

Where it fell apart

The most telling moment came when I hit a bug in the multiplayer synchronization. Players were seeing inconsistent game state under certain timing conditions, which could make the game unplayable. I described the problem to Lovable and asked it to fix it. It tried. And tried. And tried. Over 20 attempts, the AI would make a change, introduce a new problem, try to fix that, break something else, and loop. Each attempt was confident. Each attempt was wrong in a different way.

Eventually, I had to connect the project to GitHub, check out the code locally, read through what the AI had actually written, and fix it myself. The bug turned out to be a straightforward race condition in how player state updates were being processed; the kind of thing an experienced developer would recognize and resolve relatively quickly once they could see the code. But the AI couldn't reason about the timing of its own code. It could generate code that handled each step correctly in isolation, but it couldn't see the emergent behavior that arose from those steps interacting under real-world conditions.
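To give a flavor of the class of fix involved (simplified and hypothetical; the actual bug in Tank Tracts was specific to its code), one common remedy for out-of-order state updates is to tag each update with a per-player sequence number so a late-arriving packet can't overwrite newer state:

```typescript
// An incoming update, stamped with a monotonically increasing
// per-player sequence number by the sender.
interface PlayerUpdate {
  playerId: string;
  seq: number;
  x: number;
  y: number;
}

interface PlayerState {
  seq: number;
  x: number;
  y: number;
}

// Apply an update only if it is newer than what we already have.
// Returns true if the state changed, false if the update was stale.
function applyUpdate(
  states: Map<string, PlayerState>,
  update: PlayerUpdate,
): boolean {
  const current = states.get(update.playerId);
  if (current && update.seq <= current.seq) {
    return false; // stale or duplicate: drop it silently
  }
  states.set(update.playerId, { seq: update.seq, x: update.x, y: update.y });
  return true;
}
```

Spotting that a guard like this is missing requires reasoning about what happens when messages interleave in unlucky orders, which is precisely the kind of emergent behavior the AI couldn't see.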

This was the most valuable lesson of the entire experiment.

What I took away

Vibe coding with Lovable is genuinely impressive for getting from zero to a working prototype. The speed is real. For UI work, simple features, and well-understood patterns, it's a legitimate productivity multiplier. I can absolutely see it being transformative for non-technical founders who need to validate an idea quickly, or for developers who want to skip the boilerplate and get to the interesting parts faster.

But the experiment also made clear that AI doesn't eliminate the need for experienced software engineers. Instead, it shifts where their expertise matters most. When things work, anyone can prompt their way to a result. When things break in subtle, systemic ways (e.g., race conditions, architectural debt, emergent bugs from the interaction of multiple features), you need someone who can read code, reason about systems, and understand why something is failing, not just that it's failing.

The multiple failed fix attempts weren't a failure of AI in general. They were a precise illustration of the boundary between generating code and understanding it. Lovable is an extraordinary tool. But a tool is only as good as the judgment guiding it. While AI is changing the world (at least the world of software engineering), I now have a better idea of its strengths and weaknesses. It's still remarkable what one can do with AI, but it certainly isn't something to be trusted, or feared.
