When AI Promises Speed but Delivers Debugging Hell
Lessons learned from building Codescribble with a little help (and a lot of frustration) from an AI
I am launching a new app today: Codescribble. Codescribble is a simple shared text editor: two or more people can open the same file and work on it at the same time, like a stripped-down Google Doc. The focus is on getting you and your collaborators onto the same document as quickly as possible.
Feel free to give Codescribble a try. It's live now, and I'd love to hear your thoughts!
In this article, I want to introduce Codescribble and walk through my development process. As the title implies, I used LLMs extensively, and they ended up causing a significant amount of frustration as I struggled to unwind the mess they made. The title is a little punchy, but in practice the real problem was that I used an LLM alongside a technology I didn't understand, which led to frustration and a few more grey hairs.
Why Codescribble?
Codescribble isn't the first app in its class. I use Codeshare at work quite often for interviews, but I find the site unstable (there is nothing worse than it crashing mid-interview). I wanted something I would have more control over, something that would be available whenever I needed it. Codeshare's pro version is also quite expensive and the free version is ad-supported, so I wanted something cheaper. So while this isn't an innovative solution, it is something I actually wanted to exist, and I hadn't been able to find a suitable alternative.
I came into this with a few goals:
Build my own Codeshare
Do it as quickly as possible using LLMs
Create a cheap pro version
The Vision and Implementation
I decided to build the entire application in TypeScript, frontend and backend alike, with Claude 3.5 Sonnet and Cursor as my primary development partners.
I wrote surprisingly little code myself. Instead, I provided Claude with a detailed requirements file outlining my vision and let the AI handle the implementation details, though it took multiple rounds of prompting to get each little piece done.
The early development process was straightforward: query Claude, test the result, repeat. This loop worked well in my local environment, and Claude handled about 80% of the implementation effectively. The AI understood my requirements and translated them into functional code with minimal intervention.
I want to emphasize that I had a working prototype from Claude without writing practically a single line of code myself.
Poor Quality Code
The real challenges emerged during deployment. After a few hours of work, I had what I considered a feature-complete prototype and wanted to put it up on the internet. Doing so revealed significant oversights. I was quite underwhelmed by the number of basic mistakes and shortcomings left in the code. Examples:
Hardcoded localhost references scattered throughout the codebase
Multiple inconsistent methods of backend access, despite Claude having the current implementation in its context: some calls went through axios and others did not, with hardcoded localhost references mixed in
React contexts used for API calls in some places but not others; the inconsistency itself was the real problem
These sorts of errors were a little annoying, but straightforward to fix. A few of the problems required refactoring sections, but most of them were “annoying localhost assumptions”.
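The localhost bugs above are largely avoidable by routing every request through a single env-driven helper. Here is a minimal sketch of that pattern; the names (`apiBaseUrl`, `API_BASE_URL`, the `/api/documents` path) are hypothetical, not Codescribble's actual code:

```typescript
// Hypothetical helper: read the backend URL from the environment once,
// falling back to localhost only for local development. In a real app
// you would pass in process.env or import.meta.env.
export function apiBaseUrl(
  env: Record<string, string | undefined> = {}
): string {
  return env.API_BASE_URL ?? "http://localhost:3001";
}

// Every API call builds on the same helper, so deployment only has to
// set one variable instead of hunting down scattered localhost strings.
export function documentUrl(id: string, env?: Record<string, string | undefined>): string {
  return `${apiBaseUrl(env)}/api/documents/${encodeURIComponent(id)}`;
}
```

With this shape, the "hardcoded localhost" class of bug reduces to a single line with a deliberate fallback.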
Deployment: The Unexpected Hurdle
Things started going off the rails when I began working on automated deployment scripts. My attempt to deploy Codescribble came from a prompt that was essentially "build me a Docker deployment configuration". I figured this would be a simple task (putting aside the problems above), but it turned into a debacle.
Claude helpfully built me something, but it didn't work out of the box, which is fine. The following is a walkthrough of my struggles. As a reminder, my goal was always to do this as quickly as possible and rely on the LLM to fix these problems.
First, I didn't tell Claude that I wanted to build on one machine and deploy the Docker containers on another; I planned to run Codescribble on a small DigitalOcean droplet. Without thinking it through, I actually tried building on the droplet itself and crashed it when it ran out of RAM.
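The build-elsewhere workflow I should have asked for is the standard one: build and push images from the dev machine, then only pull and run on the droplet. A rough sketch, with a hypothetical registry name and folder layout:

```shell
# On the dev machine: build the images and push them to a registry.
docker build -t registry.example.com/codescribble-frontend:latest ./frontend
docker build -t registry.example.com/codescribble-backend:latest ./backend
docker push registry.example.com/codescribble-frontend:latest
docker push registry.example.com/codescribble-backend:latest

# On the droplet: pull and run only. No RAM-hungry build step,
# so a small droplet never has to compile anything.
docker pull registry.example.com/codescribble-frontend:latest
docker pull registry.example.com/codescribble-backend:latest
docker compose up -d
```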
From there, I hit problem after problem with environment variables. I got the containers running, but the frontend didn't have the URL for the backend, since it lived in an environment variable. I also couldn't migrate the database because the migration script couldn't access the env vars for the database.
I am not a Docker expert, and I had never really wrestled with how these environment variables work. I figured Claude would be able to sort it out.
I started reporting these problems with the migrations, thinking it would be an easy fix. In hindsight, I know the problem was that `node-pg-migrate`, which Claude had set me up with for migrations, needs `DATABASE_URL` to be set. I had assumed we were missing `PG_HOST` and friends, because that's what my code generally used for database access, so I framed the problem as being unable to access environment variables. In other words, I was asking Claude to solve the wrong problem.
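The mismatch is easy to see side by side: the app code read the individual `PG_*` variables, while the migration tooling wanted a single connection string. A hypothetical compose fragment (service names and credentials invented for illustration) showing both:

```yaml
services:
  backend:
    image: codescribble-backend:latest
    environment:
      # What the application code itself read:
      PG_HOST: db
      PG_USER: codescribble
      PG_PASSWORD: changeme
      PG_DATABASE: codescribble
      # The one variable node-pg-migrate actually needed:
      DATABASE_URL: postgres://codescribble:changeme@db:5432/codescribble
```

Both sets describe the same database; the containers were "missing env vars" only from the migration script's point of view.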
Here is where the rabbit hole started: I got frustrated and could not get Claude to solve my problems. In hindsight, it even said, "Hey, your migrations might be the problem?" and I said, "No, no, there's no way."
I ended up with a bunch of AI slop here, repeatedly starting new contexts and getting new files. I ended up with four Dockerfiles in total, a Dockerfile and a Dockerfile.prod in each folder, though only one of the two in each was actually used. These iterations were very painful and produced a lot of bad code, but I eventually got it to work.
How did I get it to work? As you might expect, I got fed up and started fixing it myself, digging into the slop (my frontend Dockerfile deployed a bare nginx without even copying in my files!).
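For that nginx-only Dockerfile, the usual fix is the standard multi-stage pattern: build the static assets in one stage, then copy them into the nginx image. A sketch with hypothetical paths (`/app/dist`, a Vite-style `npm run build`), not my actual file:

```dockerfile
# Stage 1: build the frontend's static assets.
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: serve with nginx. This COPY is the step my generated
# Dockerfile was missing -- without it, nginx serves its default page.
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
```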
By this point, I will admit that I was “on tilt”. My partner told me I should take a break, but I was so deep in and felt so close that I didn’t want to. It wasn’t an effective use of my time or emotions.
After maybe an hour or two, I ended up back where I was before this all started, and my migrations still didn't work. At that point I dug into how the migrations worked, and since it wasn't obvious, I went to the docs and realized that DATABASE_URL was missing.
Time and Effort Reality Check
The contrast in development time was striking:
Initial server build: ~3-4 hours
Deployment troubleshooting: Many more hours than I'd care to admit
The worst part is that, in my head at least, each step I took seemed to make sense. In hindsight, it was an absolute waste of time and was taking me further and further from my goal. I've observed similar situations in professional settings, where focusing too closely on individual steps can obscure the bigger picture.
Key Lessons
Embrace Incrementalism, but Keep Perspective
While breaking down problems is valuable, maintain awareness of the overall goal
Don't let small successes blind you to larger inefficiencies
Combat Tunnel Vision
The mantra "I'm almost there, Claude will get me there" led to hours of unproductive work
Taking breaks and approaching problems with fresh eyes is crucial
LLMs are useless if you don’t understand the context
AI can be worse than useless when you don't understand the underlying technologies
I needed to learn Docker properly; once I did, deployment issues became manageable
It was giving me mostly correct answers, but I couldn’t solve the remaining bit myself
AI solutions need verification and understanding, not just blind acceptance
Conclusion
I am glad I went through this process. I got to learn Docker better, and I got a cool, useful app out of it. I also got a better story than I expected about working with LLMs. It is a good reminder not to get lost in the weeds, and of how easy it is to fall into a chain of terrible incremental actions.
Check out Codescribble and let me know what you think. Feedback is always appreciated. You can follow me on X as well here: @impossibilium