Wrestling with LLMs: The Parachute on the Dragster
12/22/2025
Previously, my output was fundamentally limited by my words per minute on a keyboard. In my free time, I wrote hobby code - it worked, it shipped, it got the job done. But the gap between hobby and production-grade code was too expensive to bridge on my own. Limited bandwidth meant choosing: features over tests, code over CI/CD, shipping over infrastructure. It wasn't laziness, just simple math - prioritize user value with the resources I had.
Then LLMs gave me a dragster engine. Suddenly I'm generating code at 10x speed. But without the parachute - tests, CI/CD, proper deployment - going this fast is a crash waiting to happen. Here's the paradox: LLMs are the dragster engine that gives you extreme power and speed. Production infrastructure is the parachute that lets you use that speed safely. And the twist? LLMs help build the parachute too. One dev can now build the complete system: engine AND parachute. I can finally go full throttle because I have both.
The Old Calculus: Hobby Code vs Production-Grade Code
I couldn't bridge the gap before - hobby code works, ships, gets the job done. Production-grade code has tests, CI/CD, proper deployment, monitoring - all the things that make code enterprise-ready. That gap was too expensive to bridge solo. I could write code OR write infrastructure, not both at the same pace.
What made my code "hobby code"? Comprehensive test coverage felt too time-consuming. Integration tests were too complex to set up. PR workflow when I'm the only developer felt like theater. Pre-commit hooks were another thing to maintain. CI/CD required significant setup time. Proper linting configuration sat far down the priority list. All the infrastructure that makes code production-ready got skipped.
These choices made sense. I understood the codebase - the context lived in my head. I remembered what I'd changed. I could catch my own patterns. Manual verification worked well enough. The trade-off favored shipping over infrastructure. Accepting "hobby code" was a rational choice given my constraints as a solo developer.
In my day job, tests, delivery, and project scoping are shared responsibilities. The team handles them together - some people are stronger at testing, others at infrastructure, others at scoping. We're all responsible, but we complement each other's weaknesses. Working solo meant playing all those roles myself, and I wasn't equally good at all of them. Writing code? Great. Setting up CI/CD? Passable. Comprehensive test coverage? I'd rather be shipping features. The production-grade infrastructure that a team builds collectively was too much for one person to do well alone.
The cost was real. Technical debt accumulated. Refactoring became scary without tests. I shipped production bugs I could have caught. But it was still the right choice given limited bandwidth and uneven skills across the full delivery lifecycle.
The times I DID try to build it all? Those projects fizzled out or died under the weight of infrastructure. Setting up CI/CD, linting, test frameworks, pre-commit hooks - the barrier became so high I lost interest in the actual goal. I'd chase perfection in infrastructure instead of shipping features. The overhead killed momentum. It wasn't just that the gap was expensive to bridge - attempting to bridge it could kill the project entirely.
The New Reality: LLMs Fill My Gaps
The fundamental shift isn't just that LLMs write code faster, though they do. It's that I can now complete workflows that were too expensive before. LLMs help where I'm weak or inexperienced. Stepping outside my focus area is no longer a bottleneck.
In a team, different people cover different strengths. Working solo with LLMs, the LLM covers the roles I'm weak at - it's like having teammates who are good at the parts I'm not. I don't need to be great at everything anymore. I just need to be good at directing and verifying.
Take tests. I'm backend-focused, so writing frontend tests always felt like a slog. Edge case thinking isn't my strength. Testing frameworks have syntax I'd have to look up constantly. Now LLMs handle all of that - generating test scenarios I wouldn't consider, writing the boilerplate I'd avoid. I verify they actually test the right things, but I don't have to craft them from scratch. Integration tests that were too complex before? LLMs scaffold them.
Same with CI/CD. I don't love bash scripting, GitHub Actions configuration always required reading docs, and Docker setups were confusing enough that I'd put them off. LLMs write the scripts, create the workflows, handle the configuration. I review and verify, but what took days of research now takes hours of testing. I don't have to be the expert - I just need to confirm it works.
All that infrastructure I'd skip - pre-commit hooks that seemed fiddly, linting configuration that felt overwhelming, ESLint rules I didn't know - LLMs set it up. I tweak to my preferences, but I'm not starting from scratch anymore.
The pattern is clear: LLMs don't just speed up what I'm good at. They make achievable what I'd avoid or do poorly. I can own the full delivery lifecycle, not just my specialty. I'm more confident because the safety infrastructure actually exists. This isn't about LLMs forcing discipline - it's about LLMs enabling practices that were too expensive. The parachute I couldn't justify building before is now built with LLM help and maintained by both of us.
What's Now Achievable: The Full Workflow
Tests went from "too time-consuming" to essential. Before LLMs, writing comprehensive tests competed with writing features. I'd write some tests, skip others, promise to add them later (rarely did). Now the LLM generates test scaffolding, edge cases I didn't think of, boilerplate I'd avoid writing. I verify they actually test the right things and pass for the right reasons. I can finally have both comprehensive tests and feature velocity.
Here's the catch: LLMs will delete tests to make them pass. I caught Claude multiple times removing a failing test instead of fixing the code. "All tests passing now!" it would declare, having eliminated the evidence of its failure. I learned to treat any test deletion as a red flag and watch coverage reports closely. LLMs help build the parachute, but I verify it's actually catching issues.
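One cheap guard against this (a minimal sketch - it assumes tests live under tests/ and use pytest-style def test_ names, so adjust the patterns to your stack):

```bash
#!/usr/bin/env bash
# Hypothetical CI guard: fail when a branch deletes test functions.
# Assumes tests live under tests/ with pytest-style `def test_` names.
set -euo pipefail

# Diff lines starting with a single '-' are deletions relative to main.
removed=$(git diff origin/main...HEAD -- tests/ | grep -c '^-.*def test_' || true)

if [ "$removed" -gt 0 ]; then
  echo "Removed $removed test function(s) - confirm this was intentional."
  exit 1
fi
```

It's crude - a rename trips it too - but a false alarm that forces me to look is exactly the point.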
PR workflow went from "solo theater" to actual review. Before LLMs, PRs when I'm the only developer felt like overhead - the ceremony without the benefit. I'd open a PR, write a description, review my own code, leave myself a comment like "Looks good to me! 👍 Nice work Dean, ship it!" and hit merge. Pure theater.
With LLMs, the PR workflow actually serves a purpose. It forces me to review what the LLM generated before it hits main. I'm checking if the LLM went sideways, verifying the code actually does what I asked. It isolates changes so I can revert LLM mistakes without destroying what's already working. Small PRs and meaningful commits become how I understand what the LLM did. It's not theater anymore - it's how I supervise an unreliable but productive auto-suggest.
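In practice the loop looks something like this (gh is the GitHub CLI; the branch name and commit message are just illustrative):

```bash
# Review loop for one LLM-generated change (names are illustrative).
git checkout -b llm/dns-retry
git add -p                            # stage hunk by hunk, reading every change
git commit -m "Add retry to DNS sync"
gh pr create --fill                   # small, focused PR - even as a solo dev
gh pr diff                            # one last read of exactly what lands on main
```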
Linting and formatting went from "bottom of the priority list" to automated. Before LLMs, I knew my code style. Linting felt like extra configuration for marginal value. But LLMs write inconsistent code - different quote styles in the same file, patterns that technically work but smell wrong. Shellcheck caught dozens of bash issues in dokku-dns that Claude confidently declared "working perfectly" - missing quotes, unsafe variable expansions, logic errors.
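The bugs were mundane but dangerous. A contrived example of the class shellcheck flags (not actual dokku-dns code):

```bash
# SC2086: unquoted expansion - word-splits on spaces and globs, and if
# BUILD_DIR is empty this silently becomes `rm -rf /cache`.
rm -rf $BUILD_DIR/cache

# Quoted and guarded: aborts with an error when BUILD_DIR is unset or empty.
rm -rf "${BUILD_DIR:?BUILD_DIR must be set}/cache"
```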
LLMs help configure the linters (I'm not an expert at all the rules, but a rules enthusiast), then those same linters catch LLM inconsistencies. Pre-commit hooks try to prevent LLMs from bypassing checks, but Claude used --no-verify to skip them, so I also verify in CI where it can't be bypassed.
CI/CD went from "significant setup time" to standard. Before LLMs, setting up CI/CD meant YAML, bash scripts, and configuring environments - hours of work before seeing any value. Now the LLM writes the CI/CD scripts and I verify they work. What took days of researching documentation now takes hours of reviewing and testing. The pipeline creates an accountability loop nobody can bypass - every commit runs every test. I'm not choosing between setup time and shipping anymore.
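Mine is a GitHub Actions workflow, but it boils down to one script run on every commit - something like this sketch (the entry points are hypothetical), which also closes the --no-verify loophole from the previous paragraph:

```bash
#!/usr/bin/env bash
# The checks CI runs on every commit (entry points are hypothetical).
set -euo pipefail

shellcheck scripts/*.sh        # static analysis on the generated bash
pre-commit run --all-files     # rerun the hooks that --no-verify skipped locally
./run-tests.sh                 # every test, every commit
```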
Real environment testing went from "manual click-through" to systematic. Before LLMs, I'd deploy and manually test the critical paths. Good enough for a solo dev. With LLMs creating deployment scripts, environment configs, and Docker setups - all things I'd avoid - I can systematically verify in real-ish conditions. LLMs fill my infrastructure gaps.
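"Systematic" doesn't mean elaborate - it's the old click-through, scripted, run after every deploy. A sketch with made-up endpoints:

```bash
#!/usr/bin/env bash
# Hypothetical post-deploy smoke test: the manual click-through, scripted.
set -euo pipefail

BASE_URL="${1:?usage: smoke.sh <base-url>}"

# --fail turns HTTP errors into nonzero exits, so any broken path halts the script.
curl --fail --silent "$BASE_URL/healthz" > /dev/null
curl --fail --silent "$BASE_URL/api/items" > /dev/null
echo "smoke test passed: $BASE_URL"
```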
Why The Parachute Matters More Now
The paradox is real: LLMs give you the dragster engine, but they also make the parachute essential. The faster you go, the more you need a way to stop.
Without the parachute, you're going 10x faster with no way to stop. Small issues go unnoticed at high velocity. The LLM will gladly build on a broken foundation at breakneck speed. Simple scripts balloon to thousands of lines. Any idea of value floats further away into the upper atmosphere, away from anything grounded. A crash becomes inevitable. Eventually you need at best a major rollback, at worst a rewrite.
With the parachute, you're safe at speed. Tests catch regressions when an LLM "improves" something - early braking. PRs force review before anything hits main - a checkpoint before full speed. Linting catches the inconsistencies LLMs create - stability control. CI/CD prevents bad commits from reaching production - an automatic safety system. Real environment testing catches the assumptions LLMs make - verifying a clean landing.
The math changed completely. Before: limited speed meant crashes were survivable, and a parachute was too expensive anyway, so skipping it was rational. Now: LLMs help build the parachute, so it's finally affordable - and extreme speed makes crashes catastrophic, so I need it desperately.
The system works because LLMs give you the engine (speed) and help build the parachute (safety at speed). Those same parachute systems catch the mistakes the engine enables. It's not ironic - it's how dragsters work. You can't use full throttle without the parachute.
Engine AND Parachute
The old trade-off was brutal: code OR tests, features OR infrastructure, speed OR safety. I couldn't afford both - I had limited time, so I had to choose.
The new reality is different. LLMs provide both. They help write code and tests. They fill gaps in features and infrastructure. They enable speed and safety. I can go full throttle because I have both halves of the system.
What I actually do now: I ask the LLM to write tests for the feature first (parachute), then the feature itself (engine). I review both, making sure the tests fail when the code is broken and pass when it works. Set up CI if needed. Review the workflow. Then ship at full speed with confidence.
The fundamental shift isn't "LLMs make me faster" - it's "LLMs let me build the complete system." Practices that were too expensive are now achievable. I can own both speed and safety. I'm more confident because I can actually stop.
Full Throttle
LLMs gave me dragster velocity, but also the ability to build the parachute. I'm not just going faster - I'm going faster safely. Weak at testing? LLMs scaffold the test infrastructure. Hate bash? They write the deployment scripts. Unfamiliar with CI/CD? They handle the configuration. I verify and maintain, but I don't have to be an expert at everything anymore.
A solo dev writing production-grade code at LLM speeds. Not choosing between speed and safety anymore. The parachute isn't a burden that slows me down - it's what lets me use full speed safely. Build it once with LLM help, then ship at speeds that were impossible before. That's the unlock.