On developing using an AI coding assistant
Experiments with a side project, using AI developer tools.
I've been experimenting with AI coding assistants on a side project recently. The specific project doesn't really matter - what's important is that I wanted to use it as a testbed for these AI developer tools that everyone's talking about. My journey with these tools has evolved significantly, and I thought I'd share what I've learned along the way.
Starting too big
My first instinct was to go all in. I used a couple of different tools to try to build a whole product structure - essentially asking the AI to scaffold a significant portion of my application. However, I didn't really enjoy the experience.
There was simply too much code generated at once. Reviewing it felt like reading someone else's code - which, in a sense, it was. When I tried to run it, it didn't work as I had expected or intended. Some of the tests failed - and testing is important to me. After struggling with it for a while, I abandoned this approach.
Scaling back
I decided to take smaller steps, though these were still fairly large chunks of functionality. I asked the tool (Claude Code in this case) to check in with me at each stage of development. This yielded better code, but the process didn't feel smooth: there was almost too much back and forth, and it took too long to get to a changeset ready to commit.
I asked Claude Code to act as a pair programmer and start with failing tests before moving on. But it felt less like a conversation and more like me constantly checking someone else's work. Making changes this way felt like it took too long. Objectively, it was probably still faster than writing everything from scratch, but the experience just felt cumbersome.
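To make that concrete, here's the shape of what I was asking for: a failing test written before any implementation. This is a hypothetical sketch in Python - the feature, the module path and the slugify function are my own illustration, not the actual project:

```python
# test_slugify.py - written first, before any implementation exists.
# Everything here is hypothetical: the feature, the module path, and the
# slugify function are illustrations, not the actual side project.
from myproject.text import slugify  # fails until the module is written


def test_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"


def test_strips_punctuation():
    assert slugify("Hello, World!") == "hello-world"
```

Only once a test like this was red would I let it move on to the implementation.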
Finding the sweet spot
As I became more comfortable with the tools, I found myself increasingly willing to give the AI leeway to make changes. Claude Code currently asks whether you want to allow a given command just once, or every time for the rest of the session, and I let it do more and more as time went by. I settled into a rhythm:
Ask it to tell me its strategy
Let it implement a feature with tests
Read the changeset to understand the intent
Run tests, linting and code analysers
Refactor anything I felt needed it
Run a full build and commit the change (the check loop is sketched below)
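The testing, linting and build steps quickly became mechanical, so they are easy to script into a single gate. Here's a minimal sketch of that loop - it assumes a Python project checked with pytest and ruff and packaged with python -m build, which is my own illustration rather than the actual stack:

```python
# check.py - a small gate run over each AI-generated changeset before
# committing. The commands are hypothetical (a Python project using pytest,
# ruff and python -m build); swap in your own test runner, linter and build.
import subprocess
import sys

CHECKS = [
    ["pytest", "-q"],           # run the full test suite
    ["ruff", "check", "."],     # lint and static analysis
    ["python", "-m", "build"],  # full build (needs the 'build' package)
]


def main() -> int:
    for cmd in CHECKS:
        print(f"$ {' '.join(cmd)}")
        if subprocess.run(cmd).returncode != 0:
            return 1  # stop at the first failure; nothing gets committed
    print("All checks passed - review the diff, then commit.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Nothing clever is happening here - the point is simply that the AI's changes only reach a commit once the same checks I'd apply to my own code have passed.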
I started to feel more like some hybrid architect and product manager, stating desired outcomes in terms of experience and technical approach, then verifying and modifying - rather than doing all the coding myself. And as my application developed and gained more functionality, the changes I was making became smaller. This gave me more confidence in letting the AI work through problems - running its own tests and making further modifications before telling me it was finished.
Small changes win
I realised this approach mirrors how I've always preferred to work as a developer. I've always favoured lots of small changes over big ones. Whenever I reviewed a pull request that touched hundreds of files or thousands of lines of code, I would approach it with extreme caution. The changeset felt too big to fully understand everything that was happening.
Smaller, more discrete changes give you a better chance of understanding the impact, the intention and any potential issues. It seems I'm naturally developing the same pattern with AI coding assistants - asking for smaller, more focused changes, letting the AI work through them, and then carefully reviewing the outcome. This echoes ideas like 'trunk-based development' and 'small batch theory', applied to AI software development agents.
Lessons learned
This field is changing so fast, and it's been about 12 years since I had the word "engineer" in my job title - so take this with a pinch of salt, as a point-in-time experience.
It's a new paradigm - don't expect an AI pair to be the same as a human pair. In my experience it is more likely to give an answer than to have a conversation, so ask better questions.
Make discrete changes with well-defined outcomes - you're more likely to get code that you yourself might have written.
The combination improves speed - I'm getting code that I might have written myself, but much faster. I don't feel I need to block out a few hours to work on a side project; measurable progress can be made within 20 minutes.
Have you been using AI coding assistants in your work? I'd be curious to hear if your experience matches mine, or if you've found other approaches that work better for your projects.