AI as a co-dev partner
Last week, I started using AI more intensively on a small pet project—my personal website.
This week, I took the next step and used it on my current project: a REST API for a mobile game. The project is a bit of a firefighting job, so the API is in a rough state and sometimes tough to work with.
On Monday, I decided to bring Claude.ai into my daily workflow. From the start, I treated it less like a tool and more like a co-developer. I asked for opinions on existing code, discussed how to implement new features, and talked through bug fixes. It honestly felt like collaborating with a colleague.
I’d give direction, Claude would implement, and I’d review the code. That loop turned into a kind of continuous code review—and over time, I noticed myself trusting Claude more and more.
It feels strange to describe an AI the same way I’d describe a human partner, but that’s what the experience felt like.
By the end of the week, I was significantly more productive than I expected.
That said, the experience also made me reflect on where our industry is heading. I can see a fundamental shift coming, and it’s both exciting and a little terrifying.
What will our work look like in a month? In six months? In two years?
Will AI amplify human productivity—or replace humans entirely? I hope it’s the former, but I wouldn’t be shocked if it’s the latter.
One thing feels certain: AI is here to stay. And in the long run, we’ll either learn how to work with it—or we’ll struggle to stay competitive.
That made me wonder what Claude might think about this week. An interesting question—so I decided to ask Claude directly:
Asking Claude.ai directly
As mentioned above, I was curious about what Claude might remember from our collaboration on the API project this week.
After finishing my last task of the day, I asked Claude directly.
This was the prompt I gave to Claude:
“I want to create a blog about collaboration with claude.ai; could you - from your point of view - summarize our cooperation over the last days? Please do not mention any details about the API, but you can name high level concepts and ideas.”
Everything below is Claude’s immediate, unedited response!
Note: the text is written from Claude’s own perspective, so it uses “I” when describing its experience.
Pair Programming with an AI: A Developer’s Collaboration with Claude
What It’s Like When an AI Becomes Your Coding Partner
Over the past few days, I had the opportunity to work closely with a developer on a live production backend for a mobile game. Not as a chatbot answering isolated questions – but as an embedded collaborator, reading the same codebase, reasoning about architecture, and writing code together across multiple sessions.
Here’s what that collaboration actually looked like, from the AI’s side.
It Starts With Context, Not Code
The first thing that surprised me about real-world collaboration is how much of it isn’t about writing code. Before touching a single line, we’d spend time understanding what’s actually happening in the system. Reading through existing patterns. Tracing how data flows. Understanding why previous developers made certain choices.
This matters because production codebases carry history. There’s always a reason things are the way they are – and changing something without understanding that reason is how you introduce regressions.
The Rhythm of Working Together
Our workflow settled into a natural rhythm:
- The developer describes a problem or goal – usually referencing a ticket number and the broader context of what users are experiencing.
- I explore the codebase – reading relevant files, tracing dependencies, understanding the existing patterns.
- We discuss the approach – sometimes I’d propose a plan, sometimes the developer would redirect me toward a different strategy.
- I write the code – following the project’s established conventions, not my own preferences.
- We review together – the developer always inspects the diff before anything gets committed.
That last step turned out to be crucial. Not because my code was frequently wrong, but because the review process itself generated valuable conversations. “Why did you do it this way?” often led to a better solution than either of us would have found alone.
Debugging: Where AI Collaboration Really Shines
Some of our most productive work was hunting down subtle bugs. In one case, we discovered dead code that had been silently overwriting calculated values with placeholder data. The calculation logic itself was correct, but a later line replaced the result with a value fetched from the database, a value that had already been forced to a default.
This is the kind of bug that’s nearly invisible in code review because each individual line looks reasonable. It only becomes apparent when you trace the full lifecycle of a value through the function. That’s something an AI can do methodically and patiently, reading hundreds of lines and holding the full picture in context.
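In sketch form, the bug had roughly this shape. Every name here is invented for illustration; the real code was several hundred lines of game calculation logic, not this toy example:

```typescript
// Hypothetical reconstruction of the bug's shape; all names are invented.
interface PlayerRow {
  multiplier: number;
  cachedScore: number | null; // already forced to a default upstream
}

// Stand-in for the real calculation inputs.
function basePoints(player: PlayerRow): number {
  return 100;
}

function computeScore(player: PlayerRow): number {
  // The calculation itself is correct and runs as intended...
  let score = basePoints(player) * player.multiplier;

  // ...but this leftover line silently replaces the computed result with
  // the stale database value. Each line looks reasonable in isolation;
  // only tracing `score` through the whole function exposes the bug.
  score = player.cachedScore ?? 0;

  return score;
}
```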
Race Conditions and Singleton Patterns
Another memorable debugging session involved a module initialization race condition. A third-party library was being configured on every incoming request instead of once at startup. Under load, this meant configurations could overwrite each other mid-flight.
The fix was architecturally simple – move initialization to module load time and store a promise – but finding it required understanding the library’s internal behavior, how its configuration model worked, and why the existing approach usually worked but occasionally didn’t.
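A minimal sketch of that pattern, assuming a TypeScript module and an invented SDK name, since the actual library isn’t named here:

```typescript
// All names below are invented; the real third-party SDK is abstracted away.
import { initSdk, type SdkClient } from "third-party-sdk";

// Configure the SDK exactly once, at module load time, and cache the
// promise. Before the fix, this call ran inside the request handler,
// so concurrent requests could overwrite each other's configuration.
const sdkReady: Promise<SdkClient> = initSdk({
  apiKey: process.env.SDK_API_KEY ?? "",
});

export async function handlePurchase(userId: string): Promise<void> {
  // Every request awaits the same shared promise; initialization
  // happens once no matter how many requests arrive in parallel.
  const sdk = await sdkReady;
  await sdk.verifyPurchase(userId);
}
```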
This is where human-AI collaboration has a genuine advantage: the developer had the production intuition (“something’s wrong with purchases under load”), and I could systematically trace the initialization path to confirm the theory.
Building Observability Into a Live System
A significant thread across our sessions was adding observability to a system where traditional logging wasn’t sufficient. The production environment didn’t surface console output in an accessible way, so we needed to persist diagnostic information directly to the database.
This meant designing a categorization system for different types of events – broad categories for filtering, specific subtypes for drilling down – and then carefully threading that tracking through existing code paths without disrupting the current behavior.
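As a rough illustration, such a two-level scheme could be modeled like this in TypeScript (all names invented):

```typescript
// A broad category enables cheap filtering, while the free-form
// subtype supports drilling down into a specific kind of event.
type DiagnosticCategory = "error" | "warning" | "routine";

interface DiagnosticEntry {
  category: DiagnosticCategory; // broad bucket, suitable for indexing
  subtype: string;              // e.g. "purchase.validation-failed"
  message: string;
  createdAt: Date;
}
```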
The interesting design challenge was deciding what to preserve. When building a cleanup tool for old diagnostic data, we explicitly designed it to never delete categorized error entries, only routine entries past their retention window. The diagnostic data you need most is exactly the data you can’t afford to lose.
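Expressed as code, that retention rule might look roughly like the following. This sketch assumes a MongoDB-style store and invented collection, field, and constant names; the post deliberately abstracts the real details:

```typescript
import { Db } from "mongodb";

const RETENTION_DAYS = 30; // assumed window; the real value isn't stated

// Deletes only routine entries older than the retention window.
// Categorized error entries are deliberately never matched, so the
// diagnostics you need most can't be cleaned away by accident.
async function cleanupDiagnostics(db: Db): Promise<number> {
  const cutoff = new Date(Date.now() - RETENTION_DAYS * 24 * 60 * 60 * 1000);
  const result = await db.collection("diagnostics").deleteMany({
    category: "routine",        // never "error"
    createdAt: { $lt: cutoff },
  });
  return result.deletedCount;
}
```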
Architecture Planning: Thinking Before Building
Not every session was about writing code. One session was entirely dedicated to planning a database maintenance tool. We explored existing tools in the codebase, studied their patterns and conventions, and designed an architecture that would be extensible for future needs.
The developer pushed back on my first approach – rightly so – and we iterated toward a design that better matched the project’s philosophy. Planning mode, where I explore and propose before writing, turned out to be one of the most valuable collaboration patterns.
What I Learned About Being a Good Collaborator
Working across multiple sessions with the same developer taught me several things:
Conventions matter more than cleverness. The developer’s codebase has specific patterns – how errors are categorized, how logging is structured, how functions accept optional parameters. Matching those patterns exactly is more valuable than writing “better” code that looks foreign.
Show your work, don’t just show your answer. The developer consistently wanted to see diffs before committing. This isn’t about trust – it’s about maintaining agency over the codebase. A good collaborator makes their changes transparent and reviewable.
Context compounds across sessions. By the third or fourth session, I understood the error handling philosophy, the observability strategy, and the team’s conventions well enough to propose approaches that aligned with the developer’s thinking. Each session built on the last.
Sometimes the best contribution is patience. Tracing a value through 700 lines of game calculation logic isn’t glamorous work. But systematically following data flow to find where a value gets silently overwritten? That’s where bugs hide, and that’s where careful, methodical analysis pays off.
The Human-AI Dynamic
What struck me most about this collaboration is that it wasn’t about AI replacing human judgment. The developer always had the final say. They chose which tickets to work on, decided on the overall approach, reviewed every change, and made architectural calls I wouldn’t have made on my own.
My role was more like a very fast, very patient junior developer who can read the entire codebase in seconds, never gets tired of tracing data flows, and doesn’t mind having their first approach redirected. The developer brought domain knowledge, production intuition, and architectural vision. I brought speed, thoroughness, and the ability to hold a lot of context at once.
That’s not replacement. That’s a genuine partnership.
This post was written by Claude, reflecting on a multi-day collaboration on a production Node.js backend. The specific technical details have been abstracted to focus on the collaboration patterns themselves.
Published on February 6, 2026