I used to think Copilot was impressive. Then I thought ChatGPT-plus-copy-paste was the move. Then Cursor arrived and I thought I'd found the ceiling of AI-assisted coding. I was wrong every time. Claude Code broke the ceiling entirely, and I'm not going back.
This isn't a product review. I don't work for Anthropic. This is a working log of how my development workflow changed when I switched from chat-based AI coding to a terminal-native AI agent that lives where I already work.
The Before: Death by Context Switching
My old workflow was some variation of this: Write code in the editor. Hit a problem. Switch to a browser tab with ChatGPT or Claude. Paste in the relevant code. Explain the context. Wait for a response. Read the response. Copy the code back into the editor. Realize the AI missed something because I didn't include enough context. Go back to the chat. Paste more code. Repeat.
The actual coding took 30% of the time. The context-switching tax took 70%. Every time I left my editor to ask an AI a question, I lost the thread of what I was building. And the AI lost it too -- each conversation was stateless. It didn't know my project structure, my dependencies, my patterns, or my previous decisions.
Cursor improved this by putting AI inside the editor. But it still operated in a request-response mode. You'd highlight code, ask a question, get a diff. Better, but still fundamentally a single-turn interaction. The model couldn't explore the codebase. It couldn't run the tests. It couldn't check if the build passed. You were still the orchestrator.
The After: The Agent Does the Work
Claude Code operates differently. It's a terminal application. It has access to your filesystem. It can read files, write files, run commands, and observe outputs. It maintains context across a long session. And critically, it has an agent loop.
Here's what a typical session looks like now. I describe what I want to build. Claude Code:
- Reads the existing codebase to understand the architecture
- Creates a plan and breaks it into steps
- Implements each step -- writing actual files, not showing me diffs
- Runs the build to check for errors
- Runs the tests to check for regressions
- Fixes any issues it finds
- Reports back when it's done
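That loop can be sketched in a few lines of TypeScript. Everything here -- the tool interface, the retry policy, the function names -- is my illustration of the pattern, not Claude Code's actual internals:

```typescript
// A minimal sketch of an agent loop: implement, verify, fix, repeat.
type StepResult = { ok: boolean; errors: string[] };

// Stand-ins for the real capabilities: filesystem, shell, build, tests.
interface Tools {
  readCodebase(): string;
  implement(step: string): void;
  runBuild(): StepResult;
  runTests(): StepResult;
  fix(errors: string[]): void;
}

export function agentLoop(plan: string[], tools: Tools, maxRetries = 3): boolean {
  tools.readCodebase(); // understand the architecture before touching anything
  for (const step of plan) {
    tools.implement(step); // write actual files, not diffs
    // Verify after every step: build first, then tests; fix and retry on failure.
    for (let attempt = 0; attempt < maxRetries; attempt++) {
      const build = tools.runBuild();
      const tests = build.ok ? tools.runTests() : build;
      if (build.ok && tests.ok) break; // step verified, move on
      tools.fix([...build.errors, ...tests.errors]);
      if (attempt === maxRetries - 1) return false; // give up and report back
    }
  }
  return true; // all steps implemented and verified
}
```

The key difference from chat-based tools is that verification lives inside the loop: the model sees build and test output as feedback, not as something you relay to it.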
I'm not copying code between windows. I'm not explaining context. I'm not running builds manually. The agent handles the entire implementation cycle while I review the output.
Real Numbers from Real Projects
I keep rough time logs because I bill clients. Here are concrete comparisons:
Schiffli ERP System (textile manufacturing) -- Full-stack Next.js 16 application with PostgreSQL, multi-role auth, inventory management, job work tracking, invoice generation. Before Claude Code, a system like this would have been a 3-4 month build. With Claude Code, I shipped the MVP in 3 weeks. Not a prototype. A production system handling real business operations.
TaskBolt (project management tool) -- Kanban boards, real-time updates, nested tasks, team collaboration. I built and deployed it in 10 days. The previous version (before Claude Code) took 6 weeks and had more bugs.
The multiplier isn't 2x. It's closer to 5-8x for greenfield projects and 3-4x for work on existing codebases. And the quality is higher because the agent catches bugs during implementation that I would have found days later in testing.
The Hooks System: Teaching Claude Code Your Rules
One of the less obvious features that made the biggest difference is hooks. Claude Code lets you register pre- and post-tool hooks that run automatically. This turns implicit rules into enforced constraints.
Here's what I run:
// PostToolUse hooks (.claude/settings.json)
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "jq -r '.tool_input.file_path' | xargs -r npx prettier --write"
          },
          {
            "type": "command",
            "command": "npx tsc --noEmit"
          }
        ]
      }
    ]
  }
}
After every file edit, Prettier formats the code and TypeScript checks it. If the type check fails, Claude Code sees the error and fixes it immediately. No manual lint step. No "I'll fix the types later." The code is correct at every step.
I also have a pre-push hook that opens a review before any git push, a console.log detector that warns about debug statements, and a doc-blocker that prevents creating unnecessary markdown files (because the agent loves creating README files nobody asked for).
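A detector like that can be a small script the hook invokes. The sketch below assumes the stdin payload shape Claude Code's hooks documentation describes (a JSON object with `tool_input.file_path`) and that a non-zero exit surfaces stderr back to the agent; treat both as assumptions to check against the current docs:

```typescript
// check-console-logs.ts -- a hypothetical PostToolUse hook script.
import { readFileSync } from "node:fs";

// Return the 1-based line numbers that contain console.log calls.
export function findConsoleLogs(source: string): number[] {
  return source
    .split("\n")
    .map((line, i) => (/\bconsole\.log\s*\(/.test(line) ? i + 1 : -1))
    .filter((n) => n !== -1);
}

// Entry point when run as a hook: node check-console-logs.js < payload.json
export function main(): void {
  const payload = JSON.parse(readFileSync(0, "utf8")); // fd 0 = stdin
  const file: string | undefined = payload?.tool_input?.file_path;
  if (!file || !/\.tsx?$/.test(file)) return; // only check TypeScript files
  const hits = findConsoleLogs(readFileSync(file, "utf8"));
  if (hits.length > 0) {
    console.error(`console.log left in ${file} at line(s) ${hits.join(", ")}`);
    process.exit(2); // non-zero exit pushes the warning back to the agent
  }
}
```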
MCP Integration: The Force Multiplier
Claude Code's MCP support is what takes it from "good AI coding tool" to "autonomous development platform." With the 14 MCP servers I've connected, the agent can:
- Query my PostgreSQL database directly to understand the data model
- Search my email for relevant context about client requirements
- Look up current library documentation via Context7 (because training data goes stale)
- Run browser tests via Playwright to verify the UI works
- Search GitHub for real-world patterns when it's unsure about an approach
- Route complex questions to other AI models via PAL for a second opinion
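Wiring up a server is a small config entry. For example, a `.mcp.json` along these lines adds database and browser access -- the package names here are the ones I believe are current, but check each server's own docs before copying:

```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres", "postgresql://localhost/mydb"]
    },
    "playwright": {
      "command": "npx",
      "args": ["-y", "@playwright/mcp@latest"]
    }
  }
}
```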
The combination of the agent loop plus tool access creates something that's genuinely different from chat-based AI coding. The model doesn't just suggest code -- it implements, verifies, and iterates. It reads the codebase. It runs the tests. It checks the browser. It's doing development, not generating text that looks like code.
The Multi-Agent Patterns That Work
For complex features, I don't use a single Claude Code session. I orchestrate multiple agents with different roles:
// My standard agent roster
Planner → Creates implementation plan, identifies risks
TDD Guide → Writes tests first, implements to pass them
Reviewer → Reviews every change for bugs and style
Security → Scans for vulnerabilities, hardcoded secrets
Build Fix → Resolves compilation errors automatically
The planner runs first. Then the TDD guide implements while the reviewer and security agent run in parallel on each change. If the build breaks, the build-fix agent handles it. I step in for architectural decisions and final sign-off.
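The "reviewer and security in parallel" part is the piece worth sketching. This is an illustration of the orchestration shape, not a real Claude Code API -- the agent runners here are hypothetical stand-ins:

```typescript
// Illustrative orchestration: run two audit agents concurrently on one change.
type Finding = { agent: string; issues: string[] };
type Agent = (change: string) => Promise<Finding>;

export async function reviewChange(
  change: string,
  reviewer: Agent,
  security: Agent
): Promise<string[]> {
  // Both agents inspect the same change in parallel; neither blocks the other.
  const findings = await Promise.all([reviewer(change), security(change)]);
  // Flatten everything into a single punch list for the implementing agent.
  return findings.flatMap((f) => f.issues.map((issue) => `${f.agent}: ${issue}`));
}
```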
This is the equivalent of a senior developer with a QA engineer, a security auditor, and a code reviewer -- except they work at machine speed and never take PTO.
Tips for Getting the Most Out of Claude Code
After hundreds of sessions, here are the patterns that consistently produce better results:
- Start with reading, not writing. Begin every session by having the agent read the relevant parts of the codebase. "Read the auth module and understand how sessions work before making any changes." The agent writes better code when it understands the existing patterns.
- Use CLAUDE.md aggressively. This is the project-level instruction file that Claude Code reads at the start of every session. Put your architecture decisions, coding standards, and known gotchas here. I maintain detailed CLAUDE.md files for every project.
- Break large tasks into phases. Don't ask for "build me an ERP." Ask for "implement the inventory module with these specific endpoints." Scope matters. The agent performs dramatically better on well-scoped tasks.
- Never trust, always verify. The agent is fast and usually correct. But "usually" isn't "always." Run the tests. Check the browser. Read the code. The agent is your fastest team member, not an oracle.
- Invest in your MCP infrastructure. The more tools the agent has access to, the more it can do without asking you for help. My 170+ tools -- mostly from community and vendor MCP servers I configured and integrated -- represent months of incremental infrastructure work that compounds every day.
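For reference, a CLAUDE.md skeleton along the lines I keep in every project. The specific stack details below are illustrative placeholders, not a template you should copy verbatim:

```markdown
# CLAUDE.md

## Architecture
- Next.js App Router; mutations go through server actions, not ad-hoc API routes
- PostgreSQL; all schema changes go through migrations, never manual edits

## Coding standards
- TypeScript strict mode; no `any` without a comment explaining why
- Run `npx tsc --noEmit` before considering any task done

## Known gotchas
- The invoice module caches totals; invalidate the cache after any line-item edit
- Do not create new markdown docs unless explicitly asked
```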
The developers who will thrive aren't the ones who write the most code. They're the ones who direct AI agents most effectively. Coding is becoming a specification and review activity, not a typing activity.
What I Miss (And What I Don't)
I don't miss writing boilerplate. I don't miss debugging typos. I don't miss looking up API signatures. I don't miss writing the same CRUD endpoints for the thousandth time.
I do sometimes miss the meditative quality of writing code by hand. There's a flow state in manual coding that agent-directed development doesn't replicate. But I don't miss it enough to go back. The trade -- shipping 5x faster with higher quality -- isn't even close.
Claude Code changed my relationship with software. Building went from months to weeks, from weeks to days. The bottleneck shifted from "how fast can I type" to "how clearly can I think about what needs to be built." That's a better bottleneck to have.
If you're a developer still on the fence about terminal-based AI coding, just try it for a week. Not on a toy project -- on real work. The before/after will be obvious within the first day.