
How "Don’t repeat yourself" can mess up your AI-generated code

Captain's log, stardate d122.y42/AB

Xavier Redó
Founder & CTO

One of the most surprising patterns we've seen while working with coding agents like Claude Code is how aggressively they try to eliminate duplication. In many cases, that’s the intended behavior. But sometimes the duplication is only coincidental, and abstracting it away causes problems over the medium term.

Coincidental duplication occurs when two features look similar enough that the AI agent is tempted to extract shared behavior into common classes or services, even though, conceptually, they represent different business concerns.

The abstraction might seem like the right choice: the code becomes shorter, the duplication disappears, and there are fewer classes involved. But that "shared" logic carries edge cases from both domains, which accumulate as new business logic is added, and eventually the abstraction turns into a fragile coupling point.
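A minimal sketch of what this looks like. All names here are hypothetical, invented for illustration: suppose a member discount and a referral credit both happen to compute "10% off" today, so a coding agent merges them into one shared service.

```ruby
# Hypothetical: two distinct business concepts that merely look alike today,
# collapsed into one shared service because the math happens to coincide.
class PercentageReduction
  def initialize(rate)
    @rate = rate
  end

  # Used by BOTH checkout pricing and referral rewards.
  def apply(amount_cents)
    (amount_cents * (1 - @rate)).round
  end
end

checkout_price  = PercentageReduction.new(0.10).apply(1_000) # member discount
referral_payout = PercentageReduction.new(0.10).apply(1_000) # referral credit
```

The trouble starts when the two concepts diverge: discounts may gain a per-order cap, referral credits a monthly cap, one may round in the customer's favor and the other not. Every divergence then lands as a conditional inside the "shared" class.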

The costs of over-abstraction

When two logical concepts are forced into the same abstraction, every new use case increases the cognitive load of that shared component.

A coding agent extending the system later on must now understand:

  • the generic behavior,
  • the edge cases of Feature A,
  • the edge cases of Feature B,
  • and how all of them interact together.

That context is already difficult for humans to preserve over time. For AI agents working across multiple sessions, with today's still-primitive memory management, it is even harder to retain.

What we’ve observed is that Claude Code tends to keep extending the generic abstraction instead of reconsidering whether the abstraction itself was a mistake in the first place. The result is often a growing accumulation of conditionals, exceptions, and implicit assumptions.
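A hedged sketch of what that accumulation can look like (the class and business rules below are invented, not from any real codebase): imagine a reduction service that was extracted because a member discount and a referral credit both computed 10% off. As each concept grows its own rules, the shared class sprouts flags and kind checks instead of being reconsidered.

```ruby
# Hypothetical: a "shared" service after a few rounds of business changes.
# Each new rule arrived as a flag or a kind check instead of a redesign.
class PercentageReduction
  def initialize(rate, kind:)
    @rate = rate
    @kind = kind # :discount or :referral_credit -- already a smell
  end

  def apply(amount_cents, order_total: nil, monthly_used: nil)
    reduction = (amount_cents * @rate).round

    # Per-order cap, but only for discounts.
    if @kind == :discount && order_total && order_total > 50_000
      reduction = [reduction, 5_000].min
    end

    # Monthly cap, but only for referral credits.
    if @kind == :referral_credit && monthly_used && monthly_used >= 2_000
      reduction = 0
    end

    amount_cents - reduction
  end
end
```

A coding agent extending either feature now has to reason about both sets of rules at once, and a change intended for one concept can silently alter the other.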

While a good automated test suite can catch many of these regressions, relying on having a test for every single edge case remains fragile. It's much better to address the problem with a sound architecture.

Readability still matters in the age of AI

There’s a growing narrative that developers will spend less time reading codebases in the future. We agree with this. But readability still matters.

Based on our experience, when a codebase is easy for humans to understand, it’s usually a sign that the architecture reflects clear domain boundaries and well-separated responsibilities. Those same characteristics also make the codebase easier for AI agents to evolve safely.

The exact same problems a human developer faces when extending a poorly abstracted class are the problems a coding agent will face later when trying to modify the code it generated itself.

Ruby & Rails and the culture of DRY

We’ve observed this pattern especially in Ruby on Rails applications. For years, the Rails community has strongly promoted the "Don’t Repeat Yourself" philosophy. As a consequence, many developers, and now the coding agents trained on those codebases, instinctively try to remove any duplicated structure they encounter.

We suspect this issue also exists in TypeScript and Python projects, since coding agents apply similar heuristics regardless of language, but we don't have measurable data yet.

What we’re doing internally

At MarsBased, we’re currently defining rules and conventions for AI-augmented development. We are adding rules that explicitly instruct coding agents to be more cautious around coincidental duplication and premature abstractions, and to ask more questions when they face those situations during planning sessions.

At the same time, we don’t want to enforce a rigid rule against reuse. AI agents tend to take instructions literally, and there are many cases where abstraction is genuinely needed and remains the right choice.

Our current approach is to make sure our engineers are highly aware of this pattern and can identify when duplication is actually protecting domain clarity rather than harming maintainability.
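To make that trade-off concrete, here is a hedged counter-sketch (again with invented names): the same two concepts, a member discount and a referral credit that both compute 10% off, kept as separate classes. A few lines are duplicated, but each class carries only its own rules, so either one can evolve without touching the other.

```ruby
# Hypothetical: the same behavior kept as two small, separate concepts.
class MemberDiscount
  RATE = 0.10
  MAX_REDUCTION_CENTS = 5_000 # per-order cap: a discount-only rule

  def apply(amount_cents)
    reduction = [(amount_cents * RATE).round, MAX_REDUCTION_CENTS].min
    amount_cents - reduction
  end
end

class ReferralCredit
  RATE = 0.10
  MONTHLY_CAP_CENTS = 2_000 # a referral-only rule

  def initialize(monthly_used_cents)
    @monthly_used = monthly_used_cents
  end

  def apply(amount_cents)
    return amount_cents if @monthly_used >= MONTHLY_CAP_CENTS
    amount_cents - (amount_cents * RATE).round
  end
end
```

The duplication here is a feature, not a bug: it documents that these are different business concerns that merely coincide today.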
