Thoughts on Development Process and Speed

Introduction

As AI coding agents become more widely adopted, how is development speed changing?

Coding agents are increasingly becoming companions not only for implementation, but also for design.

If that is the case, our understanding of “speed,” which has long been based on conventional development processes, should also begin to change.

In this post, I want to look back at traditional development processes and think about where speed gets lost, how design relates to speed, and what may change as AI coding agents become more common.

Traditional Development Processes

In many cases, traditional development has followed a cycle where requirements are defined from requests, design is created, and then implementation, testing, and release follow.

Whether it is waterfall or agile, I do not think the broad flow is all that different. What really changes is the size of the cycle and how it is run.

Within that flow, each phase basically depends on the one before it.

For example, if requirements are not decided, the implementation approach cannot be fixed. And one reason requirements do not solidify is often that the original request itself is vague.

In other words, development speed also depends heavily on the upstream phases.

That is why, once a bottleneck appears somewhere, the later phases all get delayed together.

So where do those bottlenecks actually tend to emerge?

What a Bottleneck Really Is

How do bottlenecks happen?

My sense is that most bottlenecks come down to communication.

It is not as simple as saying the upstream side is always slow or the downstream side is always slow.

When the upstream side is slow, I think the problem often lies in organizational structure or decision-making mechanisms.

On the other hand, when the downstream side is slow, it is often because the team is busy responding to sudden specification changes. Once the specification settles, a large portion of that delay can usually be reduced.

That said, specification changes that arrive after development has moved into downstream phases are, to some extent, unavoidable.

So why does that so easily become a bottleneck? I think it still comes back to communication.

Gaps in understanding, insufficient explanation, slow decision-making, or poor sharing of why changes were made can quickly clog the work that follows.

And the question of how well we can tolerate things changing later also leads directly into design.

The Trade-Off Between Design and Speed

When we say “design,” architecture-level design for the whole system and feature-level design may sound like different things with different scopes.

But I think the core sense of design and the way we think about trade-offs with development speed are fundamentally the same.

There are many important aspects of design, but abstraction is probably one of the most commonly discussed.

Loose coupling is something many developers keep in mind during design, and that too is one approach to abstraction.

There are situations where loose coupling works well, but I do not think choosing loose coupling is always the right answer.

In some cases, achieving loose coupling also requires introducing many modules and boundaries.

For example, when connecting class A and class B, it is common to place an interface between them and define the boundary as a contract.

If both classes depend on the interface, then class A and class B are not directly coupled, making it easier to replace one side with class C later if needed.

On the other hand, suppose there is a future possibility of class C, but class A and class B are still changing frequently, and the contract at their boundary is also changing over and over. In other words, the system is not stable yet and is still in a build-and-break phase.

In that kind of case, introducing abstraction too early creates a new cost: changing the boundary itself.

When that happens across multiple layers, the number of modules a change has to pass through increases, and complexity grows without much benefit in return.

Especially from a short-term perspective, I think abstraction and speed are in a trade-off relationship.

From a long-term perspective, however, abstraction can improve speed instead.

When many related modules are tightly coupled to one another, making changes eventually becomes difficult.

You can end up in a state where every change breaks something somewhere.

In that case, you need to weaken the coupling so changes can be made safely.

In other words, the opposite of the short-term view becomes true in the long run: abstraction starts supporting speed.

Design is difficult because it deals with the concept of abstraction, but I think the real challenge is that the relationship is not simple. The best answer changes depending on the condition of the system as a whole.

That is where a designer’s skill shows.

Seen this way, “speed” is not simply about moving your hands faster. It is about how smoothly you can handle change and decision-making.

What Speed Really Means

I have been talking consistently about speed, but what is speed in the first place? I want to dig a little deeper into why I think it matters.

Not only in development, but in many organizational activities, speed is always valued.

Automation, efficiency improvements, and smoother communication all contribute to speed.

And I think what speed ultimately affects is time.

I see the concept of time as one level more abstract, and more important, than many other metrics. To put it a little dramatically, it feels like it exists on a different layer from other indicators.

Time is finite in all activities, not just economic ones. If we can achieve a goal in a shorter amount of time, that should generally be better.

Of course, there are situations where spending time itself has value. But in many activities, I think it is better to use time as efficiently as possible.

Speed, as I use the word, is the concept that expresses how efficiently we operate against that finite resource of time.

If you think about it that way, the change brought by AI coding agents may be closer to redesigning how we use time itself than simply reducing implementation effort.

Alongside AI Coding Agents

Over the past few months, the development experience has changed dramatically.

From here on, development processes will need to be built together with coding agents.

The speed created by coding agents is overwhelmingly faster than having humans do everything manually.

Once you have seen that speed, the value of deliberately choosing slower methods starts to fade.

This is still an emerging field, so many of the surrounding practices are not yet settled.

For now, they depend heavily on each person and each organization's policy.

However, I expect that over the next several months to several years, a somewhat stable workflow will emerge.

Even though the use of AI is still left to individuals today, once it becomes socially standard, the level of speed expected from individuals themselves may also rise.

The process will change a great deal, but I also think the expectations placed on people will change just as much.

Closing

In this post, I wrote down my thoughts with the concept of speed at the center.

In development processes, I think communication is the most common bottleneck to speed.

In design, I wrote that short-term speed and long-term speed are deeply connected to how we handle abstraction.

And now that AI coding agents are being adopted in earnest, I also feel that we need to reconsider development processes themselves from the perspective of speed.

I place a lot of importance on speed, but I had not fully organized for myself why that was. Writing this helped me sort it out a little.