From Autocompletes to Intelligent Coding: The AI Revolution?

It is hard to believe that it has been exactly a year since I wrote a piece on whether GitHub Copilot was worth £10 per month.

Early 2025 doesn’t feel that far away. At the time, most AI coding tools were essentially advanced autocomplete. Very good autocomplete, but still operating largely at file level. They were helpful but rarely perfect.

Twelve months later, the landscape has shifted almost beyond recognition.

Today in 2026, tools from OpenAI, Anthropic and Google operate at a completely different level. Context windows stretch into the hundreds of thousands of tokens. Large codebases are intelligently compressed. Agent modes can traverse an entire project, create and refactor files, adjust configuration, run tests and propose cross-cutting changes.

The use case has moved quite abruptly from ‘complete this boilerplate’ to ‘help me achieve this outcome’.

The first time you watch an agent navigate your repository, understand its structure and suggest coherent modifications across multiple layers, something clicks. This is no longer just assistance at the keyboard. It is a capability layer sitting alongside the engineer. For experienced developers, this has been quietly revolutionary.

Those with a solid grounding in software engineering principles can now move across stacks and domains far more quickly. Architectural understanding and transferable skills matter more than ever. Onboarding time drops. Side projects appear that might never have existed. Barriers to experimentation fall away. But it has not been without cost.

Anthropic’s recent research on how AI assistance impacts coding skill formation highlights something most of us already suspected. When the answer is always available, parts of the learning process are bypassed. Junior engineers can become dependent on the tooling rather than developing the mental models required to reason independently.

Alongside this, we have seen the rise of the so-called ‘vibe coder’, a term even recognised by Collins as a word of the year. Put simply: build on instinct and let the AI handle the details.

Used responsibly, this can accelerate exploration. Used without technical depth, it creates fragile systems at speed. Security gaps, architectural shortcuts and governance blind spots can be introduced just as quickly as features.

The models themselves are markedly better than a year ago. In well known domains they hallucinate less. They reason more coherently over existing code. They maintain context across longer sessions. In capable hands they are genuine force multipliers. But greater capability increases the responsibility on those leading engineering organisations.

The question is no longer ‘Should we buy Copilot licences?’ The question is ‘How do we embed AI responsibly into the way we build software?’ That moves the conversation from tooling to operating model.

It touches architecture standards, security review, intellectual property, cost governance and skills development. It raises board level concerns about data exposure and regulatory risk. It forces clarity about what good engineering looks like in an AI augmented world.

Over the past year, my own thinking has shifted accordingly. My earlier post focused on value for money and developer experience. That felt appropriate at the time. Today, the framing feels too narrow. AI is no longer just a helpful coding assistant. It is becoming part of the production system itself. As leaders, we can’t just enable it and hope for the best. Nor can we ignore it. Both extremes are irresponsible.

Instead, we need deliberate integration, clear guardrails and explicit skill expectations. A culture that encourages engineers to challenge the model rather than defer to it. AI as augmentation, not abdication: we will always be ultimately responsible for what the AI creates for us.

Looking ahead, I can’t imagine the pace slowing. Agentic workflows will mature further. Tooling will integrate more deeply into CI pipelines and platform engineering. The line between human and machine contribution will blur more and more. The real differentiator will not be who has access to the best models. It will be who builds organisations that can use them wisely.

That feels like a more interesting question than whether £10 per month was worth it.
