Experiences and insights on growth, leadership, and the new forces shaping B2B tech and AI.
Are diseconomies of scale hampering AI adoption, and is headcount reduction the uncomfortable catalyst nobody's talking about?
· 4 min read
For decades, we've watched tech adoption begin inside large organisations and slowly trickle down to SMBs and individuals, but I'm beginning to suspect the reverse is happening with AI.
For every tale of woe I hear about inconclusive ROI and fruitless AI pilots within large organisations, I hear about a freelancer or self-employed person — unencumbered by corporate IT and procurement policies — who has lashed together various AI tools, workflows, and cloud services to utterly transform their personal productivity. It seems the secret has been hiding in plain sight all this time: productivity is personal, not organisational.
Previous enterprise shifts — ERP, CRM, Cloud — were built on the process as the unit of value. They required top-down, command-and-control rollouts to prevent left-hand/right-hand chaos and waste. But AI's unit of value is the individual: their specific thinking, creativity, drafting, and judgment. A technology that augments the idiosyncratic way a human mind works will always be adopted more readily by people acting in an individual capacity.
The data bears this out. Among large companies with more than 250 employees, AI adoption has dipped from a peak of 14% to 12%. Among freelancers, it stands at 75%.
Why does scale make it worse?
When it came to rolling out AI, large organisations responded with the only playbook available to them: standardise, govern, procure, roll out, measure. Copilot licences were purchased and lunch-and-learns were scheduled, and yet 42% of companies abandoned most of their AI initiatives in 2025, up from 17% the year before.
The problem contradicts everything large organisations believe about their own advantages: diseconomies of scale. In most enterprise tech deployments, scale amplifies the returns, but with AI, the reverse seems true.
The coordination overhead of deploying a personalised technology stack to hundreds or thousands of diverse workers compounds with every layer of governance. When an organisation forces a single-vendor configuration on its staff — as corporate procurement policies usually dictate — it sacrifices the very thing that makes AI valuable today: the high-fidelity fit between the tool and the individual user's unique cognitive style.
If economies of scale built the enterprise, diseconomies of scale are frustrating its ability to harness AI.
Headcount reduction is not a consequence of AI adoption. It looks more like a precondition.
The current wave of large-scale tech layoffs reshaping global organisations like Oracle, Amazon, and Block is not, in the main, caused by AI replacing workers, as the media usually reports it. Instead, it is being driven by boards who have concluded that organisation-wide AI adoption cannot be coaxed, encouraged or even mandated.
Instead, it will be forced by circumstance.
The re-org, redundancy programme, or RIF (Reduction In Force) is the lever usually yanked when large organisations need to enact a major change in direction or strategy. CEOs can demand that their managers prudently constrain headcount, rationalise costs, and encourage the adoption of AI in their teams, but it's more efficient to simply remove the optionality.
So, my sense is these cuts are coming ahead of the AI productivity gains, not because of them. When there are fewer people to absorb the work, necessity will become the mother of invention, and those who remain will be forced to find new ways to do it — with AI tools.
The self-employed are building personal AI operating systems while large organisations are still negotiating procurement terms. How that gap closes is the more interesting question.
Sources: Fortune, 2727 Coworking, S&P Global Market Intelligence.
Eight themes and reflections on how AI might play out in B2B
· 19 min read
Twenty-five years ago, I was a tech blogger. Before LinkedIn, before Twitter, before Substack. Over the course of three or so years, I wrote a good number of posts thinking out loud about what the Web, the then-nascent Web 2.0, early social media and web services might do to the world of software, business and society.
Fast forward to 2026, and these days I spend a decent chunk of my time meeting with early-stage tech founders, venture capitalists, CEOs of mature tech businesses, and a few private equity partners, all of whom inform and shape my perspective.
In short, while we grapple with what AI is, arguably more pertinently what it isn't, and where, whatever it is, it's taking us, I have an uncannily familiar feeling in my gut again.
A feeling, just like twenty-five years ago, that we're back in a full-blown technology revolution, and suddenly everything is exciting again.
So, this is probably the closest to the kind of thinking-out-loud I used to do twenty-five years ago. These are not predictions; some may be more like propositions, and they are definitely not fully baked.
1. The strongest signal right now
The single most important signal in Q1 2026 isn't a benchmark, a valuation, an analyst research paper or a polemic. It's behavioural. The engineers who least need AI assistance are the ones most enthusiastically embracing it.
Here is perhaps the most telling data point in the entire AI debate. Some of the world's most capable software engineers, the people who could write elegant, complex, production-grade code in their sleep, are now among the loudest advocates for AI-assisted development.
This is not a story about AI replacing mediocre developers. This is about elite practitioners voluntarily and enthusiastically changing how they work because the output is better and delivered faster. When the best people in a field restructure their own practice around a new tool, that is not hype. That is a leading indicator.
It's also important to note that this particular advantage is not limited to pioneering developers and start-ups operating at the edge of tomorrow. I've spoken with CEOs of mature, market-leading software businesses who are blending (and increasingly mandating) AI-assisted coding into their organisations.
And professionals in other domains, clinging to the belief that AI is unlikely to automate chunks of their daily workflows any time soon, don't understand how insanely complex software engineering is. Or, rather, was.
2. Good enough to be dangerous
The AI debate is fixated on the wrong threshold. AGI shouldn't be the bar for mainstream utility. The bar should instead be SGI, or Sufficient General Intelligence: LLM-based AI that, while narrow, is capable enough, in enough places, enough of the time. That alone may present a significant opportunity for pragmatic transformation and operational leverage.
The most common dismissal of large language models follows a predictable line of argument. Critics acknowledge the impressiveness, then pivot: these systems don't truly understand. They're not rational, nor sentient. They lack grounded world models. They confuse correlation for causation. Plus, they hallucinate.
The first part of that argument is probably correct. True AGI will require something LLMs fundamentally don't have: a causal, deterministic and rational model of how the world actually works.
But they're drawing the wrong conclusion from it.
There is an inconvenient truth here that we may not have fully reckoned with: we would likely be surprised, and possibly a little unsettled, by just how little intelligence is required to complete a good-ish number of discrete white-collar tasks. A surprising proportion of professional work was never bundled into human roles because it required Einstein-level intelligence eight hours a day. It was bundled there because humans were the only general-purpose processors available.
And so even if LLMs only ever attain Sufficient General Intelligence, possibly as little as 10% to 20% of a human's capability across a given role or domain, this may still represent significant operational gain.
SGI won't require world models or sophisticated reasoning. It requires clearing the bar in enough places that the old assumptions stop holding. Agentic AI, systems that can plan, act, and iterate across multi-step tasks, could be the unlock.
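To make "plan, act, and iterate" concrete, here is a minimal sketch of the loop at the heart of most agentic systems. It is illustrative only: the tools, the reply format and the scripted "model" are invented for the example, not any particular vendor's API.

```python
# A minimal, illustrative plan-act-iterate loop. The tools, the reply format
# and the scripted "model" are invented; a real agent would call an actual
# LLM where call_llm sits.

SCRIPT = iter([
    "search:competitor pricing",
    "draft:summary of findings",
    "FINISH:summary drafted",
])

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call; here it just replays a script."""
    return next(SCRIPT)

TOOLS = {
    "search": lambda arg: f"results for {arg}",  # stub tools for illustration
    "draft": lambda arg: f"draft of {arg}",
}

def run_agent(task: str, max_steps: int = 10) -> str:
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        # Plan: ask the model for the next action, given everything so far.
        decision = call_llm("\n".join(history) + "\nReply 'tool:arg' or 'FINISH:answer'")
        if decision.startswith("FINISH:"):
            return decision[len("FINISH:"):]     # done: return the answer
        # Act: execute the chosen tool with its argument.
        tool, _, arg = decision.partition(":")
        observation = TOOLS[tool](arg)
        # Iterate: feed the observation back into the next planning step.
        history.append(f"Did {decision}; saw: {observation}")
    return "stopped: step budget exhausted"

print(run_agent("Compare competitor pricing"))   # -> "summary drafted"
```

The point is the shape, not the code: a system that keeps narrowing the gap between task and answer is precisely what lets SGI clear the bar in enough places, often enough.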
None of this means the end of white-collar work. The industrial revolution didn't eliminate human labour; it redistributed it, reduced the headcount needed for certain tasks, but it also created new categories of work that hadn't existed before.
The critics may be right that AGI is a long way off. They may be wrong about how much that actually matters.
3. The Death of Distance, revisited
AI has all but eliminated the distance between having an idea and building it. That is genuinely transformative. But it has not reduced the distance between a good idea and a bad one — and in the age of vibe coding, the difference between the two gets built and shipped faster than ever before.
In 1997, economist Frances Cairncross published The Death of Distance, arguing that the internet would make geography economically irrelevant. In 2026, we are witnessing a second, more profound collapse: the death of the distance between intent and execution.
Vibe coding and AI-native development have effectively reduced the "build time" of an idea to near zero.
"The projects I build with my AI agents weren't impossible before — they were sitting in a queue for months because the cost of explaining them to anyone exceeded the cost of just not doing them. That cost has now collapsed." — Azeem Azhar
For the first time in history, you do not need to know how to build something in order to ship it. A founder can prototype without an engineer; a marketer can deploy a functional tool without touching a line of code. But there is a trap hidden in this efficiency.
"People think computers will keep them from making mistakes. They're wrong. With computers, you make mistakes faster." — Adam Osborne
When execution is difficult or costly, bad ideas usually die in the friction of development. When execution is instant, the world is suddenly flooded with high-fidelity, functional noise. We may be entering the era of "Confident Mediocrity".
The risk is not that the AI will fail to build what you ask for. The risk is that it will build exactly what you ask for, even if it's nonsense. AI is a force multiplier, but zero multiplied by a thousand is still zero, so it quickly becomes a farce multiplier instead.
"If you look at SaaS spend today, if you look at IT spend overall, it's 8-12% of enterprise spend. You have this innovation bazooka with these models, why would you point that at rebuilding payroll, or ERP or CRM? You're going to take it and use it to extend your core advantage as a business, or you're going to optimise the other 90% you're not spending on software today." — Anish Acharya
As the technical barrier to entry evaporates, the signal in any industry is shifting upstream. In the old world, the bottleneck was the ability to build — Technical Fluency. In the new world, the bottleneck is the ability to direct intent — Intellectual Taste.
In the age of instant execution, the only remaining competitive advantage is judgment. We no longer need people who can 'do the work' as much as we need people who know what work is actually worth doing.
4. The Ghost UI
For twenty-odd years, I've used my own personal heuristic to gauge whether a B2B software business was any good or not. We used to judge software by its elegance. We may come to judge it by its absence.
My philosophy is very simple. If a software company is proud of its product, and if it's any good, then it prominently displays it as a hero image on its website's front page. Conversely, less product-led software companies that adorn their homepages with stock photography are usually concealing vast amounts of UI complexity or an ugly product.
After twenty-odd years, I am not sure my heuristic survives in an agentic world, where a UI is not a feature; it is redundant. Every minute a user spends navigating a menu is a minute the software has failed to do its job autonomously.
Agents with the appropriate levels of autonomy will select, in milliseconds, which skills and connections to employ in their Ghost UIs to get stuff done — and that will be a far more logical and clinical affair than a beauty parade.
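As a sketch of how that selection might work, imagine an agent ranking its available skills against what a task needs and invoking the best match. The registry, the skills and the scoring rule here are all invented for illustration.

```python
# Illustrative only: how an agent might choose a skill behind a "Ghost UI".
# The skills, capabilities and scoring rule are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Skill:
    name: str
    capabilities: set              # what this skill can do
    invoke: Callable[[dict], str]  # the underlying integration call

REGISTRY = [
    Skill("raise_invoice", {"billing", "pdf"}, lambda args: "invoice raised"),
    Skill("reconcile_bank", {"billing", "bank"}, lambda args: "3 transactions matched"),
]

def dispatch(task_needs: set, args: dict) -> str:
    # No menus, no screens: rank skills by capability overlap and call the winner.
    best = max(REGISTRY, key=lambda s: len(s.capabilities & task_needs))
    return best.invoke(args)

print(dispatch({"billing", "bank"}, {}))  # -> "3 transactions matched"
```

Whether the winning skill "looks good" never enters into it; only capability fit does.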
5. Per-seat pricing and price list innovation
The per-seat licence was built for a world where humans did the work. AI is breaking that assumption. But before writing off enterprise software incumbents, take note of another of my heuristics: When innovation stops in the product, it moves to the price list.
The per-seat licence is an elegant model for a world where humans do the work. As a company grows, it hires more people, and enterprise SaaS revenue grows in lockstep.
But it was designed for a time when the unit of productivity was a person. AI is breaking that assumption. If an autonomous agent does the work of ten analysts, the customer's productivity skyrockets, and the incumbent's per-seat revenue collapses by 90%. Legacy SaaS vendors are now financially incentivised to keep users busy rather than effective.
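A hypothetical worked example makes the exposure plain; the figures are invented, but the arithmetic is the point.

```python
# Invented numbers to illustrate the per-seat collapse; not real pricing.
seats, price_per_seat = 100, 600          # 100 analysts at $600/year
revenue_before = seats * price_per_seat   # $60,000/year

# One agent now does the work of ten analysts, so seats fall 90%...
revenue_after = (seats // 10) * price_per_seat  # $6,000/year

# ...while the customer's output, and so their willingness to pay, goes up.
print(revenue_before, revenue_after)      # 60000 6000
```

The value hasn't vanished; only the metric it was billed against has.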
Faced with existential pressure on the per-seat model, enterprise SaaS vendors will do what they have always done: repackage, rebundle, and reprice with considerable speed, creativity and innovation. The price list is the last line of defence. And in enterprise software, it is a surprisingly well-fortified one.
6. In SaaS, the context is the moat
While AI might easily replace 'thin' SaaS apps that merely shuffle data around, it cannot replicate the context, the deep organisational memory and complex workflows embedded in 'thick' systems of record.
On one hand, you have 'thin' SaaS apps that are little more than UI wrappers sitting on top of a SQL database. These are more susceptible to disruption from AI.
Then you have 'thick' systems of record. A good example would be your accounting app, where not only is transactional data ingested, recorded and reported, but where your business processes and discrete workflows are codified and defined behind every button and discipline. These systems provide a critical element of context; they understand the why behind an organisation.
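A crude sketch of the distinction, with invented rules: the thin app merely persists a row, while the thick system encodes the organisation's process around that row.

```python
# Invented example contrasting 'thin' and 'thick' SaaS; every rule is a placeholder.
invoices, ledger, audit_log = [], [], []

# Thin: a UI wrapper over a table. Easy for an AI agent to replicate.
def thin_save_invoice(invoice: dict) -> None:
    invoices.append(invoice)  # persist the row; nothing else is known

# Thick: the same action, wrapped in codified organisational process.
def thick_post_invoice(invoice: dict) -> None:
    tax = round(invoice["amount"] * 0.20, 2)             # jurisdiction-specific tax rule
    if invoice["amount"] > 10_000:                       # approval policy, encoded
        raise PermissionError("needs senior approval")
    ledger.append(("debtors", invoice["amount"] + tax))  # double-entry consequence
    audit_log.append(("invoice_posted", invoice["id"]))  # organisational memory
    thin_save_invoice(invoice)                           # the row itself is the easy part

thick_post_invoice({"id": 1, "amount": 500.0})
print(ledger, audit_log)  # the context an AI rival would have to re-learn
```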
7. AI-forcing and multi-region SMB
European SMB software is highly fragmented by local regulations, culture and practices, making pan-European expansion cost-prohibitive. AI-assisted coding may finally reduce the costs of localisation and compliance, allowing these companies to overcome the structural barriers that have long limited Europe's tech growth.
From a B2B software perspective, Europe is a nightmare, particularly so in SMB. It may be unified by currency, free movement and a raft of shared laws, but at the ground level of business, it's incredibly fragmented.
If you're an SMB-focused B2B SaaS vendor in France, Germany or the UK, the sheer cost and complexity of building and localising product, country-specific integrations, banking, reporting and compliance is just eye-watering.
It's possible, if not probable, that AI-assisted coding leverage may finally tame the prevailing cost-inefficiency equation of taking SMB tech multi-region.
8. The rise of the all-terrain leader
Most leaders are shaped by where they came from — sales, finance, product, engineering. That origin story used to define their operational ceiling. AI is about to change that.
In retrospect, one ability that I'm certain enabled me to operate more effectively than my peers during Xero's early years was the range of competencies I utilised across multiple operational and functional disciplines.
Most CEOs accumulate experience and develop skills within a specific domain, whether that be sales, finance, engineering or operations, and that background then tends to shape or flavour their overall ability and effectiveness as leaders.
AI is about to make range available to anyone. The constraint was never intelligence or ambition — it was access to the right knowledge, at the right moment, explained in the right way.
This will be the engine behind the all-terrain operator. A founder who came up through engineering can now think fluently about or finely orchestrate their go-to-market processes. A sales-led CEO can interrogate and shape their product roadmap. The cognitive overhead of operating outside your native domain is vanishing. Breadth was once a happy accident of how a career unfolded. It will soon become learnable.
If the first era of the Web was about access (finding information) and the second was about connection (social and SaaS), this third era is about synthesis. We are moving from a world where we had to learn the language of machines to a world where machines have finally learned ours.
When the "how" becomes trivial, the "why" becomes everything. The real winners of this revolution won't be those with the most powerful AI, but those with the best "Intellectual Taste" — the ones who know what work is actually worth doing in the first place.
Twenty-five years ago, everything felt new, and the walls were bouncing with ideas. It's taken a quarter of a century, but that feeling is finally back.
Is AI-assisted coding about to bring about the YouTube-ification of software?
· 5 min read
There's still a great deal of debate and skepticism about where LLMs are taking us. This is natural and not entirely dissimilar to when the web went mainstream thirty years ago, and again when cloud apps arrived around twenty years ago.
It is also worth reflecting on how long it took those shifts to progress from niche nerd topics to broader awareness and then adoption. Technological inertia is powerful; it takes about a decade after the intellectual arguments are won before we see majority adoption.
It would be a mistake, therefore, to disparage the current state of language-model-based AI on the basis that it doesn't yet deliver sustainable economic returns to those who try to deploy it.
I personally suspect we're about 1% of the way into wherever LLMs are going, and it's far too early to size it, just as Thomas Watson, the long-time head of IBM, famously did when he reportedly said he thought there was "…a world market for maybe five computers".
However, my time in software spans three decades plus, which means I'm pathologically incapable of moderating my impulse to map prior shifts onto the present one and to peer, proverbially, around the corner.
While generalised agentic AI business applications are still percolating, and AI investors huff and puff themselves into a bubble — what if it's not a bubble? — much of what I'm reading and hearing points to AI-assisted coding having turned a corner in the second half of 2025.
"I've never felt this much behind as a programmer... I have a sense that I could be 10X more powerful if I just properly string together what has become available over the last ~year and a failure to claim the boost feels decidedly like skill issue." — Andrej Karpathy
"The quality of work produced by Claude... regularly blows my mind. I cannot and do not want to go back to that infuriating, head-scratching, hair-pulling, sloth-like programmer productivity of, er, 2 years ago." — Olly Heady
"I believe the AI-refusers regrettably have a lot invested in the status quo...They all tell themselves that the AI has yet to prove that it's better than they are at performing X, Y, or Z, and therefore, it's not ready yet. But from where I'm sitting, they're the ones who aren't ready." — Steve Yegge
The implications are profound. Collapsing production costs and simultaneously leveraging productivity in software development will spark a number of significant changes, not least a likely explosion in the number of new products coming to market.
You're gonna need a bigger metaphor
The Cambrian Explosion was occasionally employed as a somewhat grand metaphor by VCs to describe what happened to software sometime around 2010, when the practice of building desktop software was finally obsoleted by the general availability of reliable, low-cost cloud services.
As great as the shift to the cloud was, it's possible, if not probable, that AI-assisted coding will bring about a much bigger shift.
Anish Acharya, a General Partner at Andreessen Horowitz, makes a compelling case arguing that the structure of the software industry could be about to undergo a huge metamorphosis, akin to the impact YouTube has had on video content creation and distribution.
In barely 20 years, YouTube has succeeded in all but eliminating the cost of producing and distributing video content, resulting in a colossal number of mainstream and niche-interest channels, with a staggering 700,000 hours of new content being uploaded every day.
"We may see hyper-personalized applications on the web, for much smaller audiences. This is tremendously liberating: software no longer needs to be practical... It just needs to have a good idea behind it, and a couple of people who understand its value." — Anish Acharya
Mass amateurisation, the democratisation of expertise or unadulterated chaos?
Which brings us back to pattern recognition. Just as skepticism was abundant during the early days of the web and the cloud, overly fixating on the risks of AI-generated code misses the forest for the trees. The collapse of production costs is not a variable; it is a trend line.
Yes, we will be awash in niche, "good enough" software products, many of which will go nowhere or fail. Yes, relying on unproven, micro-scale vendors carries risk. But to focus solely on the potential for chaos is to make the same mistake Thomas Watson made with the computer. We are witnessing the democratisation of engineering, where the barrier to entry drops from "millions of dollars" to "a good idea."
As I said, I believe we may well be only 1% of the way into all this. If the YouTube era taught us anything, it's that while mass amateurisation brings chaos and noise, it also unleashes creativity on a scale previously impossible. The next ten years won't just be about who can write code, but who can navigate a world where coding is no longer the bottleneck — and who can filter the signal from the noise.