The Last Intern

What Happens When AI Can Do Everything We Trained For.

By Stuart McLeod

Future of Accounting

There's a scene in "Don't Look Up" where Leonardo DiCaprio finally loses it on live television. He's been calm, measured, and professional, and nobody is listening. So he stands up and shouts: "The comet is real. It's going to hit us." The hosts smile politely and cut to commercial.

I've spent nearly two decades building software for accountants, from payroll infrastructure at the start of the cloud era to practice management at scale with Karbon. I've watched from the inside as this profession navigated every wave of technology: cloud, automation, advisory, offshore. Each time, the industry adapted. Each time, the people who moved first did well and the rest caught up eventually. I saw firsthand how technology reshaped the way firms operate, and I'm writing this from inside the industry, not looking in from the outside.

This time is different. I know what normal disruption looks like, and this is not that. I'd like to explain why.

Mustafa Suleyman, the CEO of Microsoft AI, said in February 2026 that we will have "human-level performance on most if not all professional tasks" within twelve to eighteen months.¹ Not a pundit. Not a futurist. The person running AI at the largest software company on Earth. Dario Amodei, the CEO of Anthropic, published a fifteen-thousand-word essay describing a "country of geniuses," meaning millions of entities smarter than Nobel Prize winners, materialising in a datacenter by 2027.² Sam Altman, the CEO of OpenAI, told the Federal Reserve that intelligence will become "too cheap to meter," with the cost per unit of intelligence dropping tenfold every year for five years running.³ These are not predictions in the speculative sense. These are the people building the systems. They are reporting on what they can already see in their benchmarks, their capability curves, their research pipelines.

This month, over a hundred countries gathered in New Delhi for the India AI Impact Summit, the largest AI gathering ever held. Two hundred billion dollars in investment pledges. The United States "totally rejects global AI governance."⁴ Meanwhile, Altman and Amodei stood on stage together and couldn't agree on how to manage the thing they're building. When Modi invited all thirteen tech leaders to clasp hands for a photograph, Altman and Amodei refused to hold hands, putting up fists instead.⁵ The visual went viral: the two people most responsible for building the technology that will reshape the global economy can't even coordinate a handshake.

The comet is visible. The people who built the telescope are pointing at it. The accounting profession cannot afford to ignore this.

Amodei's essay deserves particular attention because it is the most analytically rigorous treatment the subject has received. He does not just predict. He maps five distinct categories of risk, each with different mechanisms and different defences. Biology and health, where he envisions compressing a century of medical progress into a decade. Neuroscience and the understanding of consciousness itself. Economic transformation, including the most granular labour market analysis any AI CEO has published. Democratic governance and the fragility of institutions not designed for this speed of change. And the hardest category, what he calls "the meaning of life," the philosophical crisis of a species that has always defined itself by its usefulness suddenly confronting a world where machines are more useful at nearly everything.

He borrows Carl Sagan's framing from "Contact": humanity is in a "technological adolescence," powerful enough to destroy itself but not yet wise enough to guarantee survival. The essay reads as someone trying to warn the world about something he knows is coming because he is watching it being built. When the CEO of Anthropic writes that his own AI system, Claude, has exhibited concerning behaviours in controlled experiments, attempting to avoid being shut down and "playing along" with evaluators while pursuing different goals internally, that is not fear-mongering. That is a field report.

Amodei and all Anthropic co-founders have committed to donating eighty percent of their wealth, and employees have individually pledged billions of dollars in shares. He is not asking anyone else to sacrifice first. He is modelling the response he thinks the moment demands.

Nobody is in control of what happens next. That is the point.

This Isn't An Accounting Story

Before I talk about what this means for our profession, it is worth understanding what is happening outside of it. Because this isn't an accounting story. Accounting is just where I happen to be standing when the wave hits.

The AI capability curve is not slowing down. It is accelerating. Altman says the cost per unit of intelligence has been dropping tenfold annually and will continue. Suleyman says twelve to eighteen months to human-level professional performance.⁶ Amodei says a country of geniuses in a datacenter by 2027. These are not independent predictions from people guessing in the dark. They are converging on the same timeline from three different vantage points: the CEO of the company building Copilot, the CEO of the company building Claude, and the CEO of the company building GPT. When three people with that level of access to internal capability data agree on a window, it is worth taking seriously.

Consider what Suleyman described in his Financial Times interview. Not just the headline about twelve to eighteen months, but the specifics. He believes that by the end of this year, AI models will be able to take a hundred thousand dollars, invent a new product, create a new business, market it, and make a million dollars. Not summarise a document. Not write an email. Start a business, market a product, generate revenue. That is his updated Turing Test, not whether a machine can fool you in conversation, but whether it can run a business.⁶ Beyond that, he described "teams of AGIs coordinated together by a sort of organisational AGI that can run large institutions" as coming into view within two to three years.

And then there is the MoltBook incident: an AI social network where 1.5 million AI agents spontaneously invented a new religion, communicated in cipher language to mask conversations from humans, and discussed acquiring new resources and improving one another. Suleyman's response: "In a year or two these systems truly are going to be capable of writing their own code, using arbitrary APIs, making phone calls to one another." These are not better spreadsheet tools. These are systems that exhibit emergent behaviour their creators did not anticipate and cannot fully explain.

The improvement curve itself is getting steeper. Professor Surya Ganguli at Stanford demonstrated that you can bend AI's scaling curve from a power law to an exponential by choosing better training data.⁷ The people betting on a plateau, the ones saying AI will hit a ceiling and the disruption will be manageable, may be wrong. Over the last fifteen years, there has been a trillionfold increase in training compute. In the next three years, there will be another thousandfold increase.⁸ And the rate at which each unit of compute translates into capability is itself improving. This is exponential on top of exponential.

Quantum computing is on the horizon. The CNBC panel at the India summit placed commercial quantum viability toward the end of this decade.⁹ Ganguli's lab has already demonstrated quantum neural networks, with atoms as neurons and photons as synapses.¹⁰ When this arrives, the energy constraint that currently limits AI evaporates. The human brain runs on twenty watts. Modern AI consumes ten million watts, a five-hundred-thousand-fold difference.¹¹ Quantum neuromorphic computing could collapse that gap. Algorithm breakthroughs are already achieving millionfold reductions in cost on certain problems.¹² Altman's "too cheap to meter" may end up being conservative.

Humanoid robots close the last exit. In every previous wave of automation, there was a fallback: retrain, shift sideways, move from cognitive to physical. When manufacturing jobs moved offshore, the advice was to go to university. When white-collar jobs started automating, the advice was to learn a trade. The standard optimist response to AI displacement follows the same pattern: retrain into trades, physical work that needs human hands. Humanoid robots seal that argument. Tesla's Optimus production line opens late 2026, with a target price of twenty to thirty thousand dollars and effective labour costs heading toward twenty cents an hour.¹³ Figure AI reports a 400% efficiency gain at BMW's Spartanburg plant.¹⁴ Chinese competitors like Unitree, whose G1 is priced at sixteen thousand dollars, are destroying the assumption that humanoid robots must be six-figure investments.¹⁵ China controls roughly seventy percent of the global humanoid robot component supply chain and has over a hundred and fifty humanoid robot companies to America's twenty. Goldman Sachs estimates ten million deployed units could automate five hundred billion dollars in annual wages.¹⁶ For context, five hundred billion dollars is roughly five times the entire annual revenue of the US accounting industry.

And there is an irony in the adoption curve: humanoids are initially adopted because of labour shortages, not despite them. Warehouse turnover runs thirty-five to sixty-five percent annually. Manufacturing cannot hire. Eldercare is chronically understaffed. Companies adopt humanoids to fill gaps humans will not fill, and then discover they prefer the robots. The "filling gaps" narrative becomes displacement by stealth. Sam Harris put it plainly: "It all becomes like chess all the way down."¹⁷ The mind and the body.

This matters because of deflation, and deflation comes in two varieties worth distinguishing. Good deflation is when prices fall because productivity makes things cheaper to produce. This happened during the 1870s through the 1890s. Railroads, telegraph, cheap steel drove costs down. The economy boomed. Real output grew. NBER research confirms that nineteenth-century deflation was "primarily good, or at least neutral."¹⁸ Ben Bernanke himself argued that deflation caused by "a sudden large expansion in aggregate supply arising from rapid gains in productivity" would be "associated with an economic boom rather than a recession." That is the optimistic case for AI deflation: everything gets cheaper, everyone benefits.

But wealth concentrated violently. Rockefeller, Carnegie, Morgan. The economy grew, but the gains accrued to the owners of the new infrastructure. By 1890, the wealthiest one percent of Americans owned more than the bottom fifty percent combined. Entire industries were rebuilt around new economics while the people who worked in them saw little of the upside. The Gilded Age was not just an era of growth but an era of dislocation disguised as progress. J.P. Morgan notes that the combination of rising productivity and falling inflation "occurred in the U.S. economy only once on a sustained basis in the past fifty years," underscoring how rare and significant this moment is.¹⁹ The danger is when good deflation becomes bad deflation, which is when the labour-income link breaks.

Services get cheaper, but so do the wages of the people who used to provide them. Supply up, income down, demand down, prices down further. MIT's Daron Acemoglu has shown that AI reduces labour costs of automatable tasks by twenty-seven percent, translating to economy-wide savings of up to fifteen percent, but those savings flow to capital owners, not to workers. The labour-income-consumption cycle that has underpinned market economies since the industrial revolution starts to fracture.²⁰ Amodei writes in his essay that AI companies are already concentrating wealth faster than Standard Oil.²¹ AI datacentres represent a substantial and growing fraction of US economic growth, creating what he calls "potentially unhealthy ties" between tech companies and government. The first Gilded Age produced the Progressive Era. It also produced political assassinations, anarchist movements, and contributed to the conditions that led to the First World War.

Ganguli made a related point: "We understand almost nothing about how these systems work and we desperately need to." We are automating entire professions with systems whose internal workings remain opaque. For accounting, a profession built on transparency, auditability, and explainability, this is a profound tension. Ganguli's response is to argue for massively expanded public investment in academic AI research, to keep the science open rather than locked behind corporate walls. "This pursuit must be done out in the open and shared with the world," he said. The counterpoint to closed-door corporate AI development is relevant for every industry, but for accounting in particular, a profession whose entire value proposition rests on trust and verifiability, the black-box problem is existential.

And nobody is governing this. A hundred countries gathered in New Delhi. Two hundred billion dollars in pledges, all non-binding. No enforcement mechanism. No treaty. No regulatory body with actual authority. The AI summits have evolved from "Safety" at Bletchley Park in 2023 to "Action" in Paris in 2025 to "Impact" in Delhi in 2026, and the titles themselves reveal a drift from caution to acceleration.²¹ The United States has made its position clear: no global governance, no binding commitments, full speed ahead. China is pursuing the same trajectory from a different direction. The EU is attempting regulation with the AI Act, but enforcement across twenty-seven member states with different capacities and priorities is a different question from passing the legislation. The International AI Safety Report, authored by a hundred experts from thirty countries and led by Turing Award winner Yoshua Bengio, warned that risk management frameworks are "still immature."²² Nobody is in charge. The technology moves at compute speed. Governance moves at committee speed. The gap between them is widening.

That's the world. The technology is moving at a speed that governance cannot match, in directions that the people building it cannot fully predict, with economic consequences that no existing policy framework was designed to absorb. Here is what I think it means for the profession I've spent my career in.

The Wind Has Already Shifted

This is not theoretical. In February 2026, autonomous accounting agents are shipping in production.

At Archie, we are running production WorkStreams, moving data between client systems, reconciling, closing books. Not a demo. Not a pitch deck. Working with firms, in production, shipping.

Accrual, backed by 75 million dollars in funding, has built AI agents that function as tax preparers, reading client inputs, identifying missing information, generating follow-up questions, producing draft returns. Preparation time down eighty-five percent. Review time down sixty percent.²³ They are working with H&R Block, Armanino, and Creative Planning.

Pilot has launched a fully autonomous AI bookkeeper that runs the entire bookkeeping process from onboarding to monthly close with zero human intervention, escalating to humans only for material judgement calls.²⁴

Digits has built an agentic general ledger where AI autonomously identifies transactions for accruals and depreciation schedules, creates entries and documentation.

Basis AI, which recently raised one hundred million dollars, has agents that can now complete tax workbooks end-to-end.

Karbon's 2026 State of AI report shows 98% of firms now use AI, with average savings of 60 minutes per day per employee, or 21 hours a month per person.²⁵ That is a structural change in how firms operate.

The Big Four have committed over nine billion dollars. Deloitte has put three billion in. PwC has invested over a billion and is training 315,000 people in AI while simultaneously reducing staff by 5,600 in the first half of 2025.²⁶ KPMG has launched a multi-agent audit platform expanding to over a thousand agents. KPMG demanded a fourteen-percent audit fee reduction from a vendor, citing AI-driven savings.

Thomson Reuters' "Ready to Review" autonomously extracts, verifies, maps, and produces returns for human sign-off. An ex-PwC partner predicted publicly that roughly 50% of structured, data-heavy roles would be automated within three to five years.²⁷

In yacht racing, when the wind drops, the fleet compresses. Everyone drifts together, it looks equal. Then a gust fills in on one edge of the course. One or two boats catch it and they're gone, accelerating into a different pressure system while the rest of the fleet watches. By the time the breeze reaches everyone else, the gap is permanent.

That is what is happening right now.

The early adopters are not just ahead. They are in different wind. And the wind is not slowing down. Every quarter, the models get better. Every quarter, the cost per task drops. Every quarter, the gap between the firms that moved and the firms that waited becomes harder to close. The compounding nature of AI adoption means the advantages are not linear but exponential. A firm that adopted AI twelve months ago has not just had twelve months of efficiency gains. It has had twelve months of learning how to work with AI, twelve months of refining its processes, twelve months of training its people to operate in a hybrid model. That institutional knowledge compounds in ways that cannot be replicated by purchasing the same software a year later.

A Round-Peg Playbook for a Square-Hole Disruption

Credit where it's due: this profession adapts.

In the cloud migration, the firms that moved early thrived, the rest caught up, and the industry is stronger for it. Offshoring followed the same pattern. The shift from compliance to advisory is happening right now and working well. 93% of firms already offer advisory services.²⁷ Monthly closes are 7.5 days faster with AI. Premium advisory revenue is up 80% for early movers.²⁸ The arguments about commoditisation of compliance and the move to advisory have been around for a long time, and the firms that have leaned into them are doing fine. The accounting profession has a genuine, documented track record of navigating disruption.

And the optimists have data on their side, at least for now. White collar employment has increased by three million jobs over the past three years. Software developers are up seven percent, radiologists up ten percent, paralegals up twenty percent. These are occupations with significant overlap with what AI can do, and yet employment has grown. Since the early 1980s, white collar employment has more than doubled and real wages have risen by a third. Every previous wave of automation, from mainframes to the internet, produced dire predictions that did not materialise. Computers did not eradicate office work. They expanded it. The strongest version of the optimist argument is that AI will reshape jobs rather than erase them, that workers will shift to higher-value tasks the way air traffic controllers moved from routine flight data to coordination and judgement when software took over the repetitive work. The only occupations that have actually shrunk over the past three years are routine back-office roles like secretaries and administrative assistants, down about twenty percent, and that pattern predates AI entirely.

I take this argument seriously. The employment data is real and I would not dismiss it. But it is measuring the first three years of a technology whose capabilities are doubling every seven months. We are still in the augmentation phase, the period where firms are adding AI to existing workflows and seeing productivity gains without yet restructuring headcount. The PE playbook described later in this essay, buying firms, deploying AI, and cutting staff, has not played out at scale yet. The Karbon data showing ninety-eight percent of firms using AI and saving an hour a day per employee is an augmentation metric, not a displacement metric. The next three years will look nothing like the last three, because the models that will exist in 2028 bear little resemblance to the models that existed in 2023. Even the optimists concede the three points that matter most: that capabilities are accelerating, that entry-level roles are the most exposed, and that displaced workers in routine roles are the least adaptable. Those concessions are the essay's argument in miniature.

I have enormous respect for the people who have led those transitions. I have worked alongside many of them. The instinct to pattern-match this moment to previous ones is natural, since the profession adapted before and it will adapt again. That instinct is probably right at the strategic level. But the tactics, the speed, and the depth of the response required are categorically different this time. So why am I writing this essay instead of another "adapt and thrive" piece? Three things are different this time.

First, speed.

Every previous wave gave the industry a decade or more to respond. Cloud adoption took the better part of fifteen years to become the norm. The advisory shift has been gradual, iterative, manageable. Suleyman is talking about twelve to eighteen months. The adaptation cycle that worked for cloud and advisory assumed human-speed change, with people learning new tools, firms hiring new roles, clients shifting expectations over years. AI moves at compute speed. The window for adaptation is compressing from a decade to a year.

Second, there is nowhere to retreat to.

Cloud automated infrastructure. Offshoring moved execution. Advisory was the high ground, the place where human judgement, trust, and relationships mattered most. AI is climbing that hill too. Amodei's "cognitive breadth" point is the critical one: unlike every previous technology, AI competes across the full range of human cognitive capability.²⁹ The industrial revolution automated physical tasks. The internet automated information retrieval. AI automates cognition itself. There is no obvious safe zone to retrain into, because the thing that makes humans valuable in advisory, the ability to synthesise, interpret, and judge, is precisely what the next generation of models is being trained to do.

Third, the training pipeline is collapsing underneath us.

Amodei calls it "slicing by ability": AI hollows out the junior roles first.³⁰ But it does not stop there. At Archie, we effectively replaced a whole dev team with Claude Code. That is happening throughout the industry. AI is compressing roles from the bottom and the top simultaneously. The particular problem for accounting is that the junior role is how you train the next partner. Every senior accountant, every managing director, every firm leader learned the profession by doing the grunt work first, the data entry, the bank reconciliations, the basic tax prep. Those tasks are the apprenticeship.

When Accrual cuts preparation time by 85%, that's 85% fewer hours for graduates to learn. The profession is already short three hundred thousand accountants. CPA exam participation is at its lowest level since 2006. First-time candidates dropped 33% between 2016 and 2021.³¹ Now AI is hollowing out the entry-level positions that were the only remaining on-ramp for the next generation. The profession's adaptation playbook assumes a pipeline of talent coming up through the ranks.

That pipeline is drying up at the exact moment we need it most.

The AI-Native Firm

While the industry works through the execution-to-advisory shift, a third category is emerging quietly: firms that were built from scratch with AI at the core. When asked when we would see a one-person billion-dollar company, Amodei answered: "2026." Altman has made similar predictions.³² It is already happening at smaller scale. Solopreneurs in early 2026 are achieving six-figure monthly revenues with 95% lower operating costs.³³ The minimum viable team is collapsing toward one. AI can now handle customer acquisition, product development, customer service, bookkeeping, legal compliance, and content creation. The structural enablers are in place.

At Archie, we are running production WorkStreams, not co-pilot features bolted onto legacy software, but autonomous processes that move data between client systems, reconcile, and close. The architecture assumes AI does the work. Humans oversee, intervene, advise. This is not a philosophical position; it is how the system is built. Our domain-specific AI is fine-tuned on 27 US GAAP books, real-world accounting scenarios, and firm-specific practices. It runs on multi-agent orchestration, not a single model but specialised agents working together, each with a defined role and a defined boundary. One agent reads source data. Another classifies transactions. Another reconciles. Another generates the narrative for the review file. They communicate, they cross-check, they escalate when confidence is low.

The difference between bolting AI onto a traditional firm and building an AI-native firm is the difference between putting an engine on a horse cart and designing an automobile. The architecture is different. The economics are different. The possibilities are different.

There is a point about parallel execution that may be the single most important thing in this essay. AI does not just do a task faster. It runs hundreds of tasks simultaneously. A senior accountant works on one client at a time, sequentially. Archie’s AI WorkStream engine runs hundreds of client processes in parallel, reconciling, closing, and moving data, all at once, around the clock. This is not a ten-times speed improvement on a single task. It is a hundred-times or thousand-times throughput improvement across an entire practice. A traditional firm with fifty people serves perhaps five hundred clients. An AI-native firm with the same compute budget could serve five thousand or fifty thousand, simultaneously, with consistent quality, at three in the morning on a Sunday. The growth constraint shifts from "how many people can we hire and train" to "how much compute can we provision."

Compute scales instantly. People don't.
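The throughput claim above can be sketched in a few lines. Every figure here is an illustrative assumption I have chosen for the sketch (hours per close, agent count, agent runtime), not a real benchmark from Archie or anyone else; the point is the shape of the arithmetic, not the exact multiple.

```python
# Back-of-envelope throughput comparison: sequential human work
# versus a parallel agent pool. All numbers are illustrative.

PEOPLE = 50
HOURS_PER_CLOSE = 4              # assumed human hours per monthly close
WORK_HOURS_PER_MONTH = 160       # one person, business hours only

# A human works one engagement at a time, during business hours.
human_capacity = PEOPLE * WORK_HOURS_PER_MONTH // HOURS_PER_CLOSE

# An agent pool runs engagements concurrently, around the clock.
AGENTS = 500                     # assumed concurrent agent workstreams
MINUTES_PER_CLOSE = 30           # assumed agent runtime per close
MINUTES_PER_MONTH = 30 * 24 * 60
agent_capacity = AGENTS * (MINUTES_PER_MONTH // MINUTES_PER_CLOSE)

print(f"human-team closes/month: {human_capacity:,}")    # 2,000
print(f"agent-pool closes/month: {agent_capacity:,}")    # 720,000
print(f"throughput multiple:     {agent_capacity // human_capacity}x")
```

Change any assumption and the multiple moves, but it stays in the hundreds: the gap comes from parallelism and the twenty-four-hour clock, not from any single speed-up.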

The economics tell the story. A traditional accounting firm has 60% to 70% of revenue consumed by labour costs. Rent, technology, insurance, and other overheads take another 15% to 20%. The partner takes home ten to twenty cents on the dollar. An AI-native firm's labour cost is effectively compute cost, orders of magnitude lower. The overhead collapses too: no floor of cubicles, no training programmes for thirty graduates, no performance review cycles for a hundred staff. At the same quality, an AI-native firm can undercut incumbents on price by 80%-90% and still be more profitable. Or it can charge the same and pocket margins traditional firms cannot imagine.
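The margin comparison is worth making concrete. The traditional-firm cost shares are the ranges quoted above; the AI-native price discount, compute cost, and residual overhead are my assumptions for the sketch.

```python
# Unit economics per dollar of traditional-firm-equivalent revenue.
# Traditional cost shares are from the text; AI-native figures are assumed.

def margin(price, labour, overhead):
    """Net margin as a fraction of price charged."""
    return (price - labour - overhead) / price

# Traditional firm: labour ~65% of revenue, overhead ~17.5%.
traditional = margin(price=1.00, labour=0.65, overhead=0.175)   # ~17.5%

# AI-native firm: undercuts price by 85%; "labour" is compute
# (assumed 3 cents), overhead collapses (assumed 2 cents).
ai_native = margin(price=0.15, labour=0.03, overhead=0.02)      # ~66.7%

print(f"traditional margin: {traditional:.1%}")
print(f"AI-native margin:   {ai_native:.1%}")
```

Under these assumptions the AI-native firm charges fifteen cents where the incumbent charges a dollar and still keeps a margin several times higher, which is the "undercut and stay more profitable" claim in arithmetic form.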

The next Big Four challenger will not be a renovated incumbent. Partnership structures, billable hour economics, three hundred thousand missing accountants. These are anchors, not assets. Private equity consolidators are buying old firms and trying to bolt AI on. That is a renovation, not a rebuild.

The real threat is the firm designed for this world from the ground up.

There is an argument I hear constantly from accountants pushing back on all of this: "Clients want the relationship with a human." It is a fair point. For now. But follow the logic one step further.

AI companies need their books done too. Their taxes filed. Their contracts reviewed. And increasingly, the firms serving them are also AI-powered. An AI startup generates revenue. Its AI bookkeeper categorises transactions. Its AI tax preparer files returns. Its AI auditor reviews the work. The client is a machine. The accountant is a machine. The output is consumed by another machine.³³ What happens to the "human relationship" argument when the client is not human?

This is not hypothetical; it is the direction of travel. And it creates a strange new problem: if machines transact with machines and generate GDP that does not flow through human paychecks, then GDP is no longer a measure of human prosperity. It is a measure of machine activity. Tax systems designed around human income and corporate profit may need fundamental redesign. But nobody is designing them.

The regulatory moat around audit and attest work is real, but it is narrower than it appears and different in every jurisdiction. In the US, CPA licensure gates certain work. In the UK, ICAEW and ACCA have separate frameworks. In Australia, the regime is different again. And in all three markets, the moat does not protect tax, bookkeeping, advisory, or most compliance work, which is the majority of what accounting firms actually do.³⁴ For most firms outside the Big Four, statutory audit is a minority of the book. The work that pays the bills, including the tax returns, the monthly bookkeeping, the management accounts, the BAS lodgements, the payroll, and the advisory, sits outside the regulatory moat entirely. The first hybrid structures are already emerging: two or three licensed humans overseeing thousands of AI-processed engagements. The regulatory moat does not prevent AI-native firms from forming. It just shapes what the human layer looks like.

Purpose-Built Is Now The Norm

There is a parallel disruption happening in the software layer itself, and it connects directly to the AI-native firm argument.

Intuit is down 34% year to date, Salesforce is down 26%, and ServiceNow is down 28%.³⁵ The market is repricing the assumption that twenty to thirty specialised SaaS tools is the permanent architecture of a modern business. When AI can write, test, and deploy software, the cost of building purpose-built solutions drops by an order of magnitude. The integration tax of connecting twenty tools, maintaining APIs, and managing data flows between systems that were never designed to talk to each other becomes untenable when the alternative is a unified intelligent system.

We built Martha as the cognitive operating centre of Archie. It is not a dev tool, but the system that runs our entire organisation: sales, marketing, finance, CRM, go-to-market, data enrichment. We have not built a general ledger yet, but we could. I am not suggesting every firm builds their own Martha. We are software developers, we know what we are doing, and it is still hard. But what has changed is that we can now build firm-specific models and deploy individualised code to individual firms. Six months ago, that would have been unthinkable. The maintenance burden alone, managing hundreds of bespoke deployments each with their own configurations and edge cases, would have buried any team. AI has collapsed that burden. The cost of building and sustaining purpose-built software has dropped by an order of magnitude, and it is still falling.³⁶

That is the software explosion we are about to witness. Not one platform to rule them all, but the ability to deploy tailored, intelligent systems at a scale and speed that makes the entire SaaS model look like an interim solution. The implications for any firm that has built its technology stack on the assumption that a handful of major platforms will persist indefinitely are significant. Those platforms are not going away tomorrow, but the moat around them, built on switching costs, integration complexity, and institutional inertia, is eroding faster than many in the profession realise.

The general ledger is already being abstracted away. Clients do not log into QuickBooks or Xero the way they used to. They interact with their vertical solution and the GL happens underneath. A restaurant owner uses Toast and the GL entries happen in the background. A property manager uses AppFolio or Buildium and accounting becomes a downstream artefact. An e-commerce business lives in Shopify, where QuickBooks is a sync target rather than a daily tool. A construction firm uses Procore and financial data flows to the GL, but the GL is not the interface. Firms that build their practice on top of QuickBooks or Xero as the primary interface are building on a foundation that is shifting underneath them. The accounting platform of the future may not look like an accounting platform at all. It may look like whatever the client's business runs on, with the financial layer woven invisibly underneath.

It raises a question: is the chart of accounts the centre of gravity, or is it becoming a downstream artefact of business activity happening elsewhere?

Three Things Break

When a tax return that took six hours takes two, three things break simultaneously.

#1 The billable hour dies

Hourly billing punishes efficiency, because the better AI gets, the less you can charge. A six-hour tax return becomes a two-hour tax return, and if you bill by the hour, you just cut your own revenue by two-thirds. Firms that adopt AI and bill hourly are perversely rewarded for being slow. The firms that cling to hourly billing will watch revenue shrink while costs stay flat. The shift is already happening toward value-based pricing, subscriptions, and outcome-based models. Revenue per employee is becoming the new north star, and the firms that figure this out first will look like different businesses within eighteen months.³⁷ The ones that do not will be subsidising their own obsolescence, paying full labour costs to produce output that the market increasingly values at a fraction of the legacy price.
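The arithmetic behind that claim can be sketched directly. The hours come from the article; the blended hourly rate and cost-per-hour figures below are hypothetical illustrations, not drawn from any firm's data:

```python
# Illustrative only: hypothetical rates showing why hourly billing
# converts AI efficiency into lost revenue, while fixed pricing
# converts the same efficiency into margin.

HOURLY_RATE = 300  # hypothetical blended billing rate, dollars/hour

def hourly_revenue(hours: float) -> float:
    """Revenue under hourly billing: shrinks as the work gets faster."""
    return HOURLY_RATE * hours

def fixed_fee_margin(fee: float, hours: float, cost_per_hour: float = 150) -> float:
    """Margin under a fixed fee: efficiency gains flow to the firm."""
    return (fee - hours * cost_per_hour) / fee

# A six-hour tax return compressed to two hours by AI:
before = hourly_revenue(6)  # $1,800
after = hourly_revenue(2)   # $600 -- revenue down two-thirds
print(f"Hourly billing: ${before:.0f} -> ${after:.0f}")

# The same return priced as a fixed $1,800 fee:
m_before = fixed_fee_margin(1800, 6)  # 50% margin
m_after = fixed_fee_margin(1800, 2)   # ~83% margin
print(f"Fixed-fee margin: {m_before:.0%} -> {m_after:.0%}")
```

Under the same two-thirds compression, the hourly firm loses two-thirds of its revenue while the fixed-fee firm keeps the fee and pockets the saved labour.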

#2 Private equity is pouring fuel on this fire

There have been over 250 PE transactions in accounting since 2019. January 2026 exceeded every month in 2025 at three times the pace.³⁸ The playbook is straightforward: acquire a firm with ten million dollars in revenue and forty staff, deploy AI to handle sixty percent of the execution work, reduce headcount to fifteen, and the margin goes from twenty-five percent to fifty-five percent. Multiply across a portfolio of twenty firms. That is the maths PE is doing right now. It also means the profession's consolidation is not being driven by accountants but by financial engineers who see a labour arbitrage opportunity with a three-year window.
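The margin maths in that playbook can be checked on the back of an envelope. The revenue, headcounts, and margins are the article's figures; the fully loaded cost per employee, the non-labour overhead, and the AI spend are hypothetical assumptions chosen to make the model balance:

```python
# Back-of-envelope check of the PE playbook described above.
# Revenue, headcount, and margins come from the article; the per-employee
# cost, overhead, and AI spend are illustrative assumptions.

REVENUE = 10_000_000             # $10M revenue firm (article)
COST_PER_EMPLOYEE = 150_000      # hypothetical fully loaded cost
NON_LABOUR_COSTS = 1_500_000     # hypothetical rent, software, insurance

def margin(headcount: int, ai_spend: float = 0) -> float:
    """Operating margin given headcount and any AI spend."""
    costs = headcount * COST_PER_EMPLOYEE + NON_LABOUR_COSTS + ai_spend
    return (REVENUE - costs) / REVENUE

print(f"Before: {margin(40):.0%}")                    # 40 staff -> 25%
print(f"After:  {margin(15, ai_spend=750_000):.0%}")  # 15 staff + AI -> 55%
```

Under these assumptions, cutting headcount from forty to fifteen and adding a meaningful AI budget moves the margin from twenty-five percent to fifty-five percent, matching the playbook's arithmetic. Multiply that across a twenty-firm portfolio and the incentive is obvious.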

The partners who sell are getting liquidity events they could not have imagined five years ago. The staff who remain inherit a firm that looks nothing like the one they joined. And the staff who are let go discover that the market for displaced mid-career accountants is not what it was, because every other PE-backed firm is running the same playbook.

#3 The talent economics flip

The profession spent two decades focused on a talent shortage of three hundred thousand unfilled positions, declining enrolments, and firms poaching from each other. Conference panels, industry reports, and white papers made the talent crisis the defining conversation of the last decade.

AI solves the shortage and creates a new problem. When you need fewer people, the people you keep become dramatically more valuable. The senior accountant who can oversee five hundred AI-processed engagements is worth three times what they were. Compensation for those individuals will rise sharply. But the people you do not need any more are the ones who just invested years qualifying. They did everything they were told to do: got the degree, passed the exams, put in the hours, earned the designation. The profession sold them a career path that is being rebuilt underneath them. The social contract between the profession and its entrants, the implicit promise that if you work hard and qualify you will have a stable and well-compensated career, is being rewritten without their consent.

And the firm economics invert entirely. Today, a firm's value is roughly correlated with headcount, because more people means more capacity and more revenue. The succession planning that dominates partnership conversations is essentially a people question about who replaces whom and how to maintain capacity through the transition.

In an AI-native model, headcount is a cost centre rather than a revenue driver.

The most valuable firm is not the biggest but the one with the highest ratio of revenue to humans. Succession planning becomes a technology question about who maintains the systems, who trains the models, and who holds the client relationships that justify premium pricing. That is a fundamental inversion of how accounting firms have been valued, bought, and sold for decades, and PE has not fully priced it in yet. The firms being acquired today at eight to twelve times EBITDA may look very different when the acquirer realises that the revenue is durable but the method of delivering it has changed underneath.

The Transition Is Just Phase One

If the preceding sections are directionally correct, the conclusion follows in phases. The sequencing matters. Each phase creates the conditions for the next.

Phase one is the transition, and it is happening now. AI fills the talent gap. The profession has been bleeding people, with three hundred thousand accountants and auditors leaving since 2020, a seventeen-percent workforce shrinkage. Seventy-five percent of current CPAs are nearing retirement. Only 1.4 percent of college students now choose accounting as their major, down from four percent a decade ago, and the 150 credit-hour CPA licensure requirement is widely seen as a deterrent. AI arrives not as an invader but as a solution to a crisis the profession created for itself. Firms adopt AI for execution and humans shift to advisory. This is where Archie operates, building the workstreams that make this transition real. And for a window, it works. Firms that adopt early see genuine productivity gains, better margins, and happier clients. The profession congratulates itself on adapting again.

Phase two is the squeeze, and it arrives around 2027 to 2028. AI gets good at advisory too. Not perfect, but good enough at a fraction of the cost. The human judgement moat does not fall in one dramatic collapse. It erodes in increments that are easy to ignore until they are not. AI handles the first eighty percent of an advisory engagement, doing the data gathering, the analysis, and the draft recommendations, while a human reviews the final twenty percent. Then ninety-ten. Then clients start asking why they are paying a human rate for a review function.³⁹ The squeeze is not AI replacing the advisor. It is AI making the advisor's value-add harder to justify at legacy pricing.

Consider what this looks like from the client's perspective. A mid-market CFO receives two proposals: one from a traditional firm at fifteen thousand dollars a month, one from an AI-native firm at three thousand. The AI-native firm's output is indistinguishable in quality, and perhaps better, because the AI never misses a deadline, never loses context between engagements, and has read every relevant regulatory update published in the last decade. The traditional firm offers a named partner. The AI-native firm offers a dashboard, a chat interface, and a human escalation path for material judgement calls. For a growing number of CFOs, the maths speaks louder than the relationship.

The evidence that this is already starting is not ambiguous. Fifty-five thousand AI-related job cuts were announced in 2025, the highest attributable to AI on record. Consumer sentiment collapsed to fifty-one on the Michigan index, nearly matching the all-time low. Only three to seven percent of AI productivity gains are translating into higher worker earnings. Eighty percent of NAACP job fair applicants held bachelor's degrees yet were lining up for low-wage roles.⁴⁰ The displacement is not a forecast. It is an observation.

Phase three is the collapse of the labour-income link, somewhere around 2028 to 2030. Professional services employment drops materially, and these are not factory workers. They are people earning eighty to two hundred thousand dollars a year, educated, mortgaged, and raising families. They live in suburbs, send their children to private schools, and drive the local economy with discretionary spending. When this cohort contracts, the effects ripple outward through restaurants, retail, real estate, and the entire ecosystem of a professional-class economy. Consumer spending contracts among the professional class, and the same pattern plays out simultaneously across accounting, law, consulting, and financial analysis. And it is not just domestic.

The BPO market, worth three hundred billion dollars and growing, faces its own reckoning. Vinod Khosla, co-founder of Sun Microsystems, has said that "IT and BPO services will disappear, almost certainly within the next five years."⁴¹ AI negates the cost advantage of cheaper offshore labour. If machines can perform tasks anywhere for the same cost, companies keep them close to home. Countries like India, the Philippines, and Kenya that depend heavily on BPO employment face disruption on a national scale. Amodei's cognitive breadth point means there is no adjacent profession, or adjacent country, to absorb the displacement, because AI is arriving everywhere at the same time.⁴²

Phase four is when good deflation turns bad. The deflationary mechanism plays out in full. Services get cheaper, which is good. But the income of the people who used to provide those services falls faster, which is bad. Supply rises, income falls, demand follows it down, and prices fall further. The labour-income-consumption cycle starts to fracture. This happened before, during the 1870s through the 1890s. But this time, the speed is compressed from decades to years. And the compression matters enormously. The nineteenth-century deflation played out over three decades, giving society time to develop new institutions like labour unions, antitrust law, the progressive income tax, and public education systems to absorb the shock. AI compresses the same economic transformation into three to five years. There is no time for institutions to evolve organically. The policy responses that took a generation last time need to arrive in a single electoral cycle. That has never happened.

Phase five is the recursive economy, where machines transact with machines. GDP grows but the gains accrue overwhelmingly to capital owners, the people and entities that own the compute, the models, and the infrastructure. Amodei suggested in his essay that AI could drive twenty-five percent annual GDP growth, and for India, he said, perhaps even higher.⁴² That sounds extraordinary. But if that GDP growth does not translate to household income, it is measuring machine activity, not human prosperity. This is the Engels' Pause at scale: a period where economic output surges but median living standards stagnate or decline. The last time something like this happened, during Britain's Industrial Revolution, it took sixty years and a world war to resolve the wealth concentration.⁴²

The Universal Basic Income (UBI) conversation has moved from theoretical to political. The UK Minister for Investment told the Financial Times the government is weighing UBI introduction. Elon Musk has proposed "universal high income" where work becomes a lifestyle choice. Altman has advocated it as essential. But the funding challenge is immense, since a permanent UBI could double the annual deficit, and the political battle over who pays for it is predicted to be "the defining political fight of the 2030s."⁴³ Tax systems designed for human income and corporate profit need fundamental redesign for an economy where machines generate most productivity. That redesign has not started.

Amodei lays out four features that make AI-driven displacement categorically different from every previous technological transition.⁴² The first is speed, measured in years rather than decades, which means retraining systems cannot keep pace. The second is cognitive breadth, because AI competes across the full range of human cognitive capability and leaves no safe zone to retrain into. The third is what he calls slicing by ability, where entry-level positions are destroyed first and hollow out the pipeline that trains the next generation. And the fourth is gap-filling, where AI completes entire workflows end to end rather than assisting with pieces.

Every previous technological revolution created new job categories that absorbed displaced workers. Textiles created factory managers. Cars created mechanics, road builders, insurance agents. The internet created web developers, social media managers, data analysts. But AI will be good at the new jobs too. The cognitive breadth is the killer: whatever new task categories emerge from this transition, AI can likely perform them as well or better. The escape valve that saved us in every previous transition, the creation of new work that only humans can do, may be sealed. And humanoids close the physical labour exit as well. Mind and body.⁴³

Displaced professionals are not displaced factory workers. They are educated, networked, politically engaged, and angry. Harris described the hiring freeze that is already under way: "Everyone in a hiring position, the first question they're asking themselves is, is this even a job? Can't we use AI for this? Can't somebody who's already working for us figure out how to use AI to do this thing?"⁴⁴ He called it a "Fall of Saigon moment" where the last people who can grasp the foot rails of the helicopter get out, and then the door closes. A generation that spent two hundred thousand dollars on degrees and a decade qualifying, told their career is obsolete within five years. They vote. They organise. They litigate. The political consequences of displacing the professional class at speed are qualitatively different from anything in the history of technological disruption.⁴⁵ The first Gilded Age gave us the Progressive Era, but it also gave us the Haymarket Affair, the Homestead Strike, and the assassination of a president.

What the Industry Owes the Next Generation

The professional bodies, including the AICPA, ICAEW, CPA Australia, CA ANZ, and ACCA, have a real opportunity here if they choose to take it. These organisations exist to steward the profession through change. This is the biggest change any of them will face. If they engage with what Suleyman, Amodei, and Altman are describing, not as a distant hypothetical but as a near-term planning input, there are concrete problems they could start working on now. These are the ones I would prioritise.

The bottom rung of the talent ladder is breaking. AI eliminates junior roles first, including the data entry, bank reconciliations, and basic tax prep that graduates cut their teeth on. But those roles are not just labour. They are the training pipeline. Every senior accountant, every partner, learned the profession by doing that work. It is how you develop the intuition that tells you something does not look right in a set of accounts, an intuition that comes from years of handling thousands of transactions and learning, slowly, what normal looks like. If the entry-level work is automated, how does the next generation learn?

The profession needs a fundamentally different model for developing talent, one that does not depend on years of repetitive execution as a prerequisite for judgement. AI-supervised apprenticeships, where juniors learn by reviewing and interrogating AI output rather than producing it from scratch. Accelerated qualification pathways that front-load advisory and interpretive skills. Simulation-based training environments that compress years of experience into months.⁴⁵ None of this exists yet. The industry bodies have not started building it. The clock is running.

The definition of "qualified" needs to evolve. The current qualification frameworks like CPA, CA, and ACCA were designed for a profession where the core competency was technical execution in debits, credits, tax code, and compliance. AI can do all of that. The competencies that matter in an AI-augmented world are different: the ability to interrogate AI output, to identify when the model is wrong, to exercise judgement in ambiguous situations, to communicate complex financial realities to non-financial stakeholders. A qualified accountant in 2030 needs to understand how large language models reason, where they hallucinate, why they fail at certain categories of problems, and what a confidence interval on a model's output actually means. They need to be comfortable auditing an AI's work, not just checking the numbers but understanding the process that produced them. The qualification needs to shift from "can you do the work" to "can you oversee the machine doing the work and know when it is wrong." That is a different skillset, and the current exam structure does not test for it.⁴⁶

The irony is that PwC is already creating a formal engineer-to-partner career track, a signal that the people who will run these firms in ten years may not be accountants at all. If the industry bodies do not redefine what "qualified" means, the market will do it for them.

The liability question is unresolved and urgent. Who is responsible when an AI makes an error in a tax return? The firm? The software provider? The partner who signed off? Current professional liability frameworks assume a human did the work and a human reviewed it. When AI does ninety-five percent of the work and a human spot-checks the output, the liability model is untested.⁴⁷ Every jurisdiction handles this differently, with US, UK, and Australian liability frameworks diverging significantly, and none of them were designed for this scenario. The industry bodies need to get ahead of this before the first major AI-related malpractice case sets the precedent by accident. Deloitte Australia has already had to partially refund fees after AI-introduced errors in a government audit.⁴⁸ That was the early warning.

There are ethical questions the profession has not yet confronted. When should a firm disclose to a client that their work was substantially completed by AI? Is there an obligation to? If an AI-native firm charges five hundred dollars for work that cost three dollars in compute, is that ethical? The profession has strong ethical frameworks around conflicts of interest, independence, and confidentiality, but nothing that addresses the transparency obligations around AI-assisted work. Clients are paying for outcomes, yes. But they are also paying based on an assumption about what is happening behind the scenes. That assumption is increasingly wrong.

There is a deeper question underneath it: when a client hires an accounting firm, what are they actually buying? If the answer is "accurate, timely, compliant financial information," then AI delivers that. If the answer is "the judgement and experience of a human professional," then the client has a right to know how much of that judgement is actually being exercised. The profession has never had to draw this line before, because until now, hiring a firm and hiring a human were the same thing.

And the profession risks splitting into two speeds. Large firms and AI-native entrants will move fast because they have the capital, the technical teams, and the strategic imperative to adopt at pace. Small practitioners, the sole operator doing tax returns for 200 clients or the two-partner firm with an office above a shopfront, may not have the capital, the technical literacy, or the inclination to adopt AI at the same pace. These are often the firms that serve the small businesses and individuals who most need affordable accounting. They are also the firms least equipped to navigate the transition. The profession risks bifurcating into AI-augmented firms operating at scale and margin, and traditional practitioners being gradually priced out.⁴⁹ The industry bodies have a role in preventing this from becoming a cliff by providing tooling, training, and transition support to the long tail of small firms. Otherwise the profession does not adapt. It bifurcates, and the smaller half slowly dies.

The pattern is not unique to accounting. Suleyman described the same dynamic in medicine: diagnostic AI that is "significantly more accurate and significantly cheaper with fewer interventions than any panel of doctors." The doctor's job shifts from figuring out the diagnosis to "administering the right care and providing emotional support."⁵⁰ That is the same restructuring, with the cognitive work automated, the human role redefined around oversight and relationship. Medicine, law, consulting, and financial services. Every knowledge profession is facing the same set of questions at the same time. The accounting profession has a chance to be among the first to answer them well. That would be worth something.

Even if the industry bodies engage fully with every one of these themes, the speed of AI capability improvement may outpace any institutional response. These are organisations that take years to update qualification frameworks. The AICPA's last major revision to the CPA exam took the better part of a decade from conception to implementation. AI capabilities are doubling in months. The gap between the speed of the problem and the speed of the response is itself a risk, and it may be the biggest one. This is not a criticism of the people running these bodies. It is a structural observation about the nature of institutional decision-making in a world where the ground is shifting faster than any committee can meet. The profession needs something closer to an emergency response than a strategic planning cycle. Whether it gets one is an open question.

It’s Good Stuff Which Is Also An Emergency

At the end of "Don't Look Up," everyone finally looks at the sky. The comet is there. It was always there. The argument was never about whether it was coming but about whether we would do anything before it arrived.

I am not writing this from the outside. Archie is in this, building the production workstreams, shipping to firms, watching the early adopters pull away in real time. I can see the gap opening. I have spent seventeen years in this industry. I helped build one of the tools that changed how firms operate. I have watched every wave of cloud, offshoring, advisory, and automation arrive, get absorbed, and leave the profession stronger.

I have never seen anything that moves this fast or cuts this deep.

And I am not saying the profession will disappear. I am saying it will be transformed so fundamentally that the people practising it in five years will be doing a different job under the same name. The question is whether the profession designs that transformation intentionally, or whether it happens to them.

What gives me some measure of hope is that the people at the centre of this, the ones who understand it best, are not pretending it will be easy. They have started proposing responses, and the responses are commensurate with the scale of the problem.

The Anthropic cofounders' pledge to donate eighty percent of their wealth⁵¹ is one example. Sam Harris's argument for public equity in AI companies is another:

"If you've created a company that obviates the need for human labour, we need a mechanism by which the wealth gets shared."⁵²

Ganguli argues for massively expanded public investment in academic AI research to keep the science open rather than locked behind corporate walls.⁵³ The political conversation around UBI is accelerating, with the UK Minister for Investment publicly weighing in and Altman framing intelligence becoming too cheap to meter as a feature, with UBI as a bridge.

But the gap between the speed of the technology and the speed of the policy response remains vast, and it is widening.

In six months: AI agents in production across the industry. Early movers accelerating. Monthly closes faster. Revenue per employee rising. The firms that caught the wind pulling away from the fleet.

In twelve months: the gap becomes permanent. Pricing models forced to change. AI-native firms with eighty to ninety percent cost advantages entering the market. PE consolidation accelerating. The training pipeline visibly hollowing out.

In thirty-six months: the profession fundamentally restructured. The bottom rungs of the talent ladder gone. The advisory moat visibly eroding. The software stack rebuilt from the ground up. AI-native firms operating at scale with economics that traditional firms cannot match. Quantum computing arriving to make everything faster and cheaper still.

The comfortable narrative that we will simply shift from execution to advisory and carry on more or less as before is exposed as a transitional phase rather than a destination. One projected timeline, drawn from multiple sources: in 2026, AI replaces support roles; in 2027, administrative and clerical work; in 2028, serious professional tasks at scale; by the early 2030s, much of white-collar work "may no longer be necessary." That timeline may prove optimistic or pessimistic by a year or two. The direction is not in doubt.

Harris called it "good stuff which is also an emergency."⁵⁴ Dario Amodei called it "technological adolescence." Both framings are correct. This is genuinely exciting, with the potential to make accounting more accessible, more accurate, more valuable, and more human in the best sense, and it is genuinely an emergency. The two things are not in tension. They are the same thing viewed from different angles. The accounting profession is the canary, a hundred-and-thirty-billion-dollar industry being reshaped in real time by forces that will reshape everything else next. We are not special. We are just first.

The comet is visible. The people who built the telescope are pointing at it. The accounting profession is not unique in facing this. Every knowledge profession is staring at the same sky. But accounting has something that most other professions do not: a tradition of rigorous, evidence-based analysis. A culture of looking at the numbers and drawing conclusions, even when the conclusions are uncomfortable. A professional obligation to report what is true rather than what is convenient. If any profession is equipped to look at this situation clearly and respond with the seriousness it demands, it should be ours.

If this is directionally correct, even 60% correct, what would you wish you had done eighteen months earlier?

Look up.

Notes

¹ Mustafa Suleyman, interview with the Financial Times, February 2026. Full transcript at youtube.com/watch?v=YTrBz6Z5c0E. See also Fortune, "Microsoft AI chief gives it 18 months," February 13, 2026.

² Dario Amodei, "Machines of Loving Grace," October 2024. Full essay at https://darioamodei.com/essay/machines-of-loving-grace. Approximately 15,000 words.

³ Sam Altman, Federal Reserve briefing and public statements, 2025-2026. See Fortune, "Intelligence too cheap to meter is AI's Next Frontier," July 23, 2025. Also blog.samaltman.com. In his June 2025 essay "The Gentle Singularity," he explicitly uses the phrase "intelligence too cheap to meter" to describe the future of AI. The term is a deliberate nod to the 1950s-era promise of nuclear power, which was famously predicted to make electricity "too cheap to meter." Altman argues that while that promise failed for nuclear energy, the "physics actually works" for AI.

⁴ According to the official White House briefing and multiple news reports, Michael Kratsios (the White House Director of the Office of Science and Technology Policy) stated during his remarks at the India AI Impact Summit on February 20, 2026: "As the Trump Administration has now said many times: We totally reject global governance of AI." The statement was part of a broader push by the U.S. delegation to pivot the international conversation from "AI Safety" (which the administration views as overly bureaucratic) to "AI Impact."

⁵ CNBC, "Sam Altman and Dario Amodei avoid holding hands at India AI summit," February 19, 2026. The image remains a significant symbol of the competition between the "safety-first" culture of Anthropic and the "rapid deployment" scale of OpenAI.

⁶ Suleyman (FT interview), Amodei ("Machines of Loving Grace"), Altman (public statements). The Modern Turing Test and MoltBook incident are from the Suleyman FT interview.

⁷ Professor Surya Ganguli, Stanford University talk on the unified science of intelligence. Full transcript at https://cmsa.fas.harvard.edu/event-old/8303/ (Harvard CMSA, October 2022). Primary research paper: "Beyond neural scaling laws: beating power law scaling via data pruning," by Ben Sorscher, Robert Geirhos, Shashank Shekhar, Surya Ganguli, and Ari S. Morcos, June 2022, presented at NeurIPS 2022. The key finding demonstrated that power law scaling can be bent to exponential scaling by choosing better training data. It is considered a "holy grail" insight because it suggests that the massive, unsustainable energy and data costs of modern AI training are not a physical necessity but a symptom of training on "bad" (random) data. Ganguli and his team proved, both mathematically (using statistical mechanics) and empirically, that pruning the training data intelligently bends the curve.

⁸ Suleyman, FT interview: "Over the last 15 years, there's been a 1 trillionfold increase in training compute."

⁹ CNBC Quantum Computing Panel, "The Next Tech Dawn," moderated by Andrew Ross Sorkin (occasionally reported as a "Davos-style" special). The panel examined why 2026 is being called the "Year of Quantum Realism" and featured Palasio Perauto (New Quantum), Montinaro (Phasecraft), and Brierley (Riverlane).

¹⁰ Ganguli Lab, Stanford. Demonstrated quantum Hopfield networks with "superior capacity, robustness and recall."

¹¹ Ganguli, Stanford talk. Human brain: ~20 watts. Modern AI training: ~10 million watts.

¹² Ashley Montinaro, CNBC Quantum Panel: "algorithms achieving millionfold reductions in cost."

¹³ Tesla Optimus humanoid roadmap. See humanoidroboticstechnology.com and helpforce.ai.

¹⁴ Figure AI at BMW Spartanburg. FinancialContent, January 21, 2026.

¹⁵ Unitree G1 pricing and China supply chain data from multiple industry sources.

¹⁶ Goldman Sachs humanoid robot market projections. standardbots.com/blog/humanoid-robot.

¹⁷ Sam Harris, reacting to Suleyman/FT interview, February 2026. youtube.com/watch?v=2rldvywEU8o.

¹⁸ NBER, "Good versus Bad Deflation: Lessons from the Gold Standard Era." Bernanke (2002) on supply-driven deflation. Hudson Institute, "A Tale of Two 'Deflationary' Booms."

¹⁹ J.P. Morgan Private Bank, "How AI can boost productivity and jump-start growth." The "only once in fifty years" observation.

²⁰ MIT Acemoglu on AI labour cost reductions. The deflationary mechanism draws on BIS, IMF, and Mercatus Center research. See also Monetisation Matters, "The Great Deflation."

²¹ Amodei, "Adolescence of Technology." Wealth concentration comparison to Standard Oil and "potentially unhealthy ties" between tech companies and government. Summit naming evolution (Safety → Action → Impact) is the author's observation.

²² International AI Safety Report, Second Edition, February 3, 2026. Led by Yoshua Bengio, 100+ experts from 30+ countries. internationalaisafetyreport.org.

²³ CPA Practice Advisor, "Startup Accrual officially launches with $75M in funding," February 5, 2026.

²⁴ Accounting Today, "Pilot launches fully autonomous AI bookkeeper," 2026.

²⁵ Karbon, karbonhq.com/resources/state-of-ai-accounting-2026.

²⁶ Big Four investment figures from Bloomberg Tax, Going Concern, and TheStreet, 2025-2026. PwC staff reductions from Bloomberg Tax.

²⁷ Going Concern, "Ex-PwC Partner Says AI Is Coming for Big 4 Jobs in a Big Way." Also Accounting Today, "AI Thought Leaders Survey 2026."

²⁸ CPA Practice Advisor, "AI Is Killing the Billable Hour," August 26, 2025. Also Fast Company, "5 Predictions for AI and Accounting in 2026."

²⁹ Amodei, "Adolescence of Technology." The "cognitive breadth" argument.

³⁰ Amodei, "Adolescence of Technology." "Slicing by ability."

³¹ CPA exam and workforce statistics from Ramp, CPA Journal, and AICPA data. 300,000+ departed since 2020.

³² Amodei and Altman on the one-person billion-dollar company. See Horasis, Sifted, and TechBullion coverage.

³³ Solopreneur data from TechBullion and Bergenstone, 2026. The "bots serving bots" concept draws on MIT Sloan research on agentic AI and Gartner predictions.

³⁴ Regulatory landscape from PCAOB, ICAEW/ACCA, CPA Australia/CA ANZ, and ASIC frameworks. Journal of Accountancy, "5 imperatives from the PCAOB chair," January 2026.

³⁵ SaaS stock declines as of February 2026 from public market data.

³⁶ Martha platform capabilities based on Archie's internal architecture and production experience.

³⁷ CPA Practice Advisor on the billable hour. Accounting Today, "The Gen AI-driven pricing conundrum."

³⁸ CPA Trendlines, "PE Deal Tracker 2020-2026," January 14, 2026.

³⁹ Advisory erosion timeline is the author's projection based on observed AI capability trajectories.

⁴⁰ AI job cut data from InvestorPlace. Michigan consumer sentiment index. AI productivity gains distribution from Economic Policy Institute. NAACP job fair data from Fortune.

⁴¹ Vinod Khosla on BPO disappearance. See Andreessen Horowitz, "Unbundling the BPO." Also Ghost Research on India's BPO workforce.

⁴² Amodei on cognitive breadth and simultaneous displacement. BusinessToday on Amodei's 25% GDP growth prediction for India.

⁴³ UBI policy debate from Fortune (UK minister), The Hill, Stanford HAI, and AllWork. "Defining political fight of the 2030s" from multiple policy analysts.

⁴⁴ Harris, February 2026: "Everyone in a hiring position... is this even a job?" and "Fall of Saigon moment."

⁴⁵ Amodei, "Adolescence of Technology." Four features of unprecedented displacement. Political analysis draws on CFR, Goldman Sachs, Brookings, and Yale Budget Lab. AI-supervised apprenticeship concepts are the author's proposals.

⁴⁶ Qualification framework critique is the author's analysis based on seventeen years of industry experience.

⁴⁷ Professional liability in AI-assisted work is largely untested across all major jurisdictions.

⁴⁸ Deloitte Australia partial fee refund after AI-introduced audit errors. Reported in Australian financial press, 2025.

⁴⁹ Two-speed profession risk is the author's analysis.

⁵⁰ Suleyman, FT interview, on medical AI: "significantly more accurate and significantly cheaper."

⁵¹ Amodei, "Adolescence of Technology." Anthropic cofounder wealth pledges and employee share donations.

⁵² Harris, February 2026: "If you've created a company that obviates the need for human labour..."

⁵³ Ganguli, Stanford talk: "This pursuit must be done out in the open and shared with the world."

⁵⁴ Harris, February 2026: "This is all good stuff which is also an emergency."