Big Four Firms Race To Deliver AI

The world’s largest accounting firms are developing a new type of audit for artificial intelligence. Deloitte, EY and PwC are building frameworks to verify that AI tools work safely, fairly and as intended, positioning themselves to lead an emerging market in digital assurance.

Auditing the algorithms

Each firm is preparing to launch dedicated “AI assurance” services that will test how automated systems perform in real-world settings. The aim is to provide companies, investors and regulators with independent proof that AI models operate correctly and comply with future laws.

The need is clear. As AI systems are used to diagnose cancer, approve loans and make real-time decisions in autonomous vehicles, the potential for error has increased. Richard Tedder, audit partner at Deloitte, said this new layer of assurance is becoming essential. “Companies will want assurance over the AI they use to manage critical functions,” he said. “Consumers will want the same if they rely on AI for their health or finances.”

Turning governance into business

For the Big Four, AI assurance could be the next major source of growth. It mirrors their expansion into environmental, social and governance auditing over the past decade, when companies sought external validation of their sustainability data. That boom created a multibillion-pound market.

PwC UK’s chief technology officer for audit, Marc Bena, said the firm would launch its service “soon”. PwC is already testing client tools, including chatbots, to assess accuracy and identify bias. Deloitte and EY are taking similar steps, hoping to translate decades of experience in financial auditing into this new, data-driven field.

A cautious approach

EY’s UK technology risk leader, Pragasen Morgan, said full certification of AI systems remains some way off. “Because models learn and adapt as they ingest new data, they may not react the same way in a given scenario,” he said. “Complete assurance is not yet possible, and none of the Big Four are ready for that level of responsibility.”

The main challenge is liability. If an audit firm certifies a system that later causes harm, the legal consequences could be severe. Most firms therefore plan to start with advisory reviews before progressing to formal attestations.

Fragmented standards

The UK already has hundreds of providers offering forms of AI assurance, according to government research. Most are technology developers reviewing their own products, which raises concerns about independence. There is no consistent framework defining what “assurance” means in this context, so quality varies from light-touch compliance checks to detailed model testing.

The Institute of Chartered Accountants in England and Wales recently held its first conference on AI auditing, reflecting how quickly interest is growing. The Big Four are working to shape this field before specialist start-ups establish rival standards.

Early demand

Government studies show the strongest appetite for AI assurance in sectors such as financial services, life sciences and pharmaceuticals, where mistakes can be costly. Some insurers have begun offering cover for losses caused by faulty algorithms, adding further pressure for companies to demonstrate that their systems are tested independently.

Tedder said boards are increasingly aware of the reputational and financial risks of unverified AI. “AI is now integral to business operations. The trust placed in these systems must be matched by credible oversight,” he said.

Defining the rules

If Deloitte, EY and PwC can create credible audit methodologies, they could influence how regulators define AI accountability. That would give them a similar role to the one they hold in financial reporting, where their standards underpin corporate disclosure.

Industry observers believe the first firms to produce consistent, transparent approaches will set the tone for the entire sector. It is a rare moment when auditors have a chance to lead technological regulation rather than follow it.

Technical limits

The central obstacle is that AI systems are dynamic. Once trained, they continue to evolve as they process new data. An algorithm that performs well during an audit may behave differently months later. This volatility makes assurance both more necessary and more complex.

Morgan said the task is to balance credibility with realism. “AI assurance is about building confidence, not guaranteeing perfection,” he said. “Auditors and developers will need to work together as the technology changes.”

Lessons from ESG

The parallels with ESG assurance are instructive. That market proved lucrative but also exposed inconsistencies and conflicts of interest. AI could follow the same path unless standards are clear and oversight is independent. Professional bodies and regulators are therefore urging collaboration to ensure rigour from the outset.

The Financial Reporting Council has begun early work on potential AI audit principles in the UK. The European Commission is considering similar measures within its AI Act, while US agencies explore model accountability frameworks.

A race for trust

As companies integrate AI into core decision-making, the need for credible oversight will intensify. The Big Four see an opportunity to turn their experience in financial auditing into a foundation for technological governance.

If they succeed, auditing algorithms could become as fundamental to business accountability as financial reporting itself. If they fail, smaller specialists may define the rules instead.

Either way, the next test for the global audit profession is not about counting assets or checking accounts. It is about judging machines, and ensuring that those who design them remain accountable.