UK's Ministry Of Defence Pins Hopes On AI To Stop The Next Massive Email Blunder

The UK's Ministry of Defence is the latest to slap its hand on the big red AI button as it seeks solutions to prevent data leaks.

Aussie startup Castlepoint Systems announced this morning that the MoD had selected it to provide what it calls AI-powered data control.

The security shop promises that its explainable AI tech automates control of complex datasets and reduces the likelihood of human error leading to serious leaks.

Rachael Greaves, CEO at Castlepoint Systems, said: "The MoD faces a complex challenge in managing vast and sensitive datasets in the knowledge that even a single case of data leak or loss can be catastrophic. I'm pleased that after undertaking a very thorough global search, Castlepoint was selected by the MoD as the best solution to solve this problem.

"Castlepoint, with explainable AI and true autoclassification at its core, can increase labeling accuracy and coverage without disrupting the essential work of MoD personnel. We are a trusted technology provider for public-sector organisations and enterprises in Australia and New Zealand, and having now established our global headquarters in London, we look forward to delivering our proven solutions to many more organisations in the UK."

The company claims its technology is already used by two-thirds of government departments in Australia, and has contracts in New Zealand too.
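For the unfamiliar, the "explainable AI and true autoclassification" Greaves describes broadly means labeling records automatically while keeping the reasoning visible to a human reviewer. The toy Python sketch below is purely illustrative and not Castlepoint's implementation; the labels and keyword rules are invented, and it simply shows a classifier that returns its evidence alongside each label.

    import re

    # Rules are checked from most to least sensitive; labels and patterns are invented.
    RULES = {
        "SECRET": [r"\bsource identity\b", r"\bagent handling\b"],
        "OFFICIAL-SENSITIVE": [r"\bpersonnel\b", r"\brelocation\b"],
    }

    def classify(text: str) -> tuple[str, list[str]]:
        """Return (label, matched_patterns) so a reviewer can see why a label was applied."""
        for label, patterns in RULES.items():
            hits = [p for p in patterns if re.search(p, text, re.IGNORECASE)]
            if hits:
                return label, hits
        return "OFFICIAL", []  # default label when nothing matches

    label, evidence = classify("Relocation list for local personnel attached.")
    print(label, evidence)  # OFFICIAL-SENSITIVE ['\\bpersonnel\\b', '\\brelocation\\b']

A real system would draw on far more signals than a keyword list, but returning traceable evidence alongside the label is the general idea behind "explainable" classification.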

Afghan data leak

The deal follows fresh details emerging about the MoD's infamous 2022 data leak, in which it mistakenly exposed the identities of nearly 19,000 Afghans who worked with British forces during the conflict with the Taliban.

In what is considered one of the most damaging data breaches in UK history, if not the most damaging, the individuals' names were exposed thanks to a classic CC-not-BCC email blunder by the UK's Afghan Relocations and Assistance Policy (ARAP) unit.
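For context, the CC/BCC distinction at the heart of that blunder is visible in a few lines of standard-library Python. The sketch below uses hypothetical addresses and a hypothetical mail server rather than anything from the MoD's systems; it only shows that addresses placed in Cc are visible to every recipient, while smtplib's send_message() uses Bcc for delivery but does not transmit the header, so those addresses stay hidden.

    import smtplib
    from email.message import EmailMessage

    # Hypothetical addresses and mail server; a sketch of the Cc/Bcc distinction, not MoD code.
    recipients = ["applicant-one@example.org", "applicant-two@example.org"]

    msg = EmailMessage()
    msg["Subject"] = "Update"
    msg["From"] = "caseworker@example.org"
    msg["To"] = "caseworker@example.org"    # only the sender appears in visible headers
    # msg["Cc"] = ", ".join(recipients)     # <- the blunder: every address visible to all recipients
    msg["Bcc"] = ", ".join(recipients)      # <- recipients get the mail, addresses stay hidden
    msg.set_content("Your application is being processed.")

    with smtplib.SMTP("mail.example.org") as smtp:
        # send_message() uses the Bcc header for delivery but does not transmit it,
        # so no recipient sees the others' addresses.
        smtp.send_message(msg)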

The Taliban vowed to punish anyone who helped the British during the war, meaning the exposure of that data potentially put thousands of lives at risk.

After a super-injunction was lifted in July, it was revealed that in addition to the Afghans' identities, those of around 100 British officials, including SAS troops and MI6 spies, were also exposed.

The BBC also reported that one individual obtained a copy of the exposed identities and posted a snippet of it on Facebook, threatening to leak more.

The individual, who has not been named, was reportedly one of the Afghans who had their resettlement application rejected by the UK. Their application was reconsidered on an expedited basis following the threats, and it is understood they are now in the UK.

Is AI the answer?

The buzz around AI and its potential applications for security defenders has been building for years, and although many organizations are deploying the latest tech, few seem to know how to configure it securely.

That notion was on display at the UK National Cyber Security Centre's (NCSC) annual CYBERUK conference earlier this year. Peter Garraghan, CEO at Mindgard and professor of distributed systems at Lancaster University, asked a room full of infosec pros to raise their hands if they fully understood the security risks associated with AI system controls. Not a single hand went up among the roughly 200 attendees.

"So everyone's using generative AI, but no one has a grasp of how secure it is in the system," Garraghan replied. "The cat's out of the bag."

The NCSC published a report shortly before Garraghan's session, warning organizations against rushing AI deployments due to the increased attack surface these systems present.

At the same time, it said that failing to AI-ify cyber defenses could leave organizations significantly more vulnerable to evolving, AI-empowered security threats by 2027.

An NCSC spokesperson told The Register at the time: "Organizations and systems that do not keep pace with AI-enabled threats risk becoming points of further fragility within supply chains, due to their increased potential exposure to vulnerabilities and subsequent exploitation. This will intensify the overall threat to the UK's digital infrastructure and supply chains across the economy.

"The NCSC's supply chain guidance is designed to help organizations gain effective control and oversight over their supply chains. We encourage organizations to use this resource to better understand and manage the risks.

"This is also why market incentives need to exist, to drive up resilience at scale, at an increased velocity." ®
