
“Raising the Bar on Cybersecurity in Mauritius to Tackle AI-powered Incidents and Enhance Integrity in Financial Services”

By Neekhil Bhowoniah | Consultant at the World Bank Group for Mauritius and Seychelles

Having closely collaborated with Moody's Analytics in 2017/2018 in Edinburgh, Scotland, on a research project entitled 'Artificial Intelligence, Cybersecurity and Regulation: The Case of UK Financial Markets', I can say that cyber risk solutions are perceived as strong enabling factors in improving a country's credit rating and resilience, in terms of the safety and soundness of financial institutions. They go even further, acting as a crucial catalyst for economic development and driving overall productivity.

A quick update: the National Budget 2025/26, themed “From Abyss to Prosperity: Rebuilding the Bridge to the Future” and delivered on 05 June 2025, announced the setting up of a Cyber Security Operation Centre in Mauritius, just weeks after the launch of the National Cyber Drill 2025 in April 2025. This signals Mauritius' proactive approach to building trust in cyber infrastructure, accurately anticipating risks and threat activity across an expanding attack surface, and bolstering digital resilience. As per official estimates in the Public Sector Investment Programme, a total of Rs 16,200,000 will be allocated for the period 2025/2026 to the acquisition of equipment and software for the Computer Emergency Response Team of Mauritius (CERT-MU) and the IT Security Unit (ITSU).

Research findings of the World Bank (2024) indicate that cyber incidents are increasing at an alarming rate: 37% annually in upper-middle-income countries and 21% worldwide. In response, between 2014 and 2024 the World Bank provided USD 250 million of financing and USD 20 million of trust-funded grants to help build the foundations of cyber resilience in 64 countries worldwide. International Development Association (IDA) and International Bank for Reconstruction and Development (IBRD) financing enabled the establishment of incident response capabilities through national and sectoral Computer Security Incident Response Teams (CSIRTs), the equivalent of firefighters in the digital realm.

Should financial markets in Mauritius be worried about cyber incidents? In short, yes. Recently, the Minister of Information Technology, Communication and Innovation revealed that over 5,000 cyber-related incidents were reported in 2024, and some 1,000 already in 2025, in the form of hacking, phishing, sextortion, and malware attacks. This marks a noticeable shift from traditional cyberattacks to threats on social media platforms.

 

Although Artificial Intelligence (AI) is in its infancy, it is no secret that it is increasingly becoming the most challenging source of systemic risk for financial markets and associated infrastructures. As per a survey on cyber risk conducted by the American International Group (AIG) in 2024, the financial services industry occupies the largest share relative to other sectors (over 25%), with an overwhelming majority of respondents claiming that financial institutions will experience critical systemic impacts from AI systems in the coming years. The interconnected nature of the modern financial system and the presence of AI 'black markets' enable hackers to analyse human behaviours and develop automated attack code at lower cost. As evidence, in 2022/2023 the financial sector was attacked 65% more often than any other industry, representing a 29% increase (from 1,310 attacks in 2018/2019 to 1,684 attacks in 2020/2021) and resulting in over 200 million records being breached.

 

The breakthrough of AI has now transformed the financial world into a leading target for large-scale cyberattacks.

 

The central issue at the nexus of AI systems and cybersecurity is the capacity of algorithms to learn about their environments and change the overall strategic landscape of cyberspace by building up control over data and entire systems. If left unchecked, this enables criminals, whether members of hacking groups or state-sponsored actors, to tap into these smart technologies to automate more numerous and larger-scale attacks with systemic repercussions, such as attacks on financial networks and infrastructures, corruption of financial data, and loss of confidence.

 

Indeed, the motive of cybercriminals has shifted away from purely financial gain toward massively threatening investor confidence in markets and inflicting significant material costs on firms: liquidity dislocation, credit losses, threats to the reliability of clearing and settlement arrangements for funds and financial instruments, threats to data integrity, and loss of consumer confidence, mostly in the form of bank runs.

From this perspective, the threat of AI-powered attacks within financial markets is no longer viewed as an operational or simply an Information Technology (IT) risk. It has expanded into a system-wide risk that is beginning to pose a greater danger to overall financial stability, accompanied by the risk of contagion across banking and financial services. Moreover, firms need to understand that they are no longer just at risk from data theft; they are also susceptible to data manipulation from insider breaches, specifically an insider modifying corporate resources or information.

Regardless of the perpetrators' motives or the tools used, AI-propelled cyberattacks will inevitably compromise market liquidity flows and erode consumer confidence. Given that the stock, commodities, and derivatives markets are predominantly operated electronically (in “machine-time”), firms lack complete transparency into what is happening in their networks and when, making such anomalies difficult, or even near impossible, to spot. Because most risk management systems focus on idiosyncratic risk, it becomes impractical, if not impossible, for companies to identify and counter such systemic cyber risks. While rare, early signs of AI-enhanced hacking have already been detected in Mauritius, and it is highly likely to become a new source of danger that can play out in minutes and hours.

How should Mauritius enhance transparency and integrity within the cybercrime landscape? The answer lies in a simple formula with three main ingredients: (i) Visibility, (ii) Deep Observability, and (iii) a Zero Trust approach.

 


Cybersecurity = Visibility × Deep Observability × Zero Trust

The multiplicative form matters: if any one ingredient is absent, overall cybersecurity collapses toward zero.

 

  1. Institutions must first prioritise real-time visibility: continuous monitoring and visualisation of all communication on their networks, including all data in motion within defined parameters. The aim is to understand user behaviour, detect irregularities in security protocols, verify policies, and fix configuration issues.
  2. Next, deep observability requires leveraging a combination of data and insights collected via existing security and visualisation tools and network telemetry, to uncover blind spots and hidden vulnerabilities across all endpoints, servers, applications, and user devices on the network.
  3. The zero-trust approach is based on the principle of “Never Trust, Always Verify”: access to applications and data is denied by default, with users going through a continuous process of risk-based verification, for instance two-factor authentication (a minimal sketch of this logic follows this list). It is important to note that visibility and deep observability clear the path for a successful journey towards zero trust.
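To make the deny-by-default idea concrete, here is a minimal sketch in Python of what a risk-based authorisation check might look like. Everything in it, the names (AccessRequest, risk_score, authorise), the signals, and the thresholds, is a hypothetical assumption for illustration, not a reference implementation of any real zero-trust product.

```python
from dataclasses import dataclass

# Hypothetical sketch of "Never Trust, Always Verify":
# every request starts from DENY and must earn access through
# risk-based checks; elevated risk triggers two-factor authentication.

@dataclass
class AccessRequest:
    user_id: str
    device_trusted: bool       # device posture is known and compliant
    network_segment: str       # where the request originates
    resource_sensitivity: int  # 1 (low) to 3 (high)

def risk_score(req: AccessRequest) -> int:
    """Combine simple signals into a risk score (higher = riskier)."""
    score = req.resource_sensitivity
    if not req.device_trusted:
        score += 2             # unknown device posture raises risk
    if req.network_segment == "external":
        score += 1             # location is a signal, never a guarantee
    return score

def authorise(req: AccessRequest, passed_2fa: bool) -> bool:
    """Deny by default; allow only when verification matches the risk."""
    score = risk_score(req)
    if score <= 1:
        return True            # low risk: the authenticated session suffices
    if score <= 4:
        return passed_2fa      # elevated risk: require two-factor authentication
    return False               # highest risk: deny outright and flag for review

# Example: an untrusted external device requesting a sensitive resource
request = AccessRequest("analyst42", device_trusted=False,
                        network_segment="external", resource_sensitivity=3)
print(authorise(request, passed_2fa=True))  # False: risk too high even with 2FA
```

The design point the sketch tries to capture is that trust is earned per request rather than inherited from a network location: the same user may be allowed, challenged for a second factor, or denied outright depending on the risk signals accompanying each access attempt.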

Transparency and integrity should not be viewed as a compliance topic or a regulatory burden. Why? In part, because viewing the situation through this lens can be both conceptually and programmatically difficult. Indeed, it’s time for organisations to live and breathe those values.

What is currently on the regulatory horizon for AI, at this stage of development or at least in the near future? And are existing laws sufficient to address the issues posed by AI in financial markets? Two main points are often made against AI policy: first, that regulating AI now seems premature, and that rules risk being overly rigid and slow to adapt to new realities; second, that misguided policy could hamper the development of AI. Undeniably, AI literacy is at a pitifully low level across the financial services sector, and regulators often lack the diligence and knowledge needed to understand these cognitive technologies. This shortage of AI professionals, and the unsuitability of regulators' AI expertise, presents a dilemma with two horns: (i) gaps in technical literacy at the senior level can lead to regulatory malpractice that is detrimental to the integrity of financial markets, and (ii) overconfidence in predicting the future of AI, such as resorting to anticipatory regulation, may heighten the risk of biased decisions.

In fact, regulatory oversight often struggles to keep pace with emerging technologies in their early phases, and is likely to impose a compliance burden on financial markets. This observation particularly applies to the General Data Protection Regulation (GDPR), which poses a number of challenges for current AI applications in the banking industry. While Mauritius is still aligning itself with the 'GDPR 1.0' implemented in May 2018, UK financial markets are already moving towards a 'GDPR 2.0' to lower compliance costs and enable the full participation of Small and Medium Enterprises (SMEs) in this evolving technological landscape. Additionally, while the EU's new data privacy requirements and its AI Act, which attracted over 300 feedback submissions, place more transparency obligations on financial institutions, they also signal a significant shift in regulatory enforcement that is poised to create bottlenecks for widespread AI advancement, impacting market fairness by slowing the deployment of beneficial innovations within financial services.

 

Perhaps more important is the lack of consistency and accountability, which could make it even more difficult to justify AI regulation. In practice, AI-equipped operating systems are highly opaque, appearing to potential users as the proverbial 'black box', rendering the algorithms rather secretive and placing them outside regulators' jurisdiction. Policymakers thus fail to fully understand the boundaries and promises of these black-box systems. Moreover, given that users are unable to follow the steps, or fully understand the end results or conclusions, of these 'black box' technologies, it is almost impossible to bring a case against the AI programme developers. Here lies the legal challenge: who (or what) should be held liable for inadequate inputs and inexplicable outcomes? And would regulators accept “the computer told me to do it” as the rationale for a financial action or decision performed by algorithms?

In reality, it is unlikely that financial institutions will go out of their way to create algorithmic accountability and transparency in these black-box technologies, because there is little incentive to do so and it is technically difficult to trace a system's output. Consequently, rather than shouldering the compliance burden of determining which services and functions will be subject to regulatory changes or new laws, many early-stage firms may opt to operate in a grey area. Regulation thus also challenges existing theories of legal responsibility and liability. Many of these regulatory concerns spring from the fact that AI fails to satisfy the fiduciary rule and the duty of care, which can result in severe legal uncertainties.

 

Seek flexibility over strict parenting. It is essential to create a playground that allows different actors to collaborate.

This does not mean that there should be no role for, or intervention from, the government. In fact, before resorting to top-down proposals that proscriptively restrict AI through a harms-based approach, out of an abundance of caution over the perceived economic, safety, and systemic risks AI seems to pose, policymakers need to communicate their regulatory expectations to financial stakeholders well in advance, clearly specifying the 'red lines'.
