Artificial Intelligence (AI) is rapidly reshaping global capital markets, transforming how securities are traded, supervised and managed.

An International Organization of Securities Commissions (IOSCO) Consultation Report defines the term as follows:
"An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment."

How AI is Transforming Capital Markets

Globally, AI is being used in capital markets for three overarching purposes:

  • to improve efficiency;

  • to generate revenue; and

  • to manage risk.

In developed markets, AI is now embedded across the capital markets ecosystem:

  • Market Surveillance and Enforcement: Detecting market abuse, insider trading and other fraudulent activity, and supporting associated enforcement actions.

  • Trading, Portfolio Management and Advisory Services: Algorithmic trading, robo-advisory services, investment research, and data-driven portfolio/asset management.

  • Risk Management and Compliance: Enhanced risk management, compliance, and Anti-Money Laundering/Countering the Financing of Terrorism (AML/CFT) processes.

  • Operational Efficiency: Improved efficiency in trading, settlements, issuer disclosures and internal operations.
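As a concrete illustration of the surveillance use case above, the statistical core of an anomaly screen can be sketched in a few lines. This is a toy z-score filter on trading volumes; the function name and threshold are illustrative assumptions, and production surveillance systems use far richer features and models:

```python
from statistics import mean, stdev

def flag_unusual_volume(volumes, z_cut=3.0):
    """Return indices of days whose trading volume deviates sharply
    from the series mean (a toy stand-in for AI-assisted surveillance)."""
    mu, sigma = mean(volumes), stdev(volumes)
    return [i for i, v in enumerate(volumes) if abs(v - mu) > z_cut * sigma]

# Example: 30 ordinary trading days followed by one extreme spike.
volumes = [100 + (i % 5) for i in range(30)] + [10_000]
flagged = flag_unusual_volume(volumes)  # only the spike on the last day is flagged
```

In practice such a screen would be one feature among many feeding a trained model, but the principle is the same: quantify "normal" behaviour from historical data and surface deviations for human review.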

Associated Risk Issues and Challenges

Notwithstanding the potential benefits of AI adoption in capital markets, there are corresponding risks that can have a significant impact, particularly for smaller capital markets like those in the Caribbean. Some of these risks include:

Cyberattacks and Model Exploitation Risk - AI systems can introduce new cybersecurity threats and exacerbate existing ones, with the challenges for financial firms compounded by resource constraints. AI systems can be targeted with data poisoning, adversarial attacks or model theft, any of which can result in operational disruption and data breaches.

Concentration, Outsourcing and Third-Party Dependency - Concentration risks relating to the use of AI technologies can arise across several dimensions: technological infrastructure, data aggregation, and model provision. Reliance on a small number of technology infrastructure providers, such as cloud service providers, can create concentration risks in technical provision and associated services, while widespread reliance on similar AI models and third-party providers can amplify systemic shocks.

Governance, Accountability and Model Risk - Risks include a lack of accountability, regulatory non-compliance, insufficient oversight, talent scarcity and over-reliance on technology for decision-making. Providers of financial products and services could attempt to disclaim liability for investor or market harm resulting from the use of AI systems, or could attempt to shift responsibility to others in the AI supply chain. Depending on the facts and circumstances, enforcement challenges could also arise where AI systems are used in connection with violations of law: an AI system's complexity can make it difficult to identify and hold responsible persons accountable, and to gather and present evidence.

Market Integrity and Model Risk (manipulation, herding, feedback loops) - If the use of common models and datasets for trading-related applications were to become widespread, systemic risk could increase because large numbers of market participants would be prompted to make the same decisions at the same time. AI-driven strategies can amplify volatility, create correlated behaviour, and enable market abuse, as AI models may be poorly designed, insufficiently tested and/or trained on biased, incomplete or flawed data. Moreover, deepfakes and AI-generated misinformation can influence market prices.
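The herding dynamic described above can be sketched with a toy, deterministic simulation (illustrative only; the agents, trigger thresholds and price-impact parameter are assumptions, not a model of any real market). When every agent trades on the same momentum trigger, a single shock keeps feeding back into prices; heterogeneous triggers damp it out:

```python
def simulate(thresholds, shock=0.01, steps=20, impact=0.001):
    """Toy price path: each agent buys when the last return exceeds
    its trigger, and each buy pushes the next return up by `impact`."""
    ret = shock          # initial exogenous shock to returns
    price = 100.0
    path = [price]
    for _ in range(steps):
        price *= (1 + ret)
        path.append(price)
        n_buyers = sum(ret > t for t in thresholds)
        ret = n_buyers * impact   # feedback: buying pressure sets next return
    return path

# Homogeneous models: all 10 agents share one trigger -> self-sustaining rally.
shared = simulate([0.002] * 10)
# Heterogeneous triggers: successively fewer agents fire -> the shock dies out.
diverse = simulate([0.002 + 0.002 * i for i in range(10)])
```

With shared triggers the 1% shock reproduces itself every step (all ten agents keep buying), while with diverse triggers participation shrinks each round and the price flattens within a few steps, which is the damping effect that model diversity provides.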

Mitigating and Managing Risks

Market participants should incorporate the following principles when adopting AI applications, to manage, mitigate and govern the associated risks:

  • Transparency: AI systems should be understandable in terms of how they operate. Users and clients should receive accurate and complete disclosure around the use of AI in connection with the provision of financial products and services.

  • Reliability, Robustness, and Resilience: AI systems should perform consistently, reliably, and as intended over time.

  • Investor and Market Protection: AI systems used in the financial sector are subject to applicable investor and market protection frameworks.

  • Fairness: AI should not be used in a way that results in unfair bias or discrimination.

  • Security, Safety, and Privacy: There should be adequate measures around data quality and provenance, privacy, and cybersecurity.

  • Accountability: There should be a clear assignment of roles and responsibilities for AI usage by financial service providers, including for risk management and governance of AI systems, and for the impact and errors of AI systems inside and outside of the firm.

  • Risk Management and Governance: There should be an effective mechanism in place to establish a strategy for AI, provide appropriate training, and oversee the development, implementation, use, and monitoring of AI use cases. This often includes risk monitoring and management, including model, data, and third-party risks.

  • Human Oversight: AI systems should be used as a tool to augment, and not replace, human decision-making and judgment.

Relevance of AI to the Capital Market in the Eastern Caribbean

While much of the early adoption has taken place in the advanced economies, the implications for emerging and smaller markets are arguably even more profound.

For the capital markets in the Caribbean, characterised by small issuer and investor bases, limited liquidity and constrained regulatory resources, AI presents a unique opportunity to break down structural barriers, improve market efficiency, and increase investor participation.

AI should be regarded not simply as a technological upgrade but as a strategic market-development tool for transforming the Eastern Caribbean Securities Market (ECSM). Three major potential impacts for the Eastern Caribbean include:

Enhanced Regulatory Capacity: AI-enabled supervisory technology (SupTech) and regulatory technology (RegTech) tools allow regulators to adopt more risk-based, data-driven supervision through automation. This gives regulators the flexibility to reassign staff or restructure for greater overall efficiency.

Promoting Financial Inclusion: The introduction of AI-driven digital investment platforms can lower entry barriers, especially for retail investors and small to medium-sized issuers, thereby expanding overall market participation.

Supporting Regional/Cross-border Integration: Shared AI tools and harmonised data standards can strengthen cross-border surveillance, support cross-listings, and create opportunities for wider investor participation and capital raising across markets.

Looking Ahead

The rules, regulations and standards set by capital market regulators will shape responsible AI adoption. Regulators must recognise that consistent and effective regulation will require ongoing cross-border and cross-sector collaboration, through bodies such as IOSCO.

The Eastern Caribbean Securities Regulatory Commission (ECSRC) will continue to track and monitor AI regulatory developments regionally and globally, as it executes its mandate to promote investor protection, market stability and development.

Strategic considerations include:

  • Developing AI-aware regulatory guidance: This should include expectations around governance, model risk management, explainability and accountability standards.

  • Investing in shared regional SupTech and RegTech solutions: Securities and other financial regulators should consider collaborating on the development of AI surveillance and analytical tools.

  • Strengthening market data infrastructure: Improve issuer and licensing disclosure standards and digital reporting frameworks to support AI adoption.

  • Leveraging Regulatory Sandboxes: Allow controlled experimentation with AI-driven products and services through sandbox pilots, while managing systemic risk.

  • Building Skills and Institutional Capacity: Enhance internal capacities in data analytics, AI, AI governance, and digital supervision.

AI has the potential to modernise and boost the ECSM and other capital markets. However, to maintain and cultivate safe, efficient, resilient and transparent markets, regulation, governance and oversight by the authorities are critical. Rules and regulations must ensure that AI is used responsibly, as a tool for inclusion, sustainable market growth and market integrity. Regulations should also aim to explicitly prohibit the use of AI to infiltrate and exploit capital market vulnerabilities.