Another month, another publication on Artificial Intelligence (AI) by a regulator. This time it is from De Nederlandsche Bank (DNB). The principles advocated by DNB carry a catchy acronym, SAFEST: soundness, accountability, fairness, ethics, skills and transparency. They are close to the principles advocated by the Monetary Authority of Singapore (MAS) in a publication earlier in the year, which also had a catchy acronym, FEAT: Fairness, Ethics, Accountability and Transparency. The principles common to these two publications — Accountability, Fairness, Ethics and Transparency — essentially form the core of regulatory expectations, as they come up wherever regulators express their views on the use of AI and Machine Learning (ML) in financial services.
The European Commission is approaching the use of AI in the financial sector on a much broader scale than MAS and DNB and, as part of Europe’s AI strategy, has created an independent expert group, the High-Level Expert Group on Artificial Intelligence (AI HLEG). The AI HLEG produced a draft of the ‘Ethics Guidelines for Trustworthy Artificial Intelligence (AI)’ in December 2018 and finalized them in April 2019. The principles advocated by the AI HLEG include the common concerns noted above — Fairness, Transparency, and Safety and Soundness — but also a distinct principle of respect for human freedom and autonomy. The full set of principles advocated by the AI HLEG is the following:
- Respect for Human Autonomy
- Prevention of Harm (Safety and Soundness)
- Fairness
- Explicability (Transparency)
In addition to the above four ethical principles, the AI HLEG has also put out a list of seven requirements needed to promote trust in AI when used in the financial sector, advocating that they be implemented throughout an AI system’s lifecycle. These seven requirements are:
- Human Agency and Oversight
- Technical Robustness and Safety
- Privacy and Data Governance
- Transparency
- Diversity, Non-Discrimination and Fairness
- Societal and Environmental Well-being
- Accountability
A year before the AI HLEG finalized its guidelines, the German regulator BaFin conducted a study on the use of Big Data and Artificial Intelligence in financial services and published the results in July 2018. The study concluded that the use of Big Data and AI in the financial sector carries distinct risks at the macro, firm and society levels, while still advocating their continued growth and usage. At the macro level, to ensure financial stability and market supervision, the study advocated the principles of Transparency, Third Party Risk Management and Technological Safeguards. At the firm level, it advocated the principles of Trust, Transparency (‘No Black Boxes’), Robust Governance and Information Security controls. Finally, at the broader society level, the study addressed the principles of Non-Discrimination, Consumer Rights and Data Privacy. It is not surprising that some of these principles have evolved into the ethical principles and requirements put forward by the AI HLEG.
The precursor to all of these reports by national regulators was the 2017 report by the Financial Stability Board (FSB), prepared jointly with regulators from various jurisdictions. In this publication, the FSB concluded that the use of AI in financial services “brings a number of potential benefits and risks for financial stability that should be monitored”. Among the risks the FSB noted were (1) new and unexpected forms of interconnectedness between financial markets due to interaction between AI algorithms used at various firms, (2) network effects and scalability of new technologies giving rise to third-party dependencies and new systemically important firms outside the regulatory domain, i.e. Big Tech companies, and (3) lack of interpretability or auditability of AI.
Compared to the flurry of activity by regulators in Europe and Asia, the US regulatory front seems quiet: no separate guidance or rule on AI has been published by any of the US regulators so far. In their defense, however, one of the FRB governors emphasized in 2018 that the current guidance on model risk (SR 11-7) applies to AI as well. Implicitly, this means that US regulators expect the principles of model risk management — conceptual soundness to support safety and soundness, transparency, independent validation and performance monitoring — to apply to AI/ML models too. In more recent news, last month the FDIC Chairman stated that “transparency of AI/ML models and the ability to interpret and understand their results is vital to ensure compliance with regulatory obligations”. However, a unified joint guidance/rule by the US regulators on AI is still pending, and one has been called for under an Executive Order titled “American AI Initiative” issued by the President at the start of the year. It will not come as a big surprise if the guidance/rule, whenever it arrives, includes some of the elements mentioned above.
In conclusion, the exact structure of regulations on the use of AI in financial services will continue to evolve as regulators figure out how to contain the risks from these technologies while fostering their growth, since their use with safeguards has been universally advocated by regulators globally. From the common principles running through the publications and statements above, it is apparent that a convergence is happening, and a coalescing of these principles may be around the corner.