The 2018 report from the World Economic Forum (WEF) brought to light in a comprehensive way how all aspects of the financial industry will be impacted by Artificial Intelligence (AI). In a follow-up report published in 2019, recognizing the inevitability of AI, the WEF highlights the key challenges preventing widespread adoption of AI and how financial institutions, regulators and policy makers can proactively address them to fully realize the benefits of AI. The 2019 report highlights five concerns that prevent AI from being fully deployed within financial services, along with solutions to address them:
- AI Explainability
- Bias and Fairness
- AI Fiduciary Role
- Concern over Algorithmic Collusion
- Fear of Systemic Risk
In my opinion, the first three are the most pressing; they are first-order effects that need to be addressed immediately, while the last two are second-order effects further out on the risk spectrum. This view is also reflected in the state of current research, with more papers being published on AI Explainability and on Bias and Fairness than on Algorithmic Collusion. Below is a short summary of the top three concerns as outlined in the report, with references to related articles and papers that I found helpful while interpreting it.
AI Explainability
Currently, AI Explainability is the most pressing concern and a key risk according to policy makers and regulators. The lack of explainability underlies all of the other four concerns, and financial firms that proactively address the explainability of their AI algorithms will gain a first-mover advantage and secure customer trust and regulatory approval. However, a fundamental point made in the report is that a uniform requirement of explainability across all use cases is not feasible and various forms of explainability are needed; this is a clear message for policy makers. This view has slowly started taking hold among practitioners, and a good summary of the approach is a 2019 paper published in Nature. Like that paper, the WEF report calls for a shift away from the current thinking of either pursuing no AI at all or having a ‘fully transparent AI’ as insisted upon by policy makers. The report outlines four general use cases, each with a different form of explainability:
- Deploy AI without explanation – In cases where the potential adverse impact from AI is negligible and where accuracy is more important than transparency, it may be sufficient for an AI to be accurate without the additional need to be transparent. This paradigm has already been adopted outside finance, most visibly for recommendation systems such as Netflix’s movie recommendations. The same approach could be adopted in financial services for recommendation systems with no adverse financial impact.
- Interrogate or Interpret the AI algorithm – In cases where transparency is vitally important, for example to address bias and fairness in financial decisions such as credit card approvals, the AI models will need to be investigated using transparent technical techniques; some approaches in this area not mentioned in the report but being explored are LIME and Shapley values (a Shapley-value sketch follows this list).
- Provide Context for the AI decision – In some other cases, instead of providing the more technical explanations mentioned above, it may be better to provide the context and reasoning in an easy-to-interpret format suitable for consumers, regulators and business users. Research here is at a more nascent stage, but methods like building a counterfactual predictive model, currently being explored at BBVA, seem promising (a counterfactual sketch also follows this list).
- Provide Guardrails for AI actions – In cases where the priority is to contain the actions of AI models, they should be deployed with robust controls and guardrails. This will apply where there are regulatory constraints or rules, which covers many cases in financial services across capital, pricing and consumer-facing applications. As for a framework, an approach similar to the one advocated by regulators for electronic trading algorithms could be taken, based on the guidance published by global regulators in 2015.
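Since the report names the techniques but not how they work, here is a minimal sketch of the Shapley-value approach applied to a hypothetical credit-approval model, using the open-source shap library; the dataset, feature names and model choice are all illustrative assumptions and not from the report.

```python
# Minimal sketch: interrogating a credit-approval model with Shapley values.
# The data and feature names are hypothetical; shap.TreeExplainer works with
# any tree-based classifier.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Illustrative applicant data: three features and an approval label.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "income": rng.normal(60_000, 15_000, 1_000),
    "debt_ratio": rng.uniform(0, 1, 1_000),
    "credit_history_yrs": rng.integers(0, 30, 1_000),
})
y = (X["income"] / 100_000 - X["debt_ratio"] + rng.normal(0, 0.2, 1_000) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# Shapley values attribute each individual prediction to the input features,
# giving a per-decision explanation that can be shown to a reviewer.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global view: mean absolute contribution of each feature across applicants.
print(pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)
        .sort_values(ascending=False))
```

The per-row values in shap_values are what a reviewer would look at to understand a single approval or rejection; the aggregated view above is the usual starting point for spotting features that dominate decisions.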
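The counterfactual idea from the BBVA item can be sketched without any specialist library: search for the smallest change to an applicant’s inputs that flips the model’s decision, then report that change in plain language. The helper below is a hypothetical simplification (it varies one feature at a time) and reuses the model and X from the previous sketch; it is not the BBVA implementation.

```python
# Hypothetical counterfactual helper: find the smallest single-feature change
# that flips the model's decision. Reuses `model` and `X` from the sketch above.
def counterfactual(model, applicant, feature, grid):
    """Return the value of `feature` closest to the applicant's current value
    that flips the model's decision, or None if no value in `grid` flips it."""
    base = model.predict(applicant.to_frame().T)[0]
    for value in sorted(grid, key=lambda v: abs(v - applicant[feature])):
        candidate = applicant.copy()
        candidate[feature] = value
        if model.predict(candidate.to_frame().T)[0] != base:
            return value
    return None

applicant = X.iloc[0]
flip = counterfactual(model, applicant, "income", grid=range(20_000, 150_000, 5_000))
if flip is not None:
    print(f"The decision would change if income were {flip:,} "
          f"instead of {applicant['income']:,.0f}.")
```

A consumer-facing explanation built this way (“your application would have been approved with an income of X”) is easier to act on than a list of feature attributions.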
Bias and Fairness
Bias and Unfairness are the most cited concerns regarding AI after Explainability, and the most public failures of AI have been driven by them. Bias creeps into decision making within financial services even without AI, since firms have to evaluate risk; unfortunately, there are many cases where decisions are based on non-risk factors and are hence discriminatory. AI can exacerbate bias due to the greater depth and breadth of data being used, and mistrust is higher due to the lack of explainability. Bias in AI can be caused by four factors: (1) Human Bias, (2) Data Bias, (3) Model Bias and (4) Second Hand Bias, a new term that appears in this report referring to adverse financial decisions caused by algorithms impacting users or groups who have already been discriminated against in areas outside of finance, e.g. through unfair incarceration or lack of education or employment due to bias.
- Human Bias – this bias is introduced by the humans involved in collecting data and developing training data for AI algorithms, who may embed their biases unintentionally or intentionally, e.g. to maximize profits. Actions to reduce this source of bias include promoting diversity and training among AI developers, and benchmarking AI models against traditional models.
- Data Bias – this bias is caused by data that carries bias or is inappropriate and non-representative, and it is quite prevalent even in financial models that do not use AI. In the WEF report, approaches to manage data bias are categorized as qualitative or quantitative. The qualitative approaches include defining “nutritional labels” (documenting the source and limitations of data); independent review of the data; and testing models against more robust datasets. Quantitative approaches include statistical techniques to make sure data is representative and technical measures to prevent algorithms from inferring protected personal traits (two such checks are sketched after this list). Lastly, firms can make sure that data is collected and stored only when it is needed, which goes against the current practice of amassing vast amounts of data and figuring out its use later.
- Model Bias – this bias is caused by a flaw in the design or logic of the AI or, in the language of the Fed’s guidance on model risk (SR 11-7), a flaw in the “conceptual soundness” of the AI algorithm. Approaches to handle this bias include explaining the model; independently validating the model; adjusting for bias; or benchmarking against a traditional model (a benchmarking sketch also follows this list).
- Second Hand Bias – this is the most challenging type of bias since it occurs outside the financial system but causes adverse financial outcomes for certain individuals and groups. An illustrative example is the recruitment model built at Amazon, which discriminated against women because it learned from the company’s historical hiring practices as reflected in the data. In contrast to the other forms of bias, this bias can be reduced only by sacrificing some statistical accuracy, since the bias is already embedded in societal practices and reflected in the data. Financial firms will have to actively take steps to make sure that their risk-based decisions using outputs from AI tools do not discriminate against specific individuals and groups.
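Two of the quantitative data-bias checks named above can be made concrete in a minimal sketch, under assumed column names (group, protected_flag and zip_median_income are hypothetical): compare each group’s share in the training data with its share in a reference population, and flag features correlated strongly enough with a protected trait to serve as a proxy for it.

```python
# Sketch of two quantitative data-bias checks; all column names are assumed.
import numpy as np
import pandas as pd

def representativeness(train: pd.DataFrame, population_shares: dict, group_col: str) -> dict:
    """Gap between each group's share in the training data and its share
    in a reference population; large gaps mean non-representative data."""
    train_shares = train[group_col].value_counts(normalize=True)
    return {g: train_shares.get(g, 0.0) - share for g, share in population_shares.items()}

def proxy_features(train: pd.DataFrame, protected_col: str, threshold: float = 0.4) -> pd.Series:
    """Numeric features whose correlation with a protected trait (coded 0/1)
    exceeds an illustrative threshold, i.e. candidate proxies a model could
    use to infer the trait indirectly."""
    corr = train.corr(numeric_only=True)[protected_col].drop(protected_col)
    return corr[corr.abs() > threshold]

# Illustrative data: `zip_median_income` is constructed to proxy for the trait.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "group": rng.choice(["a", "b"], 1_000, p=[0.8, 0.2]),
    "protected_flag": rng.integers(0, 2, 1_000),
})
df["zip_median_income"] = 40_000 + 20_000 * df["protected_flag"] + rng.normal(0, 5_000, 1_000)

print(representativeness(df, {"a": 0.52, "b": 0.48}, "group"))  # group "a" over-represented
print(proxy_features(df, "protected_flag"))                     # flags zip_median_income
```

Neither check is sufficient on its own; the report’s qualitative measures (nutritional labels, independent review) are meant to catch what simple statistics miss.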
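The benchmarking idea from the model-bias item can also be sketched: fit the AI model and a traditional baseline on the same data and compare accuracy alongside per-group approval rates, so that any disparity introduced only by the more complex model stands out. The synthetic data and model choices below are illustrative assumptions.

```python
# Sketch: benchmark a complex model against a traditional baseline on the same
# synthetic credit data, comparing accuracy and per-group approval rates.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 2_000
X = pd.DataFrame({
    "income": rng.normal(60_000, 15_000, n),
    "debt_ratio": rng.uniform(0, 1, n),
})
group = rng.integers(0, 2, n)  # protected group label, deliberately held out of training
y = (X["income"] / 100_000 - X["debt_ratio"] + rng.normal(0, 0.2, n) > 0).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, group, random_state=0)

for name, model in [("logistic regression (benchmark)", LogisticRegression(max_iter=1_000)),
                    ("gradient boosting (AI model)", GradientBoostingClassifier())]:
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    rates = pd.Series(pred).groupby(g_te).mean()  # approval rate per group
    print(f"{name}: accuracy={model.score(X_te, y_te):.3f}, "
          f"approval-rate gap={abs(rates[0] - rates[1]):.3f}")
```

If the complex model beats the benchmark on accuracy but opens a wider approval-rate gap, that gap, rather than raw accuracy, is what an independent validation team should interrogate.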
Algorithmic Fiduciary
The last of the concerns I want to bring up from the report is that AI tools cannot perform fiduciary duties. Use of AI-enabled systems for financial and investment advisory purposes is now common, and most established financial firms are either using them or exploring their use. Investment advisors are held to fiduciary standards under the Investment Advisers Act of 1940 in the US; the language of the law can be interpreted to allow non-human parties to be registered as investment advisors with the same fiduciary responsibilities. As of 2019, robo-advisory firms such as Wealthfront and Betterment have been registered as investment advisors and are held to the same fiduciary standards as traditional firms. While the legal requirements vary across countries, fiduciaries are generally required to make decisions in the best interest of clients, in good faith, and with loyalty and care. The report notes that the AI tools currently in use are not sophisticated enough to carry on nuanced conversations, verify information provided by clients, or perform a comprehensive assessment of all of a client’s financial needs across all products, and lastly they cannot explain their decisions. Even with advances in natural language processing and unstructured data analysis (of clients’ financial records), replicating the personal connection between a human advisor and a client will remain hard in the near future. Either a new approach for AI advisors will need to be developed by policy makers, or AI tools will continue to be used with a human advisor involved.
Summary
The key takeaway across all of the top three concerns is that in order to unlock the full transformative power of AI in the financial sector, firms, regulators and policy makers will need to work together to (1) come up with new solutions and stay open to new forms of governance, which (2) will drive policy shifts across the financial sector on Explainability, Fairness/Bias and Data Privacy. Secondly, there is an opportunity for financial firms to gain an edge by developing a framework for ‘Trusted AI’ or ‘Responsible AI’ to differentiate themselves from technology companies and win back customer trust.