Artificial intelligence (AI) plays a central role in current processes of technological change in financial services. Its prominent place on innovation agendas speaks to the significant benefits that AI technologies can enable for firms, consumers, and markets. At the same time, AI systems have the potential to cause significant harm.
Accordingly, recent years have seen a growing recognition that AI adoption should be guided by considerations of responsible innovation. The adoption of AI in financial services is underpinned by three distinct elements of innovation: machine learning (ML), non-traditional data, and automation. AI systems can combine all three elements or a subset of them. When considering a particular AI use, it is useful to distinguish between these three elements of innovation and examine their respective roles. Doing so is crucial for an adequate understanding of AI-related risk, as each element can give rise to distinct challenges.
ML, non-traditional data, and automation give rise to various challenges for responsible innovation. These challenges provide the foundation for understanding the causes of AI-related risks. They are often related to the following four background considerations:
Against the background of these considerations, AI can give rise to specific concerns. These include concerns about
(i) AI systems’ performance,
(ii) legal and regulatory compliance,
(iii) competent use and adequate human oversight,
(iv) firms’ ability to explain decisions made with AI systems to the individuals affected by them,
(v) firms’ ability to be responsive to customer requests for information, assistance, or rectification, and
(vi) social and economic impacts.
In light of these concerns, recent years have seen a rapidly growing literature on AI ethics principles to guide the responsible adoption of AI. The principle of transparency, in particular, plays a fundamental role. It acts as an enabler for other principles and is a logical first step for considering responsible AI innovation.
The use of AI in financial services can have concrete impacts on consumers and markets that may be relevant from a regulatory and ethical perspective. Areas of impact include consumer protection, financial crime, competition, the stability of firms and markets, and cybersecurity. In each area, the use of AI can lead to benefits as well as harms.
The general challenges that AI poses for responsible innovation, combined with the concrete harms that its use in financial services can cause, make it necessary to ensure, and to demonstrate, that AI systems are trustworthy and used responsibly. AI transparency, understood as the availability of information about AI systems to relevant stakeholders, is crucial in relation to both of these needs.
Information about AI systems can take different forms and serve different purposes. A holistic approach to AI transparency involves giving due consideration to different types of information, different types of stakeholders, and different reasons for stakeholders' interest in information. Relevant transparency needs include access to information about an AI system's logic (system transparency) and information about the processes surrounding the system's design, development, and deployment (process transparency).
For both categories, stakeholders that need access to information can include occupants of different roles within the firm using the AI system (internal transparency) as well as external stakeholders such as customers and regulators (external transparency). For system and process transparency alike, there are important questions about how information can be obtained, managed, and communicated in ways that are intelligible and meaningful to different types of stakeholders. Both types of transparency, in their internal as well as their external form, can be equally relevant when it comes to ensuring and demonstrating that applicable concerns are addressed effectively.