In an environment of shifting markets, rapid technological development, and evolving regulatory requirements, pragmatic and easily implementable solutions are more valuable than ever. Excitingly, artificial intelligence is maturing to a point where it can offer businesses some powerful options.

Before getting ahead of ourselves, however, it's important to (re)examine what artificial intelligence can and should be used for. It is often thought that AI can replace, at least in part, human intelligence. This is the wrong approach. While humans are capable of (for example) emotional and social intelligence, computers are by design restricted to logical-mathematical intelligence.

However, in this latter area they are extremely powerful: in the rapid processing of large datasets, and in memory and data storage, they undeniably outperform human beings. In the facets of intelligence that only humans possess, such as creativity, leadership, and consciousness, computers require human supervision and support. Oversight and accountability, for example, remain firmly human responsibilities.

Data, data, data

At a recent workshop, experts Marc Hemmerling (of the ABBL) and David Hagen (of the CSSF) explained that most AI issues, at their core, concern data and data access. The European Commission's agenda, unsurprisingly, targets the same problem areas. Given that the power of an AI system rises and falls with the quality of the data it is fed, the regulator's stance is clear: the responsibility for achieving the right level of data quality remains with the institution deploying the AI system.
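
To give a flavour of what achieving "the right level of data quality" can mean in practice, the short Python sketch below shows a minimal quality gate that an institution might run before feeding records to an AI system; the checks, field names, and threshold are illustrative assumptions, not regulatory requirements.

    # Illustrative sketch only: a minimal data-quality gate an institution
    # might run before training or scoring. The threshold is hypothetical.
    from typing import Dict, List

    def passes_quality_gate(rows: List[Dict], required_fields: List[str],
                            max_missing_ratio: float = 0.05) -> bool:
        """Reject a dataset if any required field has too many missing values."""
        if not rows:
            return False
        for field in required_fields:
            missing = sum(1 for row in rows if row.get(field) in (None, ""))
            if missing / len(rows) > max_missing_ratio:
                print(f"Field '{field}' fails the gate: {missing} values missing")
                return False
        return True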

One company travelling the artificial intelligence road is PayPal. At the same workshop, Claire Alexandre (of PayPal) talked about how the company benefits greatly from AI. Given that PayPal currently has 267 million active users and operates in 20 languages, the power and scalability of AI are crucial. Furthermore, since more and more payment transactions are made via social media, handling workflows in real time is a must.

However, Claire Alexandre explained, this is also the speed at which fraudsters work, and PayPal's primary field of AI application is in fact fraud prevention. Here, a key to success is for the AI system to look at stories rather than individual data points, a task well suited to machine-learning systems. At PayPal, AI has thus become a core element of risk management, with the ultimate purpose of creating trust: the main factor in winning customers in the payment services industry.
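
To make the "stories rather than individual data points" idea concrete, here is a rough Python sketch (entirely hypothetical, and in no way PayPal's actual system) that derives fraud features from an account's recent transaction history instead of judging a single payment in isolation; a production system would feed such features into a trained machine-learning model.

    # Hypothetical sketch: "story-based" features for fraud screening.
    from dataclasses import dataclass
    from datetime import datetime, timedelta
    from typing import Dict, List

    @dataclass
    class Txn:
        user_id: str
        amount: float
        timestamp: datetime
        country: str

    def story_features(history: List[Txn], current: Txn) -> Dict[str, float]:
        """Build features from the account's recent history (the 'story'),
        not from the current transaction alone."""
        last_hour = [t for t in history
                     if current.timestamp - t.timestamp <= timedelta(hours=1)]
        avg_amount = sum(t.amount for t in history) / len(history) if history else 0.0
        return {
            "txn_count_last_hour": float(len(last_hour)),  # sudden burst of activity?
            "amount_vs_average": current.amount / avg_amount if avg_amount else 1.0,
            "new_country": float(current.country not in {t.country for t in history}),
        }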

AI applications are being deployed as we speak

David Hagen (CSSF) also noted at the workshop that he was impressed by the number of organizations in Luxembourg already using AI. Widespread usage, however, means that definitions and guidance are needed, hence the recent CSSF white paper "Artificial Intelligence: Opportunities, Risks, and Recommendations for the Financial Sector." In Luxembourg, the primary areas of focus within artificial intelligence are natural language processing and machine learning, the latter of which can be subdivided into deep learning, reinforcement learning, and supervised and unsupervised learning.

A key difference between the AI models of the "earlier days" and those of today is the increased importance of continuous monitoring and validation. From the perspective of the CSSF, it is crucial that the decision-making processes of AI systems, in the course of their day-to-day work, be traceable. In the opinion of stakeholders across the industry, however, big question marks remain over the precise format in which these processes should be documented.
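
While the required documentation format is precisely what remains open, a minimal sketch can illustrate the kind of record that makes automated decisions traceable. In the Python snippet below, every call to a model appends what it saw, what it decided, and which model version decided it; all field names are illustrative assumptions, not a CSSF-prescribed format.

    # Minimal sketch of a traceable decision record; field names are
    # assumptions for illustration, not any regulator's required format.
    import hashlib
    import json
    from datetime import datetime, timezone

    def log_decision(model_id: str, model_version: str, features: dict,
                     prediction, log_file: str = "decisions.log") -> dict:
        """Append one traceable record per automated decision."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_id": model_id,
            "model_version": model_version,   # which model made the call
            "input_hash": hashlib.sha256(
                json.dumps(features, sort_keys=True).encode()).hexdigest(),
            "features": features,             # the data the model saw
            "prediction": prediction,         # the decision it produced
        }
        with open(log_file, "a") as fh:       # append-only audit trail
            fh.write(json.dumps(record) + "\n")
        return record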

And those question marks will, for now, remain: even though AI solutions have been in use for quite some time, it is, in the eyes of the regulators, still too early to impose classifications and certifications that would standardize AI tools and workflows. Such standardization would also hinder innovation and, ultimately, competition. PayPal's Claire Alexandre confirmed that, in practice, a step-by-step approach to developing AI systems is more appropriate: advancing systems over time and adapting them to an ever-changing environment. Indeed, in the AI context, the entire notion of a "standard" as we know it might be replaced by whole new levels of agility.

Teamwork makes the dream work

Regulators, industry players, and organizations like the ABBL need to (continue to) collaborate if an age of safe and powerful artificial intelligence is to be ushered in. AI, says Jean Hilger (of the BCEE), constitutes uncharted territory for all stakeholders, so a common language is needed. For example, the audit trail requirements for a loan decision made with AI should be defined and agreed upon. Only when regulators' expectations match those of the institutions, at both the managerial and IT levels, can frustration in audits be prevented.

This requires institutions to rethink their internal operating models. For example, as David Hagen points out, AI requires IT and business departments to work more closely together, and mindsets need to change regarding how IT projects are implemented in the first place. What is needed is a versatile team that combines a broad set of skills: business experts, business analysts, data scientists, software engineers, and application managers, to name only a few. Structures need to be built to frame the accompanying processes, from idea creation through operations to the eventual retirement of applications. It is furthermore paramount that this team's collaborative relationship endure throughout the entire lifetime of the AI system.

Beyond that, AI is a playing field in which highly technical subject matter is incorporated into more traditional business processes. It is therefore essential that top management be kept informed by technical experts about the abilities and constraints of a particular model before it is applied to a real business task. In this sense, an AI model should be explained not only from the results side, but also from the perspective of the statistical models it relies on.
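
As a toy illustration of explaining a model "from the statistical side" (assuming scikit-learn and NumPy are available; the loan data and feature names are fabricated), the sketch below fits a logistic regression to synthetic data and reads off its coefficients, exactly the kind of model-level information technical experts would need to translate for management.

    # Toy sketch: inspecting the statistical model itself, not just its outputs.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    feature_names = ["income", "debt_ratio", "years_employed"]  # hypothetical
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))              # synthetic, standardized applicant data
    y = (X[:, 0] - X[:, 1] > 0).astype(int)    # synthetic approve/decline labels

    model = LogisticRegression().fit(X, y)
    for name, weight in zip(feature_names, model.coef_[0]):
        # The sign and size of each coefficient show how the model weighs
        # each input, not merely what it decided in a given case.
        print(f"{name:15s} weight = {weight:+.2f}")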

The right balance

When the first steam-powered cars took to the roads in the 19th century, the so-called "Red Flag Act" provided a safety net: a person carrying a red flag walked in front of every car, limiting its speed in a very pragmatic way. In the age of AI, similar safety nets may be required. The question facing all stakeholders is how these are to be designed, and at what point they become so strong that they bring innovation to a halt.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.