Artificial Intelligence Works Well in Banking – Computerworld

The panel discussion “Should AI replace humans in identity confirmation?” at the Asseco Banking Forum 2019 pushed its participants to seek answers to broader questions about the role and future of artificial intelligence in banking and in business.

During the Asseco Banking Forum, Computerworld editor Tomasz Bitner led a panel discussion that was nominally, but not exclusively, about using AI to verify customer identities. The panelists – Tomasz Chmielewski, Chief Expert of Digital Ecosystem at ING Bank Śląski; Tomasz Leś, Senior Product Portfolio Manager at Asseco Poland; Krzysztof Polcyn, Innovation Director at EFL Group; and Robert Tórz, Director of the Individual Customer and Mobile Banking Department at SGB-Bank SA – eagerly referenced the full spectrum of AI applications in banking, providing non-obvious examples of the technology’s possibilities and limitations. The issue of identity brings most of these threads together in a single lens.

All participants in the discussion agreed that

legal and business considerations mean that humans will not be eliminated from the identity verification process. The decision, though increasingly supported by AI, will remain in human hands. What is the nature of this support?

Such a question provokes a broader discussion about which areas and processes AI will colonize sooner: “front-end” service processes or deep internal business processes. ING’s Tomasz Chmielewski pointed out that the use of learning algorithms in back-end processes predates the current AI boom.

“Certain categories of tools can now be integrated with AI, but we have been using them for years to analyze available data and optimize the rules and procedures used in customer contact”

– said Tomasz Chmielewski. From this point of view, the current application of AI to identity verification is a continuation of such work. These algorithms suggested which elements of the customer’s interaction and communication with the bank have distinctive characteristics that allow identification, such as typing patterns on the keyboard or the way a smartphone is handled. “These additional rules are very effective and efficient when it comes to cutting off the next batch of the simplest frauds based, for example, on a stolen password,” says Tomasz Chmielewski.
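The behavioral idea Chmielewski describes can be illustrated with a toy sketch (hypothetical data and thresholds, not ING’s actual system): build a typing-rhythm profile from a user’s past sessions and flag a new session whose rhythm deviates too far from it.

```python
# Toy sketch of behavioral verification via keystroke dynamics.
# All data and the z-score threshold are made up for illustration.
from statistics import mean, stdev

def build_profile(sessions):
    """sessions: lists of inter-key intervals (ms) from past logins."""
    all_intervals = [t for s in sessions for t in s]
    return mean(all_intervals), stdev(all_intervals)

def is_suspicious(profile, session, z_threshold=3.0):
    """Flag the session if its mean interval is a z-score outlier."""
    mu, sigma = profile
    z = abs(mean(session) - mu) / sigma
    return z > z_threshold

history = [[110, 95, 120, 105], [100, 115, 108, 98], [112, 101, 97, 118]]
profile = build_profile(history)
print(is_suspicious(profile, [104, 110, 99, 113]))   # similar rhythm -> False
print(is_suspicious(profile, [220, 260, 240, 255]))  # much slower typing -> True
```

Production systems use far richer features (dwell and flight times, device orientation, touch pressure) and learned models, but the principle – comparing a live session against a learned behavioral baseline – is the same.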

Asseco’s Tomasz Leś expects AI’s expansion to continue into other areas of banking: the technology is now helping to win the battle against fraud – but there are other areas as well, for example offers based on models and discoveries developed by AI.

“Up to a point, the AI’s cues are simply more accurate, but they do not go beyond the model used by humans. The real change comes when AI starts producing suggestions and recommendations for action, and detecting correlations that are extremely difficult for humans to find.

For example, a customer who goes to bed late, usually logs in to work late, and spends money on weekends will, with high probability, buy a red Porsche within three months. It is difficult to compare such predictions with those humans have relied on so far,” explains Tomasz Leś. Mechanisms that allow such conclusions already exist – the open question is whether banks are prepared to use these non-obvious suggestions in their day-to-day operations.
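The kind of non-obvious relationship Leś mentions boils down to correlation discovery at scale. A minimal, entirely hypothetical illustration: measuring how strongly one behavioral signal (late-night logins) correlates with a later purchase.

```python
# Toy correlation discovery on invented data - the sort of link an AI
# system might surface among thousands of behavioral features.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

late_logins = [0, 1, 1, 2, 3, 4, 5, 6]    # weekly late-night logins per customer
bought_later = [0, 0, 0, 0, 1, 1, 1, 1]   # bought the product within 3 months
print(round(pearson(late_logins, bought_later), 2))  # strong positive correlation
```

In practice such signals feed propensity models rather than single coefficients, and – as the panel notes – correlation alone does not make a prediction actionable.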

SGB’s Robert Tórz said the banking industry is ready for AI solutions – rapid adoption of existing solutions is now a matter of decision. What may delay that decision, however, is the question of liability; in light of recent regulations, it de facto falls on the bank anyway. “The fact is that there is a high level of trust in banks. AI is used to seal this system. By making it safe to operate, it certainly builds that level of trust,” said Robert Tórz.

This conviction was further underscored by Tomasz Chmielewski, who turned the question around: can banking still function without AI – for example, in the panel’s title area of customer identity and security? “There are fraud scenarios that AI has to be in place to prevent. Even when most identity-confirmation signals prompt acceptance of a transaction, AI can generate an alert based, for example, on the behavioral signals mentioned earlier. But to have a chance of preventing such well-planned attacks, you need to have AI implemented. You also need to use it – give yourself a chance to study the signals from which the AI will have a chance to learn. Increasingly frequent attacks and an ever-growing stream of signals are beyond human strength,” said Tomasz Chmielewski.

The topic of responsibility for the functioning of AI, as a factor influencing broader adoption of the technology by banks, was taken up by Krzysztof Polcyn of the EFL Group. “This is one of the biggest and most interesting issues. Let’s refer to the GDPR: the regulation says nothing explicit about AI. But if we look at, for example, biometric data, we see that we cannot process it without the customer’s consent. Profiling is likewise possible only on the basis of consciously given consent. That is why

companies that create AI and companies that implement it should have people who are very knowledgeable about GDPR in the context of AI”

– emphasizes Krzysztof Polcyn. At the same time, as he points out, the situation is largely under control thanks to the GDPR. Simple proof is provided by a trip outside the EU and contact with AI solutions where the GDPR does not apply. “Every submission of an email address brings a flood of commercial offers, usually imprecise ones, which speaks poorly of the quality of the AI algorithms used there. The data must be protected, otherwise there will be chaos,” said Krzysztof Polcyn.

Panelists agreed that as long as AI does not decide on its own how it uses data, compliance with GDPR regulations will remain possible.

“As long as we don’t feed the AI with sensitive data, there won’t be any problems. Otherwise, it would actually be a threat”

– noted Tomasz Chmielewski.

Tomasz Leś, however, pointed out that the security guaranteed by the GDPR is not given once and for all – it requires constant work and monitoring of the situation. It is in the nature of AI that, in addition to using real data, it creates its own metadata and discovers relationships between them. “The readiest example is Facebook, which proves that with a few clicks a user unwittingly reveals his political views.

Formally, the owner remains clean: he did not pass the data to the algorithm – the algorithm generated them itself and de facto came into their possession”

– said Tomasz Leś. The key question, then, is how much of the decision we hand over to the machine and how much is left to a person, who will carefully judge whether indicators based on sensitive data can legally be used.

The problem of responsibility for the results of AI’s work returns here. These mechanisms – at least currently – are most often available as cloud-based services from IT vendors. Perhaps banks will try to transfer at least some of this responsibility to the suppliers?

“The problem exists. AI can be smart enough to learn – but if a customer goes bankrupt and decides to take legal action, who answers? Responsibility to the customer will certainly remain with the bank,” said Krzysztof Polcyn.

“We will not send anyone to, for example, Apple. This accords with an ancient rule of Roman law: the master is responsible for the actions of his slave. In this case, the AI is the slave of the bank”

– added Tomasz Chmielewski, noting, however, that in the future the law will have to account for situations in which banks or other entities use an identity supplied by another provider. That provider will certainly have to bear some responsibility. “Work is underway to give AI solutions a limited legal personality, so that they can be at least partially insured. This might at least partially solve the problem. However, whether the cloud provider – essentially the only way to process such large amounts of data – or the bank is ultimately responsible remains an open question. It is also why banks will be cautious about adopting AI. The law must be unambiguous,” noted Krzysztof Polcyn.

Identity confirmation, to which AI is applied, is not only a cost for banks – it is also becoming a source of revenue. “We are already making money on this service – we are part of KIR’s node. Any company can ask us to confirm a customer’s identity,” Tomasz Chmielewski said.

The implementation of AI also brings benefits by reducing costs.

“A good example is the service of a trusted identity provider: the simplification of identity verification translates directly into the level of customer experience”

– said Robert Tórz. Tomasz Leś echoed him: “Companies, including banks, are always afraid of serving customers poorly. The tool is secondary. Well-functioning AI mechanisms will not compromise quality; there are many areas where they work perfectly. An example is debt collection, where it is easier for a customer to ‘talk’ to a machine about difficult topics than to a real person. An advantage over the competition can be won on the quality of the service process.” “I suppose the first sector to benefit could be the energy sector, where the commodity price is essentially non-competitive. But

simplifying the service and making it easier to verify the customer’s identity will give a real advantage in the currency of CX”

– adds Krzysztof Polcyn.

From the discussion on identity verification with AI solutions, the panelists returned to the main challenge facing companies – the original reason they reach for AI in the first place.

“CX is the most important thing we should care about today. Systems worth millions of dollars will be sunk if they don’t take into account the quality of the customer experience that comes from the relationship with the bank or the business”

– summarized Tomasz Chmielewski.
