Artificial intelligence could bring nasty surprises, warns Financial Stability Board  


The rapid adoption of artificial intelligence in banking could trigger financial stability risks and unexpected surprises unless proper testing and training are put in place, the Financial Stability Board has warned.

Banks, insurers and asset managers are rushing to replace humans with computer systems able to do the same jobs, with 'smart' robots able to crunch data, automate client interaction, spot fraud or price insurance contracts.

But the race to replace people with machines "has the potential to amplify financial shocks" and could be exploited by cybercriminals to manipulate market prices, the FSB said, adding that firms were in an 'arms race' to adopt AI because their competitors were doing so.

While the FSB acknowledged that the use of AI shows "substantial promise" and could make the financial system more efficient, it urged the industry to monitor usage closely as a number of risks were on the horizon. 

Institutions could become dependent on the technology giants making the robots, for example, opening them up to risks created by third-party providers which fall outside the remit of financial regulators. 


"These competition issues – relevant enough from the perspective of economic efficiency – could be translated into financial stability risks if and when such technology firms have a large market share in specific financial market segments," the FSB wrote. 

The 45-page report also called for more specialist staff to oversee the models, warning that overly opaque models could lead to "unintended consequences".

"If multiple firms develop trading strategies using AI and machine learning models but do not understand the models because of their complexity, it would be very difficult for both firms and supervisors to predict how actions directed by models will affect markets," it said. 

The FSB, which represents central banks and regulators for the G20 economies, also noted that many of these systems had been developed in a period of low volatility and so "the models may not suggest optimal actions in a significant economic downturn or in a financial crisis".

The report comes a month after the former boss of Barclays, Antony Jenkins, warned that the heyday enjoyed by big banks in the lead-up to the financial crisis will never return as the rise of AI threatens some of their services.

He said financial regulators could benefit from the use of AI, pointing out that 40,000 people in Canary Wharf currently work in compliance "making sure a bank is doing what it is supposed to be doing" and could be replaced by computers. 
