What happens if code goes rogue?
Only a third of the biggest banks in Europe and North America are reporting on their use of artificial intelligence, despite the potentially lethal consequences if a rogue mutation should ‘escape from the lab’. The lack of transparency was exposed by a study from Evident, an independent intelligence platform that aims to encourage businesses to adopt AI. Confidence needs to be established in the people responsible for AI development, said Evident, and recent events have tarnished the reputation of the fintech sector.
The collapse of Silicon Valley Bank was probably most people’s first look at fintech culture, and the antics of fidget-spinning fantasist CEO Sam Bankman-Fried have put a stain on the reputation of the industry. The fintech sector has never been in greater need of disinfecting through exposure to sunlight. Many banks are working to overhaul and improve their approach to risk management, according to the authors of the Evident AI Index. However, its research into openness found that while AI is already used by banks for many critical processes, from authenticating customers to risk modelling, 8 of the 23 largest banks in the US, Canada and Europe currently publish no principles for judging responsible AI.
Evident analysed millions of publicly available data points to assess how banks report against four main areas of responsible AI: creation of AI leadership roles, publication of ethical principles, collaborations with other organisations, and publication of original research. “AI could be the key driver of better risk management and decision-making across the global banking sector,” said Alexandra Mousavizadeh, Evident Co-Founder and CEO. It is vital that banks develop AI in a way that meets high ethical standards and minimises unforeseen consequences. However, Evident’s research found a worrying lack of transparency around how AI is already used and how it may be used in the future. This could damage stakeholder trust and stifle progress.
Some banks are taking proactive steps to address AI concerns and are developing internal responsible AI programmes. The problem is that there is no standard for responsible AI reporting, and many banks withhold the details of their efforts. “At this critical time for the sector, the banks need to show leadership and start reporting publicly on their AI progress,” said Mousavizadeh. The Index found that Canadian banks are the most transparent on responsible AI reporting, and European banks the least. Only three banks, JPMorgan Chase & Co, Royal Bank of Canada and Toronto-Dominion Bank, have a demonstrable strategic focus on transparency around responsible AI. Each showed evidence of creating specific responsible AI leadership roles, publishing ethical principles and reports on AI, and partnering with relevant universities and organisations.
Approaches to hiring AI talent also differ across the Atlantic. North American banks are more likely to hire for specific responsible AI roles, usually from Big Tech firms, while European banks tend to lead responsible AI from within their data ethics teams. Evident Co-founder Annabel Ayles said two Canadian banks, RBC and TD Bank, performed well because Canada has had a national conversation about AI ethics. The top-ranking banks also tend to have strong research hubs, which helps them address the technical challenges of embedding ethical standards in AI systems.
Banks are still trying to work out the links between responsible AI and data ethics. European banks tend to view responsible AI through the lens of data ethics, due to the dominance of GDPR legislation. As a result, they are “perhaps missing a trick by not creating AI-specific roles and thinking holistically about the broader risks posed by AI,” said Ayles.