Tuesday, November 11, 2025

AI Governance for the Global Financial System

Tom Worthington (me) and Jason Grant Allen from SMU
at Insights Forum 2025
Greetings from the Insights Forum roundtable on "AI Governance, stability and competitive dynamics: Aligning priorities for safe and effective AI adoption". I am not exactly sure what the forum is, or who runs it, but I happened to be in Singapore for another conference, so thought I would come along. The Chatham House Rule applies, so I can report what is said, but not by whom. From some of the preliminary remarks I get the impression previous forums would have focused on blockchain, and before that whatever tech was trendy.

The topic seems to be the risks from AI, rather than the benefits. Speakers appear to be from international finance and banking. One speaker is expressing concern about the loss of bank-to-bank relationships in the Pacific. The result is that remittances become more expensive and difficult. They suggest regulators could use AI to check the compliance of banks quickly. This doesn't sound like a compelling case when applied to traditional banks, as there aren't many and they have highly trained staff. However, it could open the market to new entrants. But then AI could also be used to generate plausible fake details for scammers.

Worryingly, all the panelists appear to be experienced bankers, with no actual AI experts. It had been mentioned that their companies have AI experts, but these have not been invited to speak. One panelist said, "We just hired someone who is a real AI expert. Not just a talking head like me." They seemed to think this amusing. Would any other forum have someone saying how proud they were not to be competent?

I asked the panel if the senior leadership teams in financial organisations have AI expertise, or are they all lawyers and accountants. Worryingly, the first answer was that they spend several hours a day on it. That doesn't sound like a good way to make decisions at our global institutions.

One speaker made a useful point that banks face a threat from within, where spies use a false identity to get a job in a company in order to steal information or money. Of course this happened in the past, but it is easier with remote employees.

This event raises the question as to whether universities have looked in a similar way at their use of AI. Rather than just worrying about students cheating, how can AI provide better services to students? As an example, AI could suggest course credit for prospective students, based on past study and work. This could allow universities to offer to cut a year or more off study time, and tens of thousands of dollars off fees. The AI could collate the required information and evidence. This would be a powerful incentive to enrol, versus current practice, where the university says "enrol and we will see what we can do ... sorry, now you have paid, you are getting no credit". This is a personal experience I have had. At CIT I was assigned an expert to help me with RPL (Recognition of Prior Learning), and was given 80% of my qualification by RPL. In contrast, a university held out the prospect of RPL. Already having two qualifications and a decade of experience (including designing a course for the university), I expected at least 33% credit, but got none.

PS: I bumped into Jason Grant Allen from SMU at the forum.
