
Decoding AI and machine learning in banking
Read Time: 10 mins
Written By:
Britta Bohlinger
Whether you’re drawn into a case as a fraud examiner, an auditor or a customer encountering identity fraud, understanding credit risk modeling within financial organizations is essential. This knowledge matters regardless of your role because it affects how financial products, from bank accounts to mortgages and student loans, are managed and secured globally. Appreciating how financial institutions assess the risks tied to each customer, whether an individual or a corporation, is fundamental for anyone engaged with financial services, and it shapes how we perceive the safety and reliability of the financial products that affect everyday financial decisions.
Credit risk modeling, a cornerstone of banking operations, has evolved significantly with the advent of AI and ML. These technologies offer sophisticated tools to analyze vast datasets, predict loan defaults with greater accuracy and tailor financial products to individual customer profiles. However, this shift also introduces complexities in data integrity, algorithmic bias and the transparency of decision-making processes.
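To make the idea of predicting loan defaults concrete, here is a minimal sketch of a supervised default-prediction model trained on simulated data. The features, coefficients and choice of scikit-learn are illustrative assumptions for this article, not any bank’s production setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated borrower features: income (in $1,000s) and debt-to-income ratio.
n = 1000
income = rng.normal(60, 15, n)
dti = rng.uniform(0.05, 0.6, n)

# Toy ground truth: higher DTI and lower income raise default probability.
p_default = 1 / (1 + np.exp(-(4 * dti - 0.03 * income)))
y = rng.random(n) < p_default  # True = defaulted

X = np.column_stack([income, dti])
model = LogisticRegression().fit(X, y)

# Predicted default probability for a hypothetical new applicant.
applicant = np.array([[55, 0.45]])
print(f"Estimated default probability: {model.predict_proba(applicant)[0, 1]:.2f}")
```

In practice the same pipeline runs over thousands of features, which is exactly why the data-quality and bias concerns discussed below matter.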
The Basel Committee on Banking Supervision first issued its Principles for Effective Risk Data Aggregation and Risk Reporting in January 2013 to enhance the banking sector’s capability to manage risk data effectively, particularly for global systemically important banks (G-SIBs). (See “Global systemically important banks: assessment methodology and the additional loss absorbency requirement,” Basel Committee on Banking Supervision, Nov. 27, 2023.) The principles cover several areas, including risk data aggregation capabilities, risk-reporting practices, and the importance of robust governance and data architecture to support these functions. (See “Principles for effective risk data aggregation and risk reporting,” Basel Committee on Banking Supervision, January 2013.)
Machine learning, while powerful, operates on the principle of “garbage in, garbage out.” The quality of datasets and the objectivity of algorithms are paramount. Biases in data or design can lead to skewed risk assessments, unfairly affecting loan approvals or interest rates. Herein lies a significant challenge for fraud examiners: ensuring these innovative models do not inadvertently facilitate financial fraud or discrimination.
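To see what “garbage in, garbage out” looks like in practice, the sketch below shows the kind of pre-training data-quality checks an examiner might expect to find documented. The column names, sample values and thresholds are hypothetical illustrations, not a standard.

```python
import pandas as pd

# Illustrative loan-application data; columns and values are hypothetical.
df = pd.DataFrame({
    "income": [52000, 48000, None, 61000, 3_000_000],
    "credit_utilization": [0.35, 0.82, 0.10, 1.45, 0.25],  # 1.45 is out of range
    "months_since_delinquency": [12, None, 48, 6, 60],
})

# Basic integrity checks before any model training.
missing_share = df.isna().mean()                       # share of missing values per column
out_of_range = (df["credit_utilization"] > 1.0).sum()  # utilization should sit in [0, 1]
outliers = (df["income"] > df["income"].quantile(0.99)).sum()

print("Missing-value share per column:\n", missing_share)
print("Out-of-range utilization rows:", out_of_range)
print("Extreme income outliers:", outliers)
```

Rows that fail such checks should be investigated or excluded before training; feeding them into a credit model bakes the errors into every downstream risk assessment.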
Credit scoring systems are pivotal in financial institutions; they use sophisticated scorecards that assign individuals a three-digit score, typically ranging from 300 to 850. This score helps determine one’s borrowing capability. However, interpreting these scores presents challenges, as the scoring system often categorizes individuals without sufficient context or explanation. For example, someone with a score of 720 falls within the “good” range, but the underlying factors contributing to this score — like timely payments or credit utilization — aren’t explicitly detailed. This lack of transparency can perplex both customers and loan officers, leading to potential misunderstandings or misjudgments in lending.
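As a rough illustration of how a points-based scorecard maps underlying factors to a three-digit score, consider the sketch below. The factors, weights and point bands are toy assumptions; real scorecards are proprietary and far more granular.

```python
# Minimal points-based scorecard sketch; all weights and bands are
# hypothetical illustrations, not any bureau's actual model.
def credit_score(on_time_payment_rate: float,
                 credit_utilization: float,
                 credit_history_years: float) -> int:
    base = 300  # floor of the conventional 300-850 range
    points = 0.0
    points += 250 * on_time_payment_rate                 # payment history
    points += 200 * max(0.0, 1.0 - credit_utilization)   # lower utilization scores higher
    points += min(100.0, 10 * credit_history_years)      # history length, capped
    return min(850, base + round(points))

# A borrower with strong payment history and low utilization
# lands near the top of the range under these toy weights.
print(credit_score(on_time_payment_rate=0.98,
                   credit_utilization=0.20,
                   credit_history_years=8))  # prints 785
```

Even this toy version shows why the score alone is opaque: the final number reveals nothing about which factor drove it, which is precisely the transparency gap the paragraph above describes.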
In the U.S., a significant challenge has arisen from the use of postal codes within algorithms to determine mortgage eligibility. These algorithms, while designed to streamline the approval process by assessing geographical data, inadvertently expose racial biases. For instance, certain neighborhoods that are predominantly inhabited by minority ethnic groups might receive unfavorable terms or outright denials, not due to individual creditworthiness but due to historical socioeconomic factors affecting those postal codes. This unintended consequence of algorithmic decision-making illustrates how AI can perpetuate existing societal biases if not carefully monitored and adjusted.
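One way examiners can probe for this kind of proxy bias is to compare approval rates across demographic groups, for example with the “four-fifths” adverse-impact ratio long used in U.S. disparate-impact analysis. The sketch below assumes a hypothetical decisions table; it is a screening heuristic, not a legal test.

```python
import pandas as pd

# Hypothetical model decisions joined with demographic data
# (used only for fairness testing, never as a model input).
decisions = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1, 1, 1, 0, 1, 0, 0, 0],
})

rates = decisions.groupby("group")["approved"].mean()
impact_ratio = rates.min() / rates.max()

print(rates)
print(f"Adverse impact ratio: {impact_ratio:.2f}")
if impact_ratio < 0.8:  # common "four-fifths" screening threshold
    print("Potential disparate impact; examine features that proxy for group.")
```

A failing ratio doesn’t prove discrimination, but it flags models, and proxy features such as postal codes, for closer examination.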
Banks employ both supervised and unsupervised machine learning models to detect and prevent financial crimes. These models analyze patterns in transaction data to identify anomalies that may indicate fraud. However, one significant challenge is their dependency on historical data, which may not fully capture new and evolving fraudulent tactics. For instance, as cyber criminals adopt more sophisticated methods, the models might fail to recognize these patterns, leading to blind spots in fraud detection. Continuous updates and training with new datasets are essential to maintain the effectiveness of these systems.
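As a minimal sketch of the unsupervised side of this work, the snippet below trains scikit-learn’s IsolationForest on simulated transaction features and flags outliers without any fraud labels. The features and contamination rate are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated transaction features: amount and hour of day.
normal = np.column_stack([rng.normal(80, 20, 500),   # typical amounts
                          rng.normal(14, 3, 500)])   # daytime activity
suspect = np.array([[5000, 3], [4200, 2]])           # large, late-night transfers
X = np.vstack([normal, suspect])

# Unsupervised model: learns the shape of "normal" without fraud labels.
model = IsolationForest(contamination=0.01, random_state=42)
labels = model.fit_predict(X)  # -1 = anomaly, 1 = normal

print("Flagged transactions:\n", X[labels == -1])
```

The injected large, late-night transfers should rank among the flagged anomalies here, but a model like this only knows the “normal” it was trained on, which is why the retraining cadence noted above is so important.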
The importance of addressing these challenges is highlighted in regulatory documents such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF), first released in January 2023, and the Bank of England’s FS2/23. Both frameworks emphasize the need for transparency, fairness and bias mitigation. (See “Technical and Policy Documents,” National Institute of Standards and Technology and “PRA Regulatory Digest - October 2023,” Bank of England, Nov. 1, 2023.)
Although the integration of AI and ML into banking operations clearly offers significant advantages, it also presents unique challenges that must be addressed. For Certified Fraud Examiners (CFEs), understanding these nuances is critical — not only to leverage AI effectively but also to ensure that it’s used in a manner that’s fair, transparent and secure. As these technologies continue to evolve, so too must the strategies employed by those responsible for overseeing their use in the financial sector.
Operational risks in deploying AI and ML models extend beyond data integrity to include issues of cybersecurity, model governance and ethical use. The human factor remains crucial, underscoring the need for skilled fraud examiners who can navigate the nuances of AI applications, identify potential weaknesses and advocate robust ethical standards.
Real-life insights include the following:
Principles for the Sound Management of Operational Risk (revised 2021): Strengthening operational resilience, information and communication technology (ICT) continuity and business continuity plans. (See “Revisions to the Principles for the Sound Management of Operational Risk,” Basel Committee on Banking Supervision, March 2021.)
The Veritas initiative emphasizes responsible AI use in finance, focusing on fairness, ethics, accountability, and transparency (FEAT) principles. It involves collaborative efforts among financial institutions and tech firms to integrate FEAT principles into AI systems. (See “Veritas Initiative,” Monetary Authority of Singapore, Oct. 26, 2023.)
The future of AI in banking will undoubtedly bring further innovations, along with regulatory and ethical considerations. The development of global standards for AI use in financial services, transparency in algorithmic decision-making and the protection of consumer privacy are areas requiring careful attention. By leveraging these technologies responsibly and maintaining a keen eye on their implications, we can harness their potential while safeguarding the integrity of the financial system. As fraud examiners, our journey is one of continuous learning and adaptation, ensuring that as the banking world evolves, we’re always several steps ahead in the prevention of fraud and the mitigation of its related risks.
Britta Bohlinger, CFE, is a compliance auditor focusing on data governance and the auditing of systems and processes within the financial sector on behalf of a governmental authority. With a substantial background in investment banking, Bohlinger utilizes an in-depth understanding of risk management to address the complexities of fraud prevention and AI applications in banking. A Certified Fraud Examiner (CFE) and Agile-certified professional, Bohlinger has been an active member of the ACFE since 2014 and has dedicated herself to mentoring aspiring fraud examiners within the organization since 2018, promoting innovative and ethical practices in fraud examination.
Contact her on LinkedIn.