
In the business world, artificial intelligence is widely viewed as an increasingly critical tool for enhancing productivity, speeding up workers' output and improving customer service.

However, new AI tools are also a boon for scam artists and fraudsters.

On the frontlines of that conflict are financial institutions, which are themselves turning to AI to fight the threats.

"The attack speed and volume is increasing, and it kind of takes AI to defend against the AI," said Mark Ward, senior vice president and chief information security officer for Johnson Financial Group, a Racine-based institution with $7 billion in assets.

Banks can play a central role in detection even when they aren't specifically targeted by the attacks. Phishing, smishing and deepfakes are among the various ways scammers can target bank customers. For those methods, the primary line of defense lies with the individual — don't click that link, answer that call or call that number, for example — but when a customer does get fooled, a bank can detect an unusually large money transfer or withdrawal, particularly when those atypical transactions grow in number.

"I can prevent maybe the second or third transaction from occurring," said Todd Shaffer, chief risk officer for Johnson Financial Group. "Maybe not the first one."

Johnson Financial tries to mitigate those instances through education, but AI and large language models can ramp up the effectiveness of its protections.

The goal is to understand how a user typically acts, making anomalous behavior stand out.

AI tools will accelerate that ability to ingest data and allow the bank to react better and faster, Shaffer said.
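One simple way to illustrate that baseline-versus-anomaly idea is a statistical score over a customer's recent transaction amounts. The history, window and cutoff below are assumptions for the sake of the sketch; production systems would weigh many more signals.

```python
# Minimal sketch of behavioral baselining: learn what "typical" looks like
# from a customer's history, then flag amounts that deviate sharply.
from statistics import mean, pstdev

def anomaly_score(history: list[float], amount: float) -> float:
    """How many standard deviations `amount` sits from the customer's
    historical average; 0.0 if the history shows no variation."""
    avg, spread = mean(history), pstdev(history)
    return abs(amount - avg) / spread if spread else 0.0

history = [42.0, 55.0, 38.0, 61.0, 47.0, 52.0]   # typical debit-card spend
for amount in (58.0, 2_400.0):
    score = anomaly_score(history, amount)
    verdict = "flag for review" if score > 3.0 else "looks typical"
    print(f"${amount:,.2f}: score {score:.1f} -> {verdict}")
```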

What separates the largest financial institutions from regional and community banks in their implementation of AI is that big banks can afford to develop their own systems tailored to their needs. Smaller banks typically contract with outside vendors.

For example, JPMorgan Chase & Co. (NYSE: JPM) has some 5,000 technology employees and more than 20 global tech hubs led by Chief Information Officer Lori Beer, who oversees a tech budget of $17 billion. One of the largest hubs is located in Columbus, Ohio, where much of the financial institution's development of AI tools dedicated to security takes place.

"Protecting the bank is one of the biggest responsibilities," Beer told Columbus Business First, an affiliated publication of the Milwaukee Business Journal.

"If we look at ourselves as a business of trust, safety and security has to be job one, resiliency too," she told Business First. "All the digital interfaces for the customer go through a robust security review."

The bank incorporated machine learning and AI and prepared for generative AI years before the popular debut of tools such as ChatGPT. Among recent GenAI tools built in Columbus is a cyber threat modeler that assists cybersecurity engineers by examining the architecture of a software application, matching it against cybersecurity standards and identifying possible vulnerabilities. That speeds the ability to assess and deploy more applications while staying safe, Beer said.
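JPMorgan's threat modeler is a proprietary, generative-AI tool, so the sketch below is only a toy illustration of the underlying idea: compare an application's declared architecture against a checklist of required controls and report the gaps. The component and control names are assumptions, and the matching here is a plain rules lookup rather than a generative model.

```python
# Illustrative only: match an app's declared components against a checklist
# of assumed security controls and report what is missing.
CONTROL_CHECKLIST = {
    "public_api":    ["rate limiting", "input validation", "auth on every route"],
    "database":      ["encryption at rest", "least-privilege service account"],
    "message_queue": ["TLS in transit", "dead-letter handling"],
}

def threat_model(architecture: dict[str, set[str]]) -> dict[str, list[str]]:
    """Return, per component, the required controls the architecture lacks."""
    findings = {}
    for component, controls in architecture.items():
        required = CONTROL_CHECKLIST.get(component, [])
        missing = [c for c in required if c not in controls]
        if missing:
            findings[component] = missing
    return findings

app = {
    "public_api": {"rate limiting", "input validation"},
    "database":   {"encryption at rest"},
}
for component, gaps in threat_model(app).items():
    print(f"{component}: missing {', '.join(gaps)}")
```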

Other large financial institutions with a big Milwaukee-area presence — such as BMO and Wells Fargo — are also developing their own AI-equipped systems to address their specific needs.

When it comes to fraud detection, all financial institutions share the same goal: identify and intercept fraudulent activity in real time.

Generative AI and other AI tools, whether acquired from third-party vendors or developed in-house with an institution's vast resources, are being deployed across the financial system to level the playing field.

As seen on BizJournals.com