
How Financial Services Can Responsibly Scale AI
As AI adoption accelerates across industries, financial services sits on the front line of both innovation and risk. From fraud detection to customer personalization, AI is reshaping how institutions operate. But the sector’s high stakes and regulatory complexity demand a uniquely careful approach.
At the recent AI Infra Summit in Santa Clara, Jeniece Wnorowski and I sat down with FinTech expert Anusha Nerella for a Data Insights conversation about how financial organizations can responsibly scale AI, stay ahead of fraudsters, and build teams equipped for the future.
“Many institutions are still in the early stages of AI deployment, while bad actors are moving fast and experimenting aggressively,” Nerella said.
This dynamic creates an urgent need for stronger, more agile defenses. Nerella emphasized that financial firms must accelerate their AI implementation cycles without sacrificing the governance and compliance guardrails that define the industry.
Regulatory and Team Culture Shifts
Asked what the broader technology ecosystem should do to support responsible AI in finance and enterprise, Nerella returned to the importance of regulatory alignment.
“Everything has to go through the regulatory and compliance [process] in order to make it responsibly…applicable to the enterprise sector,” she said.
But regulations alone aren’t enough. Nerella believes that financial institutions must rethink team structures and knowledge transfer to keep pace. She advocates for what she calls “reverse training,” in which organizations bring in engineers well-versed in AI frameworks and libraries, then combine their expertise with the strategic experience of senior leaders.
By fostering two-way collaboration between new AI talent and experienced financial professionals, companies can build stronger, future-ready teams.
“It becomes… a collaborative effort for sure,” Nerella explained. “It’s an equal opportunity here because whoever [has] decades of experience…might have limited exposure towards AI-based frameworks or library utilization or hands-on experience.”
This equal exchange of knowledge, she argued, is essential for success.
Start Small, Stay Governed
For organizations just beginning their AI journeys, Nerella’s advice is both practical and pointed: don’t try to boil the ocean. She recommends starting with “two or three clear use cases with ROI” and ensuring that governance and control mechanisms are in place from the outset.
“When you follow all these basic principles, then you will be able to see…result-oriented AI-based implementation from your end,” she said.
Throughout the conversation, she underscored that AI success in financial services requires human-in-the-loop collaboration.
TechArena Take
The financial sector’s high regulatory stakes, complex legacy systems, and relentless fraud threats make its AI journey distinct. Nerella’s insights highlight that the path forward isn’t just about technology—it’s about culture, compliance, and collaboration.
To build responsible and trusted AI systems, financial organizations must:
- Accelerate adoption cycles without compromising governance.
- Foster reverse training models that marry AI expertise with institutional knowledge.
- Start with targeted, ROI-driven use cases, rather than sprawling transformations.
As the industry races to stay ahead of increasingly sophisticated fraud tactics, success will depend on balancing agility and accountability.



