Banking on AI: From POC to 10,000 Users. How Nordea Is Rewiring Enterprise AI Innovation for the Digital Era


In an industry where trust is currency and regulation defines the rules of the game, the path to AI at scale is rarely linear. Banks do not get to “move fast and break things”; rather, they must move fast and prove things. They must design for innovation and governance in equal measure. Few European financial institutions embody this dual mandate as clearly as Nordea, the Nordic region’s largest bank, which is now rolling out internal generative AI capabilities to 10,000 employees.

In Episode 171 of the AIAW Podcast, Mattias Fras (Head of AI Adoption) and Olof Månsson (Lead AI Engineer) open a rare window into how Nordea is building this future. What emerges is not just a story about a chatbot or an AI platform. It is a case study in how a large enterprise can shift from proof-of-concept theatre to AI systems that actually reach people, change workflows, and ultimately reshape how a bank operates.

This article summarizes the podcast discussion and unpacks the methods, decisions, compromises, and convictions behind Nordea’s journey – from the early laptop pilots to a production-grade platform used across the organization. It is a story about modular engineering and governance-by-design, but equally about organizational rewiring, cultural patience, and the humbling complexity of deploying frontier technologies inside century-old institutions.

At the heart of Nordea’s experience lies a thesis worth examining: Innovation at scale is only possible when governance becomes an engineering discipline and when experimentation becomes a corporate muscle rather than a hobby.

When Experimentation Meets Regulation

The conversation begins where enterprise AI projects often stall: the leap from experiments to something real. For Nordea, that story started two and a half years ago, when Månsson and a colleague built the first internal chatbot prototype on their laptops. In today’s GenAI timelines, that era feels ancient, almost “eons back in AI time,” as he puts it, but it proved decisive.

The prototype worked well enough to draw senior leadership attention and fueled internal excitement, especially after employees had started using ChatGPT at home. Curiosity was no longer the barrier. The challenge was something far more difficult: how to channel this energy into enterprise reality.


Banks sit at the convergence of legal constraints, data sensitivity, risk management, and societal scrutiny. According to a 2024 European Banking Authority report, financial institutions face a “multi-dimensional regulatory load” regarding AI systems, covering data lineage, model explainability, cybersecurity exposure, human oversight, and operational resilience (EBA AI Guidelines, 2024). In practice, this means no model moves into production without extensive justification – and no system is allowed to become a black box.

This context shapes everything Nordea does. When Månsson describes AI work in banking, he frames governance not as a compliance tax but as a technical discipline: “Governance sounds super boring, but it can also be quite engineering-heavy with resilience, monitoring, proving that it’s secure.”

Here sits one of the episode’s central insights: in highly regulated sectors, the majority of AI innovation happens not inside the model but around it, within the scaffolding that makes the model safe, observable, and governable.

This mirrors a broader industry shift. Research from Accenture in 2024 found that over 70% of enterprise effort in deploying GenAI went into “data preparation, integration, guardrail engineering, and lifecycle governance” rather than model experimentation. In other words, building the “wrapper” matters as much as building the core.

Månsson and Fras embraced this reality early. Instead of pitching AI as a magic solution, they deliberately framed it as a systems-engineering challenge, one that required security teams, privacy experts, cloud architects, risk officers, and product owners to collaborate from day one. As Fras notes: “You need to talk to the people who deeply understand their field. Convince them, and they will convince the rest.”

This approach shaped the culture around AI at Nordea: sober, collaborative, and highly pragmatic.

Designing for Scale: Nordea’s Modular Architecture

A striking pattern emerges as the episode unfolds: Nordea didn’t scale GenAI by scaling use cases. They scaled GenAI by scaling the architecture.

This distinction matters. Many enterprises fall into the trap of building isolated GenAI pilots such as HR assistants, policy summarizers, and customer-service bots, each with its own governance process, each requiring new risk assessments, each reinventing similar components. Nordea deliberately rejected this pattern.

The team instead built a modular, model-agnostic platform on top of AWS Bedrock, enabling controlled access to multiple foundation models. The platform is tied to no single LLM provider, an intentional hedge against technological uncertainty. As Fras says: “We wanted to avoid any lock-in… Who do you bet on today, even? Being modular and agnostic was super important.”

This mirrors a broader strategic trend. According to Gartner’s 2024 AI Infrastructure report, over 60% of large enterprises now describe LLM portability as a “critical requirement,” both for resilience and for cost optimization as model providers compete aggressively.

Nordea’s architecture reflects four design convictions:

1. Slice Complexity Into Certifiable Components

Instead of attempting one giant end-to-end system, which no risk committee would approve, the team validated smaller components: secure access to Bedrock, logging layers, API gateways, guardrails, data-routing logic. Once a layer was certified, it became reusable.

As Månsson phrases it: “It’s a lot easier to prove that smaller pieces are safe, and then you always point to the previous work you’ve done.”

This pattern is increasingly known as governance scaffolding – a set of technical controls that allow fast iteration without repeating compliance work.

2. Build Once, Reuse Everywhere

Once Nordea validated the platform, adding use cases did not require redesigning governance from scratch. The bank could simply layer new business logic on top.

This created a self-service foundation for internal teams, accelerating adoption dramatically.

3. Treat Governance as a Feature, Not a Bottleneck

The platform embeds governance into its design: monitoring, auditability, role-based access, and data classification pathways. This approach aligns with the emerging “AI Safety by Design” principles promoted by NIST and the OECD (2024), emphasizing proactive risk mitigation at the architectural level.

4. Keep the System Replaceable

Agnostic design means Nordea can switch underlying models as performance shifts. In a market where model capabilities double multiple times per year, this flexibility is crucial. A 2024 McKinsey analysis notes that “model half-life,” the time after which a model is eclipsed by new entrants, has dropped below 12 months in frontier LLM development.

Nordea designed with this in mind. Their platform was built not just for today’s models, but for whatever comes next.
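The “governance scaffolding” idea behind convictions 1 and 3 can be made concrete with a small sketch: every model call passes through reusable control layers (a guardrail check and an audit record) that are certified once and then reused, regardless of which model sits underneath. The pattern, the regex, and the names here are hypothetical illustrations, not Nordea’s actual controls.

```python
import re
from typing import Callable, Dict, List

audit_log: List[Dict[str, str]] = []   # stand-in for an append-only audit store

# Illustrative guardrail: reject prompts that look like they contain an IBAN.
IBAN_PATTERN = re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b")

def guardrail(prompt: str) -> None:
    """Block prompts that appear to contain account identifiers."""
    if IBAN_PATTERN.search(prompt):
        raise ValueError("Prompt rejected: possible account number detected")

def governed(model_call: Callable[[str], str]) -> Callable[[str], str]:
    """Wrap any model call with guardrail and audit layers.
    Certify these layers once; reuse them for every new use case."""
    def wrapper(prompt: str) -> str:
        guardrail(prompt)                 # pre-call control
        answer = model_call(prompt)
        audit_log.append({"prompt": prompt, "answer": answer})  # post-call record
        return answer
    return wrapper

@governed
def fake_model(prompt: str) -> str:   # stand-in for a real LLM call
    return f"answer to: {prompt}"

print(fake_model("What is our remote-work policy?"))
```

Because the scaffolding is a wrapper rather than use-case code, adding a new assistant means writing new business logic and decorating it, not re-proving the controls.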

The AI Hub as Accelerator, Orchestrator, and Now… Something New

Nordea launched its AI Hub in early 2021 as a cross-functional center tasked with coordinating AI across business units, identifying use cases, and ensuring governance alignment. In effect, it served three roles simultaneously:

  1. Innovation engine (build prototypes, inspire new ways of working)
  2. Governance integrator (interface with compliance, legal, security, data teams)
  3. Organizational connector (break silos and create shared foundations)

But as the conversation reveals, Nordea recently reorganized the hub. Fras’s new title, Head of AI Adoption, signals a subtle but important shift: AI is no longer an experiment to be championed by a central team. It is a capability that must spread across the bank.

This transition reflects the natural maturity curve seen in other regulated industries. The U.S. FDA, for example, describes the AI life cycle in healthcare as evolving from “pilot” to “validated tool” to “embedded clinical system.” Banking is undergoing a similar movement. The central AI team becomes less of a lab and more of a catalyst.

Fras articulates this evolution candidly. The hub was designed to attract use cases and stakeholders; now the task is to rewire behaviors across the enterprise. As he describes: “It takes years to change behaviors in large organizations… even in places like Microsoft.”

This is not a technical challenge; it is a sociological one. And it is why Nordea created a specific role dedicated to adoption: a recognition that technology only matters when people actually use it.

From Laptop Pilots to 10,000 Users: What It Really Took

The leap from two developers with laptops to a 10,000-user internal rollout wasn’t linear; rather, it was a disciplined sequence of milestones, each unlocking the next.

Phase 1: Early Prototyping (2022–2023)

Small experiments proved internal demand and generated leadership attention. Demos at Nordea’s internal AI conference acted as “lighthouses,” showing not just feasibility but potential.

Phase 2: Laptop Pilots (2024)

2024 became the year of structured prototyping. Several use cases were tested locally, including:

  • an internal-guidelines chatbot
  • a research assistant for marketing tone-of-voice work
  • process support tools for specific business functions

These pilots validated not only technical performance but user behavior and safety requirements.

Phase 3: Production-Grade Platform (Late 2024)

Nordea moved the GenAI platform into full production, bringing with it operational monitoring, governance controls, and integration layers.

Phase 4: Scaling Use Cases (2025)

A major milestone for the team: the internal chatbot moved from 3,000 to 5,000 to 10,000 users. With each expansion came new challenges such as expectation management, integration with Confluence and intranet systems, and ensuring clarity in cases where internal documentation contained contradictory information.

Scaling wasn’t glamorous. It required real engineering and organizational patience: “Sometimes a release is a breeze, sometimes you’re sweating… We tried to split it into multiple parts so that if we only got 80%, that 80% still went to production.”

This incrementalism, combined with a modular architecture, prevented the classic enterprise trap of “big bang releases” that collapse under their own weight.

Horizon 1 vs. Horizon 2: Understanding the Two Speeds of AI

One of the episode’s most illuminating insights is the distinction between personal-productivity AI (Horizon 1) and enterprise-grade, workflow-transforming AI (Horizon 2). A small note here: the framework actually has four horizons, but the conversation only got through the first two.

LinkedIn and media conversations often conflate them, which frustrates practitioners. Horizon 1 tools, such as copilots embedded in office suites, are valuable but limited. They assist individuals but rarely transform core processes.

Horizon 2 systems, however, integrate directly into critical workflows such as customer onboarding, fraud detection, loan adjudication, and risk assessment. These require deep data integration, audit trails, regulatory alignment, and multi-system orchestration.

Fras and Månsson argue that organizations need both, but must not confuse them.

GenAI’s rapid mainstream adoption has created unrealistic expectations about how quickly Horizon 2 systems can be deployed. As Fras puts it: “You can’t expect people to be in Horizon 2 already. GenAI is still so young.”

This distinction aligns with external research. MIT Sloan’s 2024 report on enterprise AI maturity found that fewer than 15% of companies had moved beyond Horizon 1 into integrated AI systems that materially alter business operations. Most organizations remain stuck experimenting at the edges.

Nordea’s platform approach, however, positions the bank for that next horizon. Not by rushing into it, but by laying the foundation deliberately.

Governing the Future: Convergence of Data, Software, and AI Governance

Perhaps the most forward-looking segment of the episode is the discussion about the convergence of governance disciplines. Historically, banks treated:

  • software governance
  • data governance
  • AI governance

as distinct domains, each living in separate organizational silos.

But GenAI blurs these boundaries. AI systems ingest data, transform it, reason over it, call APIs, generate new data, and act within software-defined workflows. The separation of governance disciplines no longer reflects the reality of how AI-enabled systems operate.

Månsson captures this succinctly: “In the end, it’s the system that needs the governance… and that system includes an AI component.”

This reflects a global shift. Standards bodies such as ISO and NIST have recently emphasized socio-technical system governance, which evaluates AI alongside its surrounding processes, interfaces, controls, and human decision points.

For banks, this means rethinking:

  • model validation frameworks
  • data lineage and access pathways
  • audit and explainability mechanisms
  • change management and version control
  • system-level monitoring and resilience

Fras notes that internal regulatory pathways are often unnecessarily complex, not because of intent but because employee journeys have historically received less design attention than customer journeys. This is common across the sector: a 2024 BCG study found that over 65% of financial institutions identified internal process complexity as a greater barrier to AI deployment than external regulation.

Nordea’s response is to build platform-level governance, clarify navigation for employees, and embed controls early, which positions the bank well for the coming wave of agentic and workflow-embedded AI systems.

Lessons for Enterprise Leaders: What Nordea’s Journey Reveals

Nordea’s experience offers a set of practical, grounded lessons for senior data and AI leaders navigating the same terrain.

1. Start Small, but Architect Big

Proofs of concept are useful, but their real function is to de-risk architecture decisions. Build with scale in mind from the beginning.

2. Governance Must Be Engineered, Not Documented

In regulated industries, PDF-based governance creates bottlenecks. Computational governance, like automated monitoring, access controls, and lineage tracking, enables speed and compliance.
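The shift from PDF-based to computational governance can be illustrated with a minimal sketch: access rules and lineage records live in code and are evaluated automatically on every request, rather than checked manually against a document. The roles, dataset names, and store shown here are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List, Set

# Hypothetical access policy: which roles may query which datasets.
ROLE_DATASETS = {
    "analyst": {"internal_guidelines", "marketing_tone"},
    "engineer": {"internal_guidelines"},
}

@dataclass
class LineageEntry:
    user: str
    role: str
    dataset: str
    timestamp: str

lineage: List[LineageEntry] = []   # stand-in for a lineage/audit store

def query(user: str, role: str, dataset: str) -> str:
    """Machine-checked access control plus an automatic lineage record."""
    allowed: Set[str] = ROLE_DATASETS.get(role, set())
    if dataset not in allowed:
        raise PermissionError(f"Role {role!r} may not access {dataset!r}")
    lineage.append(LineageEntry(
        user=user, role=role, dataset=dataset,
        timestamp=datetime.now(timezone.utc).isoformat()))
    return f"results from {dataset}"

print(query("alice", "analyst", "marketing_tone"))
```

Encoded this way, the control is enforced and evidenced on every call; an auditor reads the lineage store instead of reconstructing who accessed what after the fact.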

3. Treat AI as a Sociotechnical Transformation

Technology alone is insufficient. Behavior change, communication, and adoption functions must be built intentionally. Nordea’s creation of a “Head of AI Adoption” role is a blueprint for others.

4. Avoid Model Lock-In

With model performance shifting rapidly, flexibility is strategic. Use abstraction layers and modular design to keep options open.

5. Engage Risk, Security, and Compliance Early

These functions are not gatekeepers but enablers. Bringing them in early reduces downstream rework and builds trust.

6. Use “Lighthouses” to Spark Momentum

High-visibility demos accelerate internal alignment, but they must be backed by architectural rigor and realistic expectations.

7. Distinguish Between Horizon 1 and Horizon 2 AI

Executives must not confuse productivity enhancements with process reinvention. Both matter, but they require different mindsets, funding models, and governance.

8. Build Reusable Capabilities, Not Isolated Use Cases

The platform Nordea created allows new AI applications to emerge far more quickly. The compound effect of reuse is significant.

The Broader Implications for AI in Banking

Nordea’s story is not unique, but it is exemplary. It shows a banking sector at an inflection point. For years, financial institutions invested in traditional machine learning for fraud detection, risk modeling, and credit scoring, often confined within siloed environments. GenAI, however, moves AI from the back office to the frontlines of daily work.

This shift has several implications:

  • AI becomes a universal interface – Employees will increasingly interact with internal systems through natural language, not rigid workflows. Early signs are already visible in research assistance, document navigation, and policy interpretation.
  • AI governance becomes multi-layered and continuous – Regulators are already signaling more dynamic oversight. The European AI Act, for instance, requires ongoing monitoring for high-risk systems, not just one-off validation.
  • Geopolitical shifts influence AI strategy – Model availability, cloud sovereignty, data localization, and even alliances between tech providers and states affect which models banks can adopt. Fras and Månsson hint at this indirectly through their emphasis on model agnosticism and the difficulty of betting on any one provider.

  • Banks will compete not only on financial services, but on AI enablement – Internal tooling becomes part of the employee value proposition. The banks that provide powerful, safe, intuitive AI tools will operate faster and attract better talent.

Nordea may not claim to have all the answers, but their trajectory suggests a template for others: patient, modular, governance-led, innovation-focused.

Toward a Responsible Future: The Art of Building and Governing Simultaneously

In the end, Fras distills the team’s philosophy into one memorable insight: “If I don’t embrace governance, I should go work for a startup.”

This is not resignation but a form of professional clarity. In banking, innovation that ignores governance is not innovation at all; it is risk creation. The art lies in doing both simultaneously – building the new while ensuring the proper, as the podcast title puts it.

What Nordea demonstrates is the emergence of a new operating model for enterprise AI: one where the organizational system and not just the model is the innovation. One where governance serves as both a safety mechanism and an accelerant. And one where engineers and risk professionals co-create the future rather than negotiate after the fact.

The journey from POC to 10,000 users is rarely smooth. But Nordea’s experience shows it can be done and, more importantly, done responsibly.

This article was enhanced with the help of AI tools, drawing on the podcast transcript and complementary online research. To go deeper into the source material, listen to the full episode and draw your own conclusions.

Full episode here.

