In this interview, we speak with Anders Bresell from Telenor IoT! With 20 years of experience in data insights, Anders has led Data Science and Big Data teams. He has also developed data monetization strategies. At the upcoming Data 2030 Summit, he will discuss Data-as-a-Product, sharing how Telenor integrates this concept to enhance AI scalability and efficiency. His talk will cover the journey from concept to practical application, including successes and challenges.
As Telenor tackles data management challenges, Anders’ insights on Data-as-a-Product stand out. He highlights how this concept transforms data into a marketable product with specific quality standards and accountability. This approach enhances data reuse across business contexts and ensures high-quality data for AI.
Hyperight: Can you tell us more about yourself and your organization? What is your professional background, and what is your current focus?
Anders Bresell: My name is Anders Bresell. I love data and all the amazing things you can do with it. I started in pharmaceutical R&D and spent a few years as a management consultant in Predictive Analytics. For the past 9 years, I’ve had the time of my life leading a team of Data Scientists and Data Engineers at Telenor IoT.
Hyperight: During the Data 2030 Summit 2024 in Stockholm, you will share more on Data-as-a-Product. What can the delegates at the event expect from your presentation?
Anders Bresell: Well, I am usually invited to speak about data success stories. However, this time, it is more about Data Strategy. To be frank, we are too early in our execution to know if we will be successful. One thing is sure, though: our data strategy team is in total alignment that Data-as-a-Product is one of the few viable ways to scale AI. My plan is to share our reasoning on the topic and outline our plans for making the business embrace the concept. I will also provide examples of early indicators of successful implementations.
Hyperight: How do you define Data-as-a-Product in the context of Telenor’s operations? What distinguishes it from traditional data management approaches?
Anders Bresell: There are many books and blog posts on Data-as-a-Product, and I wouldn’t claim we took any unique definition or approach. To simplify, it is about treating your data as something that someone would buy. It comes with certain quality attributes and expectations. And if something breaks, someone has the responsibility and competency to fix it.
This may not be a problem when domain experts have built their own data solution and are its only consumers. The challenge arises as soon as another domain team needs to access the same data. Let’s call this secondary use of data. Another way of thinking about it: data has value beyond its primary use case. As soon as other business processes or teams need access, a whole set of requirements falls on the ability to expose high-quality data outside the core team, which is typically the data owners and domain experts. There are strong arguments that if Data-as-a-Product is implemented from the start, it will lead to effective use of data beyond its primary use case, beyond the domain expert team. This is key if we want to see scale-out effects of AI use cases. Interestingly, the value of data grows when it’s reused across multiple contexts. Done correctly, this can significantly enhance insights with minimal additional cost.
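To make the idea of "quality attributes and accountability" concrete, here is a minimal sketch of how such a data product contract could be expressed in code. This is purely illustrative: the field names, thresholds, and the example product are assumptions for this article, not Telenor's actual schema.

```python
from dataclasses import dataclass

# Hypothetical sketch: the fields and thresholds are illustrative
# assumptions, not Telenor's actual data product schema.
@dataclass
class DataProductContract:
    name: str                    # discoverable product identifier
    owner_team: str              # accountable team: who fixes it when it breaks
    description: str             # documentation for secondary consumers
    freshness_sla_hours: int     # max age before the data counts as stale
    min_completeness_pct: float  # expected share of populated key fields
    schema_version: str          # lets consumers pin against a stable schema

    def is_within_sla(self, age_hours: float, completeness_pct: float) -> bool:
        """Check a delivery against the promised quality attributes."""
        return (age_hours <= self.freshness_sla_hours
                and completeness_pct >= self.min_completeness_pct)

# Example: an invented connectivity-events product
contract = DataProductContract(
    name="iot-connectivity-events",
    owner_team="iot-data-platform",
    description="Deduplicated device connectivity events, hourly batches",
    freshness_sla_hours=2,
    min_completeness_pct=99.5,
    schema_version="1.3",
)
print(contract.is_within_sla(age_hours=1.5, completeness_pct=99.8))  # True
```

The point is not the specific fields but that expectations become explicit and checkable, so a secondary consumer knows what they are getting and who is accountable when it breaks.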
Hyperight: How would you describe the evolution of this journey so far, what are the learning points for Telenor today? What was the starting point, where are you now, and what is next?
Anders Bresell: The starting point was a new CTO at group level, Amol Phadke. He launched an AI-first initiative and tasked all units to gather experts from across the company. The goal was to create a set of dedicated work streams to determine what was needed for Telenor to successfully implement AI.
In one workstream, a handful of other Data Seniors and I, led by Cathal Kennedy, began architecting what was initially described in loosely defined terms resembling a data platform. However, we quickly shifted towards a data strategy heavily influenced by the ideas of Data-as-a-Product.
We concluded that to stand out in the current era of AI transformations, you must be stronger than your competitors. And this is achieved by implementing and using AI in a way that sets you apart. This allows you to be more effective and impactful than the rest.
Hyperscaler partnerships, foundation models, and competency are essential components of an AI strategy. However, since these elements are accessible to most companies, they aren’t the factors that set you apart. There are, however, two things you can use to your advantage. First, the proprietary data you have: use it to train your models or integrate it with them. Second, how effectively you integrate data flows into key business processes. If you can implement the concepts of Data-as-a-Product, there are strong arguments that you have also created the data enablers required to distinguish your AI assets from the rest. Furthermore, you have removed the integration blockers that often hinder sustainable production use of data and the scale-out of AI.
Data-as-a-Product is hard to implement. In my opinion, it involves changing company culture, fostering accountability, and instilling self-leadership. This is more important than any framework, incentive model, or technology solution.
We have often discussed one of the common pitfalls of Data-as-a-Product. The main challenge is that it takes extra effort and energy, and involves additional risk, for a team that delivers data to another team, with no return of value for themselves. There are few incentives to encourage such behavior, and it relies on the altruistic idea that someone will do it for free. Internal contracts for data exchange are simply not as clear as those between companies, or between companies and customers. We told ourselves that incentive models needed to be adjusted, but none of us could recall any good examples where incentive models successfully changed a culture. It is like the old saying: ‘Culture eats strategy for breakfast.’ We cannot succeed if we don’t change the culture. As a first step, we believed we needed to educate the organization and preach the concepts of Data-as-a-Product, and to collate best practices, tools, and templates to support the organization. As a result, we launched an e-learning course on our global learning platform this summer. This is what we have delivered so far at a global level. To be frank, if we stopped here, we would have removed a blocker to transformation, but not driven the transformation itself.
It is hard to convey the vision of Data-as-a-Product if we, as leaders, lack the authenticity that comes from implementing and operating data products ourselves. In parallel, we looked at our own units to try out the concept in reality. These various pilots, or first stabs, are what is currently in execution throughout Telenor globally. Unfortunately, I have not yet seen the outcomes, so I cannot share them with you today or in my talk. What I can share are the learnings and observations from my own team.
My data team has followed many principles of Data-as-a-Product for several years. We did not do it because of incentives, upskilling, an epiphany, or a silver-bullet framework. We adopted the principles of DaaP because we believed we owed it to the rest of the organization to make sure the data in our platform was put to the absolute best use in the company. To accomplish this, we needed to scale out use cases without scaling our data team. This meant the data products we built and shipped had to scale out by themselves, not through additional team members. The solution was to design our platform around democratized data, automated deployment patterns, self-service, and discoverability, and to design APIs and delivery mechanisms that could easily be integrated into, and elevate, any business process. As a domain expert team, we concluded there is no one better suited to operate and monitor the solution than the team that builds and uses the platform daily. A few years ago, we embraced what we called a DevOps mindset. This approach meant that we didn’t just hand off what we built; we also ensured it continued to deliver business value over time.
With these motives articulated, the team started to develop the core behaviors for designing, building, and operating data products: industrialized data processing, reusable design patterns, 360-degree monitoring, and interfaces to the data platform that allow systems to easily feed and consume data and transactions. As an example, this year we launched a full CI/CD data product deployment pipeline. It takes our data lake assets and processes them through ETL specifications, datasets, and data applications, all the way to automatically populating the data product inventory, including documentation and descriptions. The inventory is exposed via an API that allows other systems to list the data products, fetch the information needed to request a product, and finally generate custom instances of the data product upon request. We might have come up with all of this ourselves, but I am a firm believer that the data strategy work at group level helped us articulate what was needed and how to tie it all together by embracing all aspects of Data-as-a-Product. The end goal of Data-as-a-Product is to make data generate sustainable and reliable business value over time.
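As a rough illustration of an inventory API like the one described above, here is a minimal sketch using FastAPI. The endpoint paths, payloads, and the in-memory inventory are invented for illustration; in the article, the inventory is populated automatically by the CI/CD pipeline, and the real interface may look quite different.

```python
# Hypothetical sketch of a data product inventory API; not Telenor's
# actual interface. Paths and payloads are assumptions.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Data Product Inventory (sketch)")

# In the article, this is populated automatically by the CI/CD pipeline
# from ETL specs, datasets, and data applications.
INVENTORY = {
    "iot-connectivity-events": {
        "description": "Deduplicated device connectivity events, hourly batches",
        "request_parameters": ["customer_id", "date_range"],
    },
}

class InstanceRequest(BaseModel):
    product: str
    parameters: dict

@app.get("/products")
def list_products():
    """Discoverability: list all registered data products."""
    return sorted(INVENTORY)

@app.get("/products/{name}")
def product_info(name: str):
    """Fetch the documentation and parameters needed to request a product."""
    if name not in INVENTORY:
        raise HTTPException(status_code=404, detail="unknown product")
    return INVENTORY[name]

@app.post("/instances")
def create_instance(req: InstanceRequest):
    """Generate a custom instance of a data product upon request."""
    if req.product not in INVENTORY:
        raise HTTPException(status_code=404, detail="unknown product")
    # A real implementation would trigger the deployment pipeline here.
    return {"product": req.product, "parameters": req.parameters, "status": "provisioning"}
```

The design choice worth noting is that other systems, not just people, are the consumers: listing, describing, and instantiating data products are all machine-callable, which is what lets the products scale out without scaling the team.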
Did we ever talk about Data-as-a-Product with the other parts of the business? Not really. We just adopted the principles ourselves. We are now strong advocates of the Data-as-a-Product concept, supported by our knowledge and numerous successful examples. I believe the takeaway is that DaaP does not require a specific technology, strategy, or framework. Even though it provides useful guidelines, tools, methodology, and direction, success starts with understanding the “Why”: grounding your efforts in creating long-term, sustainable value from data. When you consider what your team needs to deliver on that promise, Data-as-a-Product becomes more than just policies. It turns into principles you live by and advocate for, rather than something that complicates and delays implementation.
Hyperight: Could you elaborate on any challenges you faced when working as Head of Data? If so, what strategies did you employ to navigate and overcome them?
Anders Bresell: For many years, we focused on two main responsibilities: building the data platform and applying data science to support the business. The actual operations and monitoring of the platform were done by a separate team. We believed back then that this was a state-of-the-art organization, with Agile, Scrum, and all that.
However, this setup had many flaws. It was only when we took back operational responsibility and implemented a full DevOps approach that we began to see real quality improvements. We stated, “What we build, we also operate.” With this in mind, we ensured proper monitoring was in place, established quality KPIs, defined SLOs, and began measuring and following up on them. Additionally, we adopted an API-first approach as an integration layer and used event-driven buses for real-time integrations. We applied this approach to our own modular architecture, which made us comfortable exposing our data products to other teams, allowing them to integrate these products into their production systems and business processes.
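As an illustration of what "defining SLOs and measuring them" can look like for a data flow, here is a small hypothetical sketch; the dataset name and threshold are invented for the example and are not Telenor's actual KPIs.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative only: the dataset name and threshold are invented,
# not Telenor's actual quality KPIs.
@dataclass
class FreshnessSLO:
    dataset: str
    max_lag_minutes: int  # objective: data must be no older than this

def check_freshness(slo: FreshnessSLO, last_updated: datetime) -> dict:
    """Measure the current lag against the SLO and emit a monitoring record."""
    lag = (datetime.now(timezone.utc) - last_updated).total_seconds() / 60
    return {
        "dataset": slo.dataset,
        "lag_minutes": round(lag, 1),
        "slo_met": lag <= slo.max_lag_minutes,
    }

# "What we build, we also operate": a check like this would run on a
# schedule and feed alerting and dashboards.
slo = FreshnessSLO(dataset="iot-connectivity-events", max_lag_minutes=120)
print(check_freshness(slo, last_updated=datetime(2024, 1, 1, tzinfo=timezone.utc)))
```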
Another challenge was scaling the pool of data scientists to deliver more advanced analytics. Traditionally, scaling meant hiring more data scientists, which is both costly and slow. To address this, we identified recurring themes that could be industrialized, automated, and made self-service. This freed up data scientists to focus on innovation rather than repetitive analyses. To enhance the use and value of data, we also focused on democratizing it: lowering the barriers for data-savvy business users and making it easier for them to access and use the data and tools. We educated, trained, and focused on providing data products in the form of reports, ML-model predictions, dashboards, APIs, visual tools, and so on. There are always things to improve, but I consider this an overall very successful journey. We are proud of where we are now, and we do things today that will make us move even faster tomorrow.
Hyperight: From your experience, what makes the concept of Data-as-a-Product so crucial in harnessing the full potential of AI at scale?
Anders Bresell: First of all, AI may now be easier than ever to develop and put into production. But those production implementations will fail miserably if we cannot rely on a continuous flow of high-quality data. Production systems need contract-like assurance, and this is one of the key promises of Data-as-a-Product: someone is accountable and takes pride in ensuring that others can effectively use the data in their production systems and key business processes. Only when this is in place can we establish long-term, sustainable foundations for production-level AI. With these fundamentals, we ensure that investments in AI will generate business value over the long term, rather than only during a pilot phase or until a change disrupts a data flow.
Hyperight: As a Data Scientist by heart, what data trends do you expect to see in the upcoming 12 months?
Anders Bresell: I anticipate a growing number of impressive AI implementations that will become increasingly multimodal and more accessible to everyday business users.
However, I also expect a renewed focus on data and information management. Without it, organizations will struggle to make effective use of their proprietary data. I am not saying that traditional data governance principles have lost their place. Rather, other primitives will drive this focus: the promise and mindset of Data-as-a-Product.
Those that make the best use of their own proprietary data are the most likely to come out ahead of their competitors. The reason is that AI tooling and foundation models will become bread and butter, not something you develop yourself. You will buy them off the shelf or use open-source solutions. AI is for everyone. But what you can feed it with, how effectively you can scale it out, and how or where you apply it: that is what will set you apart.