Data 2030 Summit DataOps talks you can’t miss

The value of data grows in importance daily, yet many organisations still lack mature processes for transforming data into valuable insights that inform business decisions. Moreover, as data and analytics become vital business processes, data pipelines grow more complex and data teams grow in size. All of this creates a dark vault of manually-created, non-scalable and non-reusable tools built in isolation. The result is sluggish data pipeline development that fails to keep up with the needs of a data-driven enterprise, and an error-prone operational environment that is slow to respond to change requests, finds Eckerson Group in their whitepaper DataOps: Industrializing Data and Analytics. On top of failing to keep up with business demands, organisations can’t ensure their data is governed and protected and that it complies with a multitude of global industry and data privacy regulations.

All these issues call for a new approach that helps organisations operationalise their data platforms and scale, increase agility, shorten cycle times, and govern data flow at every stage of the lifecycle, while delivering higher data quality and better insights from their data, faster.

This is the vision of DataOps: an approach that accelerates the creation and delivery of data pipelines, automates data flows and ensures data solutions are delivered and meet business needs in the fastest and most effective way. Or, as Wayne W. Eckerson and Julian Ereth, the authors of the Eckerson Group whitepaper, put it: “DataOps applies rigour to developing, testing, and deploying code that manages data flows and creates analytic solutions.”
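
As a loose, hypothetical illustration of that rigour (not taken from the whitepaper), a DataOps team might guard a pipeline stage with automated data-quality checks that run before a change is deployed. The Python sketch below assumes an illustrative pandas DataFrame of order records; the column names and rules are invented for the example:

    # Hypothetical data-quality checks that could gate a pipeline deployment,
    # in the spirit of applying testing rigour to code that manages data flows.
    import pandas as pd

    def validate_orders(df: pd.DataFrame) -> list:
        """Return a list of human-readable data-quality violations."""
        errors = []
        if df["order_id"].duplicated().any():
            errors.append("duplicate order_id values found")
        if (df["amount"] < 0).any():
            errors.append("negative order amounts found")
        if df["customer_id"].isna().any():
            errors.append("orders with missing customer_id found")
        return errors

    def test_orders_are_clean():
        # In a real pipeline this would load a staging extract instead of a fixture.
        df = pd.DataFrame({
            "order_id": [1, 2, 3],
            "customer_id": ["a", "b", "c"],
            "amount": [10.0, 25.5, 3.2],
        })
        assert validate_orders(df) == []

A check like this can run in continuous integration alongside the pipeline code, so broken data or broken transformations are caught before they reach production.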

Experts define DataOps as data management for the AI era. It’s one of the fundamental pillars of a modern data management strategy. However, since it’s a fairly new methodology, the majority of companies are clueless about where and how to start implementing DataOps.

This is why the Data 2030 Summit features a whole day dedicated to DataOps, with technical sessions focusing on techniques for applying an agile, collaborative and user-friendly approach to building and managing data pipelines.

DataOps talks at the Data 2030 Summit

To get right into it, these are some of the inspirational DataOps talks you will have the opportunity to hear on the third day of the Data 2030 Summit:

DataOps – Lean Principles and Lean Practices | Lars Albertsson – Founder | Scling

DataOps is the transformation of data processing from a craft with manual processes to an automated data factory. Lean principles, which have proven successful in manufacturing, are equally applicable to data factories.

In his talk, Lars Albertsson will present how lean principles can be applied in practice for successful data processing. During this session, you will learn:

  • Data engineering is the craft of building resilient data factories.
  • DataOps is lean thinking applied to data processing.
  • Identifying and eliminating waste is a key element for successful DataOps.
  • The most common forms of waste that prevent DataOps success.

Lars Albertsson is the founder of Scling, which provides data-value-as-a-service – customer-adapted data engineering, analytics, and data science. Before founding Scling, Lars worked at Google, Spotify, Schibsted, and as an independent consultant, helping organisations create value with data processing technology.

How to structure a modern DataOps team | Manuel Brnjic – DataOps Team Lead | WALTER GROUP

Manuel leads the DataOps Team at WALTER GROUP. In his previous roles, he worked on agile software projects as a solution architect. Now his focus is on data and on enabling the enterprise to become a truly data-driven company.

Throughout his experience, he quickly realized that an old-fashioned centralized data lake is not the best answer to today’s needs. Especially for IT departments with old legacy systems that are trying to become data-driven, there are several cultural and organizational challenges to cope with on a continuous basis. Manuel is in charge of building a new data platform that considers the whole data life cycle. It is not just about providing a proper data lake alternative for the data science team; his team also ensures that data governance is set up appropriately across all data domains.

At the Data 2030 Summit, Manuel will impart some knowledge on how to structure a modern DataOps team.

The awesome evolution of data processing. From dull numeric sequences to colorful drawings of the future. Human contribution is here to stay. | Andrea Piro – DataOps Manager | A.P. Moller – Maersk

Data processing has always been with us. Living in the technology era, we feel as if automation, integration and decision-making can run unmanned. Is DataOps just another step toward full machine autonomy?

Andrea Piro is going to answer this question during his Data 2030 Summit session, where he will talk about how humans and machines can each contribute, in their own different ways, to the progress of companies and society.

Things you will take away from this session:

  • Team transformation and build
  • SLAs suitability
  • Competitive advantage is a recipe based on new ingredients
  • The trust blockchain, when a customer reaches the backend.

How a Data Pipeline Playbook Helps to Succeed With a Digital Transformation | Lotte Ansgaard Thomsen – Lead Big Data Engineer | Grundfos

Many companies have challenged themselves to succeed with digital transformation, which is a new area without best practices, common language and clear measures of success. 

Lotte Ansgaard Thomsen will address how Grundfos has tried to mature and improve its data projects by writing a Data Pipeline Playbook and organizing Data Engineers in a Data Pipeline Community. She will also describe the reasoning behind these approaches and their technical goals.

Some of the key points you will learn at Lotte’s session are: 

  • Maturity of a Digital Transformation
  • Creation of a common language
  • Data foundation for AI projects
  • Technical goals of a Data Pipeline

Apart from the above exciting talks, you will also learn how to leverage DataOps to operationalize ML and AI, how DataOps is used with containers and applied AI, continuous evaluation and improvement of deployed models in production, the open ML and AI DevOps tool suite of the future, and more.

In between all these innovation sessions, you will also have the opportunity to join roundtable discussions on DataOps-related topics, share your opinion and gain valuable insights from your peers.

Some of the engaging roundtable discussions will focus on:

  • Overcoming enterprise obstacles with DataOps
  • How to organize your data to be trusted and business-ready for your journey to AI
  • How to apply agile (DevOps) techniques to your data and analytics work
  • How to enable collaboration across key roles involved in the delivery of data pipelines
  • DataOps – how to start with a focus on requirements definition, development, and tracking value
  • How to apply DataOps techniques to struggling or failing projects and pipelines.
