5 Common DataOps Mistakes (and How To Avoid Them)
What exactly is DataOps? According to Rob Parker, it’s both a mindset and a practice. And the goal is to deliver usable data solutions faster.
Rob should know, having built a career on wrangling data to drive business results—something he continues to do in his current role as the Senior Director of Data and Analytics for GitLab.
“Decades ago, the typical software development cycle was to gather requirements, build something, test it, share it with end users, and then fix what you got wrong,” he says. “DataOps aims to bypass all of that by taking a more iterative and automated approach. Instead of upfront design and requirements phases, you identify the smallest thing you can deploy to add value to your business, and you integrate that into your data software.”
“DataOps is both a mindset and a practice. And the goal is to deliver usable data solutions faster.”
At GitLab, they’ve adopted tools like Snowflake and dbt to automate development, testing, and deployment to their production environment. “Each of these tools works within the GitLab tool chain. So I can write the software, build something very small, and automatically deploy it into the production environment literally within hours,” says Rob. “That’s a huge game-changer, because it allows us to de-risk a data project, get stuff done faster, and deliver value to the business faster.”
To help companies that are in the process of exploring or formalizing their DataOps efforts, Rob shared five common mistakes to avoid.
1. Overlooking the importance of cultural mindset
If your organization doesn’t already have a DataOps mentality, it can be challenging to shift the culture effectively. The default when it comes to data is to focus on getting it right, which typically leads to a lot of testing, user acceptance testing (UAT), more testing, and then—finally—deploying.
But that approach doesn’t apply to modern DataOps. “To successfully shift the organizational mentality into the right space to support DataOps, you need to switch from focusing on making massive changes to focusing on making smaller, iterative changes to move the needle,” explains Rob.
So instead of waiting until you’ve built a complete dashboard with 50 fields and 45 charts, you create and deploy a single data product to see if people use it. Then you collect their feedback and iterate from there, which might mean modifying the existing product or deploying a second product. Basically, you’re breaking the one big thing you would have built before into smaller and smaller pieces.
Doing this successfully requires that you have a technical champion on your team—someone who can build out your technical stack so it can be adopted and automated. The good news is that most people in the data space are receptive to this new approach. They just need some guidance and leadership on how to get started.
2. Buying a data warehouse just because you want one
Rob points out that many companies can get by without a data warehouse because so many of today’s tools have great native reporting capabilities built right in. “Salesforce, Zuora, and Workday, for example, all have fantastic reporting and analytics capabilities,” Rob says. “And if these tools are giving you the analytics you need, why would you need a data warehouse?”
An organization must reach a particular maturity level before it needs a data warehouse. Usually, that need arises when things break down with cross-silo or funnel reporting. For example, you may need to report on lead conversions, but the activities required to generate that report don’t flow directly between the various tool environments.
“Where a data warehouse adds value is in its ability to aggregate all the data in one place,” Rob says. “And then, with everything in one place, you’re able to report on that funnel’s analytics—whether it’s the customer journey, lead-to-sales, or any other funnel.”
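To make the aggregation point concrete, here is a minimal sketch of the kind of cross-silo funnel query a warehouse enables. The table and column names (`leads`, `orders`, joining on email) are hypothetical stand-ins for data landed from a CRM and a billing system; real schemas will differ.

```python
import sqlite3

# Toy "warehouse": leads landed from the CRM, orders landed from billing.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE leads (lead_id INTEGER, email TEXT);
    CREATE TABLE orders (order_id INTEGER, email TEXT, amount REAL);
    INSERT INTO leads VALUES (1, 'a@x.com'), (2, 'b@x.com'), (3, 'c@x.com'), (4, 'd@x.com');
    INSERT INTO orders VALUES (10, 'a@x.com', 500.0), (11, 'c@x.com', 250.0);
""")

# With both silos in one place, a single join answers the funnel question:
# what fraction of leads converted to a paying order?
leads, converted = conn.execute("""
    SELECT COUNT(DISTINCT l.email) AS leads,
           COUNT(DISTINCT o.email) AS converted
    FROM leads l
    LEFT JOIN orders o ON o.email = l.email
""").fetchone()

print(f"lead-to-order conversion: {converted}/{leads} = {converted / leads:.0%}")
```

Neither source system can answer this on its own, because each holds only half of the join; once both land in one place, the funnel metric is a single query.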
3. Failing to align with business needs
“DataOps is not a technical architecture solution,” Rob explains. “It’s about helping the business make better decisions, do more, and be more efficient and effective.”
The goal of the DataOps team is to derive insights that didn’t exist before. And in order to understand which insights you’re missing, you need to be deeply engaged with the business. This means meeting regularly for conversations with key business stakeholders, whether via a steering team or an agile-type standup meeting. It’s critical to keep your perspective clear and focused on what the business needs.
4. Choosing the wrong technology
Another mistake Rob sees companies make is choosing a technology that isn’t really solving the business need. In a lot of cases, this is because the DataOps team either doesn’t understand or has misinterpreted the actual business needs.
So—again—have those conversations, ask the hard questions, dig more deeply into exactly what insights you need to drive business outcomes. And then apply everything you’ve learned as you assess various technologies and products.
5. Looking inward instead of outward
“You don’t want to spend too much of your time thinking about how your data team should be organized or what your roadmap will look like,” Rob says. “The majority of the most successful data teams think of themselves as almost a consulting expert in data for the rest of the business. This perspective helps them focus more on looking outward instead of inward—focusing on the business needs.”
DataOps trends and possible future advancements
Rob thinks we’re close to a world in which a business analyst can spin up a pipeline on their own without having to engage a data team. “You really only need a login and password into the source system to stand up at least a very basic pipeline into your warehouse,” Rob says. “From there, you can write some SQL to interrogate data very, very quickly.”
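The pattern Rob describes can be sketched in a few lines: pull rows from a source system, load them into the warehouse, then interrogate them with SQL. The source data and the `invoices` schema below are invented for illustration, and SQLite stands in for the warehouse; in practice the extract step is an API or database pull using the source-system credentials.

```python
import sqlite3

def extract_from_source():
    # Stand-in for an API or database pull from the source system; in
    # practice this is where the source login and password would be used.
    return [
        {"invoice_id": 1, "customer": "acme", "amount": 1200.0},
        {"invoice_id": 2, "customer": "acme", "amount": 800.0},
        {"invoice_id": 3, "customer": "globex", "amount": 300.0},
    ]

# Load the extracted rows into the warehouse (SQLite here for illustration).
warehouse = sqlite3.connect(":memory:")
warehouse.execute(
    "CREATE TABLE invoices (invoice_id INTEGER, customer TEXT, amount REAL)"
)
warehouse.executemany(
    "INSERT INTO invoices VALUES (:invoice_id, :customer, :amount)",
    extract_from_source(),
)

# "From there, you can write some SQL to interrogate data":
top = warehouse.execute(
    "SELECT customer, SUM(amount) FROM invoices GROUP BY customer ORDER BY 2 DESC"
).fetchall()
print(top)
```

This single-source case is what an analyst can plausibly self-serve; the blending Rob describes next, joining several such feeds into one model, is where the work gets harder.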
Where things get a little trickier—and potentially require a more specialized skill set—is when you need to blend data. An example: If you want to join your sales and billing data to your marketing and product data to create single-source-of-truth reporting across your business and customer journey, that gets more complicated.
In the not-too-distant future, Rob hopes to see additional advancements that will continue to broaden the scope of what organizations can do in the DataOps field, even without a lot of specialists. For instance, he’s eager for someone to develop tools that deliver what he calls “auto insights.” These tools would learn an organization’s data, scan it, examine historical trends, drill down into deeper layers (beyond the top-level KPIs), and automatically surface insights that would otherwise remain undiscovered.
“It’s the thing you’re not looking at that gets you in the end,” Rob says. “So a tool that can automatically interrogate your data and deliver ideas of the things you may want to consider would be incredibly valuable.” Rob imagines this kind of tool will be less for a data scientist or a data engineer, and more for a data analyst or functional analyst.
While we’re waiting for these new technologies to arrive, organizations can focus on building a strong DataOps foundation by taking Rob’s advice and sidestepping the most common mistakes.