Why I Never Used Data as a Product Manager
Once upon a time, I was a software engineer. I started to code when I was a little kid, because I wanted to build a robot that could play football. While this wasn’t a particularly successful project, I soon moved on to building software products instead. In high school, I landed my first full-time software engineering job.

I spent many years building products and honing my craft – the craft of engineering. A lot of this time was focused on data and AI products for the enterprise. I built internal data tools to better understand our customers, and customer-facing reporting platforms for our users to crunch their own numbers.

But at some point, I realized that it didn’t matter how great our engineering was if the product didn’t actually help the user. It doesn’t matter how great the technology is if the business doesn’t make sense.

The switch

So, I moved to the dark side and became a product manager.

And at the flick of a switch, I became terrible at using data.

I was technically capable of doing it, sure. I could write SQL, or I could whip up a Jupyter notebook, where I could write some Python code to analyze whatever data I needed to look at.

But the reality was, I almost never did. Having once chastised PMs for acting just based on gut feeling, I was now one of them. A part of the problem.

Sure, I would occasionally look at things like our onboarding funnel to see where people were dropping off. But I had no idea whether getting people through onboarding even had an impact on our north star metrics. I had no idea what the knock-on effects of certain user behavior or of changes we were making were. No clue.

Why was I like this?

We had a product analytics tool that we sent tracking events to (whenever we remembered this was a thing). We had a business intelligence (BI) tool in place, with hundreds of dashboards. We had a data warehouse with all product-related data in one place.

But I didn’t use any of it.

The BI tool with hundreds of dashboards sure had a lot of stuff in it, but that meant it was impossible to find the right report, or even to know whether it existed. Or you’d find four different dashboards for the same thing, each subtly different.

The solution? Obviously, let’s create the fifth dashboard, this time one to rule them all. Until someone creates the sixth one.

The product analytics tool had a bunch of events coming from our UI. By default, that was all the data it had, though. Want to segment a feature’s usage based on how many active teammates with write access the user has? Tough luck, you’ll have to ask the engineers.
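This is the kind of question where UI events alone fall short: answering it means joining usage events with warehouse-side data about teams and permissions. A minimal sketch of that join, in Python – every table, field, and number here is hypothetical, purely for illustration:

```python
# Hypothetical sketch: segmenting a feature's usage by a warehouse-side
# attribute (number of active teammates with write access) that a
# UI-events-only analytics tool wouldn't have. All names and values
# below are made up for illustration.

from collections import Counter

# Feature-usage events, as a product analytics tool might export them
events = [
    {"user_id": 1, "event": "report_exported"},
    {"user_id": 2, "event": "report_exported"},
    {"user_id": 3, "event": "report_exported"},
]

# Warehouse-side data: each user's count of active teammates with write access
write_access_teammates = {1: 0, 2: 4, 3: 12}

def bucket(n: int) -> str:
    """Bucket teammate counts into coarse segments."""
    if n == 0:
        return "solo"
    if n < 10:
        return "small team"
    return "large team"

# Join the two sources and count usage per segment
usage_by_segment = Counter(
    bucket(write_access_teammates[e["user_id"]]) for e in events
)
print(usage_by_segment)
```

In practice this join would happen in SQL against the warehouse rather than in a script – which is exactly why the events-only tool couldn’t answer it on its own.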

Exploring data is, when done well, a highly iterative process. It’s rare that you think of the perfect report, put it together, uncover a massive insight, and move on with your day.

More likely, you’ll look at some data. Then you’ll realize you’re looking at the wrong thing, and look at something else. This gives you an idea, so you check a couple of follow-up questions. That one thing was weird, so you double-click into it to understand what’s going on. Then you zoom back out and see whether what you found holds more broadly.

You want to explore and to iterate. To build up your intuition, and to learn. You’re not trying to design a perfect little dashboard.

But I could code, right? Why not just pull in data from the data warehouse and analyze it using whatever tool I had?

Time and friction

Most data questions you have aren’t actually going to have earth-shattering results. This doesn’t mean you should stop testing your hypotheses or answering questions.

Instead, it means you need to get the friction of getting these answers to be incredibly low.

For me, data analysis had started to feel like this entirely separate ‘mode’ that I needed to switch to. Instead of focusing on whatever my task at hand was, I needed to put on a data scientist’s hat. It felt like I’d need to schedule a Thursday afternoon to go and ‘crunch those numbers,’ instead of simply getting an answer in the context I cared about it in.

While I could technically do this, I almost never did. I couldn’t justify the context switching and sunk time.

Asking for help

Ironically, we kind of considered ourselves a data-driven company. One side effect was that we’d built up a great data team. I could go to them for help any day of the week!

In reality, though, there’s a limit to how many questions you can throw over the fence for someone else to deal with. It’s also often extremely hard to frame a data question precisely without first looking at the data available to you. Again, it comes back to exploring a problem space rather than building a report.

The data team also happened to be constantly busy with a million different things – splitting their focus between building some of our AI products, working on data infrastructure, and fielding everyone else’s questions. This meant slow turnaround times; by the time they could get back to me, I had often already made the decision or moved on.

I also felt ashamed. I was supposed to be able to do this myself (and I technically was). So I almost never went to the data team, either.

Getting fed up

Over time, this dissonance started becoming too strong to ignore. I felt like I was a data person at heart, but these had become empty words.

I realized that while people talking about ‘using data’ generally meant checking their KPI dashboards, this wasn’t where the real value came from.

Data becomes actionable and useful when you’re testing, confirming, or rejecting hypotheses. The value comes from answering specific ad-hoc questions, and from iteratively exploring the problem I was trying to understand – not just ‘building a report’.

Fast forward to today. Last month, we were proud to announce Supersimple, a self-service business intelligence platform designed for fast-growing SaaS companies. We designed the entire platform specifically around helping people iterate towards answering complex ad-hoc questions – without having to worry about anything like SQL or code.

This keeps people focused on the business problems they’re solving, not on implementation details. This in turn means people actually using data. We’re in a fortunate position where we’re able to dogfood our own product to understand our own business – and it feels great.

Read more about Supersimple for product teams.