Productizing Analytics: A Retrospective
Lessons learned from trying to have a metrics layer cake and eating it too.
Like many companies that formed before the era of “Big Data”, Mozilla needed to undergo a paradigm shift in order to bring data into the heart of its decision-making. While I was there, we established a cross-functional team that went on to build a centralized experimentation platform, brought Data Science, Marketing Analytics and Data Engineering under one reporting structure, developed processes that enabled a healthy back and forth between business leaders and data practitioners, and began to capitalize on the fruits of a massive data warehouse migration.
We were also under immense pressure to support a large number of stakeholders whose requests for analyses and rapid insights far outgrew the capacity of our small team (what, you too?). Leadership became increasingly convinced that our data team’s output could not scale with the needs of the business without enabling some kind of self-service analytics capability.
In late 2020, a few members of the Data team surveyed candidate analytics tools. After some discussion and debate about the direction, they ultimately chose to add Looker to our tool stack, believing that its “explores” feature would let non-data practitioners answer the bulk of their own questions and help democratize data throughout the company.
Addressing the short term
One of the issues you often encounter when rolling out any new platform is that it feels like you’re building the car while you’re driving it. There was an insatiable hunger for data, and the faster we could enable product owners to answer questions on their own, the more time the Data Science team could spend on the difficult, meaty data problems.
When I joined the project, the Looker deployment was already well underway. The rollout team went with a simplified “hub and spoke” approach, where the hub was entirely auto-generated so that data in Looker would stay up to date with the data warehouse without manual intervention. They conducted many user interviews with customers across the Product org and had a process for onboarding each new group into its own content folder, complete with a tutorial and guidance on building dashboards. Their goal was to onboard at least eight product teams in the first half of 2021.
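To make “entirely auto-generated” a bit more concrete, here is a minimal sketch of how a hub view could be derived from a warehouse table’s schema. It assumes a BigQuery-backed warehouse and the google-cloud-bigquery client; the table name is illustrative, and this is not the actual generator the rollout team built.

```python
from google.cloud import bigquery

# Rough mapping from BigQuery column types to LookML dimension types.
TYPE_MAP = {
    "STRING": "string",
    "INTEGER": "number",
    "INT64": "number",
    "FLOAT": "number",
    "FLOAT64": "number",
    "BOOLEAN": "yesno",
    "BOOL": "yesno",
    "TIMESTAMP": "time",
    "DATE": "time",
}


def generate_view(client: bigquery.Client, table_id: str) -> str:
    """Emit a LookML view for one warehouse table, one dimension per column."""
    table = client.get_table(table_id)
    lines = [f"view: {table.table_id} {{", f"  sql_table_name: `{table_id}` ;;", ""]
    for column in table.schema:
        lookml_type = TYPE_MAP.get(column.field_type, "string")
        lines += [
            f"  dimension: {column.name} {{",
            f"    type: {lookml_type}",
            f"    sql: ${{TABLE}}.{column.name} ;;",
            "  }",
        ]
    lines.append("}")
    return "\n".join(lines)


if __name__ == "__main__":
    # Hypothetical table; regenerating views like this on a schedule keeps the
    # hub in sync with the warehouse without anyone hand-editing LookML.
    print(generate_view(bigquery.Client(), "my-project.telemetry.events_daily"))
```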
There is a tradeoff between short-term “keeping stakeholders happy” on one side, and tech debt and the potential for low data team morale on the other. The Looker rollout team’s choice to make the onboarding of eight stakeholder teams a measure of success was not a bad one at all, as it was very early in development. It allowed them to learn, and they approached it with a conscious acknowledgement that they might have to dismantle and rebuild some of their work once they understood the benefits and limitations of their technical choices.
It did mean, however, that a few months in, the direction of the Looker rollout needed to change. Other members of the data team were still on the hook for building dashboards on top of robust Looker explores for their stakeholders, whether or not they were part of the central Looker rollout effort, and by onboarding each stakeholder team independently, the platform started running into the same problems that sparked the metrics layer discussion. How do you build an analytics platform that is flexible enough to let people explore and answer their own questions, while at the same time making sure there are guardrails in place so that similar-sounding metrics, calculated slightly differently, don’t compete with each other and erode hard-earned trust in the data team?
In the shorter term, to make sure there was only one source of truth for our high-level business metrics, we put all dashboards and reports that the data team built and approved into one central “Canonical Metrics” folder in Looker. The metrics in those dashboards were considered “production” and had a much stricter set of requirements than anything built in the sandbox spaces created for stakeholders. We also enforced a dashboard “truth hierarchy”: if, while exploring data in Looker, a stakeholder developed a metric that “disagreed” with one already established in the canonical dashboards, the burden of proof was on them to understand the misalignment. The data team was available for quick investigations through a company Slack channel, but the goal was to teach stakeholders how to fish and to keep the data team from spending all of its time chasing down numbers that didn’t quite agree.¹
Moving to a longer-term vision
When standing up a new data platform, developers often focus more on giving stakeholders access to data than on understanding what they’ll do with that data once they have it. As data practitioners, we instinctively want to build the platform in the same direction the data flows: start from where it’s collected, clean it up, and then put it somewhere accessible to a wider audience. Each step along the way is a genuinely difficult infrastructure problem to solve. After such a humongous feat, seeing stakeholders access the data feels like a success in and of itself, and only then do we spend much time thinking about what they might do with it.
To balance the data team’s bottom-up approach to developing data models in Looker, my focus while wearing a Data Product Manager hat became creating a top-down vision that could drive development from the point of view of business needs, with the goal of meeting engineering somewhere in the middle. I partnered with our stellar program manager to create a structure that would define a formal Analytics Program.
First, we created a vision for what we wanted the platform ultimately to be: “A modern, industry-standard business analytics toolkit that enables Mozilla to grow and thrive.”
Next, we wrote a shared document that clearly outlined the vision, problem statement, scope, overall goals, and, most importantly, the operating principles. Here are the five principles we came up with:
Discoverability: Canonical (production) metrics, as well as the explores that supported them, needed to be easy to find if someone only had a link to the landing page. This principle established the need for some kind of intuitive metrics taxonomy.
Minimize cognitive load: Metrics that the data team created needed to be simple and their visualizations easy to understand. Metric definitions should be no more than a tooltip away. Any opportunity for confusion should be minimized.
Democratize data intuition: This was possibly the fuzziest principle, but the goal was to make sure that the underlying data model (manifesting in Looker explores) was designed to make it easy for non-data experts to start learning how to answer their own questions without the help of a data expert in the room. This was a crucial step towards enabling self-service on the platform.
Robust usability: Users of the analytics tools needed to be able to explore data with low latency, and the data team needed to have a strategy for maintenance and SLAs.
Dev-friendly for Analysts: While the first four principles called out the needs of the product teams as customers, data practitioners who supported those teams also needed to be considered customers of the platform. This last principle was aimed at making sure that Data Scientists and Analysts had as smooth a workflow as possible from design to development to dashboard production, and that whatever they built would be easy to maintain.
With the vision and these five principles in hand, we pulled together a small team of managers, data scientists, and data engineers to create a tactical roadmap for building toward the vision. They also acted as champions of the program, helping get buy-in from the rest of the data team.
Defining Success
While all of this was kicking off, it happened to line up with Mozilla’s goal-setting season, which ran semi-annually on OKRs. Since we were just forming as a product team and it was a bit early for a more formal OKR process, I decided to set exploratory KRs that allowed us to measure baselines.
One of the best parts of the whole program was getting to work with the User Research team, who graciously offered some of their time and resources. Because of the way we defined the five principles, we needed to understand how our customers felt about using the platform. Did they find it intuitive? Were metrics actually easy to discover? Were they actually able to answer questions with data that helped them do their jobs?
To begin to measure our progress towards the first three principles and ensure a great analytics customer experience, we set our first two KRs:
Complete three “usability reviews” on dashboards in design phase. This would be done by a user researcher who would act as a fresh set of eyes on proposed (with wireframes) or already prototyped dashboards.
Set CSAT baseline by creating a user interview protocol based on what business questions users want to answer with data, and completing at least three user interviews. Here, we wanted to ask questions that could be tracked longitudinally, such as “How easy was it to find the answer to [insert business question here]?”
For the fourth principle (“Robust usability”), I worked with the Data Engineering manager on a separate KR that would establish baseline Looker user experience expectations for latency, as well as SLAs in case of any outages.
Finally, measuring success against the last principle (“Dev-friendly for Analysts”) was a little trickier. We opted to start with a KR requiring the provenance of every data model in Looker to be documented, which would make it easier for any new developer to ramp up on product data they were less familiar with.
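As a rough illustration of what “documented provenance” could look like for a single model, here is a hypothetical record structure. The field names and the example values are mine, not the format the team actually settled on.

```python
from dataclasses import dataclass, field


@dataclass
class ModelProvenance:
    """Hypothetical provenance record for one Looker data model."""

    explore_name: str                 # the Looker explore being documented
    source_tables: list[str]          # upstream warehouse tables it reads from
    derived_via: list[str] = field(default_factory=list)  # ETL jobs or views in between
    owner: str = ""                   # team responsible for the model
    definitions_url: str = ""         # where the metric definitions live


# Illustrative entry only; the names do not refer to real Mozilla models.
example = ModelProvenance(
    explore_name="events_daily",
    source_tables=["telemetry.events"],
    derived_via=["daily_event_rollup_job"],
    owner="Data Engineering",
    definitions_url="https://example.com/docs/events_daily",
)
```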
Retrospective
Some time has passed, and I’ve had the opportunity to reflect on how setting up the analytics program at Mozilla went.
One thing that worked really well was creating alignment with stakeholders early in the process by writing up clear requirements documents. Scope creep is common in data tool development, especially when the tool is expected to magically answer any question stakeholders might have of the data. Establishing early on exactly what we were going to build and what set of questions a particular data model would answer saved us some frustrating conversations down the line.
As I mentioned before, getting User Research involved was also a lot of fun. Not only did they teach the data team some new qualitative techniques that helped us understand how our stakeholders felt about our tools, but the project also helped our two teams grow closer. If you have a user research team at your company and don’t know them very well, think about reaching out to say hi. You probably have more in common than you think.
On the more challenging side, getting buy-in from the rest of the data team to change their development practices wasn’t as smooth, especially when I asked them to spend a little more time documenting what they planned to do before doing it. With the onslaught of requests coming into the team, asking them to pause and design dashboards required a major shift in how the team operated. Luckily, data org leadership backed me up and made sure it happened in practice; otherwise it never would have worked. And once the team got used to it, I heard from a few converts that they actually found creating design documents useful.
As for things I would do differently, I missed out on an opportunity to explicitly create a KR to measure some higher-level business impact. At the end of the day, keeping stakeholders satisfied isn’t exactly the ultimate value-add of a platform that seeks to democratize data. Understanding how that data was used in decision-making and seeing real company growth might be difficult to measure, but it is the goal.
I’m also not sure that we quite nailed down the best success metric for the last “Dev-friendly” principle of the program. Were we ultimately treating Data Scientists as first class customers of the data stack? Or were they merely trading efficiency gains from having to answer fewer ad hoc questions for a less efficient analytics workflow? Had I stayed at Mozilla longer, I would have wanted to dive into this more. There were certainly a lot of reflections on how Looker was set up and how we would do that integration differently, but I’ll save that discussion for another day.
In the end, I think having some structure, especially around best practices and the idea of customer satisfaction as a success criterion, really helped push the analytics program in a good direction. Even if tools come along that purportedly solve the metrics layer problem, there is still a huge human element to developing data systems that get used without a data expert holding anyone’s hand. I’m curious to see how our industry evolves as it continues to tackle that problem.
Thanks to Frank Bertsch for helpful feedback on this post!
Opinions expressed here are my own and do not express the views or opinions of my past employers.
¹ I’m glossing over the fact that the “hub and spoke” approach I referenced was supposed to make metrics standardization easier. In theory, maybe; in practice, not so much. That deserves its own blog post. Someday.