Custom Data Analytics: 4 Steps to Getting It Right

Anath DiPillo

Data analytics are the backbone of business insights, and these days, there are plenty of solid reporting systems to choose from. However, sometimes complex products—like apps—are better served by custom solutions than out-of-the-box tools like Google Analytics.

Opting for custom analytics opens up infinite opportunities in terms of the types of analyses you can run, and therein lies the challenge: How can you focus your reporting so it’s helpful, not overwhelming?

Here are four steps that will ensure your custom analytics solution delivers the insights you need.

Get your building blocks in place

Regardless of what types of analysis you plan to run, there are four essential elements of data reporting. Think of these as your foundation: make sure they’re rock-solid before proceeding.

Cohort definitions. Cohorts let you slice and dice data based on audience segments, so the first thing you need to do is define what these cohorts are. Start with the most obvious groupings (for example, app users and non-app users), and eliminate any noise that may skew results (like internal testers). Later, you can add segments based on behaviors and activities.
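As a minimal sketch of this step, here's how the first pass at cohorts might look in plain Python. The field names (`has_app`, `internal_tester`) are hypothetical stand-ins for whatever your pipeline actually records:

```python
# Hypothetical user records; in practice these come from your event pipeline.
users = [
    {"user_id": 1, "has_app": True,  "internal_tester": False},
    {"user_id": 2, "has_app": False, "internal_tester": False},
    {"user_id": 3, "has_app": True,  "internal_tester": True},
    {"user_id": 4, "has_app": True,  "internal_tester": False},
]

# Eliminate noise first: internal testers skew every downstream cohort.
external = [u for u in users if not u["internal_tester"]]

# Start with the most obvious groupings.
cohorts = {
    "app_users": [u for u in external if u["has_app"]],
    "non_app_users": [u for u in external if not u["has_app"]],
}

print({name: len(members) for name, members in cohorts.items()})
# {'app_users': 2, 'non_app_users': 1}
```

Note the order of operations: strip testers before segmenting, so the noise never makes it into any cohort.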

Lexicon clarity. Different analytics platforms have unique naming conventions for events and properties, and your developers may have their own lingo, too. Don’t assume you know what something means! Your definition of a ‘session start’ may not be the same as your platform’s (or your developer’s). Go through each event and make sure there is shared understanding and language to describe it.
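One lightweight way to enforce that shared language is a lexicon mapping in code. The event names below are made up for illustration, but the pattern applies to any platform:

```python
# Hypothetical lexicon: map each system's native event name to the one
# shared term the whole team has agreed on.
LEXICON = {
    "session_start": "Session Start",   # the analytics platform's name
    "app_open": "Session Start",        # the developers' name for the same event
    "first_open": "First Launch",
}

def canonical_name(raw_event: str) -> str:
    """Translate a raw event name into the shared vocabulary.

    Raising on unknown names forces the team to define new events
    explicitly instead of letting ambiguity creep back in.
    """
    if raw_event not in LEXICON:
        raise KeyError(f"Undefined event: {raw_event!r} - add it to the lexicon")
    return LEXICON[raw_event]

print(canonical_name("app_open"))  # Session Start
```

Failing loudly on unknown names is a design choice: it turns every undefined event into a conversation rather than a silent mismatch.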

Firing accuracy. Once you have your lexicon down, you’ll want to make sure every event fires when it should. The best way to do this is to pick up your product, walk through it screen by screen, and monitor the events coming through your pipeline.
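That screen-by-screen walkthrough can be backed by a simple audit: for each screen, compare the events you expected against what actually came through the pipeline. The screen and event names here are hypothetical:

```python
# Hypothetical expected-events map: which events should fire on each screen.
EXPECTED = {
    "home": {"screen_view", "session_start"},
    "checkout": {"screen_view", "begin_checkout"},
}

def audit_screen(screen: str, observed: set) -> dict:
    """Compare observed events for a screen against the plan."""
    expected = EXPECTED[screen]
    return {
        "missing": expected - observed,      # should have fired but didn't
        "unexpected": observed - expected,   # fired but wasn't planned
    }

result = audit_screen("checkout", {"screen_view"})
print(result["missing"])  # {'begin_checkout'}
```

Running this as you tap through the product surfaces both silent failures and stray events in one pass.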

Data cleanliness. Tidying data at the outset will save you time and trouble down the road. Standardize event names and labels across platforms, create any new aggregated events, and, yes, check for typos.
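A small normalizer can handle most of this standardization automatically. This is a sketch; the typo table is a hypothetical example of corrections you'd accumulate during cleanup:

```python
import re

# Hypothetical typo corrections spotted during cleanup.
TYPO_FIXES = {"sesion_start": "session_start"}

def clean_event_name(name: str) -> str:
    """Standardize an event name: trim, lowercase, snake_case, fix known typos."""
    name = re.sub(r"[\s\-]+", "_", name.strip().lower())
    return TYPO_FIXES.get(name, name)

print(clean_event_name("  Sesion Start "))  # session_start
```

Run every incoming event label through a function like this at ingestion time, and the typo check becomes automatic rather than a one-off audit.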

Set a goal for each dashboard

It’s easy to get excited and start building reports for a giant catchall dashboard that... ultimately won’t tell you much. Instead, create a shortlist of dashboards you need, like “audience profile” or “first interactions.” From there, ask each stakeholder to write down a single goal (not a metric!) for each dashboard. Share these goals and align your team around them.

Pare down KPIs

There’s a big difference between designing KPI-driven reports and conducting research with data. Some views are important for developing audience hypotheses that help you design your cohorts: these might include distribution reports that show behaviors of the top 10% vs. all other users. You may have a goal of moving users into this 10% cohort, but that report is a research tool, not an indicator of everyday health. Your dashboards should be built to track everyday health.
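For concreteness, a distribution report like the one above can be sketched in a few lines. The session counts are fabricated for illustration:

```python
import statistics

# Hypothetical per-user session counts for a distribution report.
sessions = list(range(1, 101))  # 100 users with 1..100 sessions each

# The "top 10%" cohort: users above the 90th-percentile session count.
cutoff = sorted(sessions)[int(len(sessions) * 0.9) - 1]  # 90th value
top_decile = [s for s in sessions if s > cutoff]
rest = [s for s in sessions if s <= cutoff]

print(len(top_decile), statistics.mean(top_decile), statistics.mean(rest))
# 10 95.5 45.5
```

The gap between the two means is the kind of finding that shapes cohort design, but it isn't a number you'd watch daily.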

Now that you have a goal in mind for your dashboard, brainstorm the reports that will frame the KPIs you need to monitor. One way to do this is through a brainwriting exercise. Everyone starts with a piece of paper on which they write three reports they need on this dashboard. Then, they pass it to the next person, who adds to the list. At the end of the exercise, everyone votes on the top reports. Aim for no more than five to nine: working memory holds only about that many items at a time, so anything beyond it becomes confusing.

Build and troubleshoot reports one at a time

It’s tempting to jump from dashboard to dashboard, going down rabbit holes of new views. Stay focused on a single dashboard, building it out one report at a time. This makes the process more efficient and ensures you complete the essentials in a timely manner.

If you’re unsure about the validity of a report, test it sooner rather than later. Create a small cohort of users, like your coworkers, and have them take action. Then, filter your reports using that cohort and take note of any abnormalities in the data.
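The filtering step above can be as simple as restricting an event log to your test cohort and counting what comes through. The user IDs and events here are invented:

```python
from collections import Counter

# Hypothetical event log and a small test cohort of coworkers.
events = [
    {"user_id": "ana", "event": "session_start"},
    {"user_id": "ana", "event": "purchase"},
    {"user_id": "ben", "event": "session_start"},
    {"user_id": "ben", "event": "session_start"},  # fired twice for one action?
]
test_cohort = {"ana", "ben"}

# Filter the report down to the test cohort, then eyeball the counts.
counts = Counter(
    (e["user_id"], e["event"]) for e in events if e["user_id"] in test_cohort
)
print(counts[("ben", "session_start")])  # 2 -> one action, two events: abnormal
```

Because your coworkers know exactly what actions they took, any mismatch between their actions and the counts points straight at a reporting bug.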