
How to Make Effective, High-Quality Marketing Reports & Dashboards

Dominic Woodman

The author's views are entirely their own (excluding the unlikely event of hypnosis) and may not always reflect the views of Moz.


My current obsession has been reporting. Everyone could benefit from paying more attention to it. Five years, countless ciders, and too many conferences into my career, I finally spent some time on it.

Bad reporting soaks up just as much time as pointless meetings. Analysts spend hours creating reports that no one will read, or making dashboards that never get looked at. Bad reporting means people either focus on the wrong goals, or they pick the right goals but choose the wrong way to measure them. Either way, you end up in the same place.

So I thought I'd share what I’ve learned.

We’re going to split this into:

  • What is the goal of a report versus a dashboard?
  • Who is this data for?
  • How to create a good dashboard
  • How to create a good report
  • Creating an effective graph

(We’ll lean on SEO examples — we’re on Moz! — but for non-SEO folks, the principles are the same.)

What is the goal of a report versus a dashboard?

Dashboards

Dashboards should:

  • Measure one or more goals over time
  • Be easily digestible at a glance

The action you take off a dashboard should be:

  • Let’s go look into this.

Example questions a dashboard would answer:

  • How are we performing organically?
  • How fast does our site load?

Reports

Reports should:

  • Help you make a decision

The action you take off a report should be:

  • Making a decision

Example questions a report would answer:

  • Are our product changes hurting organic search?
  • What are the biggest elements slowing our website?

Who is this data for?

This context will inform many of our decisions. We care about our audience, because they all know and care about very different things.

A C-level executive doesn’t care about keyword cannibalization, but probably does care about the overall performance of marketing. An SEO manager, on the other hand, probably does care about the number of pages indexed and keyword cannibalization, but is less bothered by the overall performance of marketing.

Don’t mix audience levels

If someone tells you the report is for audiences with obviously different decision levels, then you’re almost always going to end up creating something that won’t fulfill the goals we talked about above. Split up your reporting into individual reports/dashboards for each audience, or it will be left ignored and unloved.

Find out what your audience cares about

How do you know what your audience will care about? Ask them. As a rough guide, you can assume people typically care about:

  • The goals that their jobs depend on. If your SEO manager is being paid because the business wants to rank for ten specific keywords, then they’re unlikely to care about much else.
  • Budget or people they have control over.

But seriously. Ask them what they care about.

Educating your audience

Asking them is particularly important, because you don’t just need to understand your audience — you may also need to educate them. To contradict what I just said, there are in fact CEOs who will care about specific keywords.

The problem is, they shouldn’t. And if you can’t convince them to stop caring about that metric, their incentives will be wrong and succeeding in search will be harder. So ask. Persuading them to stop using the wrong metrics is, of course, another article in and of itself.

Get agreement now

To continue that point, now is also the time to get initial agreement that these dashboards/reports will be what’s used to measure performance.

That way, when they email you three months in asking how you’re doing for keyword x, you’re covered.

How to create a good dashboard

Picking a sensible goal for your dashboard

The question you’re answering with a dashboard is usually quite simple. It's often some version of:

  • Are we being successful at x?

...where x is a general goal, not a metric. The difference here is that a goal is the end result (e.g. a fast website), and the metric (e.g. time to start render) is the way of measuring progress against that.

How to choose good metrics for dashboards

This is the hard part. We’re defining our goal by the metrics we choose to measure it by.

A good metric is typically a direct measure of success. It should ideally have no caveats that are outside your control.

No caveats? Ask yourself how you would explain the number going down. If the explanations you immediately reach for rest on things outside your control, then you should try to refine this metric. (Don’t worry, there's an example in the next section.)

We also need to be mindful of the incentives a metric creates for how people behave.

Unlike a report, which will be used to help us make a decision, a dashboard is showing the goals we care about. It’s a subtle distinction, but an important one. A report will help you make a single decision. A dashboard and the KPIs it shows will define the decisions and reports you create and the ideas people have. It will set incentives and change how the people working off it behave. Choose carefully. Avinash has my back here; go read his excellent article on choosing KPIs.

You need to bear both of these in mind when choosing metrics. You typically want only one or two metrics per goal to avoid being overwhelming.

Example: Building the spec for our dashboard

Goal: Measure the success of organic performance

Who is it for: SEO manager

The goal we’re measuring and the target audience are sane, so now we need to pick a metric.

We’ll start with a common metric that I often hear suggested and we’ll iterate on it until we’re happy. Our starting place is:

  1. Metric: Search/SEO visibility
    1. “Our search visibility has dropped”: This could be because we were ranking for vanity terms like Facebook and we lost that ranking. Our traffic would be fine, but our visibility would be down. *Not a good metric.
  2. Metric: Organic sessions over time
    1. “Our organic sessions have dropped”: This could easily be because of seasonality. We always see a drop in the summer holidays. *Okay, also not a good metric.
  3. Metric: Organic sessions with smoothed seasonality
    1. Aside: See a good example of this here.
    2. “Our organic sessions with smoothed seasonality have dropped”: What if the industry is in a downturn? *We’re getting somewhere here. But let’s just see...
  4. Metric: Organic sessions with smoothed seasonality and adjusted for industry
    1. “Our organic sessions with smoothed seasonality and adjusted for industry have dropped”: *Now we’ve got a metric that’s getting quite robust. If this number drops, we’re going to care about it.
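To make the smoothing step concrete, here’s a minimal sketch of the two cheapest approaches, in Python with pandas. The data is synthetic and the names are my own — treat it as one possible starting point, not a prescribed method:

```python
import numpy as np
import pandas as pd

# Synthetic monthly organic sessions over two years; in practice this
# would come from your analytics export.
rng = np.random.default_rng(0)
months = pd.date_range("2016-01-01", periods=24, freq="MS")
trend = np.linspace(100_000, 120_000, 24)                    # slow growth
seasonal = 1 + 0.2 * np.sin(np.arange(24) * 2 * np.pi / 12)  # summer dip
sessions = pd.Series(trend * seasonal + rng.normal(0, 2_000, 24),
                     index=months, name="organic_sessions")

# Year-on-year change strips out the repeating seasonal pattern:
# each month is compared with the same month a year earlier.
yoy_change = sessions.pct_change(periods=12)

# A 12-month rolling mean is another cheap way to smooth seasonality.
rolling_mean = sessions.rolling(window=12).mean()

print(yoy_change.dropna().round(3))
```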

You might have to compromise your metric depending on resources. What we’ve just talked through is an ideal. Adjusting for industry, for example, is typically quite hard; you might have to settle for showing Google Trends data for some popular terms on a second graph, or showing Hitwise industry data on another graph.

Watch out if you find yourself adding more than one or two additional metrics. When you get to three or four, information gets difficult to parse at a glance.

What about incentives? The metric we settled on will incentivize our team to get more traffic, but it doesn’t have any quality control.

We could succeed at our goal by aiming for low-quality traffic, which doesn’t convert or care about our brand. We should consider adding a second metric, perhaps revenue attributed to search with linear attribution, smoothed seasonality, and a 90-day lookback. Or alternatively, organic non-bounce sessions with smoothed seasonality (using adjusted bounce rate).

Both of those metrics are a bit of a mouthful. That’s because they’ve gone through a process similar to the one we just walked through: we might’ve started with revenue attributed to search, then got more specific and ended up with revenue attributed to search with linear attribution, smoothed seasonality, and a 90-day lookback.
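To show what hides inside a name like that, here’s a rough Python sketch of linear attribution with a 90-day lookback: every touchpoint in the 90 days before a conversion gets an equal share of the revenue, and organic’s total is the sum of its shares. The data structure is invented for illustration; it’s not how any particular analytics package stores touches.

```python
from datetime import date, timedelta

def attribute_revenue_linear(conversions, lookback_days=90):
    """Split each conversion's revenue equally across all touchpoints
    inside the lookback window, then total revenue per channel.

    `conversions` is a made-up structure for this example: a list of
    dicts with a conversion date, revenue, and (channel, date) touches.
    """
    window = timedelta(days=lookback_days)
    channel_revenue = {}
    for conv in conversions:
        eligible = [channel for channel, touch_date in conv["touches"]
                    if conv["date"] - window <= touch_date <= conv["date"]]
        if not eligible:
            continue
        share = conv["revenue"] / len(eligible)  # linear: equal credit
        for channel in eligible:
            channel_revenue[channel] = channel_revenue.get(channel, 0.0) + share
    return channel_revenue

conversions = [
    {"date": date(2018, 6, 1), "revenue": 90.0,
     "touches": [("organic", date(2018, 5, 20)),
                 ("email", date(2018, 5, 28)),
                 ("organic", date(2018, 6, 1))]},
]
print(attribute_revenue_linear(conversions))
# {'organic': 60.0, 'email': 30.0}
```

Running it on the sample conversion splits the 90 in revenue as 60 to organic (two touches) and 30 to email (one touch).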

Remember, a dashboard shouldn’t try to explain why performance was bad (based on things in your control). A dashboard's job is to track a goal over time and say whether or not further investigation is needed.

Laying out and styling dashboards

The goal here is to convey our information as quickly and easily as possible. It should be eyeball-able.

Creating a good dashboard layout:

  • It should all fit on a single screen (i.e., no scrolling on the screen it will typically be viewed on)
  • People typically read from the top and left. Work out the importance of each graph to the question you’re answering and order them accordingly.
  • The question a graph is answering should sit near it (usually above it)
  • Your design should keep the focus on the content. Simplify: keep styles and colors unified, where possible.

Here’s a really basic example I mocked up for this post, based on the section above:

  • We picked two crucial summary metrics for organic traffic:
    1. Organic sessions with smoothed seasonality
      • In this case we’ve done a really basic version of “adjusting” for seasonality by just showing year on year!
    2. Revenue attributed to organic sessions
  • We’ve kept the colors clean and unified.
  • We’ve got clean labels and, based on imaginary discussions, we’ve decided to put organic sessions above attributed revenue.

(The sharp-eyed amongst you may notice a small bug. The dates in the x-axis are misaligned by 1 day; this was due to some temporary constraints on my end. Don’t repeat this in your actual report!)
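If you want to reproduce a layout like this, here’s a minimal matplotlib sketch of a single-screen, two-panel dashboard following the rules above: most important chart on top, the question as the title, one accent color. The numbers are made up.

```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

months = pd.date_range("2017-01-01", periods=12, freq="MS")
this_year = np.array([80, 82, 90, 70, 72, 95, 88, 84, 92, 75, 78, 101]) * 1_000
last_year = this_year * 0.9  # synthetic year-on-year comparison series
revenue = this_year * 0.4    # synthetic attributed revenue

fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(10, 6), sharex=True)

# Most important chart on top, with the question it answers as the title.
ax1.set_title("How are organic sessions trending year on year?", loc="left")
ax1.plot(months, this_year, color="#2a6fdb", label="This year")
ax1.plot(months, last_year, color="#b0c4de", label="Last year")
ax1.legend(frameon=False)

ax2.set_title("How much revenue is organic driving?", loc="left")
ax2.plot(months, revenue, color="#2a6fdb")

fig.tight_layout()
plt.show()
```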

How to create a good report

Picking a sensible decision for your report

A report needs to be able to help us make a decision. Picking the goal for a dashboard is typically quite simple. Choosing the decision our report is helping us make is usually a little more fraught. Most importantly, we need to decide:

  • Is there a decision to be made or are we knowledge-gathering for its own sake?

If you don’t have a decision in mind, if you’re just creating a report to dig into things, then you’re wasting time. Don’t make a report.

If the decision is to prioritize next month, then you could have an investigative report designed to help you prioritize. But the goal of the report isn’t to dig in — it's to help you make a decision. This is primarily a frame of mind, but I think it’s a crucial one.

Once we’ve settled on the decision, we then:

  • Make a list of all the data that might be relevant to this decision
  • Work down the list and ask the following question for each factor:
    1. What are the odds this piece of information causes me to change my mind?
    2. Could this information be better segmented or grouped to make it more useful?
    3. How long will it take me to add this information to the report?
    4. Is this information for ruling something out or helping me weigh a decision?

Example: Creating a spec for a report

Here’s an example decision a client suggested to me recently:

  • Decision: Do we need to change our focus based on our weekly organic traffic fluctuations?
  • Who’s it for: SEO manager
  • Website: A large e-commerce site

Are we happy with this decision? In this case, I wasn’t. Experience has taught me that SEO very rarely runs week to week; one thing our SEO split-testing platform has taught us time and time again is that even obvious improvements can take three to four weeks to result in a significant traffic change.

  • New decision: Do we need to change our focus based on our monthly organic traffic fluctuations?

Great — we’re now happy with our decision, so let’s start listing possible factors. For the sake of brevity, I’m only going to include three here:

  • Individual keyword rankings
  • Individual keyword clicks
  • Number of indexed pages

1. Individual keyword rankings

  • What are the odds this piece of information causes me to change my mind?
    • As individual keyword rankings? Pretty low. This is a large website and individual keyword fluctuations aren’t much use; it will take too long to look through and I’ll probably end up ignoring it.
  • Could this information be better segmented or grouped to make it more useful?
    • Yes, absolutely. If we were to group this by page type or topic level, it becomes far more interesting. Knowing my traffic has dropped for only one topic would make me want to push more resources there to try and bring us back to parity. We would ideally also want to see the difference in rank with and without features.
  • How long will it take me to add this information to the report?
    • There are plenty of rank trackers with this data. It might take some integration time, but the data exists.
  • Is this information for ruling something out or helping me weigh a decision?
    • We’re just generically looking at performance here, so this is helping me weigh up my decision.

Conclusion: Yes, we should include keyword rankings, but they need to be grouped, and ideally we should have rank both with and without Google features. We’ll also want to avoid averaging rank, which would lose the subtlety of how our keywords are moving amongst each other. This example graph from STAT illustrates this well:
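In the same spirit as that STAT graph, here’s a sketch (pandas, with made-up data) of grouping keywords by topic and bucketing them into ranking bands instead of averaging, so the distribution within each topic stays visible:

```python
import pandas as pd

# Made-up keyword ranking export: one row per keyword.
df = pd.DataFrame({
    "keyword": ["red shoes", "blue shoes", "red hats", "blue hats", "green hats"],
    "topic":   ["shoes", "shoes", "hats", "hats", "hats"],
    "rank":    [2, 8, 15, 4, 35],
})

# Bucket ranks into bands instead of averaging, so we keep the shape
# of the distribution within each topic.
bands = pd.cut(df["rank"], bins=[0, 3, 10, 20, 100],
               labels=["top 3", "4-10", "11-20", "21+"])
distribution = (df.assign(band=bands)
                  .groupby(["topic", "band"], observed=False)
                  .size()
                  .unstack(fill_value=0))
print(distribution)
```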

2. Individual keyword clicks

  • What are the odds this piece of information causes me to change my mind?
    • Low. Particularly because it won’t compensate for seasonality, I would definitely find myself relying more on rank here.
  • Could this information be better segmented or grouped to make it more useful?
    • Again yes, same as above. It would almost certainly need to be grouped.
  • How long will it take me to add this information to the report?
    • This will have to come from Search Console. There will be some integration time again, but the data exists.
  • Is this information for ruling something out or helping me weigh a decision?
    • Again, we’re just generically looking at performance here, so this is helping me weigh up my decision.

Conclusion: I would probably say no. We’re only looking at organic performance here and clicks will be subject to seasonality and industry trends that aren’t related to our organic performance. There are certainly click metrics that will be useful that we haven’t gone over in these examples — this just isn’t one of them.
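On the integration point above: if you did decide to pull click data, it would come through the Search Console Search Analytics API. Here’s a rough sketch using Google’s official Python client; the property URL is a placeholder, and obtaining OAuth credentials is left out.

```python
from googleapiclient.discovery import build

creds = ...  # OAuth credentials, obtained via Google's auth flow (not shown)
service = build("webmasters", "v3", credentials=creds)

response = service.searchanalytics().query(
    siteUrl="https://www.example.com/",  # placeholder property URL
    body={
        "startDate": "2018-01-01",
        "endDate": "2018-01-31",
        "dimensions": ["query", "page"],
        "rowLimit": 5000,
    },
).execute()

# Each row carries the dimension keys plus clicks, impressions,
# CTR, and average position.
for row in response.get("rows", []):
    query, page = row["keys"]
    print(query, page, row["clicks"])
```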

3. Number of indexed pages

  • What are the odds this piece of information causes me to change my mind?
    • Low, although sharp jumps would definitely be cause for further investigation.
  • Could this information be better segmented or grouped to make it more useful?
    • It could sometimes be broken down into individual sections, using Search Console folders.
  • How long will it take me to add this information to the report?
    • This will have to come from Search Console. It doesn’t exist in the API, however, and will be a hassle to add or will have to be done manually.
  • Is this information for ruling something out or helping me weigh a decision?
    • This is just ruling something out, as it's possible any traffic fluctuations have come from massive index bloat.

Conclusion: Probably yes. The automation will be a pain, but it will be relatively easy to pull it in manually once a month. It won’t change anyone's mind very often, so it won’t be put at the forefront of a report, but it’s a useful additional piece of information that’s very quick to scan and will help us rule something out.

Laying out and styling reports

Again, our layout should be fit for the goal we’re trying to achieve, which gives us a couple of principles to follow:

  • It’s completely fine for reports to be large, as long as they’re ordered by the odds that each piece of information will change someone's mind. Complexity is fine as long as it’s accompanied by depth and you don’t get it all at once.
  • On a similar point, you’ll often have to break down metrics into multiple graphs. Make sure that you order them by importance so someone can stop digging whenever they’re happy.

Here’s an example from an internal report I made. It shows the page breakdown first and then the page keyword breakdown after it to let you dig deeper.

  • There’s nothing wrong with repeating graphs. If you have a summary page with five following pages, each of which picks one crucial metric from the summary and digs deeper, it's absolutely useful to repeat the summary graph for that metric at the top.
  • Pick a reporting program which allows paged information, like Google Data Studio, for example. It will force you to break a report into chunks.
  • As with dashboards, your design should keep the focus on the content. Simplify — keep styles and colors unified where possible.

Creating an effective graph

The graphs themselves are crucial elements of a report and dashboard. People have built entire careers out of helping others visualize data. Rather than reinvent the wheel, the following resources have all helped me avoid the worst when it comes to graphs.

Neither #1 nor #2 below focuses on making things pretty; both focus on the goal of a graph: letting you process data as quickly as possible.

  1. Do’s and Don’ts for Effective Graphs
  2. Karl Broman on How to Display Data Badly
  3. Dark Horse Analytics - Data Looks Better Naked
  4. Additional geek resource: Creating 538-Style Charts with matplotlib

Sometimes (read: nearly always) you’ll be limited by the programs you work in, but it’s good to know the ideal, even if you can’t quite reach it.
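As a small illustration of the "data looks better naked" idea from resource #3, here’s a matplotlib sketch that strips the usual chartjunk and labels the line directly; the data is synthetic.

```python
import matplotlib.pyplot as plt
import pandas as pd

months = pd.date_range("2017-01-01", periods=12, freq="MS")
sessions = [80, 82, 90, 70, 72, 95, 88, 84, 92, 75, 78, 101]

fig, ax = plt.subplots(figsize=(8, 4))
ax.plot(months, sessions, color="#2a6fdb")

# Strip chartjunk: no top/right box lines, no gridlines, no legend.
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)

# Label the line directly instead of adding a legend.
ax.annotate("Organic sessions (000s)", xy=(months[-1], sessions[-1]),
            xytext=(5, 0), textcoords="offset points", va="center")

ax.set_title("Organic sessions, 2017", loc="left")
fig.tight_layout()
plt.show()
```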

What did we learn?

Well, we got to the end of the article and I’ve barely even touched on how to practically make dashboards/reports. Where are the screenshots of the Google Data Studio menus and the step-by-step walkthroughs? Where’s the list of tools? Where’s the explanation on how to use a Google Sheet as a temporary database?

Those are all great questions, but they’re not where the problem lies.

We need to spend more time thinking about the content of reports and what they're being used for. It’s possible that, having read this article, you’ll come away determined to make fewer reports and to trash a whole bunch of your dashboards.

That’s fantastic. Mission accomplished.

There are good tools out there (I quite like Plot.ly and Google Data Studio) which make generating graphs easier, but the problem with many of the dashboards and reports I see isn’t that they’ve used the Excel default colors — it’s that no one has spent enough time thinking about the decision the report is meant to support, or picking the ideal metric for the dashboard.

Let’s go out and think more about our reports and dashboards before we even begin making them.

What do you guys think? Has this been other people's experience? What are the best/worst reports and dashboards you’ve seen and why?
