I’ve learned a lot in my first 2–3 months as a product designer at Automattic, but one of the biggest lessons has been how to better analyze data and other forms of feedback in order to make more informed design decisions. While designing for such a large audience feels daunting at times, I’ve found there are things we can do to better understand people’s struggles and constraints – so that we can make good design decisions and produce positive outcomes.
At Automattic, we have access to some helpful analytics data. This is great, but it can also be a bad thing if used blindly. It’s crucial to spend time determining how to weigh hard data findings like analytics against ‘soft data’ like usability testing, conversations, and real-life usage. In other words: it’s easy to look at numbers and charts and think they’re telling you the whole story, but spoiler alert: more often than not, they’re not.
A Story: Usage vs. Value
My first project on the WordPress mobile apps was to design the next iteration of the formatting toolbar on the Post Edit screen. One part of this was doing research around usage of the HTML part of the editor on mobile, which our data showed wasn’t getting much usage. There was some enthusiastic discussion around this – whether we should keep or remove the HTML editor from the mobile apps.
On the surface, the HTML editor doesn’t seem like it gets a lot of usage. But as I began talking to users and colleagues, a deeper thread surfaced: the primary job of our HTML editor on mobile was essentially saving users in a pinch. They were reaching for the HTML editor either when they didn’t have access to their computer or when they just wanted to make a quick little change that the default editor wouldn’t allow. It was a lifesaver in some circumstances.
As we continue to improve the default editor and publishing experience, we should see less of this. But if we hadn’t taken the time to talk to users and really uncover the value behind the feature, we would’ve made a call based on the numbers alone and left some users stranded – a deal-breaker for them. Now that we have a better understanding of the value, we can better weigh priorities and try different placements to figure out what works best for users.
How Can We Improve?
Like much of the design process, there is no silver bullet for using data to inform design decisions. Each situation is unique, and must be approached mindfully.
📣 Better Communication & Collaboration
A big part of working with data is documenting and discussing how you’re using the data to inform design decisions. On my team, we’ve been using a Google Doc as a central source of truth for each project to document data, mockups, and design details. We are continuously improving this process, but it seems to work for smaller teams. Google Docs might not be perfect for your team, so I would recommend playing with different tools to find what works best for you.
❓ Check Your Ignorance at the Door
One crucial lesson I’ve learned from John Maeda and other colleagues is that we humans are full of bias (whether we like it or not), and we should at the very least be aware of our biases. It’s crucial that we don’t use data to prove our stances, but instead let the data and learnings shape our stances.
✏️ Understand Edge Cases, Optimize for Primary
Whenever possible, go the extra mile to try and understand edge cases and power users. That said, no matter how much data we absorb, there will always be unknowns. In those cases, it’s best to optimize for primary use cases and accommodate as many edge cases as possible from there.
👋 Be an Advocate
Above all, it is our duty to be advocates for every person using our products, regardless of how they arrived at – or are using – them. The least we can do is use the tools at our disposal to try and understand their struggles. As they say: with great power comes great responsibility.