Remember project briefs?
The project brief is law.
In my early career, that’s what I had been told. If a brief came across my desk that was unclear, imprecise, or missing details, the client needed to take a hard look at what they were asking for before I could even begin to work out ideas. We’d complain that their objectives were too far-reaching, that their audience included everyone, and that there were countless ways to execute on it, which meant there’d be countless comps and revisions to try to get inside the client’s head.
We’d send them away to work out their ideas and to solidify better plans on their own, with instructions on how to write better project briefs in the future.
When they did have their ideas worked out, we’d rarely get things in front of users, mostly building designs by consensus. Launches happened with little fanfare from actual users and lots of stakeholders patting themselves on the back. Was success ever measured? Rarely, and whatever metrics were tracked were spun to show success.
I’m so glad things have changed.
Getting stakeholders to involve experience designers earlier
The clients never changed. Instead, I looked at my role in the process differently: my clients weren’t equipped to answer many of the questions that would help me make informed design decisions. I had to teach them what was necessary, let them do what they thought was right, and then validate whether they were. Sometimes that validation would come at the very end or after a product launch, through analytics, surveys, or qualitative interviews.
As I showed them how to examine their products and ideas from a user’s perspective, I was asked to become involved earlier and earlier in new projects, and I moved my way backwards through the design process, introducing new tools and techniques to my clients. What was solely UI design became wireframing and visual design, then user flows and journey maps, and then personas, ideation, and strategy.
With each new method introduced in the process, we became more confident in what we set out to create. Most companies now have a better understanding of experience designers, but it took a while to get here, didn’t it? If you’re working with people who don’t quite get it, I understand your struggles. The only way to convince them of the design process is to show them what they’re missing.
Eliminating ideas too soon
But there is still one thing I sometimes struggle with: clients still arrive with a general idea of what they want to do. They might not know how to pull it off, but they’ve already sold themselves on their idea (or bought into an idea too early) and believe it’ll work and people will use it. I want to challenge them to generate ideas they don’t fall in love with, and to become even more user-centric in their thinking and practice by testing those ideas as early as possible.
I’ve seen rooms of people come to consensus, whether through lengthy discussion, debate, or voting, on an idea that they believe will be worth pursuing, eliminating nearly all of the ideas that were generated. Some of those ideas deserved to be tested, but there is a perception that testing is time-consuming and costly. So the winning idea is chosen because it balances feasibility, cost, and the belief that it will succeed simply because it exists, while the other ideas are lost in the ether.
We need to put a stop to that thinking.
Methods of validating ideas and building what people will actually want to use
If you haven’t read it yet, The Right It by Alberto Savoia introduces a handful of pretotyping (yes, pretotyping) tools that focus on this exact issue: clarifying ideas through quick, inexpensive testing, then collecting, validating, and analyzing that data to make objective, informed decisions at the beginning of the design process. His professional experience researching why products fail has led him to create a toolkit of methods for verifying whether people will actually use your idea once you bring it to market.
Although some of his pretotyping techniques focus on services, a number of them are applicable to digital products. The techniques stress the importance of bringing your idea to a very small slice of your potential market to gauge interest in using the product, whether by faking its existence, by running it for a short period of time, or by repurposing an existing similar product. Along with the techniques, he offers measurable ways to validate the tests, giving you more confidence in your potential ideas. It’s an approach that can save time, cost, and resources, and potentially bring better ideas to market.
It’s a fun book, and I’m going to reference it a few more times here. I’ve gifted it to coworkers in the past and probably will again.
The more unclear, the better
Getting clients to buy into using ideation validation methods is another story.
If anyone approached me today with a project brief, I wouldn’t complain about how their objectives were far-reaching or that their audience included everyone. I’d be excited to work on a project that hadn’t yet solidified in their minds, where we’d get the opportunity to validate our assumptions through pretotyping and gather real data before ever committing to an uninformed idea.
In fact, for me, the more unclear the ask is, the better.
The problem with some oft-used metrics
There is a general understanding of the importance of metrics: daily active users, click-through rates, conversion rates, and NPS scores are at the heart of what drives business progress and what differentiates good products from mediocre ones.
In addition, specific user experience metrics, like perceived usefulness and engagement scores, are starting to take hold alongside the more traditional metrics. But these UX metrics often struggle to explain why they are what they are, and there’s little visibility into what, under the surface, is actually contributing to their values.
Interpreting attitudinal and behavioral metrics
Even the most useful metrics need to be interpreted. The problem is that they could relate to any aspect of how a product was designed, explained, or implemented.
As an example, take the number of user-submitted photos on an online photo-sharing site that’s struggling to grow its user base. If the site has a low number of uploads per user, that metric tells you nothing about whether the issue lies in the task flow, the visual design, or the core idea of the product. These types of metrics can act as benchmarks for improvements to your product (make the upload process easier, incentivize users to upload), but there’s a ceiling on improvement if your product isn’t something people actually want to use in the first place. Perhaps the original idea was never really measured before building a site that attempted to solve a problem that didn’t need solving.
You can continue making those improvements to your dying website, with added cost and little return. Or, in hindsight, you could have examined how you generate and test your ideas before ever launching a product that people have little interest in.
Embracing not-so-great ideas and their failures
Although we have these baseline metrics, which can be useful for improving good products, we have inconsistent or nonexistent metrics for evaluating ideas. We only look at what others have done through competitive analysis, make ‘best-practice’ decisions, and take a ‘good enough’ approach to new product designs. There’s often little thought given to how to measure the value of an idea beyond outright asking people what they’d think if it existed, which more often than not leads to false positives and reinforced (false) confidence in what you’re creating.
Those ideas aren’t truly tested until prototypes get in front of potential users, after much time, money, and resources have been spent. Scrapping a bad idea that late in the process means spending as much or more time regenerating ideas and eventually testing with more prototypes. The amount of rework at that point usually informs the decision to simply iterate on a mediocre execution of a less-than-ideal product.
We often say we embrace failure, but in practice, we don’t know we’ve failed until it’s past the point of no return.
So how can we identify failure early on and gain confidence that our good ideas can succeed in the market? Savoia’s “TRI Meter” stresses the importance of having multiple ideas and a willingness to iterate over them quickly: not after you’ve committed major resources, but when you first need to know whether anyone would be interested in using your new product or service. These are real metrics, more reliable than the outcome of focus groups or surveys.
Learning to truly embrace failure comes much more easily when you’ve only committed an insignificant amount of time and money. The more ideas you can test, and the more you can revise those ideas for further testing, the more confident you can be that the idea you land on will have a higher chance of success in the market.
Seriously, go read his book.
The future of market validation metrics
It’s easy to generate ideas for solving a problem, but we need ways to more accurately gauge which ideas are worth pursuing, and methods for eliminating the rest that aren’t heavily influenced by stakeholder opinion.
And, of course, we need to convince these stakeholders to let multiple ideas have their chance before anyone has latched on to a solution.
More importantly, we need to have better metrics for validating ideas, and measuring their influence on the outcomes of actual solutions.
I look forward to the day when I can put together a report on market opportunities using tried-and-true viability metrics.