Research Techniques
A 3-dimensional framework assists with product design decisions.

How to apply, analyse and validate hypotheses, ideas and features.
While it’s not realistic to use the full set of methods on a given project, nearly all projects would benefit from multiple research methods and from combining insights.
Unfortunately, many design teams rely on only one or two methods that they are familiar with.
The key question is what to do when. To better understand when to use which method, it is helpful to view them along a 3-dimensional framework with the following axes:
- Attitudinal vs. Behavioural
- Qualitative vs. Quantitative
- Context of use
The Attitudinal vs. Behavioural Dimension.
This distinction can be summed up by contrasting “what people say” versus “what people do” (very often the two are quite different).
The purpose of attitudinal research is usually to understand or measure people’s stated beliefs, which is why attitudinal research is used heavily in marketing departments. While most usability studies should rely more on behaviour, methods that use self-reported information can still be quite useful to designers.
For example, card sorting provides insight into users’ mental model of an information space and can help determine the best information architecture for your product, application, or website. Surveys measure and categorize attitudes or collect self-reported data that can help track or discover important issues to address.
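Quantitative card-sort results are often summarized by counting how many participants grouped each pair of cards together. A minimal sketch of that aggregation, using invented card names and participant groupings:

```python
# Illustrative sketch: aggregating open card-sort results into pairwise
# co-occurrence counts. The cards and groupings are invented sample data.
from itertools import combinations
from collections import Counter

# Each participant's sort: the lists of cards they grouped together.
sorts = [
    [["pricing", "plans"], ["login", "signup", "profile"]],
    [["pricing", "plans", "signup"], ["login", "profile"]],
    [["pricing", "plans"], ["login", "signup"], ["profile"]],
]

pairs = Counter()
for groups in sorts:
    for group in groups:
        for a, b in combinations(sorted(group), 2):
            pairs[(a, b)] += 1

# Cards placed together most often are candidates for the same
# information-architecture category.
for (a, b), n in pairs.most_common(3):
    print(f"{a} + {b}: grouped together by {n} of {len(sorts)} participants")
```

In practice this matrix usually feeds a clustering or dendrogram step, but even the raw counts reveal which groupings users agree on.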
Focus groups tend to be less useful for usability purposes, for a variety of reasons, but provide a top-of-mind view of what people think about a brand or product concept in a group setting.
On the other end of this dimension, methods that focus mostly on behaviour seek to understand “what people do” with the product or service in question. For example, A/B testing presents changes to a site’s design to random samples of site visitors but attempts to hold all else constant, in order to see the effect of different site-design choices on behaviour, while eye-tracking seeks to understand how users visually interact with interface designs.
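The behavioural comparison behind an A/B test usually comes down to checking whether the difference in conversion rates between the two random samples is statistically significant. A minimal sketch using a two-proportion z-test; the visit and conversion counts are invented:

```python
# Illustrative sketch: two-proportion z-test for an A/B test.
# The visit and conversion counts below are invented sample data.
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return the z statistic and two-sided p-value for pA != pB."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Variant B (new design) vs. control A, each shown to a random sample.
z, p = two_proportion_z(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(f"z = {z:.2f}, p = {p:.4f}")
```

Holding everything except the design change constant is what lets a significant p-value be read as an effect of the design choice rather than of the audience.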
Between these two extremes lie the two most popular methods we use: usability studies and field studies. They utilize a mixture of self-reported and behavioural data and can move toward either end of this dimension, though leaning toward the behavioural side is generally recommended.
The Qualitative vs. Quantitative Dimension.
The distinction here is an important one, and goes well beyond the narrow view of qualitative as “open ended” as in an open-ended survey question.
Rather, qualitative studies generate data about behaviours or attitudes by observing them directly, whereas quantitative studies gather data about the behaviour or attitudes in question indirectly, through a measurement or an instrument such as a survey or an analytics tool.
In field studies and usability studies, for example, the researcher directly observes how people use technology (or not) to meet their needs.
This gives the researcher the ability to ask questions, probe on behaviour, or possibly even adjust the study protocol to better meet its objectives. Analysis of the data is usually not mathematical.
By contrast, insights in quantitative methods are typically derived from mathematical analysis, since the instrument of data collection (e.g., a survey tool or web-server log) captures large amounts of data that are easily coded numerically.
Due to the nature of their differences, qualitative methods are much better suited for answering questions about why or how to fix a problem, whereas quantitative methods do a much better job answering how many and how much types of questions.
Having such numbers helps prioritize resources, for example, to focus on issues with the biggest impact.
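As a small illustration of a “how many” analysis, coded survey responses can be tallied so the most frequently reported issues surface first; the issue codes and responses below are invented:

```python
# Illustrative sketch: turning coded survey responses into counts so the
# biggest issues can be prioritized. The issue codes are invented.
from collections import Counter

# Each response has been coded with the usability issues it mentions.
responses = [
    ["search", "checkout"],
    ["checkout"],
    ["navigation", "checkout"],
    ["search"],
    ["checkout", "navigation"],
]

issue_counts = Counter(issue for r in responses for issue in r)
total = len(responses)
for issue, n in issue_counts.most_common():
    print(f"{issue}: reported by {n}/{total} respondents ({100 * n / total:.0f}%)")
```

The resulting ranking is exactly the kind of number that helps a team focus its resources on the issues with the biggest impact.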
The following chart illustrates how the first two dimensions affect the types of questions that can be asked:

The Context of Product Use.
The third distinction has to do with how and whether participants in the study are using the product or service in question. This can be described as:
- Natural or near-natural use of the product
- Scripted use of the product
- Not using the product during the study
- A hybrid of the above
When studying natural use of the product, the goal is to minimize interference from the study in order to understand behaviour or attitudes as close to reality as possible. This provides greater validity but less control over the topics you learn about.
Many ethnographic field studies attempt to do this, though there are always some observation biases. Intercept surveys, data mining, and other analytic techniques are quantitative examples of this.
A scripted study of product usage is done to focus the insights on specific usage aspects, such as on a newly redesigned flow. The degree of scripting can vary quite a bit, depending on the study goals. For example, a benchmarking study is usually very tightly scripted and more quantitative in nature, so that it can produce reliable usability metrics.
Studies where the product is not used are conducted to examine issues that are broader than usage and usability, such as a study of the brand or larger cultural behaviours.
Hybrid methods use a creative form of product usage to meet their goals. For example, participatory-design methods allow users to interact with and rearrange design elements that could be part of a product experience, in order to discuss how their proposed solutions would better meet their needs and why they made certain choices.
Concept-testing methods employ a rough approximation of a product or service that gets at the heart of what it would provide (and not at the details of the experience) in order to understand if users would want or need such a product or service.
Most of the methods in the chart can move along one or more dimensions, and some do so even in the same study, usually to satisfy multiple goals.
For example, field studies can focus on what people say (ethnographic interviews) or what they do (extended observations); desirability studies and card sorting have both qualitative and quantitative versions; and eye-tracking can be scripted or unscripted.