The Maximum Difference (aka Maximum Discrimination or Max-Dif) Methodology: a Questionable Solution in Search of a Problem.
This is a new methodology designed to help researchers improve discrimination in studies where respondents are asked to rate a set of attributes and benefits on importance. In such studies, respondents sometimes rate all the characteristics about the same, usually high: every characteristic may get a 4 or 5 on a five-point scale.
The Max-Dif methodology is simple. It calls for giving the respondent the list of attributes and benefits (A/Bs) and asking them to pick the one they deem most important, then the one they think is least important.
Then you go back and repeat the process, over and over, until all of the A/Bs have been evaluated.
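The repeated best/worst picks are usually summarized with a simple count score: the number of times an attribute is picked as most important minus the number of times it is picked as least important. A minimal sketch of that tallying, using hypothetical attribute names for illustration:

```python
from collections import Counter

def best_worst_scores(choices):
    """Summarize repeated best/worst picks into a count score per attribute.

    `choices` is a list of (best, worst) pairs, one per round; an attribute's
    score is (# times picked best) - (# times picked worst).
    """
    best = Counter(b for b, _ in choices)
    worst = Counter(w for _, w in choices)
    items = set(best) | set(worst)
    # Counter returns 0 for attributes never picked on one side.
    return {item: best[item] - worst[item] for item in items}

# Hypothetical respondent data: in each round the respondent names the
# most and least important attribute/benefit.
rounds = [("price", "brand"), ("durability", "color"), ("price", "color")]
scores = best_worst_scores(rounds)
# price: +2, durability: +1, brand: -1, color: -2
```

Note that the score is purely relative, which anticipates the rank-order objection below: a +2 says "picked over the others," not "important in any absolute sense."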
It has one advantage: it will give you maximum differentiation at the level of the individual respondent if you have a small number of attributes and benefits, say fewer than 12. So if you have a short list of A/Bs and you’re concerned about discrimination, this is a reasonable way to go.
A close alternative to this methodology is to ask people to pick the three most important (we would say desirable) attributes and benefits, followed by the three least important or desirable, then the next three most important, the next three least important, and so on. This approach is faster than the single-item version and can handle more attributes and benefits.
But back to the basic Max-Dif methodology:
- If it’s a long list of A/Bs, the task takes too much time to administer and doesn’t yield reliable data. Respondents are overwhelmed and begin to answer randomly.
- You can overcome this problem by embedding the attributes in subsets using an orthogonal design; call each subset a scenario or a frame. But this creates more problems than it solves: you lose the ability to look at the data on an individual-respondent basis. The problem is identical to conjoint measurement, where each respondent may see only 8 or 12 scenarios while the effects of perhaps fifty or even one hundred features are captured; each individual feature is seen by only a small number of respondents.
- The data generated provide only rank-order information. You have no sense of whether the attributes are great or awful.
- The scale that appears to be most commonly used is an “importance” scale, so it has the same problems as any importance scale: it overstates rational, tangible traits and understates the value of emotional, intangible ones.
- Although this approach improves discrimination at the individual-respondent level, it doesn’t necessarily improve discrimination at the aggregate level. This is particularly true when respondents are forced through an exercise that demands fine distinctions between characteristics they’ve hardly thought about before. The result is discrimination, but unreliable discrimination, and aggregate-level discrimination doesn’t necessarily improve.
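The arithmetic behind the frame-design objection above can be made concrete. Using illustrative numbers drawn from the conjoint comparison (12 frames per respondent, one hundred features), a quick sketch shows how little of the attribute list any one respondent ever evaluates:

```python
# Illustrative numbers, assumed for this sketch: 100 attributes,
# frames of 5 items, each respondent completing 12 frames.
n_attributes = 100
frame_size = 5
frames_per_respondent = 12

# Upper bound on distinct attributes one respondent can evaluate
# (attained only if no attribute repeats across that respondent's frames).
seen_at_most = min(n_attributes, frame_size * frames_per_respondent)
never_seen = n_attributes - seen_at_most

print(f"A respondent evaluates at most {seen_at_most} of {n_attributes} attributes;")
print(f"at least {never_seen} are never rated by that respondent.")
```

With these numbers, at least 40 of the 100 attributes are never seen by a given respondent, which is exactly why individual-level scores cannot be recovered from such a design.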
As stated at the beginning, Max-Dif is a questionable solution in search of a problem. It is a methodology that should be avoided rather than adopted.