Research experts know that truly understanding consumer behaviours and motivators takes more than simply asking questions in a survey. Optimised research techniques have been shown to produce more accurate data, and therefore more impactful insights, by focusing on respondent engagement and the collection of high-quality open-ended responses. Empathetic techniques can reveal the motivations, preferences, and emotions behind consumer behaviour that traditional survey question wording can miss.
As the leading expert in data collection best practices, Kantar recently conducted a research engagement in which some questions were asked in an optimised way and others in a more basic way. In this research experiment, we asked respondents a series of questions about their travel habits and sentiments and measured how the different survey techniques affected their responses.
Guiding Respondents with Word Limits
At the start of our study, we sought to improve the quality of open-ended question responses by improving respondent attention.
The experiment: Respondents were asked to describe what they were most looking forward to in the next 12 months. We then split the sample into two groups: Group A had a 25-word limit with a countdown, while Group B had no restriction or word limit.
Group A question text: In fewer than 25 words, describe ONE thing that you are looking forward to MOST over the next 12 months? A word countdown was shown at this question, counting down from 25.
Group B question text: Describe ONE thing that you are looking forward to MOST over the next 12 months?
Responses from both groups were then reviewed by our proprietary AI-enabled open-ended evaluator tool.
The results: The group with the word limit provided over 60% more total words than the unrestricted group, and the AI evaluator tool gave both groups similar quality scores (averaging 7.3 out of 10).
The lesson: Carefully designed constraints can enhance focus and engagement, leading to higher-quality answers. A restriction such as a word counter can make a question feel like a challenge, encouraging better engagement and richer responses.
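As an illustration (not Kantar's actual implementation), the countdown mechanic can be sketched with a small helper that reports how many words a respondent has left:

```python
def words_remaining(response: str, limit: int = 25) -> int:
    """Return how many words the respondent may still type under the limit.

    A simple whitespace split stands in for whatever tokenisation a real
    survey platform uses; the function name and limit handling here are
    illustrative assumptions, not the tool used in the study.
    """
    used = len(response.split())
    return max(limit - used, 0)

# A live survey would re-run this on each keystroke to update the counter.
print(words_remaining("A family trip to Japan next spring"))  # prints 18
```

In practice the survey front end would refresh this count as the respondent types, which is what turns the limit into the visible "challenge" described above.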
Reducing Repetition with Multi-Code Questions
Too much repetition in a survey can lead to respondent disengagement and loss of attention. Research experts at Kantar tested two survey question formats (basic vs. empathetic) to determine which approach yielded more consistent responses and reduced respondent fatigue.
The experiment: In the basic version, respondents evaluated 16 words individually on a three-point scale (not at all, partially, or perfectly) to describe their most recent vacation. This list included contradictory options such as “relaxing” and “stressful” or “luxurious” and “budget-friendly,” and respondents were given no specific instructions beyond rating how well each word described their trip.
Basic version question text: “What words would you use to describe your most recent vacation?”
Basic version response options:
- Relaxing
- Stressful
- Adventurous
- Tame
- Cultural
- Romantic
- Luxurious
- Budget-friendly
- Fun
- Exotic
- Familiar
- Underwhelming
- Rejuvenating
- Chaotic
- Disappointing
- Lonely
In the empathetic version, respondents simply selected the words that described their vacation from the same list. The question wording was also adjusted to be more engaging.
Empathetic version question text: “Vacations can be brilliant but not every vacation lives up to our expectations! What words would you use to describe your most recent vacation?”
In the basic version, we classed a contradiction as a respondent stating that each word in a contradictory pair either “didn’t describe their vacation at all” or “described their vacation perfectly”; in the empathetic version, a respondent needed to select both options in a pair to have contradicted themselves.
We allowed each respondent one contradictory pair and reviewed the number of respondents who had contradicted themselves more than once.
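The contradiction check for the multi-code version can be sketched as follows; the pair list is an illustrative subset of the word list above, and the scoring logic is an assumption rather than Kantar's actual tooling:

```python
# Illustrative subset of contradictory pairs from the vacation word list.
CONTRADICTORY_PAIRS = [
    ("Relaxing", "Stressful"),
    ("Luxurious", "Budget-friendly"),
    ("Exotic", "Familiar"),
    ("Fun", "Disappointing"),
]

def count_contradictions(selected: set) -> int:
    """Count contradictory pairs in which the respondent picked both words."""
    return sum(1 for a, b in CONTRADICTORY_PAIRS
               if a in selected and b in selected)

selected = {"Relaxing", "Stressful", "Fun", "Cultural"}
# One contradictory pair is allowed; two or more flags the respondent.
print(count_contradictions(selected) > 1)  # prints False
```

The same function works for the basic version by treating "described perfectly" ratings as selections, which is how the two formats can be compared on a like-for-like basis.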
The results: In the basic version, 23% of respondents selected two or more contradictory pairs of statements. This number dropped to just 1% among empathetic survey respondents. Moreover, the number of respondents selecting three or more contradictory pairs fell to nearly 0% in empathetic surveys, compared to 9% in the basic version.
The lesson: Reducing repetitive question formats—such as asking respondents to rate each word separately—helps maintain engagement, improves data quality, and lowers dropout rates. Multi-code selection questions (where respondents pick from a list) offer a more intuitive and less fatiguing experience. Best practice is to limit repetitive loops to 10-12 items unless absolutely necessary, which also helps reduce the length of the interview.
Relevant Questions Lead to Deeper Insights
Relevance is a key factor in maintaining survey engagement and ensuring high-quality responses. Irrelevant questions can slow down an interview, increase respondent fatigue, and generate unnecessary noise from those without a strong opinion.
Our research experts again tested basic and empathetic versions of a survey to determine how relevance affects engagement and response quality. This was done by tailoring open-ended follow-up questions to respondents' ratings of their most recent vacation.
The experiment: Respondents were asked to rate their last vacation on a scale of 1 to 10 and were then asked to explain the reasons for the rating. In the basic version, we displayed the open-ended question “Why did you give that rating?” regardless of whether they had a strong opinion. In the empathetic version, only respondents who gave a low (3 or less) or high (8 or more) rating were asked a follow-up question, with wording designed to match their sentiment:
Empathetic low score question text: Oh dear, that doesn’t sound good! Could you tell us why you didn’t enjoy your vacation?
Empathetic high score question text: Great! Can you tell us what made your vacation so enjoyable?
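The conditional routing in the empathetic version can be sketched as below; the question wording is taken from the study, but the routing function itself is a hypothetical illustration:

```python
def follow_up_prompt(rating: int):
    """Return a sentiment-matched follow-up question, or None for
    mid-range ratings (4-7), which receive no open-ended follow-up."""
    if rating <= 3:
        return ("Oh dear, that doesn't sound good! "
                "Could you tell us why you didn't enjoy your vacation?")
    if rating >= 8:
        return "Great! Can you tell us what made your vacation so enjoyable?"
    return None

print(follow_up_prompt(5))  # prints None
```

Routing on the rating before showing the open end is what keeps the follow-up relevant: only respondents with a clear opinion are asked to explain it.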
The results: The tailored approach elicited 20% more words on average than the basic survey question, which offered no sentiment-matched prompt. This increase in response volume indicates that respondents took extra time to add depth to their replies, depth that traditional methods would have overlooked. We also saw a significant increase in the quality of the empathetic open-ended responses (6.8/10 versus 4.4/10 for the basic version), indicating that asking relevant questions produced responses that were not only longer but also of better quality.
The lesson: Respondents often find it hard to explain mid-range ratings, leading to lower-quality answers. By asking follow-up questions only when someone has given a clear opinion, we keep them engaged and ensure the question feels relevant. For example, asking why someone rated something 5/10 isn’t very useful since their opinion isn’t particularly strong.
Conclusion
Empathy in surveys can take many forms. It can be as simple as wording questions in a friendlier, more conversational way, but it is also about reducing the load on respondents and removing judgement biases to encourage them to be open and honest about their opinions.
In considering the needs of the respondent, researchers are able to collect richer and more reliable data that allow organisations to make better-informed decisions and foster stronger relationships with their audiences. Whether understanding consumer preferences for vacation packages or identifying barriers to adoption, empathetic approaches unlock insights that would otherwise remain hidden.
Get more answers
For more findings from this study, access the complete Connecting with the Tourism Community report. Find additional insights into how global consumers are travelling, the resources they use to plan trips and how they make decisions about where to go.
About this study
This research was conducted online among more than 10,000 consumers across ten global markets (Australia, Brazil, China, France, Germany, Singapore, South Korea, the UK, the US and the UAE) between 12-24 November 2024. All interviews were conducted as online self-completion and collected based on controlled quotas evenly distributed between generations and gender by country. Respondents were sourced from the Kantar Profiles Respondent Hub.