Results Summary
What was the research about?
Researchers often use surveys to learn about what patients prefer. The wording of survey questions may affect how patients answer.
In this study, the research team compared different ways of asking patients with type 2 diabetes questions in a national survey. The questions asked patients about managing their diabetes and the medicines they prefer. The team wanted to see how accurately the different ways of asking questions measured patients’ preferences. The study looked at whether patients thought the different ways of asking questions:
- Were easy to understand and answer
- Led to answers that matched what patients really wanted
What were the results?
Managing diabetes. The team compared two ways to ask patients questions about issues important for managing their diabetes, such as being able to get healthy food. One way asked patients to rate each issue on a number scale with five choices. The choices ranged from -2 for a strong, negative impact on managing diabetes to +2 for a strong, positive impact on managing diabetes. The other way showed patients a short list of issues and asked them to choose what was best and worst in helping them manage diabetes.
Patients found it easier to understand and answer the questions that asked them to rate issues on a number scale. Rating issues on a number scale led to answers that were closer to what patients really wanted. But those who answered questions using the number scale tended to pick only the positive choices. Patients who chose between best or worst had more balanced answers.
Medicine preference. The team also compared two ways to ask patients questions about which medicines they prefer. One way showed patients several pairs of medicines. Each pair had two medicines with different characteristics shown side-by-side. The team asked patients to choose the medicine they liked better from each pair. The other way showed patients one medicine at a time. Patients chose which characteristic of that medicine was the best and which was the worst. Patients found it easier to understand the questions that asked them to choose the medicine they liked better from each pair. But patients didn’t find either way of asking questions easier to answer or closer to what they really wanted. Answers to questions that asked patients to choose the best and worst characteristics were more consistent.
Who was in the study?
The study included 1,103 adults with type 2 diabetes. Of these patients, 52 percent were white, 23 percent were black, 21 percent were Hispanic, and 4 percent were of another race. The average patient age was 62, and 50 percent of patients were male.
What did the research team do?
The research team created different ways to ask patients questions about managing their diabetes and which medicines patients prefer. The team sent the questions to patients in a national survey.
Then, the team assigned patients by chance to one of the two ways of asking questions. Patients either rated how important issues were for managing diabetes using a number scale or they chose which issues from the list were best and worst in helping them manage their diabetes. For questions about medicine preferences, patients either picked the medicine they liked better from two choices or chose which characteristics of one medicine were the best and worst.
Next, the team compared patients’ answers to the different ways of asking questions. The team looked to see if patients thought the questions were easy to understand and answer and led to answers that matched what they really wanted.
The research team worked with patients, community members, and diabetes experts during the study.
What were the limits of the study?
The study didn’t look at all ways to ask survey questions. Also, the study only included adults with type 2 diabetes who had internet access.
Future research could look at other ways to ask patients with type 2 diabetes or other health problems questions about their care preferences.
How can people use the results?
Patient organizations, clinicians, and researchers can use the results when deciding how best to collect information about what patients prefer.
Professional Abstract
Objective
To compare stated-preference methods for collecting information about the priorities and preferences of patients with type 2 diabetes
Study Design
| Design Elements | Description |
| --- | --- |
| Design | Empirical analysis |
| Data Sources and Data Sets | National survey of 1,103 adults with type 2 diabetes conducted through GfK KnowledgePanel, a probability-based online panel representative of the US population |
| Analytic Approach | Bivariate and multivariate statistical analyses comparing stated-preference methods |
| Outcomes | Primary: correlation between Likert and BWS priority scores; correlation between DCE and BWS preference estimates. Secondary: perceived ease in understanding and answering survey items; perceived consistency between survey responses and actual preferences |
Stated-preference methods are survey instruments used to collect patient preference information. Few studies have compared their effectiveness in measuring patients’ priorities and preferences or preference heterogeneity.
The researchers worked with a diabetes action board consisting of patients, community members, and diabetes experts to develop a national survey to compare approaches to stated-preference measurement. The researchers created different versions of the survey with varying approaches to asking stated-preference questions and then randomized respondents to complete them. Of the 1,103 patients surveyed, half were male, 52% were white, 23% were black, 21% were Hispanic, and 4% were other races. The mean age was 62.
The researchers compared stated-preference methods for two assessment purposes:
- Identifying self-management priorities: comparing Likert scaling to best-worst scaling (BWS). The researchers identified 11 facilitators and barriers to diabetes self-management, such as access to healthy food. The researchers randomly assigned patients to complete a survey prioritizing the 11 facilitators and barriers with either Likert scaling or BWS questions. For example, Likert scale questions asked patients to rate on a scale from -2 to +2 the impact of these factors on their ability to manage their diabetes. BWS questions asked patients to indicate which factors, such as access to healthy food, had the best and which had the worst impact on their diabetes self-management.
- Identifying medication preferences: comparing discrete choice experiments (DCE) to BWS. The researchers identified six product characteristics that can influence patients’ preferences for medications. The researchers randomly assigned patients to complete a survey with multiple sets of either DCE or BWS questions about medications. For example, DCE questions asked patients to consider multiple characteristics of two different medications and choose the one they preferred. BWS questions asked patients to review multiple characteristics of one medication and choose which characteristic was the best and which was the worst.
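The report does not describe scoring details, but a common way to summarize best-worst scaling responses is a count-based score: how often an item is chosen as best minus how often it is chosen as worst, divided by the number of times it was shown. A minimal sketch in Python; the item names and tallies below are hypothetical, not the study's data:

```python
# Illustrative best-worst scaling (BWS) count scores.
# The tallies are invented for demonstration only.
tallies = {
    # item: (times chosen best, times chosen worst, times shown)
    "access to healthy food": (60, 10, 100),
    "medication cost":        (15, 55, 100),
    "family support":         (40, 20, 100),
}

def bws_score(best, worst, shown):
    """Count-based BWS score: (best - worst) / appearances, ranging from -1 to +1."""
    return (best - worst) / shown

scores = {item: bws_score(*counts) for item, counts in tallies.items()}
# Rank items from most positive to most negative impact.
for item, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{item}: {score:+.2f}")
```

A score near +1 means an item was almost always picked as best when shown; a score near -1 means it was almost always picked as worst.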
Results
- Identifying self-management priorities: comparing Likert scaling to BWS. Mean priority scores from Likert scaling and BWS were highly correlated (Pearson’s rho = 0.97). A higher percentage of patients who completed the Likert scale items agreed that the survey items were easy to understand and answer and consistent with their actual preferences compared with patients who completed the BWS items (p < 0.01). Patients who completed the Likert scale items tended to use only the positive responses on the scale, while BWS generated more balanced responses.
- Identifying medication preferences: comparing DCE to BWS. Preference estimates from DCE and BWS were highly correlated (Pearson’s rho = 0.91). Patients who completed the DCE items were more likely to strongly agree that the choice tasks were easy to understand than patients who completed the BWS items (p = 0.028). Researchers found no difference in how patients rated the DCE and the BWS items as easy to answer and reflective of their actual preferences. BWS produced more consistent preference estimates, with smaller measurement error, than DCE.
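The method comparisons above rest on Pearson correlations between the scores the two methods produce for the same items. As a sketch of that computation (the score lists below are invented for illustration, not the study's data):

```python
import math

def pearson_rho(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical mean priority scores for the same five items,
# measured once with a Likert scale and once with BWS.
likert = [1.8, 1.2, 0.9, 0.4, -0.3]
bws = [0.7, 0.5, 0.4, 0.1, -0.2]
print(round(pearson_rho(likert, bws), 2))
```

A rho near 1 means the two methods rank the items almost identically even if their raw scales differ, which is the sense in which the study's methods "agreed" despite producing different response patterns.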
Limitations
Patients completed different versions of the survey and did not respond to questions for all stated-preference methods assessed. This study did not examine all types of stated-preference methods. In addition, the study only included patients with type 2 diabetes who had internet access.
Conclusions and Relevance
Clinicians, researchers, and patient advocacy organizations can use the results to inform their choice of survey method to learn about patients’ priorities and preferences.
Future Research Needs
Future studies could test other stated-preference methods and include people with other health conditions. Also, researchers could examine ways to incorporate patients’ priorities and preferences into clinical practice.
Peer-Review Summary
Peer review of PCORI-funded research helps make sure the report presents complete, balanced, and useful information about the research. It also confirms that the research has followed PCORI’s Methodology Standards. During peer review, experts who were not members of the research team read a draft report of the research. These experts may include a scientist focused on the research topic, a specialist in research methods, a patient or caregiver, and a healthcare professional. Reviewers do not have conflicts of interest with the study.
The peer reviewers point out where the draft report may need revision. For example, they may suggest ways to improve how the research team analyzed its results or reported its conclusions.
In response to peer review, the PI made changes including:
- Addressing reviewer questions about missing data by explaining that, because no established methods exist for handling missing data in choice experiments, the researchers included all choice tasks that had participant responses. The researchers also noted that data were rarely missing in the study, so they did not consider bias from missing data a major issue.
- Updating their description of patient and stakeholder engagement to more clearly articulate how the researchers used community-based participatory research (CBPR) principles as a guide for study development and implementation.
- Revising the study conclusions to summarize the study’s effort to compare different methods for eliciting patient preferences and priorities. The researchers acknowledged that although they could not identify a “gold standard” to assess the validity of the results of this research, they did note some implications for other researchers engaging in this work.