Unmasking Authenticity: Using patient voices to transform healthcare-provider insights into business impact.

Healthcare Providers (HCPs) are often a primary lens for understanding treatment landscapes, patient needs, and product potential. However, we often see a convergence of issues that can impact the authenticity of the insights collected:

  • HCPs are often “frequent fliers,” which can lead to a participant-versus-HCP mindset. Many HCPs participate in research on an almost weekly basis, driven partly by the sheer demand for research. And the questions we ask as researchers are often predictable and don’t challenge HCPs’ thinking. As a result, HCPs can fall into a “participant mindset,” leading to reduced engagement and surface-level answers to familiar research topics and questions.
  • HCPs are human and susceptible to bias. An HCP’s memory can be clouded by the sheer volume of patient interactions they have each day. This can make it difficult for HCPs to recall specific dialogue or behaviors, and they may inadvertently leave out critical details that give us insight into their reality. In addition, HCPs may hold established beliefs about how they perceive and approach certain therapeutic areas, which can lead to missing – or subconsciously ignoring – information that provides important context.

All of this impacts the depth and accuracy of insights and can mean missing crucial nuances of real-world clinical practice, nuances that ultimately drive business decision-making.

The question: How do we more effectively uncover authentic insights rather than practiced answers?

Increasing Authenticity via the Patient Voice: 

Over the years, we’ve found that even the best projective techniques or innovative activities don’t always address the issues of disengagement, bias, or predictability. These techniques can come across as gimmicky to the participant, creating an additional barrier that prevents us from uncovering authentic insights. 

We have found the most success by rooting the HCP in the patient and the patient experience. By leveraging approaches that bring the patient into research, we force the HCP to step outside of the interview context, back into their clinic, and into an “HCP mindset.”

We have created four techniques to help better capture nuances and uncover more accurate insights. An overview of each of the techniques is shown in Figure 1. All four root the conversation in the patient voice and mentally drive HCPs back into the clinic.

Figure 1. Quick snapshot of the techniques, why they bring value, and the research topics where they are most effective. 

DigitalPersona™
What: Leverages AI to introduce a human element vs. standard text.
Why: Creates a more natural, human interaction, allowing for stimulus feedback that more closely aligns with reality.
Where:
  • Target Patient / Patient Type
  • Segmentations
  • Visual Aid Testing
  • Lexicon / Dialogue Exploration

Lights.Insights.Action™
What: Incorporates patient actors to mimic real-world interactions.
Why: Enables observation of more natural, in-the-moment behavior, reducing the potential for HCP recall bias while circumventing any potential patient-privacy concerns.
Where:
  • HCP / Patient Dialogue
  • Drivers & Barriers
  • Messaging
  • Solution Exploration & Testing

WorldBuilding™
What: Real-time illustrations of HCP feedback.
Why: Uses visual cues to focus the HCP on specific patient behaviors and characteristics, preventing the HCP from speaking in generalities.
Where:
  • Target Patient / Patient Type
  • Patient Segmentation

Bridge Groups
What: HCPs observing patients, in real time, discussing their experiences and perspectives.
Why: Identifies breakdowns in communication and empathy between the two groups; “seeing is believing” for HCPs.
Where:
  • Pre-Positioning
  • Target Patient / Patient Type
  • Drivers & Barriers
  • Lexicon / Dialogue Exploration

A Project Example Using Lights.Insights.Action™

We witnessed the impact and value that Lights.Insights.Action™ delivered on a recent study for a pharmaceutical client whose oncology asset had been on the market for about a year. The medication had a unique side effect that required unconventional monitoring and management. The client team wanted to better understand the overall adverse event management experience – from the point it was discussed with the patient to when it was actively managed by the HCP – to help uncover barriers to prescribing.

One hypothesized barrier was that HCPs were presenting the side effect profile in a way that dissuaded patients from wanting to try the product, and this was a key research question the team needed to address. 

A more traditional option would be to ask HCPs directly, “How do you discuss this product with your patients?” This likely would have yielded perfectly valid answers, however:

  • The low prevalence of the tumor type means there are relatively few conversations about the medication, so HCPs may have struggled to remember exactly what they discuss with patients and the specific language they use.
  • They may unconsciously tell us what they think they should tell the patient, rather than what they actually tell the patient.

This was the perfect use case for our Lights.Insights.Action™ technique, as we wanted to understand actual – versus stated – behavior, and to hear the specific language used during the patient discussion. The technique integrates a roleplaying exercise by leveraging a patient actor. We gave the HCP the necessary background information about the patient and grounded them in the moment: the patient’s chart, demeanor, and reason for being in the office that day. During the roleplay, the HCP reviewed potential treatment options, explained them as they normally would to a patient, and ultimately outlined next steps in the patient’s treatment plan.

The technique pulled the HCP into the “HCP mindset”. We observed what the HCP actually said, rather than relying on the HCP sharing what they think they say. We heard the natural language they use and picked up nuances in body language and tone. One quote from an HCP participant sums up the value of the exercise: “I said that? I didn’t realize I say that.” 

The research delivered a clear answer to the team’s hypothesis and ultimately shaped and prioritized marketing efforts for the brand.

Figure 2: Quote from an HCP as the moderator probed around their use of certain lexicon.  

The Conclusion: 

All four techniques introduce the patient voice into HCP research. They help circumvent some of the human biases that can surface, such as recall bias and confirmation bias, which can diminish the authenticity of the research results. We’ve found these techniques help increase engagement from HCPs; they find the exercises exciting and interesting, leading to greater depth in their responses. Research teams are also energized by the techniques; they’re more actively involved during research sessions and debriefs, and they’re excited to socialize the insights across the organization.

These results are not limited to the healthcare space. The Link Group has successfully leveraged these techniques outside of the healthcare vertical, whether replicating sales representative and customer interactions or further contextualizing target segments. Authentic insights are a core need of every research team in every industry, and finding ways to bring authenticity to the forefront creates more impactful and credible results.

Want to learn more?

The Link Group is excited to present more details on this topic at the upcoming IA IGNITE Healthcare session on June 5th. Jeff Whiteside from The Link Group will be presenting alongside Shawn McKenna from Currax Pharmaceuticals. 

We will also be presenting a different case study related to DigitalPersona™ and WorldBuilding™ and the tremendous impact these techniques can have on delivering authentic insights on March 12th at the Intellus Summit in Charlotte. Laura Bayzle and Jeff Whiteside from The Link Group will be presenting alongside Jen Möller from Pfizer.

How to Identify AI Survey Fraud

Surveys are a cornerstone of market research, but what happens when the data itself becomes unreliable? AI has revolutionized many aspects of our lives, and research is no exception. But with this progress comes a new challenge: AI survey fraud. Have you ever suspected a survey respondent might not be who they seem, perhaps rushing through questions or giving nonsensical answers? AI has introduced a whole new level of sophistication to survey fraud. This blog post dives into the growing issue of AI survey fraud, exploring how it works, how it impacts research, and, most importantly, what you can do to protect the integrity of your data.
In this article, we’ll cover:
  1. AI Survey Fraud: What is it?
  2. How AI survey fraud impacts research findings
  3. How to identify AI survey fraud
  4. Strategies to prevent survey fraud

Quality in, quality out. We all know that a study’s results are only as good as the data behind it, which is why The Link Group has always put a huge emphasis on data quality. We have an internal team focused on advancing our data quality protection strategies, we’re involved in an industry-wide data quality task force, and we’re often told by our panel partners that we have one of the most rigorous data-cleaning processes they’ve seen.

Due to the rise in use and capabilities of Artificial Intelligence (AI) programs, we’ve seen an uptick in more sophisticated forms of survey fraud. The type of “AI Survey Fraud” we’ve encountered is much harder to detect, since it often doesn’t get caught by typical speeding, straight-lining, or logic-trap quality control checks.
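To make that concrete, the classic automated checks can be sketched in a few lines. This is a hypothetical illustration, not our actual cleaning code; the field names and the three-minute threshold are assumptions for the example.

```python
# A minimal sketch of two classic automated QC checks: speeding and
# straight-lining. Field names and thresholds are illustrative only.

def flag_classic_qc(respondent, grid_answers, min_seconds=180):
    """Return which traditional quality flags a respondent trips."""
    return {
        # Speeding: finished far faster than plausibly possible
        "speeder": respondent["duration_seconds"] < min_seconds,
        # Straight-lining: one identical answer across an entire grid
        "straight_liner": len(set(grid_answers)) == 1,
    }

r = {"duration_seconds": 95}
print(flag_classic_qc(r, [3, 3, 3]))  # trips both flags
```

AI-assisted fraud typically passes both of these checks with ease, which is exactly why it demands the closer, hands-on review described in this post.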

What is “AI Survey Fraud”?

AI survey fraud is a newer type of survey fraud that utilizes some form of AI to aid in completing the survey. While we don’t have full clarity on exactly how the fraud is being executed, we’ve learned a lot about it from several recent studies and conversations with others in the industry. We know that it is happening at a large scale and that often these fraudsters will find a path through your screener through trial and error to allow for easier bulk qualifying. They also seem to have ways to avoid typical quality control traps within the survey and have AI-generated open-end responses that are on topic and sound knowledgeable when you review them at first glance. And from talking with our panel partners, we’ve learned that this type of fraud has been on the rise across the entire industry in recent months.

How does AI survey fraud impact me?

This type of fraud is especially concerning because it’s harder to detect and happening on a much larger scale. Automated quality checks will only get you so far; catching this more sophisticated fraud requires hands-on data review. This is why we’ve adopted several new strategies to deep dive into the data – on top of our already rigorous standard of closely reading every open end for every respondent.

In any given survey we know we’ll have a small percentage of people who we clean out for a multitude of reasons, whether it’s not paying attention, speeding through to get the incentive, or trying to game the system in a one-off survey. AI fraud, however, is coming in at a higher volume and can take over a chunk of your survey responses and quickly fill up your quotas.

In a recent study across multiple countries and target types, we cleaned out ~40% of our responses due to suspicions of “AI Survey Fraud.” While this is certainly on the higher end, fraudulent data at this scale can significantly distort your findings. Below is a blinded question from that study showing the impact this new type of bulk fraud could have had on the findings had it not been caught: on this question, there was a nearly 60% difference in Top 2 Box ratings between good and bad respondents.

So, what can we do about this issue?

As with AI’s positive capabilities, our understanding of how to detect and prevent its use in surveys is constantly evolving. There is currently no magic bullet for stopping AI from making its way into your survey, but the good news is that we know it exists: we can watch for patterns in the data and look even more closely during data review while fielding.

Tips for Identifying AI Survey Fraud

Before Fielding:

  • Select your survey panels / data sources carefully. The company you partner with to recruit respondents and field your survey is the first line of defense. You want a partner who is implementing measures to improve data quality (e.g., reCAPTCHA, fraud-scoring systems, de-duplication).
    • If you’re using multiple panel sources, ensure that you have a variable to track the data coming from each one. Doing so helps with detecting patterns and potentially identifying a single data source causing most of your quality issues.
  • Include at least one emotional/empathy-evoking open end (e.g., “What was your experience when you were diagnosed with XYZ disease?”).
  • Disable copy-paste functionality through programming, or at least track whether a respondent uses it.
  • Include reCAPTCHA, quality control check questions that don’t consistently have the same correct answer, and honey pot questions (i.e., questions hidden on a page that a bot would answer but a real human respondent would never see).
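On the data side, a honey pot is simple to score: any answer at all to the hidden question is a red flag. A minimal, hypothetical sketch, where the field name `hidden_q` is an assumed example rather than a real platform variable:

```python
# Hypothetical honey-pot check: the survey page hides a question via
# CSS, so a genuine respondent leaves it blank while a bot filling
# every field answers it. The "hidden_q" column name is an assumption.

def tripped_honey_pot(record, honeypot_field="hidden_q"):
    """True if the hidden question received any answer at all."""
    value = record.get(honeypot_field)
    return value is not None and str(value).strip() != ""

respondents = [
    {"id": "r1", "hidden_q": ""},                # human: never saw the field
    {"id": "r2", "hidden_q": "Strongly agree"},  # bot: answered it anyway
]
suspects = [r["id"] for r in respondents if tripped_honey_pot(r)]
print(suspects)  # ['r2']
```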

Strategies for Preventing AI Survey Fraud

During Fielding:

  • Clean daily. The AI fraud we have seen comes into the survey in large batches, filling quotas and preventing real respondents from getting through. High frequency cleaning will help avoid this.
  • Review your data from multiple angles. Start by sorting data by survey start date to aid in recognizing patterns. You can also consider sorting open ends alphabetically, by panel source, by IP address, and by any other device identification your survey platform collects.
  • When reading open ends, look for these signs of AI survey fraud:
    • Longer-than-usual open ends. Most real respondents try to get their point across in as few words as possible (you can use Excel formulas to aid in this process).
    • Re-phrasing of the question in the answer. Again, most humans don’t take the time to do this.
    • Substantial responses to optional OEs, particularly the one at the end where you may ask for feedback on the survey.
    • Respondents suddenly seeming to have more knowledge on a topic than past respondents (e.g., they know several unaided drugs in development and spell them correctly).
    • Responses written in the third person when they should be in the first person.
  • Have multiple researchers review open ends. We recommend having more than one set of eyes on at least the open ends within your cleaning file.
  • Track incidence rate on a daily basis. Create a calculation to track IR, since a sudden spike may be a tip-off for a batch of suspicious data. In our experience, AI fraud tends to be set up to ensure the respondent qualifies for the survey, which inflates incidence rates.
  • Let your panel providers know if you suspect AI fraud. This helps the panel providers investigate further and pushes them to be part of the solution of finding ways to prevent AI fraud from entering panels.
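Some of the open-end signs above can be pre-screened programmatically to prioritize hands-on reading. The sketch below is illustrative only: the word-count threshold and the crude word-overlap measure for question re-phrasing are assumptions, and it flags candidates for review rather than replacing reading every open end.

```python
# Illustrative pre-screen for two open-end fraud signs: unusually long
# answers and answers that echo the question's own wording back.
import re

def words(text):
    """Lowercase word list with punctuation stripped."""
    return re.findall(r"[a-z']+", text.lower())

def flag_open_end(question, answer, max_words=60, overlap_threshold=0.6):
    """Heuristic flags for one open-ended response (thresholds assumed)."""
    q_words = set(words(question))
    a_words = words(answer)
    # Share of the question's words echoed back in the answer
    overlap = len(q_words & set(a_words)) / len(q_words) if q_words else 0.0
    return {
        # Longer-than-usual: real respondents tend to be brief
        "too_long": len(a_words) > max_words,
        # Re-phrasing: a large share of the question repeated verbatim
        "rephrases_question": overlap >= overlap_threshold,
    }

q = "What was your experience when you were first diagnosed?"
a = "When I was first diagnosed my experience was difficult, as you might imagine."
print(flag_open_end(q, a))  # {'too_long': False, 'rephrases_question': True}
```

A similarly small calculation per fielding day (qualified ÷ entered) covers the incidence-rate tip: a sudden jump in that ratio marks a batch worth extra scrutiny.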

We know that survey fraudsters tend to adapt quickly and create workarounds for our countermeasures; however, we are constantly expanding our toolbox, reassessing how successful each tactic is at catching AI, and brainstorming new tactics to stay ahead of the curve.

AI Survey Fraud is an industry-wide issue, and our goal is to drive awareness and to be a part of the collective push to improve data quality. We will continue to advance our tools to catch this type of fraud and share our learnings as we conduct research on the effectiveness of our new tactics.

If you’d like to keep up with the latest in the industry when it comes to data quality, you can check out https://globaldataquality.org/. Working together as a full market research industry is the best way to push for a future with higher data quality!

If you would like to chat more about anything we discussed in this blog post, please don’t hesitate to reach out.