
Why this blog post now?

I’ve been dealing with data from a departmental prescreening sample all morning. Data cleaning is always time-consuming (see my existing post on the topic). Data from departmental research pool surveys present unique issues for two reasons. First, students have a strong incentive to get the survey done and receive credit. Second, too many students feel no compunction about lying and/or just “clicking through” a survey.

Bad data are a real problem with undergraduate samples. This is a topic I’ve discussed many times with a colleague (Dr. Jenel Cavazos). Dr. Cavazos has a lot of experience with students in introductory psychology. The majority of research pools draw primarily from intro psych courses, where completing research is a requirement. Dr. Cavazos and I feel strongly about students gaming the system. We also worry that other researchers may not appreciate the magnitude of the problem. Eventually, we’ll probably write a paper about it. In the meantime, here’s the short list of what to look out for.

Potential issues that lead to bad data:

  1. Repeated survey attempts
  2. Inattentive responding
  3. Lying

All of these can happen with any data collection, especially online data collections. But they are more likely to occur, and more difficult to prevent, in departmental prescreening samples. Below are some suggestions for mitigating the problem.

See my post on using MTurk to collect data and the problems it incurs.

Duplicates in prescreening samples (repeated survey attempts)

It is essential to ensure that you use data from any individual only once. Any time participants can complete a survey more than once, you risk violating assumptions of independence. (Unless you are collecting data for within-subjects analyses by design.)

The easy way to prevent repeated attempts by a single user is to select the “prevent ballot box stuffing” option in Qualtrics (or its equivalent on another survey platform). It’s not foolproof (they can simply use another browser!), but it does make it more difficult for someone to complete the survey multiple times.

However, it is in researchers’ interest to obtain as many responses as possible from the subject pool. Many studies offered to psychology department students require completion of the prescreening survey, and sometimes researchers use the prescreening data directly, so every good case counts. This matters all the more given how many cases you have to throw out in a prescreening sample. Therefore, I advise against blocking repeated attempts: let participants take the survey as many times as they need to complete it.

It is essential to take these repeated attempts into account when cleaning the data.

Identifying and dealing with duplicates

Fortunately, you can use SONA ids to find duplicate cases. (See my post on Embedded data if you do not already collect SONA ids in your surveys and automatically grant credit.)

To identify duplicates, run a frequency analysis on the SONA id variable and sort the output table by frequency (descending). In SPSS, double-click the table, right-click the column header, and select Sort Rows -> Descending. If there are any duplicates, they will be at the top. You can then copy the SONA id and search for it in your dataset.

Sort the file by SONA id so that the duplicates sit together. To do this in SPSS, right-click the variable name in Data View and select Sort Ascending (or Sort Descending, if you prefer).
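If you would rather script this step than point and click, the same check is easy to do outside SPSS. Here is a minimal sketch in Python/pandas; the file name prescreen.csv and the column name sona_id are placeholders for whatever your own export uses:

    import pandas as pd

    # Load the exported prescreening data. The file and column names here are
    # placeholders; substitute whatever your Qualtrics/SONA export actually uses.
    df = pd.read_csv("prescreen.csv")

    # Count how many rows each SONA id has, most frequent first.
    attempt_counts = df["sona_id"].value_counts()

    # Any id that appears more than once is a repeated attempt.
    print(attempt_counts[attempt_counts > 1])

    # Sort the full file by SONA id so repeated attempts sit next to each other,
    # just like sorting the Data View in SPSS.
    df = df.sort_values("sona_id")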

[Screenshot: duplicate rows in a departmental prescreening sample. This participant accessed the survey twice; the first time, they did not finish.]

I’ve had students take the prescreening survey up to five times. Mostly, there will be one or two incomplete attempts, and one completed survey. But sometimes people will do the entire thing twice.

This also happens with MTurk data! Workers will run out of time on MTurk, stop participating, but then accept another HIT later, and finish the survey.

For the researcher: What to do with the duplicates

It’s not a good idea to just use the final, completed row. Why not? First, there may be testing effects. Second, the participant may not pay attention the second time around and simply rush through to get credit. (Yes, I have seen evidence of this in the page timing data.)

I take the data from the first time the participant completes any given item or questionnaire. Yes, this requires tedious work. Sometimes I have to copy-paste bits and pieces throughout the row, when the participant stopped midway through a randomized block.
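If there are many duplicates, the “first completed response wins” rule can be partly automated. The sketch below is just that, a sketch: it assumes missing answers come through as blanks (NaN) in the export and that a StartDate column records when each attempt began (file and column names are again placeholders):

    import pandas as pd

    df = pd.read_csv("prescreen.csv")

    # Put each participant's attempts in chronological order, so the earliest
    # answer to any given item is the one that survives.
    df = df.sort_values(["sona_id", "StartDate"])

    # For every column, keep the first non-missing value per SONA id.
    # groupby().first() skips NaNs, which mirrors the rule of taking the data
    # from the first time the participant completed each item.
    deduped = df.groupby("sona_id", as_index=False).first()

Even with a script like this, the merged rows are worth spot-checking against the originals.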

For the Research pool administrator: How to get better data from departmental prescreening samples

First, do randomize the survey. Put one questionnaire per block. You can do several randomized blocks if branch logic is needed. I try to balance questionnaires from different labs across the survey flow. This way no single lab is stuck with fewer responses (there are always students who drop out mid-survey).

[Screenshot: survey flow for a departmental prescreening sample, showing an example of randomized blocks in Qualtrics.]

I strongly suggest including a timing question on each block. Because this is a lot of trouble, individual researchers (labs) should be required to provide their blocks. Then all the administrator has to do is import each block into the prescreening survey. If the lab doesn’t put a timing question in, that’s their problem.

The timing question allows researchers to discard participants who spend very little time on a specific questionnaire. Relying on total survey duration can mean excluding perfectly good data or including bad data. Some participants may complete the beginning of the survey conscientiously, and then either drop out or start flatlining*. Others may rush through individual questionnaires but leave the browser open and idle for an hour at one point.

*By “flatlining” I mean selecting the same choice on each item (e.g., all middle points). See my post on data cleaning.
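To make this concrete, here is one way the per-questionnaire exclusions could be scripted in pandas. Everything in it is an assumption for illustration: bfi_time stands in for a timing question’s page-submit seconds on one block, bfi_1 through bfi_10 stand in for that block’s items, and the two-seconds-per-item cutoff is an arbitrary example rather than a recommended standard.

    import pandas as pd

    df = pd.read_csv("prescreen.csv")

    # Hypothetical column names: "bfi_time" holds the timing question's
    # page-submit time (seconds) for one questionnaire block; bfi_1..bfi_10
    # are that block's items.
    items = [f"bfi_{i}" for i in range(1, 11)]

    # Flag responses that were simply too fast for this block.
    # (Two seconds per item is an arbitrary illustrative cutoff.)
    too_fast = df["bfi_time"] < 2 * len(items)

    # Flag flatliners: the same response on every item in the block.
    flatline = df[items].nunique(axis=1) == 1

    # Drop flagged rows for analyses that use this questionnaire; the rest of
    # the participant's survey may still be fine for other labs.
    clean_block = df[~(too_fast | flatline)]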

It might be a good idea to be frank with students: ask them, if they take the survey more than once, to skip any questionnaires they are certain they have already completed. I floated this idea to our research pool administrators last week. Of course, you then risk a student thinking they had completed something they had not.

Inattentive responding

Participants who are in a hurry or bored or simply exhausted after completing endless Likert-type scales are likely to provide dubious data. This is why it is vital to time each questionnaire and to inspect the data carefully. Researchers will want to discard flatliners or responses that are simply too fast.

All the suggestions I make in my data cleaning post apply here. Another idea is to penalize students who do not take their time on the survey: they could be refused credit if they are too speedy. This can be set up in Survey Flow to happen automatically, and you could potentially apply it at the level of individual questionnaires using the timing questions. See my post on Using Logic for details.

You can use survey flow to eliminate students who rush through the questionnaires.

In my personal surveys, I set up “Break” pages. On them, I suggest that participants take a break, leaving the browser tab open, and come back within a given time frame, which depends on the survey. Whether (and how) you do this depends on study design. For a departmental prescreening survey, it shouldn’t matter much if participants take a break and come back later. (OU’s survey, however, explicitly prohibits this.)

Lying

This is the tough one. One of the reasons Dr. Cavazos and I want to write that paper is that we have evidence of blatant lying. Some of it could be avoided by not offering an incentive to lie. For example, if you only want 25-year-old Catholics to participate, and you offer course credit for participation, your freshman class might turn out to be full of religious converts who took a long time to get to college. (How do we know? Institutional statistics.)

This issue also comes up in MTurk samples. One way to deal with it is to prescreen literally. SONA does allow prescreening on the basis of sex (though I’ve never gotten it to work properly). You can have participants complete the filter questions (religion, age) and then either individually invite the ones who meet the conditions or set up a separate task for the others. It’s best not to tell them if there is a chance they could get filtered out of an opportunity for extra credit.
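If you do go the route of inviting only the participants who meet your conditions, pulling their SONA ids out of the prescreening data is straightforward. A minimal sketch, with the column names and cutoffs standing in as placeholders for whatever filter questions you actually used:

    import pandas as pd

    df = pd.read_csv("prescreen.csv")

    # Hypothetical filter: participants who report being Catholic and at least 25.
    eligible = df[(df["religion"] == "Catholic") & (df["age"] >= 25)]

    # SONA ids to invite individually (everyone else gets the alternate task).
    print(eligible["sona_id"].tolist())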

What I do with MTurk is pay everyone equally and have the people I am not interested in complete a different task. I do this with my own college undergrad surveys as well. I am not sure how well that translates to departmental prescreening, though. On the other hand, prescreening participants really don’t have much incentive to lie per se: everyone completes the same survey (if not the same follow-up tasks, when the pool is used for experiments), and everyone gets credit. Hopefully few will lie, and since departmental prescreening samples tend to be very large, the inevitable bad data will not unduly influence analysis results.

Let me know if I missed anything by leaving a comment! And if this blog post helped you, please buy me a coffee!

