Duke University’s Interdisciplinary Behavioral Research Center for Human Engagement
The COVID-19 pandemic reoriented the research landscape toward online studies. However, the rise of AI technologies that mimic human responses has introduced new complications that can undermine the authenticity of data obtained through online surveys, says Patty Van Cappellen, director of Duke University’s Interdisciplinary Behavioral Research Center (https://ibrc.duke.edu/).
Appropriate payment
Van Cappellen sees value in both online and face-to-face research. Yet AI tools such as ChatGPT can be used by participants to complete surveys, promptly supplying whatever answers are requested. Such tools can even process videos and answer questions about their content, or offer opinions on it. There are likely several reasons why participants resort to this.
One reason, the researcher suggests, is that people paid as little as six dollars (around 5.50 euros) for their effort want to save themselves time and work. Limited resources may be another factor. Either way, Van Cappellen says, this raises an important question: how can we learn anything about people’s opinions, feelings or behavior if the available data actually comes from ChatGPT?
Re-evaluating online research
For Van Cappellen, the phenomenon is a wake-up call to re-evaluate the quality and authenticity of online research data. She asks whether it could signal a renaissance of studies conducted in person by the researchers themselves. The controlled environment of such research has always been regarded as its hallmark, but its value, she argues, goes well beyond control and accuracy.
Among the benefits of this approach is the fact that nothing can replace direct human interaction. That holds for students and early-career researchers as well, who learn far more about human behavior in person. Nevertheless, online research has its advantages, particularly for reaching remote locations and conducting cross-cultural studies.
Van Cappellen therefore emphasizes that safeguards against AI-driven manipulation of online data must be improved, while access to and support for in-person research projects should be expanded at the same time.