Researchers are increasingly using AI to write texts for them

May 14, 2026

A study by the University of Pennsylvania has found a decline in the quality of academic papers

Artificial intelligence is increasingly changing working practices in academia. More and more researchers are turning to generative AI systems when drafting abstracts, academic articles or project descriptions. However, a new study by the University of Pennsylvania has reached a critical conclusion: whilst the use of AI tools in academic work is increasing significantly, the average quality of submitted work is simultaneously declining.

The study analyses the extent to which generative AI is now integrated into academic writing processes and what impact this has on academic standards. According to the researchers, what is particularly striking is the growing number of texts that appear formally correct but often lack depth of content, methodological precision and academic originality.

AI is transforming academic work processes

Generative AI systems are now being used in a wide range of areas within the research environment. In addition to literature reviews and language optimisation, they are increasingly helping with structuring, summarising and full-text generation.

According to the study, researchers are increasingly turning to automated text generators, particularly for abstracts, conference papers or first drafts. The barrier to using them is falling significantly, as modern systems are capable of generating linguistically fluent and formally convincing texts in a short space of time.

It is precisely here, however, that the authors of the study see a structural risk: the linguistic quality could give the impression of scientific substance, even though argumentative rigour, methodological depth or well-founded results are sometimes lacking.

More submissions, but less substance

The researchers are particularly critical of the link between rising productivity and declining quality of content. AI makes it possible to produce academic texts much more quickly, leading to an increase in submissions to journals and conferences. At the same time, however, the average relevance and originality of many contributions are declining.

The study points out that this could place an additional burden on peer-review processes. Well-written texts with weak content are more difficult to identify and significantly increase the workload for reviewers and editors.

Added to this is the risk of increasing standardisation of academic language. AI systems are based on existing patterns of phrasing and argumentation. This could lead to further harmonisation of linguistic and structural conventions, whilst innovative or unusual approaches to thinking become less common.

Scientific integrity under pressure

The discussion thus increasingly touches on fundamental questions of scientific integrity. Universities and research institutions worldwide are currently working on guidelines for dealing with generative AI.

Whilst many institutions accept AI support for linguistic optimisation or structuring in principle, its comprehensive use in scientific argumentation, analysis or knowledge generation remains highly controversial. It becomes particularly problematic when AI-generated content is not transparently labelled or when source references are incorrect or fabricated.

The authors of the study therefore warn against viewing AI solely as a tool for efficiency. Scientific quality, they argue, does not result solely from linguistically correct formulations, but from comprehensible methodology, critical reflection and original intellectual output.

Impact on research and higher education

The increasing use of generative AI is also likely to change the evaluation of academic work in the long term. Universities and academic publishers face the challenge of establishing new verification mechanisms and transparency standards.

These include:

  • disclosure requirements for the use of AI,
  • adjustments to peer-review procedures,
  • technical detection systems,
  • new requirements for documentation and traceability,
  • greater emphasis on original research output.

At the same time, the productive benefits of generative AI remain undisputed. AI can significantly reduce the workload for researchers, particularly with regard to administrative tasks, language barriers or data preparation. However, it is crucial that the systems are used to support, rather than replace, human work.

Between productivity gains and loss of quality

The University of Pennsylvania study thus highlights a key tension in the AI transformation within the academic sector: generative AI increases the speed and accessibility of scientific results, but at the same time calls established quality mechanisms into question.

This presents research institutions, academic journals and science policy decision-makers with the task of developing new rules for transparency, verifiability and scientific accountability. The more AI is integrated into academic work processes, the more important the question becomes of where machine support ends and where independent scientific work must begin.
