Will a Psychology paper containing false data generated by an LLM tool be published in an accredited journal in 2024?
17 traders · Ṁ831 · Closes Dec 31 · 36% chance

LLM assistants and similar tools are notorious for outputting bad data and false citations ("hallucinating"). There has already been a highly public case of this leading to legal malpractice (https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html). Will we see a similar case or cases in the arena of Psychology during 2024?

I'll be considering all journals with an average impact factor above 10 over the last 10 years (2024 inclusive) that self-describe as primarily concerned with the field of Psychology.


Wouldn't it be impossible to prove the data was generated by an LLM unless the author admits to it?

predicted YES

@Calvin6b82 That's the rub, yeah. There's a 0% chance this won't happen. Whether or not it's caught, you know...
