Well… it is happening. You can’t put spilled milk back in the bottle. But you can add requirements that try to discourage this behaviour.
E.g. the submission form could include a mandatory field: “I hereby confirm that I wrote this paper personally.” The terms would note that violating this rule can lead to a temporary or permanent ban of the authors. In a world where research success is measured by points in WOS, this could slow the rise of LLM-generated papers.
Unironically, maybe they should be scored by LLMs? My first thought was that reviewers could score the papers, but that would lead to even more groupthink.
Ideally whoever is paying the academics should just be paying attention to their work and its worth, but that would be crazy.
> This approach dismisses the cases where AI submissions are generally better.
You’re perhaps missing the not so subtle subtext of Peter Woit’s post, and entire blog, which is:
While AI is getting better, it’s still not _good_ by the standards of most science. However, it’s as good as hep-th, where (according to Peter Woit) the bar is incredibly low. His thesis is part “the whole field is bad” and part “the arXiv listings for this subfield are full of human slop.”
I don’t have the background to engage with whether Peter Woit’s argument has merit, but it’s been consistent for 25+ years.
My comment was more an answer to the proposed gatekeeping of science as a human activity.
Yes, AI is still not good in the grand scheme of things. But everybody actively using it has grown concerned over the past two months at the leapfrogging of LLMs - and surprised, as they thought we had arrived at the plateau.
We will see in a year or two whether humans still hold an advantage in research - currently very few do in software development, despite what they think of themselves.
The other side of the coin is: automating science as a machine activity.
Is that what we want? I agree with you that the use of language models in science is an inevitable paradigm shift, but now is the time to make collective decisions about how we're going to assimilate this increasingly super-human "intelligence" into academic practices, and the rest of daily life. Otherwise we will be the ones being assimilated by a force beyond our control.
The progress is so rapid that the only people who might have control over the process are those with self-interest, mainly financial, which is not aligned with - and in some aspects opposed to - the interests of humanity.
It’s already automated. Do you think astronomers manually count stars, or that medical scientists manually run chemical reactions? Why is automation by AI wrong when all other automation was beneficial?
The single most valuable part of science is keeping the gates: not adding things to the corpus of scientific knowledge unless they can be properly substantiated.