From the Chronicle of Higher Ed story:
For this year’s ICLR, Guha, the Northeastern computer scientist, turned in a study about how successfully large language models can write code when used by students with little programming experience. Conceptualizing and designing the experiment, running it on dozens of undergraduates across three colleges, and writing up the results took him and his team more than two years. Last fall, he got back four anonymous reviews, including the one complimenting his “lucid narrative.” It declared, too, that “this paper heralds a new dawn for the LLM community” and the analysis was “rendered in an approachable fashion, ensuring it is digestible for a broad readership.”
"We spent more than two years normalizing the eating of faces by leopards, but we never imagined that the leopards would eat our faces!"
Also from the story:

Russo acknowledged that he uses generative AI to help him write reviews — emphasis on help. He said that he always reads the paper and writes his own response, but occasionally asks ChatGPT to analyze it and come up with counterarguments for him to consider incorporating. Similarly, other scientists said that they value the tool for its ability to distill technical concepts and suggest relevant research to cite.
I'm sorry, but this is just morally bankrupt. "Our process is only powered by a forsaken child during the intermediate stages. The final product is completely orphan-free by weight."