To prove she did not use ChatGPT, an associate professor was forced to write her papers on GitHub

Mondo Education | Updated 2024-02-08

Xifeng, from Aofei Temple. Qubit | QbitAI

Her painstakingly handwritten paper was rejected after a reviewer judged it "obviously ChatGPT at a glance".

The associate professor's story has drawn attention across the academic community and has been published as a column in Nature.

She has decided to write every future paper on GitHub, using the change history to prove her innocence.

In the article, she argues from her own experience that AI can damage science even when no one intends it to.

ChatGPT has undermined the peer-review process simply by existing.

As the story spread, netizens immediately thought of a familiar line:

When humans fail the Turing test.

Some netizens also said that they had encountered a similar situation:

I showed my manuscript to a colleague and he said the same thing. And I was like, "Hey, I thought my writing had improved!" Haha.

Event details.

The paper's author, Lizzie Wolkovich, is an associate professor of forest and conservation sciences at the University of British Columbia in Canada.

The rejected study was on "the impact of global change on ecological communities".

Lizzie admits that she is not very good at writing and, like many people, finds it a somewhat painful process.

To cope, she says she studied a pile of writing guides and eventually settled on her own process: sketch a few outlines, write a first draft, then revise it repeatedly.

She recommends this approach to her students as well, stressing that being able to articulate complex ideas is an essential skill for a scientist.

However, when Lizzie submitted her carefully polished paper, a reviewer unexpectedly accused her of using ChatGPT and thus of scientific fraud.

And the accusation was not about data falsification. Lizzie says her research data are transparent and reproducible, and no one questioned the veracity of her data or results. Instead, it was her hard work on the writing that was branded as fraud.

What she didn't expect was that the journal editor tacitly sided with the reviewer, remarking that "the writing style is unusual".

Faced with the accusation, Lizzie vehemently denies it and is determined to prove her innocence.

She points out that she writes in plain-text LaTeX under the Git version control system, so her entire change history can be verified on GitHub, right down to commit messages like "Finally started writing!" and "Wrote for another 25 minutes!"
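Her evidence hinges on the fact that Git timestamps every commit. As a rough illustration of what such an audit trail looks like (this is not her actual setup; the file name paper.tex and the repository layout are hypothetical), a short Python sketch like the one below prints the dated commit log for a single manuscript file:

```python
import subprocess

def writing_timeline(path="paper.tex"):
    """Print the dated commit history for one file, oldest first."""
    # "%ad  %s" = author date followed by the commit subject line.
    log = subprocess.run(
        ["git", "log", "--reverse", "--date=iso",
         "--pretty=format:%ad  %s", "--", path],
        capture_output=True, text=True, check=True,
    )
    print(log.stdout)

if __name__ == "__main__":
    writing_timeline()
```

Run inside any Git repository, this produces dated lines such as "2024-01-15 09:32:01 ... Finally started writing!", which is exactly the kind of timeline she offers as evidence.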

She also plans to go further by comparing the style of her papers from before and after ChatGPT appeared, and has even considered asking ChatGPT itself to confirm that it did not write the paper.
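The article does not say how such a style comparison would be done. One crude, purely illustrative approach (every name and sample string below is invented for the sketch, not taken from her work) is to compare word-frequency profiles of an old draft and a new one by cosine similarity:

```python
import math
from collections import Counter

def word_profile(text):
    """Lower-cased bag-of-words frequency profile."""
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    """Cosine similarity between two frequency profiles, in [0, 1]."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Toy stand-ins for a pre-ChatGPT draft and a recent one.
old_draft = "global change reshapes ecological communities over long timescales"
new_draft = "global change alters ecological communities across decades"

score = cosine_similarity(word_profile(old_draft), word_profile(new_draft))
print(f"style similarity: {score:.2f}")  # closer to 1.0 = more similar
```

Real stylometry uses far richer features (function words, sentence lengths, and so on), but even this toy version shows how "writing style" can be made quantitative rather than left to a reviewer's glance.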

Despite having all these ways to prove her innocence, Lizzie admits she was sorely tempted to simply walk away in anger.

Forced to write papers on GitHub.

At the end of the article, Lizzie lays out her views on the matter at length.

She points out that although AI has brought convenience, it has also created a series of problems, and her own experience shows that AI can cause harm simply by existing.

Scientific research must rest on trust and ethical standards, she argues, and the scientific community should develop clear guidelines for the use of AI rather than attack authors on scant evidence.

She also mentions that, to head off future accusations, she has decided to record the writing process of every future paper on GitHub, to show that her work is her own.

This sparked plenty of discussion among netizens. Some say the trouble caused by large models is "unexpected, yet entirely understandable":

If large models live up to people's expectations, the natural consequence is that our trust in anything written is undermined. That means yet another cornerstone of a functioning society will cease to exist.

What do you think about this?
