When AI becomes a weapon for applications and examinations, how will universities around the world respond?

Mondo Technology Updated on 2024-03-06

Photo: Visual China.

Yangcheng Evening News reporter Sun Wei, intern Hu Yuxiao.

From the advent of ChatGPT to the stunning debut of Sora, the global higher education field has faced unprecedented opportunities and challenges amid the wave of generative artificial intelligence (AI). At the same time, the use of AI to cheat has drawn global attention. In the face of the all-round penetration of AI technology, how will universities protect the temple of academic integrity?

AI study aids have caused controversy: universities in many countries have been affected.

In the past year, major universities in the UK have been involved in an unprecedented "cheating storm".

According to the UK's Universities and Colleges Admissions Service (UCAS), about 7,300 plagiarized personal statements were detected in 2023, a 105% increase over 2021, when ChatGPT was not yet available.

Beyond application essays, the misuse of AI in assignments and tests has also drawn concern.

The online course provider Study.com surveyed 1,000 American students over the age of 18: more than 89% had used ChatGPT while doing homework, 48% admitted to using it on at-home tests or quizzes, 53% had it write an essay, and 22% had used it to generate an outline.

As cheating has increased, the use of AI in higher education has sparked heated debate. To tackle AI-enabled cheating, universities around the world have adopted different strategies.

It is reported that as of July last year, more than 40% of British universities had investigated students for using artificial intelligence to cheat, and at least 377 students at 48 institutions were under investigation for using AI chatbots.

In the U.S., professors at George Washington University, Rutgers University, Appalachian State University, and other schools have dropped some take-home assignments in favor of assessments such as in-class work, handwritten essays, group work, and oral exams. Frederick Luis Aldama, humanities chair at the University of Texas at Austin, said that to prevent cheating, he deliberately chose texts less "familiar" to ChatGPT, such as William Shakespeare's early sonnets rather than "A Midsummer Night's Dream."

Rice University in the United States recently issued an amendment prohibiting students from using AI software without proper citation. The school emphasizes that passing off AI-generated content as original work is considered plagiarism and will be referred for adjudication. In addition, teaching staff have the right to prohibit the use of AI programs in their classes.

Since September 2023, the University of Hong Kong (HKU) has lifted its earlier ban on students using generative AI and now offers applications such as ChatGPT (via Microsoft Azure OpenAI) and DALL-E to teachers and students free of charge. However, students may not submit more than 20 prompts to the AI per month.

Contrary to the alarms of many educators, researchers at Stanford University believe concerns about AI-fueled cheating are overblown. Long before ChatGPT appeared, 60 to 70 percent of students reported having "engaged in at least one cheating behavior in the past month"; in the 2023 survey, that figure held steady or even dipped slightly, according to study leader Denise Pope.

A gray area with hidden risks: using AI is a "big gamble".

For many college teachers and students, AI tools are not unfamiliar, but the boundaries of using AI have always been a "mystery".

Xiao E, a student at the University of Wisconsin-Madison in the United States, told reporters that using AI programs on assignments or papers "does have an element of gambling": "If it is detected as cheating, you get a zero at the least, or face the academic disciplinary board at worst." "Whether and how you can use AI depends entirely on the teacher, and different teachers have different attitudes toward AI," she said. Some teachers of language and writing courses explicitly prohibit students from using AI in their syllabi. Others are quite supportive, typically recommending that students use AI to translate, polish, and expand their thinking, while cautioning against relying too heavily on AI-generated content. "For students, there is still a lot of ambiguity in the rules for using AI."

Xiao E's concerns are not unique. In Sweden, a survey by Chalmers University of Technology found that most students have a positive attitude toward AI, but many feel anxious about using it because they do not know what counts as cheating. A recent KPMG survey showed that more than half of Canadian students over the age of 18 are using AI to help with their studies, and nearly 90% believe generative AI has improved the quality of their work. Yet even as many benefit from the technology, 60% of respondents believe the practice constitutes cheating. "The growing popularity of these tools is putting a lot of pressure on educators and educational institutions to adapt and regulate quickly," said KPMG's C.J. James. "But the question is where to draw the line on cheating."

Beyond the uncertainty of the rules, the mechanisms for detecting cheating have also been questioned by students.

In the spring 2023 semester, William Quarterman, a student at the University of California, Davis, was accused of using ChatGPT to cheat. In response, Quarterman produced his edit history in Google Docs and challenged the school's detection software, GPTZero, which had labeled Martin Luther King Jr.'s "I Have a Dream" speech as the work of artificial intelligence. In this way, Quarterman proved his "innocence."

Stacey Vandervelde, director of the university's Office of Student Support and Judicial Affairs, subsequently said that the misuse of AI is a dynamic challenge for all universities, and that "it is very difficult to get the situation under control." "We're working on a new version of the AI detection tool, and at the same time, ChatGPT is moving to a new version." She added that besides Turnitin, the software her office commonly uses, teachers can choose other tools, but in practice these tools produce misjudgments and can serve only as a reference.

Walking with artificial intelligence: how to wield this "double-edged sword" well

Lu Yu, a member of the Overseas Chinese Sector of the All-China Youth Federation and a visiting professor at the School of Foreign Languages and Literature and the School of Information and Digitalization at the University of Paris II, believes that in the face of the "double-edged sword" of artificial intelligence, universities should place equal emphasis on active exploration and strict regulation.

Lu Yu said that whether the use of AI constitutes cheating depends on whether it promotes the development of students' independent learning and thinking skills, and whether it preserves the fairness of assignments and exams. "Using AI to assist learning, for example to improve the efficiency of language study, support homework research, or find and search materials, is acceptable so long as there is no plagiarism and it does not replace personal thinking," he said. "Educators should clearly communicate this boundary to students and guide them to use AI tools correctly."

As AI technology advances, methods of using AI to cheat will become increasingly covert. Universities should therefore actively explore strategies to "fight AI with AI," for example by developing advanced detection software that recognizes AI-generated text and assignments. At the same time, schools should strengthen penalties for academic misconduct to protect academic integrity and educational fairness.

Beyond cracking down on cheating, the spread of AI also places new demands on how schools evaluate and train students. In Lu's view, the education system should shift its focus toward evaluating students' thought processes and capacity for innovation rather than a single output: "Colleges and universities can use flipped classrooms, oral defenses, practical projects, and team tasks to assess students' overall abilities, thereby reducing both the feasibility and the appeal of cheating with AI."

The opportunities presented by AI should not be overlooked either. "Universities should explore new ways to use AI in student recruitment and selection, analyzing application materials for the qualities that best match their aims, such as creativity and problem-solving skills, so as to quickly identify suitable candidates and help them go further in their career planning and academic research," Lu said. He added that schools can make full use of AI-assisted teaching, using AI to provide students with customized learning resources and guidance, to innovate teaching methods and content, and to give students a more personalized and efficient learning experience.

For students living amid the AI wave, learning to make AI a "mentor and friend" is also an indispensable skill. Lu Yu suggested: "Students should make reasonable use of AI tools to assist their learning, for example language-learning and practice applications, to improve efficiency and quality. At the same time, they must maintain the ability to think independently and analyze critically, using AI to optimize, rather than replace, the results of their own efforts."

"As educators, we need to constantly adapt to technological change, guide students to understand and use AI correctly, and promote its healthy development within the framework of educational equity and academic integrity," Lu Yu said.

Source: Yangcheng Evening News
