Ella Stapleton noticed in February that the lecture notes for her organizational behavior class at Northeastern University appeared to have been generated by ChatGPT. Midway through the document was an instruction to “expand on all areas. Be more detailed and specific,” apparently a prompt directed at the AI chatbot.
Stapleton looked at other course materials from that class, including slide presentations, and detected AI use in the form of photos of people with extra limbs and misspelled text. She was taken aback, especially because the course syllabus distributed by her professor, Rick Arrowood, prohibited students from using AI.
“He’s telling us not to use it and then he’s using it himself,” Stapleton told The New York Times in a report published on Wednesday.
Stapleton filed a formal complaint with Northeastern’s business school, asking for a refund of her tuition for the course, which totaled more than $8,000.
Northeastern denied Stapleton’s request this month, the day after she graduated from the university.
Arrowood, an adjunct professor who has taught at various colleges for more than fifteen years, admitted to The New York Times that he had run his class files and documents through ChatGPT to refine them. He said the experience has made him approach AI more cautiously and disclose to students outright when he uses it.
Stapleton’s situation highlights the growing use of AI in higher education. A 2023 survey by the consulting group Tyton Partners found that 22% of higher-education instructors said they frequently used generative AI. When the survey was repeated in 2024, that figure had nearly doubled, approaching 40% of instructors.
AI use is becoming more prevalent among students, too. OpenAI released a study in February showing that more than one-third of young adults in the U.S. ages 18 to 24 use ChatGPT, with 25% of their messages tied to learning and schoolwork. The top two use cases of ChatGPT among this demographic were tutoring and writing help.
Tyton’s 2024 survey found that faculty who use AI are tapping into the technology to create in-class activities, write syllabi, generate rubrics for grading student work, and churn out quizzes and tests.
Meanwhile, the study found that students are using AI to help answer homework questions, assist with writing assignments, and take lecture notes.
In response to student AI use, colleges have adapted and released guidelines for using ChatGPT and other generative AI. For example, Harvard University advises students to protect confidential data, such as non-public research, when using AI chatbots and ensure that AI-generated content is free from inaccuracies or hallucinations. NYU’s policy mandates that students receive instructor approval before using ChatGPT.
Universities are also using software to uncover AI use in written materials, like essays. However, New York Magazine reported earlier this month that college students are getting around AI detectors by sprinkling typos into their ChatGPT-written papers.
The trend of using AI in college could also lead to less critical thinking. Researchers at Microsoft and Carnegie Mellon University published a study earlier this year finding that people who used AI and were confident in its abilities engaged in less critical thinking.
“Used improperly, technologies can and do result in the deterioration of cognitive faculties that ought to be preserved,” the researchers wrote.