USC community reacts to new guidelines surrounding artificial intelligence in the classroom

Students and faculty comment on the implementation of AI, specifically ChatGPT, in this semester’s syllabi.


As the new school year begins, USC students have already seen a notable change to their syllabi, which have been updated to include new artificial intelligence (AI) guidelines.

Over the past year, AI technology has improved significantly, making it increasingly capable of handling academic work.

This past March, OpenAI released GPT-4, its most advanced model to date, prompting USC to issue updated syllabus guidelines to professors for the fall semester. The update follows instructor guidelines released that same month by the University’s Academic Senate.

In most cases, the use of AI is considered a violation of USC’s academic integrity policy. The university permits AI for tasks such as fine-tuning prose and checking grammar, but the software cannot be used to write entire sentences or paragraphs.

However, policies vary by course, ranging from a complete ban on AI to free use of the software throughout the semester. That is because each professor decides whether students may use AI for assignments and research, and students who do so must properly cite their use of the technology.

Computer science professor Jon May told Annenberg Media that he allows his students to use AI software as long as they share a transcript of the prompts they used and the responses they received.

“I’ve made this policy and I have already had students submit work where they were in full knowledge of the policy,” May said. “Out of 15 or so students who I was working with already, two took me up on the offer to use it.”

May, whose coursework and research involve computational linguistics, said the students who did use assistive software seemed forthcoming about it and were using it appropriately. He said he wants students to use the technology to their advantage, though he makes sure to address ethical boundaries.

“People who don’t have English as their first language, and they need help with fixing up the grammar, that’s a perfectly reasonable way to use things,” May said. “What I don’t want is for a student to say to a [chatbot], ‘I have to write a report on this. Please write a report.’”

Computer science and psychology professor Jonathan Gratch also encourages the use of AI in his classroom, though he does not require his students to cite their use of the software.

“I haven’t really thought about, ‘Should I require them to disclose to what extent they use technology?’” Gratch said. “From a philosophical perspective, I think I want them to use it as a tool, because I think it actually helps them do more.”

Sophomore psychology major Fiona Collins said she understands both sides of the AI debate, especially when it comes to her coding class.

“[Professors] say that it’s really helpful for more intense coding classes but for intro to Python, you’re not supposed to use it,” Collins said. “I think my coding teacher is kind of for ChatGPT, because people coded that. She’s not against it, she’s just against it for learning the fundamentals.”

With the rising popularity of ChatGPT and other AI tools, teachers and students alike must learn how to adapt to the technology and use it ethically.

“I feel like if you don’t use ChatGPT, you’re going to fall behind,” Collins said. “The people who really know how to use it well are at an advantage.”