How UTK can adapt to artificial intelligence with new policy

Photo taken from Pexels

The University of Tennessee Knoxville's approach to Artificial Intelligence (AI) has become a topic of discussion among students and faculty, with calls for increased transparency and accountability in the use of AI on campus.

Although AI policy is the topic of this story, I didn’t write that last sentence. The aforementioned dry sentence was masterfully crafted by UTK’s AI chatbot, UT Verse, when I asked it to write an introduction to this article.

As AI technology progresses, students will use it and potentially abuse it. In turn, educators and universities are responding with policies on AI use as well as educational resources such as UT Verse. Serving a role similar to the surging ChatGPT, UT Verse is available to anyone with a UTK email.

While UTK provides an easy-to-use AI resource, the university has not created any broad policy on AI use. Instead, UTK defers AI policy to the syllabus of each individual class. This deferral is not an empty gesture, as the university also provides a few suggested policies for syllabi.

Catherine Schuman, a professor in UTK's Department of Electrical Engineering and Computer Science, has adopted her own policy for AI usage. The course she is teaching this semester, Introduction to Machine Learning (a subset of AI), has a moderate-use policy.

Under this policy, Schuman allows the use of tools like ChatGPT so long as students provide a metaphorical paper trail. This involves acknowledging the use of AI, including the prompts they used and providing the AI-generated responses.

“I want to see the practice of how they were using it... But also, I think it's useful for them to be able to see it to see sometimes it generates nonsense, and you want to be able to know whether or not you should be trusting it for what it's generating,” Schuman said. 

Given that Schuman’s class focuses on machine learning, her students will develop a greater understanding of AI before they turn to it as a tool.

This serves as an example of how educators can proactively plan for AI use, yet other classes may have different needs.

But what if a syllabus doesn’t have an AI policy?

If that is the case, then any rules would have to defer to some notion of plagiarism. However, it is contentious to claim that the use of AI is plagiarism at all. After all, exactly who is being plagiarized by AI?

In the plagiarism section of the Student Code of Conduct, you will find a strong implication that sources should be credited to avoid plagiarism. This was a non-issue before the advent of programs that pull from sources you couldn’t trace even if you wanted to.

In the past, each footnote of a research paper could lead you to a web of sources, but AI can lead students into a whirlpool of untraceable, credible-sounding information.

Complications with sources are a major problem when AI is used in humanities courses; however, other problems arise in STEM. Even though there is evidence that ChatGPT struggles with the kinds of math questions that could be given to students, AI usage still poses an issue for STEM courses.

More specifically, AI goes into stealth mode when questions call only for a mathematical answer or a few lines of code. Given how often homework is impossible to trace back, these answers will live on undetected.

AI may currently struggle with some mathematical problems, and it’s difficult to find a straight answer on what chatbots such as ChatGPT excel at, but their mathematical abilities may improve with time. AI will likely keep struggling with its fundamental flaws in writing and research, while the barriers around technical questions are the ones most likely to be shattered.

However, something can be done to solve every issue discussed in this article.

A possible solution: department-wide AI policies

It’s difficult to create sweeping policies at the university-wide level. As AI grows more complex, its applications will be drastically different between academic fields and majors.

The status quo of a unique policy for each class, with three template options for professors’ syllabi, is a good first step, but it fails to take into account complexities like using AI for translation or brainstorming.

Professors within similar departments may have related concerns, such as citations for political science or image generation for art. 

To help university departments craft AI policies, faculty can look through the AI policies in other courses’ syllabi.

In my field of journalism, AI could be allowed as a jumping-off point for research, for transcribing interviews and for help with grammar. (The highly flawed Grammarly does make heavy use of AI, but hopefully it won’t be flawed one day.)

In other areas of writing articles, AI should be discouraged. It might not earn the best grade, as shown by the first sentence of this story, and it certainly won’t teach prospective journalists.

AI may present unique challenges and opportunities for education, but UTK can make strides in policy to move forward alongside AI.