By Dom DiLuzio, Staff Columnist
Since November, academia has been irreversibly changed by the release of ChatGPT. Freely available to anyone, without a paywall or other barrier, the chatbot has sparked a firestorm over its implications for the present and the future. The prevailing view is that use of ChatGPT should be lumped in with other forms of academic dishonesty and treated as such. That approach is not sustainable.
Mainstays of the academic system today—calculators, computers and Google—were all controversial when first introduced, yet all have revolutionized research, education and learning. Artificial intelligence (AI) like ChatGPT should be no different: a tool that dramatically expands the capabilities of students and educators alike.
The prevailing opinion holds that bots like ChatGPT will be used by students to write papers, complete assignments and otherwise hand off the responsibilities of education to a system that can effortlessly generate work at the same quality a student could produce it. There are two problems with this interpretation. First, in its current form, ChatGPT cannot produce subjective writing, of any length, comparable to that of a human student. Is it capable of producing writing on the prescribed subject and at the dictated length? Sure. Does it capture the language, tone and style of an average student with sufficient accuracy? Absolutely not. AI, at its current stage, cannot produce a satisfactory substitute for student-completed work.
Second, if an assignment can be effectively completed by AI software, then the difficulty and necessity of that assignment must be reconsidered. Assignments meant to gauge a student’s proficiency in a subject (as opposed to those designed to reinforce it) should not be simple or general enough for AI to complete. Should AI be used dishonestly, professors must ask why a student felt unable to complete the assignment on their own, and whether the assignment was sufficiently challenging to begin with. For both of these reasons, AI cannot currently be considered a threat to academic honesty; it is instead a tool that can further the abilities of students and educators.
Institutionally, we need not only to accept the place of AI in academia but to foster and explore its use. Because the rise of ChatGPT has been so rapid, the lack of institutional response is understandable. Moving forward, Gettysburg College should implement the following changes:
- Educate faculty and staff on how to implement AI in the classroom. Chatbots like ChatGPT can revolutionize not only the educational experience but also the process of preparing for it.
- Create a class on the development, usage and future of AI. Because AI transcends industries and concentrations, it cannot be confined to a computer science course. It is essential that students of all concentrations, as future members of a revolutionized workforce, understand how to use and implement the most current technology in their respective fields.
- Clearly define the relationship between AI usage and academic dishonesty and plagiarism. While AI will be beneficial overall, it, like any other tool, can and will be exploited. Students need to understand these boundaries to use AI responsibly, and the Honor Commission and faculty must be clear on what constitutes improper usage.
The development of AI is destined to be one of the largest technological advancements of the 21st century. As an institution that hopes to be on the cutting edge of new technology and information, the College must act quickly to help its students capitalize on the technology and knowledge that will revolutionize the workforce and the world in the coming years.