By Omer Shamil, Opinions Editor
At Gettysburg, the conversation around artificial intelligence (AI) is no longer abstract. Students are already using AI in quiet, ordinary ways that rarely make it into official policy drafting: writing emails, testing ideas, clarifying assignments, sometimes pushing through late-night problem sets. It is in those ordinary moments that the line between help and dependence begins to blur.
But as that use becomes routine, something else has settled alongside it: uncertainty. What feels like responsible use in one classroom can raise questions in another, and for many students, the boundary between assistance and violation is not clearly defined but constantly shifting.
That tension surfaced clearly in a recent Student Senate session on AI, where students described a campus already shaped by these tools but not yet fully prepared for them. A significant portion of students reported actively using AI for learning: studying, brainstorming, generating practice questions, coding and proofreading. The technology is not on the horizon. It is already embedded in how students work.
What emerged just as strongly, however, was not enthusiasm or resistance, but confusion. Students pointed to unclear and inconsistent expectations across courses, a lack of concrete examples and a growing anxiety around potential Honor Code violations. If anything, the message was less about whether AI should be used and more about the absence of a shared understanding of how it should be used.
The problem, in other words, is not access. It is coherence. It is within this context that Josh Wagner, of the Innovation and Creativity Lab, frames the purpose of Gettysburg’s AI Lab. In his view, the issue is not simply that students are using AI, but that they are often doing so in isolation, without a common space to question, test or fully understand the tools they are engaging with.
“We saw a need to move the conversation beyond students sitting alone at a computer using ChatGPT to ‘help’ with homework,” Wagner said, describing the lab’s origin. Rather than treating AI as something to be quietly used or cautiously avoided, the lab aims to create a space where students can engage with it openly, collaboratively and critically. That distinction matters. The AI Lab is not being presented as a repository of tools, nor as an endorsement of unrestrained use. It is, instead, an attempt to reframe how students encounter AI altogether: less as a shortcut, more as a medium for inquiry.
Wagner situates this effort within the broader mission of the Innovation and Creativity Lab, which has long emphasized accessibility, experimentation, and interdisciplinary collaboration. The addition of the AI and Emerging Tech Hub expands that mission into a space that is rapidly reshaping both academic and professional environments. Part of that expansion is practical. Access to advanced AI tools is uneven, often determined by who can afford paid versions and who cannot. By providing shared resources, the lab seeks to reduce that gap and allow students across disciplines to engage with the same technologies on more equal footing.
But the deeper goal is cultural. Wagner emphasized that the most compelling uses of AI do not come from a single field, but from the intersection of many. A student analyzing data, a student generating visual ideas, and a student questioning ethical implications may all be using the same tool but arriving at entirely different conclusions about what it means. In that sense, the lab is not designed for a particular kind of student. It is designed for a particular kind of thinking: one that is adaptable, collaborative and comfortable with uncertainty.
That emphasis on uncertainty is not incidental. It points directly to the concerns raised in the Senate session, where questions of academic integrity and fairness remain central. Students identified cheating, unfair advantage, overreliance and the ethics of sourcing as ongoing worries, reflecting a broader unease about how AI fits into an academic system built on individual work and accountability.
Wagner does not sidestep those concerns. He places them at the center of the lab’s purpose. “I believe these questions are not side conversations—they are central to the work,” he said. Through workshops, discussions and course-integrated projects, the lab aims to create space for students to move beyond the most immediate question, whether they are allowed to use AI, and toward more difficult ones. “The goal is to help students move from simply asking ‘Can I use this?’ to asking ‘Should I? How? Why? What are the implications?’”
That shift, from permission to judgment, may be the most consequential part of the lab’s approach. It suggests that the challenge of AI is not only technical, but intellectual and ethical, requiring students to develop a sense of responsibility that cannot be outsourced to the tools themselves.
Still, a lab cannot resolve the broader uncertainty students described. It can model a way of engaging with AI, but it cannot, on its own, establish consistent expectations across classrooms or eliminate the ambiguity surrounding academic policies.
The Senate findings make clear that students are looking for something more stable: guidance that does not change from syllabus to syllabus, examples that feel concrete rather than abstract and a sense that good-faith use will not be mistaken for misconduct. Without that clarity, even thoughtful experimentation can feel like a risk. That risk is not hypothetical. Students expressed explicit concern about being accused of violating the Honor Code, even in situations where expectations had not been clearly defined. In that environment, uncertainty does not simply limit misuse; it can also discourage transparency, questions, and honest engagement.
At the same time, the conversation extends beyond the classroom. Some students raised concerns about career preparedness and the possibility that AI may reshape or replace entry-level roles. While those fears remain less immediate, they contribute to a broader sense that students are being asked to adapt to a changing landscape without a clear map.
Wagner’s response to that uncertainty is not to promise stability, but to emphasize adaptability. The skills he associates with the AI Lab—creative problem-solving, communication, risk-taking and the ability to work with evolving tools—are less about mastering a specific technology and more about navigating change itself.
In practical terms, the lab will host collaborative projects, workshops, guest speakers, and opportunities for students and faculty to share their work. Students may experiment with generative text and image tools, explore data analysis, prototype new ideas, or rethink creative and research processes through the lens of AI. The direction of that work, Wagner noted, will remain shaped by student interest.
Looking ahead, he envisions the AI Lab as a central space for interdisciplinary exploration, one that evolves alongside the technology it engages with and remains responsive to shifting needs on campus. Its success, he suggested, will depend not only on what students create there, but on whether they feel empowered to question, challenge and shape the tools they are using.
That vision is ambitious. It also raises a larger question.
Gettysburg is no longer deciding whether AI belongs on campus. It is already here in study sessions, in drafts, in the small decisions students make every day about how to approach their work. The question now is what kind of academic culture will form around it.
Will AI remain a private workaround, unevenly understood and quietly negotiated between students and faculty? Or can it become something more deliberate—a shared space of experimentation, reflection and clearly defined responsibility? The emergence of the AI Lab suggests that the institution is beginning to engage with that question. Whether it can answer it in a way that feels coherent across the campus is something students are still waiting to see.
This article originally appeared on pages 20-21 of the March 2026 edition of The Gettysburgian magazine.