Opinion: Progress for Whom?

By Omer Shamil, Opinions Editor 

Artificial Intelligence (AI) may become one of the defining technologies of our age, but history warns us that when societies celebrate innovation before they confront power, the costs of progress are almost always paid by someone else. The question AI places before us now is not simply what it can do, but who will control it, who will benefit from it, and who will be left behind. It has entered classrooms, workplaces, and public life with the language of inevitability, as though speed alone were proof of virtue. But technologies do not arrive with moral meaning attached to them. They are shaped by the people who build them, the institutions that govern them, and the communities made to live with their consequences. If AI is to become one of the defining technologies of our age, then it must be met not with blind awe, but with the harder questions history always asks too late: who gains, who loses, and who is asked to bear the cost?

If the rise of artificial intelligence feels disorienting, it is because we have seen this kind of arrival before. The Industrial Revolution, too, was introduced in the language of advancement, efficiency, and a better future—and in many ways, it did transform the world. But its progress was never innocent. It left behind polluted cities, brutal labor conditions, and entire classes of people expected to absorb the cost of a future they did not design. The internet followed a similar path: celebrated first as liberation, then slowly understood as a force that could also deepen surveillance, dependency, and concentrated power. AI belongs to that same historical tradition of invention: genuinely transformative, undeniably powerful, and dangerous precisely because society is so eager to admire it before it has learned how to question it.

One of the great illusions surrounding AI is that it feels weightless, as though what happens on a screen happens nowhere else. But the digital world still drinks from rivers and feeds on grids. AI is sustained by sprawling data centers, immense electricity demand, and water-intensive cooling systems, all of which give this so-called future a very physical cost. The International Energy Agency has projected that electricity demand from data centers will rise dramatically in the coming years, driven in large part by AI, while the U.S. Government Accountability Office has warned that generative AI carries significant energy and water burdens that companies often do not fully disclose. And these costs are not only environmental. AI is already reshaping the terms of work, especially for young people trying to enter professional life through entry-level jobs. The promise is always that innovation will create new opportunities, but that promise rings hollow when the first jobs to disappear are the very ones that teach people how to begin. Every generation is told to adapt, but that becomes a cruel command when the ladder is being pulled up at the same time. 

AI’s danger does not end with the environment or the labor market. It also reaches inward, into our habits of mind and the structure of public life. A campus should be especially alert to that. What becomes of education when students grow accustomed to outsourcing the first draft of a thought? What becomes of human connection when simulated companionship begins to feel easier than the difficult, necessary work of being known by another person? And beyond that inward dependence lies a larger civic problem: AI increasingly asks for our trust while revealing very little of itself. We hand over our words, our preferences, our questions, sometimes even our vulnerabilities, to systems we cannot meaningfully inspect, regulate, or fully understand. Once a technology acquires that much power, it never remains confined to convenience; it moves toward surveillance, toward warfare, toward profit, toward the hands of those already positioned to benefit most from its use. That is why the central question has never really been whether AI can do extraordinary things. It is whether its power will be publicly governed or privately hoarded, whether its benefits will be shared or concentrated, and whether we will keep mistaking reaction for regulation while the technology races ahead of our moral and political imagination.  

So how are we supposed to feel about AI? Perhaps there is no single correct opinion, no pure position untouched by contradiction. Technologies this large rarely allow for that kind of simplicity. But there is a correct response, and it is not surrender disguised as sophistication. It is dialogue: serious, public, and morally grounded dialogue between those building these systems, those regulating them, those displaced by them, and those who will be asked to live in the world they create. If AI is already reshaping work, straining resources, unsettling privacy, and moving closer to machinery of war and concentrated power, then the least we owe one another is honest debate before inevitability removes consent. History does not ask us to fear every revolution. It asks us to think before praising one. And if artificial intelligence is truly to become part of our common future, then that future cannot be decided only by the people positioned to profit from it. 

Sources used:
https://www.iea.org/reports/energy-and-ai/executive-summary 

https://www.weforum.org/publications/the-future-of-jobs-report-2025/digest/ 

https://www.gao.gov/products/gao-25-107172 

This article originally appeared on pages 12-13 of the April 2026 edition of The Gettysburgian magazine.
