I admit in advance, this post may seem a bit more sci-fi than academic. But it’s easy either to get freaked out by AI or to dismiss it altogether. Most people still view it as a slightly smarter Siri or a hit-or-miss “Hey Google.” Some find it helpful at times, yet when it isn’t, it’s just frustrating enough that you eventually do the task yourself and question what is so ‘intelligent’ about it anyway. For students it can be a tool for learning, or a road to academic dishonesty. For faculty, it can be the newest thing that has to be used, or that has to be banned altogether. And this is a technology at the worst we’ll ever see it, at its least ‘intelligent.’ But what about the AI we don’t have yet, the one that experts believe will upend humanity as we know it?
For some of us geeks, AI is just another tool in the digital toolbox, now being referred to as “Tool-AI.” But there’s a bigger, and for some more ominous, concept: AGI, or Artificial General Intelligence. Unlike today’s AI, which simulates human output (think ChatGPT writing a passable email), AGI refers to something much more profound: a system with actual digital cognition (I won’t call it sentience), one that doesn’t just mimic intelligence but has it, and, much like any human learner, seeks agency and autonomy.
AI: Helpful or Hurtful Future?
I came across a video this week that feels less like clickbait doom-scrolling and more like a well-researched warning shot. It’s called We’re Not Ready For Superintelligence, by the channel AI in Context. It walks through a speculative yet plausible scenario laid out in a paper called “AI 2027,” written by Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, and Romeo Dean.
The core idea? That we are only a “few years away” from AI surpassing human intelligence, and when (not if) that happens, things could quickly begin to collapse like a house of cards for the human race. The video presents this not as some cinematic apocalypse; it reminds me of reading those Choose Your Own Adventure books as a kid, where you had to read and choose where the story went next. The plot points of this adventure…one path ends with AI quietly killing us all; the other with AI quietly hollowing out what it means to be human as we passively hand over our humanity ourselves. Of course, just reading that, both paths sound like something out of an Isaac Asimov or even a Simon Stålenhag book.
Science Fiction or Fact?

The video drives home that there are two stark forks in the road. In one scenario, a few powerful actors capture and control superintelligent AI, reshaping the world to their liking (and making immense wealth doing so). In the other, AI accelerates so fast that we collectively lose control in Cold War or Skynet fashion, with no kill switch, no oversight, just cascading systems outpacing human governance.
“The way the world is made. The truth is all around you, plain to behold. The night is dark and full of terrors, the day bright and beautiful and full of hope. One is black, the other white. There is ice and there is fire…”
― George R.R. Martin, A Storm of Swords
Arguably there is a third fork in that road, where the immense wealth is shared across the world and a future more like Star Trek unfolds…if you are an optimistic thinker. In any case, it should give anyone pause. For me, the worry isn’t so much the destruction of humankind; I would like to hope we all have enough self-preservation to prevent that. It is the power of this technology resting in the hands of a few non-transparent, unelected people who could have global environmental, economic, and tactical impacts on nations.
That said, the video also offers a brief counterbalance. It acknowledges that some assumptions in the AI 2027 paper may be oversimplified or not scalable in the real world (yet), like the process of “bringing AI into alignment” with human values, which is vastly more complex than the scenario implies, or the idea that humans wouldn’t protest or pressure governments if jobs began to evaporate en masse due to AI automation and cost savings.
What this means for Higher Education

If superintelligence is even plausible within the next few years, then higher education cannot afford to sit back and wait, offering little more than a syllabus statement on whether a faculty member is for or against AI.
We need to move beyond just talking about AI-generated essays or ChatGPT in the classroom, and instead talk about the core purpose of education: preparing humans for a future they can still shape. Institutions need to rethink how we teach critical thinking, ethics, and technological literacy, and what it even means to “learn” in an artificial-intelligence era.
“Education is not preparation for life; education is life itself.”
― John Dewey
Faculty need support to make sense of what’s coming, not just new tools, but new frameworks. Meanwhile, students need more than technical skills; they need to see the value in the friction that comes with learning instead of deferring to what they are starting to see as easier and “better than what they can do.”
Higher ed has a choice to make, not just in policy or pedagogy…but in purpose.
“We will learn no matter what! Learning is as natural as rest or play. With or without books, inspiring trainers or classrooms, we will manage to learn. Educators can, however, make a difference in what people learn and how well they learn it. If we know why we are learning and if the reason fits our needs as we perceive them, we will learn quickly and deeply.”
― Malcolm Knowles