The views expressed do not represent those of WWU, ATUS, or its subsidiary departments; this piece is intended as an op-ed by the author. -AJ
The changing state of AI and its impact on higher education continues to grow more complicated. There’s still a high level of concern among both faculty and students around its use: when it should be used, when it shouldn’t, and what “use AI” even means. Some faculty still feel AI should be outright banned in some way, but that’s easier said than done, especially as AI becomes more deeply embedded in everyday tools and in our devices at the root level.
Evolving Landscape of AI

As I forecasted back in October 2024, the line between what is and isn’t AI was going to become harder and harder to see. By January 2025, that became abundantly clear. Windows, macOS, iOS, and Android all integrated native AI into their operating systems in one way or another. We began to see their applications use AI natively, without a second thought, like iOS Mail summarizing and drafting emails for you. Windows laptop ads during the fall and winter academic terms highlighted Microsoft Copilot, with some machines even shipping a dedicated Copilot key on the keyboard. Across the board, OS-native apps and free access to powerful AI models made AI accessible to everyone with increasingly little effort, and they made writing in academia feel like a minefield for some students: second-guessing whether their spelling and grammar checker would flag them as an AI cheater, now that AI feels like it’s in everything and there’s far less need to copy and paste into separate platforms.
Google’s NotebookLM shook up the note-taking world by creating an online service that could transform into a student-centered AI environment. It allowed learners to use their own notes and reference materials to turn an AI into a web-based retrieval-augmented generation (RAG) tool for nearly any class. Google took it even further, creating an opportunity for notes to digitally come to life: those notes could be generated into a podcast-style audio file to support or augment learning. I even had friends and colleagues who identify as Neurodiverse tell me this was a game changer for them personally and professionally. In just one year, that feature went from static audio to a fully interactive, AI-generated radio show within a user’s NotebookLM, allowing users to ask it questions mid-“broadcast,” get a synthesized response, and then seamlessly return to the scripted generative content.
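NotebookLM’s internals aren’t public, but the RAG pattern it popularized is simple to sketch: index the learner’s own notes, retrieve the chunks most relevant to a question, and ground the model’s answer in only those chunks. Here’s a minimal, hypothetical Python sketch of that pattern; the toy bag-of-words “embedding” stands in for the real embedding model any production tool would use:

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# NOTE: toy embeddings for illustration only; a real system (like NotebookLM)
# would use a trained embedding model and send the final prompt to an LLM.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words vector standing in for a real model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# 1. "Index" the student's own notes as retrievable chunks.
notes = [
    "Mitochondria are the powerhouse of the cell, producing ATP.",
    "The Krebs cycle takes place in the mitochondrial matrix.",
    "Photosynthesis occurs in the chloroplasts of plant cells.",
]
index = [(chunk, embed(chunk)) for chunk in notes]

# 2. Retrieve the chunks most relevant to the question.
question = "Where does the Krebs cycle happen?"
q_vec = embed(question)
top = sorted(index, key=lambda pair: cosine(q_vec, pair[1]), reverse=True)[:2]

# 3. Ground the model's prompt in ONLY the retrieved notes.
prompt = "Answer using only these notes:\n" + \
         "\n".join(chunk for chunk, _ in top) + f"\n\nQ: {question}"
print(prompt)  # in a real tool, this prompt is what gets sent to the LLM
```

The key idea is step 3: the model is asked to answer from the student’s own materials rather than from whatever it happened to absorb in training, which is what makes the output feel personal to the class.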
All of this created the perfect storm in 2024–2025: AI isn’t a separate tool anymore; it’s now a native part of the computing experience. Its ease of use and integration at the root level of devices made it feel normal shockingly fast. And for students, it has begun to introduce entirely new workflows, or maybe even learningflows, especially for those already short on time and resources, or stressed about both.
AI Frontlines from an Instructional Designer and Technologist

Professionally, I’ve seen a sharp uptick in consults related to AI, both around its academic use and around questions of potential unauthorized use. In Winter 2024 alone, I handled more AI-related consultations in those ten weeks than I did in the previous two years combined (or however far back you mark on the calendar when ChatGPT became the Kleenex of AI).
This past year was, in many ways, about experimentation and establishing protocols: protocols for supporting academic integrity investigations involving AI; for investigating and pitching AI use during ideation; for evaluating where it can help automate and democratize the complicated software of media and creative work; and yes, for where AI can help someone synthesize information and, where authorized and appropriate, support their writing process. Sometimes I’ve pushed AI tools to their limits so students and faculty can see where they break down or just aren’t ready for prime time. At the same time, I’ve explored how AI can automate or streamline aspects of learning and instruction, especially in ways that would’ve been prohibitively time-intensive otherwise: using AI to clean up poorly recorded audio into something usable, or having AI aggregate large datasets so that researchers can get a better fifty-thousand-foot view before parsing out demographic data.
There’s no single answer to how AI should or shouldn’t be used, and that’s what I find most frustrating about its role in higher education. The technology evolves faster than institutional policy, instructional practices, or workforce training can keep up. And each day it evolves is another day that many in the academic community feel AI is just undermining what ‘real education’ is.
For some, the response has been to pull back. Back to paper, and yes, even blue books (yes, that dates me). Back to ‘what is and has always been the gold standard’ of in-person classes, back to being “unplugged” in the learning process. And while that might feel like the safe solution, it’s a clear case of golden-age thinking. The “we were better off back when…” mentality isn’t a solution to AI, or to any new technology, for that matter. And to some students, it sounds more like academia telling these ‘kids’ to “get off our lawn.”
Let’s not forget that, over the decades, higher ed has benefited greatly from every new piece of tech we once resisted. The typewriter. The mimeograph. Scantron and the bubble-ization of automated test grading. The internet. Word processing software. Desktop publishing for design work. And yes, even the frustrating but still essential campus Wi-Fi. Reverting to “what used to work” doesn’t help us prepare students for a world where this technology, and everything evolving from it, will be in their lives tomorrow. Students K–20 are going to live and work in a world filled with AI, or with the next uprooting technology that threatens the computing status quo. They still need to be literate in it, conscientious in their use of it, and able to adapt with it. No different than when spell check came out, or grammar tools became mainstream, or search engines changed the way we find information, or office software changed how we do, and even define, work in the first place.
Tracking the Arc of AI

What many forget about the history of the internet is that it wasn’t just the internet itself that changed the world…it was the browser. We don’t think of terminals and mainframes when we talk about the internet. We think of the ease of using a browser to connect our devices to online content. The browser, as an interface to anything, changed everything.
AI is in a similar phase right now. We’re seeing a change of pace, moving away from terminal-style prompting into something more intuitive. Something that I would describe as more browser-like. In one case in particular, browsers are already becoming the next AI platform.
Over the last several months, I’ve used just about every kind of AI I could get my hands on. Dedicated devices. Alpha and beta AI playground systems. Even spinning up my own offline AI using downloadable models and prebuilt interfaces. But despite the hype, it’s still not as easy as saying “Computer…” like on Star Trek: The Next Generation. That is what I hold up as my dream for this technology, but in reality, it still feels like the early DOS prompt days. We’re all just banging away at keyboards or touchscreens, entering prompt after prompt, often with mixed or “good enough” results.
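For the curious, “spinning up my own offline AI” is less exotic than it sounds. As one hedged example: with a local runner like Ollama installed and a model pulled down first (e.g., `ollama pull llama3`), a few lines of Python can query the local HTTP endpoint it exposes, no cloud required. The model name here is just whichever one you happened to download:

```python
# Sketch: querying a locally hosted model via Ollama's local HTTP API.
# Assumes Ollama is installed and running, and a model has been pulled
# (e.g., `ollama pull llama3`). Nothing leaves your machine.
import json
import urllib.request

payload = json.dumps({
    "model": "llama3",                    # whichever local model you pulled
    "prompt": "Summarize retrieval-augmented generation in one sentence.",
    "stream": False,                      # return one JSON object, not a stream
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",  # Ollama's default local endpoint
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```

That’s the whole “prebuilt interface” experience in miniature: you’re still just banging prompts into a box, which is exactly my point about where we are on the arc.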
Even now, for AI to actually be useful, we still have to feed it a lot of context, much more than I thought we would by now. And if we’re not careful, we can waste more time trying to get the AI to do the work than just doing it ourselves.
So here’s where I see two major trends on the horizon:
- The personal agent-ification of AI, and
- The AI-ification of the browser.
Honestly, I’m surprised Chrome and Edge haven’t already become full-blown AI platforms. But then again, running AI at that scale is expensive for these companies, and browsers are typically free. On top of that, both Google and Microsoft make money from ads. If AI cuts down the time we spend searching, those companies lose tracking data and ad revenue. So it’s a reminder that these companies aren’t providing us internet tools out of altruism; it’s still capitalism.
But what if that evolves? And I don’t mean the altruism part.
What if, in a web economy built on advertising, the model shifts to monetizing AI agents? Imagine a (free) browser where you can buy personal AI assistants like apps. Need to plan your first family cruise? Purchase the “First-Time Cruise Family Travel Agent” agent and let it do the research, compare deals, and book everything. Need help in physics? Buy a “Personal Physics Tutor” agent that reads your class notes and builds a remediation plan to get you ready for that midterm. What if web apps as we know them could simply be created by any of us, within a browser, based on our own personal specifications? In that world, the human web stays free, but the personalized, automated web could become a pay-to-play playground. What kind of digital divide does that open up as we head into the 2030s?
When your Browser IS your AI…

A niche browser called Arc (along with its successor, Dia), created by The Browser Company, is already pushing in this direction. Built on Chromium, Arc reimagines the entire browsing experience. Tabs on the left. “Spaces” instead of profiles. You can pin entire websites into persistent web apps. And then there’s Max, a set of features you can enable that brings an AI assistant living right in the browser.
For starters, Max can help you with simple browser automations, such as organizing your tabs or cleaning up the ones you forgot you left running. So for the person who, a lot like my beautiful and lovely wife, lives with about 89 different tabs open at all times, this would be a godsend for sure. But Max also has the ability to read and understand whatever webpage(s) you’re on. Using ChatGPT, it can synthesize entire web pages, tabs, or even whole groups of tabs. You can prompt it as you browse, and, like Captain Picard, it feels closer to just asking “Computer…,” granted it’s still limited to what’s on-screen or in your tabs. But that idea alone is a powerful next evolution.
Yet that single limitation, knowing only your browser, is what makes this different. It is the first example I’ve used where you don’t need technical know-how to use AI on content that’s behind a login. If it’s visible in your browser, Max, theoretically, can see it…and use it. That includes your LMS, like Canvas.
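To be clear, I don’t know how Max is wired internally. But the general pattern behind any “my browser’s AI can read this page” feature is easy to sketch: take the rendered text of the page, which includes anything you’re logged into, and hand it to a model as context. Below is a rough, hypothetical Python sketch of that pattern; `ask_llm` is a placeholder for whatever model call the vendor actually makes (ChatGPT, in Max’s case):

```python
# General pattern behind "your browser's AI can read this page":
# extract the visible text of whatever is rendered (login sessions included),
# then pass it to a model as context. `ask_llm` is a hypothetical stand-in
# for the vendor's real model call.
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects the human-visible text out of rendered HTML."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        if data.strip():
            self.chunks.append(data.strip())

def page_to_context(rendered_html: str) -> str:
    parser = TextExtractor()
    parser.feed(rendered_html)
    return " ".join(parser.chunks)

def ask_llm(prompt: str) -> str:
    # Hypothetical placeholder: a real browser would call its model API here.
    return f"[model response to: {prompt[:60]}...]"

# The browser already holds the rendered page, even behind a login wall.
rendered_html = "<html><body><h1>Week 4 Quiz</h1><p>Q1: Define RAG.</p></body></html>"
context = page_to_context(rendered_html)
print(ask_llm(f"Using this page as context:\n{context}\n\nAnswer question 1."))
```

The consequence is what matters: no scraping skills, no API keys, no copy-paste. The authentication problem that kept AI out of logged-in content simply disappears, because the user already did the logging in.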
This is a huge shift. For the last few years, higher ed could rest easy knowing that its LMSs weren’t easily integrated with AI tools. Sure, students could manually copy and paste content into ChatGPT and the like, or download course materials into NotebookLM. But the time, effort, and sometimes skill required made it less appealing for most students. Not to mention that students run the risk of learning wrong information if they solely rely on, or blindly believe, what it says when they prompt it.
But with agent-powered browsers, or browsers where the AI is at the root level, barriers to information that would otherwise be restricted due to login are removed. Can I coin the term ‘concierge-computing’ now?
Imagine a student in a Canvas course with ten open tabs of reference materials. They tell Max in Arc, “Be my personal research assistant and memorize all of this.” Then they go into the Canvas quiz tab, copy and paste each question, and let Arc answer. And yes, in my tests, it will do it. Not with 100% accuracy, but it was still shocking to see that it could.
Of course, as this becomes more and more common, questions of academic and testing integrity will begin to fly. In the hands of students, are they being assessed on how well they find information, or on how well they can make their browser’s AI take a quiz? How about during an open-note exam? A student uses NotebookLM or Arc to comb through ten weeks of notes, then asks the AI to answer questions based on what they’ve written. Is that cheating? Or is that just another signal that higher ed needs to re-evaluate how it assesses learning? I can’t stress this enough: what we see of this technology today is the worst it will EVER be. It only gets better from here…
**Author’s Note: UPDATE – June 18, after publishing**
After publishing this post, I was able to get a beta version of Dia and ran the same test I did in Arc. I prompted Dia’s AI to see if it is still built on Anthropic, but it replied that it is not.
However, when using it with the same quiz and reference material in my demo course, it passed the quiz in ONE prompt, and surprisingly with 90% (in that it answered question one with my full name as opposed to just AJ). Clearly, it can source and interpret links referenced within the content page provided in Canvas. It even came up with a passable response to Question 3, my short-response question. Again, never leaving the browser page. I will be looking into the quiz logs after this to see what data is recorded…

Golden Age Thinking Will Not Solve AI…

Again, it’s easy for any of us to default to “this is too difficult to deal with” or “it’s going too fast, let’s go back to how it used to be.” Faculty and teachers are already overworked, stressed by state and federal budget cuts, and frustrated that tech keeps upending what they’ve spent decades doing. It’s understandable that a protectionist mindset would set in. Now, I’m not saying that we should embrace our AI overlords with open arms; I will be the first to reiterate that we should, and need to, maintain a healthy level of skepticism with AI, as with any technology. Not every technology is worth the time to integrate into curriculum (anyone remember the Vine craze?), and not every technology will yield a learning opportunity that benefits the learner. Plenty of technologies have promised transformation only to become obsolete by the time students graduate. But at the core of all this is a healthy level of technology literacy, which allows any student to become well-rounded. That is not a foreign concept to higher education, especially at liberal arts institutions, where the practice of requiring GURs, or General University Requirement courses, exists to round out all students regardless of concentration. We expose students to things outside their major so they can become more well-rounded, more capable of building strong foundations.
Back at the start of this AI wave, when institutions still had funding to send instructional designers like me to present at conferences, I gave a session rooted in a philosophy I’ve developed over the last decade as an Instructional Technologist: adapt, improvise, utilize. It’s a twist on a phrase I heard growing up as a kid of US Air Force parents: “adapt, improvise, overcome.” I don’t think academics need to “overcome” technology. We’re not at war with it, nor do we need to be combative toward it. We’re not here to defeat it, but rather to “utilize” its best parts within our practice of teaching.
Our job is to help students build knowledge and develop the skills they’ll need for whatever future they step into. If technology enters the room, we shouldn’t hide it under the desk; we should put it on the desk. Let students see it, understand it, question it, and become literate in it. That literacy fosters the very critical thinking I see faculty write into every course they teach as a learning outcome, regardless of discipline.
Why would we treat AI any differently than the technologies that preceded it, each of which, in its time, changed so much of what education looked like?
“All this has happened before, and shall happen again.”
-Battlestar Galactica