Two owls sitting on an open book, one wearing a VR headset, the other not.

FA25 AI Trends in Education: Agentic Browsers and Defending the Friction of Learning

Back in June, I wrote up my observations on the state of AI and its implications through the end of the last academic year. Every fall, I like to take a bigger prospective look at what has developed in the arena of AI and forecast how it might impact education in the new academic year.

Last year, the trend I wrote about was the move from AI as standalone apps to deep integration at the operating system level of every major desktop and mobile computing platform. This made new computers, and even operating system upgrades for older machines, into a fresh sales pitch to students coming into the new school year. From Windows getting a dedicated AI key on some PCs to iOS having Siri hand off to ChatGPT, we now have AI at the root level of most, if not all, of our modern personal computing devices.

This year, the trend forecast I have for K-20 education is the rapid development, and use, of agentic browsers. “Agentic” describes a bot, or AI agent, that a user creates or integrates into an AI to deliver results or take actions from a specific viewpoint or personality, all on its own. Agentic browsers are AI agents with root access to everything a web browser holds, whether that is what is immediately on the screen, your search history, or your stored payment methods, and from there they can operate a site or perform a set of tasks on behalf of the user with minimal or no interaction once deployed. Agents have been used in AI products such as ChatGPT and others for some time now. For example, any time you use a prompt that says something like “from the perspective of a…” or “act like a…,” you are making the AI into an agent. Or, if you have used NotebookLM, you have had your notes take on the persona of a pair of podcast hosts. The difference now is that weaving this kind of control into a browser brings a whole new level of customized support and automation, and with it new upsides in productivity and potential downsides in academic integrity.
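For readers curious about what “agentic” means mechanically, the core of these tools is an observe-decide-act loop. Below is a minimal, purely illustrative Python sketch of that loop: a rule-based `decide()` stub stands in for the AI model, and a plain dictionary stands in for the live web page. A real agentic browser would instead read the page’s DOM, your history, and your stored credentials; every name and structure here is a hypothetical simplification, not any product’s actual API.

```python
# Illustrative sketch of the observe-decide-act loop behind agentic browsing.
# A rule-based stub stands in for the AI "brain," and a dict stands in for
# the live page; a real agentic browser would read the DOM and stored
# credentials instead.

def decide(goal, page):
    """Stub 'brain': pick the next action from what is visible on the page."""
    if goal in page.get("links", {}):
        return ("click", page["links"][goal])   # follow a link toward the goal
    if page.get("form"):
        return ("fill", {field: "autofilled" for field in page["form"]})
    return ("done", None)                       # nothing left to do

def run_agent(goal, page):
    """Loop: observe the page, decide an action, act, repeat until done."""
    actions = []
    for _ in range(10):                         # safety cap on steps
        verb, target = decide(goal, page)
        actions.append(verb)
        if verb == "done":
            break
        if verb == "click":
            page = target                       # "navigate" to the linked page
        elif verb == "fill":
            page = {**page, "form": []}         # form submitted, nothing to fill
    return actions

# Example: the agent clicks through to a checkout page and fills its form.
site = {"links": {"checkout": {"form": ["name", "card"]}}}
print(run_agent("checkout", site))  # → ['click', 'fill', 'done']
```

The point of the sketch is the shape of the loop, not the stub logic: once a model can see the page and emit actions, everything from filling a form to completing a quiz is just more iterations of the same cycle.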

The Potential Upside

Papers and notes flying out of a laptop screen on a desk

For starters, I want to note the amazing potential agentic browsers can bring to users in terms of automation. Being able to have a browser know your credentials for your favorite sites, see what you see, and then process your requests or automate your actions is a huge time saver. In terms of academia, imagine having a journal article open and holding a conversation with the material as a digital study buddy. These browsers could also become a new tool in the realm of digital accessibility, helping not only those already using assistive technologies but also opening new avenues for them to complete tasks on the web more easily.

I have heard from friends, colleagues, and other professionals globally who use AI and are turning to agentic browsers to perform better in their professional lives, since most of their work already lives within a browser or web services. There are already businesses purchasing AI models for their employees, and reports are beginning to show a trend of increased productivity or output from a worker’s time. However, this starts the conversation about the negative side effects of AI and agentic use in the workforce in general, namely the potential replacement of intern or entry-level positions [Stanford University Digital Economy Lab study (August 2025)]. Some early studies are beginning to record how agentic AI, given its relatively lower cost compared to an employee, has impacted, and may continue to impact, internships and entry-level job prospects. At the same time, greater productivity may bring a rise in doing more with less in mid- and upper-level positions. This is especially true in roles where AI can automate tasks that would otherwise be delegated to early-career employees. For higher education, this means some graduates or soon-to-be graduates may be concerned about what their entry-level job options might look like, with potentially fewer options because of AI.

Agentic Academia

As I have already written about, and talked about on the mic with Chris Powell of the Canvas Insider Podcast, agentic browsing is a particularly tricky subject for academic integrity. My position still stands:

“In the hands of a technologist, AI can be a valuable automation tool.

In the hands of a faculty, it can be a tool to open up new ways to more effectively teach or find holes in research,

and in the hands of a student, it can be a way to better engage with their learning journey… or completely remove a student from their learning journey altogether.”

I do not believe that AI, agentic browsing, or any technology is inherently bad; it is just another tool. Nor do I believe that only students can and will abuse agentic browsers’ abilities. Anyone can use any technology, or any tool, to make of it what they will or accomplish the task at hand. Within human nature there is at times a tendency toward self-doubt about something new we want to achieve. Or we carry the notion that we are “not as good as” what a piece of technology can do in our place. Both can easily run afoul of the learning journey.

Three years ago, in the early days of generative AI, I attended an AI-in-education workshop where the facilitator, a former English teacher, spoke about the benefits and deficits of AI in the humanities and in writing. She said something that has stuck with me and was the single most important thing I gathered from that workshop. It wasn’t her examples of how AI could be used in the classroom, or even how we might improve digital literacy around AI. It was this:

“In writing, we need to remind students that it is more about chopping wood rather than slaying a dragon.”

More often than not, students think that an assignment or a paper is like going on a hero’s quest to slay a dragon: just another part of their four-year degree journey, where tasks in a class are mini-boss levels leading to the ultimate battle, the dragon known as the final term paper.

A hero's tale: a knight slaying a dragon with a large pencil, and the dragon is a college paper

They focus on and believe that once that dragon is slain, the writing is complete, and that is what makes good writing. They lose sight of the fact that the point was never vanquishing the dragon; it was the journey that taught them how. It was this workshop facilitator’s stance that we need to teach students that writing is more like chopping wood, not slaying a dragon. Writing, and I would argue learning in general, is repetitive, slow, and reflective work. But in doing it over and over, and spending the time, it is not just about having split wood at the end. It is about becoming stronger for the next time you pick up the axe. This is what I refer to as the “friction of learning.” Learning something new is inherently difficult. But it is in that friction where we begin to make new pathways and new realizations, and get the opportunity to practice (and, in the end, be evaluated on) these new skills and knowledge.

Students today do want to know how to do things for themselves; my 9-year-old reminds me of that almost every day as he asserts his independence (sometimes a little too boldly). But more important than that, students want to see the point behind WHY they have to know what is being taught. They want things that are meaningful, engaging, or simply practical. If there is a magic screen that can just give the answer, and that is good enough, then we in education need to stress, and provide avenues that prove, why the learning matters.

It is my professional opinion that this friction of learning is what we can leverage for students. That is not to say that friction should be confused with difficulty; rather, it means making the learning a space where students can swing the axe over and over again to become stronger at the learning, and see the reason why they need to pick up the axe in the first place.

Technology’s Disruption of Education

“In my tests of a 54-question final exam, I had an agentic browser complete the exam missing only one question. And for each of its responses, it showed me its work and references for validation…”

Historically, technology has always disrupted the status quo of education, and each time we have seen a net benefit from technology’s progress. But in the moment, there are always two camps: those who welcome the changes, and those who view them as a new “end” of teaching or learning as we knew it. There have always been critics of each change, and AI is no different. As academics, we need to keep a healthy level of skepticism toward any technology in order to separate what is a fad from what is truly a tool. However, unlike technologies of the past, many feel that AI is different. That feeling may come from the fact that earlier technologies helped humans in their creation, calculation, or sharing of their original thoughts, while AI can mimic human language, creation, and what looks like original thought. Because of that, and the suddenness of its development, it has changed the conversation.

Beyond plagiarism, which was a major concern in the early days of the internet, the rise of agentic browsers introduces a new, clear and present danger to academic integrity. Education today depends on the internet and web-based resources to manage teaching and learning, and AI can now automate interactions in ways we have never faced before. In my tests of a 54-question final exam, I had an agentic browser complete the exam missing only one question. And for each of its responses, it showed me its work and references for validation. To some this could mean the end of take-home exams, or exams in general; to others it marks a long-overdue move toward redefining what and how we assess learning. We had similar debates during the early days of search engines, when the fear of the time was that students would just “Yahoo it” (later, “Google it”) and would no longer need to know anything or how to look things up. When is the last time you needed to consult a library card catalog to find a book? The answer, depending on your age, is probably never, or, like me, not since you were a kid. Instead, we now have better ways to find more information. Isn’t that a better trade-off than having to memorize the Dewey Decimal System?

Why Faculty are Concerned

a paper cutout of a teacher holding a laptop with a book in the foreground. Nature in the background

As I demonstrated back in June, an agentic browser could already provide correct answers to a knowledge-based exam in an LMS. In just four months, that same browser now has an integrated agent that can not only give the answers but also take the test for you. It can select the right answers, click the radio buttons in an LMS like Canvas, and move through the exam page by page. It is only a matter of time before the right prompt can move through an entire quiz, answering any question it sees (multiple choice, true/false, short answer, and essay) for a student. Is the answer, in this case, to use more AI to detect this malicious work? Is the answer to bring students back into a classroom, hover over them, and make them write everything out in pencil in a blue book? I, and much of the AI-detection literature, would argue no; that is completely unreasonable in the digital society we have built. There are even fringe cases of teachers outright banning all digital devices in a classroom, to which more analog-inclined and snarky students will always find a workaround, as seen in this YouTube video.

It is easy to see why faculty are concerned, and why some may want to return to analog assessments. But in my opinion, this reaction happens because that is how they were taught, and so they believe that is the way teaching should still be done. This leads to… “training scars.”

Training Scars

While I was in the middle of my Divemaster training, my instructor, whom I deeply respect (no dive pun intended), saw me struggling with a concept one day in class. She was about to take over and drill me on it, but instead she stepped back and, knowing that I am an academic and a trainer myself, took a moment to explain something she called “training scars.”

She told me that when she was young in her diving career, she was taught a certain way, and that the information had to be taught that way because her instructors were taught that way, and their instructors before them were taught that way too (and so on back through their history). Over time she realized that repeating old methods just because “that’s how it’s done” was not always the best approach. She felt herself slipping back into that default just then, but caught herself. Times change. Technology around diving changes. Safety procedures change. And most importantly, learners change. Instead, she opened with, “Why do you feel this [the topic I was struggling with] is so difficult for you?”

That day she chose to break the cycle and teach me that topic differently. It was new, it took longer, and it wasn’t as ‘easy’ as just the ‘old way.’ But I eventually got the skills and knowledge I needed, and better than if I had just been drilled. The bigger lesson, though, was the idea that we as trainers, or educators, need to acknowledge the training scars we carry in our teaching practices. Chances are, we carry a lot of them and never realize it. We often default to how we were taught, believing it is the only right way because “that was the way I was taught” (see my mention in a previous post about “Golden Age thinking“). We fall victim to teaching in a way that feels easy for us, and forget that times change and that the tools and knowledge available today did not exist when we were learning. So why teach in a way that is outdated?

In education, we must be careful not to let our training scars dictate our approach to teaching and learning in this era of AI, or what comes after. We need to adapt to the time our students are graduating into, not the past we were trained in.

The Rising Call: “What is the best way for me to use AI in my class?”

This is probably the single most asked question I get from faculty lately. Faculty want examples of effective AI use in other classes that they can crib from. Of course, that is always a good jumping-off point, but what was effective in one class does not necessarily mean it will work in another, especially with AI. This technology evolves too quickly.

So that leaves the next question: “So, what am I supposed to do then for my course?” The best advice I can give any educator is to step back and ask: what is the point?

  • When you look at your class, your assignments, your assessments, what is their purpose? What is the intended transfer of learning?

For the faculty who are looking to leverage AI as a study aid for students, ask yourself:

  • How could students work toward learning the material you assign, and could conversations with AI augment their thought process? Or would they not?

For faculty who are looking to evaluate students’ learning journeys, ask yourself:

  • Much like chopping wood, shouldn’t we be evaluating the technique of the chopping, and not the cords of wood at the end? For assignments that are longer in form (which is becoming harder and harder in both a lower-attention society and a generative-AI society), think about multiple stages of assessment that evaluate the process, not the product.

A new podcast in my listening queue is AI Goes to College.

And lastly, for the faculty who want to assess a student’s knowledge regardless of whether AI is used in the process, ask yourself:

  • How can you build an assignment or assessment that is more human-to-human? How could students express their learning through media such as video or podcasting? Or are there avenues where they can produce something that can only be done using original thought?

The thing with AI is that it is highly customizable and sometimes very unreliable. So there is, and in my opinion always will be, a need to know enough to make sure an AI is really right. If anything, this strengthens the argument for more critical thinking, and maybe a deeper understanding of topics. In my opinion, the era we are evolving toward opens up more possibilities for students to use the information they are in school to acquire. That is not to say that rote knowledge is unimportant; that debate started in the era of the calculator and continued in the era of the search engine. What it does mean is that knowledge may be acquired through different modalities of teaching and learning. And isn’t that what we have been looking for? For decades students have complained about the group assignment, or rolled their eyes at having to stand and deliver yet another 15-minute presentation. What if this is our opportunity, as educators, to show that these more engaging outputs are not just a way to check their understanding of a topic, but an opportunity for students to produce something more than an AI could mimic?

This is already a growing trend across higher education. The New York Times’ “Hard Fork” episode “AI School is in Session: Two Takes on the Future of Education” is an excellent conversation about where we are today from the views of K-12, higher education administration, and even students themselves.

Proving Student Work Is Done by the Little Grey Cells

Poirot pointing to the temple of his head, saying "the little grey cells"

Students are still concerned about AI allegations against them and want ways to prove their own work. AI detection still isn’t a reliable solution. However, I believe that current technology trends, and now the rise of agentic browsers, make an even stronger case for proctored or locked-down browsers supplied by institutions to preserve online (and accessible) assessments within LMSs. The situation also strengthens the need for students to be able to prove the provenance of their work. Tools like Revision History (which recently updated to support multi-tab Google Documents) and GPTZero’s similar writing-timeline feature, GPTDocs, are strong candidates for how this could be delivered. However, more needs to be done: Microsoft provides nothing natively within its Office products, and these tools unfortunately do not work with Word documents.

AI and Digital Literacy

The last element I want to point to is digital literacy. And here, I want to speak to us as educators. We need to be better at understanding what this technology does, and what it does not do. That means more than just reading a headline and passing it off as what we know.

We need to actually use these tools ourselves. That way we can leverage them in our teaching, better understand how they might benefit students in their learning, and recognize how students may already be using them, instead of assuming the worst.

Where to start learning about AI trends and topics?

My friend and colleague Emily Laird, an AI Integration Technologist and AI Lecturer, is the host of the Generative AI 101 Podcast. Her podcast is my recommendation for anyone in education who wants a good jumping-off point for understanding the many facets of AI and beyond, in episodes that are usually under 15 minutes each.

For a deeper dive into AI topics:

Wired’s Uncanny Valley – not a dedicated AI podcast, but when they do cover it, the tech journalism is solid.

To hear more from the skeptical side of AI:

Mystery AI Hype Theater 3000 (some strong language at times)

Where We Are This Fall (2025)

So, this is where I am, and these are my observations going into the fall 2025 term. AI continues to accelerate, and I believe agentic browsers will be the next big ripple hitting our education pond. The challenge for us is not just keeping up, but adapting our teaching to prepare students for the world they are stepping into, not the one we were trained for. At the same time, we need to maintain a healthy skepticism of AI technology itself. Even though we are in 3 PCE (three years into the Post-ChatGPT Era), it is still early days. The technology, as impressive as it looks or sounds, is still essentially a very good calculator that can automate. That is why I think of it more as IA, Intelligent Automation, rather than actual Artificial Intelligence. And with that mindset, we can better evaluate our courses to be more adaptable, so students can see the value in the friction of the learning process.

There are still ongoing legal battles over copyright and fair use. The environmental cost of the energy and materials used to train AI models is still opaque. The impact on labor, particularly when AI can be purchased as a cheaper alternative to hiring employees, is still being measured. And the glacial pace of politics and policy is always a concern, but also something we should be actively addressing in our own institutions through our pedagogy and andragogy practices.

Education cannot sit back and wait to see what happens. We need to define what makes education valuable, both intrinsically and extrinsically. And we can do that in this era of AI if we come together to define the strengths of the technology alongside the strengths of the human mind.

The views expressed do not represent those of WWU, ATUS, or its subsidiary departments; this piece is intended as an op-ed by the author. -AJ