“Uncritical AI Adoption” vs. the Gain Equation: Why abstinence fails and critical adoption wins
A response to a recent paper
If “uncritical adoption” of AI is the problem, what is its opposite? Resistance, or at least that is the answer given by 18 mostly Dutch scholars (Guest et al., 2025): pause, defer, even ban. In Against the Uncritical Adoption of “AI” Technologies in Academia, they compare AI to cigarettes and close with this line:
“We and our students can choose not to use these technologies. Just like we have banned smoking from public spaces, we could foster that process of banning both by choosing to individually quit smoking and by demanding regulation of the tobacco industry.”
Their concerns are real and felt by many. Rushing AI into classrooms can widen inequality, weaken academic integrity, and erode critical thinking. Studies pointing in this direction keep coming. But “just say no” feels detached from classroom reality. Students already use AI, often invisibly. Telling them not to is like telling non-English-speaking students at English-language universities, ten years ago, not to use Google Translate.
The smoking analogy has limits. Cigarettes can be banned from classrooms; AI cannot. As the authors point out and lament, it is already built into Word, Google Search, and soon most LMS platforms. For many, AI is becoming a “normal technology,” like electricity. More than that, it is turning into a cognitive layer under everyday thinking.
Here Guest et al. are right to worry about dependence. Smoking and nicotine addiction show how “choice” bends toward short-term reward and design. Humans take the path of least resistance.
We can think of this in terms of the gain equation:
GAIN = VALUE − COST (EFFORT).
That path is easy with today’s chatbots. The gain equation is tilted because chatbots are vastly knowledgeable, fast, fluent, and sycophantic.
If the visible effort is low and the value seems high, we reach for the tool.
GAIN = VALUE − COST
High (perceived) gain = value of AI use (speed, convenience) − minimal effort
Unfortunately, the hidden cost of AI overuse is lost learning, and when students do not “see” that cost, offloading becomes the default.
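To make that hidden cost concrete, here is a back-of-the-envelope version with purely illustrative numbers (mine, not the paper’s). Suppose a student values an AI-polished draft at 10, the visible effort at 1, and the forgone learning at 12:

Perceived GAIN = 10 − 1 = 9
True GAIN = 10 − (1 + 12) = −3

The tool feels like a bargain because the largest cost never shows up on the bill.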
AI enthusiasts, on the other hand, see the equation differently: use AI with purpose, add effort, and you get real gains, such as new abilities, higher-quality ideas, better drafts, and stronger research.
GAIN = VALUE − COST
High (felt) gain = value of AI use (enhanced human skills) − purposeful effort
High felt gains are possible with AI, but so is the darker path of shallow perceived gains: repeated offloading until the habit reshapes how students think and write.
That is dependence, edging toward addiction.
I saw this tension in a workshop I gave last week on academic writing and AI tools. We spent more than an hour and a half on five sentence-level principles: state one main idea, keep structure simple, use subordination, put key information in the main clause, and end with the most important point. Then we discussed research tasks and AI workflows.
I asked teams to decide human vs AI roles across research, brainstorming, drafting, and language revising: only human, human with some AI support, human-AI collaboration, only AI. Even after I showed examples of how LLMs often break those very rules, three of four teams were happy to let “only AI” do all the revising.
This is the real danger. The problem is not AI use but handing over judgment. Students assume the system “knows” more than they do, which is often true, especially for non-English-speaking writers producing research in English. But cognitive offloading here means the tool stops supporting human reasoning and starts replacing it.
I don’t think anyone agrees with uncritical adoption. So, what is the opposite? The more reasonable option is not abstinence. It’s critical adoption.
Critical adoption means clear limits and clear purpose. Teach students to evaluate AI, not simply accept it. Build workflows where humans lead and AI supports. This will mean changing priorities, curricula, assignments, and assessment so that checking, comparing, and explaining AI output become routine, graded moves rather than optional extras.
Also, I can’t imagine the rhetorical effort and skill it would take to talk students out of using AI. It’s like insisting everyone navigate only with paper maps when the world already uses GPS. It will be easier, more persuasive, and more useful to spend that pedagogical effort channeling AI and preparing students for the workforce they will actually enter.
The task for critical adopters is to tilt the gain equation: to demonstrate, and convince students, that the extra effort with AI pays off. This means building verification, comparison, and source-checking skills, and helping students see and feel that the additional effort is more rewarding than dumping everything on the machine. That includes building foundational knowledge and skills and teaching AI fluency so students know when to use AI, when not to, and how to critique its output.
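Returning to the earlier illustrative numbers (again, made up for the sake of the sketch): add 3 units of purposeful effort for verification and revision, keep the learning so the hidden cost drops to 0, and let the checked, improved draft be worth 12 instead of 10:

True GAIN = 12 − (1 + 3 + 0) = 8

Same student, same tool; what changed is where the effort went.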
Of course, not even the most effective and confident critical adopter knows the best balance between human-led reasoning and AI support, or how to prevent dependence from sliding into decline or addiction.
Openness and humility are needed. The work ahead is neither uncritical AI use nor AI abstention. It will be critical experimentation: testing ways to integrate AI that protect human judgment and keep learning at the center.
References
Guest, O., Suarez, M., Müller, B. C. N., van Meerkerk, E., Oude Groote Beverborg, A., de Haan, R., Reyes Elizondo, A., Blokpoel, M., Scharfenberg, N., Kleinherenbrink, A., Camerino, I., Woensdregt, M., Monett, D., Brown, J., Avraamidou, L., Alenda-Demoutiez, J., Hermans, F., & van Rooij, I. (2025). Against the uncritical adoption of “AI” technologies in academia [Position paper].