I recently came across a statement on AI in universities by the Aotearoa Communications and Media Scholars Network. I can't say I was that impressed by the way it reduces issues around AI in higher education to a series of single cause-and-effect assertions when the situation is a lot more complicated. In addition, the accompanying infographic is a bit of a poor showing for a group of media and comms scholars if you ask me... and as a social semiotician you can.

The following critically examines the ten assertions about AI made by the Network, highlighting where their claims are overly simplistic and (hopefully) offering a more nuanced perspective.


<aside>

The infographic that summarises the statement is a bit of a dumpster fire in and of itself. The underdeveloped visual language and lack of empirical grounding are frustrating. The heavy reliance on dense academic jargon such as "techno-capitalist conjuncture" increases the cognitive load for a non-specialist audience, hindering immediate comprehension. An effective infographic integrates textual and visual modes, yet here the visual-symbolic layer is weak: minimal icons and imagery reduce the potential for meaning-making and recall (Kress and van Leeuwen, 2006). The visual rhetoric also lacks integrity, presenting powerful assertions without accessible citations or data visualisations, a practice design theorists like Tufte (2001) would criticise. Although the QR codes offer a route to further information, the primary visual text fails to function as a self-contained, evidence-based artefact, instead operating as a set of unsubstantiated claims that demand significant prior knowledge from the viewer.

</aside>

1. AI is not an inevitable techno development; rather it is the result of the techno-capitalist conjuncture.

To say artificial intelligence is solely the result of a "techno-capitalist conjuncture" is pretty reductionist. It ignores non-commercial forces such as state-driven geopolitical competition (e.g., Lee [2018] on the USA versus the PRC), as well as military applications and national security imperatives, which are paramount drivers quite distinct from market profit.

Leading with the techno-capitalist conjuncture is also anachronistic: it ignores the decades of foundational, publicly funded, and academically driven research that predates AI's modern commercial explosion (e.g., Nilsson, 2010).

And if you want to be ideologically extreme about AI, how about the military-industrial [academic] complex? DARPA, for example, has historically been a primary engine for technological leaps (see: the Internet!).

All of this suggests that AI's current trajectory is determined by a complex interplay of state power, scientific curiosity, and capital, not by the logic of the market alone.

2. AI is a corporate product of private companies which encroaches on the public role of the university.

There is definitely some truth in the idea that the AI landscape is increasingly dominated by corporates like Google, Microsoft, OpenAI et al. Taken at face value, however, the assertion is misleadingly antagonistic and ignores the historical and symbiotic relationship between academia and industry.

The foundational deep learning architectures that power modern AI were overwhelmingly developed within university labs, often with public funding, long before ChatGPT could plan your holiday. That relationship continues: universities supply industry with essential talented graduates and fundamental research.

As with assertion 1, framing AI as encroaching on university territory is reductive; it ignores frameworks like the "Triple Helix" model (Etzkowitz, 2017), in which university, industry, and government interact to drive innovation.

If there is a tension, it arises less from a one-sided corporate intrusion (that horse has already bolted: thanks, Neoliberalism!) and more from the university's own systemic shift towards what Slaughter and Leslie (1997) term "academic capitalism", where institutions actively seek to commercialise knowledge (thanks again, Neoliberalism!).

The university's public role is not being erased; it is evolving. The academy remains uniquely positioned to provide independent ethical critique, investigate long-term societal impacts, and pursue foundational knowledge (without having to deal with product cycles!), roles that corporates are structurally ill-suited to perform.

3. AI impedes rather than supports intellectual work in its emphasis on formulaic 'results'.

This assertion narrowly casts AI as an engine for standardised outputs and fundamentally mischaracterises the potential of LLMs to augment cognition and streamline intellectual labour. Uncritical application of AI will certainly produce formulaic work, but its more profound impact lies in its ability to automate genuinely tedious and data-intensive tasks (kudos to anyone who has ever written a systematic literature review or had to deal with mountains of complex data). Take out the tedium and, all of a sudden, the human researcher is free to apply more time and effort to higher-order synthesis, hypothesis generation, and critical thinking.

AI can function as an intellectual scaffold, just as other elements of computing do (Engelbart, 1962), enabling scholars to identify novel patterns in vast datasets and explore complex models far beyond the scope of unassisted human cognition. The risk of intellectual atrophy is real, but it is not an inherent property of AI itself; rather, it is a pedagogical challenge: how do we continue to develop students' critical thinking? AI is in the process of redefining intellectual work, and the endpoint of that redefinition is some way off. In the meantime, our current situation demands a paradigm shift from the simple production of results to a more critical engagement with the processes of inquiry and interpretation.

4. AI promotes an unethical, reckless approach to research.

Asserting that AI promotes unethical and reckless research mistakenly attributes agency to AI, when the perceived "reckless" and "unethical" approaches to research using AI are in fact symptoms of the intense commercial and geopolitical pressures under which it is developed.

The "move fast and break things," "fail often" culture of the tech industry, coupled with the immense capital investment driving a race for market dominance, creates a socio-technical environment where ethical guardrails are often sidelined. This sidelining is a problem of governance, not of the inherent nature of the technology. Put another way, students (which I am most familiar with in this context) don't cheat because they can (that's something I am likely to do, just to see if I could). Rather, they feel compelled to cheat or rely on AI because they feel like they have no other choice. Again, this is a pedagogical challenge rather than one of a technology causing people to throw ethics to the wind.