ChatGPT: IDGAF (Or: How I Learned to Stop Worrying and Ignore the Bot)

                            Photo by Alex Knight on Unsplash

If you weren't a critical reader of blogs like this, you would think there was no bigger story in the world than ChatGPT. Littered with hyperbole, barely evidenced case studies and the kind of end-is-nighism we have not seen since, well, since MOOCs, the education press has been extremely quick to declare ChatGPT the future-executioner-calculator-fire-wheel of the modern university as we know it. Advocates have unpacked and dusted off the claim that anyone who doesn't embrace the generative AI change is a Luddite doomed to spend their life in the Middle Ages (let's ignore the historical contradictions). Fear mongers are amping up the disaster rhetoric (one-third of universities will go broke in five years because of ChatGPT, claims Jordan Peterson). The more rational voices in higher education, meanwhile, are left to navigate a way through the middle and argue that it's the chicken, not the egg, that's important.

We shouldn't be talking about what is effectively vendor spam (it is no coincidence that most of the press coverage has focused on education – a very lucrative market and, IMHO, often a very gullible one when it comes to technology). We should be talking about assessment and how it facilitates learning. Changing assessments to 'defeat' or 'detect' generative AI is both counter-intuitive to good learning design and effectively feeding the beast. Each time we go to the bot to try it out, test its powers and fuel our fears that the machines will take over, we feed it: our prompts and conversations become data that can be used to train and refine future versions of the model. When we use ChatGPT we are doing the beta testing for the company that owns the bot. OpenAI became a capped-profit entity in 2019 (despite the name) and has received funding and/or governance from Elon Musk, Peter Thiel (co-founder of PayPal), Reid Hoffman (co-founder of LinkedIn), Sam Altman and Microsoft, amongst others (see the Wikipedia page). Whilst their stated intentions are to feed the open-source market, Reuters reports that OpenAI expects ChatGPT to earn US$200 million in 2023 and US$1 billion in 2024. This is bolstered by an estimated US$10 billion in funding from Microsoft.

So why don’t I give a fuck about ChatGPT? Well, let me posit four reasons.

  1. Been there, done that

The people advocating for widespread disruptive change to education because of ChatGPT and other generative AI platforms are often the same people (or use the same tropes) as those who claimed MOOCs would change the future landscape of higher education. And guess what: they didn't. Neither did Pokémon Go, Second Life or any other technological 'disruption'. Not even an experience of unprecedented potency like the pandemic led to long-term change in higher education, with the snapback discussed in this blog in full swing and online learning returned by institutions to its apparent 'natural home' on the fringes, declared a second-class experience when compared to the 'perfection' of our face-to-face teaching model. It is too early to predict the future impacts of generative AI, or whether it will become the next ubiquitous learning tool like the computer or the calculator. But that doesn't stop advocates wanting to be first to the media well to say they know better and that they knew it first.

  2. If there is a problem, we created it

Assessment in higher education has been a problem for decades. Our students have told us this in successive student satisfaction surveys at local and national levels, which put satisfaction with assessment and feedback significantly lower than anything else (including car parking and university food). Yet we are still wedded to the exam as a mode of assessment (returning to it after the pandemic like a long-lost favourite sweater). We have deeply inculcated a small suite of assessment modes and practices into our LMS, our technology suite, our integrity detection and our measures of quality, achievement and performance. This rusts these practices onto our curriculum and our quality assurance and enhancement processes. We have been talking about and debating authentic assessment for well over a decade now, with little evidence of a widely accepted definition, frameworks or new types of assessment modes, question types and feedback. As universities have marketised and increased cohorts and the breadth of program offerings, the imperative for assessment has become scale. Auto-marking, online exams, AI-generated feedback and other efficiency interventions are fed by more simplistic, dichotomous, memory-based or standardised questions. The problem that AI apparently exposes has been there for decades. In general, we design inauthentic assessments with bad questions that don't assess or facilitate learning. They can be answered through a Google search, replicated by contract cheating sites and aggregated from unattributed sources before we even get to AI.

Of course, if we keep assessing in inauthentic ways, asking students to repeat knowledge back to us or to show how much they remember using modes of communication they will never be asked to use again, then ChatGPT simply makes what they can already do simpler, easier and more convenient. And because we are only asking for words to be fed back to us, a generator of words like ChatGPT is perfect. Generative AI does not produce knowledge. It does not know who you are and what learning is doing to you as a person. It does not understand meaning. What it does is write words, without meaning or context. It is the perfection of the infinite-monkeys-writing-Shakespeare argument (it is a real thing: the infinite monkey theorem). Does text generated by ChatGPT work when words without meaning satisfy a definition of competence? Yes. But is that learning? The meaning of the words in a Shakespearean play comes from the reading, the performance and the emotions, feelings and actions they engender, and further from how those words reside within a unique set of lived and yet-to-be-experienced crises, challenges, relationships and moments. Learning resides in the same.
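
For the curious, the theorem behind the metaphor is simple probability, and it underlines the point: given enough random output, the 'right' words eventually appear with no meaning attached. A minimal sketch of the standard block argument (the symbols here are mine, not the post's): if a monkey types $N$ characters uniformly at random from an alphabet of $k$ symbols, split the output into $\lfloor N/n \rfloor$ disjoint blocks of length $n$; each block matches a fixed target text of length $n$ with probability $k^{-n}$, so

$$
P(\text{target appears}) \;\geq\; 1 - \left(1 - k^{-n}\right)^{\lfloor N/n \rfloor} \;\longrightarrow\; 1 \quad \text{as } N \to \infty.
$$

Certainty of producing Hamlet's words, and still zero understanding of Hamlet.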

  3. ChatGPT doesn't learn. It just becomes more efficient

What ChatGPT is, in this instance, is replication as travesty. ChatGPT may be able to write a speech or an essay or a sermon or an obituary, but it cannot create a genuine song. It could perhaps in time create a song that is, on the surface, indistinguishable from an original, but it will always be a replication, a kind of burlesque. Songs arise out of suffering, by which I mean they are predicated upon the complex, internal human struggle of creation and, well, as far as I know, algorithms don't feel. Data doesn't suffer. ChatGPT has no inner being, it has been nowhere, it has endured nothing, it has not had the audacity to reach beyond its limitations, and hence it doesn't have the capacity for a shared transcendent experience, as it has no limitations from which to transcend.

Nick Cave – The Red Right Hand Files

If we assume that assessment is there to assure that knowledge has been remembered and can be applied to abstracted, standardised contexts such as case studies, then ChatGPT makes the process of evidencing that outcome more efficient. The more questions we feed it, the more data there is to refine its responses. The more data it consumes, the more it reinforces its answers. If it doesn't know, or doesn't have enough data, it either tells you or it confidently makes something up. But what if we ask it what I have learnt, or how I have used the knowledge to further my understanding of the unique phenomena and experiences around me? It can consume information about you and make assumptions, but it cannot get to know you. It cannot understand you, nor can it experience what you have experienced to make you the person or leader or professional you are. Learning, in the way it is constructed, is a human trait built on, as Nick Cave so eloquently puts it, emotions and experiences such as suffering and grief, but also joy, satisfaction, confidence, sociality, ego and ambition, amongst so many others. Learning is not a procedure; it is a sometimes traumatic, sometimes joyous journey of transition from not knowing to knowing, from incompetence to competence and from personal to collective. No generative AI can replicate that.

A general question asked of ChatGPT: a generic but OK answer (with US spelling, mind)

The same question, but with a specific subject added (also good to know that the bot doesn't know who I am. Yet)

  4. ChatGPT is an ethical challenge

Generative AI exists in a challenging and grey ethical area. First, there is the use of humans to reduce the propensity for the bot to be abused into generating toxic content: there are allegations that workers in the global south were used to manually train the bot not to react to prompts and provocations to generate pro-fascist content (for example). Secondly, there is the issue of where the data has been scraped from and whether it has been attributed, and thirdly the thorny copyright issue of who owns the output of generative AI. This is a legal quagmire that has yet to be even vaguely tested in court. For higher education, who owns the intellectual property and, more importantly, how that data is recognised and cited, is critical. This will invariably be rigorously fought in courts around the world, first over the source data the bots use to generate natural language responses and then, as monetisation happens, over how the 'copyright' of the generated text is enforced. Like many tools, ChatGPT is built on the labour of others. It assumes the precepts of the Internet as the wild west, without legislative or copyright boundaries, which has been proven time and again not to be true. Academia has struggled with the greyness of Internet knowledge, beating itself up over Wikipedia and open access, often at the behest of publishers.

Higher education needs to hold this data to the same deeply structured and transferable standards it applies to the ethical management of research data. Who owns it? How was it collected? Did the parties give consent? Is it managed to ensure privacy, attribution and agency for the sources of the data? Universities must apply these standards; why shouldn't ChatGPT have to?

So, what do I give a fuck about then?

CTFD – and do as the Hitchhiker’s Guide to the Galaxy says…

We don't panic. We stop jumping about like beans trying to defeat, embrace or outflank the bots. Each time we do that while the product is in beta, we are simply giving the developers more test data, over and above what they can already scrape.

We don't panic! We don't need to change assessment tomorrow, mainly because the moment we do, we feed the bot again as it finds ways around whatever we change. Universities represent a huge opportunity to monetise this platform. The fact that so much traffic talks about how ChatGPT creates assignments (at credit level, whatever that means) shows that the media forces behind ChatGPT are very willing to poke the bear and get a response (and it's working a treat).

We don't panic. Writing better questions and setting more authentic tasks will always encourage a learner to demonstrate their learning. Better tasks generate more effective and useful feedback and feed-forward. Assessments that require more than memory, regurgitation or replication engage learners in higher-order, transdisciplinary skills. They prompt and catalyse journeys through transitional spaces and can create deeper learning through real or simulated experiences. This isn't theoretical. Academics have been making assessments like this for decades. Simulations, reflections, presentations, portfolios… the list is endless; it's just that scale and systems have made them seem niche.

We need to think deeply about assessment. Change the damned questions and stop asking students to regurgitate our curated version of knowing. That is not learning. Stop asking students to apply and recontextualise knowledge in the standardised images we create; ask them instead to use knowledge and skills within their own unique ecosystems of lived experience, work, life, play and learning. No bot can make that real for any person. All the bots can do is create another fake person, with a fake picture and a fake life. If that is good enough for us as teachers and for you as students, then we have failed as academics and the prophecies of the doomsayers deserve to be right. We are better than that, and our assessment and teaching must reflect that. ChatGPT is just another warning sign that we have to think about, but it is not the first and it won't be the last. Make assessments that are epistemologically, educationally and experientially authentic. By that I mean they have value and meaning to both the educator and the student. They catalyse learning through action, application and connection. They use knowledge and skills, but don't require students to repeat or replicate them. They don't have to take longer to mark, they don't have to trick the AI and they don't require the dismantling of everything we know and love in the academy. I will have more to say in the next few weeks on designing and deploying authentic assessment.

Thirdly, let's open the door. Let's treat any generative AI like a Google search or a journal article or a book, or even Wikipedia. Go ahead, students: use ChatGPT to write your assignments. BUT REFERENCE IT. We ask this of references drawn from the Internet, we ask this of 'traditional' literature, we ask this of any writing that is NOT YOUR OWN. So sure, use ChatGPT to write part of your essay. But get graded on its quality and on the admission that you did not write it. And if you choose not to acknowledge the source of the information, then you are plagiarising, you are engaging in academic misconduct, because all sources deserve to be acknowledged. That is a human trait of respect. You may wish to go further and consider the veracity of the words generated by ChatGPT. Are the references real (one quick way to triage this is sketched below)? Are the sources of information that it scrapes (and does not acknowledge) trustworthy and reliable? Be critical. Finally, you may do as my friends Professor Lawrie Phipps and Dr Donna Lanclos suggest and consider the ethics of using words generated by ChatGPT, and acknowledge that:

This presentation/paper/work was prepared using ChatGPT, an “AI Chatbot”. We acknowledge that ChatGPT does not respect the individual rights of authors and artists, and ignores concerns over copyright and intellectual property in the training of the system; additionally, we acknowledge that the system was trained in part through the exploitation of precarious workers in the global south. In this work I specifically used ChatGPT to …. 

DigitalisPeople – Lawrie Phipps and Donna Lanclos, with the help of Autumm Caines
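
On the 'are the references real?' question: a practical first filter is to check whether a cited DOI actually resolves. Below is a minimal sketch, assuming the citations you are checking include DOIs and that the third-party requests package is installed; the function name and sample DOIs are illustrative, not from this post.

```python
# Triage ChatGPT-generated citations by asking the public doi.org resolver
# whether each DOI exists. Real DOIs redirect (3xx); fabricated ones return 404.
import requests

def doi_resolves(doi: str, timeout: float = 10.0) -> bool:
    """Return True if doi.org recognises this DOI, False on a 404."""
    resp = requests.head(
        f"https://doi.org/{doi}",
        allow_redirects=False,  # we only need to know the resolver knows it
        timeout=timeout,
    )
    return resp.status_code != 404

if __name__ == "__main__":
    # Hypothetical examples: the first is a real DOI, the second is invented.
    for doi in ["10.1038/s41586-020-2649-2", "10.9999/fake.2023.0001"]:
        print(doi, "->", "resolves" if doi_resolves(doi) else "not found")
```

A resolving DOI is necessary but not sufficient: a bot can attach a real DOI to a claim the source never makes, so a human still has to read what it cites.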

As at each of the previous 'watershed' moments in higher education, any opportunity to rethink assessment should be welcomed. Any opportunity to de-rust some of the assessment architecture and separate the quality of assessment from the UX of the technology and IT infrastructure should be grasped. Any shift away from the mindset that assessment weeds out the weak and rewards those who follow the narrow pathways of learning that have defined disciplinary knowledge for decades should be celebrated. Maybe we can start giving a fuck about assessment. That would be nice.


 
