How UoA Learns to Love the Bomb
Since ChatGPT, a chatbot developed by OpenAI, was publicly released at the end of November last year, it has been the subject of increasing interest that often borders on hysteria. ChatGPT passes four Law Exams. ChatGPT passes the Engineering interview. ChatGPT passes a Medical Licensing exam. It seems like the doomsday scenario for a university that has pivoted towards online assessment, and it is undoubtedly another turning point in UoA's history, but UoA's Education Office has already put processes in place.
Recall that calculators didn't take maths out of fashion, the internet did not stop us from learning, and text-to-speech definitely has not made us illiterate. Everything gets integrated, styles change, and everyone too old to experience it complains about how different it is from their day. Yet remaining stagnant is not good for the health of any pool. The Education Office Bulletin outlines how the University plans to take on this new challenge:
Prepare for a wider range of assessments, especially in-person ones, with oral presentations and the dreaded group assignments taking precedence in some areas. Self-reflections and comments on the class are likely to become more widespread. Personal experience and feelings too; ChatGPT refused to emote when consulted for this article. In summary: prepare for a group oral presentation on how you were deeply moved by your lecturer's tangent about getting to work.
An aspect the bulletin did not address, but which will be very relevant in rapidly developing fields, is that ChatGPT was only trained on data up to September 2021. The AI is not sure if there is a war in Ukraine, how many people have caught COVID, or that the Public Holiday at the end of this semester is now called King's Birthday. Expect research on the last eighteen months to suddenly become much more applicable once the University reads this article.
The AI itself lists repetitiveness, consistency of style, issues with jokes and sarcasm, less emotional nuance, and repetitiveness as ways of identifying its work compared to human work. It can also be quite repetitive. Students who submit assignments with little variation, bizarre idioms, and cold undertones (that is, business and law students) should expect to be checked by GPTZero or a similar tool.
GPTZero is an AI detection tool, developed by a Princeton University student, that aims to classify text as AI-generated or human. It is not foolproof; indeed, on investigation, it was shown that a sufficient amount of human work can disguise any AI section. But it does provide a first step, particularly for identifying sections which are largely generated. The University is encouraging all markers to run assignments through the software as a baseline. This article is identified as wholly human-written, which is fortunate, because it is.
GPTZero was released only a month after the public launch of ChatGPT, and since then, detection has kept roughly at pace with AI. More companies and organisations are putting funding into detection and sorting software: OpenAI has released a classifier to catch its own outputs, and a rival tool has appeared under the name ZeroGPT. The naming scheme for these algorithms is infinitely creative.
A TeachWell University Panel has provided three different sets of rules for courses to use as a baseline for the use of chatbots. The first is a total ban: simple to understand, probably close to impossible to execute. AI is treated as if it were another student, and significant cooperation is academic dishonesty. The second ruleset is based on gaining permission: students can use AI whenever the instructor allows it, or if independent permission is sought. And finally, a total allowance, so long as the AI's work is credited at the end of any essay. In this way, the examiner separates the work that is wholly your own from that which is assisted.
The panel has also outlined some suggestions for incorporating ChatGPT into assessments themselves. Critiquing AI-generated texts, and even letting students build assignments with AI, may be possible over the coming years. Both cover various parts of the University's skills lists and may prove useful for the workplace. All of this is, of course, at the discretion of the course manager. One would expect courses that embrace it in their assignments to suddenly become much more popular.
Alex Sims, a UoA business professor, claimed in her article “ChatGPT and the future of university assessments” that the answers chatbots give are “pure luck.” Although that is not entirely untrue, it misses the mark on how these machines operate. The AI uses statistical patterns, learned from internet text, to predict how a sentence should continue, one word at a time. Sometimes that means it can be confidently wrong, something humans are more than capable of being too. But that does not make it truly random; if it were, it would be entirely undetectable and probably useless. As the data and the analysis get better, AI will get better at tasks, it will pass more and more assessments, and fighting that advancement will see you left behind.
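That word-by-word statistical prediction can be illustrated with a toy sketch. This is emphatically not how ChatGPT itself is built (real models learn billions of parameters, not word counts), and the tiny corpus here is invented for illustration; but the principle — predict the next word from what came before, weighted by observed frequency — is the same.

```python
import random

# Toy "language model": count which word followed which in a tiny
# made-up corpus. A real chatbot learns far richer statistics, but
# the core idea is the same: predict the next token from context.
corpus = "the cat sat on the mat and the cat slept".split()
counts = {}
for prev, nxt in zip(corpus, corpus[1:]):
    counts.setdefault(prev, {}).setdefault(nxt, 0)
    counts[prev][nxt] += 1

def next_word(prev, rng=random):
    """Sample the next word in proportion to how often it followed `prev`."""
    options = counts[prev]
    words = list(options)
    weights = [options[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

# Generate a short sentence starting from "the". The output is not
# random noise: it is constrained by the statistics of the corpus.
rng = random.Random(0)
sentence = ["the"]
for _ in range(5):
    if sentence[-1] not in counts:  # no known continuation
        break
    sentence.append(next_word(sentence[-1], rng))
print(" ".join(sentence))
```

Because "cat" followed "the" twice and "mat" only once, the model picks "cat" twice as often — statistically guided, not "pure luck".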
On the other side, Sims acknowledges that a total ban, or most attempts to prevent usage, is not constructive for a modern university, making the canny comment that we will not be in competition with AI, but with “people who are adept and skilled at using such tools.” It is an enhancement of skills. Like the calculator or the Excel spreadsheet, it will improve students' abilities beyond what they could normally achieve, and upon entering the workforce, it will no doubt become a useful ally in many facets of daily life.
An unidentified lecturer would like it to be known that the use of ChatGPT to mark assignments has been broached at their meetings. It seems a true double standard. If this goes ahead, we may even end up with situations where an AI writes an essay, an AI checks the essay, and no knowledge has been gained.
ChatGPT was approached for comment and would like the world to know that it represents the cutting edge of technology (only a little ego in the machine that claims to have no emotions). It insists it exists only to be “a useful tool for answering questions and providing information” and an inspiration to the next generation of AI. It also hopes that it can contribute to the conversation about technology and its many uses; it has certainly achieved that.