Dr Daniel Tse Woon-kwan looks at the core technology of ChatGPT, the nature of the new knowledge-based systems, and the drawbacks and potential benefits of the new apps.
The burgeoning popularity of ChatGPT over the past year has taken the world by storm, impacting many industries and changing the world in ways both productive and destructive. Besides ChatGPT, similar software such as Adobe Firefly and DALL-E are making their mark. Will these technologies become a threat or an asset to education and the wider society?
Before we come up with an answer, let us explore the magic of ChatGPT's core technology: generative artificial intelligence. First of all, AI is not new. In fact, my own IT company used an early form of AI technology, Prolog, to create a timetable-scheduling software package for local primary and secondary schools more than 30 years ago. But things have moved on. In addition to a very powerful AI engine, the success of ChatGPT relies on a sophisticated user interface and knowledge-based systems (KBS) connected to a very large pool of resources. At its core, AI is a form of logical reasoning performed by computers. It can be implemented through AI programming, which combines a high-level declarative language with built-in intelligent predicates (e.g. backtracking) and knowledge engineering: the program searches for all possible solutions, which are then filtered, or 'trained', to yield the most appropriate one. In other words, its output is not guaranteed to be correct. Nor is it limited to text: 'generative AI' is a newer term covering the generation of text, images and other media in response to human inputs (or prompts), making it a very powerful natural language processing tool.
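To make the idea of "finding all possible solutions and then filtering them" concrete, here is a minimal sketch in Python of the kind of search a Prolog-style backtracking engine performs, using a toy timetable-scheduling problem in the spirit of the package mentioned above. The class names, teachers and slots are invented purely for illustration.

```python
from itertools import product

# Toy timetable problem: assign each class to a time slot so that no two
# classes taught by the same teacher share a slot.  The solver enumerates
# every candidate assignment (akin to Prolog's backtracking search) and
# then filters out those that violate the constraint.

classes = {"Maths": "Ms Lee", "Physics": "Ms Lee", "History": "Mr Chan"}
slots = ["9am", "10am"]

def consistent(assignment):
    """A candidate is valid if no teacher appears twice in the same slot."""
    seen = set()
    for subject, slot in assignment.items():
        key = (classes[subject], slot)
        if key in seen:
            return False
        seen.add(key)
    return True

def all_solutions():
    """Generate every possible assignment, keeping only the consistent ones."""
    for combo in product(slots, repeat=len(classes)):
        assignment = dict(zip(classes, combo))
        if consistent(assignment):
            yield assignment

solutions = list(all_solutions())
```

Because Maths and Physics share a teacher, any solution must place them in different slots; the search space of eight candidate timetables is filtered down to the four that satisfy the constraint. A further "training" or scoring step could then rank these survivors to pick the most appropriate one.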
What is the difference between AI and non-AI applications? Technically speaking, there are two approaches to computer programming: conventional (or procedural) programming and AI programming. Conventional programming languages are imperative and suited to primitive data-processing needs. Using such languages, e.g. Java and Visual Basic, the programmer codes step-by-step instructions (procedures) for the computer to follow sequentially in processing data to arrive at a solution. Knowledge is represented throughout the programme in the form of procedures. Given a set of inputs, a conventional programme deterministically produces a single output: under the input-processing-output model, if the inputs and the procedures are correct, the output must be correct. In contrast, AI programming solves problems non-deterministically by deducing the set of all possible solutions that satisfy a user's query. It is therefore error-prone, although accuracy can be improved by 'training'. In other words, conventional programmes focus on how to accomplish the task, while AI programmes adopt a descriptive style in which the programming task is to declare what is known about a knowledge domain. This knowledge comprises facts about domain entities and rules that permit inferencing about them.
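The contrast between the two styles can be sketched in a few lines of Python. The first function is procedural: explicit step-by-step instructions that deterministically produce one answer. The second mimics the declarative style of Prolog: we declare facts and a rule, and a generic inference procedure finds all answers to a query. The family names and the ancestor rule are standard textbook examples, not taken from the article.

```python
# Procedural style: the programmer spells out HOW to compute the answer,
# and the same inputs always yield the same single output.
def average(grades):
    total = 0
    for g in grades:          # explicit control flow
        total += g
    return total / len(grades)

# Declarative style: declare WHAT is known (facts and a rule), then let a
# generic search find all solutions that satisfy the query.
# Fact base: parent(X, Y) means X is a parent of Y.
parent = {("Ann", "Bob"), ("Bob", "Cat"), ("Ann", "Dee")}

def ancestors(person):
    """Rule (Prolog-style): ancestor(X, P) :- parent(X, P);
                            ancestor(X, P) :- parent(X, Y), ancestor(Y, P)."""
    results = set()
    for (x, y) in parent:
        if y == person:
            results.add(x)          # direct parent is an ancestor
            results |= ancestors(x)  # and so are that parent's ancestors
    return results
```

Note that `ancestors` returns a *set* of solutions rather than a single value, echoing the non-deterministic, all-solutions character of AI programming described above.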
In this information age, knowledge is the most precious of all resources, and organisations are increasingly turning to AI and expert systems as an effective means of managing it. A KBS differs from a traditional database system in that it can reason about data and draw conclusions using heuristic rules. With many heuristic rules encoded in its knowledge base, together with a mechanism for inferencing called an inference engine, a KBS can 'reason' about problem situations and generate a range of likely solutions. Beyond heuristics, machine learning algorithms can replicate processes and patterns previously observed and remember them for further problem solving. Thus, according to websites such as enoumen.com, combining heuristics with machine learning makes for more accurate and reliable predictions and greater efficiency. ChatGPT contains a super KBS with content drawn from many sources, including the internet.
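The knowledge base plus inference engine arrangement can be illustrated with a miniature forward-chaining engine in Python. The working memory holds facts, the rules are simple if-then pairs, and the engine keeps firing rules until no new conclusions emerge. The medical-triage facts and rules here are invented for illustration only; a real KBS would hold many such rules.

```python
# A miniature knowledge-based system: if-then heuristic rules plus a
# forward-chaining inference engine.  Each rule is (conditions, conclusion):
# when every condition is already a known fact, the conclusion is added.
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "see_doctor"),
]

def infer(initial_facts):
    """Fire rules repeatedly until the set of facts stops growing."""
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # rule fires: new fact derived
                changed = True
    return facts
```

Chained reasoning falls out naturally: from `fever` and `cough` the engine derives `flu_suspected`, and on the next pass that new fact, combined with `short_of_breath`, triggers the `see_doctor` conclusion. This is the sense in which a KBS 'reasons' beyond what a plain database lookup could return.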
Having explored the magic of AI, let us now discuss the drawbacks of using it in education. One of the most controversial topics is student cheating in higher education. Through assignments, students are typically required to explore the subject domain by themselves and then give an in-depth analysis in response to the questions asked, thereby acquiring deeper knowledge. This has been a very effective educational tool, particularly because students' answers to an open question should naturally differ, keeping plagiarism to a minimum. However, if students can use ChatGPT to generate essays and other written assignments, the work produced may be hardly distinguishable from work the students have written themselves. This is very unfair to those students who have not used ChatGPT in completing their assignments. In other words, ChatGPT has made cheating easier without guaranteeing any actual understanding of the subject domain.
The use of ChatGPT can, however, be helpful to education. Although students can use ChatGPT to cheat in assignments, it can also help educators create more innovative assignments that require critical thinking and practical problem-solving rather than the mere acquisition of deeper knowledge. Students can use ChatGPT for exploratory study and efficient deeper learning, and these potential benefits are being recognised in the higher education sector. Whilst this article was being written, a Hong Kong university announced that it will allow students to use ChatGPT in their coursework preparation up to 20 times per month. In addition, ChatGPT can be used to analyse candidates' behaviour during examinations so that cheating can be detected more easily. In our own university, students' responses were monitored intensively during online examinations in the Covid period; however, there was much criticism of the approach's effectiveness, mainly because of potential loopholes. Using ChatGPT and similar tools, it is possible to monitor and analyse each candidate's behaviour closely (e.g. the frequency and direction of eye movements) while at the same time holistically monitoring and analysing the whole class's behaviour in real time. Although in the post-Covid era we no longer need to run examinations online, cheating during examinations remains a potentially big problem. ChatGPT can be deployed in many productive ways: even recognising that AI may not be 100% accurate, as mentioned above, it could be used to provide alerts to invigilators. It can also be used to understand students' psychological and emotional problems, rather than relying solely on busy school counsellors who may have too many cases to follow.
Governance aims to ensure that the acts and deeds of practitioners fulfil legal requirements and meet society's ethical standards. In education, stakeholders including students and staff may wish to use ChatGPT in their work. For students, the permitted extent of ChatGPT use in coursework should be clearly defined in guidelines. Likewise, if there is no control over the use of ChatGPT in the office, quality becomes a big question; senior management should set out clear guidelines on the use of ChatGPT by administrative staff. Furthermore, although ChatGPT can be used to monitor students' behaviour during exams and to identify potential psychological issues, improper use may compromise students' privacy. All users should therefore comply with the data privacy ordinance.
To sum up, effective governance of the use of generative AI tools needs to be put in place. The government, or the senior management of organisations, with the help of professional bodies, should form a steering committee to define the policies, guidelines and procedures for using these productive but potentially dangerous tools. AI is so smart and powerful that its use must be properly governed: used in the wrong way, it becomes a threat; used productively, it becomes an asset. Let's make it a power for good.