Two months ago, I wrote that ChatGPT is a powerful "generative" artificial intelligence tool that can answer questions in a seemingly natural, conversational way, having been trained on a massive data set and fine-tuned by human AI trainers.
It garnered 100 million users within two months of its November 30 launch and attracted some 13 million visitors per day in January, according to SimilarWeb, a web analytics company.
The remarkable chatbot is the next big thing in a technology sector that harbors more than 500 generative-AI startups.
AI-powered chatbots are also the darlings of the investing world, which has collectively raised more than US$22 billion (HK$171.6 billion) for the startups behind them, according to the Economist Intelligence Unit.
The generative AI tool has recently attained passing grades at the university level, including law exams in four courses at the University of Minnesota and a business management exam at the Wharton School.
Schools and teachers are unprepared for the immediate impact of ChatGPT on many fronts, such as coping with AI-generated homework and cheating concerns in exams.
This week, I want to look at the rising fears and challenges brought about by ChatGPT for teaching and learning.
AI is not perfect and can be a threat to the education community.
According to a 2023 study by BCS, The Chartered Institute for IT in Britain, 56 percent of the 124 computer science teachers surveyed did not think their schools had a plan to manage students' use of ChatGPT.
About 33 percent said early discussions had taken place, and a further 11 percent said a plan was underway.
Around 62 percent of teachers believe ChatGPT will make it harder to mark students' work fairly, and over three quarters of teachers rated the general awareness of the capabilities of ChatGPT among their school colleagues to be "low" or "very low."
My impression is that the level of unpreparedness in schools and among teachers in Hong Kong is even worse.
Concerns have been raised about AI-powered chatbots spreading misinformation - false, biased or outdated information - because users cannot verify the authenticity of their sources.
I don't want to see schools become factories for misinformation.
Some educators have also warned that AI tools could enable widespread cheating, since chatbot-generated exam and coursework answers can earn passing results, potentially even signaling the end of traditional classroom teaching and compromising assessment methods.
A further danger is that the digital divide is only going to get wider if better-off parents can pay for premium services from powerful chatbots and achieve better grades.
A recent study by professor Dongwon Lee at Penn State University found that students who use text written by ChatGPT to complete essay assignments could be opening themselves up to plagiarism allegations, even though the chatbot's output evades traditional detection tools such as Turnitin and iThenticate because of the way it processes text.
In response, new tools are emerging to detect AI-generated text, such as GPTZero, created by Princeton student Edward Tian.
Harvard Business School professor Tsedal Neeley has warned that unchecked AI databases can be riddled with biases that, without effective oversight or regulation, could become immense.
New technology often brings unease, but generative-AI tools could revolutionize academia on many fronts.
Dr Jolly Wong is a policy fellow at the Centre for Science and Policy, University of Cambridge
(Photo caption: US elementary-school students are tasked with writing summaries about late boxing legend Muhammad Ali, then figuring out which were written by classmates and which by ChatGPT.)