The Education University is by no means the only higher education institution opening its arms to generative artificial intelligence on campus.
Neither is the University of Hong Kong alone in taking a restrictive approach to the technology wave that came to global attention after OpenAI launched ChatGPT, a controversial AI tool that can write love letters, poems and academic papers from short prompts.
EdUHK and HKU are probably two extreme examples as academic communities here and overseas try to grapple with a trend that will continue to accelerate and expand over time with more tools and bots.
Like it or not, we will have to live with it because it won't disappear.
Perhaps many people are imagining how these tools might be used rather than whether they should be used.
Even if it is decided that ChatGPT or other chatbots should not be used, they will still find their way into the academic world and human communities in general.
As they say: "If you can't beat them, join them." This may be a rather passive analogy, but it is relevant to some extent. For one, why must we "beat it" in the first place?
In higher education communities, "plagiarism" is the single word that has been mentioned the most in all discussions about ChatGPT and similar chatbots.
Since plagiarism cannot be entirely prevented, some have resorted to banning the tools from all student coursework and exam assessments.
Although HKU has adopted this response, the restriction is temporary, pending a policy to be drawn up after rigorous campus-wide discussion.
EdUHK is going in the opposite direction, embracing generative AI and encouraging students to use ChatGPT and the like, as long as they do so responsibly.
In EdUHK's case, the key word is "responsible" - the question is, how can this be achieved?
The university that trains most of the city's teachers seems overconfident that its guidance can adequately shield learning from the dark side of the technology.
Other universities in Hong Kong have been responding to the technology revolution in various ways. As they seek a route that taps the benefits these technological breakthroughs offer, they are also looking at ways of protecting academic integrity from their improper use.
In the United States, more than 6,000 academic staff from prominent universities - including Harvard, Yale and Rhode Island - have reportedly signed up to use the detector GPTZero, a program created by a Princeton University student that is claimed to detect AI-generated text quickly.
Meanwhile, some universities have also resorted to redesigning courses to put greater emphasis on oral exams, group work and handwritten papers.
Whichever way it is heading, a major shift is taking place in the way higher education is designed and delivered.
In classrooms, teachers and students are facing a tug of war between human and artificial intelligence.
It is predictable that, at the end of the day, human and artificial intelligence will have to coexist, much as we have learned to live with Covid.