AI is not the enemy here
Overseas education | Crystal Wu | 29 Sep 2020
This year, the International Baccalaureate exams have been canceled due to the pandemic. "But then, the question is: How do you now assign grades?" asked Anton Ovchinnikov, distinguished professor at the Smith School of Business in Kingston, Canada, and a visiting professor at Insead.
"So what they were thinking of doing this year was taking the predicted grades and basically adjusting them depending on how good and accurate the teachers and schools are at predicting."
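A hypothetical sketch of that kind of adjustment may help make it concrete. The function, data and bias calculation below are illustrative assumptions, not the IB's actual model:

```python
# Illustrative sketch of the adjustment Ovchinnikov describes: shift each
# student's predicted grade by the school's historical prediction bias
# (mean predicted minus mean achieved in past years). The names and numbers
# are assumptions for illustration, not the IB's actual algorithm.

def adjust_grade(predicted: int, past_predicted: list[int],
                 past_achieved: list[int]) -> int:
    """Correct a predicted grade by the school's average over-prediction."""
    bias = (sum(past_predicted) - sum(past_achieved)) / len(past_predicted)
    adjusted = round(predicted - bias)
    return max(1, min(7, adjusted))  # IB grades run from 1 to 7

# A school that historically over-predicts by one grade on average:
print(adjust_grade(7, past_predicted=[7, 6, 7, 6], past_achieved=[6, 5, 6, 5]))
```

In this toy version, a student at a school that habitually over-predicts by one grade sees a predicted 7 pulled down to 6; the clamp keeps results on the IB's 1-to-7 scale.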
However, this plan backfired, as many students received what he described as "nonsensical" grades. Coincidentally, Ovchinnikov's daughter was also an IB candidate this year.
Initially, the appeals process allowed only remarking; the algorithm itself was not appealable, prompting more than 25,000 people to sign an online petition in protest.
"About a month later, they introduced a new set of rules which superseded their existing algorithms. Those rules ended up correcting certain clearly nonsensical situations, such as the final grade being lower than all the other grades the student received," explained Ovchinnikov.
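The particular correction Ovchinnikov cites is simple to state, and a brief sketch (the function name and shape are assumptions, not the IB's published rule text) makes the logic concrete:

```python
# Illustrative sketch of the rule Ovchinnikov describes: if the algorithm's
# final grade falls below every other grade the student received, raise it
# to the lowest of those grades. Names here are assumptions for illustration.

def apply_floor_rule(algorithm_grade: int, other_grades: list[int]) -> int:
    """Never return a final grade lower than every other grade on record."""
    return max(algorithm_grade, min(other_grades))

print(apply_floor_rule(3, [5, 6, 5]))  # a nonsensical 3 is raised to 5
print(apply_floor_rule(6, [5, 6, 5]))  # a plausible 6 is left unchanged
```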
The incident prompted a debate about the decision to use artificial intelligence in academic evaluations.
"If you are not happy with AI, consider two things: if not AI, then what? What's the alternative? And is the alternative fairer?" asked Theodoros Evgeniou, professor of decision sciences and technology management at Insead.
"The alternative would have been to just use predicted grades. Given the differences between how different schools may potentially inflate grades, this would most likely not have been fair."
According to the IB, predicted top grades of seven were up 35.4 percent compared with previous years. "So in some sense, this need to adjust is justifiable," said Ovchinnikov.
"Even in a situation where the algorithm was used correctly and cases of downgrading were correct and justifiable, because of the opaqueness of the process, an almost unwillingness to share the framework of usage and not having a supporting mechanism of appeal, it resulted in a very strong reaction," said David Hardoon, senior advisor for data and artificial intelligence at UnionBank Philippines.
"This opaqueness resulted in a lack of trust because it did not elicit familiarity and trust in the system. You then see this prejudice: are you really doing the right thing? Should we be doing it at all? In the UK, there were actually protests against AI."
The trio believes that while the IB's use of the technology yielded undesirable results, the problem may not lie with artificial intelligence. "I'm not saying that we should not be pissed at AI, but of course we should be angry with AI done badly," said Evgeniou.
He explained that the problem lies with the development and deployment of artificial intelligence, which is designed by humans.
"In this case, it seems the process did not take into account transparency and appeal issues. It was done all in one big chunk, scaled up without an opportunity to iterate in a more step-by-step way. In essence, there were issues with the AI development process, not the AI itself," he said. "So if people are angry with AI, don't. Get angry with the people who developed it."
However, he also thinks that the IB should be given credit for updating its algorithm, because improving with feedback is important in developing a good AI system.
Ovchinnikov said: "The biggest challenge this year was that [the IB] needed to do something, as the option to do nothing was not there, but they did not have a data set designed or built specifically to make this prediction.
"There is nothing really wrong in principle with using AI to predict grades. If a teacher can make a prediction, so can a machine. The question is what kind of data they have access to."
Success stories of AI used at large organizations, such as banks or insurance companies, are the result of millions of dollars and years of work, Ovchinnikov said. "The IB, or even the UK A levels, may have a few data points about a lot of students from past years, but they are not even close to having the granularity of data on which a big-tech-like prediction can be made," he said.
In the case of the IB, despite the dissatisfaction of students, parents and schools, there may still be overlooked advantages. For example, with AI, unfairness is scaled up and becomes more visible. However, corrections can also be scaled up so that all students are treated fairly.
"When there was no AI, it would most likely be that every year, some students in some parts of the world would be appealing, but that would not happen at scale, even though the actual unfairness may have been still at scale. But we probably wouldn't know," explained Evgeniou.
Another argument against artificial intelligence concerns privacy, but Hardoon says such systems can be designed to work like financial and health records, where privacy is relaxed only on justifiable suspicion.
"When you're dealing with algorithms that are meant to derive some kind of insight, it is entirely possible to anonymize the information and provide it in a manner that is encrypted, neither decipherable nor understandable by a human," Hardoon explained.
"But it's something that can be leveraged and used by a machine, and only surfaced up - basically, re-identified - in the event that there is a reason for it, like suspected fraud or plagiarism in the case of schools."
"At the end of the day, these types of algorithms don't need to see the information to understand and identify patterns."
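A minimal sketch of the pattern Hardoon outlines could use keyed pseudonymization: analysis runs on opaque tokens, and the token-to-identity mapping is held separately, consulted only on justified suspicion. All names and keys below are illustrative assumptions, not any specific system he describes:

```python
# Sketch of privacy-preserving analysis on pseudonymized records: identities
# are replaced with opaque tokens before the data reaches the algorithm, and
# re-identification requires a separately held mapping. The key and IDs are
# illustrative placeholders. HMAC with a secret key prevents anyone without
# the key from recomputing tokens via a dictionary attack.
import hmac
import hashlib

SECRET_KEY = b"held-by-a-separate-custodian"  # illustrative placeholder

def pseudonymize(student_id: str) -> str:
    """Map a real identifier to an opaque, deterministic token."""
    return hmac.new(SECRET_KEY, student_id.encode(), hashlib.sha256).hexdigest()

# The analytics system sees only tokens, never names:
token = pseudonymize("candidate-0042")
mapping = {token: "candidate-0042"}  # stored separately, under access control

# Re-identification happens only via the mapping, on justified suspicion:
print(mapping[pseudonymize("candidate-0042")])  # → candidate-0042
```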
All in all, the trio are still advocates for using artificial intelligence in education. "In fact, I would be discouraged if the IB or others are discouraged from using AI because of this. If anything, to me, this is not a lesson in not doing AI but, if you are going to go about doing AI, do it properly."