Aree Moon, President of WISET
“How are we supposed to do assignments without AI?”
For many students today, artificial intelligence is no longer just a fancy search tool. It has become the most natural way to find and shape information. One survey by a private education company found that 96.6% of high school students use AI for performance-based assessments. After the heated debate over “AI cheating” on university campuses, it is perhaps not surprising that a large-scale cheating incident recently occurred at a high school as well. But this problem will not be solved by tighter monitoring and harsher punishment alone. We have reached a point where the goals of education, and the methods we use to evaluate students, must be redesigned for the AI era.
At the heart of learning is not the “right answer,” but the process. The slow work of finding sources, comparing them, doubting them, and checking them is what builds real thinking skills. Psychologist Robert Bjork called this kind of effort “desirable difficulties.” Neuroscience also tells us that deep thinking grows when the brain strengthens connections through repetition and exploration. Yet AI can produce a clean, polished result in a second, and that speed can tempt students to skip the very process that helps them grow.
An even bigger risk appears when students lack sufficient basic knowledge. In that case, they may accept AI’s “convincing mistakes” without noticing. Universities are already seeing more cases of students submitting papers that cite fake references fabricated by ChatGPT. Overseas, there have been reports of people being put at risk after following incorrect advice from medical chatbots. Without a solid base of knowledge, people simply do not have the ability to judge whether an AI answer is correct. Ironically, the smarter AI becomes, the more important human basic knowledge becomes. So what we need now is not a blanket ban, but clear standards and a new direction.
In that sense, the Ministry of Education’s recent announcement—about developing AI guidelines and ethics education content—is late but still welcome. Rather than just laying out rules, we need to rethink learning and evaluation themselves, and rebuild them on the assumption that AI is part of the classroom. The key is to place AI where it belongs: not as a shortcut to finish work, but as a tool to raise learning efficiency and expand thinking.
First, building basic knowledge must remain non-negotiable. Students need to truly absorb basic concepts so they can understand and verify what AI gives them. If you do not know chemical symbols, you cannot correctly understand the ingredients of a medicine. If you do not know the structure of the human body, you cannot read medical information properly. That is why, at the basic stage, schools should strengthen evaluations that directly check knowledge and understanding, such as written tests, face-to-face assessments, or oral exams.
Second, at the application stage, AI should be used actively and thoughtfully. Instead of copying AI’s output, students could be asked to find logical holes in an AI-written text, or to choose the best option among several AI-generated solutions and explain why. This pushes students beyond simply “knowing how to use AI.” It helps them build real AI literacy: understanding how AI works, where it fails, what risks it carries, and when it should or should not be used.
In the AI era, the most important skill is “human supervision”: the ability to review AI’s answers critically and take responsibility for the final judgment. And that skill stands only on strong basic knowledge. In the end, what matters is whether students gain the wisdom to decide what to hand over to machines and what must remain in the human mind. If we can move beyond the temptation of AI cheating and use AI as a learning partner that expands thinking, then we can finally enter a true era of “AI learning.”
☞ The original Korean version is available at the link below. (The Financial News: https://n.news.naver.com/mnews/article/014/0005445940?sid=004)