Artificial intelligence makes it possible to generate images, videos, music and texts. It has potential that can be put to positive use in schools, but also a dangerous side that carries risks: fraud and manipulation, threats to privacy, and a long list of negative consequences that teachers need to be aware of. This was the theme of the XVIII Teachers’ Conference organized by the Schola Foundation in Valladolid, with the help of Guillermo Cánovas, teacher, writer and director of EducaLIKE, the Observatory for the Healthy Use of Technology.
Cánovas warned of “the need to work on this issue in schools, not only with families and teachers” but also “with the students themselves, because even if we do not consider it advisable for them to use certain artificial intelligence tools, we do need to train them to avoid dangerous or unhealthy uses.” According to a study carried out across Spain, 80% of students from the third year of ESO onwards and 95% of baccalaureate students (one hundred percent in some cases) already use AI. For this expert, in the case of text-generating AI, “we must distinguish very clearly the arguments for from the arguments against,” because certain tools are of interest for “the personalization of teaching they allow,” for self-assessment, or for “the possibility of obtaining explanations different from those given by the teacher in class,” as well as chatbots, “which I find very interesting for working with adolescents.”
But there is also the other side of the coin: the risks that come with it. “First of all, the possibility that generative artificial intelligence does not transmit truthful information,” due, for example, to bias, because “these tools were trained on information existing on the Internet, and on the Internet there is also biased information,” said Cánovas, winner of the UNICEF Children’s Prize in 2013.
According to him, we must also take into account the effect AI can have on student learning. Cánovas draws an analogy with calculators: “We believe students should use calculators from a certain age, but until that age, what they must learn is to multiply and divide by their own means. Just as it would not occur to us to give a calculator to a child who is learning the multiplication tables, we should not put into the hands of an adolescent who is still developing executive functions a tool that does those functions for him.”
Cánovas, who detects interest and concern among teachers but who, according to him, “are not trained” in the matter, insists on the relevance of training given that more and more students, and mothers and fathers as well, use AI for homework and academic work. In this sense, Cánovas highlights the usefulness of such training so that teachers can detect texts and content presented by students but generated by AI, distinguish truthful information from false information, and know how to detect the cloning of a person’s image and voice, a risk in cases of harassment, for example.
He also added an element of concern: “There are tools to detect these deceptions, but artificial intelligence continues to progress, so the errors that now allow us to clearly identify these images will likely be corrected, making them increasingly difficult to detect.” According to him, another problem lies in the lack of legislation in most countries: European legislation will not come fully into force until 2026. “We started the house from the roof: we created the tools, we distributed them, and only now are we thinking about the rules we give ourselves for their use; it will take us time to adapt legislation to these realities, which, moreover, will not stop evolving,” even as AI “has potential and possibilities that no tool has had until now.”
For Cánovas, it is advisable to provide guidelines on the privacy threats that can arise, but also on the “hallucinations” AI can produce: when it does not know how to solve something, it makes an answer up. Students must be warned about possible manipulation, about dependence on the technology, and about an issue of particular concern to this expert: that AI presents itself in conversation with human characteristics, saying “I understand you” or “I support you…”, something that, in his view, should not be allowed. Students should be told, according to Cánovas, to verify and expand information with more sources, not to provide personal data, to set schedules for use, to remember that it is only a tool and not a person, and to tell adults “if it hurts you.”