Artificial intelligence: exclamation and question marks

7 October, 2022

The end of April saw Barcelona host the EdTech Congress which, under the title “Education technology at the service of learning”, offered an opportunity to share knowledge and tools for spearheading digital strategy, a race in which artificial intelligence (AI) already holds a leading position.


The EdTech Congress talks on AI in educational environments were inspirational and intense, much like an episode of Love, Death & Robots or Black Mirror. Cleverly entitled “Hi, AI!” and “Hi, AI?” (the exclamation mark welcoming, the question mark sowing doubt), the sessions focused on the impact of this technology. They were led by Marta García-Matos (Cosmocaixa Education Projects), Marià Cano Santos (holder of the Chair in Secondary Mathematics at the Ministry of Education of the Government of Catalonia), Toni Hernández (teacher at the Polytechnic University of Catalonia) and Jorge Calvo (teacher at the Colegio Europeo de Madrid), who shared some of his practical classroom experiences.


That nobody should be surprised at AI being a technology that is already a feature of our lives was one of the thoughts that speaker Marià Cano shared with us. The fact is that, from the moment we get up in the morning, there is an algorithm floating around us suggesting what music to listen to, what news to read, what places to visit, what relationship to have, how to improve our English, what steps to take to stay healthy, what to invest our money in and what time we should go back to bed. It could be said that AI helps us make better decisions than ever, but will these algorithms spell the end of intuition?


Marta García-Matos stressed that we should not forget that AI is designed by human beings to address concrete needs, and that it acts, broadly speaking, by following these steps:


  • Collecting data from the surrounding environment.
  • Interpreting these data to form a pattern.
  • Processing said information (reasoning, making decisions, etc.).
  • Providing a solution.
  • Analysing the outcome and adapting it in the future, much like a learning process.
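The steps above can be sketched as a minimal feedback loop. The following is a purely illustrative toy agent (not any real system), in which every name and threshold is a hypothetical stand-in:

```python
import random

class SimpleAgent:
    """Toy illustration of the collect-interpret-decide-adapt loop."""

    def __init__(self):
        self.threshold = 0.5  # decision parameter the agent adapts over time

    def collect(self):
        # 1. Collect data from the surrounding environment (simulated here).
        return random.random()

    def interpret(self, observation):
        # 2. Interpret the data to form a pattern (here: above/below threshold).
        return observation > self.threshold

    def decide(self, pattern):
        # 3. Process the information and 4. provide a solution.
        return "act" if pattern else "wait"

    def adapt(self, observation, outcome):
        # 5. Analyse the outcome and adjust future behaviour, like learning.
        if outcome == "act":
            self.threshold = 0.9 * self.threshold + 0.1 * observation

agent = SimpleAgent()
for _ in range(100):
    obs = agent.collect()
    decision = agent.decide(agent.interpret(obs))
    agent.adapt(obs, decision)
```

Real systems replace each of these toy steps with sensors, statistical models and optimization, but the loop structure is the same.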


Artificial intelligence systems have been designed in response to a range of different educational requirements. Consider, for example, mobile adaptive learning apps, which adjust automatically to our progress, and which are widely used for learning foreign languages, musical instruments and maths.
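In its simplest form, the "adaptive" part of such an app can be reduced to a rule that raises or lowers exercise difficulty based on the learner's answers. A hypothetical sketch (not any specific app's logic):

```python
# Toy adaptive-difficulty rule: difficulty is an integer from 1 (easiest)
# to 10 (hardest) and moves one step per answer. (Hypothetical rule only.)

def next_difficulty(current, correct):
    """Raise difficulty after a correct answer, lower it after a mistake."""
    step = 1 if correct else -1
    return min(10, max(1, current + step))

level = 5
for correct in [True, True, False, True]:
    level = next_difficulty(level, correct)
print(level)  # → 7
```

Production apps use far richer learner models, but the principle of adjusting the next exercise to the last response is the same.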


AI is also applied in the field of learning analytics, analysing the digital footprint left by students so that learning processes, resources and activities can subsequently be adapted, thereby providing a personalized, unique learning experience. Tokyo’s secondary schools use advanced tutoring tools based on data analysis, such as Squirrel AI, whose algorithms can precisely diagnose each student’s learning problems. How? By gathering, analysing and processing all available information, then automatically offering a solution and adapting the curriculum. The key to the algorithm lies in dividing each content element into tiny conceptual portions that act as checkpoints.
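The checkpoint idea can be illustrated with a small sketch: content is split into micro-concepts, and the learner's answers per concept reveal where the gaps are. This is illustrative only (Squirrel AI's actual algorithms are proprietary, and the concept names below are invented):

```python
# Toy checkpoint-style diagnosis: each key maps a micro-concept to a list
# of past answers (1 = correct, 0 = wrong). Concepts whose success rate
# falls below a mastery cutoff are flagged for review, weakest first.

def diagnose(answers_by_concept, mastery_cutoff=0.7):
    """Return (concept, success_rate) pairs below the cutoff, weakest first."""
    weak = []
    for concept, answers in answers_by_concept.items():
        rate = sum(answers) / len(answers)
        if rate < mastery_cutoff:
            weak.append((concept, rate))
    return sorted(weak, key=lambda pair: pair[1])

history = {
    "fractions: common denominator": [1, 1, 0, 1],
    "fractions: simplification":     [0, 0, 1, 0],
    "decimals: rounding":            [1, 1, 1, 1],
}
print(diagnose(history))  # → [('fractions: simplification', 0.25)]
```

The system would then schedule exercises that revisit the weakest checkpoints before moving on.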


So what about creativity? Can artificial intelligence be original? Can it spark inspiration?

The non-profit organization OpenAI, whose founders include Elon Musk and whose mission is “to ensure that artificial intelligence benefits all of humanity”, heads the DALL·E project, a system capable of creating realistic images from a text description, with an endless number of possible variations for the same prompt. There is a long waiting list for testing DALL·E 2, which is not yet accessible to the public, but its website provides some examples of its work. A smaller, similar open-source project, DALL·E mini, can already be tried out, although the quality and composition of its images are less realistic. Google did not want to stand idly by either, and has developed Imagen, a project based on the same concept.


Speaking of art and creativity, one of the experiences related by Jorge Calvo, which he himself had carried out in the classroom, centred on developing and testing an app that “interprets” the emotions displayed on the faces of the subjects of iconic paintings from the history of art, such as Leonardo da Vinci’s Mona Lisa: undoubtedly a different and original way of making the history of art approachable to students. He also described a mathematics activity for secondary school students, consisting in analysing content-based recommendation systems, one of the key algorithms Netflix uses to suggest series that fit our preferences.
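A content-based recommender of the kind that classroom activity analyses can be sketched in a few lines: each series is described by a feature vector (for example, genre weights), and we recommend the items most similar to one the viewer already liked. This is an illustrative sketch with invented titles and features, not Netflix's actual system:

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

catalogue = {                      # features: [sci-fi, drama, comedy]
    "Series A": [1.0, 0.2, 0.0],
    "Series B": [0.9, 0.1, 0.1],
    "Series C": [0.0, 0.3, 1.0],
}

def recommend(liked, n=1):
    """Return the n catalogue items most similar to a liked one."""
    profile = catalogue[liked]
    scores = {title: cosine(profile, vec)
              for title, vec in catalogue.items() if title != liked}
    return sorted(scores, key=scores.get, reverse=True)[:n]

print(recommend("Series A"))  # → ['Series B']
```

Because it compares item features rather than other users' behaviour, this approach can recommend even brand-new titles that nobody has watched yet, which is one reason it is a good subject for a maths class on algorithms.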


And now, it’s time to delve into the fascinating plot of our very own episode of Black Mirror to ask…


How close does AI come, at times, to crossing ethical boundaries?

In the afternoon’s “Hi, AI?” session, the discussion focused on the ethics of some AI-related experiences already taking place. For example, we viewed part of a video report by The Wall Street Journal on hardware and software used in a Chinese primary school: a headband that monitors every student during class hours and offers real-time information on their attention levels. If the headband’s light is red, the student is highly focused; if it is blue, they are distracted; and so on. It measures this using three electrodes, two behind the ears and one on the forehead, which pick up the electrical signals sent by neurons in the brain. In addition to being shown on the headband via the colour-coded lights, all these neuronal data are sent to a computer on the teacher’s desk and compiled into reports detailing each student’s attention levels every ten minutes; the reports are shared with parents daily by mobile phone. Is it necessary or efficient to subject students to such controls? Is this technology trustworthy? Some experts say absolutely not, because the electrodes are not completely reliable: the signal they pick up varies if, for example, the subject feels nervous or anxious.
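To make the data flow concrete, here is a hedged sketch of how a device like the one in the report might map an attention score to a colour code and aggregate a ten-minute report. The thresholds, the intermediate "white" band and the 0-100 score scale are all assumptions; the real device's mapping is not public:

```python
# Hypothetical mapping from an attention score (0-100) to the headband's
# light colour, plus a simple ten-minute report aggregator.

def colour_for(score):
    """Map a 0-100 attention score to a light colour (assumed thresholds)."""
    if score >= 70:
        return "red"     # highly focused
    if score >= 40:
        return "white"   # assumed intermediate band
    return "blue"        # distracted

def ten_minute_report(scores):
    """Summarize a window of scores as the report sent to the teacher."""
    avg = sum(scores) / len(scores)
    return {"average": round(avg, 1), "colour": colour_for(avg)}

print(ten_minute_report([80, 75, 60, 90]))  # → {'average': 76.2, 'colour': 'red'}
```

Seen this way, the ethical question becomes clearer: a crude threshold on a noisy signal ends up labelling a child "distracted" in a report that parents receive every day.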


As you can see, AI has great potential and is already having an impact on educational processes, but it is up to us to ensure that it does not become a threat. Implementing ethical codes is a crucial requirement to prevent situations in which students, and people in general, feel exposed to a tremendously invasive system.



About the author
Education and instructional design consultant at the Learning Design Consultancy Unit of the UOC's eLearning Innovation Center (eLinC). Holder of a master's degree in Digitally Mediated Learning Environments from the University of Barcelona. Holder of a higher national certificate in Computer Application Design and Development. Primary education teacher specializing in music education and learning and knowledge technologies (LKT).