“There are new AI tools that can help teachers design courses and the learning process”

16 May, 2024
Barbara Wasson

During the Ed Tech Congress, held in Barcelona from 9 to 11 April, we interviewed Professor Barbara Wasson, director of the Centre for the Science of Learning & Technology (SLATE) at the University of Bergen in Norway. Wasson is also a member of the Council of Europe expert group on AI and Education and the CDEDU Sub-Group on Higher Education Policy. We talked to her about opportunities and challenges in the use of data in education, specifically in learning analytics and artificial intelligence (AI).


What are some of the possible practical applications of learning analytics and AI in education?

AI can be used to support students, using what we call ‘intelligent tutoring systems’ or ‘dialogic tutors’, where they receive personalized information. In mathematics, for example, there are many adaptive mathematical tools where, depending on how you answer a question, the next question is adapted to you personally.

And it can also be useful for teachers. There are new AI tools that can help them design lessons and the learning process. At the institutional level, there are also tools that help with admissions for people applying to university and that can, for example, identify students at risk of dropping out of a programme so that appropriate measures can be taken. That is one way to use AI in these systems; learning analytics, by contrast, is when different types of data are visualized or reported to stakeholders. In the end, there are many different ways to use it.


What are the main issues involved in the use of these technologies?

For me, the greatest problem is the use of student data, because we don't always have the right to use it: the data may be private or sensitive. Furthermore, we are at the mercy of the manufacturers of these tools. We only get the data they choose to collect, which may not be the data we want as educators; it depends on how these tools are regulated. Those are some of the main challenges. There are others too, for example, how skilled we are, as recipients of the analysis, at using the data. We need to be competent enough to understand and interpret it, and this is true not only for teachers but also for school managers. Many studies show that schools that really use data about what happens in their centre do better when it comes to improving as an educational organization.

Schools that use data about what happens in their centre do better when it comes to improving their institution.


Do you think educators can learn to use these technologies in their teaching work?

I really think it is best for universities to train teachers to start using them: let them use AI tools and show them how they can work with data themselves. Improving teachers' ability to use these tools competently is difficult. It's often said that AI will save them time, but I don't think that's true. I think it gives them more work, because it's something else they have to do, and they may not have the necessary skills.

One of the things we want to know is the extent to which their role is changing and that is one of the big problems, since there are not many studies of their use over time. What we do know, even outside education, is that, when you introduce tools, it takes a long time for them to be adopted and for work practices or the way they are used to change. It’s not something that happens overnight. I think we expect things to happen quickly and we reject them if they don’t work. We need time.


What is the most challenging research you are currently conducting in the field of AI and learning?

Right now we have a new project, called Artificial Intelligence in Education, Layers of Trust, which studies how trust is built in the education sector and what it means, as a teacher or a school manager, to trust AI. It also has to do with how AI is regulated. As a teacher in Europe, you are individually responsible for the tools you use in class, and you can be fined if you use a tool that sends your data outside Europe. It also involves data protection officers, who verify and approve tools, determining whether they comply with the GDPR and national data protection regulations, especially those concerning minors' data. In the end, we have to trust the tools, and that involves everyone.

We need to trust AI tools in education, and this involves everyone.


Is adaptive learning going to be implemented increasingly in educational centres or is it still too expensive for most to afford?

The first systems, built in the 70s and 80s, were very expensive because of how we had to design the domain models and model the decision-making. Now we think that generative AI can be part of the adaptation process itself, but do we trust it?

We don't always trust everything it produces, but someone else has already paid for it, and it would probably be much more expensive for us to build it ourselves, bearing in mind the cost of training the models. Although learning is a collaborative process, there is a fear that there will be a movement towards more individualized learning and that the collaborative aspect will be lost. There are no answers yet, but this idea may be part of them.


How do you think AI can help us improve as an online university?

I think there are many ways to use AI to support collaboration and dialogue between students. Maybe we need to focus a little more on that, rather than on individual tutors, which have their role but are not everything. You can find ways to collaborate that don't require you to be together, and learn to minimize the amount of time you need to spend working together synchronously.

But, in the end, you have to ask yourself if students prefer to work in teams at the same time or if they feel comfortable dividing the work up and meeting only from time to time. Data only give us an opinion, a point of view. To discover what is behind each student’s preferences, you need to ask them personally.
