Xavier Mas: “We must be able to critically evaluate the results that #AI gives us”

1 February, 2024
Dr. Xavier Mas


Bloom’s taxonomy is a key tool for enabling teachers to establish learning objectives for different topics. Since its creation by American educational psychologist Benjamin Bloom in 1956, it has been revised several times to meet the new training needs of each era. The latest revision is the one proposed by the team of specialists from the eLearning Innovation Center (eLinC) at the Universitat Oberta de Catalunya (UOC), adapting the taxonomy to the age of artificial intelligence. But what are they proposing? We talked to Dr Xavier Mas, the eLinC specialist who led the project, to find out.


What is Bloom’s taxonomy?

Bloom organized his taxonomy by classifying what are called thinking skills into a hierarchy of six categories, according to the complexity of the cognitive processes that need to be used by students during the learning process. These six thinking skills – remembering, understanding, applying, analysing, evaluating and creating – enable us to perform a wide range of tasks and activities. Since its creation in 1956, it has undergone several revisions: in 2001, by two of Bloom’s own students, Lorin Anderson and David R. Krathwohl, and in 2008, when Dr Andrew Churches developed an updated Taxonomy for the Digital Age, introducing new actions for each category or skill. Now we’ve come up with an adaptation based on the emergence of artificial intelligence.


The taxonomy hadn’t been revised in over 20 years. Why now?

With the rise of generative artificial intelligence, we saw a need to redefine the actions that would be part of each of the levels of thinking skills that Bloom had proposed in the 1950s. The taxonomy specifies actions for each of the categories. So, our idea is to review the actions that are part of each one to make them compatible with the use of AI, in a context in which this technology will be widely used and play an important role.


What are the main changes affecting the taxonomy with the emergence of generative AI?

The main change is that generative AI implies a transformation of the human-machine interaction model. Up to now, the way in which we interacted with digital systems, with computers, with machines, etc., was very direct: we gave the machine specific instructions and it returned a specific result, according to the task we had given it. With generative artificial intelligence, all this changes: first, because we relate to the machine using language that’s natural, less precise, more polysemic and much more open to interpretation than a simple command. Second, because the machine no longer just executes a specific order; it produces a result through very complex statistical processes. And each request we make can produce different, not identical, results. We’ve moved from a deterministic to a stochastic model of computation.



And what are the implications of this change for learning?

It implies that we must learn to engage in dialogue with machines, a kind of negotiation between the human and the machine, a conversation intended to reach the satisfactory result we’re looking for. We therefore need to know how to listen to the machine, give it new instructions and lead it in the right direction. This implies different skills, obviously. And these new types of skills are reflected at all levels of Bloom’s Taxonomy.


Could we say that, in a way, the machine acts as an assistant?

Exactly. It’s like an assistant; it isn’t an expert and can’t be fully trusted, but it’s competent in a wide range of tasks. You’ll have to review the information it provides to see if it corresponds to what you’ve asked for, but you’re the one responsible for the response, the one who’s requested it and given guidance on how to prepare it. So, you’ll probably have to make adjustments to work towards an answer that you think is sufficiently correct, complete and rigorous. It’s a different process, a different way of working.


Does that make things easier?

On the face of it, yes, but there is now a different level of complexity: it seems simpler because you use natural language, but in fact it’s more complex, because you have to intervene more and you have to do it with rigour and expertise. And this is what we must be able to transmit to students, because for them everything’s new. If students delegate to ChatGPT activities that involve these thinking skills, without providing expert guidance through conversation and without evaluating the result, there’s a danger that they’ll stop thinking and that the learning process will be undermined.


So, we have to apply higher-order human skills?

That’s right, and one of the faculties that humans have to preserve most with this new form of interaction is their critical capacity, which is, by the way, near the top of the taxonomy. More than ever, we must be able to critically evaluate the results that AI gives us. Why? In the first place, to see if we accept them or reject them; secondly, to see if we want to refine them and continue the dialogue, because they still don’t adequately meet our needs; and thirdly, to check for possible bias, whether explicit or more subtle.



Can’t we delegate the higher-order thinking skills that Bloom describes to AI?

Apparently – and I want to stress that word – AI does everything and gives us a result, but the result is statistical, produced by a language model, and the machine doesn’t really know what it’s saying. We can’t delegate our capacity to judge to the machine just because the result looks plausible. It may appear to have carried out thought processes, but it hasn’t actually done so, and we must do so ourselves because, remember, the result is our responsibility. When choosing the tool, it’s also important to verify that it’s based on information that’s accurate, up to date and not obsolete.
