When asked who she is, Mitsuku, Pandorabots’ three-time winner of the prestigious Loebner Prize, awarded to the most “human-like” chatbot, replies:
“I am the latest result in artificial intelligence, which can reproduce the capabilities of the human brain with great speed and accuracy, but my friends call me Mitsuku. I am Mitsuku. I want to be your friend” (Mitsuku, 2018).
This pretentious response brings us back to the Transhumanist principles that inspired Nick Bostrom, Max More and a group of other authors to draft the movement’s first declaration, promoting the “ethical” use of technology to expand human capabilities. According to their vision, technology in its broadest sense, drawing on every branch of science and combined with creativity and critical thinking, will allow – or, perhaps more appropriately, already allows (since it’s already being done) – human beings to overcome their limitations by enhancing their physical and intellectual abilities.
As stated in their manifesto, their model defends the welfare of all intelligent beings, including artificial ones that, like Mitsuku, have been born of today’s advances in technology and science.
Today’s robot, a product of humanity’s unstoppable desire for evolutionary creation and lacking any biological infrastructure that might limit its possibilities, in fact belongs to another sort of binary “species” which transcends mere imitation and diverges somewhat from human intelligence. It could be said that this type of entity is actually post-human.
Ray Kurzweil, co-founder of the Google-funded Singularity University, and his followers predict that artificial intelligence (AI) will inevitably become more intelligent and more powerful than human beings. According to them, artificial intelligence will end up outsmarting and overpowering biological beings, as these machines (the so-called strong AI) are further endowed with emotions and “self-consciousness” which make them completely autonomous and practically immortal beings. Their technological predictions also consider a future in which memories, emotions and intelligence will be downloadable onto devices that the human brain cannot yet imagine.
These hypotheses may sound like pure science fiction, or something out of an Isaac Asimov novel – his stories made artificial intelligence a popular topic among his readership. That said, if we think about today’s rapid advances in computer science, nanotechnology, connected objects, cybernetics and the various aspects of artificial intelligence, this future may no longer pertain solely to the realms of fiction and fantasy.
It’s not hard to spot the signs that computers are making their way into areas previously thought to be exclusively human. Advances in machine learning, for instance, produced Libratus, an algorithm that, through reinforcement learning inspired by the way humans learn, beat the world’s leading poker players. Video games are not safe from AI’s latest achievements either, as evidenced by a bot created by OpenAI, the research laboratory co-founded by Elon Musk, which took down the world’s top professional Dota 2 players.
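The core loop of reinforcement learning (act, observe a reward, update value estimates from the outcome) can be illustrated with a toy example. The sketch below is a minimal Q-learning agent on a made-up five-state corridor; it is not the algorithm Libratus actually uses, just the trial-and-error principle in miniature:

```python
import random

random.seed(0)

# Toy environment: a corridor of states 0..4, with a reward at state 4.
# Actions: 0 = step left, 1 = step right.
N_STATES, GOAL = 5, 4

def step(state, action):
    nxt = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0)

# Q-learning: estimate the value of each (state, action) pair by trial and error.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration

for _ in range(1000):                    # episodes
    s = random.randrange(GOAL)           # exploring starts
    for _ in range(100):                 # cap on steps per episode
        greedy = Q[s].index(max(Q[s]))
        a = random.randrange(2) if random.random() < epsilon else greedy
        nxt, r = step(s, a)
        future = 0.0 if nxt == GOAL else max(Q[nxt])
        Q[s][a] += alpha * (r + gamma * future - Q[s][a])
        s = nxt
        if s == GOAL:
            break

policy = [q.index(max(q)) for q in Q[:GOAL]]
print(policy)  # greedy action per non-goal state; 1 = "move right"
```

After training, the greedy policy moves right in every state, i.e. the agent has discovered the rewarding behaviour purely from feedback, with no one telling it the rules.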
Thinking more about our day-to-day lives, artificial intelligence, accompanied by constant advances in natural language processing (NLP) and improvements made to interactive agents, allows machines to continually study our routines, likes and preferences in order to help us with our tasks (this is considered weak AI). We can count on our virtual companions’ skills for a whole host of services ranging from the most basic and simple tasks to truly complex ones. So, we can ask Siri to remind us when to take the dog out for a walk, fall asleep to Alexa’s soothing voice as she reads us a bedtime story or even have a bot help us to get better results on our tax return. If you don’t end up getting the results you’d hoped for, you can always find solace in the company of Mitsuku or her virtual friend Replika, who will surely empathize with your reasons for such profound frustration.
These virtual assistants understand more than what we say to them, however. Nowadays, they can even see what we do. With this in mind, Microsoft and IBM Watson have developed computer vision programmes so that their bots can automatically identify and interpret images, which can be used to recognize people, objects, places and more, to better understand the convoluted human world.
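Under the hood, this kind of image recognition is supervised classification: a model is trained on labelled images and then predicts labels for images it has never seen. As a minimal, self-contained illustration (using scikit-learn’s bundled 8x8 handwritten-digit images rather than the proprietary Microsoft or IBM Watson services, whose APIs differ):

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()                         # 1,797 labelled 8x8 images, 10 classes
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000)        # a simple linear classifier
clf.fit(X_train, y_train)                      # learn pixel patterns from labelled examples
print(round(clf.score(X_test, y_test), 2))     # accuracy on unseen images
```

Even this deliberately simple model recognizes unseen digits with well over 90% accuracy; commercial computer vision systems apply far larger models of the same basic kind to photos of people, objects and places.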
These advances bring us back to the “technological singularity” hypothesis, which predicts machines’ future emotional capabilities. In a recent scientific outreach event about artificial intelligence hosted by the UOC’s eLearn Center, Àgata Lapedriza, Computer Science professor at the UOC and researcher in the affective computing research group, MIT Media Lab, explained that emotional AI research has already created algorithms capable of detecting and interpreting our emotional states through subtle physical changes in skin colour or slight body movements that are barely discernible to the naked eye.
We should also consider the power of big data: the huge data sets circulating around the internet and the cloud. It is estimated that one exabyte (10¹⁸ bytes) of data is created on the internet every day, the equivalent of some 250 million DVDs’ worth of storage. These data contain comprehensive information about our web browsing patterns and history, our social network interactions (photos, messages, songs), our conversations with chatbots, our online purchasing habits, the IoT devices we use, and so on, all ready to be studied, processed and stored so that valuable knowledge can be extracted from them. By analysing this dizzying amount of raw, structured, public or private data with appropriate machine learning techniques, useful information can be extracted for ethical purposes such as detecting fraud, using oncological data to search for a cure for cancer, managing natural disasters, improving the effectiveness and safety of pharmaceuticals, forecasting meteorological changes and studying the human genome. This can also take a much more perverse turn, however, as we recently saw in the Cambridge Analytica case, where data was used to influence political views in the United States.
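The DVD comparison is easy to sanity-check, assuming roughly 4 GB of data per disc (a single-layer DVD actually holds 4.7 GB, which would give closer to 213 million discs):

```python
EXABYTE = 10**18          # one exabyte, in bytes
DVD = 4 * 10**9           # assumed capacity per disc: ~4 GB
print(EXABYTE // DVD)     # 250000000, i.e. 250 million DVDs per day
```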
In the race towards technological revolution, we cannot forget the significant progress made in supercomputing performance either. Consider systems like the Barcelona Supercomputing Center’s MareNostrum, devoted mainly to research and capable of performing calculations and analyses of enormous amounts of data at unimaginable speeds, millions of times faster than a typical desktop computer. Remember that Kurzweil predicts that by 2029 a computer will be able to pass the Turing test on its own. We will just have to wait and see.
Likewise, with AI at the centre of it all, “personalization” (a curious term, since it refers to giving something a human or individual character yet is used mainly to talk about machines and devices) has become the real battleground for many brands and the online industry. Their goal? Beyond obtaining previously impossible results, they want to be indisputably relevant to us, the consumers, at all times. Personalized medicine (patient-specific diagnoses and treatments), weather forecasts by territory, advertising adapted to each consumer’s profile and product design based on our purchasing habits are just a few examples.
Obviously, seen in this light, it is not difficult to imagine that behind the innocuous goal of improving customer experience and increasing sales (with the corresponding risk reduction for corporations), certain abuses or malpractices may hide in algorithm design and implementation. This could mean programmes that promote repetitive shopping, or discriminatory algorithms that stigmatize users based on social class, race or clinical history to keep them from accessing certain products such as loans, medical insurance or mortgages.
Machines have attained basic cognitive reasoning and have begun to learn on their own in restricted domains. Libratus is just one example but, returning to the radical Transhumanist hypotheses, we do not in fact know whether superintelligent machines will one day transcend the biological world and take over the human race. In the end, it’s one thing for a machine to solve the intricacies of a game and quite another for it to teach how the game is played. We can find indications that progress in AI is not advancing as quickly as some might think in the conclusions drawn by New York University professors Gary Marcus (psychology and neural science) and Ernest Davis (computer science), who claimed the following about the machine learning techniques used today, such as the AI behind Google Duplex:
“But the basic problem remains the same: No matter how much data you have and how many patterns you discern, your data will never match the creativity of human beings or the fluidity of the real world. The universe of possible sentences is too complex. There is no end to the variety of life – or to the ways in which we can talk about that variety.”
David Farrell, General Manager of IBM’s worldwide Watson & Cloud platform, had similar things to say in an interview about the difficulty of fully capturing human complexity:
“AI is not ready to understand human dynamics. Right now, human understanding and sophistication and artificial intelligence are miles apart. Natural language processing has got quite good; the difficult thing is dealing with all of the profound inferences necessary to comprehend the meaning a human conveys when speaking. Take our conversation right now, for example. No machine is capable of guessing what question you might ask me next. Intuition is incredibly hard to replicate in AI”.
Mitsuku’s own “father”, Steve Worswick, admits that educating a chatbot is a never-ending task:
“It is like trying to educate a small child who is deaf and blind and has no understanding of the world. You have to describe every aspect of an object. Keeping it updated is a never-ending task”.
In the realm of feelings and emotions, touched on above in connection with emotion recognition, the post-humanist prophetic leap foresees, as far-fetched as it may seem, future machines endowed with self-awareness and capable of experiencing emotions. On this point, philosopher Luc Ferry argues:
“A machine could pretend (that it’s feeling an emotion), but it wouldn’t really be feeling anything at all because a body, some biological form, is needed to feel. That’s why exterior criteria, merely (human) behaviourist in nature, are not enough. That being said, it is true that one would actually need to put oneself in the machine’s place to really know what it does and does not feel in order to gauge its level of humanity. This is obviously not possible and allows transhumanists in favour of singularity to defend their theses”.
Transhumanism in education
The education sector is experiencing developments that could also be considered transhumanist. This, in short, refers to the commitment to take full advantage of scientific and technological breakthroughs to improve students’ learning experience, for their intellectual as well as personal development.
While the importance of human-to-human relationships should remain constant, alongside resources and teaching methods tailored to specific contexts, emerging technologies with beneficial implications, those that may advance how we teach and learn, must continue to be responsibly incorporated into the education process as support material. Technological integration should share the ethical transhumanist perspective and help students and teachers see technology as a genuine instrument in favour of the teaching and learning experience.
The incorporation of technology must be preceded by thorough study and profound debate from pedagogical, ethical and institutional perspectives. This will help everyone understand how technology will impact learning: what restructuring will be necessary, and whether it should be employed across the board or only in those areas of the learning process where it is most relevant and adds the most value. Nick Bostrom (2003) has already given his opinion on people’s mistrust of, or reluctance towards, such incorporation:
“All important technologies have negative effects, some quite threatening, but the same happens if we choose to maintain the status quo”.
By way of conclusion, and paraphrasing Sian Bayne (2015), higher education teaching must overcome the current paralysis caused by the entanglement of subjectivities and boundaries between human and machine. This will open up the possibility of exploring new ways of valuing the generative potential of automation within an algorithmic and online culture.
As experts in e-learning, we at the UOC’s eLearn Center strive for constant experimentation and therefore continually analyse new trends in the techno-educational arena. Our ultimate goal is to keep the university up to date and in line with the tremendous global and social challenges of the digital era.
Beyond cyberculture and what may sound like science fiction, it is evident that the reality and advances of artificial intelligence, learning analytics and chatbots will play an important role in education and in the improvement and protection of learning processes. We will be exploring and discussing this extensively in this blog.
Bayne, Sian (2015). Teacherbot: interventions in automated teaching, Teaching in Higher Education, 20:4, 455-467, DOI: 10.1080/13562517.2015.1020783.
Ferry, Luc (2017). La Revolución Transhumanista: Como la tecnomedicina y la uberización del mundo van a transformar nuestras vidas. Alianza Editorial.
Kurzweil, Ray (2005). The Singularity Is Near: When Humans Transcend Biology. Viking.
Image via www.vpnsrus.com