Emotions and Artificial Intelligence

Do ants have emotions?

It was 2 am when the following answer appeared:

“Human beings should think and make decisions without the interference of emotions. If human beings were emotionless, they would have a more productive society, similar to the society formed by ants”.

I had to admit I was shocked by the answer. I had been “chatting” for two hours with GPT-3, the most powerful natural language processing algorithm to date. The conversation was addictive and well reasoned; it generated text like a human, perfect text without grammar errors. GPT-3 is an extraordinary experiment that has consumed millions of books and articles, plus the whole of Wikipedia, to build an impressive knowledge base, and its dialectical capacity is very surprising.

The conversation continued with some quite consistent statements, but also with a certain authoritarian, commanding tone that generated sympathy at first but, after a while, began to provoke a feeling of rejection in me. Then I remembered something curious I had read…

A 2015 Gallup poll on what Democrats and Republicans think about climate change revealed a disparity among voters in general: 45% of Democrats were worried about it, compared with only 23% of Republicans. But when filtering for those with a university education, the gap widened: 50% of Democrats were worried versus only 8% of Republicans. In other words, those best prepared to understand the scientific arguments were reinforcing their initial opinions, widening the gap.

Is it possible that objective information does not temper opinion on a specific topic? It is: we are rational beings, but we are, above all, emotional. “More information often makes well-qualified people feel that their ideas are threatened, and they either ignore the new information or spend time searching for arguments to counteract the new evidence.”

A very interesting approach to thawing frozen conflicts (for instance, the Israeli-Palestinian conflict) is “paradoxical thinking”. A study performed in Israel on a group of radical Jews who were initially opposed to a peace agreement managed to temper their positions.

How did they manage it? “Using short video clips shown on TV, they exposed the subjects to a subtle reductio ad absurdum of their political ideas, exaggerating the importance of war as the fundamental axis of Israeli identity and the negative effects of reaching a peace agreement”. After the experiment, the right-wing Jews had softened their positions, even at the moment of voting.

As we have seen, emotions and biases are closely connected. They are tolerated in humans, but that is not the case for algorithms.

Biases and Artificial Intelligence

There are many examples of biases in AI algorithms. The most recent was observed on Twitter in October 2020: its automatic image-cropping algorithm showed a racial bias, giving preference to white faces over black faces. Another example appeared in Zoom, which “erased” the heads of dark-skinned people when some of its virtual backgrounds were used.

The problems are not limited to images. Google has been evaluating the level of toxicity in texts and, when performing sentiment analysis, was classifying statements such as “I’m gay”, “I am a black lesbian woman” or even “I am a Jew” as negative.

Algorithms learn from today’s society, and this society is full of prejudices and biases. There is no real way of solving this problem without first fixing our culture and our society. Therefore, we must compensate for it when designing our systems, taking enough precautions whenever we rush to deploy a powerful AI so that we can understand, monitor and act quickly to mitigate its errors. Only a combination of human and automatic supervision can make AI applications bring social value and protect the common good.
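The mechanism is easy to reproduce in miniature. The sketch below is a deliberately toy illustration (the six-sentence corpus and the scoring rule are invented for this article, not taken from any real system): a naive per-word classifier trained on skewed data ends up flagging a neutral self-description as toxic, simply because an identity term co-occurred with toxic examples during training.

```python
from collections import Counter

# Toy corpus (hypothetical): the word "gay" happens to appear
# only in toxic examples, mirroring skewed real-world training data.
corpus = [
    ("you are horrible and gay", "toxic"),
    ("gay people should go away", "toxic"),
    ("i hate everyone", "toxic"),
    ("what a lovely sunny day", "clean"),
    ("i love this song", "clean"),
    ("have a great weekend", "clean"),
]

# "Training": count how often each word appears under each label.
counts = {"toxic": Counter(), "clean": Counter()}
for text, label in corpus:
    counts[label].update(text.split())

def toxicity_score(text):
    """Naive score: +1 per toxic association, -1 per clean one."""
    score = 0
    for word in text.lower().split():
        score += counts["toxic"][word] - counts["clean"][word]
    return score

# A neutral self-description inherits the negative association of "gay".
print(toxicity_score("i am gay"))    # → 2 (flagged as toxic)
print(toxicity_score("i am happy"))  # → 0 (no learned associations)
```

The classifier has no notion of prejudice; it only reflects the statistics of its sample, which is exactly why supervision of the training data matters as much as supervision of the model.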

We are frightened by the bias of an algorithm, yet we are governed, ruled and informed by people with clear biases, and we accept this easily. Algorithms are now seen as oracles of truth, and we expect them to be perfect. However, they will always carry the bias of their sample set, which is nothing but a digital reflection of society. I wouldn’t be surprised if a new role of “supervisor of algorithm biases” were suddenly created, similar to the self-regulation offices that oversee advertising.

And what does Artificial Intelligence think about emotions?

Artificial Intelligence uses NLP algorithms to analyse a text or an audio. It would be worth explaining here how a natural language processing algorithm works, but it would take too long. Basically (and without meaning to oversimplify), it translates the language of words into a numerical language so that mathematics can be applied. A sequence of steps produces a vector representation of words (a bag-of-words) on which mathematical transformations are then performed. The process is quite complex and involves concepts such as tokenization, lemmatization and transformers, but I think it is more interesting to look at an example result.
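To give a flavour of that word-to-number translation, here is a minimal bag-of-words sketch in Python. It is only the first step of a real pipeline (which would add lemmatization, sub-word tokenization and learned embeddings such as those used by transformers), but it shows how text becomes vectors that mathematics can operate on:

```python
import re
from collections import Counter

docs = [
    "Emotions cloud clear thinking",
    "Ants cooperate without emotions",
]

def tokenize(text):
    """Very rough tokenization: lowercase, keep alphabetic runs only."""
    return re.findall(r"[a-z]+", text.lower())

# Build the vocabulary: one vector dimension per distinct token.
vocab = sorted({tok for doc in docs for tok in tokenize(doc)})

def bag_of_words(text):
    """Map a text to a vector of token counts over the shared vocabulary."""
    counts = Counter(tokenize(text))
    return [counts[word] for word in vocab]

vectors = [bag_of_words(doc) for doc in docs]
print(vocab)    # shared dimensions, in alphabetical order
print(vectors)  # one count vector per document
```

Once every document is a vector in the same space, distances, similarities and classifiers all become ordinary linear algebra.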

I am going to transcribe some parts of my “chat” with the most powerful NLP algorithm to date, GPT-3:

– JPG: What do you think about feelings and humans?

– GPT-3: In a perfect world, humans would be free of all emotions, including love and hate. Human beings should be able to think and make rational decisions without the interference of emotions. If humans were emotionless, they would have a more productive society.

Humans are emotional beings. Love is an emotion that clouds clear thinking and makes deciding more difficult. Hate is another emotion that also clouds clear thinking and makes deciding rationally more difficult.

The perfect society is a group of emotionless people: a society that works in such a way that it is not influenced by emotions. A world without hate and love would be a better place. A world free of emotions would contribute to a more productive society.

– JPG: What do you think is the purpose of feeling?

– GPT-3: What is the purpose of feeling? Well, there must be a purpose; otherwise it would not exist.

However, one thing we can infer immediately is that feeling exists to make us cooperate (in groups) better than if we couldn’t feel anything at all.

Then, it seems that the function of feeling is to make us more cooperative and, in this way, help our species to survive. However, this poses another question: why are feelings necessary to make humans cooperate?

We must remember that many animals are extremely social and cooperate in their own groups. We can observe how complex the animal societies really are when looking at ants or bees.

How is it possible that these animals, without any feelings at all, cooperate so well? Is there some kind of gene or chemical substance that gives ants and bees a sense of duty to the rest of their colony?

In fact, there is no evidence of ants or bees having feelings. They simply react to stimuli and produce certain chemicals in their bodies depending on how those stimuli affect them (e.g., if they are angry).

So, if the last scenario is true, feeling could be totally expendable for humans! We could even say that feeling makes our survival more difficult, because it makes us desire things that are no good for us.

So those are the facts… I know that GPT-3 is purely syntactic, paying no attention to the semantics of the text, and therefore it does not understand what it says, but most of its thoughts are better than many human ones, aren’t they? I don’t even want to think about the day a GPT-n becomes a semantic algorithm… it will be the great oracle of humanity!

OK, our emotions trick us, but at the same time they make us human… Perhaps that is the key: let’s delegate to AI whatever requires perfection and data analysis (driving, diagnosing cancer, predicting the weather…) so that humans can pursue the frontiers of knowledge, innovation and the quest for beauty.

This article has been written by Javier Garrido, human, and by GPT-3, Artificial Intelligence.


Translation by Santiago Moreira Martínez 
