AI and Ethics: the black box of the algorithm

Toon Borré: 'I know a lot of developers who say they don’t care about algorithm bias'

Mieke Vandewaetere is research coordinator at Howest University College. She co-developed Howest's AI Academy, a series of eight evening seminars for entrepreneurs and C-level professionals, organized by Howest and Voka West-Vlaanderen.

In this second part of their conversation, Mieke Vandewaetere and Data to Insight Expert Practice Leader Toon Borré discuss ethics in AI, algorithm bias, regulation and the education of the public. ‘What do we expect from algorithms?’ Mieke Vandewaetere says. ‘That they behave better than us? Or just like us? We now talk about algorithms as if they were not only persons, but persons that should be more perfect than humans could ever be. They will always be biased. We just need to know that bias.’

Interview by Dirk van Bastelaere

Mieke, how important is ethics to the Artificial Intelligence Academy?

Mieke Vandewaetere: ‘We had a lot of discussion about that. One of the eight talks focuses specifically on ethics. I really had to fight for that. A lot of IT people and data scientists don’t like to be confronted with it. Not with ethics, and not with cognitive scientists. Often there’s no common language, even though artificial intelligence has a lot to do with cognitive science.
‘Cognitive scientists know how people process information and can work with AI developers to improve human-AI interaction. At the same time, cognitive science can tell us a lot about biases in information processing, and can warn AI developers to be aware of those biases, both in the way people process information and in the way developers create algorithms.
‘Some AI developers prefer not to be aware of their own biases. Often the developers of an algorithm prefer to be in the same black box as the algorithm itself (laughter).’

In May of this year, Dan Pontefract published an article in Forbes titled ‘We Need Chief Ethics Officers More Than Ever’. Pontefract writes: “Never before in history have such a small number of designers – a handful of young, mostly male engineers, living in the Bay Area of California, working at a handful of tech companies – had such a large influence on two billion people’s thoughts and choices.”

Mieke Vandewaetere: ‘There is indeed a high concentration of AI development in geographic, economic and technological terms. The Bay Area is also where the money is. Ethics will pose a problem to those people, because they will be afraid of losing that money. A lot of AI and data companies are getting involved in ethics because they are afraid of being sued over the outcomes of their algorithms. That’s not the right reason, but it’s a beginning.’

‘If Silicon Valley has turned itself into one massive case study of groupthink—swimming in sinkholes of cognitive biases—who is standing up for those of us in society that may not want such advancements?’ Pontefract asks.

Mieke Vandewaetere: ‘What do we expect from algorithms? That they behave better than us? Or just like us? We now talk about algorithms as if they were not only persons, but persons that should be more perfect than humans could ever be. They will always be biased. We just need to know that bias.’

Toon Borré: ‘Algorithms are built on data that were collected by and from human beings. The bias embedded in those data sets will be eliminated bit by bit over time, but we won’t be able to remove human influence entirely, because algorithms are built by humans. It’s crucial to avoid bias, but I also know a lot of developers who say they don’t care. They take as much information on board as possible.’

Mieke Vandewaetere: ‘Yes, but don’t forget: garbage in, garbage out. Data quality is actually the first thing developers and data scientists should be aware of. Don’t just take the massive amount of data that is available on the internet, but choose wisely.’
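The kind of pre-training check Vandewaetere and Borré allude to can be as simple as auditing how an outcome is distributed across a sensitive attribute before any model sees the data. A minimal sketch in Python follows; the file name, column names and threshold are invented for illustration and not taken from the interview.

```python
# Minimal bias audit sketch: check whether a positive outcome is distributed
# evenly across a sensitive attribute before training on the data.
# "loan_applications.csv", "gender" and "approved" are hypothetical names.
import pandas as pd

df = pd.read_csv("loan_applications.csv")  # hypothetical data set

# Share of positive outcomes per group of the sensitive attribute.
rates = df.groupby("gender")["approved"].mean()
print(rates)

# Demographic-parity gap: a large difference between groups suggests the
# historical data already carries the bias the interview warns about.
gap = rates.max() - rates.min()
if gap > 0.05:  # arbitrary illustrative threshold
    print(f"Warning: approval rates differ by {gap:.1%} across groups")
```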


Toon Borré: ‘The biggest issue I see is this: the bigger you make the space in which the algorithm can develop itself, the harder it becomes to control. You can try to steer it, and as a developer you want to give your algorithm the freedom to be flexible and adjust itself, but where do we limit that freedom? At some point we no longer have control. It’s not enough to know the concepts behind your algorithm.’

Mieke Vandewaetere: ‘Well, there still has to be an element of human control. We should never develop algorithms that can decide over human lives. That is a choice we can make.’

Should regulation be left to governments or to the market, where the answer to an unethical AI is more AI that offers an alternative?

Toon Borré: ‘When you put the market to work, the one with the biggest check will most likely win.’

Mieke Vandewaetere: ‘The Googles, Facebooks and Microsofts will control everything, and all algorithms will be based on the same software and hence carry the same bias.’

Toon Borré: ‘And they will control the real knowledge. In fact, they could see the effect of changing one rule for the whole world. Just leaving it to the market will not be sufficient. We should also educate literally everybody from day one. And every developer should ask themselves: would you make the same decision as your algorithm or not? That’s something to consider while developing an algorithm.’

‘The more algorithms we have, the harder it will become for us to realize where we are being nudged and pushed. Manipulating people is actually easy. Having passed the phase of predicting behavior in data analysis, organizations are now trying to prescribe how people should behave: “What can I do to make you do something I want you to do?” They are putting algorithms to work there, passing on subliminal messages and influencing real-life decisions.’

Mieke Vandewaetere: ‘Even human rights campaigns use algorithms to convince people to donate and act, playing on emotions and using certain images. The same goes for chatbot technology. We know that certain people react better to males or females, to a deeper voice or a higher voice. Some people interacting with bots prefer an office background, others a landscape. We are only beginning to understand how people interact with avatars in a system. There is still a lot to investigate.

‘There are some design principles that are already firmly established in systems, making use of the preferences people have for interacting with avatars.’

‘Don’t shut me down, please!’


Toon Borré: ‘Text-based chatbots rely on NLP (Natural Language Processing), which limits their functionality. A good example is a chatbot built for a US professor to help students. He collected all potential questions and answers over a period of maybe fifteen years to build the chatbot, and that bot was okay.’

Mieke Vandewaetere: ‘Because it was very domain-specific!’

Toon Borré: ‘Exactly. It had a limited reach and focus.’
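A domain-specific bot like the one Borré describes can be sketched as a simple retrieval system: match an incoming question against a fixed set of collected question/answer pairs and refuse anything outside that narrow domain. The Q&A pairs, function name and similarity threshold below are invented for illustration; the interview does not specify how the professor’s bot was actually implemented.

```python
# Minimal retrieval-based FAQ chatbot sketch: answer only questions that are
# similar enough to a curated, domain-specific set of Q&A pairs.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical, curated question/answer pairs (the bot's entire "knowledge").
faq = [
    ("When is the assignment due?", "The assignment is due Friday at noon."),
    ("How is the course graded?", "Grades are 60% exam, 40% project."),
    ("Where are office hours held?", "Office hours are in room B2.05."),
]

questions = [q for q, _ in faq]
vectorizer = TfidfVectorizer()
question_vectors = vectorizer.fit_transform(questions)

def answer(user_question: str, threshold: float = 0.2) -> str:
    """Return the answer to the most similar known question, if any."""
    vec = vectorizer.transform([user_question])
    scores = cosine_similarity(vec, question_vectors)[0]
    best = scores.argmax()
    if scores[best] < threshold:
        # Outside the bot's narrow domain -- exactly the limitation
        # discussed in the interview.
        return "Sorry, I don't know. Please ask the professor."
    return faq[best][1]

print(answer("When do I need to hand in the assignment?"))
```

The limited reach the interviewees praise is visible in the threshold check: the bot only works because its domain is small and curated, and everything else is explicitly rejected.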

Should we educate the public about interacting with algorithms and technology that try to influence us, subtly maneuvering us towards certain choices? Should we have people develop a sort of heuristic skill for that?

Mieke Vandewaetere: ‘We can do that, but we are humans and we have feelings towards technology and robots. A recent study concluded that people have difficulty shutting down a robot that has been acting really kind. The robot said: “Don’t shut me down, please. Don’t shut me down! I want to stay with you...” (laughter)

‘That’s how we behave as humans. People in elderly homes talk to toy dogs, petting and cuddling them. The toy helps to decrease their loneliness. Children with autism talk to Furbies. We can be educated to be critical, but we still have a sometimes strange connection to technology. If you give something eyes, a mouth and the ability to move, humans start to behave differently.

‘That’s what we are investigating right now. We are adding a 3D-modelled character to a chatbot, for instance, giving it a background and a sense of humor. The character mimics you and follows you, commenting on your behavior or actions. It can be a human character, a man or a woman talking to you, but also a dog or a wizard, so that we can analyze how people interact with different avatars. First results show that people are okay with the dog or the wizard, because they immediately classify them as fictional characters. But when they are interacting with human 3D characters, strange situations and conversations happen. And we still don’t know why that is, why certain people behave oddly and what has triggered them.

‘We are now trying to give people the possibility to personalize their chatbots and make them their life companion and work companion. If you prefer to talk to a dog while studying your grammar course, or prefer a 3D-modelled character of your instructor or professor for your physics course, you can do that. That’s what we are also researching now: in which cases do people prefer a character similar to their professor, and in which cases do they prefer to talk to an animal? Not sure whether all professors or lecturers will be happy with the results.’







 