Ruminations on Artificial Intelligence and Ethics at the Brussels #diSummit
It has taken the Data Science community’s diSummit only three years to become exactly that: a summit. June 27, at the ULB Solbosch campus, more than 600 experts from Belgium’s data community (data scientists, academics, business people) attended plenary sessions, hands-on workshops, master classes, tons of inspirational pitches (‘Ignite talks’) and a panel discussion.
diSummit organizer Philippe Van Impe has clear ambitions. In the three years to come, he says, ‘we are going to put Belgium on the fast track of #AI with the "AI-Hub" program, connecting the Belgian data sector to the international data community.’
Article by Dirk van Bastelaere
Earlier this year, French president Emmanuel Macron delivered a one-hour speech on France’s ambitions in Artificial Intelligence. AI ‘is a technological, economic, social and obviously ethical revolution,’ Macron said. ‘This revolution won’t happen in 50 or 60 years, it’s happening right now.’
Under the hashtag #AIforhumanity, Macron described France’s primary goal as to be leading in AI in Europe by developing a powerful ecosystem that deals with ‘mobility, defense, healthcare, fintech, etc.’.
To catch up with China and the US, France is allocating €1.5B over a period of five years. That effort seems considerable, but it pales in comparison to the $20B to $30B the world’s leading tech companies spent on AI in 2016 alone, no less than 90 percent of which went to R&D. That year, Europe was also seriously lagging behind in external AI investment, totaling $3B to $4B, compared to $8B to $12B in Asia and $15B to $23B in North America.
The French AI strategy evidently stretches beyond the injection of public money. It is all about establishing an ecosystem that connects French engineers, data scientists, math schools and AI researchers to international private companies and start-ups, by creating a national AI research program and by having French administrations make their data sets available for data scientists to build AI services on. France also wants to triple its number of AI professionals by 2022.
Hub France IA
The organization to deploy this strategy, Hub France IA, was launched December 21, 2017. It was presented at diSummit by its co-founder and managing director, Nathanaël Ackerman.
A former advisor to French State Secretaries for Innovation and AI, Christophe Sirugue and Axelle Lemaire, Mr. Ackerman is a Belgian civil engineer who earlier in his career worked for six years at the Brussels Free University (ULB) Technology Transfer Office (TTO). The role of the Technology Transfer Office is to promote collaborative research and business development between the university and its external partners (businesses, public bodies, competition clusters, professional associations, etc.), as well as to participate in local and regional development.
Mr. Ackerman’s role at Hub France IA is approximately that: connecting businesses, industries, academics and experts. ‘The Hub is an organization, created by a broad community of experts in AI and business, that aims at creating an actual AI industrial sector in France and Europe,’ Mr. Ackerman told The Innovator, the English-language supplement on innovation of the French financial daily Les Echos.
During his talk at diSummit’s plenary session, he made clear that France not only wants to lead in AI development but also aims to connect the AI hubs in Paris, London, Berlin and everywhere else in Europe, moving towards a European AI Alliance.
Launched roughly six months ago, Hub France IA now includes most industries and leading academics. ‘It is important for AI that the Hub is much more than data analytics alone,’ Nathanaël Ackerman told the diSummit plenum. ‘We want it to evolve from a Think Tank into a Do Tank. It already counts 18 business groups. For each group we have drawn up a detailed map of the field and formulated short-term goals for the next one to three years. Important corporations are already taking part: for the mobility business group, for instance, Michelin, Renault, Air France and others. Each group defines its own goals. The purpose is also to develop new products and services with AI.’
‘The Hub is a one-stop shop for the AI community and offers advanced services to its members (training, recruitment, legal, etc.),’ Mr. Ackerman said in the Innovator interview. ‘Anyone with a strategic or technical question can put it to the Hub, and the Hub will find the person, or the company, able to answer it. Beginning this summer, we plan to progressively introduce a marketplace for talent, technology, projects, and mergers and acquisitions. Training for corporate executives has already started and includes the strategic impact of AI and management-level courses on best practices in AI adoption and ethics.’
Next to the business groups, Hub France IA also counts 11 transversal groups, including an ‘ethics’ group, where the ethical dimension of the business groups’ developments is discussed.
Déclaration de Montréal pour un développement responsable de l'intelligence artificielle
Ethics in AI development proves to be a chief concern for those considering the field at a high level, though it does not necessarily trouble the young developers pitching their solutions to the diSummit plenum.
AI and Ethics, nevertheless, was one of the focal points of this year’s diSummit. Data to Insight’s expert practice leader Toon Borré gave a master class on the subject: ‘The Need for a Fellowship’. His article ‘Asimov’s Oath’ contains the main thoughts and solutions discussed during the master class. The subject was also addressed by Nathanaël Ackerman and Jean-Luc Dormoy, who holds a PhD in Artificial Intelligence and works as an energy and IT innovator. It was also one of the main themes of the closing panel with Marc-Antoine Dilhac, Canada Research Chair in Public Ethics and initiator of the ‘Montreal Declaration for a Responsible Development of Artificial Intelligence’, Khalil Rouhana, Director at the EU Directorate for Digital Industry, and Françoise Soulié-Fogelman, a professor at the School of Computer Software at Tianjin University and one of the 52 members of the High-Level Expert Group, whose general objective is to support the implementation of the European strategy on AI.
With the announcement of the ‘Montreal Declaration for a Responsible Development of Artificial Intelligence’, Canada is definitely leading the way. An initiative of the Université de Montréal, the Declaration aims ‘to spark a broad dialogue between the public, the experts and government decision-makers.’ Led by a steering and a development committee, the Declaration has gained broad support in Canadian society.
It is based on a declaration of ethical principles built around seven core values: well-being, autonomy, justice, privacy, knowledge, democracy and responsibility. ‘These values, suggested by a group of ethics, law, public policy and artificial intelligence experts,’ the committee states on its website, ‘have been informed by a deliberation process. This deliberation occurred through consultations held over three months, in 15 different public spaces, and sparked exchanges between over 500 citizens, experts and stakeholders from every horizon.’
‘For Canada, the EU is a trusted trade partner that, like Canada itself, is unwilling to compromise on security and values,’ Development Committee Leader Marc-Antoine Dilhac told the diSummit plenum. ‘That’s why Canada is willing to engage in dialogue with the EU, as it is already doing with the US, Japan and the whole of Latin America. For an ethical development of AI, it is essential to seek international common ground.’
Towards a Central Registry for Algorithms
In a sense, Mr. Dilhac was answering Toon Borré’s call for an international solution. In his master class on AI and ethics, Toon made a plea for what he calls a ‘Central Registry for Algorithms’, a ‘legal deposit’ where ordinary citizens can check whether an algorithm contains biases and preferences. An independent supervisory body would have to score the algorithms on their ethics and social impact, while developers would have to follow a code of conduct and guidelines, formalized by the taking of an oath comparable to the Hippocratic Oath for physicians.
Toon has systematically been pointing out the distribution of moral responsibility in AI development. When asked whether they would be willing to write an algorithm that calculates whether it would be cheaper for a hospital and an insurance company to let a patient die on the operating table, algorithm developers sometimes waive responsibility, because ‘it’s the client who ordered the algorithm, not me’.
But in ethical matters, there’s no looking away.
‘This debate fundamentally involves three parties,’ Toon Borré says, ‘the data scientists who develop the algorithms, the organizations that commission them to do so, and the public authorities, which have to create the legislative framework within which algorithm development takes place in society. Each of these parties must face up to its responsibilities. The appointment of the EU’s High-Level Expert Group is already a first big step towards an international approach.’
But as the Montreal Declaration makes clear, there is definitely a fourth party that should be involved: the citizens, the subjects who, in their daily lives, will most likely be directly impacted by AI developments.