
PARTICIPATION Governance of Responsible AI: From Ethical Guidelines to Cooperative Policies

This research article explores how AI can be developed and used ethically and democratically. The authors propose a framework of democratic experimentation based on social inquiry. They claim that this framework can involve civil society in AI governance and protect democratic decision-making. They also criticize current national strategies and ethics guidelines on AI, and show how their framework can improve on them.

In this research article, Robert Gianni, Santtu Lehtinen and Mika Nieminen discuss the ethical and political challenges of Artificial Intelligence (AI) and the limitations of current national strategies and ethics guidelines. The article proposes a framework of democratic experimentation based on the method of social inquiry, inspired by classical pragmatist scholars. It argues that this framework can help involve civil society in the governance of AI and safeguard democratic decision-making processes.

AI and society: Ethical and political challenges of a disruptive technology

The increasingly pervasive role of Artificial Intelligence (AI) in our societies is radically changing the way that social interaction takes place within all fields of knowledge. The obvious opportunities in terms of accuracy, speed and originality of research are accompanied by questions about the possible risks and the consequent responsibilities involved in such a disruptive technology. In recent years, this twofold aspect has led to an increase in analyses of the ethical and political implications of AI. 

As a result, there has been a proliferation of documents that seek to define the strategic objectives of AI together with the ethical precautions required for its acceptable development and deployment. Although the number of documents is certainly significant, doubts remain as to whether they can effectively play a role in safeguarding democratic decision-making processes. Indeed, a common feature of the national strategies and ethical guidelines published in recent years is that they only timidly address how to integrate civil society into the selection of AI objectives. 

A pragmatist proposal for a participatory governance model

Although scholars increasingly advocate the inclusion of civil society, it remains unclear which modalities should be selected. If both national strategies and ethics guidelines neglect the necessary role of democratic scrutiny in identifying the challenges, objectives, strategies and appropriate regulatory measures that such a disruptive technology should undergo, the question then becomes: what measures can we advocate that are able to overcome these limitations? Given the need to approach AI holistically as a social object, what theoretical framework can we adopt in order to implement a model of governance? What conceptual methodology can we develop that offers fruitful insights for the governance of AI?

Drawing on the insights of classical pragmatist scholars, we propose a framework of democratic experimentation based on the method of social inquiry. In this article, we first summarize some of the main points of discussion around the potential societal, ethical and political issues of AI systems. We then identify the main answers and solutions by analyzing current national strategies and ethics guidelines. After showing the theoretical and practical limits of these approaches, we outline an alternative proposal that can help strengthen the active role of society in the discussion about the role and extent of AI systems.

Links

Website: https://participation-in.eu/the-project/ 

Facebook: https://www.facebook.com/ParticipH2020/ 

Twitter: https://twitter.com/ParticipH2020 

LinkedIn: https://www.linkedin.com/showcase/participation-h2020/ 
