Stakeholder Participation in AI

As an agile software development team, we want to ensure that our products are ethically sound. The societies in which our AI systems operate are complex and dynamic, which is why human participation is essential (Parasuraman, p. 293). Digital technologies are at the same time challenging us to reevaluate our ethical frameworks, and if AI is to serve the public as a whole, it is essential that civil society and state organs together concern themselves with how AI is to be used ethically (DEK, pp. 13-14; Parasuraman, p. 286). As such, we are concerned with many interrelated and potentially conflicting domains. As the German Trade Union Confederation (DGB) notes, the importance ascribed to AI systems is backed not only by scientific and altruistic interests but also by economic and political ones, while UNESCO observes that AI technology is developed and led by multinational companies, most of them operating in the private sector and less obligated to the public good (DGB, p. 2; UNESCO, p. 8). If AI is to enrich society and adhere to human ethical principles, it is critical that these many dimensions of stakeholder interests are harmonized and included in our designs.

Providing epistemic justice

Germany’s federal commission of ethics (DEK) holds that the state must provide the legal means for citizens, companies and state organizations both to make use of their ethically based rights and to comply with their duties (DEK, p. 69). In a digital society this includes providing infrastructure and technical prerequisites, such as enabling technologies, institutions and mediators, whilst including a broad spectrum of stakeholders within civil society (DEK, p. 69). Since the DEK is a federal organization, it is not surprising that the solutions it lists mostly relate to bureaucratic and legal domains, such as developing further data protection laws, strengthening supervising institutions, setting up norms and standards, and strengthening consumer and commercial associations (DEK, pp. 17, 28, 29, 30).

UNESCO, however, reminds us that such public policy measures restrict themselves to AI governance, good practice and education for engineers, keeping the stakeholder scope rather narrow (UNESCO, pp. 23-24). As Parasuraman notes, AI is able to make, or is already making, its mark on a large part of our everyday lives, so that “[a]lgorithms have come to play a crucial role in the selection of information and news that people read, the music that people listen to, and the decisions people make” (UNESCO, p. 3; Parasuraman, p. 286). If a common agreement on AI is to be reached which represents all stakeholders, easily neglected aspects must be included, such as culture, education, science and communication (UNESCO, p. 24). In this view, it must be safeguarded that no epistemic injustice arises from the development of AI, such as being wronged in one’s capacity as a knower or being deprived of knowledge (Goldman & O’Connor).

Side A: Data

One aspect of AI and digital society relates to the use of data, which is created, gathered and used within a complex network of stakeholders. The ethical dimensions of data include, on the one hand, objective requirements on how data may be used (often already regulated by laws such as the GDPR), but also subjective rights which stakeholders hold against other stakeholders (DEK, p. 16). Data exists in networks of stakeholders and poses a challenge to regulation, which needs to safeguard the rights of individual or collective stakeholders in increasingly complex and dynamic data ecosystems (DEK, p. 17). The DEK’s take on data ownership is important: generating data does not lead to ownership; instead, other stakeholders have a veto in the data economies of which they are all a part. How data is used ethically thus depends on factors which stakeholders themselves bring to the table, such as how data is gathered, what stakeholder interests are concerned, public interest, democratic values, etc. (DEK, p. 17). In philosophical terms, data ought to be viewed within the epistemic paradigm of collective agents, where knowledge, in this case data, is negotiated via networks (Goldman & O’Connor).

That data usage is today not always negotiated with all concerned stakeholders becomes clear in the context of the workplace. Here the handling of employee data must be designed to protect employees’ personal rights, something which is to some extent handled by laws such as the European GDPR (DGB, p. 5). However, there is no data protection law specifically aimed at protecting the data of employees as employee stakeholders, nor do current laws provide for a general right of co-determination (DGB, p. 5). There are, however, such regulations with regard to co-determination of and participation in the introduction and use of technical equipment suitable for monitoring performance and behaviour (DGB, p. 5). This nevertheless falls short of providing employees with a veto right once a system is in place which alters their professional life by using anonymous data.

Side B: Algorithmic systems

In contrast to data, the ethical discourse here centres on the relation between human and machine, especially with regard to automation and placing decision-making processes in the hands of autonomous systems. Unlike in the data perspective, stakeholders do not necessarily have anything to do with the data which the system is processing, but may still be affected in an ethically relevant way (DEK, p. 24). The DEK advocates a human-centred design which builds human values into the system by considering basic rights and freedoms (DEK, p. 163). This view should run through the entire organisation and process of software development, equipped with an inclusive and participatory stance (DEK, p. 163; Goldman & O’Connor). As with the question of data, algorithmic systems too must include a variety of stakeholders who can safeguard ethical compliance and minimize the risk of bias, discrimination, epistemic injustice, etc. However, the network idea of data is not as easily applied to algorithmic systems, so that developers and the organisations building these systems are given a greater expert role, which also carries with it a larger social responsibility (DEK, p. 163).

The solution is not technological

Most developers will tell you: the solution is not the technology, but the design. A decisive factor for the success of AI in society is the transparent, verifiable and controllable design of systems which are created by and around humans (DEK, p. 163; DGB, p. 4). As the European Union notes, promoting trustworthiness in machines is necessary to create broad acceptance, which in turn will enable us to reap the benefits of such systems (EU, p. 4). Essential is a broad network of participation, where stakeholders are involved and can co-determine the definition of the objectives of AI systems (Freeman & Phillips, p. 333). The state is to create the foundations for ethical and inclusive AI through regulation, but also by helping civil society to better understand and control its data and the systems being applied to it. As UNESCO has shown, the state’s view is limited, and in order to prevent epistemic injustice and to safeguard the rights, freedoms and welfare of populations, locally, nationally or globally, the real challenge is to include all stakeholder views in an interconnected global and digital world. This cannot merely be an issue for policy making, but needs to be a task for politics and civil society (Goldman & O’Connor).


  • Parasuraman, Raja; Sheridan, Thomas B.; Wickens, Christopher D. – A Model for Types and Levels of Human Interaction with Automation, IEEE Transactions on Systems, Man, and Cybernetics (2000)
