Artificial Intelligence has been through several ups and downs since its beginnings in the 1950s. By now it is clear that the technology is here to stay, and it is making its mark on human domains with increasing speed, from global economics to individual lives. Facing growing possibilities and risks, it becomes vital to ensure that such systems are ethical, reflecting human interests and human values. This paper argues that ensuring ethical AI necessarily means that ownership of AI must be distributed amongst its stakeholders, including the public as a whole. The argument is supported by the Rawlsian notion of property-owning democracy, but also by a Lockean view of the public as creators and owners of their data.
The challenge of responsibility
The complexity of AI systems creates great challenges when trying to locate who is responsible for ensuring their ethical soundness. Besides the programs which a system can run, AI relies on its algorithms and the data on which they are fed and trained. AI systems are also often black boxes, opaque to humans. Precisely because the structures of AI systems, which influence everyday life, are obscure, it is vital to map out who is responsible for them. Noorman lists three conditions which qualify an agent as responsible: the agent must have a causal connection to the outcome, must have sufficient knowledge to calculate the possible consequences of its actions, and must be able to freely choose to act in a certain way.
In software development, teams in close contact with their clients have direct control over their product. They have the last say in where development is headed, what is prioritized and when deployment happens. They ensure sufficient QA and testing, security and resilience, transparency and documentation. The team is thus accountable for the software in the sense that it is capable of explaining its workings, and responsible in its role as developer. However, limitations derive from the fact that development teams are often not fully in charge of their product, due to top-down hierarchies or requirements imposed by paying customers. The team may also depend on technical elements outside of its product. If the team lacks full control over, or insight into, what is being fed into its system, responsibility on its part decreases and is instead spread throughout the paths of causal contribution. It is in light of this complexity that Deutsche Telekom and DataEthics propose to view the entire organization behind the software as morally responsible to different degrees. The challenge for ethical AI, interwoven as it is in a net of complex causal relations, is thus to map out where responsibility and accountability lie and to ensure that they are brought to bear.
The problem of bias
Responsibility for ethical AI means ensuring that the system performs ethically sound functions. One infamous challenge for ethical AI is its proneness to bias, which bitkom defines as machine prejudice: morally objectionable demographic disparities in algorithmic systems. There are, for instance, scenarios where minorities are statistically underrepresented in datasets and are thus discriminated against by the algorithm. But bias also works in more subtle ways. O’Neil shows how a system built on socially biased policing will calculate, relying on arrest rates, that it needs to keep sending its resources to the same areas which are already heavily policed, thereby increasing the likelihood of finding more crimes there. Variables may also act as proxies for race and gender, for instance discriminating against minority job applicants on the basis of name, language and the like.
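The feedback loop O’Neil describes can be made concrete with a small simulation. The following sketch is not drawn from her book but is a hypothetical toy model of the mechanism: two districts with the same true crime rate, where patrol resources are allocated in proportion to past arrest counts, so that an initial surplus of policing in one district perpetuates itself.

```python
import random

random.seed(0)

# Toy model: two districts with an identical underlying crime rate.
# Arrests are only observed where patrols are sent, and patrols are
# allocated in proportion to past arrest counts, so recorded history
# drives future observation.
TRUE_CRIME_RATE = 0.1   # identical in both districts
TOTAL_PATROLS = 100

# Historical bias: district A starts out with more recorded arrests.
arrests = {"A": 30, "B": 10}

for year in range(10):
    snapshot = dict(arrests)
    total = snapshot["A"] + snapshot["B"]
    for district in ("A", "B"):
        # Resources follow the arrest statistics, not the true crime rate.
        patrols = round(TOTAL_PATROLS * snapshot[district] / total)
        # A patrol can only record crime where it is actually present.
        new_arrests = sum(random.random() < TRUE_CRIME_RATE for _ in range(patrols))
        arrests[district] += new_arrests
    print(f"year {year}: recorded arrests {arrests}")
```

Although both districts have the same true crime rate, the district that starts with more recorded arrests keeps receiving more patrols and therefore keeps generating more arrests: the data confirms the very bias it was born from.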
AI is especially prone to bias because it learns by generalizing from example data. Based on statistics and probability theory, it looks for patterns in large amounts of data. The system is, however, weak when it comes to context, including logic, differentiating between causation and correlation, understanding language and applying human reasoning. AI is hence helplessly dependent on the data it is fed. It has no opinion about the data it encounters and will execute its commands regardless. Hence, if the data used carries bias, the system is at risk of discriminating, which is incompatible with ethical AI. Bias is thus not fundamentally a technical but a social issue, and biased data merely reflects prejudices that are inherent in society.
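That pattern-matching without context is enough to reproduce social bias can be shown with a minimal synthetic sketch. It assumes scikit-learn is available; the features, numbers and the hiring scenario are invented for illustration. The protected attribute is deliberately withheld from the model, yet a correlated proxy feature lets it reproduce the biased historical labels.

```python
# Synthetic illustration of proxy discrimination: the model never sees the
# protected attribute, yet reproduces the bias through a correlated feature.
# Uses scikit-learn; all data and feature names here are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, n)          # protected attribute, hidden from the model
skill = rng.normal(0, 1, n)            # genuinely job-relevant feature
# Proxy: e.g. a name- or postcode-derived indicator correlated with group.
proxy = group + rng.normal(0, 0.3, n)

# Historically biased hiring labels: skill matters, but group 1 was penalized.
hired = (skill - 1.0 * group + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([skill, proxy])    # note: 'group' itself is excluded
model = LogisticRegression().fit(X, hired)

for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"predicted hiring rate, group {g}: {rate:.2f}")
```

Removing the protected attribute does not make the model blind: the proxy carries the group signal, and the biased labels teach the model to use it.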
The limits of tokenism
O’Neil suggests a rather extensive list of measures to prevent bias in AI, including codes of ethics, regulation and auditability, but also a wider conception of the role of the computer scientist, broad participation and a discussion of how system success is measured. As will be argued, it is broad participation and the resulting collective effort in managing AI which is fundamentally important for ensuring ethical AI. Participation is often seen as providing a broad and inclusive assessment of the system, the idea being that input from concerned parties throughout the stages of development and assessment, often via diverse teams, could prevent and detect bias. This approach is, however, limited. Hiring a person of Turkish descent does not mean that one gets a Turkish view on things. Social groups are constituted intersectionally, with ethnicity, class, sex and age being some of the interacting categories. A cosmopolitan, well-educated developer of Turkish descent may thus hardly capture the kind of Turkish minority view one is trying to get at. We can also observe that the humanities and social sciences keep discovering the troubled lives of people who have historically been hidden, while at the same time new such groups emerge. Counteracting bias by promoting diverse organizations hence misses the point that organizations will have trouble reflecting all of the society which may be affected by AI solutions. It risks producing a lopsided picture while systems continue to be developed that carry bias against people who are once again not being heard.
Broad participation and property-owning democracy
Society is complex and dynamic; broad participation in AI is hence essential and something which civil society and state organs must concern themselves with. To both protect citizens and enable them to reap the benefits of AI, the public must have a veto right, a say in what is developed and the right to file complaints. This approach is backed by social epistemology, which has championed the importance of bringing together citizens from different walks of life to define, through discussion, the principal problems they confront and what might be the most promising solutions.
Support is also lent by the theme of ownership. As we have already noted, AI algorithms are trained on datasets, and whatever value such a system produces is fundamentally based on its access to usable data. Today the human being is itself a data source. Not only do our purchases and movements produce data, but so do our sleeping habits, steps and stress levels. Setting current legal aspects of data aside, the argument from a Lockean perspective holds that data is in fact a product of an individual’s actions:
[…] every man has a property in his own person. This nobody has any right to but himself. The labour of his body, and the work of his hands, we may say, are properly his.
As data is created by the labour of our bodies, it is hence something that fundamentally belongs to us. Ownership of data entails power over, and responsibility for, what the data is used for, which directly includes having a say in the AI algorithms and systems being developed whenever one’s data is to be used. According to DataEthics, the individual being in charge of her data is a prerequisite for ethical AI.
The proposed Lockean ownership may, however, have a drawback. Just as traditional economic resources can be distributed unevenly, so can digital resources, which according to Rawls leads to inequalities in the ability to exercise one’s liberties. Locke himself proposed a social contract in which free agents handed some of their autonomy to a government in order to organize society. To Rawls, then, property rights are means to an end and must serve to realize moral powers. In his theory, ownership is to serve two principles of justice as fairness: 1) each person has the same liberties and opportunity to exercise them, and 2) inequalities must derive from fair equality of opportunity and be of the greatest benefit to the least-advantaged members of society. He thereby echoes the ethical principles of the EU’s HLEG guidelines, according to which AI must respect citizens’ rights, democracy and justice, the freedom of the individual and equality, and promote non-discrimination and solidarity.
Rawls’ solution for how resources are to be distributed in order to achieve these moral principles is encapsulated in the property-owning democracy, which entails widespread predistribution of resources, creating background equality and undercutting the tendency toward domination. Legislators and political parties become independent of large concentrations of private power, while self-respect is provided to a public which is guaranteed autonomy and self-determination. Property-owning democracy also has an educational effect in that citizens are motivated to participate in public debate. This furthers inclusion and counteracts false beliefs, strengthening democracy.
Rawls’ theory is compatible with the public owning its data and thereby determining AI systems. But do ownership and participation in this form not carry the risk of AI becoming a battlefield of conflicting stakeholders? As the German Trade Union Confederation (DGB) notes, the importance ascribed to AI systems is backed not only by scientific and altruistic interests but by economic and political ones, while UNESCO observes that AI technology is developed and led by multinational companies, most of them operating in the private sector and less obligated to the public good. However, stakeholder theory has pointed out the opportunities in stakeholder cooperation, which generates value precisely because stakeholders can jointly satisfy their needs and desires by making voluntary agreements with each other and will furthermore, due to their common agreement, also share responsibility. Data, too, ought to be viewed within the epistemic paradigm of collective agents, where knowledge is negotiated via networks. This is supported by the German Data Ethics Commission (DEK), which views data ownership as a common good distributed within data economies in which all stakeholders have a veto.
Conclusion
We have reflected on the challenge which ethical AI poses to responsibility: in complex and dynamic AI systems we will often be unable to map all the causal paths which determine AI development. If we are to prevent biased AI systems, we must include as much of society as possible, so that those affected can identify and prevent bias and share moral responsibility. Since any single organization has only a limited scope, the suggested solution is to turn the public into stakeholders who have a direct say in how AI is developed. Turning ownership of AI over to the public can be argued for through Locke’s notion of ownership applied to the society of late modernity, where each individual is an extensive source of data produced by the body. An equal distribution of digital resources is furthermore essential to democratic participation and fairness, while at the same time promoting democratic cooperation amongst stakeholders. Following these arguments, the task ahead is to define how such a digital distribution can take form: What does data ownership entail, and what is the exact relation between AI and data? What is the role of the state and the public? How are different stakeholders such as private persons, large corporations and science to interact, and how is the public enabled to make informed decisions about AI? Such issues must be tackled, not only for the sake of ethical AI in terms of democratic and social equality, but also because AI may put us in a spot where we need to choose between several moral imperatives. At what point, for instance, do we accept a biased system in exchange for a disease-curing solution? These are, however, questions which once again lie at the heart of society and which its moral human agents need to answer.
Literature and Resources
AI HLEG – Ethics Guidance for Trustworthy AI – The EC High-Level Expert Group on Artificial Intelligence – (2019), URL: https://ec.europa.eu/futurium/en/ai-allianceconsultation
Alpaydin, Ethem – Machine Learning: The New AI – MIT Press, Cambridge (2016)
bitkom – Empfehlungen für den verantwortlichen Einsatz von KI und automatisierten Entscheidungen – (2018), URL: https://www.bitkom.org/sites/default/files/file/import/180202-Empfehlungskatalog-online-2.pdf
Buckner, Cameron – Adversarial Examples and the Deeper Riddle of Induction – (2020), URL: https://arxiv.org/ftp/arxiv/papers/2003/2003.11917.pdf
Barocas, Solon; Hardt, Moritz; Narayanan, Arvind – Fairness and Machine Learning: Limitations and Opportunities – (2019), URL: https://fairmlbook.org/pdf/fairmlbook.pdf
DataEthics – DataEthics Principles – (2018), URL: https://dataethics.eu/data-ethics-principles/
Datenethikkommission der Bundesregierung DEK, Bundesministerium des Innern, für Bau und Heimat – Gutachten der Datenethikkommission – (2019), URL: https://www.bmi.bund.de/SharedDocs/downloads/DE/publikationen/themen/it-digitalpolitik/gutachten-datenethikkommission.pdf?__blob=publicationFile&v=4
Deutscher Gewerkschaftsbund DGB – Artificial Intelligence and the Future of Work A discussion paper of the German Confederation of Trade Unions concerning the debate on artificial intelligence (AI) in the workplace – (2019), URL: https://www.dgb.de/themen/++co++4f242f08-18a7-11e9-b2c1-52540088cada
Deutsche Telekom – Leitlinien für Künstliche Intelligenz – (2018), URL: https://www.telekom.com/de/konzern/digitale-verantwortung/details/ki-leitlinien-der-telekom-523904
Dreyfus, Hubert L. – What Computers Still Can’t Do: A Critique of Artificial Reason – The MIT Press, USA (1992)
Ethikbeirat HR Tech – Richtlinien für den verantwortungsvollen Einsatz von Künstlicher Intelligenz und weiteren digitalen Technologien in der Personalarbeit – (2019), URL: https://www.ethikbeirat-hrtech.de/wp-content/uploads/2019/09/Ethikbeirat_und_Richtlinien_Konsultationsfassung_final.pdf
Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS – Trustworthy Use of Artificial Intelligence – (2019), URL: https://www.iais.fraunhofer.de/content/dam/iais/KINRW/Whitepaper_Thrustworthy_AI.pdf
Freeman, Edward R. & Phillips, Robert A. – Stakeholder Theory: A Libertarian Defense – Business Ethics Quarterly, 12(3): 331–349 (2002)
Goldman, Alvin & O’Connor, Cailin; Zalta, Edward N. (ed.) – Social Epistemology – The Stanford Encyclopedia of Philosophy (2019), URL: https://plato.stanford.edu/entries/epistemology-social
Locke, John – Two Treatises of Government and A Letter Concerning Toleration – CreateSpace Independent Publishing Platform (2015)
Manifesto for Agile Software Development, URL: https://agilemanifesto.org/
Mitchell, Melanie – Artificial Intelligence: A Guide for Thinking Humans – Penguin Books UK (2020)
Noorman, Merel; Zalta, Edward N. (ed.) – Computing and Moral Responsibility – The Stanford Encyclopedia of Philosophy (2018), URL: https://plato.stanford.edu/archives/spr2018/entries/computing-responsibility/
O’Neil, Cathy – Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy – Penguin Books UK (2017)
Parasuraman, Raja; Sheridan, Thomas B.; Wickens, Christopher D. – A Model for Types and Levels of Human Interaction with Automation – IEEE Transactions on Systems, Man, and Cybernetics (2000)
Rawls, John (1) – A Theory of Justice: Revised Edition – Harvard University Press; 2nd edition (1999)
Rawls, John (2) – Justice as Fairness: A Restatement – Harvard University Press (2003)
UNESCO & COMEST – Preliminary study on the Ethics of Artificial Intelligence – (2019), URL: https://unesdoc.unesco.org/ark:/48223/pf0000367823
Wesche, Tilo – The Concept of Property in Rawls’s Property-Owning Democracy – Analyse & Kritik 35 (1):99-111 (2013)