Literary rationalism in classical Athens

Several literary genres developed in Greece during the Archaic (700-480 BC) and Classical (480-323 BC) periods. At the same time, analytical thinking evolved, visible in science and philosophy. The purpose of this text is to show how the literary genres of Attic tragedy and historiography in ancient Athens reflect this development, starting from the premise that literary genres always relate to a sociocultural context.

The Greek worldview
Literary genres arise from specific rhetorical situations and the associated needs and expectations of the audience (exigences); a genre is thus grounded in the task it takes on within a socioculturally constructed context. To understand the context in which Attic tragedy and historiography operate, we first look at the fundamental sociocultural traits within which the genres act, and then examine examples of the genres themselves.

The Archaic period saw the rise of rational thinking in ancient Greece; intellectuals sought the world's underlying laws beyond religious models of explanation. The Greek worldview was also characterized by an interactive relationship between the divine and the human. Over time the human being took on an ever greater role, which is also visible in the art of the period, which increasingly approached a human ideal. The years of misery during the Peloponnesian War (431-404 BC) called established beliefs into question, giving rise to significant philosophical works such as Plato's dialogues.
The sociocultural space within which literary genres in Athens operated thus contained an anthropocentric rationalism that ought to be reflected in literary forms and in the expectations of their audiences. Classical Athens was further characterized by democratic rule, which not only fostered critical habits and a culture of debate but also meant that the city attracted intellectuals from the rest of the Greek world.

Attic tragedy
The dramas and the festivals at which they were performed were central to public life. Many prominent citizens raised their profile by sponsoring the productions. These were well attended, with up to 20,000 spectators (male citizens who could afford it). Tragedy in particular supplied material for discussion about dramatic events and human nature. Attic tragedy was performed before an audience that was democratically engaged and took part in how its polis was governed. It is not surprising that tragedy not only entertained but also inspired and taught; poets had traditionally been regarded as a kind of teacher. Tragedy was therefore expected to be political and critical. At the same time it was ritual, with strong ties to religious rites and themes. The material was based on myths familiar to the audience, yet it was expected to be interpreted and developed in new ways.

Aeschylus (525-456 BC) is the first of Athens' great tragedians. His tragedies deal with great deeds and gods while also staging human nature. The Oresteia is political and thematizes the conflict between social and religious order, reflecting contemporary political events in the city.

Sophocles (c. 496-406 BC) gives greater weight to the human being and at the same time gives voice to the Athenian woman. In Antigone, the character of the same name must choose between fulfilling her moral obligations towards her dead brother and obeying the laws of king Creon. Her situation resembles that of Athenian women who lack a male protector (epíklēros). Here Sophocles questions prevailing norms and exposes fundamental tensions within society. Euripides (c. 485-406 BC) gives the human being the central role. Not least in the tragedy about the foreign and highly intelligent princess Medea do we again find a strong female voice. Her husband, Jason, asserts his own and Greece's superiority even as he leaves her to marry a Greek princess. The tragedy questions traditional ideas about heroes and about Greek culture. Euripides' expression of the fate of women is extremely powerful:

Of all creatures that have breath and sensation, we women are the most unfortunate. First at an exorbitant price we must buy a husband and master of our bodies. […] And the outcome of our life’s striving hangs on this, whether we take a bad or a good husband. For divorce is discreditable for women and it is not possible to refuse wedlock.

These tragedies were not moralizing, yet they raised fundamental questions about the order and values of society, including women's passive and exposed social role. In his funeral oration, Pericles tells the Athenian women:


[…]Great will be your glory in not falling short of your natural character; and greatest will be hers who is least talked of among the men whether for good or for bad.

It is evident that many female figures in Attic tragedy form the direct opposite of society's norm. Some no doubt perceived this as a moral danger, but that does not change the fact that tragedy thematized and questioned norms and events before a large democratic audience.

Historiography
For the Greeks there was no clear dividing line between myth and history, and likewise Herodotus (490/480-430/420 BC) on the one hand shows great reverence for the gods while his themes take their starting point in human action and aim at rational explanations; these are the corruption and poor self-knowledge of the powerful. At the same time he questions Greek prejudices about non-Greek cultures and about women. Flory argues that Herodotus' only known work, the Histories, which analyses the cause and consequences of the Persian Wars, was too long and too difficult to be read by a broad audience, even though literacy increased during his lifetime. Books were rare and short enough that their content could be learned by heart. Herodotus probably read only excerpts before an audience. He must therefore have written the Histories to reach a small and specialized literary elite. Herodotus was born at the height of the Persian Wars and thus assembles, in retrospect, material from various historical and mythological sources. The analysis of the Persian Wars met the rationalist needs of an intellectual class that certainly included important politicians, while the momentous events are placed in the context of a mythological and panhellenic worldview that speaks to a broader audience.

Thucydides (460/455-411/397 BC) places the human being at the centre of the world and ignores the gods. In the History of the Peloponnesian War he too aims at new, objective explanations when describing the causes and course of Greece's greatest war. He himself states that he documented the conflict from its very start, having realized its historical significance early on. Thucydides thus writes while the war is going on; the content is contemporary and more complete than in Herodotus, and he tries to exclude romantic elements. This book too is long and breaks with the literary habits of the general public. Like Herodotus' work it is intended for a small elite, though it achieved greater popularity; it is also during this period that it became increasingly common to discuss literature in small circles. At the same time it offers answers to the causes of Greece's greatest war in a highly politicized time when democracy still ruled in Athens.

Conclusion
We have reason to believe that Athenian citizens of the Archaic and Classical periods were shaped by rationalist currents, both in their view of the human being and in their ability to ask critical questions. The Peloponnesian War later forced the Athenians to confront fundamental questions about the world. We see an interplay between religion, tradition and enlightenment in which literary forms distance themselves from religion; in Euripides and Thucydides the gods no longer play an active role. Religion never disappears completely, however. Attic tragedy mirrors these sociocultural traits, and alongside the religious elements it is likely that the audience expected questions of morality, society and politics to be raised, which becomes evident in the female figures of the tragedies but also from the audience's considerable political experience. Historiography follows the same rationalist motifs but addressed an intellectual elite that expected a more purely scholarly discourse and literary rigour.

Literature
Andrew Ford – The Origins of Criticism – Literary Culture and Poetic Theory in Classical Greece – Princeton University Press, New Jersey (2002)

Carolyn Miller – Genre as Social Action – Quarterly Journal of Speech 70:2, pp. 151–176 (1984)

Euripides – Medea – http://www.perseus.tufts.edu/hopper/text?doc=Perseus%3Atext%3A1999.01.0114%3Acard%3D1

Jennifer Tolbert Roberts, Sarah B. Pomeroy, Walter Donlan – Ancient Greece: A Political, Social, and Cultural History – Oxford University Press, New York (1999)

Justina Gregory – Euripides as Social Critic – Greece & Rome Vol. 49, No. 2 (Oct., 2002), pp. 145-162

Rebecca Bushnell – A Companion to Tragedy – John Wiley & Sons, Incorporated (2013)

Stewart Flory – Who Read Herodotus’ Histories? – The American Journal of Philology Vol. 101, No. 1 (Spring, 1980), pp. 12-28

Thucydides – The Peloponnesian War – http://www.perseus.tufts.edu/hopper/text?doc=Perseus%3Atext%3A1999.01.0200%3Abook%3D1%3Achapter%3D1%3Asection%3D1


AI as public ownership

Artificial Intelligence has been through several ups and downs since its beginnings in the 1950s. At this point it is clear that the technology is here to stay, and with increasing speed it is making its mark on human domains, from global economics to individual lives. Facing growing possibilities and risks, it becomes vital to ensure that such systems are ethical, reflecting human interests and human values. This paper argues that ensuring ethical AI necessarily means that ownership of AI must be distributed amongst its stakeholders, including the public as a whole. This argument is supported by the Rawlsian notion of property-owning democracy, but also by a Lockean view of the public as creators and owners of their data.

The challenge of responsibility

The complexity of AI systems creates great challenges when trying to locate who is responsible for ensuring their ethical soundness. Besides the programs which a system can run, AI relies on its algorithms and the data they are fed and trained on. AI systems are also often black boxes, opaque to humans. Precisely because the structures of AI systems, which influence everyday life, are obscure, it is vital to map out who is responsible for them. Noorman lists three conditions which qualify an agent as responsible: the agent must have a causal connection to the outcome, must have sufficient knowledge and be able to calculate possible consequences of actions, and finally must be able to freely choose to act in a certain way.

In software development, development teams in close contact with their clients have direct control over their product. They have the last say in where the development is headed, what is prioritized and when deployment is done. They ensure sufficient QA and testing, security and resilience, transparency and documentation. The team is thus accountable for the software in the sense that it is capable of explaining its workings, and responsible in its role as developer. However, limitations derive from the fact that development teams are often not fully in charge of their product, due to top-down hierarchies or requirements set by paying customers. The team may also depend on technical elements outside of its product. If the team does not have full control over or insight into what is being fed into its system, responsibility on its part decreases and is instead spread along the paths of causal contribution. It is in light of this complexity that Deutsche Telekom and DataEthics propose to view the entire organization behind the software as morally responsible to different degrees. The challenge for ethical AI that is interwoven in a net of complex causal relations is thus to map out where responsibility and accountability lie and to ensure that they are brought to bear.

The problem of bias

Responsibility for ethical AI means ensuring that the system performs ethically sound functions. One infamous challenge for ethical AI is its proneness to bias, which bitkom defines as machine prejudice: morally objectionable demographic disparities in algorithmic systems. There are for instance scenarios where minorities are statistically underrepresented in datasets and thus discriminated against by the algorithm. But bias also works in more subtle ways. O'Neil shows how a system based on socially biased policing will calculate, relying on arrest rates, that it needs to keep sending its resources to the same areas which are already being heavily policed, thereby increasing the likelihood of finding more crimes there. Values may also act as proxies for race and gender and, for instance, discriminate against minority job applicants on the basis of name, language, etc.
AI is especially prone to bias because it learns by generalizing from example data. Based on statistics and probability theory, it looks for patterns in large amounts of data. Such systems are however weak when it comes to context, including logic, differentiating between causation and correlation, understanding language and applying human reasoning. AI is hence helplessly dependent on the data it is fed. It has no opinion of the data encountered and will execute its commands regardless. Hence, if the data used entails bias, then the system is at risk of being discriminatory, which is incompatible with ethical AI. It is thus obvious that bias is fundamentally not a technical but a social issue, and that biased data merely reflects prejudices that are inherent in society.
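
To make this feedback loop concrete, the following toy simulation (my own illustration, not taken from O'Neil or the other sources) gives two districts the same underlying crime rate but reallocates patrols each year according to observed arrests; the district that starts out over-policed keeps "confirming" that it deserves the most resources:

```python
# Hypothetical sketch of a biased-policing feedback loop; all numbers are made up.
import random

random.seed(0)

TRUE_CRIME_RATE = 0.05            # identical in both districts
patrols = {"A": 80, "B": 20}      # historically biased starting allocation

for year in range(5):
    arrests = {}
    for district, n_patrols in patrols.items():
        # observed arrests scale with how much the district is searched,
        # not with any difference in the actual crime rate
        arrests[district] = sum(
            random.random() < TRUE_CRIME_RATE for _ in range(n_patrols * 10)
        )
    total = sum(arrests.values()) or 1
    # "learning" step: next year's patrols simply follow this year's arrest data
    patrols = {d: round(100 * a / total) for d, a in arrests.items()}
    print(year, arrests, patrols)
```

Nothing in the update step corrects the biased starting point; the data-driven allocation merely reproduces it, which is exactly the social rather than technical character of the problem described above.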

The limits of tokens

O'Neil suggests a rather extensive list of measures to prevent bias in AI, including codes of ethics, regulation and auditability, but also a wider conception of the role of the computer scientist, broad participation and a discussion of how system success is measured. As will be argued, it is broad participation and the resulting collective effort in managing AI which is fundamentally important for ensuring ethical AI. Participation is often seen as providing a broad and inclusive assessment of the system, the idea being that input from concerned parties throughout the stages of development and assessment, often via diverse teams, could prevent and detect bias. This approach is however limited. Hiring a person of Turkish descent does not mean that one gets a Turkish view on things. Social groups are constituted intersectionally, with ethnicity, class, sex and age being some of the interacting categories. A cosmopolitan, well-educated developer of Turkish descent may thus hardly capture the kind of Turkish minority view one tries to get at. We can also observe that the humanities and social sciences keep discovering the troubled lives of people who have historically been hidden, while at the same time new such groups emerge. Counteracting bias by promoting diverse organizations hence misses the fact that organizations will have trouble reflecting all of the society which may be affected by AI solutions; it risks producing a lopsided picture while systems continue to be developed which carry bias against people who are once again not being heard.

Broad participation and property-owning

Society is complex and dynamic; broad participation in AI is hence essential and something which civil society and state organs must concern themselves with. To both protect citizens and enable them to reap the benefits of AI, the public must have a veto right, have a say in what is developed and have the right to file complaints. This approach is backed by social epistemology, which has championed the importance of bringing together citizens from different walks of life to define, through discussion, the principal problems they confront and what might be the most promising solutions.
Support is also lent by the theme of ownership. As we have already noted, AI algorithms are trained on datasets, and whatever value such a system produces is fundamentally based on its access to usable data. Today the human is a data source itself: not only do our purchases and movements produce data, but so do our sleeping habits, step counts and stress levels. Setting aside current legal aspects of data, the argument from a Lockean perspective holds that data is in fact a product of an individual's actions:

[…] every man has a property in his own person. This nobody has any right to but himself. The labour of his body, and the work of his hands, we may say, are properly his

As data is created by the labour of our bodies, it is hence something that fundamentally belongs to us. Ownership of data entails power over and responsibility for what the data is used for, which directly includes having a say in the AI algorithms and systems which are being developed, if one’s data is to be used. According to DataEthics, the individual being in charge of her data is a prerequisite for ethical AI. 

The proposed Lockean ownership may however have a drawback. Just as traditional economic resources can be distributed unevenly, so can digital resources, which according to Rawls leads to inequalities in the ability to exercise one's liberties. Locke himself proposed a social contract in which free agents handed some of their autonomy to a government in order to organize society. To Rawls, then, property rights are means to an end and must serve to realize moral powers. In his theory, ownership is to serve two principles of justice as fairness: 1) each person has the same liberties and the same opportunity to exercise them, and 2) inequalities must derive from fair equality of opportunity and be of the greatest benefit to the least-advantaged members of society. He thereby echoes the ethical principles of the EU's HLEG model, in which AI must respect citizens' rights, democracy and justice, the freedom of the individual and equality, and promote non-discrimination and solidarity.

Rawls' solution for how resources are to be distributed to achieve these moral principles is encapsulated in the property-owning democracy, which entails a widespread predistribution of resources, creating background equality and undercutting the tendency toward domination. Legislators and political parties become independent of large concentrations of private power, while self-respect is provided to a public that is guaranteed autonomy and self-determination. Property-owning democracy also has an educational effect in that citizens are motivated to participate in public debate. This furthers inclusion and counteracts false beliefs, strengthening democracy.

Rawls' theory is compatible with the public owning its data and thereby determining AI systems. But don't ownership and participation in this form carry the risk of AI becoming a battlefield of conflicting stakeholders? As the German Trade Union Confederation (DGB) notes, the importance ascribed to AI systems is backed not only by scientific and altruistic interests but by economic and political ones, while UNESCO observes that AI technology is developed and led by multinational companies, most of them operating in the private sector and less obligated to the public good. However, stakeholder theory has pointed out the opportunities in stakeholder cooperation, which generates value precisely because stakeholders can jointly satisfy their needs and desires by making voluntary agreements with each other and will furthermore, due to their common agreement, also share responsibility. Data too ought to be viewed within the epistemic paradigm of collective agents, where knowledge is negotiated via networks. This is supported by the German Data Ethics Commission (DEK), which views data ownership as a common good distributed within data economies in which all stakeholders have a veto.

Conclusion

We have reflected on the challenge which ethical AI poses to responsibility; in complex and dynamic AI systems we will often be unable to map all causal paths which determine AI development. If we are to prevent biased AI systems, we must include as much of society as possible, which can identify and prevent bias and which will share moral responsibility. Since an organization has only a limited scope, the suggested solution is to turn the public into stakeholders who have a direct say in how AI is developed. Turning ownership of AI over to the public can be argued for with Locke's notion of ownership applied to the society of late modernity, where each individual is an extensive source of data produced by the body. An equal distribution of digital resources is furthermore essential to democratic participation and fairness, while at the same time promoting democratic cooperation amongst stakeholders. Following these arguments, the task ahead is to define how such a digital distribution can take form: what does data ownership entail, and what is the exact relation between AI and data? What is the role of the state and the public, how are different stakeholders such as private persons, large corporations and science to interact, and how is the public enabled to make informed decisions about AI? Such issues must be tackled, not only for the sake of ethical AI in terms of democratic and social equality, but also because AI may put us in a spot where we need to choose between several moral imperatives. At what point do we, for instance, accept biased systems in favor of some disease-curing solution? These are however questions which once again lie at the heart of society and which its moral human agents need to answer.

Literature and Resources

AI HLEG – Ethics Guidelines for Trustworthy AI – The EC High-Level Expert Group on Artificial Intelligence – (2019), URL: https://ec.europa.eu/futurium/en/ai-allianceconsultation

Alpaydin, Ethem – Machine Learning: The New AI – MIT Press, Cambridge (2016)

bitkom – Empfehlungen für den verantwortlichen Einsatz von KI und automatisierten Entscheidungen –  (2018), URL: https://www.bitkom.org/sites/default/files/file/import/180202-Empfehlungskatalog-online-2.pdf

Buckner, Cameron – Adversarial Examples and the Deeper Riddle of Induction – (2020), URL: https://arxiv.org/ftp/arxiv/papers/2003/2003.11917.pdf

Barocas, Solon; Hardt, Moritz; Narayanan, Arvind – Fairness and Machine Learning:  Limitations and Opportunities – (2019), URL: https://fairmlbook.org/pdf/fairmlbook.pdf

DataEthics – DataEthics Principles – (2018), URL: https://dataethics.eu/data-ethics-principles/

Datenethikkommission der Bundesregierung DEK, Bundesministerium des Innern, für Bau und Heimat – Gutachten der Datenethikkommission – (2019), URL: https://www.bmi.bund.de/SharedDocs/downloads/DE/publikationen/themen/it-digitalpolitik/gutachten-datenethikkommission.pdf?__blob=publicationFile&v=4 

Deutscher Gewerkschaftsbund DGB – Artificial Intelligence and the Future of Work A discussion paper of the German Confederation of Trade Unions concerning the debate on artificial intelligence (AI) in the workplace – (2019), URL: https://www.dgb.de/themen/++co++4f242f08-18a7-11e9-b2c1-52540088cada 

Deutsche Telekom – Leitlinien für Künstliche Intelligenz – (2018), URL: https://www.telekom.com/de/konzern/digitale-verantwortung/details/ki-leitlinien-der-telekom-523904

Dreyfus, Hubert L. – What Computers Still Can’t Do: A Critique of Artificial Reason – The MIT Press, USA (1992)

Ethikbeirat HR Tech – Richtlinien für den verantwortungsvollen Einsatz von Künstlicher Intelligenz und weiteren digitalen Technologien in der Personalarbeit – (2019), URL: https://www.ethikbeirat-hrtech.de/wp-content/uploads/2019/09/Ethikbeirat_und_Richtlinien_Konsultationsfassung_final.pdf

Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS – Trustworthy Use of Artificial Intelligence – (2019), URL: https://www.iais.fraunhofer.de/content/dam/iais/KINRW/Whitepaper_Thrustworthy_AI.pdf

Freeman, Edward R. & Phillips, Robert A. – Stakeholder Theory: A Libertarian Defense – Business Ethics Quarterly, 12(3): 331–349 (2002)

Goldman, Alvin & O’Connor, Cailin; Zalta, Edward N. (ed.) – Social Epistemology – The Stanford Encyclopedia of Philosophy (2019), URL: https://plato.stanford.edu/entries/epistemology-social 


Locke, John – Two Treatises of Government and A Letter Concerning Toleration – CreateSpace Independent Publishing Platform (2015)

Manifesto for Agile Software Development, URL: https://agilemanifesto.org/

Mitchell, Melanie – Artificial Intelligence: A Guide for Thinking Humans – Penguin Books UK (2020)

Noorman, Merel; Zalta, Edward N.  (ed.)  – Computing and Moral Responsibility – The Stanford Encyclopedia of Philosophy (2018), URL: https://plato.stanford.edu/archives/spr2018/entries/computing-responsibility/ 

O’Neil, Cathy – Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy – Penguin Books UK (2017)

Parasuraman, Raja; Sheridan, Thomas B.; Wickens, Christopher D. – A Model for Types and Levels of Human Interaction with Automation – IEEE Transactions on Systems, Man, and Cybernetics – (2000)

Rawls, John (1) – A Theory of Justice: Revised Edition – Harvard University Press; 2nd edition (1999)

Rawls, John (2) – Justice as Fairness – A Restatement – Harvard University Press (2003)

UNESCO & COMEST – Preliminary study on the Ethics of Artificial Intelligence – (2019), URL: https://unesdoc.unesco.org/ark:/48223/pf0000367823 

Wesche, Tilo – The Concept of Property in Rawls’s Property-Owning Democracy – Analyse & Kritik 35 (1):99-111 (2013)

Stakeholder Participation in AI

As an agile software development team we want to ensure that our products are ethically sound. The societies in which our AI systems operate are complex and dynamic, which is why human participation is essential (Parasuraman, p. 293). Digital technologies are at the same time challenging us to reevaluate our ethical frameworks, and if AI is to serve the public as a whole, it is essential that civil society and state organs together concern themselves with how AI is to be used ethically (DEK, p. 13-14; Parasuraman, p. 286). As such, we are concerned with many interrelated and potentially conflicting domains. As the German Trade Union Confederation (DGB) notes, the importance ascribed to AI systems is backed not only by scientific and altruistic interests but by economic and political ones, while UNESCO observes that AI technology is developed and led by multinational companies, most of them operating in the private sector and less obligated to the public good (DGB, p. 2; UNESCO, p. 8). If AI is to enrich society and adhere to human ethical principles, it is critical that these vast dimensions of stakeholders are harmonized and included in our designs.

Providing epistemic justice

Germany's federal Data Ethics Commission (DEK) holds that the state must provide the legal means for citizens, companies and state organizations both to make use of their ethically based rights and to comply with their duties (DEK, p. 69). In digital society this includes providing infrastructure and technical prerequisites, such as enabling technologies, institutions and mediators, whilst including a broad spectrum of stakeholders within civil society (DEK, p. 69). As a federal body, it is not surprising that the solutions listed by DEK mostly relate to bureaucratic and legal domains, such as developing further data protection laws, strengthening supervising institutions, setting up norms and standards and strengthening consumer and commercial associations (DEK, p. 17, 28, 29, 30).

UNESCO however reminds us that such public policy measures restrict themselves to AI governance, good practice and education for engineers, while keeping the stakeholder scope rather narrow (UNESCO, p. 23, 24). As Parasuraman notes, AI is able to make, or is already making, its mark on a large part of our everyday lives, so that [a]lgorithms have come to play a crucial role in the selection of information and news that people read, the music that people listen to, and the decisions people make (UNESCO, p. 3; Parasuraman, p. 286). If a common agreement on AI is to be reached which represents all stakeholders, easily neglected aspects must be included, such as culture, education, science and communication (UNESCO, p. 24). In this view, it must be safeguarded that no epistemic injustice is created with the development of AI, such as being wronged in one's capacity as a knower or being deprived of knowledge (Goldman & O'Connor).

Side A: Data

One aspect of AI and digital society relates to the use of data, which is created, gathered and used within a complex network of stakeholders. The ethical dimensions of data include, on the one hand, objective requirements on how data may be used (which is often already regulated by laws such as the GDPR), but also subjective rights which stakeholders have against other stakeholders (DEK, p. 16). Data exists in networks of stakeholders and poses a challenge to regulation, which needs to safeguard the rights of individual or collective stakeholders in increasingly complex and dynamic data ecosystems (DEK, p. 17). Important here is the DEK's take on data ownership: generating data does not lead to ownership; instead, other stakeholders have a veto in the data economies of which they are all a part. How data is used ethically thus depends on factors which the stakeholders themselves bring to the table, such as how data is gathered, what stakeholder interests are concerned, public interest, democratic values, etc. (DEK, p. 17). In philosophical terms, data ought to be viewed within the epistemic paradigm of collective agents, where knowledge, in this case data, is negotiated via networks (Goldman & O'Connor).

That data usage is today not always negotiated with all concerned stakeholders becomes clear in the context of the workplace. Here the handling of employee data must be designed to protect the employees' personal rights, something which is to some extent handled by laws such as the European GDPR (DGB, p. 5). However, there is no data protection law specifically aimed at protecting the data of employees as employee stakeholders, nor do current laws provide for a general right of co-determination (DGB, p. 5). There are however such regulations with regard to co-determination of and participation in the introduction and use of technical equipment suitable for monitoring performance and behaviour (DGB, p. 5). This falls short of providing employees with a veto right once a system is in place which alters their professional life by using anonymous data.

Side B: Algorithmic systems

In contrast to data, the ethical discourse here resides in the relation between man and machine, especially with regard to automation and to placing decision-making processes in the hands of autonomous systems. In contrast to the data perspective, stakeholders do not necessarily have anything to do with the data which the system is processing, but may still be affected in an ethically relevant way (DEK, p. 24). DEK advocates a human-centered design which builds human values into the system by considering basic rights and freedoms (DEK, p. 163). This view should run through the entire organisation and process of software development, equipped with an inclusive and participatory stance (DEK, p. 163; Goldman & O'Connor). As with the question of data, algorithmic systems too must include a variety of stakeholders who can safeguard ethical compliance and minimize the risk of bias, discrimination, epistemic injustice, etc. However, the network idea of data is not as easily applied to algorithmic systems, so the developers and the organisations building these systems are given a greater expert role, which also carries with it a larger social responsibility (DEK, p. 163).

The solution is not technological

Most developers will tell you: the solution is not the technology, but the design. A decisive factor for the success of AI in society is the transparent, verifiable and controllable design of the system, which is created by and around humans (DEK, p. 163; DGB, p. 4). As the European Union notes, promoting trustworthiness in machines is necessary to create broad acceptance, which in turn will enable us to reap the benefits of such systems (EU, p. 4). Essential is a broad network of participation, where stakeholders are involved and can co-determine the definition of the objectives of AI systems (Freeman & Phillips, p. 333). The state is to create the foundations for ethical and inclusive AI by regulation, but also by helping civil society to better understand and control its data and the systems which are being applied. As UNESCO has shown, the state's view is limited, and in order to prevent epistemic injustice and to safeguard the rights, freedoms and welfare of populations, locally, nationally or globally, the real challenge is to include all stakeholder views in an interconnected global and digital world. This cannot merely be an issue for policy making, but needs to be a task for politics and civil society (Goldman & O'Connor).

References

  • Parasuraman, Raja; Sheridan, Thomas B.; Wickens, Christopher D. – A Model for Types and Levels of Human Interaction with Automation – IEEE Transactions on Systems, Man, and Cybernetics – (2000)

AI Consciousness Test and the Turing Test

Susan Schneider goes beyond the Turing Test and suggests that whether a computer system can be viewed as conscious should be determined by looking at a variety of criteria and tests. One of these tests is the AI Consciousness Test (ACT). It is worth noting that while the Turing Test focuses on behavior by avoiding the question of what is actually going on inside the mind of the machine, the ACT, while also focusing on behavior, aims at revealing the properties of the machine's mind (Kind, p. 13). Like Turing, Schneider believes that passing the ACT should be seen as sufficient but not necessary for consciousness, thereby avoiding a human-centric bias and opening up the option that there may be other types of consciousness than the one developed in humans (Kind, p. 12).


The ACT then is to determine if a machine is conscious by evaluating if the machine has developed views of its own about consciousness and whether it is reflective about and sensitive to the qualitative aspects of experience (Kind, p. 12) . 


Importantly, we need to ensure that the machine has not been provided with any information about consciousness; we do not want a table of mappings whose rules a system merely repeats. It has to come up with an answer itself.

The machine is asked questions which are held to be answerable only by a system that is conscious of itself and its surroundings. These questions would for example include:

  • What is it like to be you right now?
  • Could you survive the permanent deletion of your program?
  • How does it react to seeing a color for the first time, and how does it describe the experience? (Kind, p. 12)


There are several objections to the Turing test which can be adapted to the ACT. Lady Lovelace's objection holds that a machine passing the Turing test shows only that it has good programming, not that it thinks. In order to count as thinking, a machine would have to show originality or creativity relative to its programming (Kind, p. 10). However, as Kind notes, this sets an unreasonably high bar for thinking, since each human is in a way programmed throughout life. Nevertheless, given that in the ACT the system has no mapped information about the consciousness questions it is asked, this objection does not seem to hold for the ACT, since the system is actually using other bits of data creatively to find a matching answer. How this is then done inside the system may be a different story.


More problematic for the ACT is the argument from consciousness, which holds that we cannot identify mental states with behavior and that thinking is more than behaving in a thinking manner (producing a matching output to an input); what matters is what is going on inside. The problem here, of course, is that there is no way to know exactly what is going on in a machine. Turing's response is that neither do we know what is going on inside other humans, and so we cannot deny computers the concept of thinking while giving this privilege to humans, about whom we have basically the same evidence (Kind, p. 11).

The argument from consciousness, which is also posed by Searle, remains a problem for the ACT. We are indeed able to ask the system fundamental questions which supposedly can only be answered by a conscious being. But here Searle's objection to the Turing test, the Chinese room, comes into play: how do we know that the output, even though in the ACT it is not programmed as a mapped answer, carries any meaning for the machine? How do we know that a response such as "I am confused" is in some sense similar to the phenomenological experience of being confused (there being something that it is like to be confused)? Turing's response is not very convincing, since in everyday life humans need to work with many assumptions about the world. One of these assumptions is that other humans, who are of the same fundamental structure as I am, have a mental or psychological structure similar to mine. Even science is based on assumptions which are verified by not failing. Turing's response is a logical but abstract one. It also seems reasonable to hold that the same mental structure is realizable in systems which have the same makeup, as we have seen in the differentiation between pain realized in a human, a machine and an octopus (Kim, p. 152).

  • Kim, Jaegwon (2011) “Mind as Computer: Machine Functionalism” in: Philosophy of Mind. Routledge, Chapter 5.
  • Kind, Amy (2020) “Machine Minds” chapter 5 in: Philosophy of Mind. Routledge

It’s human nature: how companies handle bias in Artificial Intelligence

As an agile software development team we view ourselves as accountable for the AI which we deploy; we safeguard its usage and effects by technical means, but also through the process of distribution, which consists of human action. The latter is vital, since these systems are only capable of choosing the means and not the actual goal (Fraunhofer, p. 16). They do not think autonomously, but are trained to fulfill specific subtasks autonomously (bitkom, p. 4), and in contrast to humans, to AI everything is merely a simulation (Ethikbeirat HR-Tech, p. 5).

The ethical challenge

As the system is to follow human goals, we need to make sure that the desired results of AI are ethically sound (Ethikbeirat HR-Tech, p. 4), whilst being aware that AI systems may have effects beyond their original scope which concern society as a whole. The ethics which an AI system must adhere to can however not be implemented as code in which every question that arises produces a binary yes/no answer from a specific problem context (Fraunhofer, p. 12). Ethics is subject to change, and there is no consensus on the correct moral system. Safeguarding ethical AI is hence a task which humans must perform. Fraunhofer and Ethikbeirat both offer approaches deriving from moral frameworks: Fraunhofer uses Germany's Grundgesetz, stating that people cannot be degraded to mere objects of actions, banning disproportionate restriction of individual or group autonomy (Fraunhofer, p. 13, 16) but also protecting general social interests (Fraunhofer, p. 15). This framework also includes the principle of fairness, which bans treating the same social issues unequally or differing ones equally (Fraunhofer, p. 16). Individuals may thus not be discriminated against on the basis of affiliation with a certain social group, for example by being given a better or worse machine evaluation (Fraunhofer, p. 16). Ethikbeirat points out the individual's right to dignity, autonomy and privacy and the worth of individuality itself (Ethikbeirat HR-Tech, p. 5, 20).

Bias

AI applications learn by generalizing from example data; the quality of the system hence greatly depends on the data stock used. bitkom defines bias as machine prejudice, which entails discrimination due to data input or algorithm (bitkom, p. 5, 9). Ethical AI hence needs data that is statistically representative of the data that occur during operation and free of prejudice (Fraunhofer, p. 11, 16). However, bias in data is not created in a vacuum, but reflects the prejudices that are inherent in society (Ethikbeirat HR-Tech, p. 13). What the focus on merely clean data misses, in our opinion, is that bias can be inserted into the system at any stage of the workflow, from conceptualizing the AI and how the system is run to making decisions based on its results. How AI is used ethically must thus, according to Ethikbeirat, result from the right interaction of data quality, its analysis and how conclusions and actions are derived (Ethikbeirat HR-Tech, p. 13).

Solutions in development

On the software side, the use case which is developed should include the application area, purpose and scope as well as affected persons, so that all settings and players affected are involved in the development process (Fraunhofer, p. 15; Ethikbeirat HR-Tech, p. 19). Ethikbeirat adds that whoever uses AI needs to ensure that they fully understand the AI system, from the basic technical structure to the interpretation of output, and needs to train their employees in the usage of AI (Ethikbeirat HR-Tech, p. 20). bitkom suggests that the development should include ethical training, but also that the team concerned be diverse in order to detect potential bias (bitkom, p. 9).

Special focus should also be put on the quality assurance of data (bitkom, p. 9). Harder than finding data is ensuring that the data collected is clean, relevant and free from bias (Ethikbeirat HR-Tech, p. 14; Fraunhofer, p. 16). Thus a variety of QA measures are to be taken, including documentation, finding potential sources of error and an ethical analysis of the output (bitkom, p. 8). Another instrument against bias would be programming a quantifiable fairness term, depending on the defined groups that should not be discriminated against (Fraunhofer, p. 16).
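
As a sketch of what such a quantifiable fairness term could look like (the guideline does not prescribe a specific formula, so the metric below, a demographic parity gap, is only one common and assumed choice):

```python
# Hypothetical example of a fairness term: demographic parity gap between two groups.
from typing import Sequence

def demographic_parity_gap(preds: Sequence[int], groups: Sequence[str],
                           group_a: str, group_b: str) -> float:
    """Absolute difference in positive-prediction rates between two groups."""
    def rate(g: str) -> float:
        member_preds = [p for p, grp in zip(preds, groups) if grp == g]
        return sum(member_preds) / len(member_preds) if member_preds else 0.0
    return abs(rate(group_a) - rate(group_b))

# Toy usage: a gap of 0 would mean both groups receive positive machine
# evaluations at the same rate; here group "a" is favoured (gap ~ 0.33).
print(demographic_parity_gap([1, 0, 1, 1, 0, 0],
                             ["a", "a", "a", "b", "b", "b"], "a", "b"))
```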

Another question is whether certain ML models or datasets should be used at all. Ethikbeirat for instance questions the extent to which prognoses of human behavior can have validity for the future at all, since humans change and evolve (Ethikbeirat HR-Tech, p. 13, 14). To them, historical data should also not be understood as providing any normative specification; there are other guidelines, such as quotas, which are ethically important (Ethikbeirat HR-Tech, p. 21). Since AI systems are often black boxes in which only the external behavior can be observed and the internal function mechanisms are not accessible, it may be advisable to avoid certain types of ML models or, if necessary, to supplement the ML model with an explanation model that calculates which parts of the input were decisive for a certain result (Fraunhofer, p. 11, 15).
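
A minimal, model-agnostic stand-in for such an explanation model (my own sketch, not taken from Fraunhofer) perturbs one input feature at a time and records how often the decision changes; the black-box function below is a made-up placeholder for a trained system:

```python
# Hypothetical permutation-style feature-influence estimate for a black-box model.
import random

random.seed(1)

def black_box(features):                      # stand-in for an opaque ML model
    income, age, postcode_risk = features
    return 1 if (0.7 * income - 0.5 * postcode_risk) > 0.2 else 0

def feature_influence(model, sample, n_trials=200):
    base = model(sample)
    influence = []
    for i in range(len(sample)):
        flips = 0
        for _ in range(n_trials):
            perturbed = list(sample)
            perturbed[i] = random.random()    # replace feature i with noise
            flips += model(perturbed) != base
        influence.append(flips / n_trials)    # how often the decision changes
    return influence

# The third value shows how strongly the postcode proxy drives the decision.
print(feature_influence(black_box, [0.9, 0.4, 0.8]))
```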

Transparency and inclusion

The interaction possibilities between the AI application and the user need to be clearly and transparently regulated (bitkom, p. 7), and users need to be familiarized with the risks related to potential impairment of their autonomy, their rights, obligations and options to intervene (Fraunhofer, p. 16; bitkom, p. 5). Ethikbeirat adds broad participation as an important component for ensuring a broad and inclusive assessment of the system which may prevent bias (Ethikbeirat HR-Tech, p. 19). bitkom also addresses transparency but is more restrictive: transparency regarding source code may make systems more prone to misuse and hacker attacks, and transparency must also stay compatible with trade secrecy (bitkom, p. 5, 7).

Conclusion

We are glad to see an overall awareness of the ethical implications of AI, as becomes clear in the three guidelines presented. Overall, a critical view of data and its interpretation is present, together with a toolbox of technical solutions, which however have their limits, and an understanding that the ethical soundness of a system must from beginning to end be provided by humans, whose own bias must also be addressed. It is also the bias-prone human who is to make any final decision (bitkom, p. 7, 9; Ethikbeirat HR-Tech, p. 19).

It is this second point which is stressed especially by Ethikbeirat, who emphasize and have a clear vision of the human and normative domain which must guide AI. We see a risk that bias is handled merely via QA of input data, thereby ignoring the limits of historical data and the fact that bias can be introduced at several points of the system and its application. Furthermore, though we may hire diverse teams, bias remains something which continually needs to be analyzed and counteracted.

We are glad to see that both Fraunhofer and Ethikbeirat apply current moral frameworks as a means to ensure that an AI solution produces ethically sound and fair results. Digitalization gives rise to new ethical problems, such as the implications of human/machine interaction (Fraunhofer, p. 12; bitkom, p. 6). Only by tackling these topics can we create a basis for how AI systems are to be interpreted, how data is freed from bias and how decisions based on AI predictions are to be made.


General Problem Solvers and the Frame Problem

The General Problem Solver (GPS) solves problems such as the "Missionaries and Cannibals" puzzle, a task set within a monotonic world where only a few static sets of rules and bits of knowledge are needed and the best or next solution can be calculated within a limited world (the puzzle or the game). It would probably fare well with chess too. GPS is symbolic in that it has subprograms which change its current state and rules which encode the constraints. The system thus walks through several scenarios, always trying to change its current state to make it more similar to the desired state, always checking its coded subprograms and rules (Mitchell, p. 7); a sketch of this kind of limited-world search is given below. Its symbolic nature is also evident in that to GPS it does not matter what is contained in its strings of code: any string of nonsense can be processed (Mitchell, p. 8).
To Dennett, GPS avoids the frame problem precisely because it takes the shortcut to solving a problem; it merely defines a limited world and installs the required knowledge and rules which are needed to solve this specific task (Dennett, p. 198). A similar approach could be taken with chess or any other task where the knowledge and the rules needed are limited.
However, this approach of installing everything that is needed into a machine runs into trouble when the knowledge and rules needed for a task are not as clear. In the end, interacting with the world as humans do cannot be framed as easily as a chess game or a maths or logic puzzle. The frame problem addresses exactly the problem of defining what knowledge is needed in a given situation, how it is to be used and how it must change in an everyday situation, like making a sandwich in the middle of the night. An intelligent agent, to Dennett, must engage in swift information-sensitive "planning" which has the effect of producing reliable but not foolproof expectations of the effects of its actions (Dennett, p. 193).
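
The limited-world, symbolic search described above can be illustrated for Missionaries and Cannibals roughly as follows (my own sketch, using a plain breadth-first state-space search rather than GPS's actual means-ends procedure): states are symbolic tuples, operators transform them under explicit rules, and the search stops when the goal state is reached.

```python
# Hypothetical sketch: breadth-first search over the Missionaries and Cannibals state space.
from collections import deque

START = (3, 3, 1)          # (missionaries, cannibals, boat) on the left bank
GOAL = (0, 0, 0)
MOVES = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]   # who crosses in the boat

def safe(m, c):
    # rule: on neither bank may cannibals outnumber missionaries
    return (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c)

def successors(state):
    m, c, boat = state
    sign = -1 if boat == 1 else 1              # the boat leaves the bank it is on
    for dm, dc in MOVES:
        nm, nc = m + sign * dm, c + sign * dc
        if 0 <= nm <= 3 and 0 <= nc <= 3 and safe(nm, nc):
            yield (nm, nc, 1 - boat)

def solve():
    frontier, seen = deque([(START, [START])]), {START}
    while frontier:
        state, path = frontier.popleft()
        if state == GOAL:
            return path
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))

print(solve())                                 # one shortest sequence of states
```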

The semantic problem of systems like GPS is just what information must be installed in order to fulfill a certain task. In chess or the cannibals puzzle, the rules and knowledge for solving the puzzle or playing the game are limited and do not change, i.e. they are framed. In everyday situations, however, like getting a midnight snack, enormous amounts of detailed information are required. Furthermore, an agent could believe all that it needs to believe about an empirical matter and still be unable to represent it in the right way or make use of it (Dennett, p. 194). Hence, what must be known is also relative to the situation at hand. We must also be able to use and update the information on the fly in a changing and dynamic world. This hints at the question of relevance, which concerns both which information needs to be called upon and, just as importantly, which information is irrelevant to the situation, but also which information needs to be updated: if I move around in the room, my position and relation to my surroundings change, but which of these changes are relevant is exactly the frame problem. The syntactic problem then regards the logic of how information is stored. This means we run into problems both with storing all those bits of information and with how a system calls upon only the relevant bits of data in a fitting order, without the processing taking forever. GPS can calculate the best next move on a game board or which logical move will bring it closer to a desired state, but we run into the problem of relevance and/or machine processing power if the system is to calculate all possible moves and results in a world of limitless possibilities.

Dennett, Daniel C. (1984) “Cognitive Wheels: The Frame Problem of AI” in: C. Hookway (ed.) Minds, Machines and Evolution. Cambridge University Press.

Mitchell, Melanie. (2019) Artificial Intelligence. Farrar, Straus and Giroux. Chapter 1: "The Roots of Artificial Intelligence"

What constitutes a mind according to machine functionalism?

Functionalism is the philosophical paradigm on which machine functionalism rests. It is easy to see why functionalism would seem attractive to computational views and to machine functionalism. Functionalism starts off with Realization Physicalism:

If something x has some mental property M (or is in mental state M) at time t, then x is a physical thing and x has M at t in virtue of the fact that x has at t some physical property P that realizes M in x at t. (Kim, p.130)

Hence, anything that exhibits mentality must be a physical system. Furthermore, every mental property is physically based; each occurrence of a mental property is due to the occurrence of a physical realizer of the mental property (Kim, p. 131). This relates to the second theme of functionalism, the multiple realization of mental properties, which holds that different physical systems can realize the same mental properties. The functional view of mentality, which the term functionalism implies, is that mental concepts are defined by their function, not by the realizing system in the background. As an example, an engine may be constructed using various different techniques, but all engines perform the same basic job. For functionalism, what binds multiple realizations of mental concepts together is thus sought at a causal-functional level. Hence, the concept of pain is defined in terms of its function, which serves as a causal intermediary between typical pain inputs and typical pain outputs (Kim, p. 133). Important is also that the causal conditions that activate mental mechanisms can include other mental states, and that the outputs of mental mechanisms can include mental states as well (Kim, p. 134). This holistic approach to the mind hence views mental events as both causes and effects in a given mental network, forming a complex causal network which engages with input from the outer world and converts it into a fitting output (Kim, p. 138).

At this point it is easy to see why functionalism lends itself to computational views of the mind and to machine functionalism in particular. On one hand, there is the conception of a mental state occupying a certain specific causal role in a network, which, if definable or formalizable, can also be computed. On the other hand, there is the idea of the multiple realization of internal states. Just as vastly different biological systems can realize the same cognitive processes, different computer systems should be able to execute the same computational program. Machine functionalists hence think of the mind as a Turing machine, and what it is for something to have mentality is for it to be a physically realized Turing machine, with its mental states identified with the realizers of the internal states of the machine's instructions (Kim, p. 148).
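
As a toy illustration of the machine-table idea (my own example, not taken from Kim), the "psychology" below is fully specified by how each internal state maps an input to an output and a next state; on machine functionalism, any physical system realizing this table would thereby share these states, whatever it is made of:

```python
# Hypothetical machine table: internal states individuated purely by causal role.
MACHINE_TABLE = {
    # (current state, input)     : (output,             next state)
    ("normal", "tissue damage")  : ("wince and groan",  "pain"),
    ("pain",   "tissue damage")  : ("cry out",          "pain"),
    ("pain",   "aspirin")        : ("relax",            "normal"),
    ("normal", "aspirin")        : ("do nothing",       "normal"),
}

def run(inputs, state="normal"):
    for inp in inputs:
        output, state = MACHINE_TABLE[(state, inp)]
        print(f"{inp!r:>17} -> {output} (now in state {state!r})")

run(["tissue damage", "tissue damage", "aspirin"])
```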


Whether (machine) functionalism amounts to intelligence very much depends on what one means by intelligence. Some projects aim at getting to appropriate outputs through any kind of system, while others are more concerned with creating something resembling the human mind. Even Turing noted that it is not the case that the Turing test is necessary for thinking; it merely shows that it is sufficient for thinking to be going on. Hence, if by intelligence one means that certain inputs lead to fitting outputs in specific situations, a functionalist system may indeed be called intelligent. After all, related approaches like neural networks and weak AI seem to fare well with this way of structuring complex causal systems.

However, our conclusion is more complex if we take human intelligence as the reference. As Kim points out, multiple realization requires that two systems share an identical psychological setup; yet we cannot believe that a human, a machine and an octopus share the same psychology (Kim, p. 152). Functionalists may answer that it is not necessary that the total psychologies coincide, but only that there is some Turing machine which covers some specific mental concept in both systems. This however leaves us with the practical problem of how to isolate, for example, "pain psychology" from the entire psychology. A related issue regards Hubert Dreyfus' observation that human intelligence is necessarily bound to the human body and that what distinguishes persons from machines is precisely having an involved, situated, material body (Dreyfus, p. 235-237). It is hence not the case that large chunks of information are stored and processed in our brain; being able to act in the world has rather more to do with practical skills of maneuvering on the fly without processing large amounts of data (Dreyfus, p. 260).


This also relates to Searle's Chinese room, which is an extension of the Turing imitation game, and to the critique that even though a computer may be programmed so as to produce the same content as a human presumably would, the end product has no meaning to the machine; it may act as if it were speaking Chinese, but it actually has no idea of what language is, and would be what Chalmers calls a zombie. Intelligence, it could be argued, is not just about functions which produce some end result; there is something that it is like to feel pain, to speak a language or to drink a cold beer (Chalmers, p. 104, 295; Russell & Norvig, p. 1033). Chalmers, however, is no materialist, and the qualia of experience are to him not reducible to purely material aspects (Chalmers, p. 26).



Chalmers, David J. – The Character of Consciousness – Oxford University Press, New York 2010

Dreyfus, Hubert L. – What Computers Still Can’t Do: A Critique of Artificial Reason – The MIT Press, USA 1992 

Kim, Jaegwon (2011) “Mind as Computer: Machine Functionalism” in: Philosophy of Mind. Routledge, Chapter 5.

Russell, Stuart & Norvig, Peter – Artificial Intelligence – A modern approach, third edition – Pearson Education Limited, Harlow 2016

Inscrutable Features: The epistemology of Artificial Intelligence and limits of human knowledge

Are Deep Learning networks able to provide knowledge about the world which we as humans cannot gather ourselves? Do such networks even tell us something about a reality which is not intelligible to humans? These questions are at the heart of the debate sparked by some of the recent developments in deep neural network systems brought about by Machine Learning. Machine Learning networks are based on statistics and probability theory; what these systems do extremely well is to find statistically based patterns in large amounts of data (Mitchell 2019). Usually, an engineer will train a system both by adding training data and by providing an evaluation of the system's calculation, giving the system reason to further fine-grain its algorithm. However, this is also where the systems' weakness lies, for while deep learning models can routinely achieve superior performance on novel natural data points which are similar to those they encountered in their training set, presenting them with unusual points in data space […] can cause them to produce behavior that the model evaluates with extreme confidence, but which can look to human observers like a bizarre mistake (Buckner, p. 2). Such systems regularly make mistakes in understanding human language and logic (Alpaydin, p. 91) or classifying images (Dreyfus, p. xv, p. xxxiii, p. xxxv), and they are also easily deceived by strategically manipulated adversarial examples (Buckner 2020). But what if a falsely identified picture is not really a mistake, but contains pieces of knowledge about the world brought about by a non-human intelligence? Researchers have recently shown that DNNs were discovering highly-predictive features in adversarial examples that generalize well to novel real-world data; they were hence not merely concerned with junk, but gained usable insights from the manipulated data, apparently because the features they detected carry predictively-useful information that is present in real-world input data (Buckner, p. 6). So what we perceive may not be all there is to the world, and machines may help us get at those hidden truths.

This thesis has some philosophical tradition to back it up. Plato held that intelligence means acting according to ideal ideas, which however remain unknown to us (Platon, p. 283-285), and Kant worried that nothing we perceive can be guaranteed independent of processing constraints imposed by our cognitive architecture (Buckner, p. 18). More recently, Kris McDaniel has argued in a similar manner that humans make inductive inferences based on a large enough sample size, which is however never complete; it is just a way of getting on in the world (McDaniel 2020; Buckner, p. 9). There nonetheless remain things in the world which are "projectible", corresponding to objects that objectively belong together, but which we do not directly perceive (McDaniel 2020). The fundamental claim underlying the idea of human-inscrutable features thus relates to the tradition which differentiates between what something looks like, for a human or a machine, and what it really is (Buckner, p. 6).
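
To illustrate what strategically manipulated adversarial examples exploit, here is a minimal numpy sketch in the spirit of the fast gradient sign method (my choice of technique; none of the cited sources present this code), applied to a toy linear classifier rather than a deep network: a per-feature change far smaller than the natural variation of the input is enough to flip the prediction.

```python
# Hypothetical adversarial perturbation of a toy linear classifier (not a real DNN).
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=1000)                 # weights of the toy classifier

def predict(x):
    return int(w @ x > 0)

x = rng.normal(size=1000)                 # a "natural" input
if predict(x) == 0:
    x = -x                                # make sure the clean input is class 1

# smallest uniform step against the weight signs that pushes the score below zero
eps = 1.01 * (w @ x) / np.abs(w).sum()
x_adv = x - eps * np.sign(w)

print("clean prediction:", predict(x))                            # 1
print("adversarial prediction:", predict(x_adv))                  # 0
print("max per-feature change:", float(np.abs(x_adv - x).max()))  # == eps, tiny
```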

There has been some critique aimed at this line of thought, but that is not what this paper is about. So given that there is a distinction between features of the world and features which humans perceive, if there is a realm of knowledge which stays hidden from the human mind, could it still be of use to us? The science of protein folding had long been held to consist of properties which cannot be reduced to patterns in lower-level details, which is why abstract explanatory models were needed (Buckner 2020). The AlphaFold system was however able to beat these models on a majority of the test proteins given and achieve a 15% jump in accuracy (Buckner 2020). I am not a scientist, so I do not know whether this 15% increase is a world-changing development. But what this hints at, according to Buckner, is that the “interaction fingerprints” which deep learning networks were able to identify in this scenario are just like the sorts of (non-robust) features that cause image-classifying networks to be susceptible to adversarial attacks (Buckner, p. 11). They are features which human intelligence does not grasp.

If we assume that ideas such as Buckner’s carry some weight, the question arises whether, and how, information derived from human inscrutable features should be used.

On the one hand, there are technical limitations which we ought to be concerned about. As Melanie Mitchell points out, neural networks become ever more opaque with growing depth and complexity. This is also due to the fact that the machine learning system autonomously finds its own way to produce the most accurate result (Mitchell 2019). So if a system is not only opaque in its inner workings, but its output is also based on human inscrutable features, it becomes rather tricky, if not impossible, to do any form of quality assurance on the system. A trustworthy system, according to the European Commission’s expert group on AI, is also a system which adheres to explicability: it needs to be transparent about why it generated a certain output (European Commission, p. 19), something which a system relying on human inscrutable features will have a hard time doing.

Another drawback due to the systems’ setup is that subsymbolic systems may be well suited for perceptual or motor tasks for which humans cannot easily define rules, but they are weak when it comes to logic and reasoning (Mitchell 2019). As Buckner notes, modeling the difference between causation and correlation is a characteristic weak spot for deep learning (Buckner, p. 21). So there are clear limits on what kind of information such systems can validly provide, and a corresponding responsibility on the part of users to put the results into context.

Another problem is that, since we are dealing with features which may or may not be junk, we cannot tell in advance whether something is of value. Furthermore, even if some feature is recognized as being predictive, this does not mean that it will tell us something about the causes and effects that we are interested in (Buckner, p. 16). Buckner sees this too and believes that whether and how such features are used must depend on their relevance to our purposes and on how we interpret them (Buckner, p. 14). This, however, leaves the problem of how such an analysis is to be done. Buckner believes that establishing tools of taxonomy, etiological theory and causality will help us frame the human inscrutable features and make them more usable and predictable (Buckner, p. 11). But there is, I believe, a valid suspicion here: not only would such a framing system, as proposed by Buckner, be man-made and perhaps distort the features when they are forced into a human context, but human inscrutable features can also be held to be merely the results of man-made training data and evaluations.

On a more abstract level, we can ask what good information derived from human inscrutable features actually does us. On the one hand these systems seem, without us knowing how, to enhance our tooling, as becomes apparent in AlphaFold. But if we do not know how this was done, it does not seem to enhance our knowledge in any meaningful way. It is a bit like giving somebody a fish instead of teaching the person how to fish (Buckner, p. 10).

All this does not take away the credit due to a system that finds patterns, as in the case of AlphaFold. Such a system seems suited to provide insights in fields of vast and detailed data, where the result may trigger progress regardless of whether our knowledge of the causes increases. This may work well for many projects in the natural sciences, but less so for philosophy, where it is arguably the way in which we derive a conclusion that matters. And this, I hold, goes for many of mankind’s projects. As we do not know about the validity of the data (junk or inscrutable, biased by humans), we cannot depend on it blindly, and we surely should not make life-changing decisions based on such calculations. On the ethical side of things, we will find it hard to hold a system or its engineers accountable for any results the system may produce if the level of abstraction makes its inner workings ever more opaque.

Literature

Alpaydin, Ethem (2016) – Machine Learning: The New AI – MIT Press, Cambridge

Buckner, Cameron (2020) – Adversarial Examples and the Deeper Riddle of Induction – https://arxiv.org/ftp/arxiv/papers/2003/2003.11917.pdf

Dreyfus, Hubert L. (1992) – What Computers Still Can’t Do: A Critique of Artificial Reason – The MIT Press, USA

European Commission (2019) – Ethics guidelines for trustworthy AI – https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai

McDaniel, Kris (2020) – This is Metaphysics – Wiley Blackwell, Chapter 1.4: Do Things Objectively Belong Together?

Mitchell, Melanie (2019) – Artificial Intelligence – Farrar, Straus and Giroux, Chapter 2: Neural Networks and the Ascent of Machine Learning

Platon (1952) – Sämtliche Werke II – Phaidon Verlag, Wien

A brief summary of Deep Neural Networks and Machine Learning

As a response to the limitations of symbolic AI systems, connectionism, the approach behind what we now call neural networks, gained traction in the 1980s. Inspired by the brain, the idea is that knowledge stored in networks of weighted connections between units enables systems to learn on their own and makes them better suited than other approaches to tasks that lie squarely in the human domain.

Multilayered networks consist of an input layer, at least one layer of hidden (non-output) units, and a layer of output units. Each output unit corresponds to one of the possible results. Each input has weighted connections to the hidden units, and each hidden unit has weighted connections (with weights usually initialized randomly) to the hidden units in the next layer or to the output units. This is meant to resemble the brain, in which some neurons directly control outputs such as muscle movements, but most neurons simply communicate with other neurons. Networks that have more than one layer of hidden units are called deep networks.
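To make the structure concrete, here is a minimal sketch in Python. It is not taken from Mitchell's text; the sizes and the names W_hidden and W_output are my own illustrative assumptions. It sets up a network with three input units, one hidden layer of four units and two output units, where the weighted connections of each layer are stored as a randomly initialized matrix:

    import numpy as np

    rng = np.random.default_rng(0)
    # weighted connections from the 3 inputs to the 4 hidden units
    W_hidden = rng.normal(size=(4, 3))
    # weighted connections from the 4 hidden units to the 2 output units
    W_output = rng.normal(size=(2, 4))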

Each unit multiplies each of its inputs by the weight on that input’s connection and then sums the results. Each unit then uses this sum to compute its “activation”, a value between 0 and 1 (close to 0 if the sum is low, close to 1 if the sum is high). The network performs its computations layer by layer, progressively transforming the input, with each hidden unit computing its activation value; at the end, these activation values become the inputs for the output units, which then compute their own activations. The output unit with the highest score is the system’s answer. What this amounts to is a system which structurally breaks down inputs, looks for patterns, compares the variations and tries to match its results to certain output types.
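The following sketch illustrates this layer-by-layer computation; it is again only an illustration under my own assumptions, reusing the hypothetical matrices W_hidden and W_output from above and using a sigmoid as the function that turns a weighted sum into an activation:

    import numpy as np

    def sigmoid(z):
        # squashes the weighted sum into an activation between 0 and 1
        return 1.0 / (1.0 + np.exp(-z))

    def forward(x, W_hidden, W_output):
        hidden = sigmoid(W_hidden @ x)       # each hidden unit: weighted sum of inputs, then activation
        output = sigmoid(W_output @ hidden)  # the output units repeat the step on the hidden activations
        return output

    # the system's "answer" is the output unit with the highest activation:
    # answer = int(np.argmax(forward(x, W_hidden, W_output)))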

What is called the cost function is part of how a system is actively taught to make fewer errors. Basically, the function measures how far the network’s actual result is from the intended goal. The system is then run iteratively, with changes being made to the weights and result and goal steadily compared, in search of the setup which minimizes the cost function.
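One common concrete choice of cost function, offered here only as an example of my own rather than anything Mitchell prescribes, is the mean squared error, which averages the squared distance between the network's result and the goal:

    import numpy as np

    def mse_cost(result, goal):
        # average squared distance between what the network produced and what it should have produced
        return float(np.mean((np.asarray(result) - np.asarray(goal)) ** 2))

    # training then means adjusting the weights until mse_cost(result, goal) is as small as possible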

The feedback or correctional process of backpropagation, which trains the network, is especially interesting but often hard to get a grip on. Basically, the backpropagation algorithm takes the errors observed at the output units and, since neural networks are set up sequentially, looks backward from the last layer for each hidden unit’s contribution to the error, and then adjusts the weights in the direction that reduces the cost function. Learning in neural networks hence consists in gradually modifying the weights on the connections. The trouble is that neural networks can have thousands of units in several layers, which is why partial derivatives, computed via the chain rule, are applied.
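The self-contained sketch below shows what this amounts to for the tiny two-layer network used above. It is only an illustration under my own assumptions (sigmoid activations, a squared-error cost, hypothetical names W1 and W2), not Mitchell's code. The partial derivatives are worked out by the chain rule, starting from the error at the output units and moving backward through the layers:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(0)
    x = rng.normal(size=3)            # a single input example
    goal = np.array([1.0, 0.0])       # the desired output
    W1 = rng.normal(size=(4, 3))      # input -> hidden weights
    W2 = rng.normal(size=(2, 4))      # hidden -> output weights
    learning_rate = 0.5

    for step in range(1000):
        # forward pass, layer by layer
        hidden = sigmoid(W1 @ x)
        output = sigmoid(W2 @ hidden)

        # error observed at the output units
        error = output - goal

        # backpropagation: push the error backward with the chain rule (partial derivatives)
        delta_output = error * output * (1 - output)
        delta_hidden = (W2.T @ delta_output) * hidden * (1 - hidden)

        # learning = gradually modifying the weights so that the cost shrinks
        W2 -= learning_rate * np.outer(delta_output, hidden)
        W1 -= learning_rate * np.outer(delta_hidden, x)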

Symbolic systems are engineered by humans who to some extent control the contents of the system. Subsymbolic systems’ procedures are harder to get a grip on. As Mitchell hints, this has to do with the enormous number of nodes and connections which would need to be analyzed and visualized. But it also seems to be because the system really works autonomously in looking for patterns, while the programmer merely states whether errors were made and adds input; no other parameters are added or changed transparently. Furthermore, like many other systems, it is a general machine which contains no prior settings with regard to the issue it is working on.

There are some philosophical questions which come to mind.

The neural network is supposed to structure intelligent processes in a way similar to the brain. Due to its complexity, it is hard to understand exactly how a neural network arrived at some result, since it autonomously looks for patterns and matches them to some output. This certainly has moral implications. We would like a system to be able to explain how it got to certain results. But how different is this from human experience, really? Sure, if asked what motivates our actions or thoughts, we are able to conjure up plausible explanations, but given all that we know about our unconscious motivations, how is our black box different from an AI black box? Beyond the technical setup of neural networks, how does this approach differ philosophically from the systems we have encountered thus far? At first it seems impressive what a bit of software is able to do. But when we think about the critiques of AI, from the frame problem to the issues with Turing machines or the Chinese room, do neural networks solve any of these issues? Or are the same issues merely placed in a new system?

Mitchell, Melanie (2019) – Artificial Intelligence – Farrar, Straus and Giroux, Chapter 2: Neural Networks and the Ascent of Machine Learning