It’s human nature: how companies handle bias in Artificial Intelligence

As an agile software development team, we view ourselves as accountable for the AI we deploy: we safeguard its use and effects by technical means, but also through the deployment process, which consists of human action. The latter is vital, since these systems are only capable of choosing the means and not the actual goal (Fraunhofer, p. 16). They do not think autonomously, but are trained to fulfill specific subtasks autonomously (bitkom, p. 4), and in contrast to humans, to an AI everything is merely a simulation (Ethikbeirat HR-Tech, p. 5).

The ethical challenge

As the system is to follow human goals, we need to make sure that the desired results of AI are ethically sound (Ethikbeirat HR-Tech, p. 4), while being aware that AI systems may have effects beyond their original scope which concern society as a whole. The ethics an AI system must adhere to cannot, however, be implemented as code in which every question that arises produces a binary yes/no answer from a specific problem context (Fraunhofer, p. 12): ethics is subject to change, and there is no consensus on a single correct moral system. Safeguarding ethical AI is hence a task which humans must perform. Fraunhofer and Ethikbeirat both offer approaches derived from moral frameworks. Fraunhofer uses Germany's Grundgesetz, stating that people must not be degraded to mere objects of actions, banning disproportionate restrictions of individual or group autonomy (Fraunhofer, p. 13, 16) while also protecting general social interests (Fraunhofer, p. 15). This framework also includes the principle of fairness, which forbids treating the same social issues unequally or different ones equally (Fraunhofer, p. 16). Individuals may thus not be discriminated against on the basis of affiliation to a certain social group, for example by being given a better or worse machine evaluation (Fraunhofer, p. 16). Ethikbeirat points out the individual's right to dignity, autonomy, and privacy, and the worth of individuality itself (Ethikbeirat HR-Tech, p. 5, 20).

Bias

AI applications learn by generalizing from example data, so the quality of the system greatly depends on the data stock used. bitkom defines bias as machine prejudice, which entails discrimination due to the data input or the algorithm (bitkom, p. 5, 9). The data an ethical AI learns from hence needs to be statistically representative of the data that occurs during operation and free of prejudice (Fraunhofer, p. 11, 16). However, bias in data is not created in a vacuum, but reflects the prejudices that are inherent in society (Ethikbeirat HR-Tech, p. 13). What, in our opinion, a focus on merely clean data misses is that bias can be introduced into the system at any stage of the workflow, from conceptualizing the AI and running the system to making decisions based on its results. According to Ethikbeirat, the ethical use of AI must thus result from the right interaction of data quality, its analysis, and how conclusions and actions are derived (Ethikbeirat HR-Tech, p. 13).
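To illustrate what statistical representativeness can mean in practice, here is a minimal sketch in Python; the column name, group labels, and population shares are hypothetical, and a chi-square goodness-of-fit test stands in for whatever checks a team would actually use:

```python
import pandas as pd
from scipy.stats import chisquare

# Hypothetical training data with one sensitive attribute.
train = pd.DataFrame({"gender": ["m"] * 9 + ["f"]})

# Assumed group shares in the population the system will serve during operation.
population_shares = {"f": 0.5, "m": 0.5}

observed = train["gender"].value_counts()
expected = [population_shares[group] * len(train) for group in observed.index]

# Goodness-of-fit test: a small p-value flags training data whose group
# distribution deviates from the expected operational population.
stat, p_value = chisquare(f_obs=observed, f_exp=expected)
if p_value < 0.05:
    print(f"Training data may not be representative (p = {p_value:.3f})")
```

Such a check only covers representativeness of one attribute at a time; it says nothing about prejudice encoded in the labels themselves, which is exactly why the guidelines insist on human review beyond clean data.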

Solutions in development

On the software side, the use case being developed should specify the application area, purpose, and scope as well as the affected persons, so that all affected settings and actors are involved in the development process (Fraunhofer, p. 15; Ethikbeirat HR-Tech, p. 19). Ethikbeirat adds that whoever uses AI needs to ensure that they fully understand the AI system, from its basic technical structure to the interpretation of its output, and needs to train their employees in the usage of AI (Ethikbeirat HR-Tech, p. 20). bitkom suggests that development should include ethical training, and also that the team concerned be diverse in order to detect potential bias (bitkom, p. 9).
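One lightweight way to make such a use-case definition explicit is a structured record the team fills out before development starts. The sketch below is our own illustration; none of the field names or example values are prescribed by the guidelines:

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    """Use-case record to be filled out before development starts."""
    application_area: str   # e.g. "pre-screening of job applications"
    purpose: str            # what the system is meant to achieve
    scope: str              # boundaries of legitimate usage
    affected_persons: list  # groups touched by the system's output
    stakeholders_involved: list = field(default_factory=list)

case = AIUseCase(
    application_area="CV pre-screening",
    purpose="rank incoming applications for human review",
    scope="supports, never replaces, the recruiter's decision",
    affected_persons=["applicants", "recruiters"],
    stakeholders_involved=["HR", "works council", "data protection officer"],
)
print(case)
```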

Special focus should also be put on the quality assurance of data (bitkom, p. 9). Harder than finding data is ensuring that the collected data is clean, relevant, and free from bias (Ethikbeirat HR-Tech, p. 14; Fraunhofer, p. 16). A variety of QA measures should thus be taken, including documentation, identifying potential sources of error, and an ethical analysis of the output (bitkom, p. 8). Another instrument against bias is programming a quantifiable fairness term, depending on the defined groups that should not be discriminated against (Fraunhofer, p. 16).
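What such a quantifiable fairness term might look like is sketched below. We use a demographic-parity gap, i.e. the spread in positive machine evaluations across the defined groups; this particular metric and the tolerance threshold are our own assumptions, not ones mandated by Fraunhofer:

```python
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Spread between the highest and lowest rate of positive outcomes
    across the defined groups; 0.0 means perfect parity."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical machine evaluations (1 = positive) per applicant and group.
results = pd.DataFrame({
    "group":    ["a", "a", "a", "b", "b", "b"],
    "positive": [1,   1,   0,   0,   0,   1],
})

gap = demographic_parity_gap(results, "group", "positive")
if gap > 0.2:  # tolerance threshold chosen by the team (an assumption here)
    print(f"Fairness criterion violated: parity gap = {gap:.2f}")
```

Which metric and threshold are appropriate is itself an ethical decision that depends on the defined groups and the use case, which is why it belongs to the human side of the process rather than to the code alone.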

Another question is whether certain ML models or datasets should be used at all. Ethikbeirat, for instance, questions the extent to which predictions of human behavior can be valid in the future at all, since humans change and evolve (Ethikbeirat HR-Tech, p. 13, 14). In their view, historic data should also not be understood as providing any normative specification; other guidelines, such as quotas, remain ethically important (Ethikbeirat HR-Tech, p. 21). Since AI systems are often black boxes in which only the external behavior can be observed and the internal mechanisms are not accessible, it may be advisable to avoid certain types of ML models or, if necessary, to supplement the ML model with an explanation model that calculates which parts of the input were decisive for a certain result (Fraunhofer, p. 11, 15).
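As one simple stand-in for such an explanation model, permutation importance measures how much a model's score degrades when each input feature is shuffled; note that this yields a global picture of which inputs matter, rather than an explanation of a single result, and is our choice of technique, not one the guideline names. A sketch with scikit-learn on synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a trained black-box model and its data.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and record how much the accuracy drops:
# the larger the drop, the more decisive that input was for the model.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: mean importance = {importance:.3f}")
```

If a sensitive attribute, or an obvious proxy for it, turns out to be decisive, that is a strong signal to revisit the data or the model choice.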

Transparency and inclusion

The interaction possibilities between the AI application and the user need to be clearly and transparently regulated (bitkom, p. 7), and users need to be familiarized with the risks related to potential impairment of their autonomy, with their rights and obligations, and with their options to intervene (Fraunhofer, p. 16; bitkom, p. 5). Ethikbeirat adds broad participation as an important component for ensuring an inclusive assessment of the system, which may prevent bias (Ethikbeirat HR-Tech, p. 19). bitkom also addresses transparency but is more restrictive: transparency regarding source code may make systems more prone to misuse and hacker attacks, and transparency must stay compatible with trade secrecy (bitkom, p. 5, 7).

Conclusion

We are glad to see an overall awareness of the ethical implications of AI, as becomes clear in the three guidelines presented. A critical view of data and its interpretation is present throughout, together with a toolbox of technical solutions, which however have their limits, and an understanding that the ethical soundness of a system must be provided by humans from beginning to end, whose own bias must also be addressed. It is also the bias-prone human who is to make any final decision (bitkom, p. 7, 9; Ethikbeirat HR-Tech, p. 19).

It is this second point which Ethikbeirat stresses especially, with a clear vision of the human and normative domain that must guide AI. We see a risk that bias is handled merely via QA of input data, ignoring both the limits of historic data and the fact that bias can be introduced at several points of the system and its application. Furthermore, though we may hire diverse teams, bias remains something that steadily needs to be analyzed and addressed.

We are glad to see that both Fraunhofer and Ethikbeirat apply current moral frameworks as a means to verify that an AI solution produces ethically sound and fair results. Digitalization gives rise to new ethical problems, such as the implications of human/machine interaction (Fraunhofer, p. 12; bitkom, p. 6). Only by tackling these topics can we create a basis for how AI systems are to be interpreted, how data is freed from bias, and how decisions based on AI predictions are to be made.

References
