AI and Responsibility in Agile Environments

In agile organizations we believe in the autonomy of the self-organizing product development team, and in the team being responsible for its product. As the UK Department of Health & Social Care (NHS) exemplifies, the Product Owner, as part of a cross-functional team, is aware of the department and user requirements and their context, and according to the agile framework also has the final say on where development is headed, what is prioritized, and when a deployment is made. This does not happen in a vacuum; rather, it is the team’s combined effort which steers the product. It is the team’s task not only to develop the software in alignment with stakeholders and other teams, but also to ensure sufficient QA and testing, security, stability and resilience, transparency and documentation. It is thus the team which carries active and passive role responsibility, and it is accountable for the AI product, since part of its task is to know the product at the deepest level, which includes being able to explain the AI’s behavior.

There are, however, two specific limitations to the team’s accountability and responsibility. The first derives from the fact that agile teams are often not fully in charge of their product in the way the Agile Manifesto and agile frameworks require. Typical causes include top-down hierarchies, hard requirements set by paying customers, or heavy dependencies on other software or data. Since freedom is a prerequisite for responsibility, any infringement upon the team’s autonomy will also spread responsibility accordingly. It is therefore vital that such a spreading of responsibility be made both explicit and traceable.
The second limitation is similar, and concerns any software’s dependency on elements outside the team. Whether it is an API or a dataset that is added to the algorithm does not matter here: if the team does not have full control over, or insight into, what is being fed into its product, its informedness decreases, and with it its responsibility (and, in the case of data, its accountability), which is instead spread along the paths of causal contribution. This is exemplified by the NHS, which defines dedicated instances of control and quality assurance, such as data controllers, data protection officers and clinical safety officers.

Deutsche Telekom and DataEthics take this approach a step further: even though role responsibility lies mostly with the product team, the entire organization behind the software is morally responsible to different degrees. DataEthics states that “Accountability is an integral part of all aspects of data processing, and efforts […] Sustainable personal data processing is embedded throughout the organisation and ensures ethical accountability in the short, medium and long term. An organisation’s accountability should also apply to subcontractors’ and partners’ processing of data.” Telekom goes one step further and writes: “As human beings, it is our decision which values artificial intelligence shall mirror in the end. Each and every human generation must take responsibility for respecting these values and this must never be forgotten. […] Since AI as a technology itself is neutral, it is our responsibility as human beings to decide what we want to utilize AI for. Technology only serves as a tool; the rights and wrongs of its usage must be and are questioned by digital ethics.” It is hence not only the coding which produces the product, but also the culture surrounding it which steers its path. As Deutsche Telekom has shown, informedness is spread through workshops, internal communication campaigns and eLearnings. The transparency which agile frameworks promote also keeps the organization in the loop about what is currently being developed, which further enhances informedness and thus accountability and responsibility.

In the case of the NHS, we are thus troubled by the fact that active responsibility is put exclusively on the software provider, who “should consider the needs of a diverse set of users, […] are responsible for not only ensuring their innovation complies with relevant legislation such as GDPR/Data Protection Act 2018 […]. They should also ensure the legal basis for processing confidential patient information.” While we agree that role responsibility lies with the developing instances, we want our users and customers to take an active role in our development which goes beyond providing feature requests. We are certainly helped by partners providing extensive documentation of what the product must deliver legally and ethically, as the NHS does; this also means that the partner is adhering to its active responsibility. We believe, however, that our partner, who in our agile framework is informed and actively included in our development process, must continue to take on moral responsibility for the product, which is developed together over a period of time.

Last but not least, it is our goal to make our users aware of the systems they are using and of how their data is being used. As DataEthics states, the individual is to be actively involved with regard to the data recorded about them: “The individual has the primary control over the usage of their data, the context in which his/her data is processed and how it is activated.” We do not want our product development cycle to stop at the deploy button; we believe that our product becomes not only functionally but also ethically better if end users are both informed and free to control their data, and thus take part in our shared responsibility for future AI.


A guide to good practice for digital and data-driven health technologies – GOV.UK

Data Ethics Principles · Dataetisk Tænkehandletank

Leitlinien für Künstliche Intelligenz | Deutsche Telekom

Principles behind the Agile Manifesto

What is a Product Owner?

