Understanding Automated Decisions: sprint 1 learnings

Ian and I have been working with the London School of Economics (LSE) to explore how to make automated decisions understandable. Automated systems are being used to make increasingly important decisions about people, often without any explanation of why a decision has been reached. This project is an opportunity to bring academics, technologists and designers together to explore ways of explaining automated decisions, so that people are able to understand, challenge and oversee the decisions made about them. We’ve been posting outcomes of the sprint on our Tumblr, but wanted to summarise what we’ve found so far.

Starting with a shared language

Before we began, Dr Alison Powell, Arnav Joshi, Paul-Marie Carfantan and Nandra Galang Anissa, the programme director and researchers from the LSE’s Data and Society programme, did some background research on how to explain automated decisions. They prepared a literature review packed with theories from papers exploring different ways of doing this. For example, Kroll et al. (2016) discuss procedural regularity: the idea that automated systems should make decisions consistently and without bias. Wachter et al. (2017) describe how counterfactual explanations could tell people why a particular decision was made, and help them understand what would need to change for them to receive a different result.
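To make the idea of a counterfactual explanation concrete, here is a minimal sketch in Python. The loan rule, threshold and wording are invented for illustration; they are not taken from Wachter et al.’s paper:

```python
# A toy counterfactual explanation, in the spirit of Wachter et al. (2017).
# The decision rule and threshold are hypothetical.

def decide(income: float, threshold: float = 30000) -> bool:
    """A toy automated decision: approve if income meets a threshold."""
    return income >= threshold

def explain(income: float, threshold: float = 30000) -> str:
    """Describe the smallest change to the input that would flip the decision."""
    if decide(income, threshold):
        return f"Approved. With an income below £{threshold:,.0f}, you would have been refused."
    shortfall = threshold - income
    return (f"Refused. If your income were £{shortfall:,.0f} higher "
            f"(at least £{threshold:,.0f}), you would have been approved.")

print(explain(24000))
# Refused. If your income were £6,000 higher (at least £30,000), you would have been approved.
```

The explanation doesn’t reveal how the decision was made internally; it only describes what would need to change for a different result, which is the appeal of the approach.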

This approach to research is very different to the design research methods we are used to. It was exciting to learn about new ways of explaining automated decisions and think about how they could be applied practically.

Technical terms for non-specialists

We realised the desk research used lots of new terms that we needed to understand and define. Doing this would let us create definitions accessible to people who are new to automated decision making.

Coming up with simple definitions for these different concepts was challenging. They may not be completely right, but defining the terms helped us reach a common understanding of what they meant to us as a team.

Definition of an automated decision. Image by IF CC BY-SA.

Prototyping practical responses to academic theory

We pulled out some interesting theories from the literature review and used them to inform our prototypes. We used Flock, a pay-as-you-fly drone insurance company, as a case study service for applying the theories we had learned about. We met Antton, the founder of Flock, at an event, and he was really interested in exploring how to better explain the automated decisions in their service. We think Flock already have some interesting ways of explaining automated decisions, and we have added a pattern from their service to our data permissions catalogue. We also came up with some more ways Flock’s automated decisions could be explained, and published the prototypes on our Tumblr.

Revealing other quotes for the same location to suggest that an automated system is making consistent decisions. Image by IF CC BY-SA.
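This prototype maps onto Kroll et al.’s procedural regularity: identical inputs should always produce identical decisions. A minimal sketch, assuming a deterministic quote function (the pricing rule below is invented, not Flock’s):

```python
# A minimal sketch of procedural regularity: the same inputs should
# always produce the same decision. The pricing rule is hypothetical.

def quote(location: str, wind_speed_mph: float) -> float:
    """A deterministic toy quote: a flat base price plus a wind surcharge."""
    return round(5.00 + 0.20 * wind_speed_mph, 2)

# Re-running the decision with the same inputs gives the same quote,
# which is what showing other quotes for the same location hints at.
assert quote("Hackney", 15) == quote("Hackney", 15)
print(quote("Hackney", 15))  # 8.0
```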

Using counterfactual explanations to show how different wind speeds and weather conditions could have contributed to a cheaper quote. Image by IF CC BY-SA.

We showed the prototypes to Antton to see how feasible some of the designs were. He talked to us about the challenge of balancing commercial needs, like keeping parts of the decision-making system hidden from competitors, with drone operators’ needs, like understanding exactly how a decision was made. Antton felt that some of the prototypes that revealed in detail how different data inputs changed the result might show competitors too much information.

Reviewing prototypes together

At the end of the week we came together to talk about how we had applied the theories, where they were relevant and whether there were any areas we had missed. It was great to loop back in with the LSE team and critically analyse the prototypes from an academic perspective. We were able to check whether we were interpreting and applying the theories correctly, and how we could improve the prototypes. We iterated the definitions and prototypes and posted them on our Tumblr.

Flock uses a rule-based system to calculate their insurance quotes and does not currently use machine learning. This makes it possible to explain to people exactly which factors contributed to the result of an automated decision. With machine learning models, it’s much harder to say confidently which factors influenced the end result.
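To illustrate why a rule-based system is easier to explain, here is a minimal sketch of a quote that records every rule it applies. The base price, factors and multipliers are invented for illustration and are not Flock’s actual pricing rules:

```python
# A minimal sketch of a rule-based quote with a factor-by-factor trace.
# The base price, factors and multipliers are hypothetical.

BASE_PRICE = 5.00  # hypothetical base price per flight, in GBP

# Each rule: (human-readable factor, condition on the flight, price multiplier)
RULES = [
    ("high wind speed", lambda f: f["wind_speed_mph"] > 20, 1.5),
    ("rain forecast",   lambda f: f["raining"],             1.3),
    ("built-up area",   lambda f: f["built_up_area"],       1.4),
]

def quote(flight: dict) -> tuple[float, list[str]]:
    """Apply each rule in turn, recording every factor that changed the price."""
    price, reasons = BASE_PRICE, []
    for factor, applies, multiplier in RULES:
        if applies(flight):
            price *= multiplier
            reasons.append(f"{factor} increased the price by {multiplier - 1:.0%}")
    return round(price, 2), reasons

price, reasons = quote({"wind_speed_mph": 25, "raining": False, "built_up_area": True})
print(f"£{price:.2f}")  # £10.50
print(reasons)          # ['high wind speed increased the price by 50%',
                        #  'built-up area increased the price by 40%']
```

Because every rule that fired is recorded, the service can show a drone operator the complete list of factors behind their price. A machine learning model offers no such trace by default.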

The next sprint will focus on how to explain decisions made by systems that use machine learning. Looking ahead, we hope to publish a summary of all our work online, and will be showing the work publicly in an exhibition in October.