Understanding Automated Decisions: sprint 2 learnings

Automated decisions are part of many services we use every day, but how they work is rarely explained or understood.

This matters because automated decisions have an impact at scale. Decisions made by automated systems should be transparent, explainable and accountable.

We’ve been working with the London School of Economics (LSE) to explore different ways of explaining automated decisions. We’re exhibiting our work from today until Friday 9th November at the Atrium Gallery at LSE. We are showing this work to demonstrate to industry and the public why automated decisions must be explained.

I recently wrote about our first design sprint, where we prototyped practical responses to academic theories. This post will focus on our second sprint, where we developed prototypes that show the possibilities of explaining decisions made by machine learning systems.

Flock’s rule-based system

In the first design sprint we used Flock as a case study. Flock is a pay-as-you-fly drone insurance company that uses a rule-based approach to calculate the cost of insurance. This means Flock’s developers have preprogrammed the system with a set of rules to follow. For example, if a pilot wants to fly when it is windy, the flight is higher risk and the premium will cost more.

In a rule-based system like this, the risk of the flight, and therefore the cost of insurance, is determined by the people who programmed the automated system.
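To make this concrete, here is a minimal sketch of what a rule-based premium calculation could look like. The rules, thresholds and prices are invented for illustration and are not Flock’s actual pricing logic.

```python
# A minimal sketch of a rule-based premium calculation.
# The rules, thresholds and prices are illustrative assumptions.

def flight_premium(wind_speed_mph: float, near_airport: bool, base_price: float = 5.00) -> float:
    """Return an insurance premium for a single drone flight."""
    price = base_price
    if wind_speed_mph > 20:   # windy flights are treated as higher risk
        price *= 1.5
    if near_airport:          # flying near an airport adds a fixed surcharge
        price += 2.00
    return round(price, 2)

print(flight_premium(wind_speed_mph=25, near_airport=False))  # 7.5
```

Because every factor and its effect on the price is written down by a person, the reasoning behind the result can be read directly from the rules.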

A machine learning approach

If machine learning is used, developers create a model, and the model makes predictions about risk. These predictions are based on large datasets and the process of reaching a prediction may not follow a defined path. In the drone flight example, the system would take many factors into account to predict how likely the pilot is to have an incident, and therefore how risky the flight is.

In a system that uses machine learning, it is the machine that makes predictions about risk. So the machine determines the cost of insurance, instead of the people who developed the system.
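As a rough sketch of the difference, the example below trains a small model on invented historical flight data and uses its predicted risk to set a price. The features, data and library choice (scikit-learn) are assumptions for illustration; the point is that the weighting of each factor is learned from data rather than written down by a developer.

```python
# A minimal sketch of the machine learning approach: the model, not a
# hand-written rule, maps flight conditions to a risk estimate.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented historical flights: [wind speed (mph), distance from an airport (km), pilot hours]
X = np.array([
    [5.0, 10.0, 200.0],
    [30.0, 1.0, 10.0],
    [12.0, 8.0, 150.0],
    [25.0, 2.0, 20.0],
])
y = np.array([0, 1, 0, 1])  # 1 = an incident occurred on that flight

model = LogisticRegression(max_iter=1000).fit(X, y)

# The predicted incident probability drives the premium. How much each factor
# matters was learned from the data, not programmed as an explicit rule.
new_flight = np.array([[22.0, 3.0, 40.0]])
risk = model.predict_proba(new_flight)[0, 1]
print(f"predicted incident risk: {risk:.2f}")
print(f"premium: £{5.00 * (1 + risk):.2f}")
```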

The challenge is understanding why the machine learning system made a particular prediction, and being able to demonstrate that the decision was fair.

Explaining machine learning results

We created a new use case to explore the challenges around explaining decisions in a machine learning system.

We used a fictional car insurance company that provides policies based on data about how people drive. Data is collected from a sensor in a driver’s car and sent to their insurance company. The data is analysed using machine learning, which determines the cost of their insurance.
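A minimal sketch of that pipeline, under assumed sensor fields and pricing, might look like the code below; the function names, thresholds and base price are all hypothetical.

```python
# A minimal sketch of the fictional pipeline: sensor readings are summarised
# into driving features, a trained model scores them, and the score sets the
# price. All names, thresholds and prices are hypothetical.

def extract_features(sensor_readings: list[dict]) -> list[float]:
    """Summarise raw per-second sensor readings into journey-level features."""
    speeds = [r["speed_mph"] for r in sensor_readings]
    hard_brakes = sum(1 for r in sensor_readings if r["decel_g"] > 0.4)
    return [max(speeds), sum(speeds) / len(speeds), float(hard_brakes)]

def monthly_premium(risk_score: float, base_price: float = 20.00) -> float:
    """Scale a base price by the model's predicted probability of a claim."""
    return round(base_price * (1 + risk_score), 2)

# Usage (with a model that exposes predict_proba, e.g. from scikit-learn):
# features = extract_features(journey_readings)
# risk = model.predict_proba([features])[0, 1]
# print(monthly_premium(risk))
```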

Machine learning could make risk prediction more accurate, but because of the complexity of many automated systems it will be hard for companies to be completely transparent. On top of that, how automated systems work is often a unique part of a company’s business model and may be commercially sensitive information.

Showing how individual data compares to a training dataset

The way organisations develop machine learning models can influence how the automated system makes decisions.

Developers train models to make predictions using a training dataset. Where that data comes from and how it was collected can affect the way the automated system makes decisions.

Comparing Data

This example shows a driver how data about their driving compares to information in the training dataset. This could help drivers understand more about the automated decisions made about them.
Eslami et al. (2018), Communicating Algorithmic Process in Online Behavioural Advertising
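One way to picture this kind of comparison is the sketch below, which tells a driver where a single driving metric sits relative to the training dataset. The metric and numbers are illustrative assumptions, not taken from the prototype.

```python
# A minimal sketch of the "Comparing Data" idea: show a driver where their
# data sits relative to the dataset the system was trained on.
import numpy as np

# Hard braking events per 100 miles across drivers in the (invented) training set
training_braking = np.array([2.1, 3.4, 1.8, 5.0, 2.9, 4.2, 3.1, 2.5])
driver_braking = 4.5

percentile = (training_braking < driver_braking).mean() * 100
print(f"You brake harder than {percentile:.0f}% of the drivers the system was trained on.")
```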

Showing the datapoints used to calculate a result

Machine learning systems use many factors to make predictions. It can be challenging for the people who developed the system to understand why certain decisions were made, let alone people using the service.

Attentive Explanation

This example uses a colour code to show a driver how the system analysed a recent journey. The orange bars show the parts of the journey the system paid more attention to. An explanation tells the driver why the section is highlighted.
Park, D. H., Hendricks, L. A., Akata, Z., Schiele, B., Darrell, T., & Rohrbach, M. (2016), Attentive Explanations: Justifying Decisions and Pointing to the Evidence
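The sketch below is one rough way such a highlight could be produced: journey segments whose attention weight is above a threshold are flagged, each with a short reason. The segments, weights and reasons are invented for illustration and are not taken from the prototype.

```python
# A minimal sketch of the "Attentive Explanation" idea: flag the journey
# segments the model paid most attention to and attach a short reason.
# Segments, weights and reasons are illustrative assumptions.

journey_segments = [
    {"road": "A40",         "attention": 0.05, "reason": "steady motorway driving"},
    {"road": "High Street", "attention": 0.62, "reason": "hard braking near a school crossing"},
    {"road": "Park Lane",   "attention": 0.33, "reason": "speed above the limit for 90 seconds"},
]

HIGHLIGHT_THRESHOLD = 0.3  # segments above this share of attention are shown in orange

for segment in journey_segments:
    marker = "ORANGE" if segment["attention"] >= HIGHLIGHT_THRESHOLD else "grey"
    print(f'{segment["road"]:<12} {marker:<7} {segment["reason"]}')
```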

These are just two of the prototypes we created; we’ve published the full list on our Tumblr.

What’s next?

It’s been brilliant to explore how to apply academic research to create more informed, practical outcomes. There is a lack of academic enquiry in product development, and this kind of partnership is something we want to see more of.

Come and have a look at our exhibition, running from today until 9th November at LSE!