We should know what's made a decision

Yesterday, Matt wrote about a research project we carried out into ways automated decisions made by public services could be made more legible. The themes explored in this work, however, are not just relevant to public services.

We're seeing a growing need for this kind of thinking across many of the commercial services we use every day.

Accountability isn’t just a public service problem

The sketches we made showed how we could make automated decisions more legible to users, to the groups who support them, and to the people ultimately responsible for them. We think these kinds of patterns will be important outside of public services too, especially as more machine-learning algorithms are used in products.

[Image: three design sketches showing how a service could make it clear that an algorithm has been involved in making a decision]


Take Facebook, for instance: users are presented with pre-selected news and stories that shape their understanding of what has taken place. The selection seems to be unique to each user, tailored on the basis of insights that users are not privy to.

Or insurance, a sector undergoing huge change because of the increasing amount of data available. As insurance becomes more granular and instant, as we've explored before, the error margins increase. People will need ways of correcting data, and for that to happen it needs to be clear which data they're able to influence.

Making services powered by algorithms more legible and accountable is critical. At the very least, service owners need to ensure that the products they make meet the requirements of the General Data Protection Regulation. More importantly, it's the right thing to do.

If you're interested in exploring any of this further, do get in touch. This is an area that we'll be increasingly focussing on this year.
