The DeepMind Health team were experiencing backlash as they developed Streams, a product that helped doctors and nurses deliver faster, better care to patients. The team wanted to do the right thing, and asked IF to help them.
Working alongside DeepMind's engineering and product team, we needed to solve a series of complex ethical, operational and product challenges.
Transparency is often reduced to a statement, issued regularly, disclosing statistics about requests for user data and records. We looked to go beyond that and consider transparency through the lens of people and what they need. That felt important because we anticipated DeepMind Health would go on to develop more systems that use health data in ways people don't expect.
If DeepMind Health were to succeed, they needed people to trust them. Transparency is a powerful way to build trust, and a way that DeepMind Health could be held to account. This is particularly important in healthcare, where openness is essential to building a transformative digital health innovation ecosystem.
None of our work involved patient records or live data.
We believed one way to achieve transparency for patients and clinicians was a user-facing audit log of how data was being used. This built on verifiable data structures, an emerging technology for establishing confidence in how data is used in practice. To test hypotheses around this idea, we created prompts and provocations showing how this transparency infrastructure could appear along a patient pathway, and began researching with a range of users. We gathered insights from this exploratory research, listed user needs, and used these to hone our ideas and, later, our prototypes.
Alongside research and design, we iteratively built up a technical architecture of how verifiable data structures could form a trust layer for Streams. This helped us test our ideas, and make the prototypes more robust from a technical perspective.
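To make that concrete, here is a minimal, hypothetical sketch of the kind of structure involved: an append-only, hash-chained access log, where tampering with any entry breaks every hash after it. The architecture we prototyped used fuller verifiable data structures (Merkle trees), so treat this as an illustration of the principle rather than the production design.

```python
import hashlib
import json
from dataclasses import dataclass

@dataclass
class Entry:
    record: dict      # what happened: who accessed which data, and why
    prev_hash: str    # hash of the previous entry, chaining the log together
    entry_hash: str   # hash over this record plus prev_hash

class AuditLog:
    """Append-only log: changing any entry invalidates every later hash."""
    GENESIS = hashlib.sha256(b"genesis").hexdigest()

    def __init__(self):
        self.entries: list[Entry] = []

    def append(self, record: dict) -> Entry:
        prev = self.entries[-1].entry_hash if self.entries else self.GENESIS
        payload = json.dumps(record, sort_keys=True) + prev
        entry = Entry(record, prev, hashlib.sha256(payload.encode()).hexdigest())
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        prev = self.GENESIS
        for e in self.entries:
            payload = json.dumps(e.record, sort_keys=True) + prev
            if e.entry_hash != hashlib.sha256(payload.encode()).hexdigest():
                return False
            prev = e.entry_hash
        return True

log = AuditLog()
log.append({"actor": "clinician-123", "action": "viewed", "data": "blood-test-results"})
assert log.verify()
```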
We prototyped with FHIR (an emerging open healthcare standard at the time), verifiable data structures and data models. Prototyping helped us understand how best to make use of them. For example, we used a speculative technical architecture to tell the story of how data could be accessed safely from multiple trusts. We wanted to make sure our work was rooted in the reality of what these technologies could do, and what they made possible.
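As an illustration of the kind of record such a log might carry, here is a pared-down example modelled on FHIR's AuditEvent resource. The field names follow FHIR R4; the values, and the choice of AuditEvent itself, are our assumption for this sketch rather than the Streams data model.

```python
import json

# A pared-down FHIR AuditEvent: one record of a clinician viewing a result.
# Field names follow the FHIR R4 AuditEvent resource; values are illustrative.
audit_event = {
    "resourceType": "AuditEvent",
    "recorded": "2018-03-14T09:26:00Z",
    "action": "R",  # "R" is read/view in FHIR's action codes
    "agent": [{
        "who": {"display": "Dr A. Clinician"},
        "requestor": True,
    }],
    "source": {"observer": {"display": "Streams"}},
    "entity": [{
        "what": {"reference": "Observation/creatinine-result-001"},
    }],
}

print(json.dumps(audit_event, indent=2))
```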
We wanted to explore ways patients could be shown trustworthy, verifiable information about their treatment. We hypothesised this could give people more power over their health data, and enable them to decide how they contribute to things like medical research. As we thought about how to introduce that information to patients, we started to use a mix of digital and non-digital approaches.
Discharge letters are already familiar to many patients, so we explored combining information about someone's treatment – in a depth enabled by verifiable data structures – with information about whether the technology itself had proved effective.
In our initial research it became clear that exposing the whole log to people straight away was an overwhelming prospect. Instead, we developed a pattern of progressive disclosure, where patients who want that extra information can request it.
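A hypothetical sketch of what progressive disclosure over a single log entry might look like: a plain-language summary by default, with the full record shown only when someone asks for it. The fields and wording here are invented for illustration.

```python
# One audit entry, shaped for progressive disclosure: a glanceable
# summary up front, with the detailed record available on request.
entry = {
    "summary": "Your blood test results were viewed by your care team.",
    "detail": {
        "actor": "Dr A. Clinician, acute kidney injury team",
        "action": "viewed",
        "data": "creatinine result, 14 March, 09:26",
        "legal_basis": "direct care",
    },
}

def render(entry: dict, expanded: bool = False) -> str:
    if not expanded:
        return entry["summary"]
    lines = [entry["summary"]] + [f"  {k}: {v}" for k, v in entry["detail"].items()]
    return "\n".join(lines)

print(render(entry))                 # the default, glanceable view
print(render(entry, expanded=True))  # the full record, only when asked for
```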
We mapped the unhappy path of a patient entering hospital, and interwove it with the journeys of clinicians and information governance officers to understand where the biggest risks to Streams were. This user journey was the foundation of our thinking for the first few months, as we returned to it to prioritise our prototypes and wider work.
We identified the risks to the user and the wider ecosystem, focussing in particular on authentication and consent constraints for the service as ways to design in more trust. For example, given that a single method of two-factor authentication wasn't possible, how could we create a ceremony that established enough trust for a clinician to access patient data on their personal phone?
We made this tangible for the business and stakeholders through prototypes, one of which was selected to be taken into production because it addressed an area identified as high risk. We took the authentication pattern and developed it into an Android application with the engineering team. It could read NHS identity cards, verify an identity and generate a unique access code. The hardware and software for this were built and used in hospitals.
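To give a sense of the code-generation step of such a ceremony, here is a minimal sketch assuming an HMAC-based, time-limited code derived from the verified card identity, in the style of TOTP (RFC 6238). The production app's actual scheme isn't described here, so every detail below is illustrative.

```python
import hashlib
import hmac
import struct
import time

def access_code(card_id: str, secret: bytes, window: int = 30) -> str:
    """Derive a short-lived access code from a verified NHS card identity.

    Hypothetical HMAC-based scheme in the style of TOTP: the code changes
    every `window` seconds and is useless without the shared secret held
    by the hospital's authentication service.
    """
    counter = int(time.time()) // window
    msg = card_id.encode() + struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha256).digest()
    # Truncate to a 6-digit code, as TOTP does.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 1_000_000:06d}"

print(access_code("NHS-CARD-0001", secret=b"shared-secret"))
```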
As a result of the authentication product we designed and developed, the DeepMind Health team were able to deploy Streams into two NHS Trusts. Upon launch, Streams enabled clinicians to diagnose and treat patients earlier, resulting in one life saved per week.
Leadership could explain to patients, clinicians, policymakers and civil society how transparency would work in practice. This was enabled by a series of prototypes and films, underpinned by a technical architecture, that made the flow of data easier to understand at the point of use within the Streams app. These were used internally by product teams, and also to communicate with policymakers and civil society.
For instance, many patients will have visited a number of different hospitals during the course of their treatment, and had tests done in each. Currently, the process for accessing test results is either complicated or relies on informal channels. It can be hard to understand who has seen what information. In our prototype, we showed how verifiable data structures can support accountable access to data.
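Accountable access of this kind typically rests on Merkle inclusion proofs: a patient, or someone acting for them, can check that a claimed access record really is part of the tamper-evident log without downloading the whole thing. Below is a minimal sketch of the standard audit-path check, again illustrative rather than the production design.

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_inclusion(leaf: bytes, proof: list, root: bytes) -> bool:
    """Check a Merkle inclusion proof: that `leaf` is in the tree with `root`.

    `proof` is a list of (side, sibling_hash) pairs from leaf to root, where
    `side` says whether the sibling sits to the left or right. This is the
    standard audit path used by transparency logs.
    """
    node = _h(leaf)
    for side, sibling in proof:
        node = _h(sibling + node) if side == "left" else _h(node + sibling)
    return node == root

# Tiny worked example: a two-leaf tree.
a, b = _h(b"access-entry-1"), _h(b"access-entry-2")
root = _h(a + b)
assert verify_inclusion(b"access-entry-2", [("left", a)], root)
```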