Major US health insurer

Building trustworthy AI in healthcare

IF built the capacity for a major US health insurer to deploy AI that improved care, reduced costs, mitigated potential risks, and created great experiences.

An illustration with a sage coloured background. The illustration shows a series of posters that describe elements of the client's responsible AI work.
"I really want to have a strong position to operate from and to think from, and a strong advisor who’s committed to that, then I think that’s why you want to work with IF."

Head of Innovation

Problem

Our client had developed an AI system that could detect disease earlier in a care pathway, recommending whether people need to be sent for cervical cancer screening. This had the potential to reduce the costs of providing healthcare, as well as to deliver better health outcomes for members.

The team knew the AI system performed reliably for the specific context and demographics of one US state, but needed to adapt the technology to prove it was safe to use across other states.

Part of the challenge was earning public, clinical and member trust in the use of AI in healthcare. But there could also be reputational and clinical harm if the power of predictive models were misused, if predictions were made without meaningful consent, or if the impacts of testing were not rigorously considered.

An illustration with a sage coloured background. The illustration shows a phone with a breaking news story on it.

Our approach

Earning trust with AI is innovation work. There are no silver bullets. The systems that AI is applied to are often complex, and the impacts of changes difficult to predict. But we felt that starting early would enable our client to explore more ambitious uses of AI, and ultimately deliver better services for people.

Emerging technology and platforms

We ran a series of download workshops with the client, and used our experience of building machine learning systems to ask questions that explored where the hard problems were. From this we were able to map out an AI lifecycle and identify who should be accountable for each stage.

The term 'bias' in AI sparks a lot of fear for companies, but few people understand what it means. We started by describing bias as a question of balance of representation in training data, whether in terms of social demographics or of how long someone had been a member. We shared how imbalances in any of these factors affect how accurate a model will be for different groups, and why they need to be understood and designed for. What matters is the impact a given bias will have, and whether that is relevant to the people our client wanted to apply the model to.
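A representation check of this kind can be very simple in practice. The sketch below is illustrative only, assuming hypothetical member records with a made-up `tenure` attribute; it is not the client's actual tooling, just one way to surface which groups are under-represented in training data before accuracy is measured per group.

```python
from collections import Counter

def representation_report(records, attribute, threshold=0.1):
    """Share of each group for one attribute in the training data.

    Returns {group: (share, under_represented)} where under_represented
    flags groups whose share falls below the chosen threshold.
    """
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {
        group: (n / total, n / total < threshold)
        for group, n in counts.items()
    }

# Hypothetical records: 5 newer members vs 95 established members
records = [{"tenure": "new"}] * 5 + [{"tenure": "established"}] * 95
print(representation_report(records, "tenure"))
# 'new' members make up 5% of the data and are flagged as under-represented
```

A flagged imbalance is not automatically a problem; as noted above, it depends on whether that group is relevant to the population the model will be applied to.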

This technical work resulted in a series of audit questions that spanned from setting up the model development environment to storing and acting on model predictions.
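Audit questions spanning the lifecycle can be kept in a simple structured form so each stage has an owner and a checklist. This is a minimal sketch; the stages, owners and questions below are illustrative examples, not the client's actual audit content.

```python
# Illustrative AI audit checklist keyed by lifecycle stage.
# Each stage carries an accountable role and its audit questions.
audit_checklist = {
    "development environment": {
        "owner": "engineering lead",
        "questions": [
            "Is access to training data logged and role-restricted?",
        ],
    },
    "training data": {
        "owner": "data science lead",
        "questions": [
            "Is representation across member groups documented?",
        ],
    },
    "predictions": {
        "owner": "clinical lead",
        "questions": [
            "How long are predictions stored, and who can act on them?",
        ],
    },
}

for stage, entry in audit_checklist.items():
    for question in entry["questions"]:
        print(f"[{stage} / {entry['owner']}] {question}")
```

Keeping the checklist as data rather than a document makes it easy to run the same prototype audit against each release.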

Full stack design and development

We mapped user journeys to surface the unhappy paths through the service, where trust is more heavily relied on, and to find points of risk. In particular, we built an understanding of the unintended consequences of model predictions and their impact on downstream services.

This gave the team a shared understanding of the opportunity space and an agreed prioritisation of risks. It resulted in a short-term plan with practical next steps, and a risk register for the technical team to run a prototype audit against.

Hype free digital strategy

Throughout the project we distilled our insights and expertise into a digital strategy for an AI audit process. The strategy spanned company policy, audit structures and communication strategies for different user groups. It was situated in our client's product maturity plans, taking account of changing regulation and its impact on how and when they would develop their AI audit programme.

An illustration with a sage coloured background. The illustration shows steps in a journey: write AI ethics principles, run a trial audit, learn/improve/repeat in a cycle, leading to an AI audit in 2020.

Outcomes

Through this work we helped the team lay the foundations for responsible innovation, which had lasting impact within the business. We were able to change their cost model, which had two significant benefits: the team reduced medical and operating expenses by $50M in year one alone, and it became easy to do the right thing.

Ultimately, the team were able to deploy the AI systems within care pathways to provide more proactive care. They also created a Medical Council, enabling the organisation to undertake more innovation with AI well ahead of competitors.

As a result of deploying the AI system, our client not only benefited from cost savings but also saw twice the level of engagement from members.

An illustration with a sage coloured background. The illustration shows a low-fi prototype of a training data card.

