Explaining AI decision making: A sociotechnical approach

Dr Marisela Gutierrez Lopez has been collaborating with BDFI partner LV= General Insurance to explore opening up the processes behind AI decision making. How will this benefit organisations and the people affected by automated decisions?

Sociotechnical methods are helping us to create more inclusive ways of discussing AI across the public, private and academic sectors. They are therefore crucial for investigating how people shape and are shaped by AI systems, and for exploring the interrelations between people, algorithms, data, organisational procedures and the other factors that constitute these systems. For this purpose, we integrate social and technological expertise from across the University of Bristol and from our partners in industry and communities to empirically examine what makes AI explainable from a sociotechnical perspective.

The Explaining AI project started in July 2020 with the vision of examining the concept of “Explainable AI” (or XAI) in machine learning approaches used in decision making. Our aim was to move beyond technocratic perspectives, where explanations are framed as technical challenges, towards more inclusive approaches that consider what AI might mean for diverse data publics – particularly those not usually included in discussions about AI or explainability.

Working collaboratively with LV= General Insurance (LV= GI), a leading insurance provider in the UK, we are investigating the different levels of explanation of decision-making processes informed by machine learning models and their outcomes. In addition to our investigations in this commercial setting, we have also teamed up with two local partners – Black South West Network and Knowle West Media Centre – to explore the types of explanations that would make machine learning intelligible and actionable for these communities.

Reaching out to local communities

The community strand of our project is underpinned by design justice as a framework for reconstructing Explainable AI in collaboration with those at the margins of innovation. We avoid positioning ourselves as outsiders who tell communities what AI is or why it matters, and we are not aiming to solve the black-box problem. Instead, we start from the “bottom-up”, exploring community interests and concerns as a first step.

We are co-producing community-led XAI initiatives with our community partners to ensure machine learning decisions are communicated in relatable and actionable ways. This has given our partners ownership over the project and its outcomes. For example, each community partner is shaping their initiative by defining their own research questions and the focus of their community engagements.

These community-led initiatives allow for open and speculative conversations that generate knowledge (in contrast to traditional forms of XAI), moving from individual to community understandings of what constitutes AI, and shifting the focus of attention from the past and present to possible futures. The next steps of our project involve supporting the community groups’ engagements in their local areas to produce new XAI approaches that empower and give agency to different data publics.

Embedding our research at LV= GI

For the organisational strand, we set up a participatory ethnography in which BDFI researchers are embedded in the LV= General Insurance data science team. As a result, the project offers a unique opportunity to closely analyse organisational practices and the ways of working between data science and other business functions.

This project allows us to collectively explore ways of explaining machine learning models that go beyond providing technical accounts of data and complying with legal requirements. It shifts perceptions of what makes AI explainable through an enhanced understanding of how machine learning is shaping the organisation. Moreover, it has given us, both the research team and LV= GI practitioners, space to form deep connections, share co-working spaces, and expand our partnership even further.

Bringing together industry and community – XAI perspectives

Our project responds to the current ethical turn in AI by disrupting the concept of explainability, moving away from purely technical solutions towards explaining the practice of AI rather than the principle alone. Sociotechnical methods are helping us to make our research results actionable, so that outputs are not abstract or distant but directly applicable in the context of each project partner.

The knowledge and dynamics generated using these methods are also helping us to connect the outcomes of the organisational and community strands of the project. Building cross-sector collaborations in XAI involves mutual learning, where the perspectives of all partners are equally important and we learn from each other’s strengths. It also requires flexibility to adjust our priorities and facilitate two-way conversations. These conversations will become crucial in the last year of the project, as we reconstruct Explainable AI together in light of the findings from each site of inquiry. This will allow us to create more inclusive processes for the development of machine learning in the future.
