Do Pixels Have Feelings Too?

BDFI co-director Professor Daniel Neyland hosted a fascinating and informative lecture about the ethics around artificial intelligence. Here he follows up that lecture with a thought-piece on the proliferation of AI, ethical principles and questions that can be applied, and the importance of trust and truth.


We appear to be moving into a period in which new AI applications are proliferating rapidly. All indications are that these applications will utilize a range of data and operate at a speed and on a scale that are unprecedented. The ethical impact of these technologies – on our daily lives, our workplaces, our modes of travel and our health – is likely to be huge.

This is a familiar story – we have perhaps heard similar narratives on previous occasions (for example in relation to CCTV in the 1990s, the internet in the late 1990s and early 2000s, biometric IDs from the early 2000s until around 2010, smartphones from around 2008 onwards, and so on). As part of these narratives we are always told that trying to address the impacts emerging from these technologies will be incredibly difficult. However, the development of AI systems does seem to pose further specific challenges.

Firstly, for the most part, AI developments are even more opaque than some of the other technologies we have seen developed in recent decades. We don’t get to see the impacts of these systems until they are launched into the world; indeed, we may not even be aware that such systems exist before they are launched. In order to assess the likely problems specific AI applications will create, we need to open up the design and development stage of these systems to greater scrutiny. If we can intervene at the design stage, we might have a greater chance of reducing the number and range of harms that these systems might otherwise create.

Secondly, with generative AI and machine learning neural networks, systems have a certain amount of autonomy to produce their outputs. This means that if we want to manage the ethics of AI, we cannot work with the designers and developers of these systems alone. We need to work with the AI. Key to success here will be carefully bounded experiments that examine how AI engages with the social world, in order to assess its likely impacts and any changes to system design that are needed. We have an imperative to experiment with AI before it is launched into the world, but this imperative is in danger of being swept aside by the current drive to gain a market advantage by being the first mover in any particular AI application.
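To give a loose sense of what one such bounded experiment might look like in code, the sketch below checks a stand-in emotion-recognition classifier against a small set of human-labelled clips and against an accuracy expectation agreed before launch. The data, the classify function and the 80% threshold are all hypothetical placeholders for illustration, not the system discussed in the lecture.

```python
from collections import Counter

# Hypothetical held-out evaluation set: human-assigned emotion labels for a
# handful of video clips (placeholder data, for illustration only).
held_out = {
    "clip_01": "happy",
    "clip_02": "sad",
    "clip_03": "angry",
    "clip_04": "happy",
    "clip_05": "neutral",
}

def classify(clip_id: str) -> str:
    """Stand-in for the emotion-recognition system under test.

    A real system would analyse the video itself; fixed answers are returned
    here so that the evaluation harness can run end to end.
    """
    canned = {"clip_01": "happy", "clip_02": "neutral", "clip_03": "angry",
              "clip_04": "happy", "clip_05": "neutral"}
    return canned[clip_id]

# The expectation agreed with stakeholders before launch: the system should
# match the human labels at least 80% of the time on this bounded test.
EXPECTED_ACCURACY = 0.8

predictions = {clip: classify(clip) for clip in held_out}
correct = sum(1 for clip, label in held_out.items() if predictions[clip] == label)
accuracy = correct / len(held_out)

# Record where the system and the human labellers disagree.
confusions = Counter((held_out[c], predictions[c]) for c in held_out
                     if predictions[c] != held_out[c])

print(f"Accuracy on bounded test: {accuracy:.0%}")
print(f"Disagreements (human label -> system label): {dict(confusions)}")
if accuracy < EXPECTED_ACCURACY:
    print("The system does not yet meet the agreed expectation; revisit the design.")
```

The point is not the code itself but the shape of the exercise: the expectation is stated before the system meets the world, the test is deliberately bounded, and the disagreements become material for redesign rather than post-launch damage control.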

Thirdly, when we do have access to these AI applications, we need to attune our ethical assessment to the specific technology in focus. Not all AI is the same. In the lecture, I provide a range of broad ethical principles that draw on existing work in the field, but I also demonstrate how these principles can be given a specific focus when looking at a particular AI application – a machine-learning neural network that uses digital video to perform emotion recognition.

I utilize broad ethical principles to raise questions regarding how a specific AI system can be re-designed. The ethical principles and associated questions set out one way we can discover and address concerns in the development of new AI systems. These include:

  • Consultation – at the design stage, how can we actively foster engagement with an emerging AI system to assess, for example, perceptions of, trust in and sentiment toward that system?
  • Confidence – do we have confidence that the system will perform as we expect? How can we assess that confidence (what kinds of experiments might we carry out, for example, to test how well a system works)? And how can we address concerns raised by a system that is not operating as anticipated?
  • Context – in what setting is the system designed for use and what concerns arise from switching contexts of use?
  • Consequence – what happens as a result of the system being used, who is subject to AI decision making and for what purpose?
  • Consent – how can people give agreement that they should be subjects of AI decision making, that their data should be processed by AI, or that they are happy to work with an AI system in their workplace?
  • Compliance – what are the relevant legal and regulatory frameworks with which a system must comply? How might we design regulatory compliance into the technology?
  • Creep – if we carry out an ethical assessment in relation to a new and emerging technology in one use case, how might we guard against or assess the issues that might arise if that technology is used in other contexts?

These ethical principles and questions are not designed to be exhaustive; rather, I suggest they need to be developed, added to or moved in different directions when they are applied to specific technologies under development. They nonetheless seem a useful starting point for asking questions. In the lecture on neural networks for machine learning, I suggest that two significant concerns that arise through asking these questions are trust and truth. Drawing on over 50 years of social science research on trust[1], I suggest we can engage with AI systems to explore the extent to which these systems provide the basic conditions for trust: does the system operate in line with our expectations of it (linking back to the ethical principle of confidence)? But we can go further and ask whether we trust that the system will place our interests ahead of those of the people who own or operate it. We can also look at how trust is managed in demonstrations of AI and how AI disrupts the routine grounds of social order through which trust would normally persist.

With regard to truth, in the lecture I pose questions about the nature and source of, and the reliance upon, the somewhat simplistic notions of truth that seem to pervade AI system development. I suggest this becomes problematic when assumptions are made that AI systems do no more than reflect a truth that is already out there in the world, independent of the technology. Without straying into debates about post-truth and its associated politics, it nonetheless seems problematic that systems with a generative capacity to create their own truth (at least to an extent) are then presented to the world as doing no more than re-presenting a truth that already exists independent of the system. In the lecture I also suggest that truth can be considered both as an input (through the notion of ground truths, which the system itself partially creates) and as an output (through the system’s results).
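One way to make that input/output point concrete is the toy sketch below. It is entirely illustrative and is not the system discussed in the lecture: the feature vectors, labels and nearest-neighbour rule are all invented for the purpose. Human labellers supply the ‘ground truth’ going in, and whatever the system later encounters, the only ‘truths’ it can output are the categories those labellers defined.

```python
# "Truth" as input: human labellers have already decided both the label
# vocabulary and which (made-up) facial-feature vectors count as which emotion.
ground_truth = [
    ((0.9, 0.1), "happy"),
    ((0.2, 0.8), "sad"),
    ((0.7, 0.3), "happy"),
    ((0.1, 0.9), "sad"),
]

def predict(features):
    """Nearest-neighbour lookup against the labelled examples.

    Whatever face it is shown, the system can only ever answer with a label
    that appeared in the ground truth: it re-presents the labellers' truth
    as if it were simply found in the world.
    """
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    _, label = min(ground_truth, key=lambda example: distance(example[0], features))
    return label

# "Truth" as output: a face the labellers never saw is still forced into one
# of the categories they defined ('ambivalent', say, is simply unavailable).
print(predict((0.5, 0.5)))
```

Even in this trivial form, the system’s results are manufactured partly upstream, in the labelling choices, rather than read directly off the world.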

[1] For example, Barber’s (1983) work on trust, Shapin (1994), Garfinkel (1963)

Barber, B. (1983) The Logic and Limits of Trust (Rutgers University Press, NJ, USA)

Shapin, S. (1994) A Social History of Truth (University of Chicago Press, London)

Garfinkel, H. (1963) A conception of and experiments with ‘trust’ as a condition of stable concerted actions, in Harvey, O. (ed) Motivation and Social Interaction (Ronald Press, NY, USA) pp. 197-238

 
