BDFI’s Net Zero Mission

With Earth Day on the horizon, we caught up with Prem Kumar Perumal, a PhD researcher, to hear about BDFI’s efforts to become net zero.

BDFI entrance and biowalls

If you have visited BDFI recently, you may have noticed the wall of green plants that greet you as you approach the entrance. 

These biowalls are one element of BDFI’s Net Zero facilities, part of the Sustainable Campus Testbed project which is implementing and researching carbon reduction technologies.  

The plants in the green walls have been carefully selected to suit the north and south facing elevations, with a focus on species that help with air purification. The walls feature a matrix of plants that grow well together to create year-round coverage and seasonal interest. 

Prem Kumar Perumal outside BDFI

Prem Kumar Perumal has been leading on the monitoring of air quality around the building and explains how it works. 

“We have sensors installed in different locations in and around the building. There are six sensors above the green wall and an additional two inside the building, on the ground and first floors. They measure critical environmental parameters including temperature, sound levels, CO2 levels, carbon monoxide and small particles in the air.”

“We have been monitoring data from the sensors since June 2023 and are already seeing some interesting findings. Last November when Canford Park held a fireworks fiesta, we measured an increase in particulate matter levels for eight hours at BDFI, 4.7 miles away from the source.  

“This shows the impact of fire and fireworks on the surrounding area, not only in the distance travelled by particulates but also in how long they are present for.” 

Particulate Matter (PM) refers to a complex mixture of solid particles and liquid droplets in the air.  

Prem explains: “These particles vary in size, composition, and origin. PM can originate from both natural sources, such as wildfires, volcanic eruptions, and dust storms, as well as anthropogenic sources, including industrial processes, vehicle emissions, construction activities, and agricultural operations. 

Prem checking the air monitor sensors on the BDFI biowalls

“Governments and environmental agencies worldwide monitor and regulate PM levels to protect public health and the environment. Strategies to reduce PM pollution include improving emission standards for vehicles and industrial facilities, implementing cleaner technologies, controlling dust emissions from construction sites, and promoting alternative transportation modes. 

At BDFI, Prem tracks the different particulates to identify trends and patterns.

He said: “I monitor the dashboard for spikes on a regular basis and gather information to understand the source. This could be an outdoor event or building works, for example.”
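To give a flavour of what this kind of monitoring involves, a simple way to flag spikes is to compare each reading against a trailing baseline. The sketch below is purely illustrative and is not BDFI’s actual pipeline; the function name, window size and threshold are invented for the example.

```python
# Illustrative spike detection over a PM2.5 time series (readings in µg/m³).
# A reading is flagged when it exceeds a fixed multiple of the mean of the
# readings that preceded it -- thresholds here are arbitrary, for demonstration.

def find_spikes(readings, window=6, factor=3.0):
    """Return indices of readings that exceed `factor` times the
    mean of the preceding `window` readings."""
    spikes = []
    for i in range(window, len(readings)):
        baseline = sum(readings[i - window:i]) / window
        if readings[i] > factor * baseline:
            spikes.append(i)
    return spikes

# A quiet baseline around 8 µg/m³, then a fireworks-like spike.
series = [8, 9, 7, 8, 9, 8, 8, 9, 45, 60, 30, 9, 8]
print(find_spikes(series))
```

In practice a flagged index would then be cross-referenced against local events, as Prem describes, to attribute a likely source.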

The biowalls and sensors are just one aspect of the Net Zero work going on at BDFI. You can read about the other carbon reduction technologies we are working on, including our smart energy system, on our website.  

What should the law do about deepfakes?

From Taylor Swift to the Royal Family – deepfakes are rarely out of the news. BDFI’s Prof. Colin Gavaghan asks what we can do to protect ourselves and if lawmakers should be doing more. 

Credit: Kenzie Saunders/flickr

The camera does lie. It always has. For as long as we’ve had photography, we’ve had trick photography. Some of this is harmless fun. I remember as a child delighting in forced perspective photos that made it look like I was holding a tiny building or relative in the palm of my hand. Some of it is much less than harmless. Stalin was notorious for doctoring old photographs to excise those who had fallen from his favour.

The development of AI deepfakes has taken this to a new level. It’s not just static images that can be manipulated now. People can be depicted saying and doing things that are entirely invented.

Credit: GabboT/flickr

If anyone hadn’t heard of deepfakes before, the first few months of 2024 have surely remedied that. First, in January, deepfake sexual images of Taylor Swift – probably the world’s most famous pop star – were circulated on X and 4chan. This month, deepfakes were back among the headlines, when rumours circulated that a family picture by the Princess of Wales had been digitally altered by AI.

In some ways, the stories couldn’t be more different. The Taylor Swift images were made and circulated by unknown actors, without the subject’s consent, and in a manner surely known or intended to cause embarrassment and distress.

Source: The Guardian

Princess Kate’s picture, in contrast – which, it turns out, was more likely edited with basic software like Photoshop – was made and shared by the subject herself, and any embarrassment was trivial, relating to her amateur photo-editing skills.

In other ways, though, the two stories show two sides of the challenge these technologies will pose.

The challenges posed by intimate deepfakes are the more obvious, and were known about long before Taylor Swift became their most high-profile victim. As with ‘revenge porn’, the victims are overwhelmingly women and girls, and the harm these images can do is well documented.

There have been legal responses to this. The new Online Safety Act introduced a series of criminal offences aimed at the intentional sharing of “a photograph or film which shows, or appears to show, another person in an intimate state” without their consent. The wording is specifically intended to capture AI generated or altered images. These offences are not messing around either. The most serious of them carries a maximum prison sentence of two years.

Source: X

That sort of regulatory response targets the users of deepfake technologies. Though it’s hoped these offences have some deterrent effect, they are retrospective responses, handing out punishment after the harm is done. They also have nothing to say about a potentially even more pernicious use of deepfakes: the generation of fake political content. In 2022 a fake video circulated of Ukrainian president Volodymyr Zelensky appearing to announce the country’s surrender to Russia. And in January this year, voters in New Hampshire received a phone call from a deepfake “Joe Biden”, telling them not to vote in the Democratic primary.

Unlike intimate deepfakes, political deepfakes don’t always have an obvious individual victim. The harms are likely to be more collective – to the democratic process, perhaps, or national security. It would be possible to create specific offences to cover these situations too. Indeed, the US Federal Communications Commission acted promptly after the Biden deepfake to do precisely that.

An alternative response, though, would be to target the technologies themselves. The EU has gone some way in this direction. Article 52 of the forthcoming AI Act requires that AI systems that generate synthetic content must be developed and used in such a way that their outputs are detectable as artificially generated or manipulated. The Act doesn’t specify how this would be done, but suggestions have included some sort of indelible watermark.

Will these responses help? It’s likely that the new offences will deter some people, but as with previous attempts to regulate the internet, problems are likely to exist with identification – you can’t punish someone for creating such images if you can’t find out who they are – and with jurisdiction.

What about the labelling requirements? There are technical questions about how easily the detection systems could be circumvented. And even when content is labelled as fake, it’s uncertain how this will affect the viewer. Early research suggests we should be cautious about assuming warnings will insulate us against fakery, with some researchers pointing out a tendency to overlook or filter out the warning: “Even when they’re there, audience members’ eyes—now trained on rapid-fire visual input—seem to unsee watermarks and disclosures.”

As for intimate deepfakes, detection systems may help a bit. But I’m struck by how the harm to these women and girls seems to persist, even when the images are exposed as fakes. In a case in Spain last year, teenaged girls had deepfake nudes created and circulated by teenaged boys. As one of the girls’ mothers told the media, “They felt bad and were afraid to tell and be blamed for it.” This internalisation of blame and shame by the victims of these actions suggests that a deeper problem may lie in persistent and damaging attitudes towards female bodies and sexuality, rather than any particular technology.

Source: bandeepfakes.org

Maybe in a better future, intimate deepfakes won’t cause that level of harm. We might hope that schoolmates and neighbours will rally round the victims, and that any stigma will be reserved for the bullies and predators who have created the images. We can hope. But meanwhile, these technologies are being used to inflict considerable suffering. One solution that is gaining support would be to ban deepfake technologies altogether. Maybe the potential for harm just outweighs any potential benefit. That was certainly the view of my IT Law class last week!

But what precisely would be subject to the ban? That question brings me back to Kate’s family pic. If we are to ban “deepfakes”, where would we draw the line? Does image manipulation immediately become pernicious when AI is involved, but remain innocent when it’s done with established techniques like Photoshop? If lawmakers are going to go after the technology, rather than the use, then we’re going to have to think about precisely what technology we have in our sights.

‘If you can’t tell, does it matter?’ Do we need new law for human-like AI?

With the persistent rise in chatbots and other human-like AI, Prof. Colin Gavaghan, BDFI’s resident tech lawyer, asks: do we need regulatory protection from manipulation?

Stills from WestWorld film

Robots and AI that look and act like humans are a standard trope in science fiction. Recent films and TV series have supplemented the shelves of books taking this conceit as a central concept. One of the most celebrated – at least in its first season – was HBO’s reimagining of Michael Crichton’s 1973 film WestWorld.

The premise of WestWorld is well known. In a futuristic theme park, human guests can pay exorbitant sums to interact with highly realistic robots or ‘hosts’. In an early episode, a human guest, William, is greeted by Angela, a “host.” When William enquires as to whether she is “real” or a robot, Angela responds: ‘Well if you can’t tell, does it matter?’

As we move through an era where AI and robotics acquire ever greater realism in their representations of humanity, this question is acquiring increasing salience. If we can’t tell, does it matter? Evidently, quite a lot of people think it matters quite a lot. For instance, take a look at this recent blog post from the excellent Andres Guadamuz (Technollama).

But why might it matter? In what contexts? And what, if anything, should the law have to say about it?

What’s the worry about humanlike AI?

In The Atlantic a few months ago, philosopher Daniel Dennett wrote this:

“Today, for the first time in history, thanks to artificial intelligence, it is possible for anybody to make counterfeit people who can pass for real in many of the new digital environments we have created. These counterfeit people are the most dangerous artifacts in human history, capable of destroying not just economies but human freedom itself.”

The most dangerous artifacts in human history?! In a year when the Oppenheimer film – to say nothing of events in Ukraine – have turned our attention back to the dangers of nuclear war, that is quite a claim! If we are to make sense of Dennett’s claim, far less decide whether we agree with it, we need to understand what Dennett means by “counterfeit people”. The term could refer to a number of things.

One obvious way in which AI can impersonate humans is through applications like ChatGPT, which can generate text indistinguishable from that written by humans. When this is linked to a real-time conversational agent – a chatbot or an AI assistant – it can result in a conversation in which the human participant might reasonably believe the other party is also a human. Google’s “Duplex” personal assistant added a realistic spoken dimension to this in 2018, its naturalistic “ums” and “ahs” giving the impression of speaking to a real PA.

More recently, the Financial Times reported that Meta intends to release a range of AI “persona” chatbots, including one that talks like Abraham Lincoln, to keep users engaged with Facebook. Presumably, users will be aware that these are chatbots (does anyone think Abe Lincoln is actually on Facebook?). In other cases, the true identities of the chatbots will be concealed, as when bot accounts are used to spread propaganda and disinformation.

Those examples read and sound like they might be human. But AI can go further. Earlier this year, Sen. Richard Blumenthal (D-CT) kicked off a Senate panel hearing with a fake recording of his own voice, in which he described the potential risks of AI technology. So as well as impersonating humans, we now have to be alert for AI impersonating particular humans.

Soul Machines

As the technology evolves, we’ll find AI that can impersonate humans across a whole range of measures – not only reading and sounding human, but looking and acting like it too. This is the sort of work being done by Soul Machines, whose mission is to use “cutting edge AI technology … to create the world’s most alive Digital People.”

Other than a vague unease caused by these uncanny valley denizens, why should this bother us?

One of the main concerns relates to manipulation. Writing in The Economist in April, Yuval Noah Harari claimed that AI has “hacked the operating system of human civilisation”. His concern was with the capacity of AI agents to form faux intimate relationships, and thereby exert influence on us to buy or vote in particular ways.

This concern is far from fanciful. Research is already emerging, suggesting that we are, if anything, more likely to trust AI-generated faces. Imagine an AI sales bot that is optimized to look trustworthy, and combine that with software that lets it appear patient and friendly, but also able to read our voices and faces so it knows exactly when to push and when to back off.

So great are these concerns that we have already seen some legal responses. In 2018, California introduced the BOT (Bolstering Online Transparency) Act, which bans the use of pretend-human bots if they’re used to try to influence purchasing or voting decisions. Article 52 of the EU’s new AI Act adopts a similar measure to the Californian one.

Are mandatory disclosure laws the answer?

AI agents are certainly being optimized to pass for human, with a view to sell, persuade, seduce and nudge us into parting with our attention, our money, our data, our votes. What’s less obvious is how much mandatory disclosure will insulate us against that. Will knowing that we’re interacting with an AI protect us against its superhuman persuasive power?

There is some reason to think it might play a role. One study from 2019 found that potential customers receiving a cold call about a loan renewal offer were as or more likely to take up the offer when it was made by an AI. But this advantage largely dissipated when they were told in advance that the call was from a chatbot.

Interestingly, the authors of the 2019 paper reported that late disclosure of the chatbot’s identity – that is, after the offer has been explained, but before the customer makes up their mind about whether to accept it – seemed to cancel out the antipathy to chatbots. This led them to the provisional conclusion that experience of talking to a chatbot allays some of people’s concerns about it. In other words, as we get more used to talking with AIs, our intuitive suspicion of them will likely dissipate.

Another reason to be somewhat sceptical of mandatory disclosure solutions is that telling me whether something was generated by AI tells me little or nothing about whether it’s true, or about whether the person I’m talking to is who they claim to be. Ultimately, I don’t really care if content comes from a bot, a human scammer, a Putin propaganda troll farm, or a genuine conspiracy theorist. Is “Patrick Doan”, the “author” of the email I received recently, a person or a bot? Who cares. He/it is clearly phishing me either way:

Phishing email

So much for cognitive misrepresentation. What about emotional manipulation? Will knowing that I’m talking to an AI help me resist the sort of emotional investment that lets the AI lead me into bad decisions?

DuoLingo owl

My answer for now is: I just don’t know. What I do know, from many hours of personal experience, is that I am by no means immune to emotional investment even in the very weak AI we have now. They don’t need to look remotely human. I’m even a sucker for the blatant emotional nudges from the little green owl if I don’t do my DuoLingo practice!

Vulnerable and lonely people are going to be even easier prey. Phishing and catfishing are likely still to be problems, whether the fisher is a human or an AI. Imagine trying to resist that AI Abraham Lincoln (or Taylor Swift or Ryan Gosling), when it’s been optimized to hit all the right sweet-talking notes.

Targeted steps forward

If this all sounds like a counsel of despair, it isn’t meant to. I think there are meaningful steps that can be taken to mitigate the manipulative threat posed by human-like AI. But I suspect those measures will likely have to be properly targeted if they’re to have that effect. Simply telling me that I’m talking to a “counterfeit person” is unlikely to be enough to protect me from its persuasive superpowers.

We could, for instance, consider seriously the prospect of going hard after this sort of technology, or the worst examples of it anyway. Under the EU AI Act, those AI systems which are deemed to present an unacceptable risk are to be banned outright. This includes AI that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm.

Perhaps there will soon be a case for adding highly persuasive AI systems to that list.

The UK Government seems to be going in a very different direction with regard to AI regulation, and the protections of the AI Act are unlikely to apply here. But other options exist. We could, for instance, consider stronger consumer law protections against manipulative AI technologies, to match those we have for “deceptive” and “aggressive” sales techniques.

In truth, I don’t have a clear idea right now about the best regulatory strategy. But it’s a subject I’m planning to look into more closely. Maybe it does matter if we can tell AI from human – at least to some people, at least some of the time. But on its own, I fear that knowledge will be nowhere near enough to prevent ever smarter AI, to use Harari’s words, hacking our operating systems.

This content is based on a paper given at the Gikii 2023 Conference in Utrecht, and at this year’s annual guest lecture at Southampton Law School. Colin is grateful for the helpful comments received at both. 

Network applications as an enabler for AI-driven autonomous networking

BDFI academic Dr Xenofon Vasilakos recently attended the IEEE ICC 2023 Industry Forum and Exhibition in Rome, where he gave a talk at the IF&E workshop. In this blog he goes into detail about the topics covered in the talk, as we move from the fifth (5G) towards the sixth (6G) generation of telecommunication networks.

5GASP explores self-managing and self-organizing automation for the development of sixth generation (6G) intelligent future networks. This is achieved through an ecosystem of specialized AI-driven network applications that enable automation. These applications fulfil the automation requirements of other “enhanced” network applications or services. The prototypes of these applications include network and performance prediction systems that enable proactive resource management and a human-centric approach, adapting to the dynamic nature of 6G networks and users without the need for human intervention. This AI-based automation provides improved network and service quality, while also ensuring compliance with business requirements and enhancing service agility.

Below, we provide a summary of the prototype AI-driven enabler, self-organising and self-managing network applications.

(1) Efficient MEC Handover (EMHO) network application (Univ. of Bristol, AI-driven Autonomy enabler)

The functioning of this network application depends on collaborative machine learning (ML) predictions to maintain and potentially improve the quality of service provided by enhanced network applications operating on a multi-access edge computing (MEC) platform. The existing prototype utilizes mobile radio resource control (RRC) monitoring data along with an additional ML layer consisting of cooperative models that predict MEC handovers.
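To illustrate the kind of signal such a prediction works from, consider the classic measurement-based handover trigger that the trained models effectively learn to anticipate. The sketch below is not 5GASP code: the function name, the RSRP figures and the rule itself (an A3-event-style comparison with hysteresis and time-to-trigger) are simplified stand-ins for the prototype’s cooperative ML models.

```python
# Simplified illustration of predicting a handover from RRC-style
# signal measurements (RSRP, in dBm). A handover is triggered when a
# neighbour cell is stronger than the serving cell by a hysteresis
# margin for several consecutive measurements (time-to-trigger).

def predict_handover(serving, neighbour, hysteresis=3.0, ttt=3):
    """Return the measurement index at which a handover would be
    triggered, or None. `serving`/`neighbour` are parallel RSRP series."""
    streak = 0
    for i, (s, n) in enumerate(zip(serving, neighbour)):
        streak = streak + 1 if n > s + hysteresis else 0
        if streak >= ttt:
            return i
    return None

# A car driving away from the serving cell and towards a neighbour.
serving = [-80, -82, -85, -88, -90, -92]
neighbour = [-95, -90, -86, -84, -83, -82]
print(predict_handover(serving, neighbour))
```

An ML-based predictor aims to anticipate this trigger point earlier than the rule fires, giving the MEC platform time to migrate services before the handover actually occurs.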

(2) Virtual On-Board Unit (vOBU) provisioning Network Application (OdinS, AI self-organisation)

This network application deploys a digital twin (DT) of a car’s on-board unit (OBU) on the MEC node nearest to the car’s location. The DT can be “migrated” to the car’s nearest edge node as a virtual OBU (vOBU) acting as a proxy, and its migration begins automatically as the car moves. To avoid bottlenecks, this network application can pose an intent to forecast future car locations with EMHO’s mobility-prediction ML, allowing it to deploy the vOBU proactively.

(3) PrivacyAnalyser Network Application (Lamda Networks, self-management)

PrivacyAnalyser is a cross-vertical, cloud-native application running either at the network core or at the MEC. Among other features, it caters for ML classification of network data from user equipment (UE) and/or IoT devices, and for privacy evaluation and analysis. PrivacyAnalyser is also converging towards ML-based network management and orchestration via EMHO’s exposed ML predictions, enabling proactive scale-in/out of MEC pods, which improves energy efficiency over the default container autoscaling.
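The difference between proactive and default (reactive) autoscaling can be sketched as follows. This is not PrivacyAnalyser’s actual logic: the function name, per-pod capacity, headroom and pod limits below are invented for illustration; the point is only that the pod count is derived from a load forecast rather than from load already observed.

```python
import math

# Illustrative proactive scaling: choose a MEC pod count from a
# predicted request rate, instead of reacting once load has arrived.

def pods_needed(predicted_rps, per_pod_rps=100, headroom=0.2,
                min_pods=1, max_pods=10):
    """Pods required to serve `predicted_rps` requests/sec with
    spare headroom, clamped to the allowed pod range."""
    needed = math.ceil(predicted_rps * (1 + headroom) / per_pod_rps)
    return max(min_pods, min(max_pods, needed))

print(pods_needed(450))  # scale out ahead of a predicted burst
print(pods_needed(40))   # scale in when the forecast is quiet
```

Scaling in ahead of quiet periods, rather than waiting for utilisation thresholds to react, is where the energy-efficiency gain over default container autoscaling comes from.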

(4) Remote Human Driving Network Application (DriveU.auto, AI-driven self-management & self-organisation)

This Network Application enables remote autonomous vehicle operation in unusual/dangerous situations. The intent is to ensure reliable, low-latency, high-quality real-time video transmission via AI-optimised network latency, but also via EMHO Network Application handover predictions to automatically deploy appropriate applications with optimised slice features matching dynamic needs.

Future Steps, Impact & sociotechnical aspects

5GASP aims to establish an Open Source Software (OSS) repository and a VNF marketplace that caters to small and medium-sized enterprises (SMEs). It also focuses on fostering a community of network application developers by providing them with tools and services. These resources enable developers to achieve the following goals: (i) implement AI-driven network automation in network applications to improve network quality with minimal human intervention by capturing business and other intents through continuous monitoring, (ii) validate and certify network services early on to ensure alignment with business and other sociotechnical goals, and (iii) prioritize inter-domain use-cases for daily testing, validation, and ensuring security and trust of third-party intellectual property rights (IPR) in their testbeds.

The key lessons learned so far can be summarized as follows:

  • AI-driven automation plays a vital role in enhancing network and service automation by minimizing the need for human intervention and improving quality of service (QoS). Moreover, it allows the adoption of higher-level policies through proper orchestration decisions. Therefore, several sociotechnical aspects can be captured by translating key value indicators (KVIs) into network performance KPI targets for AI enabler applications.
  • AI-driven network applications and the consumption of AI-driven artefacts (such as predictions or dynamic network orchestration suggestions) make 6G network automation achievable. Again, this can enable the adoption/imposition of sociotechnical targets and policies.

As for the next steps, the project has achieved a level of maturity where network applications are already deployed using the developed tools and procedures. The project is currently seeking network application developers, individuals or SMEs, outside of the consortium who are interested in validating their 5G applications and adopting the 5GASP methodology, tools, and innovative 6G automation network applications.

Related work

[1] A. Bonea et al., Automated onboarding, testing and validation for Network Applications and Verticals, ISSCS Iasi, 2021.

[2] Kostis Trantzas et al., An automated CI/CD process for testing and deployment of Network Applications over 5G infrastructure, IEEE International Mediterranean Conference on Communications and Networking, 7–10 September 2021.

[3] X. Vasilakos et al., Towards Low-latent & Load-balanced VNF Placement with Hierarchical Reinforcement Learning, IEEE International Mediterranean Conference on Communications and Networking, 7–10 September 2021.

[4] M. Bunyakitanon et al., HELICON: Orchestrating low-latent & load-balanced Virtual Network Functions, IEEE ICC 2022.

[5] V. A. Siris et al., Exploiting mobility prediction for mobility & popularity caching and DASH adaptation, IEEE 17th International Symposium on A World of Wireless, Mobile and Multimedia Networks, 2016.

[6] R. Direito, et al., Towards a Fully Automated System for Testing and Validating Network Applications, NetSoft 2022, 2022.

[7] X. Vasilakos et al., Towards an intelligent 6G architecture: the case of jointly Optimised handover and Orchestration, WWRF47, 2022.

[8] N. Uniyal et al., On the design of a native Zero-touch 6G architecture, WWRF47, 2022.

 

Connected communities: are hybrid futures the way forward?

Following the publication of the ‘Post’ Pandemic Hybrid Futures report, Ella Chedburn from Knowle West Media Centre reflects on the pros and cons of connecting remotely during Covid, and on what positives we should take forward from our different experiences of connecting during the pandemic.

Knowle West Fest

For many of us, the Covid-19 pandemic involved a huge shift from in-person to digital encounters across all areas of life. Here at KWMC, from the very first lockdown we knew we needed to find ways to keep working with and stay connected to our community, so we got creative with digital and blended ways of working. There were many positives to connecting remotely, through online platforms, posted packs and the like. For some people, joining meetings, events or workshops from home was suddenly possible and more accessible. However, there were plenty of negatives to purely online spaces too – not everyone has access to a webcam or is familiar with using technology, and some of these spaces had negative health impacts as well.

As we emerged from lockdowns, we wondered: could we get the best of both worlds by merging online and physical (‘hybrid’) spaces? We explored this in our ‘Come Together’ programme in 2021 and learned so much about the vices and virtues of these hybrid setups. We have lots of useful resources and examples on the website for anyone to use. However, as 2022 rolled around it became more and more tempting for institutions to forget these learnings and revert to in-person events that are often easier to run. 

The ‘Post’ Pandemic Hybrid Futures project came at the perfect time for us to pause and reflect on what learnings we could realistically carry forward from the pandemic. Through this collaboration, we were able to further develop some of the hybrid tools and methods we had learnt from workshops, community events, live broadcasts, festivals and blended programmes. We focused our collaboration on a specific experiment – how could we make a local community festival (Knowle West Fest) more accessible through hybrid means? 

Learning from the process

From the Knowle West Fest (KWfest) experiments one of our main learnings was that a rough-and-ready style works really well when it comes to livestreams. It seemed that the more authentic and casual style of Facebook Live resonated with many of our audiences. People in the physical space were also much more relaxed about being featured in a Facebook Live, with many seeming excited to talk on camera. Plus, the more informal nature meant that any pauses from lack of internet felt far less painful in both the online space and the physical space compared to Zoom. This livestream was also not too taxing on our staff, so it is realistic for us to continue doing them long-term. The biggest surprise was the success of our Facebook livestream afterwards – gaining over 1,000 views during the following week. Here we learned the importance of allowing digital audiences to engage in their own time.  

In comparison, only a couple of people joined our Zoom livestream. While we were marketing it, a few people responded negatively to the idea of Zoom – associating it with work and lockdown. People also expect events on this platform to be more professional and smoothly run, which adds pressure to staff. Despite our best efforts to market the space as a ‘cozy online portal’, these workplace associations will take more effort to overcome. Instead, we recommend using Zoom to fully engage in a single activity, allowing participants to get hands-on and make the most of the more personal space. Or even creating a pre-recorded complementary offering to access from home instead. These have both worked very well in our previous projects.

Postcards

Alongside our two livestream experiments, we left postcards around the festival for people to send to friends and family via a ‘post box’ in the cafe. On the back of the postcards was a link to a YouTube playlist of acts playing at the festival. Surprisingly, this activity went down particularly well with children and has a lot of scope for further experimentation such as adding art, or posting to (consenting!) strangers, or posting back and forth between people. It can also be less intense for staff to run and eliminates the stress of technology failures. After the festival we sent out craft packs to some people with links to online content – again demonstrating that to access a festival experience it doesn’t all have to synchronise or be live.

The BDFI partnership 

BDFI’s aim to create more inclusive, sustainable and prosperous digital futures aligned well with our ethos at KWMC.  

BDFI’s support was invaluable in helping us to collate all our previous research and reflect on it from both internal and external perspectives. This allowed us to fully absorb and integrate our learnings then use them as a springboard for more experimentation.  

On a practical level, the extra staff from BDFI meant that we had enough people power to confidently deliver the hybrid elements. We learned the hard way through the Come Together project that hybrid events often need double the staff and can be more demanding for facilitators and producers, so it is important that they are properly resourced and well planned.  

Next steps

At KWMC, we hope to cultivate a more inclusive future by combining the best of digital and physical spaces. We are also keen to ensure that Knowle West communities continue to benefit from the research and experiments that they have participated in. We will be sharing these learnings with the 2023 KWfest producing team and exploring ways in which we can share the research more broadly with those working in the education, community, creative and charity sectors. 

Do Pixels Have Feelings Too?

BDFI co-director Professor Daniel Neyland hosted a fascinating and informative lecture about the ethics around artificial intelligence. Here he follows up that lecture with a thought-piece on the proliferation of AI, ethical principles and questions that can be applied, and the importance of trust and truth.

Daniel Neyland lecture

We appear to be moving into a period where the number of AI applications being launched is proliferating rapidly. All indications are that these applications will draw on a wide range of data and operate at an unprecedented speed and scale. The ethical impact of these technologies – on our daily lives, our workplaces, modes of travel and our health – is likely to be huge.

This is a familiar story – we have perhaps heard similar narratives on previous occasions (for example in relation to CCTV in the 1990s, the internet in the late 1990s and early 2000s, biometric IDs from the early 2000s until around 2010, smartphones from around 2008 onwards, and so on). We are always told as part of these narratives that trying to address the impact emerging through these technologies will be incredibly difficult. However, the development of AI systems does seem to pose further specific challenges.

Firstly, for the most part, AI developments are even more opaque than some of the other technologies we have seen developed in recent decades. We don’t get to see the impacts of these systems until they are launched into the world; indeed, we may not even be aware that such systems exist before they are launched. In order to assess the likely problems specific AI applications will create, we need to open up the design and development stage of these systems to greater scrutiny. If we can intervene at the design stage, we might have a greater chance of reducing the number and range of harms that these systems might otherwise create.

Secondly, with generative AI and machine learning neural networks, systems have a certain amount of autonomy to produce their outputs. This means that if we want to manage the ethics of AI, we cannot work with the designers and developers of these systems alone. We need to work with the AI. Key to success here will be to engage with carefully bounded experiments to assess how AI engages with the social world, in order to assess its likely impacts and any changes to system design that are needed. We have an imperative to experiment with AI before it is launched into the world, but this imperative is in danger of being swept aside by the current drive to gain a market advantage by being the first mover in any particular AI application.

Thirdly, when we do have access to these AI applications, we need to attune our ethical assessment to the specific technology in focus. Not all AI is the same. In this lecture, I provide a range of broad ethical principles that draw on existing work in the field, but I also demonstrate how these principles can be given a specific focus when looking at a particular AI application – a machine-learning neural network that uses digital video to perform emotion recognition.

I use these broad ethical principles to raise questions regarding how a specific AI system can be redesigned. The ethical principles and associated questions set out one way we can discover and address concerns in the development of new AI systems. These include:

  • Consultation – at the design stage, how can we actively foster engagement with emerging AI systems to assess perceptions, trust and sentiment, for example, toward an emerging system?
  • Confidence – do we have confidence that the system will perform as we expect? How can we assess that confidence (what kinds of experiments might we carry out, for example, to test how well a system works)? And how can we address concerns raised by a system that is not operating as anticipated?
  • Context – in what setting is the system designed for use and what concerns arise from switching contexts of use?
  • Consequence – what happens as a result of the system being used, who is subject to AI decision making and for what purpose?
  • Consent – how can people give agreement that they should be subjects of AI decision making, that their data should be processed by AI, or that they are happy to work with an AI system in their workplace?
  • Compliance – what are the relevant legal and regulatory frameworks with which a system must comply? How might we design regulatory compliance into the technology?
  • Creep – if we carry out an ethical assessment in relation to a new and emerging technology in one use case, how might we guard against or assess the issues that might arise if that technology is used in other contexts?

These ethical principles and questions are not designed to be exhaustive; rather, I suggest, they need to be applied, developed, added to or taken in different directions when brought to bear on specific technologies under development. They represent a useful starting point for asking questions. In the lecture on neural networks for machine learning, I suggest that two significant concerns arising from these questions are trust and truth. Drawing on over 50 years of social science research on trust[1], I suggest we can engage with AI systems to explore to what extent they provide the basic conditions for trust: does the system operate in line with our expectations of it (linking back to the ethical principle of confidence)? But we can go further and ask whether we trust that the system will place our interests ahead of those of the people who own or operate it. We can also look at how trust is managed in demonstrations of AI, and how AI disrupts the routine grounds of social order through which trust would normally persist.

With regard to truth, in the lecture I pose questions about the nature and source of, and reliance upon, the somewhat simplistic notions of truth that seem to pervade AI system development. I suggest this becomes problematic when assumptions are made that AI systems do no more than reflect a truth that is already out there in the world, independent of the technology. Without straying into debates about post-truth and its associated politics, it nonetheless seems problematic that systems with a generative capacity to create their own truth (at least to an extent) are then presented to the world as doing no more than re-presenting a truth that already exists independent of the system. In the lecture I also suggest that truth can be considered as an input (through the notion of ground truths that the system itself partially creates) and as an output (through the system’s results).

[1] For example, Barber’s (1983) work on trust, Shapin (1994), Garfinkel (1963)

Barber, B. (1983) The Logic and Limits of Trust (Rutgers University Press, NJ, USA)

Shapin, S. (1994) A Social History of Truth (University of Chicago Press, London)

Garfinkel, H. (1963) A conception of, and experiments with, ‘trust’ as a condition of stable concerted actions, in Harvey, O. (ed.) Motivation and Social Interaction (Ronald Press, NY, USA), pp. 197–238