Greener Streams: Sustainability in Video Streaming

We continue the theme of sustainability on the BDFI blog as we hear from Angeliki Katsenou, senior lecturer in networked media, about her research into the environmental impact of video streaming.  

Video technologies have become a central part of daily activities, spanning entertainment, remote working and health services (1).

Image credit: 2023 Sandvine Report

However, as the world faces the challenges of climate change, resource depletion, and ecological degradation, it is imperative to scrutinize the environmental footprint of video technologies due to their significant resource consumption.  

Alongside electronic user devices, the digital infrastructure, which includes data centres and telecommunication networks, demands vast amounts of energy and raw materials (2).

End-user display devices, TVs in particular, have an even greater impact than the distributed infrastructure and are the main contributor to the carbon footprint of video on demand (VoD) (3).

Together, smartphones, tablets, laptops, TVs, and other devices utilised for accessing and viewing video content require significant electricity for their operation. Although ICT companies have been procuring renewable energy faster than any other part of the economy (4), the electricity powering user devices is not necessarily generated by renewable sources.  

Compression vs quality 

From an engineering perspective, we know that by reducing the amount of streamed visual data through compression, we can reduce energy consumption.  

However, there is a trade-off between the compression level and the resulting visual quality: the more we compress, the lower the delivered visual quality.  

But what if we could make streaming greener without sacrificing user experience? Our research focuses on the sustainability of video services, exploring the energy consumption of video technologies and identifying actionable strategies to reduce their carbon footprint.  

Understanding the Energy Profile of Video Technologies 

The journey began by designing a peer-to-peer streaming scenario for user-generated content (UGC)—the videos we share daily. By isolating the video coding and decoding processes, we established a benchmark for energy consumption using both software and hardware-based power measurement frameworks.  

While software tools are cost-effective and accessible, hardware meters offer greater precision at a higher price. Through rigorous statistical analysis, we demonstrated a strong correlation between these methods, paving the way for reliable energy assessments.  
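
To give a flavour of the kind of agreement check involved, here is a minimal sketch assuming two aligned series of per-encode energy readings (all values invented for illustration); the study itself used a far more rigorous statistical analysis.

```python
import numpy as np
from scipy import stats

# Hypothetical per-encode energy readings (joules) for the same set of encodes:
# one series from a software-based estimator, one from a hardware power meter.
software_j = np.array([41.2, 55.8, 38.9, 72.4, 60.1, 47.3])
hardware_j = np.array([44.0, 58.1, 40.2, 75.9, 63.0, 49.8])

# Pearson correlation quantifies how closely the cheaper software readings
# track the more precise hardware measurements.
r, p_value = stats.pearsonr(software_j, hardware_j)
print(f"Pearson r = {r:.3f} (p = {p_value:.4f})")

# A simple linear fit then acts as a calibration from software to hardware readings.
slope, intercept = np.polyfit(software_j, hardware_j, 1)
print(f"hardware ~= {slope:.2f} * software + {intercept:.2f} J")
```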

One of our breakthrough achievements was an energy-driven optimization of video streaming solutions. By fine-tuning parameters, we achieved a 30% reduction in energy consumption with negligible impact on video quality.  
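
As an illustration of what such parameter tuning can look like, the sketch below picks the lowest-energy configuration whose quality stays within a small tolerance of the best; the presets, energy figures and VMAF scores are all invented and are not results from our study.

```python
# Hypothetical (preset, energy in joules, quality as a VMAF score) measurements
# for one video. All values are invented for illustration.
candidates = [
    ("slow",   420.0, 95.1),
    ("medium", 300.0, 94.8),
    ("fast",   210.0, 94.3),
    ("faster", 150.0, 91.0),
]

QUALITY_TOLERANCE = 1.0  # accept up to 1 VMAF point below the best quality

best_quality = max(quality for _, _, quality in candidates)
acceptable = [c for c in candidates if c[2] >= best_quality - QUALITY_TOLERANCE]

# Among the configurations with near-best quality, choose the cheapest in energy.
preset, energy, quality = min(acceptable, key=lambda c: c[1])
print(f"Selected '{preset}': {energy:.0f} J at VMAF {quality:.1f}")
# -> a sizeable energy saving for a negligible quality drop
```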

Additionally, we simulated real-world scenarios, such as lowering video resolution from Full HD to HD for a month. Although this intervention yielded only minor carbon savings for short-term changes, it highlights the potential of such strategies when scaled globally or over longer periods. 
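
For a sense of the arithmetic behind such an estimate, here is a back-of-the-envelope sketch; every figure in it (viewing hours, power saving, grid intensity, household count) is an assumption chosen purely for illustration, not a number from the study.

```python
# Back-of-the-envelope estimate; all inputs below are illustrative assumptions.
hours_per_day = 2.0        # assumed daily viewing time per household
days = 30                  # one month
power_saving_w = 5.0       # assumed device + network power saved at HD vs Full HD
grid_intensity = 0.2       # assumed grid carbon intensity, kg CO2e per kWh

energy_saved_kwh = power_saving_w * hours_per_day * days / 1000.0
carbon_saved_kg = energy_saved_kwh * grid_intensity
print(f"Per household: {energy_saved_kwh:.2f} kWh, {carbon_saved_kg:.3f} kg CO2e per month")

# Small per household, but the totals add up once scaled to millions of viewers,
# which is the point about scaling the intervention globally or over longer periods.
households = 10_000_000
print(f"Across {households:,} households: "
      f"{carbon_saved_kg * households / 1000:.0f} tonnes CO2e per month")
```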

Driving Impact: Papers, Proposals, and Collaborations 

Our research has resulted in several impactful contributions to the field, including: 

– Publications: Papers presented at PCS 2024 (5), QoMEX 2024 (6), and IEEE ICIP 2024 (7), focusing on carbon reduction in video streaming and energy-aware optimisations. These efforts have also led to invitations for tutorials, special sessions, and journal contributions, amplifying the conversation on sustainable video technologies. 

– Collaborations: We initiated partnerships with leading institutions such as Fraunhofer HHI (8), academic partners such as RWTH and NTNU, and industry leaders like Tencent to advance energy-driven video streaming solutions; these led to the submission of a Horizon 2020 MSCA Training Network proposal. 

This blog also serves as an open call to anyone interested in collaborating in this field. Please leave a comment or contact angeliki.katsenou@bristol.ac.uk to discuss further.

_______________________________________________________________________________________________________

[1] Sandvine, “The Global Internet Phenomena Report,” March 2024.

[2] J. Malmodin, N. Lövehagen, P. Bergmark, and D. Lundén, “ICT sector electricity consumption and greenhouse gas emissions – 2020 outcome,” Telecommunications Policy, p. 102701, 2024.

[3] Carbon Trust, “Carbon impact of video streaming,” 2021.

[4] World Bank, “Measuring the emissions and energy footprint of the ICT sector: Implications for climate action,” 2023.

[5] A. Katsenou, X. Wang, D. Schien, and D. Bull, “Comparative Study of Hardware and Software Power Measurements in Video Compression,” in 2024 Picture Coding Symposium (PCS).

[6] D. Schien, P. Shabajee, H. Akyol, L. Benson, and A. Katsenou, “Assessing the Carbon Reduction Potential for Video Streaming from Short-Term Coding Changes,” in Proc. 15th International Conference on Quality of Multimedia Experience (QoMEX), 2024.

[7] A. Katsenou, X. Wang, D. Schien, and D. Bull, “Rate-Quality or Energy-Quality Pareto Fronts for Adaptive Video Streaming?,” in 2024 IEEE International Conference on Image Processing (ICIP).

[8] A. Katsenou, V. Menon, A. Wieckowski, B. Bross, and D. Marpe, “Decoding Complexity-Rate-Quality Pareto-Front for Adaptive VVC Streaming,” in 2024 IEEE Visual Communications and Image Processing (VCIP).

Building sustainable electronics

Professor Melissa Gregg joined BDFI with a mission to inspire design and engineering priorities that suit our climate futures. She explains how this starts with using the digital resources we already have to avoid carbon emissions and build a circular economy for electronics.

Melissa Gregg

For the past decade, I worked as a Senior Principal Engineer at Intel Corporation in the US, where I led user experience research in the client architecture team in the PC product group.

While working on Project Athena – the innovation program that led to the Intel EVO brand – I started noticing a change in attitudes among study participants in my team’s ethnographic research.

In Europe especially, laptop users were frustrated that tech companies forced them to upgrade their devices so often – a process known as “planned obsolescence.” For young people looking to buy quality products with credible sustainability features, the choices were few. Some even wanted to buy second hand devices in an effort to maintain their environmental principles.

This research prompted me to start an internal employee group at Intel to discuss sustainable product design, and the company’s stance on environmental issues generally.

This groundswell led to the consolidation of a Net Zero commitment from corporate leadership. It also led to new business relationships with customers. Technical teams partnered on sustainability initiatives such as Dell’s Concept Luna, which showed how the carbon footprint of a laptop could be lowered significantly.

Stack of old, broken and obsolete laptop computers for repair and recycle

Connections I developed with startups led to new designs with a focus on repair, reuse and longer life. The Framework computer is currently the leader in this exciting trend.

Since these early projects, I’ve been consulting and educating the tech industry on sustainability issues, from the energy footprint of new AI applications to the problem of electronic waste.

My 2023 workshop series, Electronics Ecologies, brought together industry practitioners, academics, engineers and designers to share perspectives on the full life cycle of digital devices and how best to mitigate their environmental harms.

Keeping digital devices in use for as long as possible is an important way to minimize the environmental impact of their production, which involves mining rare earth commodities, global transportation and energy costs.

For mobile phones and laptops, the majority of the carbon emissions associated with their production takes place before you even buy them. My research addresses this substantial challenge, which is to redirect high tech manufacturing’s linear trajectory of extraction-production-consumption-discard.

Melissa at the eSummit in Austin, Texas

The eSummit on Sustainable Electronics held in Austin last month is an important venue to keep track of the industry supporting digital technology reuse. The event brings together OEMs (Original Equipment Manufacturers), technology hardware resellers, ITADs (IT Asset Disposition operators), nonprofits and sustainability advocates. I chaired a panel on emerging trends with Mark Newton (Head of Corporate Sustainability, Samsung North America), Walter Alcorn (VP Environmental Affairs and Industry Sustainability, Consumer Technology Association – pictured L-R), and Sean Magann (Sims Lifecycle Services).

If you’ve ever wondered what happens to your old laptop, mobile phone or headset when it’s donated or sent to IT for recycling, chances are it goes to one of the companies or organisations that attends this forum, or others like the Reverse Logistics Association.

Speakers included Tom Marieb, VP Product Integrity for Hardware Engineering at Apple, who addressed sustainability concerns about parts pairing and repairability for iPhones.

For years, Apple has faced scrutiny for preventing consumers from being able to easily repair their devices. While Apple has partially changed its position on repair in response to advocacy, the main sustainability focus for the company is having a supply chain running on renewable energy, as well as committing to 100% recycled rare earth elements in all new products from 2025.

iFixit organized multiple sessions with OEMs and in collaboration with Repair.org, the main advocacy group seeking “right to repair” options for consumers across the US. Lenovo, HP and Microsoft all presented recent improvements to repairability scoring in response to pressure from these groups and new Eco-Design requirements coming into effect in Europe.

A graphic showing how many people are affected by electronic reuse

Reuse and repair are becoming critical design priorities given the carbon emissions involved in hardware production and distribution, which could be reduced and avoided with more efficient reuse of e-waste and idle, unused electronics. In addition, the reverse logistics industry is an incredibly rewarding business model to support: it enables more users to experience the benefits of technology access by paving the way for new adoption paths. The photo to the left – captured at the conference showroom floor – illustrates the many people who are touched by – and make a business from supporting – electronics reuse.

We’re hoping to use the insights from reuse and repair practitioners I’ve met in the US and Australia to pilot new initiatives at BDFI – for example, a donation drive for electronics that can fuel research and training opportunities with industry and nonprofit partners. Stay tuned to get involved in these plans as they take shape, or why not add a comment below about your memories of old electronics and how you’ve recycled or disposed of them?

Figuring out futures works better in partnership

A significant part of BDFI’s mission is its collaboration with our partners. We spoke to Beckie Coleman, Professor of Digital Futures, about her current Knowledge Exchange Programme with BDFI’s community partner Knowle West Media Centre and where it is taking them.

What does your role at BDFI typically involve?

Beckie Coleman outside KWMC

I have quite a varied role with BDFI. One aspect is working with BDFI partners in different ways (e.g. on research projects, in workshops and meetings) and making connections across the university with people researching and teaching on digital technologies, innovation and futures. My own research focuses on digital media, technologies and culture especially through arts-led approaches to imagining and building better futures.

At the same time I’m based in the School of Sociology, Politics and International Studies (SPAIS) and teach and supervise there as well. We’re currently developing a new MSc in Digital and Technological Societies, which should launch in September 2025.

How did the partnership with KWMC come about?

When I joined BDFI, a group of us visited KWMC where I learnt more about their long-standing work and the area of Knowle West in south Bristol. I was particularly excited about how they approach tech through arts and co-creation practices, and how they see digital innovation as happening in communities as well as in industry and academia. BDFI already had strong links with KWMC through different projects, including Digital Inequality and Explainable AI.

Soundwave young people programme. Image credit: Ibi Feher

I co-developed a pilot project with Creative Co-Director Martha King and others on ‘Post’-pandemic hybrid futures, where we experimented with different technologies to ensure questions of accessibility and inclusion raised by doing things online during the pandemic weren’t lost when we returned to in-person ways of doing things. From this project, Carolyn Hassan (who was then CEO of KWMC) and I applied for me to have a Knowledge Exchange Placement at KWMC, to strengthen the BDFI/KWMC relationship further, build further capacity and develop longer-term projects. We focused this on how digital futures can be built through community tech. We were successful, and I’m now Researcher in Residence at KWMC, spending the equivalent of a day a week there.

How does it work practically?

Creative Hub young people programme

This is also quite varied! I go to KWMC regularly, usually once a week, where I attend organisational meetings, working groups and other activities, such as Creative Hub, which is part of the young people’s programme. These have really built my knowledge and understanding of KWMC and Knowle West more widely. I’m concentrating especially on how community tech has been central to what KWMC do. For example, I’m looking back at past projects to draw out the different tech they have co-created and deployed. This has included sensors, AI and digital fabrication.

I’m also interested in the creative ways they have communicated this work, including embroidered data visualisations and soundtracks, and the different themes they address, which include green spaces and biodiversity, housing and sustainable energy, high streets and regeneration. I’ve also been interviewing KWMC staff to explore what ‘community tech’ means to them. Community tech is quite a new term, so we’re trying to work out what it might mean for KWMC’s work and how it might be developed further. What does community tech encompass? How can community tech help make better futures? What do we need to develop it further?

What are the highlights?

One issue that has come out so far is the importance of community tech infrastructures. This includes hardware and software that must be maintained, and funding for this can be very hard to find. This dovetails with a project KWMC are currently doing, funded by Promising Trouble, called Makers and Maintainers, which focuses on building the resilience of existing community tech already in use by community businesses in England. We’re also thinking about how community tech infrastructures refer to the ongoing practices and more intangible knowledges and understandings that go into ensuring that communities are invited into discussions about, and innovation of, tech.

A Creative Cuppa weekly drop-in session. Image credit: Ibi Feher

What is really distinctive about KWMC are the arts-led approaches they use to centre and explore issues that are important to a community, which often involve working with artists to co-create tech and to communicate the findings in imaginative and sometimes unexpected ways. For example, David Matunda is currently working with KWMC on a MyWorld Fellowship, investigating and prototyping creative uses for community tech in collaboration with the Knowle West community. We are organising a meet-up at KWMC in the autumn to explore these issues in more detail – what does approaching community tech through arts do? What do we need to support and expand this work?

You sound very busy! What other collaborations are you currently involved in?

I’m working with some of my BDFI colleagues and BDFI’s other partners – both community and industry organisations – on other collaborations. These include developing a pilot research project on the future of human/machine teams with one of our industry partners, and running workshops with other partners to scope out and co-design projects with them.

What’s the best thing about collaborating/working with other organisations? 

I think if we are serious about building more inclusive, prosperous and sustainable digital futures – as BDFI is set up to do – we need to work in cross-sector collaborations. I’m especially interested in how innovation happens in everyday life, and what kinds of infrastructures might be needed to support this further.

I’m really enjoying getting out and about around Bristol and beyond, and am finding that this is, in turn, shaping how I’m thinking about and designing research projects so that co-creation, participation and public engagement are embedded throughout (rather than seen as an add-on or something that comes at the end of a project to share findings). It’s definitely an exciting time to be at BDFI!

BDFI’s Net Zero Mission

With World Earth Day soon on the horizon, we caught up with Prem Kumar Perumal, PhD researcher, to hear about BDFI’s efforts in becoming net zero. 

BDFI entrance and biowalls

If you have visited BDFI recently, you may have noticed the wall of green plants that greet you as you approach the entrance. 

These biowalls are one element of BDFI’s Net Zero facilities, part of the Sustainable Campus Testbed project which is implementing and researching carbon reduction technologies.  

The plants in the green walls have been carefully selected to suit the north and south facing elevations, with a focus on species that help with air purification. The walls feature a matrix of plants that grow well together to create year-round coverage and seasonal interest. 

Prem Kumar Perumal outside BDFI

Prem Kumar Perumal has been leading on the monitoring of air quality around the building and explains how it works. 

“We have sensors installed in different locations in and around the building. There are six sensors above the green wall and an additional two sensors inside the building on the first and ground floor. They measure critical environmental parameters including temperature, sound levels, CO2 levels, carbon monoxide and small particles in the air.” 

“We have been monitoring data from the sensors since June 2023 and are already seeing some interesting findings. Last November when Canford Park held a fireworks fiesta, we measured an increase in particulate matter levels for eight hours at BDFI, 4.7 miles away from the source.  

“This shows the impact of fire and fireworks on the surrounding area, not only in the distance travelled by particulates but also in how long they are present for.” 

Particulate Matter (PM) refers to a complex mixture of solid particles and liquid droplets in the air.  

Prem explains: “These particles vary in size, composition, and origin. PM can originate from both natural sources, such as wildfires, volcanic eruptions, and dust storms, as well as anthropogenic sources, including industrial processes, vehicle emissions, construction activities, and agricultural operations. 

Prem checking the air monitor sensors on the BDFI biowalls

“Governments and environmental agencies worldwide monitor and regulate PM levels to protect public health and the environment. Strategies to reduce PM pollution include improving emission standards for vehicles and industrial facilities, implementing cleaner technologies, controlling dust emissions from construction sites, and promoting alternative transportation modes.” 

At BDFI, Prem tracks the different particulates to identify trends and patterns. 

He said: “I monitor the spikes in the dashboard on a regular basis and gather information to understand the source. This could be an outdoor event or building works, for example.” 
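
As a rough illustration of how such spikes can be flagged automatically, the sketch below compares each new hourly reading against the median of the preceding few hours; the readings, window and threshold are invented for the example and are not the values used on the BDFI dashboard.

```python
import statistics

# Hypothetical hourly PM2.5 readings (micrograms per cubic metre), for illustration only.
readings = [8, 9, 7, 8, 10, 9, 35, 48, 52, 41, 30, 22, 14, 10, 9, 8]

WINDOW = 6        # hours of recent history used as the baseline
THRESHOLD = 3.0   # flag readings more than 3x the baseline median

for hour in range(WINDOW, len(readings)):
    baseline = statistics.median(readings[hour - WINDOW:hour])
    if readings[hour] > THRESHOLD * baseline:
        print(f"hour {hour}: {readings[hour]} ug/m3 (baseline {baseline}) - "
              "possible event, check for fireworks, bonfires or building works")
```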

The biowalls and sensors are just one aspect of the Net Zero work going on at BDFI. You can read about the other carbon reduction technologies we are working on, including our smart energy system, on our website.  

What should the law do about deepfakes?

From Taylor Swift to the Royal Family – deepfakes are rarely out of the news. BDFI’s Prof. Colin Gavaghan asks what we can do to protect ourselves and if lawmakers should be doing more. 

Credit: Kenzie Saunders/flickr

The camera does lie. It always has. For as long as we’ve had photography, we’ve had trick photography. Some of this is harmless fun. I remember as a child delighting in forced perspective photos that made it look like I was holding a tiny building or relative in the palm of my hand. Some of it is much less than harmless. Stalin was notorious for doctoring old photographs to excise those who had fallen from his favour.

The development of AI deepfakes has taken this to a new level. It’s not just static images that can be manipulated now. People can be depicted saying and doing things that are entirely invented.

Credit: GabboT/flickr

If anyone hadn’t heard of deepfakes before, the first few months of 2024 have surely remedied that. First, in January, deepfake sexual images of Taylor Swift – probably the world’s most famous pop star – were circulated on X and 4chan. This month, deepfakes were back among the headlines, when rumours circulated that a family picture by the Princess of Wales had been digitally altered by AI.

In some ways, the stories couldn’t be more different. The Taylor Swift images were made and circulated by unknown actors, without the subject’s consent, and in a manner surely known or intended to cause embarrassment and distress.

Source: The Guardian

Princess Kate’s picture, in contrast – which it turns out was more likely edited by more basic software like Photoshop – was made and shared by the subject herself, and any embarrassment will be trivial and to do with her amateur photo editing skills.

In other ways, though, the two stories show two sides of the challenge these technologies will pose.

The challenges posed by intimate deepfakes are the more obvious, and have been known about long before Taylor Swift became their most high-profile victim. As with ‘revenge porn’, the victims are overwhelmingly women and girls, and the harm it can do is well documented.  

There have been legal responses to this. The new Online Safety Act introduced a series of criminal offences aimed at the intentional sharing of “a photograph or film which shows, or appears to show, another person in an intimate state” without their consent. The wording is specifically intended to capture AI generated or altered images. These offences are not messing around either. The most serious of them carries a maximum prison sentence of two years.

Source: X

That sort of regulatory response targets the users of deepfake technologies. Though it’s hoped they have some deterrent effect, they are retrospective responses, handing out punishment after the harm is done. They also don’t have anything to say about a potentially even more pernicious use of deepfakes: the generation of fake political content. In 2022 a fake video circulated of Ukrainian president Volodymyr Zelensky appearing to announce the country’s surrender to Russia. And in January this year, voters in New Hampshire received a phone call from a deepfake “Joe Biden”, telling them not to vote in the Democratic primary.

Unlike intimate deepfakes, political deepfakes don’t always have an obvious individual victim. The harms are likely to be more collective – to the democratic process, perhaps, or national security. It would be possible to create specific offences to cover these situations too. Indeed, the US Federal Communications Commission acted promptly after the Biden deepfake to do precisely that.

An alternative response, though, would be to target the technologies themselves. The EU has gone some way in this direction. Article 52 of the forthcoming AI Act  requires that AI systems that generate synthetic content must be developed and used in such a way that their outputs are detectable as artificially generated or manipulated. The Act doesn’t specify how this would be done, but suggestions have included some sort of indelible watermark.

Will these responses help? It’s likely that the new offences will deter some people, but as with previous attempts to regulate the internet, problems are likely to exist with identification – you can’t punish someone for creating such images if you can’t find out who they are – and with jurisdiction.

What about the labelling requirements? There are technical concerns about how easily such detection systems could be circumvented. And even when content is labelled as fake, it’s uncertain how this will affect the viewer. Early research suggests we should be cautious about assuming warnings will insulate us against fakery, with some researchers pointing out a tendency to overlook or filter out the warning: “Even when they’re there, audience members’ eyes—now trained on rapid-fire visual input—seem to unsee watermarks and disclosures.”

As for intimate deepfakes, detection systems may help a bit. But I’m struck by how the harm to these women and girls seems to persist, even when the images are exposed as fakes. In a case in Spain last year, teenaged girls had deepfake nudes created and circulated by teenaged boys. As one of the girls’ mothers told the media, “They felt bad and were afraid to tell and be blamed for it.” This internalisation of blame and shame by the victims of these actions suggests that a deeper problem may lie in persistent and damaging attitudes towards female bodies and sexuality, rather than any particular technology.

Source: bandeepfakes.org

Maybe in a better future, intimate deepfakes won’t cause that level of harm. We might hope that schoolmates and neighbours will rally round the victims, and that any stigma will be reserved for the bullies and predators who have created the images. We can hope. But meanwhile, these technologies are being used to inflict considerable suffering. One solution that is gaining support would be to ban deepfake technologies altogether. Maybe the potential for harm just outweighs any potential benefit. That was certainly the view of my IT Law class last week!

But what precisely would be subject to the ban? That question brings me back to Kate’s family pic. If we are to ban “deepfakes”, where would we draw the line? Does image manipulation immediately become pernicious when AI is involved, but remain innocent when it’s done with established techniques like Photoshop? If lawmakers are going to go after the technology, rather than the use, then we’re going to have to think about precisely what technology we have in our sights.

‘If you can’t tell, does it matter?’ Do we need new law for human-like AI?

With the persistent rise in chatbots and other human-like AI, Prof. Colin Gavaghan, BDFI’s resident tech lawyer, asks: do we need regulatory protection from manipulation?

Stills from the WestWorld film

Robots and AI that look and act like humans are a standard trope in science fiction. Recent films and TV series have supplemented the shelves of books taking this conceit as a central concept. One of the most celebrated – at least in its first season – was HBO’s reimagining of Michael Crichton’s 1973 film WestWorld.

The premise of WestWorld is well known. In a futuristic theme park, human guests can pay exorbitant sums to interact with highly realistic robots or ‘hosts’. In an early episode, a human guest, William, is greeted by Angela, a “host.” When William enquires as to whether she is “real” or a robot, Angela responds: ‘Well if you can’t tell, does it matter?’

As we move through an era in which AI and robotics acquire ever greater realism in their representations of humanity, this question is acquiring increasing salience. If we can’t tell, does it matter? Evidently, quite a lot of people think it matters quite a lot. For instance, take a look at this recent blog post from the excellent Andres Guadamuz (Technollama).

But why might it matter? In what contexts? And what, if anything, should the law have to say about it?

What’s the worry about humanlike AI?

Writing in The Atlantic a few months ago, the philosopher Dan Dennett made this claim:

“Today, for the first time in history, thanks to artificial intelligence, it is possible for anybody to make counterfeit people who can pass for real in many of the new digital environments we have created. These counterfeit people are the most dangerous artifacts in human history, capable of destroying not just economies but human freedom itself.”

The most dangerous artifacts in human history?! In a year when the Oppenheimer film – to say nothing of events in Ukraine – have turned our attention back to the dangers of nuclear war, that is quite a claim! If we are to make sense of Dennett’s claim, far less decide whether we agree with it, we need to understand what Dennett means by “counterfeit people”. The term could refer to a number of things.

One obvious way in which AI can impersonate humans is through applications like ChatGPT, that can generate text indistinguishable from that generated by humans. When this is linked to a real-time conversational agent – a chatbot or an AI assistant – it can result in a conversation in which the human participant might reasonably believe the other party is also a human. Google’s “Duplex” personal assistant added a realistic spoken dimension to this in 2018, its naturalistic “ums” and “ahs” giving the impression of speaking to a real PA.

More recently, the Financial Times reported that Meta intends to release a range of AI “persona” chatbots, including one that talks like Abraham Lincoln, to keep users engaged with Facebook. Presumably, users will be aware that these are chatbots (does anyone think Abe Lincoln is actually on Facebook?) In other cases, the true identities of the chatbots will be concealed, as when bot accounts are used to spread propaganda and disinformation.

Those examples read and sound like they might be human. But AI can go further. Earlier this year, Sen. Richard Blumenthal (D-CT) kicked off a Senate panel hearing with a fake recording of his own voice, in which he described the potential risks of AI technology. So as well as impersonating humans, we now have to be alert for AI impersonating particular humans.

Soul Machines

As the technology evolves, we’ll find AI that can impersonate humans across a whole range of measures – not only reading and sounding human, but looking and acting like it too. This is the sort of work being done by Soul Machines, whose mission is to use “cutting edge AI technology … to create the world’s most alive Digital People.”

Other than a vague unease caused by these uncanny valley denizens, why should this bother us?

One of the main concerns relates to manipulation. Writing in The Economist in April, Yuval Noah Harari claimed that AI has “hacked the operating system of human civilisation”. His concern was with the capacity of AI agents to form faux intimate relationships, and thereby exert influence on us to buy or vote in particular ways.

This concern is far from fanciful. Research is already emerging, suggesting that we are, if anything, more likely to trust AI-generated faces. Imagine an AI sales bot that is optimized to look trustworthy, and combine that with software that lets it appear patient and friendly, but also able to read our voices and faces so it knows exactly when to push and when to back off.

So great are these concerns that we have already seen some legal responses. In 2018, California introduced the BOT (Bolstering Online Transparency) Act, which bans the use of pretend-human bots if they’re used to try to influence purchasing or voting decisions. Art 52 of the EU’s new AI Act adopts a similar measure to the Californian one.

Are mandatory disclosure laws the answer?

AI agents are certainly being optimized to pass for human, with a view to sell, persuade, seduce and nudge us into parting with our attention, our money, our data, our votes. What’s less obvious is how much mandatory disclosure will insulate us against that. Will knowing that we’re interacting with an AI protect us against its superhuman persuasive power?

There is some reason to think it might play a role. One study from 2019 found that potential customers receiving a cold call about a loan renewal offer were as or more likely to take up the offer when it was made by an AI. But this advantage largely dissipated when they were told in advance that the call was from a chatbot.

Interestingly, the authors of the 2019 paper reported that late disclosure of the chatbot’s identity – that is, after the offer has been explained, but before the customer makes up their mind about whether to accept it – seemed to cancel out the antipathy to chatbots. This leads them to the provisional conclusion that the experience of talking to a chatbot will allay some of people’s concerns about it. In other words, as we get more used to talking with AIs, our intuitive suspicion of them will likely dissipate.

Another reason to be somewhat sceptical of mandatory disclosure solutions is that telling me whether something was generated by AI tells me little or nothing about whether it’s true, or about whether the person I’m talking to is who they claim to be. Ultimately, I don’t really care if content comes from a bot, a human scammer, a Putin propaganda troll farm, or a genuine conspiracy theorist. Is “Patrick Doan”, the “author” of the email I received recently, a person or a bot? Who cares. He/it is clearly phishing me either way:

Phishing email

So much for cognitive misrepresentation. What about emotional manipulation? Will knowing that I’m talking to an AI help me resist the sort of emotional investment that lets the AI lead me into bad decisions?

DuoLingo owl

My answer for now is: I just don’t know. What I do know, from many hours of personal experience, is that I am by no means immune to emotional investment even in the very weak AI we have now. They don’t even need to look remotely human.  I’m even a sucker for the blatant emotional nudges from the little green owl if I don’t do my DuoLingo practice!

Vulnerable and lonely people are going to be even easier prey. Phishing and catfishing are likely still to be problems, whether the fisher is a human or an AI. Imagine trying to resist that AI Abraham Lincoln (or Taylor Swift or Ryan Gosling), when it’s been optimized to hit all the right sweet-talking notes.

Targeted steps forward

If this all sounds like a counsel of despair, it isn’t meant to. I think there are meaningful steps that can be taken to mitigate the manipulative threat posed by human-like AI. But I suspect those measures will likely have to be properly targeted if they’re to have that effect. Simply telling me that I’m talking to a “counterfeit person” is unlikely to be enough to protect me from its persuasive superpowers.

We could, for instance, consider seriously the prospect of going hard after this sort of technology, or the worst examples of it anyway. Under the EU AI Act, those AI systems which are deemed to present an unacceptable risk are to be banned outright. This includes AI that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm.

Perhaps there will soon be a case for adding highly persuasive AI systems to that list.

The UK Government seems to be going in a very different direction with regard to AI regulation, and the protections of the AI Act are unlikely to apply here. But other options exist. We could, for instance, consider stronger consumer law protections against manipulative AI technologies, to match those we have for “deceptive” and “aggressive” sales techniques.

In truth, I don’t have a clear idea right now about the best regulatory strategy. But it’s a subject I’m planning to look into more closely. Maybe it does matter if we can tell AI from human – at least to some people, at least some of the time. But on its own, I fear that knowledge will be nowhere near enough to prevent ever smarter AI, to use Harari’s words, hacking our operating systems.

This content is based on a paper given at the Gikii 2023 Conference in Utrecht, and at this year’s annual guest lecture at Southampton Law School. Colin is grateful for the helpful comments received at both. 

Network applications as an enabler for AI-driven autonomous networking

BDFI academic Dr Xenofon Vasilakos recently attended the IEEE ICC 2023 Industry Forum and Exhibition in Rome, where he gave a talk at the IF&E workshop. In this blog he goes into detail about the topics covered in the talk, as we move from the fifth (5G) towards the sixth (6G) generation of telecommunication networks.

5GASP explores self-managing and self-organizing automation for the development of sixth generation (6G) intelligent future networks. This is achieved through an ecosystem of specialized AI-driven network applications that enable automation. These applications fulfil the automation requirements of other “enhanced” network applications or services. The prototypes of these applications include network and performance prediction systems that enable proactive resource management and a human-centric approach, adapting to the dynamic nature of 6G networks and users without the need for human intervention. This AI-based automation provides improved network and service quality, while also ensuring compliance with business requirements and enhancing service agility.

Below, we provide a summary of the prototype network applications: AI-driven automation enablers and self-organised or self-managed network applications.

(1) Efficient MEC Handover (EMHO) network application (Univ. of Bristol, AI-driven Autonomy enabler)

The functioning of this network application depends on collaborative machine learning (ML) predictions to maintain and potentially improve the quality of service provided by enhanced network applications operating on a multi-access edge computing (MEC) platform. The existing prototype utilizes mobile radio resource control (RRC) monitoring data along with an additional ML layer consisting of cooperative models that predict MEC handovers.
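
As a purely illustrative sketch (not the actual 5GASP or EMHO interface), the snippet below shows how an enhanced network application might consume a handover prediction of this kind: if the predicted handover is confident enough, it warms up the service on the predicted target MEC node ahead of time.

```python
from dataclasses import dataclass

@dataclass
class HandoverPrediction:
    ue_id: str          # user equipment identifier
    target_mec: str     # MEC node the UE is predicted to hand over to
    probability: float  # model confidence in the prediction
    horizon_s: int      # how far ahead the prediction looks, in seconds

PRE_DEPLOY_THRESHOLD = 0.8  # assumed confidence needed to act proactively

def handle_prediction(pred: HandoverPrediction) -> str:
    """React to an EMHO-style handover prediction (illustrative logic only)."""
    if pred.probability >= PRE_DEPLOY_THRESHOLD:
        # Warm up state and containers on the predicted target before the handover,
        # so the service quality seen by the user does not dip when the UE moves.
        return f"pre-deploy service for {pred.ue_id} on {pred.target_mec}"
    return f"keep monitoring {pred.ue_id}; prediction not confident enough to act"

print(handle_prediction(HandoverPrediction("ue-42", "mec-edge-03", 0.91, 30)))
```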

(2) Virtual On-Board Unit (vOBU) provisioning Network Application (OdinS, AI self-organisation)

This network application deploys a digital twin (DT) of a car’s on-board unit (OBU) on the MEC node nearest to the car’s location. The DT can be “migrated” to the car’s nearest edge as a virtual OBU (vOBU) acting as a proxy, and the migration begins automatically as the car moves. To avoid bottlenecks, this network application can pose an intent for forecasting future car locations using EMHO’s mobility prediction ML, allowing it to deploy the vOBU proactively.

(3) PrivacyAnalyser Network Application (Lamda Networks, self-management)

PrivacyAnalyser is a cross-vertical cloud-native application running either at the network core or at the MEC. Among other features, it caters for ML-based classification of network data from UE and/or IoT devices, as well as privacy evaluation and analysis. PrivacyAnalyser is also converging towards ML-based network management and orchestration via EMHO’s exposed ML predictions, enabling proactive scale-in/out of MEC pods that improves energy efficiency beyond the default container autoscaling.
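
The sketch below illustrates the general idea of prediction-driven scaling, assuming a predicted request rate is available from the exposed ML predictions; the capacity figures and decision rule are invented for the example and are not PrivacyAnalyser’s actual logic.

```python
import math

PODS_MIN, PODS_MAX = 1, 10
REQUESTS_PER_POD = 200.0   # assumed capacity of a single pod (requests/s)
HEADROOM = 1.2             # keep 20% spare capacity

def target_pod_count(predicted_rps: float) -> int:
    """Choose a pod count for the load predicted a few minutes ahead (illustrative)."""
    needed = math.ceil(predicted_rps * HEADROOM / REQUESTS_PER_POD)
    # Scaling ahead of the predicted load avoids the lag of purely reactive autoscaling,
    # which is where the claimed energy-efficiency benefit comes from.
    return max(PODS_MIN, min(PODS_MAX, needed))

print(target_pod_count(predicted_rps=950.0))  # -> 6 pods for the predicted load
```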

(4) Remote Human Driving Network Application (DriveU.auto, AI-driven self-management & self-organisation)

This Network Application enables remote autonomous vehicle operation in unusual/dangerous situations. The intent is to ensure reliable, low-latency, high-quality real-time video transmission via AI-optimised network latency, but also via EMHO Network Application handover predictions to automatically deploy appropriate applications with optimised slice features matching dynamic needs.

Future Steps, Impact & sociotechnical aspects

5GASP aims to establish an Open Source Software (OSS) repository and a VNF marketplace that caters to small and medium-sized enterprises (SMEs). It also focuses on fostering a community of network application developers by providing them with tools and services. These resources enable developers to achieve the following goals: (i) implement AI-driven network automation in network applications to improve network quality with minimal human intervention by capturing business and other intents through continuous monitoring, (ii) validate and certify network services early on to ensure alignment with business and other sociotechnical goals, and (iii) prioritize inter-domain use-cases for daily testing, validation, and ensuring security and trust of third-party intellectual property rights (IPR) in their testbeds.

The key lessons learned so far can be summarized as follows:

  • AI-driven automation plays a vital role in enhancing network and service automation by minimizing the need for human intervention and improving quality of service (QoS). Moreover, it allows the adoption of higher-level policies through proper orchestration decisions. Therefore, several sociotechnical aspects can be captured by translating key value indicators (KVIs) to network performance KPIs targets for AI enabler applications.
  • AI-driven network applications and the consumption of AI-driven artefacts (such as predictions or dynamic network orchestration suggestions) make 6G network automation achievable. Again, this can enable the adoption/imposition of sociotechnical targets and policies.

As for the next steps, the project has achieved a level of maturity where network applications are already deployed using the developed tools and procedures. The project is currently seeking network application developers, individuals or SMEs, outside of the consortium who are interested in validating their 5G applications and adopting the 5GASP methodology, tools, and innovative 6G automation network applications.

Related work

[1] A. Bonea et al., Automated onboarding, testing and validation for Network Applications and Verticals, ISSCS Iasi, 2021.

[2] Kostis Trantzas et al., An automated CI/CD process for testing and deployment of Network Applications over 5G infrastructure, IEEE International Mediterranean Conference on Communications and Networking, 7–10 September 2021.

[3] X. Vasilakos et al., Towards Low-latent & Load-balanced VNF Placement with Hierarchical Reinforcement Learning, IEEE International Mediterranean Conference on Communications and Networking, 7–10 September 2021.

[4] M. Bunyakitanon et al., HELICON: Orchestrating low-latent & load-balanced Virtual Network Functions, IEEE ICC 2022.

[5] V. A. Siris et al. Exploiting mobility prediction for mobility & popularity caching and DASH adaptation, IEEE 17th International Symposium on A World of Wireless, Mobile and Multimedia Networks, 2016.

[6] R. Direito, et al., Towards a Fully Automated System for Testing and Validating Network Applications, NetSoft 2022, 2022.

[7] X. Vasilakos et al., Towards an intelligent 6G architecture: the case of jointly Optimised handover and Orchestration, WWRF47, 2022.

[8] N. Uniyal et al., On the design of a native Zero-touch 6G architecture, WWRF47, 2022.

 

Connected communities: are hybrid futures the way forward?

Following the publication of the ‘Post’ Pandemic Hybrid Futures report, Ella Chedburn from Knowle West Media Centre reflects on the pros and cons of connecting remotely during Covid, and what positives we should be taking forward from our different experiences of connecting during the pandemic.

Knowle West Fest

For many of us, the Covid-19 pandemic involved a huge shift from in-person to digital encounters across all areas of life. Here at KWMC, from the very first lockdown we knew we needed to find ways to keep working with and stay connected to our community, so we got creative with digital and blended ways of working. There were many positives to connecting remotely, through online platforms, posted packs and the like. For some people, joining meetings, events or workshops from home was suddenly possible and more accessible. However, there were lots of negatives to purely online spaces too – not everyone has access to webcams or is familiar with using technology, and some of these spaces had negative health impacts too. 

As we emerged from lockdowns, we wondered: could we get the best of both worlds by merging online and physical (‘hybrid’) spaces? We explored this in our ‘Come Together’ programme in 2021 and learned so much about the vices and virtues of these hybrid setups. We have lots of useful resources and examples on the website for anyone to use. However, as 2022 rolled around it became more and more tempting for institutions to forget these learnings and revert to in-person events that are often easier to run. 

The ‘Post’ Pandemic Hybrid Futures project came at the perfect time for us to pause and reflect on what learnings we could realistically carry forward from the pandemic. Through this collaboration, we were able to further develop some of the hybrid tools and methods we had learnt from workshops, community events, live broadcasts, festivals and blended programmes. We focused our collaboration on a specific experiment – how could we make a local community festival (Knowle West Fest) more accessible through hybrid means? 

Learning from the process

From the Knowle West Fest (KWfest) experiments one of our main learnings was that a rough-and-ready style works really well when it comes to livestreams. It seemed that the more authentic and casual style of Facebook Live resonated with many of our audiences. People in the physical space were also much more relaxed about being featured in a Facebook Live, with many seeming excited to talk on camera. Plus, the more informal nature meant that any pauses from lack of internet felt far less painful in both the online space and the physical space compared to Zoom. This livestream was also not too taxing on our staff, so it is realistic for us to continue doing them long-term. The biggest surprise was the success of our Facebook livestream afterwards – gaining over 1,000 views during the following week. Here we learned the importance of allowing digital audiences to engage in their own time.  

In comparison, only a couple of people joined our Zoom livestream. While marketing it, a few people responded negatively to the idea of Zoom – associating it with work and lockdown. People also expect events on this platform to be more professional and smoothly run, which adds pressure to staff. Despite our best efforts to market the space as a ‘cozy online portal’, these workplace associations will take more effort to overcome. Instead, we recommend using Zoom to fully engage in a single activity, allowing participants to get hands-on and make the most of the more personal space. Or even creating a pre-recorded complementary offering to access from home instead. These have both worked very well in our previous projects. 

Postcards

Alongside our two livestream experiments, we left postcards around the festival for people to send to friends and family via a ‘post box’ in the cafe. On the back of the postcards was a link to a YouTube playlist of acts playing at the festival. Surprisingly, this activity went down particularly well with children and has a lot of scope for further experimentation such as adding art, or posting to (consenting!) strangers, or posting back and forth between people. It can also be less intense for staff to run and eliminates the stress of technology failures. After the festival we sent out craft packs to some people with links to online content – again demonstrating that to access a festival experience it doesn’t all have to synchronise or be live. 

The BDFI partnership 

BDFI’s aim to create more inclusive, sustainable and prosperous digital futures aligned well with our ethos at KWMC.  

BDFI’s support was invaluable in helping us to collate all our previous research and reflect on it from both internal and external perspectives. This allowed us to fully absorb and integrate our learnings then use them as a springboard for more experimentation.  

On a practical level, the extra staff from BDFI meant that we had enough people power to confidently deliver the hybrid elements. We learned the hard way through the Come Together project that hybrid events often need double the staff and can be more demanding for facilitators and producers, so it is important that they are properly resourced and well planned.  

Next steps

At KWMC, we hope to cultivate a more inclusive future by combining the best of digital and physical spaces. We are also keen to ensure that Knowle West communities continue to benefit from the research and experiments that they have participated in. We will be sharing these learnings with the 2023 KWfest producing team and exploring ways in which we can share the research more broadly with those working in the education, community, creative and charity sectors. 

Do Pixels Have Feelings Too?

BDFI co-director Professor Daniel Neyland hosted a fascinating and informative lecture about the ethics around artificial intelligence. Here he follows up that lecture with a thought-piece on the proliferation of AI, ethical principles and questions that can be applied, and the importance of trust and truth.

Daniel Neyland lecture

We appear to be moving into a period where the number of AI applications being launched is proliferating rapidly. All indications are that these applications will utilize a range of data, and operate at a speed and on a scale that is unprecedented. The ethical impact of these technologies – on our daily lives, our workplaces, modes of travel and our health – is likely to be huge.

This is a familiar story – we have perhaps heard similar narratives on previous occasions (for example in relation to CCTV in the 1990s, the internet in the late 1990s and early 2000s, biometric IDs from the early 2000s until around 2010, smartphones from around 2008 onwards, and so on). We are always told as part of these narratives that trying to address the impact emerging through these technologies will be incredibly difficult. However, the development of AI systems does seem to pose further specific challenges.

Firstly, for the most part, AI developments are even more opaque than some of the other technologies we have seen developed in recent decades. We don’t get to see the impacts of these systems until they are launched into the world, we may not even be aware that such systems exist before they are launched. In order to assess the likely problems specific AI applications will create, we need to open up the design and development stage of these systems to greater scrutiny. If we can intervene at the design stage, we might have a greater chance of reducing the number and range of harms that these systems might otherwise create.

Secondly, with generative AI and machine learning neural networks, systems have a certain amount of autonomy to produce their outputs. This means that if we want to manage the ethics of AI, we cannot work with the designers and developers of these systems alone. We need to work with the AI. Key to success here will be to engage with carefully bounded experiments to assess how AI engages with the social world, in order to assess its likely impacts and any changes to system design that are needed. We have an imperative to experiment with AI before it is launched into the world, but this imperative is in danger of being swept aside by the current drive to gain a market advantage by being the first mover in any particular AI application.

Thirdly, when we do have access to these AI applications, we need to attune our ethical assessment to the specific technology in focus. Not all AI is the same. In this lecture, I provide a range of broad ethical principles that draw on existing work in the field, but I also demonstrate how these principles can be given a specific focus when looking at a particular AI application – a machine-learning neural network that uses digital video to perform emotion recognition.

I utilize broad ethical principles to raise questions regarding how a specific AI system can be re-designed. The ethical principles and associated questions set out one way we can discover and address concerns in the development of new AI systems. These include:

  • Consultation – at the design stage, how can we actively foster engagement with emerging AI systems to assess perceptions, trust and sentiment, for example, toward an emerging system?
  • Confidence – do we have confidence that the system will perform as we expect, how can we assess confidence (what kinds of experiments might we carry out, for example, to test how well a system works), and how can we address concerns raised by a system that is not operating as anticipated?
  • Context – in what setting is the system designed for use and what concerns arise from switching contexts of use?
  • Consequence – what happens as a result of the system being used, who is subject to AI decision making and for what purpose?
  • Consent – how can people give agreement that they should be subjects of AI decision making, that their data should be processed by AI, or that they are happy to work with an AI system in their workplace?
  • Compliance – what are the relevant legal and regulatory frameworks with which a system must comply? How might we design regulatory compliance into the technology?
  • Creep – if we carry out an ethical assessment in relation to a new and emerging technology in one use case, how might we guard against or assess the issues that might arise if that technology is used in other contexts?

These ethical principles and questions are not designed to be exhaustive; rather, I suggest, they need to be applied, developed, added to or taken in different directions when applied to specific technologies under development. They seem to represent a useful starting point for asking questions. In the lecture on neural networks for machine learning, I suggest that two significant concerns that arise through asking these questions are trust and truth. Drawing on over 50 years of social science research on trust[1], I suggest we can engage with AI systems to explore the extent to which these systems provide the basic conditions for trust: does the system operate in line with our expectations of it (linking back to the ethical principle of confidence)? But we can go further and ask whether we trust the system to place our interests ahead of the interests of those who own or operate it. We can also look at how trust is managed in demonstrations of AI and how AI disrupts the routine grounds of social order through which trust would normally persist.

With regard to truth, in the lecture I pose questions about the nature and source of, and the reliance upon, the somewhat simplistic notions of truth that seem to pervade AI system development. I suggest this becomes problematic when assumptions are made that AI systems do no more than reflect a truth that is already out there in the world, independent of the technology. Without straying into debates about post-truth and its associated politics, it nonetheless seems problematic that systems with a generative capacity to create their own truth (at least to an extent) are then presented to the world as doing no more than re-presenting a truth that already exists independent of the system. In the lecture I also suggest that truth can be considered as an input (through the notion of ground truths that the system itself partially creates) and as an output (through the system’s results).

[1] For example, Barber’s (1983) work on trust, Shapin (1994), Garfinkel (1963)

Barber, B. (1983) The Logics and Limits of Trust (Rutgers University Press, NJ, USA)

Shapin, S. (1994) A Social History of Truth (University of Chicago Press, London)

Garfinkel, H. (1963) A conception of and experiments with ‘trust’ as a condition of stable concerted actions, in Harvey, O. (ed) Motivation and Social Interaction (Ronald Press, NY, USA) pp. 197-238