Figuring out futures works better in partnership

A significant part of BDFI’s mission is its collaboration with our partners. We spoke to Beckie Coleman, Professor of Digital Futures, about her current Knowledge Exchange Programme with BDFI’s community partner Knowle West Media Centre and where it is taking them.

What does your role at BDFI typically involve?

Beckie Coleman outside KWMC

I have quite a varied role with BDFI. One aspect is working with BDFI partners in different ways (e.g. on research projects, in workshops and meetings) and making connections across the university with people researching and teaching on digital technologies, innovation and futures. My own research focuses on digital media, technologies and culture especially through arts-led approaches to imagining and building better futures.

At the same time I’m based in the School of Sociology, Politics and International Studies (SPAIS) and teach and supervise there as well. We’re currently developing a new MSc in Digital and Technological Societies, which should launch in September 2025.

How did the partnership with KWMC come about?

When I joined BDFI, a group of us visited KWMC where I learnt more about their long-standing work and the area of Knowle West in south Bristol. I was particularly excited about how they approach tech through arts and co-creation practices, and how they see digital innovation as happening in communities as well as in industry and academia. BDFI already had strong links with KWMC through different projects, including Digital Inequality and Explainable AI.

Soundwave young people programme Image credit: Ibi Feher

I co-developed a pilot project with Creative Co-Director Martha King and others on ‘Post’-pandemic hybrid futures, where we experimented with different technologies to ensure questions of accessibility and inclusion raised by doing things online during the pandemic weren’t lost when we returned to in-person ways of doing things. From this project, Carolyn Hassan (who was then CEO of KWMC) and I applied for me to have a Knowledge Exchange Placement at KWMC, to strengthen the BDFI/KWMC relationship further, build further capacity and develop longer-term projects. We focused this on how digital futures can be built through community tech. We were successful, and I’m now Researcher in Residence at KWMC, spending the equivalent of a day a week there.

How does it work practically?

Creative Hub young people programme

This is also quite varied! I go to KWMC regularly, usually once a week, where I attend organisational meetings, working groups and other activities, such as Creative Hub, which is part of the young people’s programme. These have really built my knowledge and understanding of KWMC and Knowle West more widely. I’m concentrating especially on how community tech has been central to what KWMC do. For example, I’m looking back at past projects to draw out the different tech they have co-created and deployed. This has included sensors, AI and digital fabrication.

I’m also interested in the creative ways they have communicated this work, including embroidered data visualisations and soundtracks, and the different themes they address, which include green spaces and biodiversity, housing and sustainable energy, high streets and regeneration. I’ve also been interviewing KWMC staff to explore what ‘community tech’ means to them. Community tech is quite a new term and so we’re trying to work out what it might mean for KWMC’s work and how it might be developed further. What does community tech encompass? How can community tech help make better futures? What do we need to develop it further?

What are the highlights?

One issue that has come out so far is the importance of community tech infrastructures. This includes hardware and software that must be maintained, and funding for this can be very hard to find. This dovetails with a project KWMC are currently doing, funded by Promising Trouble, called Makers and Maintainers, which focuses on building the resilience of existing community tech already in use by community businesses in England. We’re also thinking about how community tech infrastructure refers to the ongoing practices and more intangible knowledges and understandings that go into ensuring that communities are invited into discussions about, and innovation of, tech.

A Creative Cuppa weekly drop in session Image credit: Ibi Feher

What is really distinctive about KWMC is the arts-led approach they use to centre and explore issues that are important to a community, which often involves working with artists to co-create tech and to communicate the findings in imaginative and sometimes unexpected ways. For example, David Matunda is currently working with KWMC on a MyWorld Fellowship, investigating and prototyping creative uses for community tech in collaboration with the Knowle West community. We are organising a meet-up at KWMC in the autumn to explore these issues in more detail – what does approaching community tech through arts do? What do we need to support and expand this work?

You sound very busy! What other collaborations are you currently involved in?

I’m working with some of my BDFI colleagues and BDFI’s other partners – both community and industry organisations – on other collaborations. These include developing a pilot research project on the future of human/machine teams with one of our industry partners, and running workshops with other partners to scope out and co-design projects with them.

What’s the best thing about collaborating/working with other organisations? 

I think if we are serious about building more inclusive, prosperous and sustainable digital futures – as BDFI is set up to do – we need to work in cross-sector collaborations. I’m especially interested in how innovation happens in everyday life, and what kinds of infrastructures might be needed to support this further.

I’m really enjoying getting out and about around Bristol and beyond, and am finding that this is, in turn, shaping how I’m thinking about and designing research projects so that co-creation, participation and public engagement are embedded throughout (rather than seen as an add-on or something that comes at the end of a project to share findings). It’s definitely an exciting time to be at BDFI!

BDFI’s Net Zero Mission

With World Earth Day soon on the horizon, we caught up with Prem Kumar Perumal, PhD researcher, to hear about BDFI’s efforts in becoming net zero. 

BDFI entrance and biowalls

If you have visited BDFI recently, you may have noticed the wall of green plants that greet you as you approach the entrance. 

These biowalls are one element of BDFI’s Net Zero facilities, part of the Sustainable Campus Testbed project which is implementing and researching carbon reduction technologies.  

The plants in the green walls have been carefully selected to suit the north and south facing elevations, with a focus on species that help with air purification. The walls feature a matrix of plants that grow well together to create year-round coverage and seasonal interest. 

Prem Kumar Perumal outside BDFI

Prem Kumar Perumal has been leading on the monitoring of air quality around the building and explains how it works. 

“We have sensors installed in different locations in and around the building. There are six sensors above the green wall and an additional two sensors inside the building on the first and ground floor. They measure critical environmental parameters including temperature, sound levels, CO2 levels, carbon monoxide and small particles in the air.” 

“We have been monitoring data from the sensors since June 2023 and are already seeing some interesting findings. Last November, when Canford Park held a fireworks fiesta, we measured an eight-hour increase in particulate matter levels at BDFI, 4.7 miles away from the source.

“This shows the impact of fire and fireworks on the surrounding area, not only in the distance travelled by particulates but also in how long they are present for.” 

Particulate Matter (PM) refers to a complex mixture of solid particles and liquid droplets in the air.  

Prem explains: “These particles vary in size, composition, and origin. PM can originate both from natural sources, such as wildfires, volcanic eruptions, and dust storms, and from anthropogenic sources, including industrial processes, vehicle emissions, construction activities, and agricultural operations.

Prem checking the air monitor sensors on the BDFI biowalls

“Governments and environmental agencies worldwide monitor and regulate PM levels to protect public health and the environment. Strategies to reduce PM pollution include improving emission standards for vehicles and industrial facilities, implementing cleaner technologies, controlling dust emissions from construction sites, and promoting alternative transportation modes.”

At BDFI, Prem tracks the different particulates to identify trends and patterns.

He said: “I monitor the spikes in the dashboard on a regular basis and gather information to understand the source. This could be an outdoor event or building works, for example.”
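For readers curious what this spike-spotting can look like in practice, below is a minimal sketch assuming the sensor readings have been exported to a pandas DataFrame with a timestamp index and a PM2.5 column. The column name, rolling window and threshold are illustrative, not the project’s actual pipeline.

```python
# Minimal sketch: flag particulate spikes against a rolling baseline.
# Assumes a DataFrame with a datetime index and a 'pm25' column
# (column name and thresholds are illustrative, not BDFI's schema).
import pandas as pd


def flag_pm_spikes(readings: pd.DataFrame, window: str = "24h", factor: float = 3.0) -> pd.DataFrame:
    """Return rows where PM2.5 exceeds `factor` times the rolling median."""
    baseline = readings["pm25"].rolling(window).median()
    return readings[readings["pm25"] > factor * baseline]


if __name__ == "__main__":
    # Made-up readings: a quiet evening with a short fireworks spike.
    idx = pd.date_range("2023-11-04 18:00", periods=48, freq="30min")
    data = pd.DataFrame({"pm25": [8.0] * 12 + [60.0] * 8 + [8.0] * 28}, index=idx)
    print(flag_pm_spikes(data))
```

Spikes flagged this way can then be cross-checked against local events, as Prem describes.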

The biowalls and sensors are just one aspect of the Net Zero work going on at BDFI. You can read about the other carbon reduction technologies we are working on, including our smart energy system, on our website.  

What should the law do about deepfakes?

From Taylor Swift to the Royal Family – deepfakes are rarely out of the news. BDFI’s Prof. Colin Gavaghan asks what we can do to protect ourselves and if lawmakers should be doing more. 

Credit: Kenzie Saunders/flickr

The camera does lie. It always has. For as long as we’ve had photography, we’ve had trick photography. Some of this is harmless fun. I remember as a child delighting in forced perspective photos that made it look like I was holding a tiny building or relative in the palm of my hand. Some of it is much less than harmless. Stalin was notorious for doctoring old photographs to excise those who had fallen from his favour.

The development of AI deepfakes has taken this to a new level. It’s not just static images that can be manipulated now. People can be depicted saying and doing things that are entirely invented.

Credit: GabboT/flickr

If anyone hadn’t heard of deepfakes before, the first few months of 2024 have surely remedied that. First, in January, deepfake sexual images of Taylor Swift – probably the world’s most famous pop star – were circulated on X and 4chan. This month, deepfakes were back among the headlines, when rumours circulated that a family picture by the Princess of Wales had been digitally altered by AI.

In some ways, the stories couldn’t be more different. The Taylor Swift images were made and circulated by unknown actors, without the subject’s consent, and in a manner surely known or intended to cause embarrassment and distress.

Source: The Guardian

Princess Kate’s picture, in contrast – which it turns out was more likely edited by more basic software like Photoshop – was made and shared by the subject herself, and any embarrassment will be trivial and to do with her amateur photo editing skills.

In other ways, though, the two stories show two sides of the challenge these technologies will pose.

The challenges posed by intimate deepfakes are the more obvious, and were known about long before Taylor Swift became their most high-profile victim. As with ‘revenge porn’, the victims are overwhelmingly women and girls, and the harm they can do is well documented.

There have been legal responses to this. The new Online Safety Act introduced a series of criminal offences aimed at the intentional sharing of “a photograph or film which shows, or appears to show, another person in an intimate state” without their consent. The wording is specifically intended to capture AI generated or altered images. These offences are not messing around either. The most serious of them carries a maximum prison sentence of two years.

Source: X

That sort of regulatory response targets the users of deepfake technologies. Though it’s hoped they have some deterrent effect, they are retrospective responses, handing out punishment after the harm is done. They also don’t have anything to say about a potentially even more pernicious use of deepfakes: the generation of fake political content. In 2022 a fake video circulated of Ukrainian president Volodymyr Zelensky appearing to announce the country’s surrender to Russia. And in January this year, voters in New Hampshire received a phone call from a deepfake “Joe Biden”, telling them not to vote in the Democratic primary.

Unlike intimate deepfakes, political deepfakes don’t always have an obvious individual victim. The harms are likely to be more collective – to the democratic process, perhaps, or national security. It would be possible to create specific offences to cover these situations too. Indeed, the US Federal Communications Commission acted promptly after the Biden deepfake to do precisely that.

An alternative response, though, would be to target the technologies themselves. The EU has gone some way in this direction. Article 52 of the forthcoming AI Act  requires that AI systems that generate synthetic content must be developed and used in such a way that their outputs are detectable as artificially generated or manipulated. The Act doesn’t specify how this would be done, but suggestions have included some sort of indelible watermark.

Will these responses help? It’s likely that the new offences will deter some people, but as with previous attempts to regulate the internet, problems are likely to exist with identification – you can’t punish someone for creating such images if you can’t find out who they are – and with jurisdiction.

What about the labelling requirements? There are technical concerns about how easily detection systems could be circumvented. And even when content is labelled as fake, it’s uncertain how this will affect the viewer. Early research suggests we should be cautious about assuming warnings will insulate us against fakery, with some researchers pointing out a tendency to overlook or filter out the warning: “Even when they’re there, audience members’ eyes—now trained on rapid-fire visual input—seem to unsee watermarks and disclosures.”

As for intimate deepfakes, detection systems may help a bit. But I’m struck by how the harm to these women and girls seems to persist, even when the images are exposed as fakes. In a case in Spain last year, teenaged girls had deepfake nudes created and circulated by teenaged boys. As one of the girls’ mothers told the media, “They felt bad and were afraid to tell and be blamed for it.” This internalisation of blame and shame by the victims of these actions suggests that a deeper problem may lie in persistent and damaging attitudes towards female bodies and sexuality, rather than any particular technology.

Source: bandeepfakes.org

Maybe in a better future, intimate deepfakes won’t cause that level of harm. We might hope that schoolmates and neighbours will rally round the victims, and that any stigma will be reserved for the bullies and predators who have created the images. We can hope. But meanwhile, these technologies are being used to inflict considerable suffering. One solution that is gaining support would be to ban deepfake technologies altogether. Maybe the potential for harm just outweighs any potential benefit. That was certainly the view of my IT Law class last week!

But what precisely would be subject to the ban? That question brings me back to Kate’s family pic. If we are to ban “deepfakes”, where would we draw the line? Does image manipulation immediately become pernicious when AI is involved, but remain innocent when it’s done with established techniques like Photoshop? If lawmakers are going to go after the technology, rather than the use, then we’re going to have to think about precisely what technology we have in our sights.

‘If you can’t tell, does it matter?’ Do we need new law for human-like AI?

With the persistent rise in chatbots and other human-like AI, Prof. Colin Gavaghan, BDFI’s resident tech lawyer, asks: do we need regulatory protection from manipulation?

Stills from the Westworld film

Robots and AI that look and act like humans are a standard trope in science fiction. Recent films and TV series have supplemented the shelves of books taking this conceit as a central concept. One of the most celebrated – at least in its first season – was HBO’s reimagining of Michael Crichton’s 1973 film Westworld.

The premise of Westworld is well known. In a futuristic theme park, human guests can pay exorbitant sums to interact with highly realistic robots or ‘hosts’. In an early episode, a human guest, William, is greeted by Angela, one of these hosts. When William enquires whether she is ‘real’ or a robot, Angela responds: ‘Well, if you can’t tell, does it matter?’

As we move through an era in which AI and robotics acquire ever greater realism in their representations of humanity, this question is gaining salience. If we can’t tell, does it matter? Evidently, quite a lot of people think it matters quite a lot. For instance, take a look at this recent blog post from the excellent Andres Guadamuz (Technollama).

But why might it matter? In what contexts? And what, if anything, should the law have to say about it?

What’s the worry about humanlike AI?

Writing in The Atlantic a few months ago, the philosopher Dan Dennett made this claim:

“Today, for the first time in history, thanks to artificial intelligence, it is possible for anybody to make counterfeit people who can pass for real in many of the new digital environments we have created. These counterfeit people are the most dangerous artifacts in human history, capable of destroying not just economies but human freedom itself.”

The most dangerous artifacts in human history?! In a year when the Oppenheimer film – to say nothing of events in Ukraine – have turned our attention back to the dangers of nuclear war, that is quite a claim! If we are to make sense of Dennett’s claim, far less decide whether we agree with it, we need to understand what Dennett means by “counterfeit people”. The term could refer to a number of things.

One obvious way in which AI can impersonate humans is through applications like ChatGPT, which can generate text indistinguishable from that written by humans. When this is linked to a real-time conversational agent – a chatbot or an AI assistant – it can result in a conversation in which the human participant might reasonably believe the other party is also a human. Google’s “Duplex” personal assistant added a realistic spoken dimension to this in 2018, its naturalistic “ums” and “ahs” giving the impression of speaking to a real PA.

More recently, the Financial Times reported that Meta intends to release a range of AI “persona” chatbots, including one that talks like Abraham Lincoln, to keep users engaged with Facebook. Presumably, users will be aware that these are chatbots (does anyone think Abe Lincoln is actually on Facebook?). In other cases, the true identities of the chatbots will be concealed, as when bot accounts are used to spread propaganda and disinformation.

Those examples read and sound like they might be human. But AI can go further. Earlier this year, Sen. Richard Blumenthal (D-CT) kicked off a Senate panel hearing with a fake recording of his own voice, in which he described the potential risks of AI technology. So as well as impersonating humans, we now have to be alert for AI impersonating particular humans.

Soul Machines

As the technology evolves, we’ll find AI that can impersonate humans across a whole range of measures – not only reading and sounding human, but looking and acting like it too. This is the sort of work being done by Soul Machines, whose mission is to use “cutting edge AI technology … to create the world’s most alive Digital People.”

Other than a vague unease caused by these uncanny valley denizens, why should this bother us?

One of the main concerns relates to manipulation. Writing in The Economist in April, Yuval Noah Harari claimed that AI has “hacked the operating system of human civilisation”. His concern was with the capacity of AI agents to form faux intimate relationships, and thereby exert influence on us to buy or vote in particular ways.

This concern is far from fanciful. Research is already emerging, suggesting that we are, if anything, more likely to trust AI-generated faces. Imagine an AI sales bot that is optimized to look trustworthy, and combine that with software that lets it appear patient and friendly, but also able to read our voices and faces so it knows exactly when to push and when to back off.

So great are these concerns that we have already seen some legal responses. In 2018, California introduced the BOT (Bolstering Online Transparency) Act, which makes it unlawful to use bots that conceal their artificial identity to try to influence purchasing or voting decisions. Art 52 of the EU’s new AI Act adopts a similar measure to the Californian one.

Are mandatory disclosure laws the answer?

AI agents are certainly being optimized to pass for human, the better to sell, persuade, seduce and nudge us into parting with our attention, our money, our data, our votes. What’s less obvious is how much mandatory disclosure will insulate us against that. Will knowing that we’re interacting with an AI protect us against its superhuman persuasive power?

There is some reason to think it might play a role. One study from 2019 found that potential customers receiving a cold call about a loan renewal offer were as or more likely to take up the offer when it was made by an AI. But this advantage largely dissipated when they were told in advance that the call was from a chatbot.

Interestingly, the authors of the 2019 paper reported that late disclosure of the chatbot’s identity – that is, after the offer has been explained, but before the customer makes up their mind about whether to accept it – seemed to cancel out the antipathy to chatbots. This leads them to the provisional conclusion that the experience of talking to a chatbot will allay some of people’s concerns about it. In other words, as we get more used to talking with AIs, our intuitive suspicion of them will likely dissipate.

Another reason to be somewhat sceptical of mandatory disclosure solutions is that telling me whether something was generated by AI tells me little or nothing about whether it’s true, or about whether the person I’m talking to is who they claim to be. Ultimately, I don’t really care if content comes from a bot, a human scammer, a Putin propaganda troll farm, or a genuine conspiracy theorist. Is “Patrick Doan”, the “author” of the email I received recently, a person or a bot? Who cares. He/it is clearly phishing me either way:

Phishing email

So much for cognitive misrepresentation. What about emotional manipulation? Will knowing that I’m talking to an AI help me resist the sort of emotional investment that lets the AI lead me into bad decisions?

Duolingo owl

My answer for now is: I just don’t know. What I do know, from many hours of personal experience, is that I am by no means immune to emotional investment even in the very weak AI we have now. They don’t even need to look remotely human. I’m even a sucker for the blatant emotional nudges from the little green owl if I don’t do my Duolingo practice!

Vulnerable and lonely people are going to be even easier prey. Phishing and catfishing are likely still to be problems, whether the fisher is a human or an AI. Imagine trying to resist that AI Abraham Lincoln (or Taylor Swift or Ryan Gosling), when it’s been optimized to hit all the right sweet-talking notes.

Targeted steps forward

If this all sounds like a counsel of despair, it isn’t meant to. I think there are meaningful steps that can be taken to mitigate the manipulative threat posed by human-like AI. But I suspect those measures will likely have to be properly targeted if they’re to have that effect. Simply telling me that I’m talking to a “counterfeit person” is unlikely to be enough to protect me from its persuasive superpowers.

We could, for instance, consider seriously the prospect of going hard after this sort of technology, or the worst examples of it anyway. Under the EU AI Act, those AI systems which are deemed to present an unacceptable risk are to be banned outright. This includes AI that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm.

Perhaps there will soon be a case for adding highly persuasive AI systems to that list.

The UK Government seems to be going in a very different direction with regard to AI regulation, and the protections of the AI Act are unlikely to apply here. But other options exist. We could, for instance, consider stronger consumer law protections against manipulative AI technologies, to match those we have for “deceptive” and “aggressive” sales techniques.

In truth, I don’t have a clear idea right now about the best regulatory strategy. But it’s a subject I’m planning to look into more closely. Maybe it does matter if we can tell AI from human – at least to some people, at least some of the time. But on its own, I fear that knowledge will be nowhere near enough to prevent ever smarter AI, to use Harari’s words, hacking our operating systems.

This content is based on a paper given at the Gikii 2023 Conference in Utrecht, and at this year’s annual guest lecture at Southampton Law School. Colin is grateful for the helpful comments received at both. 

Network applications as an enabler for AI-driven autonomous networking

BDFI academic Dr Xenofon Vasilakos recently attended the IEEE ICC 2023 Industry Forum and Exhibition in Rome, where he gave a speech at the IF&E workshop. In this blog he goes into detail about the topics covered in the speech, as we move from the fifth (5G) towards the sixth (6G) generation of telecommunication networks.

5GASP explores self-managing and self-organizing automation for the development of sixth generation (6G) intelligent future networks. This is achieved through an ecosystem of specialized AI-driven network applications that enable automation. These applications fulfil the automation requirements of other “enhanced” network applications or services. The prototypes of these applications include network and performance prediction systems that enable proactive resource management and a human-centric approach, adapting to the dynamic nature of 6G networks and users without the need for human intervention. This AI-based automation provides improved network and service quality, while also ensuring compliance with business requirements and enhancing service agility.

Below is a summary of the prototype network applications: AI-driven automation enablers and self-organising or self-managing applications.

(1) Efficient MEC Handover (EMHO) network application (Univ. of Bristol, AI-driven Autonomy enabler)

The functioning of this network application depends on collaborative machine learning (ML) predictions to maintain and potentially improve the quality of service provided by enhanced network applications operating on a multi-access edge computing (MEC) platform. The existing prototype utilizes mobile radio resource control (RRC) monitoring data along with an additional ML layer consisting of cooperative models that predict MEC handovers.
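To make the idea of this additional ML layer concrete, here is a minimal sketch of a handover-prediction classifier trained on the kind of features one might derive from RRC monitoring data. The feature set, synthetic labels and choice of classifier are illustrative assumptions and do not reproduce the cooperative models used in the actual EMHO prototype.

```python
# Minimal sketch: predict whether a user equipment (UE) will hand over to
# another MEC node in the next interval, from RRC-style monitoring features.
# Features, labels and model choice are illustrative, not the EMHO prototype.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Synthetic stand-in for flattened RRC monitoring records:
# [serving-cell RSRP (dBm), strongest neighbour RSRP (dBm), UE speed (m/s)]
X = np.column_stack([
    rng.normal(-95, 8, n),
    rng.normal(-100, 8, n),
    rng.uniform(0, 30, n),
])
# Toy label: a handover is likely when the neighbour is stronger and the UE is moving.
y = ((X[:, 1] > X[:, 0]) & (X[:, 2] > 5)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

In the real system, predictions like these would be exposed to other network applications so they can react before a handover degrades their quality of service.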

(2) Virtual On-Board Unit (vOBU) provisioning Network Application (OdinS, AI self-organisation)

This network application deploys a digital twin (DT) of a car’s on-board unit (OBU) on the MEC node nearest to the car’s location. The DT can be “migrated” to the car’s nearest edge as a virtual OBU (vOBU) acting as a proxy, and its migration begins automatically as the car moves. To avoid bottlenecks, this network application can pose an intent for forecasting future car locations with EMHO’s mobility-prediction ML, allowing it to deploy the vOBU proactively.
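As a rough illustration of that proactive step, the sketch below chooses the MEC node closest to a forecast car position so the vOBU twin could be deployed there before the car arrives. The node list, coordinates and forecast value are invented for illustration; the actual 5GASP placement logic is not reproduced here.

```python
# Minimal sketch: pick the MEC node nearest a predicted car position so the
# vOBU digital twin can be deployed there proactively.
# Node names, coordinates and the forecast are illustrative assumptions.
import math

MEC_NODES = {
    "edge-a": (51.455, -2.585),
    "edge-b": (51.440, -2.560),
    "edge-c": (51.470, -2.540),
}


def nearest_mec_node(predicted_position: tuple) -> str:
    """Return the MEC node closest to the predicted (lat, lon) position."""
    return min(MEC_NODES, key=lambda node: math.dist(predicted_position, MEC_NODES[node]))


# Example: a mobility model forecasts where the car will be in 30 seconds.
forecast = (51.468, -2.545)
print(f"Pre-deploy vOBU twin on {nearest_mec_node(forecast)}")  # -> edge-c
```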

(3) PrivacyAnalyser Network Application (Lamda Networks, self-management)

PrivacyAnalyser is a cross-vertical, cloud-native application running either at the network core or at the MEC. Among other features, it caters for ML-based classification of network data from UE and/or IoT devices, and for privacy evaluation and analysis. PrivacyAnalyser is also converging toward ML-based network management and orchestration via EMHO’s exposed ML predictions, enabling smart, proactive scale-in/scale-out of MEC pods that improves on default container autoscaling and saves energy.
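The contrast with default autoscaling is that the scaling decision is taken on a predicted load rather than on current resource usage. A minimal sketch of that decision, with made-up pod capacity and limits rather than PrivacyAnalyser’s actual policy, is shown below.

```python
# Minimal sketch: proactive scale-in/out of MEC pods from a predicted request
# rate, instead of reacting to current CPU as default autoscaling does.
# Capacity and limits are illustrative assumptions.
import math
from dataclasses import dataclass


@dataclass
class ScalingPolicy:
    requests_per_pod: float = 200.0  # assumed capacity of a single pod
    min_pods: int = 1
    max_pods: int = 10

    def desired_replicas(self, predicted_request_rate: float) -> int:
        """Pods needed to serve the predicted load for the next interval."""
        needed = math.ceil(predicted_request_rate / self.requests_per_pod)
        return max(self.min_pods, min(self.max_pods, needed))


policy = ScalingPolicy()
for predicted in (150, 650, 90):  # e.g. load forecasts from an ML enabler
    print(predicted, "->", policy.desired_replicas(predicted), "pods")
# prints: 150 -> 1, 650 -> 4, 90 -> 1
```

Scaling down ahead of predicted quiet periods is where the energy-efficiency gain over reactive autoscaling comes from.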

(4) Remote Human Driving Network Application (DriveU.auto, AI-driven self-management & self-organisation)

This network application enables remote human operation of autonomous vehicles in unusual or dangerous situations. The intent is to ensure reliable, low-latency, high-quality real-time video transmission, both via AI-optimised network latency and via the EMHO network application’s handover predictions, which allow the appropriate applications to be deployed automatically with optimised slice features matching dynamic needs.

Future steps, impact and sociotechnical aspects

5GASP aims to establish an Open Source Software (OSS) repository and a VNF marketplace that caters to small and medium-sized enterprises (SMEs). It also focuses on fostering a community of network application developers by providing them with tools and services. These resources enable developers to achieve the following goals: (i) implement AI-driven network automation in network applications to improve network quality with minimal human intervention by capturing business and other intents through continuous monitoring, (ii) validate and certify network services early on to ensure alignment with business and other sociotechnical goals, and (iii) prioritize inter-domain use-cases for daily testing, validation, and ensuring security and trust of third-party intellectual property rights (IPR) in their testbeds.

The key lessons learned so far can be summarized as follows:

  • AI plays a vital role in enhancing network and service automation by minimizing the need for human intervention and improving quality of service (QoS). Moreover, it allows the adoption of higher-level policies through proper orchestration decisions. Therefore, several sociotechnical aspects can be captured by translating key value indicators (KVIs) into network performance KPI targets for AI enabler applications.
  • AI-driven network applications and the consumption of AI-driven artefacts (such as predictions or dynamic network orchestration suggestions) make 6G network automation achievable. Again, this can enable the adoption/imposition of sociotechnical targets and policies.

As for the next steps, the project has achieved a level of maturity where network applications are already deployed using the developed tools and procedures. The project is currently seeking network application developers, individuals or SMEs, outside of the consortium who are interested in validating their 5G applications and adopting the 5GASP methodology, tools, and innovative 6G automation network applications.

Related work

[1] A. Bonea et al., Automated onboarding, testing and validation for Network Applications and Verticals, ISSCS Iasi, 2021.

[2] Kostis Trantzas et al., An automated CI/CD process for testing and deployment of Network Applications over 5G infrastructure, IEEE International Mediterranean Conference on Communications and Networking, 7–10 September 2021.

[3] X. Vasilakos et al., Towards Low-latent & Load-balanced VNF Placement with Hierarchical Reinforcement Learning, IEEE International Mediterranean Conference on Communications and Networking, 7–10 September 2021.

[4] M. Bunyakitanon et al., HELICON: Orchestrating low-latent & load-balanced Virtual Network Functions, IEEE ICC 2022.

[5] V. A. Siris et al. Exploiting mobility prediction for mobility & popularity caching and DASH adaptation, IEEE 17th International Symposium on A World of Wireless, Mobile and Multimedia Networks, 2016.

[6] R. Direito, et al., Towards a Fully Automated System for Testing and Validating Network Applications, NetSoft 2022, 2022.

[7] X. Vasilakos et al., Towards an intelligent 6G architecture: the case of jointly Optimised handover and Orchestration, WWRF47, 2022.

[8] N. Uniyal et al., On the design of a native Zero-touch 6G architecture, WWRF47, 2022.

 

Connected communities: are hybrid futures the way forward?

Following the publication of the ‘Post’ Pandemic Hybrid Futures report, Ella Chedburn from Knowle West Media Centre reflects on the pros and cons of connecting remotely during Covid, and the positives we should take forward from our different experiences of the pandemic.

Knowle West Fest

For many of us, the Covid-19 pandemic involved a huge shift from in-person to digital encounters across all areas of life. Here at KWMC, from the very first lockdown we knew we needed to find ways to keep working with and stay connected to our community, so we got creative with digital and blended ways of working. There were many positives to connecting remotely, through online platforms, posted packs and more. For some people, joining meetings, events or workshops from home was suddenly possible and more accessible. However, there were lots of negatives to purely online spaces too – not everyone has access to webcams or is familiar with using technology, and some of these spaces had negative health impacts too.

As we emerged from lockdowns, we wondered: could we get the best of both worlds by merging online and physical (‘hybrid’) spaces? We explored this in our ‘Come Together’ programme in 2021 and learned so much about the vices and virtues of these hybrid setups. We have lots of useful resources and examples on the website for anyone to use. However, as 2022 rolled around it became more and more tempting for institutions to forget these learnings and revert to in-person events that are often easier to run. 

The ‘Post’ Pandemic Hybrid Futures project came at the perfect time for us to pause and reflect on what learnings we could realistically carry forward from the pandemic. Through this collaboration, we were able to further develop some of the hybrid tools and methods we had learnt from workshops, community events, live broadcasts, festivals and blended programmes. We focused our collaboration on a specific experiment – how could we make a local community festival (Knowle West Fest) more accessible through hybrid means? 

Learning from the process

From the Knowle West Fest (KWfest) experiments one of our main learnings was that a rough-and-ready style works really well when it comes to livestreams. It seemed that the more authentic and casual style of Facebook Live resonated with many of our audiences. People in the physical space were also much more relaxed about being featured in a Facebook Live, with many seeming excited to talk on camera. Plus, the more informal nature meant that any pauses from lack of internet felt far less painful in both the online space and the physical space compared to Zoom. This livestream was also not too taxing on our staff, so it is realistic for us to continue doing them long-term. The biggest surprise was the success of our Facebook livestream afterwards – gaining over 1,000 views during the following week. Here we learned the importance of allowing digital audiences to engage in their own time.  

In comparison, only a couple of people joined our Zoom livestream. While we were marketing it, a few people responded negatively to the idea of Zoom – associating it with work and lockdown. People also expect events on this platform to be more professional and smoothly run, which adds pressure to staff. Despite our best efforts to market the space as a ‘cozy online portal’, these workplace associations will take more effort to overcome. Instead, we recommend using Zoom to fully engage in a single activity, allowing participants to get hands-on and make the most of the more personal space. Or even creating a pre-recorded complementary offering to access from home. These have both worked very well in our previous projects.

Postcards

Alongside our two livestream experiments, we left postcards around the festival for people to send to friends and family via a ‘post box’ in the cafe. On the back of the postcards was a link to a YouTube playlist of acts playing at the festival. Surprisingly, this activity went down particularly well with children and has a lot of scope for further experimentation, such as adding art, posting to (consenting!) strangers, or posting back and forth between people. It can also be less intense for staff to run and eliminates the stress of technology failures. After the festival we sent out craft packs to some people with links to online content – again demonstrating that a festival experience doesn’t have to be live or synchronised to be accessible.

The BDFI partnership 

BDFI’s aim to create more inclusive, sustainable and prosperous digital futures aligned well with our ethos at KWMC.  

BDFI’s support was invaluable in helping us to collate all our previous research and reflect on it from both internal and external perspectives. This allowed us to fully absorb and integrate our learnings and then use them as a springboard for more experimentation.

On a practical level, the extra staff from BDFI meant that we had enough people power to confidently deliver the hybrid elements. We learned the hard way through the Come Together project that hybrid events often need double the staff and can be more demanding for facilitators and producers, so it is important that they are properly resourced and well planned.  

Next steps

At KWMC, we hope to cultivate a more inclusive future by combining the best of digital and physical spaces. We are also keen to ensure that Knowle West communities continue to benefit from the research and experiments that they have participated in. We will be sharing these learnings with the 2023 KWfest producing team and exploring ways in which we can share the research more broadly with those working in the education, community, creative and charity sectors. 

Do Pixels Have Feelings Too?

BDFI co-director Professor Daniel Neyland hosted a fascinating and informative lecture about the ethics around artificial intelligence. Here he follows up that lecture with a thought-piece on the proliferation of AI, ethical principles and questions that can be applied, and the importance of trust and truth.

Daniel Neyland lecture

We appear to be moving into a period where the number of AI applications being launched is proliferating rapidly. All indications are that these applications will utilize a range of data, and operate at a speed and on a scale that is unprecedented. The ethical impact of these technologies – on our daily lives, our workplaces, modes of travel and our health – is likely to be huge.

This is a familiar story – we have perhaps heard similar narratives on previous occasions (for example in relation to CCTV in the 1990s, the internet in the late 1990s and early 2000s, biometric IDs from the early 2000s until around 2010, smartphones from around 2008 onwards, and so on). We are always told as part of these narratives that trying to address the impact emerging through these technologies will be incredibly difficult. However, the development of AI systems does seem to pose further specific challenges.

Firstly, for the most part, AI developments are even more opaque than some of the other technologies we have seen developed in recent decades. We don’t get to see the impacts of these systems until they are launched into the world; we may not even be aware that such systems exist before they are launched. In order to assess the likely problems specific AI applications will create, we need to open up the design and development stage of these systems to greater scrutiny. If we can intervene at the design stage, we might have a greater chance of reducing the number and range of harms that these systems might otherwise create.

Secondly, with generative AI and machine learning neural networks, systems have a certain amount of autonomy to produce their outputs. This means that if we want to manage the ethics of AI, we cannot work with the designers and developers of these systems alone. We need to work with the AI. Key to success here will be to engage with carefully bounded experiments to assess how AI engages with the social world, in order to assess its likely impacts and any changes to system design that are needed. We have an imperative to experiment with AI before it is launched into the world, but this imperative is in danger of being swept aside by the current drive to gain a market advantage by being the first mover in any particular AI application.

Thirdly, when we do have access to these AI applications, we need to attune our ethical assessment to the specific technology in focus. Not all AI is the same. In this lecture, I provide a range of broad ethical principles that draw on existing work in the field, but I also demonstrate how these principles can be given a specific focus when looking at a particular AI application – a machine-learning neural network that uses digital video to do emotion recognition.

I utilize broad ethical principles to raise questions regarding how a specific AI system can be re-designed. The ethical principles and associated questions set out one way we can discover and address concerns in the development of new AI systems. These include:

  • Consultation – at the design stage, how can we actively foster engagement with emerging AI systems to assess perceptions, trust and sentiment, for example, toward an emerging system?
  • Confidence – do we have confidence that the system will perform as we expect, how can we assess confidence (what kinds of experiments might we carry out, for example, to test how well a system works), and how can we address concerns raised by a system that is not operating as anticipated?
  • Context – in what setting is the system designed for use and what concerns arise from switching contexts of use?
  • Consequence – what happens as a result of the system being used, who is subject to AI decision making and for what purpose?
  • Consent – how can people give agreement that they should be subjects of AI decision making, that their data should be processed by AI, or that they are happy to work with an AI system in their workplace?
  • Compliance – what are the relevant legal and regulatory frameworks with which a system must comply? How might we design regulatory compliance into the technology?
  • Creep – if we carry out an ethical assessment in relation to a new and emerging technology in one use case, how might we guard against or assess the issues that might arise if that technology is used in other contexts?

These ethical principles and questions are not designed to be exhaustive, but, I suggest, they need to be applied, developed, added to or taken in different directions when they are applied to specific technologies under development. They seem to represent a useful starting point for asking questions. In the lecture on neural networks for machine learning, I suggest that two significant concerns that arise through asking these questions are trust and truth. Drawing on over 50 years of social science research on trust[1], I suggest we can engage with AI systems to explore to what extent these systems provide the basic conditions for trust: does the system operate in line with our expectations of it (linking back to the ethical principle of confidence)? But we can go further and ask whether we trust that the system will place our interests ahead of those of the people who own or operate it. We can also look at how trust is managed in demonstrations of AI and how AI disrupts the routine grounds of social order through which trust would normally persist.

With regard to truth, in the lecture I pose questions about the nature and source of, and the reliance upon, the somewhat simplistic notions of truth that seem to pervade AI system development. I suggest this becomes problematic when assumptions are made that AI systems do no more than reflect truth that is already out there in the world independent of the technology. Without straying into debates about post-truth and its associated politics, it nonetheless seems problematic that systems with a generative capacity to create their own truth (at least to an extent) are then presented to the world as doing no more than re-presenting a truth that already exists independent of the system. In the lecture I also suggest that truth can be considered as an input (through the notion of ground truths that the system itself partially creates) and an output (through the system’s results).

[1] For example, Barber’s (1983) work on trust, Shapin (1994), Garfinkel (1963)

Barber, B. (1983) The Logics and Limits of Trust (Rutgers University Press, NJ, USA)

Shapin, S. (1994) A Social History of Truth (University of Chicago Press, London)

Garfinkel, H. (1963) A conception of and experiments with ‘trust’ as a condition of stable concerted actions, in Harvey, O. (ed) Motivation and Social Interaction (Ronald Press, NY, USA) pp. 197-238

 

“I just remember the gasworks as big grey stone buildings that almost frightened you.” Connections with the past inform and inspire BDFI’s new research hub

University of Bristol historian Lena Ferriday concludes BDFI’s ‘History of the Sheds’ project with a summary of the living histories collected to inform its place in the former industrial community of The Dings and St Philips. Alongside artist Ellie Shipman, Lena interviewed former local residents and employees of the Bristol Gas Company to uncover how it felt to live and work amid a transformational industry for Bristol.

In June 2022, the BDFI became the first inhabitants of the new Temple Quarter Campus when they moved into their new building at 65 Avon Street. As part of this relocation, they were keen to investigate the histories of this site which once housed Bristol’s gasworks, in order to draw connections between the socio-technical pasts, presents and futures of the space. Following extensive archival research conducted by Dr James Watts, the two of us wrote a report which revealed the gasworks and gas industry’s important influences on Bristol in social, economic, environmental and technological terms. But we also identified that what was missing from the written record were the voices of those who lived and worked in the vicinity of this site, whose memories are obscured in the archives. As such, the project became a more participatory one.

Plan of Avon Street Gas Station, 1857. Bristol Archives, 28777/U/E/5/1.

After publicising a call for contributions, I conducted a set of oral history interviews with local residents who were all differently connected to the site of the gasworks. Concurrently, we commissioned illustrator and artist Ellie Shipman to work alongside us to produce an artistic response to the histories we were continuing to uncover and write. Ellie also engaged with members of the public through interviews and memory café chats – with the help of local historian and co-founder of the Barton Hill History Group Garry Atterton.[1] Across these conversations – both structured and less formal – themes highlighted in the previous report were brought out in animated, lively ways.[2] But people’s tangible memories of the site also took the history in new directions, sometimes with narratives that competed with the stories told by archival material.

Environment

Our original report had identified that the gasworks opened in 1821 and was frequently renovated until its decommissioning in 1970. In an interview, Geraldine Stone, who lived in East Bristol in the 1960s, relayed that ‘I just remember the gasworks as big grey stone buildings that almost frightened you … cause they were again grey and dark’. This perspective of the building from the outside – perhaps akin to the way in which the majority of Bristol’s inhabitants will experience the refurbished BDFI building – demonstrated the emotional power of the site on the memories of those who lived nearby. But these memories also showed the gasworks’ relationship to the other buildings in this industrial area. From the nineteenth to mid-twentieth centuries, this area of Bristol was fiercely industrial.

Neighbouring the gasworks were an ironworks, a vitriol works, a lead works, a paint works, a marble factory, a railway engine works, the railway and Temple Meads station, a timber yard and a dye factory. As Garry explained, ‘The Feeder Canal was sort of a good thing and a bad thing, in some respects. It was a good thing because it was an artery that spread the heart of Bristol out to other parts of the city … But because of the flat land that was available, it was brilliant for industry… What you saw in that time was a massive concentration of heavy industry that had not been seen in the city.’ Geraldine recalled that the area was ‘always dark, you know. But it was mostly the smells in everything that was going on, and because it was the centre of St Philips and The Dings, it was something that you lived with every day, because you didn’t have cars, you would walk everywhere.’ For Geraldine, living as a pedestrian in this area led to a particular form of sensory engagement with the industrial space, and also led to an understanding of Avon Street as a transitional place: ‘To me Avon Street was a way of getting through to anywhere. It was a throughway.’

For Geraldine, this dark, smelly environment also produced a particular atmosphere in the area: ‘in those days when you had the trains and smoke, well in our days as a small child everything was smog. It was just smog. It used to come down really low and you couldn’t see…’ In the report we attested to the concern for the environmental impact of gas that was demonstrated nationally in the nineteenth century, and the moves within the Bristol Gas Company to reduce pollution.[3] In his interview with Ellie, Garry corroborated this, saying that ‘You had this massive concentration there, which potentially was all going in the Feeder. All the pollutants.’ The report also attested to the company’s attempts to ensure this pollution did not cause medical concerns. Members of the Barton Hill Group recalled that those who worked in the area struggled with chest and lung issues – ‘coming home with really heavy coughs’ – although here Lysaghts steel works and the cotton factory were more frequently listed as embodied polluters than the gasworks.

Society

In the report we noted that this pollution made the gasworks an uncomfortable and challenging workplace. Richard Nicholls, whose father worked as a foreman at the site from 1944, noted that this physical atmosphere also had a social impact on the employees:

“It was unusual in those days for engineers to be able to talk about their marriage, their sex life or whatever. Because they were in what were dangerous conditions, you know, really hot there, the hot coals there and so on. So when they were in that condition they were very much a family of their own and looked after each other…”

Here, Richard attested to a community within the group of men working at the site that the archival records had shown to an extent, through evidence of an employee brass band and football team. Richard’s intergenerational memories bring this collegiality to life with stories. He noted that ‘it was very much a mick-taking humour’ between colleagues, recalling the instance in which ‘One of the guys had his shoes nailed down to the floor cause when they’d go to the building sites they’d change shoes, … and he had to go back in his muddy boots.’

It was the dangerous conditions that Richard also linked to the success of trade unionism within the industry: ‘that’s why the strikes were so solid, because they were family, they relied on each other, they could talk to each other about anything. They could speak their mind as it was. Very hard men, very stuck together.’ In our initial report we identified Bristol gas workers’ strong involvement in the strikes of 1866, 1889 and 1920, the archives attesting to the outcomes of this action. But the internal mentalities of those striking were not documented, and Richard’s comments are enlightening here.

Banners of the National Union of the Gas Workers and General Labourers Bristol District No 1 Branch. Credit to Bristol Museums, Galleries and Archives, T8389.

The successful 1889 strike saw a dispute over shift length, with workers petitioning for a move from 12-hour to 8-hour shifts. By the time Richard’s father was employed at Avon Street, shift patterns were fairly regular: ‘The normal one would be 8 till 5, 8 till 6, that sort of thing’. When engineers worked on call, however, they could be called out to emergency leaks across the city at any time. Richard noted that ‘it wasn’t a case of you said no. You went.’ The gasworks was therefore closely connected to the insides of Bristolians’ homes in a way the archives had not accounted for, not only via the material substance of the gas but also via those who monitored it. But it was not only engineers who brought the gasworks into the home. Members of the Barton Hill group recounted memories of company employees visiting to collect money from the gas meters, an exciting day as they would often get money back. Richard remembered ways in which people would get around this system: ‘there was a mechanism inside the meter where you could change the rate that you’d pay for your gas, and they used to get them to change it so that the money would go in … then when the meter reader came they’d make them a cup of tea, oh you’ve overpaid us this much, so they’d give them a refund out of the money that was in the thing’. Others would put coin-shaped objects into the meter to get themselves through the week until they confessed on collection day.

Technology

This connection into urban homes came with its difficulties. Archival records attested to concerns about the dangers of gas circulating from its introduction in the early nineteenth century, and numerous gas explosions were reported in the late nineteenth and early twentieth centuries. Members of the Barton Hill History Group similarly remembered a gas explosion that flattened two houses on Lincoln Street in the early 1950s.

Difficulties with gas did not only arise from fears regarding its danger – from the 1890s, acknowledgment of the efficiency and stability of electric light in industrial settings (beginning with the Wills Tobacco factory) posed a challenge to the Bristol gasworks. Yet gas lighting remained popular for longer than expected, and many of the participants in our project had strong recollections of it. When working as a gas engineer in Bristol, Richard recounted being called out to a house that still had no electricity in 1972: ‘She had gas lighting, gas cooking … So it’s amazing how long it was before some people had electrics.’ Richard noted being concerned about the hazards of the lighting here, however: ‘the gas lights had little chains on that you could pull down to turn on and … She wanted me to lower it. She had big frizzy hair. And I thought, well if I lower that down, it’s going to be so near her hair – I could just see it going whoosh!’, he laughed.

Lamplighter, Bristol 1946, Bristol Archives 2877

The gasworks also contributed to energy in the home via materials other than gas. Members of the Barton Hill group recalled taking metal prams down to the gasworks to collect bags of coke for the fire – coke being a purified substance created in the production of coal gas. The drawn-out transition to electricity also shaped memories of gas itself, particularly given that gas is still used in many homes but now for cooking and central heating, rather than lighting. One member of the Barton Hill Group referred to cooking gas as ‘normal gas’, in comparison to the historic gas substance used to fuel lights. Yet in her interview Geraldine spoke of ‘the gas lamp posts being lit by somebody, by a man coming with a stick.’ Here too, however, her memories become sticky: ‘I don’t know if I imagined it – I didn’t and I know I didn’t …. so I think is that my vision or did I see it? But I’m sure I did …. Cause it used to shine all the time into my bedroom.’ Although she described the memory in detail, the chasm between gaslight technology and contemporary lighting innovations has worked to obscure her own visual childhood memories.

Conclusion

In the first report published on our findings about the site of Bristol gasworks, we concluded that ‘The tension between the benefit and harm brought by the introduction of gas to the city … is what characterises the history of this site.’ Predominantly the living histories we have collected have also pointed towards contested and competing narratives operating within the history of the gasworks. Gas was hazardous to the local landscape and residents’ bodies in the short term, with explosions destroying buildings and injuring inhabitants, and on a longer time scale, polluting the waterways and atmosphere which in turn brought on lung and chest issues for those immersed in them. Yet it also played a formative role in people’s fond memories of the area.

Given the BDFI’s emphasis on ensuring that the production of digital technology is inclusive and sustainable for the societies it affects, it is of great importance that the findings of these oral histories attest to the environmental, social and technological dimensions of the site. Acknowledging via archival material that the former proprietors of the site also worked to reduce the environmental impact of innovative gas production, and that their employees campaigned for workplace equality, provides a source of inspiration for the ways in which the Institute now inhabits this site in the present and into the future. The findings of the living and community histories that this new report has attested to, however, stretch beyond the site itself, and reaffirm the wide range of forms that engagement with the gasworks site took. The gasworks had strong connections both to the wider industrial area of St Philips and to homes across the city, and in a similar way, through this project, the BDFI has maintained this connection between the sites and local communities through the sharing of memories. In addition, this project has amplified lived experience and, as such, demonstrated the importance of considering the individuals who are impacted by innovation in diverse ways.


Acknowledgements

The quotes in this piece are excerpted from recordings by Lena Ferriday with participants Geraldine Stone and Richard Nicholls, and Ellie Shipman with Garry Atterton, Pete Insole, and members of the Barton Hill History Group, Bern and Gill. We are very grateful to all those involved for sharing their time and memories with us, which have brought to life this historic site.

About the author

Lena Ferriday is a PhD researcher in the Department of History here at Bristol, with an interest in modern histories of bodies, the environment and everyday experience. Her research has previously explored other areas of Bristol – including the city’s transport networks, tourism, green spaces and the history of the University – and she is currently writing a history of embodiment in nineteenth century Cornwall.

 

Notes

[1] A memory café is a community group that gathers people with shared memories (often of a place or event) to meet and discuss. The Barton Hill History Group run a monthly memory café, the August instalment of which Ellie joined.

[2] An edited audio piece including excerpts from these recordings was created by Ellie Shipman, and can be listened to here.

[3] This is also expanded in another blog post I have written to complement the project, considering the ways in which the senses profoundly shaped the urban production of gas.

Hopeful illustrations daring us to imagine sustainable energy innovations

In the midst of COP27 and a Europe-wide energy crisis, BDFI seed-corn funding recipient Dr Ola Michalec describes how, as a social scientist, she has helped energy policy makers open up dialogue around smarter energy systems through illustrations. In partnership with Bristol City Council, Ofgem and Energy Systems Catapult, and with communication and illustration experts, she is helping to move the debate from the purely technical to social imaginings.

 

Winter is coming

We are at the cusp of winter, gearing up to spend more time indoors. As soon as the clocks returned to Greenwich Mean Time, basking in late-October sunlight seemed like a distant memory. To create a sense of homely “hygge”, I’ve recently completed a bi-annual reshuffle of the storage boxes. Christmas decorations, candles, and blankets have swapped places with beach hats, sandals, and camping gear. I’ve yet to put heating on, however. The biggest contributor to wintry cosiness comes at the highest cost, especially this year.

This sentiment resonates across the country, as individuals and businesses alike are nervously budgeting for the upcoming months. With so many complex questions arising, I’m grateful to the scientists and journalists for their excellent outreach and science communication. “Why has the price of gas increased?”, Rebecca Leber asks. “Who is profiting from my expensive bills?”, Graeme Demianyk explains. “How to effectively target those at the highest risk of fuel poverty?”, Prof Aimee Ambrose discusses.

The value of futures-thinking

With most public attention directed to examining today’s crises, it can sometimes be challenging to imagine that many of us in the wider energy sector are working towards a greener, happier, and fairer future. Indeed, as a social scientist interested in emerging technologies, I occupy a peculiar space where the present and the future(s) meet.

At first, buzzwords like “open energy data”, “energy digitalisation” or “smart homes” might seem irrelevant to the current issues concerning energy affordability, sustainability, and security. Although emerging technologies will not help with our heating bills this winter, these seemingly futuristic visions of the “new power grid” are closer than we think. Various best practice guidance documents, standardisation proposals or regulatory consultations have sprouted over the UK energy policy landscape over the past several months. This is precisely why now is the best time to broaden the community of stakeholders and raise the important questions about the social implications of introducing novel energy technologies into our homes and infrastructure sites.

Meaningful participation of community organisations and citizens is critical for timely advancement of climate action (Stirling, 2008; Rommetveit et al, 2021). Infrastructures and policies, if introduced without public approval, risk being delayed or rejected (see, for example, the troubling case of the smart meters implementation programme – Michalec et al, 2019; Sovacool et al, 2017). However, engaging the lay public with the complexity and autonomy of modern digital (or ‘smart’) systems has proven challenging due to the prerequisite knowledge expected of citizens (Pfotenhauer et al., 2019). Recently, scholars of energy systems and society argued for a participatory research agenda on energy systems digitalisation (Sareen, 2021). A paper co-authored by a fellow Bristol University researcher, Dr Caitlin Robinson, suggested five areas of further analysis and engagement: 1) the intersection of digital and financial inclusion; 2) social implications of flexibility; 3) the role of trust in shaping engagement with innovations; 4) digital literacy and communications; 5) the uneven impacts of innovation on different social groups (Chambers et al., 2022).

Our research: on regulating smart energy appliances

In parallel to that, my research at the University of Bristol explored the role of expertise in standardisation and policymaking initiatives in the context of smart energy systems (Michalec et al., in review). We found that while there are numerous initiatives to facilitate the introduction of smart energy systems, they are usually framed as solely ‘technical’ projects which provide limited opportunities for engagement with citizens and community advocacy groups. However, security, privacy, and interoperability of energy data are inherently socio-technical considerations that necessitate opening up the public debate.

Smart lens: Sketching new perspectives on energy systems

Thanks to seed-corn funding from Bristol Digital Futures Institute, we were able to run a project exploring illustration as a medium of public engagement in energy futures. First, we assembled a team of researchers (Dr Ruzanna Chitchyan and I), an illustrator (Oliver Dean), a science communication expert (Dr Emma Osborne) and industry partners (Bristol City Council, Ofgem and Energy Systems Catapult). Second, we worked collaboratively to produce briefs for the artist. Third, we engaged in several rounds of sketching and feedback until we reached a version we were all happy with. Et voilà! Let me introduce you to a few of our illustrations (you can find a full set here).

Bristol 2030

This is a series of eight images (available as postcards or a large poster) depicting digital energy innovations in several iconic Bristol locations: from Ashton Gate Stadium and Millennium Square to Easton Community Centre, among others. We wanted to show a variety of communities celebrating sustainable, inclusive, and optimistic futures. While much media reporting focuses on scaremongering and fatalistic accounts, we created images that could be used as conversation-starters for more hopeful discussions. Our postcards were displayed in an exhibition at We The Curious earlier this year.

Old Grid/New Grid

Old Grid/New Grid is a prototype card game. With 28 images representing technological, social, and regulatory aspects of energy systems, we designed a structured activity for schools and community energy organisations. So far, we have received interest from organisations such as Bath and West Community Energy Cooperative and are always keen to hear from other potential collaborators!

Security and data sharing platforms: get on the right track!

One of our project partners, Energy Systems Catapult, requested an attractive infographic aimed at people working in the energy sector without a background in cyber security. Energy Systems Catapult is a supporter of the Open Data paradigm in the industry but has faced difficulties in communicating it in an accessible way while resolving misunderstandings around security and privacy. We came up with a metaphor of a tube station to show that open data is about re-considering who should access various information, rather than publishing all datasets on freely accessible websites. The infographic is now used for the Catapult’s onboarding workshops and other events.

What’s next?

Public engagement is never a ‘finished job’ – there are always new stakeholders to meet and new issues to discuss. That said, as researchers, we often tend to side-line these activities; it is difficult (if not impossible!) to directly demonstrate impact (echoing my reflections on policy engagement for the Cabot Institute blog). What would a measure of impact look like in this case anyway? The number of people who wrote to their MPs about sustainable energy, who threw soup at a painting in the Tate, or who attached solar panels to their cats? While I am not planning on tracking any of the above, let me tell you about our future plans…

  • I’ll be presenting a lunchtime talk for the REPHRAIN National Research Centre on Privacy, Harm Reduction and Adversarial Influence Online on 17th Nov, 1-2pm (see image for joining instructions). I will be discussing “Using creative methods to engage people in cyber security conversations”.

  • I am preparing a public engagement workshop using our wonderful “postcards from Bristol 2030”. This will be an interactive, local, and open event aimed at all Bristolians (whether native or adopted) interested in sustainable futures of the city. Time and place TBC
  • I would love an opportunity to exhibit the images – perhaps in your city or your community? Please let me know if you could introduce me to relevant people! Contact me.

Convoluting poetry + maths – Poetrishy explores the possibilities

BDFI seedcorn-funded researcher Dr Rebecca Kosick explains how her project, Poetrishy, has taken off and where the collaborations between the worlds of poetry and maths might lead. Check out the stylish editions, which can also be purchased in print from Tangent Books.

  • How ambitious is Poetrishy?

Poetrishy has big ambitions in that it is trying to bring together two fields of practice—poetry and maths—that are not obvious allies. We have found ways they can be, but our most ambitious goal, which we aren’t sure we have yet realized, has to do with the convolution of these two fields.

Convolution increases the challenge, in that rather than just igniting an encounter between maths and poetry, we are trying to generate opportunities for the two fields to influence and mutually modify each other, creating something new in the process. For our second edition, we spent a lot of time reflecting on the ways we have seen mathematics influencing poetry, particularly by creating new forms and possibilities for poetic production. This direction of influence is pretty well established in our experiments, and builds on earlier work we did in collaboration with the Brigstow Institute. The opposite direction, where poetry can influence and modify maths, still seems to be a nascent and more speculative possibility, though we have some ideas. For instance, my collaborator Mauro Fazion is working with other researchers to look into how metaphor and meaning-making in poetry can inform mathematical modelling of lexical and semantic evolution. Here we think poetry may have something to contribute to mathematics and its applications. And we are eager to see what other possibilities are out there.

  • What do the works submitted so far tell us about our digital futures?

It’s been really interesting to see the range of submissions we have gotten, and to discover that the community of people interested in poetry and maths is bigger than you might expect. For me, this speaks to the continued vitality of the arts during the so-called digital age.

Plosive Consonants by Bruno Ministro for Poetrishy #1

I don’t think this was ever really in doubt for artists, or for those of us who study contemporary arts and humanities, but we still see ways in which the STEM disciplines are understood as, on the one hand, distinct from the arts, and on the other, as having a special claim on technology that the arts somehow don’t have. I think we can contest this claim historically, and with an eye toward the future too. Poets, in particular, have been keen to explore the possibilities that new technologies enable for the creation and dissemination of poetry, from the typewriter to the mimeograph and algorithmic computing. I expect this will continue and that there are surprises yet for us to discover.

  • As this is an evolving project, what adjustments have you made along the way? What have been the most challenging aspects, and the most surprising?

One of the more technical challenges had to do with how to display the range of poetic materials we were receiving (and publishing) in Poetrishy. We honestly didn’t know what to expect when we put out our first call, in that we were open to all kinds of formal possibilities, from text-based lyrical poems to apps, interactive web-based tools, videos, and more. We ended up receiving a range of submissions that exceeded even our own open imagination of what we might expect. And then we needed to figure out how, first, to share these works via some kind of unified digital platform and, second, to share them in print form.

Our designers, Russell Britton (web designer) and Johanna Darque (print designer and co-editor), did such a fantastic job of bringing together a huge diversity of contributions, and in navigating the affordances and needs of digital versus print publishing. You should definitely check out both versions, digital and print. On top of making gorgeous and innovative homes for Poetrishy in each of these platforms, the team also worked hard to build a kind of flexible reciprocity between the web and paper versions, producing what Jo Darque called “non-identical twins.” The web version and the print journal are each their own distinct but linked elaborations of what Poetrishy is.

  • When will we see the next edition?

We are working on the print version of Poetrishy #2 now (Autumn 2022), and it should be available for sale in the coming months. We are hoping to continue publishing Poetrishy in the coming years as well and will be looking for funding to make this happen. We are grateful to the BDFI for believing in this project and helping us get it off the ground.

 

Poetrishy is published by a team of poets, mathematicians, editors, and designers: Mauro Fazion, Rebecca Kosick, Rowan Evans, Ademir Demarchi, Miranda Lynn Barnes, Johanna Darque, and Russell Britton.

Explaining AI decision making: A sociotechnical approach

Dr Marisela Gutierrez Lopez has been collaborating with BDFI partner LV=General Insurance to explore opening up processes behind AI decision making. How will this benefit organisations and people who are affected by automated decisions?

Sociotechnical methods are helping us to create more inclusive ways of discussing AI across the public, private and academic sectors. They are therefore crucial for investigating how people shape and are shaped by AI systems, and for exploring the interrelations between people, algorithms, data, organisational procedures and other factors that constitute these systems. For this purpose, we integrate social and technological expertise from across the University of Bristol, and from our partners in industry and communities, to empirically examine what makes AI explainable from a sociotechnical perspective.

In July 2020, the Explaining AI project was started with the vision of examining the concept of “Explainable AI” (or XAI) in machine learning approaches used in decision making. Our aim was to move beyond technocratic perspectives, where explanations are framed as technical challenges, towards more inclusive approaches that consider what AI might mean for diverse data publics – particularly those not usually included in discussions about AI or explainability.

Working collaboratively with LV= General Insurance (LV= GI), a leading insurance provider in the UK, we are investigating the different levels of explanation of the decision-making processes informed by machine learning models and their outcomes. In addition to our investigations in a commercial setting, we have also teamed up with two local partners – Black South West Network and Knowle West Media Centre – to explore the types of explanations that would make machine learning intelligible and actionable to these communities.

Reaching out to local communities

The community strand of our project is underpinned by design justice as a framework for reconstructing Explainable AI in collaboration with those at the margins of innovation. We avoid positioning ourselves as outsiders who tell communities what AI is or why it matters. We are not aiming to solve the black-box problem. Instead, we start from the “bottom-up”, exploring community interests and concerns as a first step.

We are co-producing community-led XAI initiatives with our community partners to ensure machine learning decisions are communicated in relatable and actionable ways. This has given our partners ownership over the project and its outcomes. For example, each community partner is shaping up their initiative by defining their research questions and the focus of their community engagements.

Woman speaking in a group

These community-led initiatives allow for open and speculative conversations that generate knowledge (in opposition to traditional forms of XAI), moving from individual to community understandings of what constitutes AI, and shifting the focus of attention from the past and present to possible futures. The next steps of our project involve supporting community engagements by the community groups to reach into their local areas and produce new XAI approaches that empower and give agency to different data publics.

Embedding our research at LV= GI

For the organisational strand, we set up a participatory ethnography where BDFI researchers are embedded in the LV= General Insurance data science team. As a result, the project offers a unique opportunity to closely analyse organisational practices and ways of working between data science and other business functions.

This project allows us to collectively explore ways to explain machine learning models beyond providing technical accounts of data and complying with legal requirements. It shifts perceptions of what makes AI explainable, with an enhanced understanding of how machine learning is shaping the organisation. Moreover, it has given us, both the research team and LV= GI practitioners, space to form deep connections, share co-working spaces, and expand our partnership even further.

Putting together industry and community – XAI perspectives 

Our project responds to the current ethical turn in AI by disrupting the concept of explainability, moving away from a purely technical solution to explaining the practice of AI rather than the principle itself. Sociotechnical methods are helping us to make research results actionable, where outputs are not abstract or distant but directly applicable in the context of each project partner.

The knowledge and dynamics generated using these methods are also helping us to connect the outcomes of the organisational and community strands of the project. Putting together cross-sector collaborations in XAI involves mutual learning, where the perspectives of all partners are equally important and we learn from each other’s strengths. Additionally, it requires flexibility to adjust our priorities and facilitate two-way conversations. These conversations will become crucial in the last year of the project as we reconstruct Explainable AI together, in consideration of the findings from each place of inquiry. This will allow us to create more inclusive processes for the development of machine learning in the future.

The Streets Seen and ‘The Sheds’ Smelled

Lena Ferriday is co-author of ‘Avon Street Gasworks and Bristol’s Gas Industry‘ with Dr James Watts, a report commissioned for BDFI to examine the histories of their renovated industrial building in St Philips, Bristol.  Here she looks at the senses most provoked by the production and distribution of gas – sight and smell.

In 1861, the Bristol Mirror proclaimed that,

Of all the social improvements that the last 50 years have seen brought about, none is more significant of progress than the lighting with gas all the thoroughfares of our towns. To look back to the year 1800 in this respect and conceive what the streets […] of our own city […] were after dark, without the aid of gaslight, is a task most of us would rather shirk than encounter. Yet there are those living and walking in our midst […] who can very easily go back in memory to the period when a light was shed upon the darkness that prevailed by Winsor illuminating the metropolis.[1]

For historians of nineteenth century dark and light, the introduction of gaslighting was a revelation for urban life, stimulating a new economy where the hours of factory work and public leisure time were able to extend into the evening without the sun’s aid.[3] For Constance Classen, this innovation ‘blurred the age-old sensory divide between the visuality of daytime and the tactility of night-time’.[4] In the streets, this was indeed true. Yet as this short piece will show, in the industrial setting and on the level of company organisation, it was the interaction of sight with another sense, that of smell, that proved most important.

Lamplighter, Bristol 1946, Bristol Archives 2877

The first gas works was established in London in 1814. By 1819 gas works were in operation throughout the country, and in the mid-1820s most big cities were supplied with gas. The Bristol Gas Light Company first manifested in 1815 and was incorporated in 1818, working to produce coal-gas for the purposes of lighting. The first gas lamps were lit in the city, inside the Exchange, and on Wine and St. Nicholas Streets. Looking back, Bristol’s press proclaimed that from the Gas Light Company’s formation ‘we have had light shed upon our doings when the orb of day has sunk beneath the horizon, which, though it may not equal that furnished by the sun […] is yet the best substitute that has ever been discovered.’[2]

In 1821, the company headquarters expanded beyond its site at Temple Back and was rehoused at 65 Avon Street, in the building now known as ‘The Sheds’. Coal-gas was produced by burning coal to distil it into coke and capturing the gas that this produced, and the ‘Sheds’ comprised the Coal Shed, for storage, and the Retort House, which housed the ovens heating the coal to release gas, as seen in Fig. 2.

Plan of Avon Street Gas Station, 1857. Bristol Archives, 28777/U/E/5/1.

This production of gas did, however, have notorious sensory consequences, not least the noxious odours emitted from coal-gas as a result of its containing hydrogen sulphide, characterised by a distinctively sulphuric scent. Despite commenting in 1861 that ‘These lights are clear, white, and beautiful – luminous, without any smoke or obnoxous effluvium, producing an effect equal to daylight, at about one-third the expense usually employed to obtain a miserable substitute’, John Breillat, one of the Gas Company’s original engineers, worked with his team across the 1820s in an attempt to reduce the smell of gas production and its pollution of the local area.[5] Local inhabitants also complained of the smell, and in its early years calls were made for the Company to switch to the use of oil gas, extracted from whale or seal blubber.

These attempts failed, however, as the company’s management committee decided that, to produce ‘the same quantity of light’, oil gas was significantly more expensive than coal, at a ratio of roughly 5:3.[6] To some extent, then, sight was prioritised above smell: the importance of a gas emitting strong light to combat the darkness outweighed the strength of its odour. Yet as part of their ‘exposition’ of the oil gas scheme, the Bristol Mirror also concluded that ‘the ridiculous assertion of Oil-Gas being without smell, is also without foundation’, having found evidence that oil gas too ‘invaded’ homes with ‘a most distressing stench’.[7]

The failure to institute changes within the Gas Light Company led to a movement that formed a separate oil gas company, which by August 1823 had secured the Bristol and Clifton Oil Gas Company Act, forbidding the company from using coal gas. Yet as the cost of whale oil rose in the 1830s, price differentiation once again took precedence over olfactory adversity, and the Oil Gas Company also began to use coal gas in 1836. For nearly two decades the companies operated alongside one another, each serving different Bristolian districts, until they were finally amalgamated in 1853 as the Bristol United Gaslight Company.

Whilst the changes to visual experience have commonly been seen as the key indicator of gaslight innovation’s sensory influence on urban space, in the case of Bristol’s Gasworks’ civic and industrial position the eye was not entirely dominant. For the city’s inhabitants, the gasworks were a strong-smelling presence and this sensory characteristic had great impact on the Gas Company’s bureaucratic and manufacturing development in its early years.

 

[1] ‘Jubilee of Gas Lighting in Bristol’, Bristol Mercury, 7 Sept 1861, p.4.

[2] ‘Jubilee of Gas Lighting in Bristol’, Bristol Mercury, 7 Sept 1861, p.4.

[3] Wolfgang Schivelbusch, Disenchanted Night: The Industrialization of Light in the Nineteenth Century, trans. Angela Davies (University of California Press, 1995) 16; Lynda Nead, Victorian Babylon: People, Streets and Images in Nineteenth-Century London (Yale University Press, 2000) p.98. Further, see Chris Otter, The Victorian Eye: A Political History of Light and Vision in Britain, 1800-1910 (University of Chicago Press, 2008).

[4] Constance Classen, ‘Introduction: The Transformation of Perception‘ in Constance Classen (ed.), A Cultural History of the Senses in the Age of Empire (London: Bloomsbury, 2014) 8-10.

[5] ‘Jubilee of Gas Lighting in Bristol’, Bristol Mercury, 7 Sept 1861, p.4.

[6] ‘Exposition of the Oil Gas Scheme’, Bristol Mirror, 3 March 1823, p.3.

[7] ‘Exposition of the Oil Gas Scheme’, Bristol Mirror, 3 March 1823, p.3.

Making play-based maths easier for teachers to assess – testing a blend of low- and high-tech approaches

Michael Rumbelow and Professor Alf Coles lead one of our seedcorn-funded projects that aims to help boost children’s confidence in maths.

Using an AI-driven app, they explore the interplay between traditional block play and digital learning. Here they discuss how digitising this learning aid could benefit teachers’ classroom assessment, and the challenges of developing novel technologies as education specialists.

In 1854, the first English-speaking Kindergarten opened in London, based on the play-based pedagogy of Friedrich Froebel (1782-1852), who designed his Kindergarten curriculum around play activities with wooden blocks. Later plastic versions of Froebel’s blocks were developed, which evolved into Lego – now the world’s largest toymaker – as well as into interlocking plastic cubes for primary mathematics classrooms – which the characters in the popular CBeebies cartoon series Numberblocks are made of. And more recently, free play with digital cubes became the basis of Minecraft, the most popular video game of all time.

Figure 1. Sketches of using wooden cubes to model halving and quartering from an 1855 Kindergarten handbook.

Clearly, block play is a popular activity among children. And in schools there has also been a resurgence in the use of physical blocks in primary mathematics classrooms, following the government’s policy since 2016 of promoting so-called ‘Asian mastery’ approaches to teaching maths, as used in Singapore, China, South Korea and Japan, which make extensive use of physical blocks as concrete models of abstract mathematical concepts, such as counting, addition, multiplication etc. We were interested in researching children’s interactions with physical blocks from a mathematics education perspective, and one of the key challenges was how to capture data on children’s interactions with blocks for analysis.

Previous studies of block play have focused on gathering data variously through sketching or taking photos or videos of children’s block constructions, or embedding radio transmitters in blocks which could transmit their positions and orientations. Recent developments in computer vision technology offer novel ways of capturing data on block play. For example, photogrammetry apps such as 3D Scanner can now create 3D digital models from images or video of objects taken on mobile phones, and AI-based object recognition apps are increasingly able to detect objects they have been trained to ‘see’.

We felt there might be an opportunity to detect and digitise the positions of wooden or plastic cubes on a tabletop directly through a webcam, so that the coordinates of the corners could be used to create virtual animated models of stages of block constructions which could then be explored in various ways, such as in immersive virtual 3D environments, by both researchers and students. This abstracted coordinate data would also enable patterns of real-world block constructions to then be analysed statistically, for example using AI pattern recognition algorithms.
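
To make this concrete, below is a minimal sketch of the kind of data we had in mind: each detected cube reduced to its corner coordinates, with snapshots serialised for later replay in a virtual 3D environment or statistical analysis. The names and JSON format here are illustrative assumptions, not the project’s actual code.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class DetectedCube:
    """One cube detected on the tabletop, reduced to its corner coordinates
    (table-plane units, e.g. centimetres) plus the time it was observed."""
    corners: list      # eight (x, y, z) corner coordinates
    timestamp: float   # seconds since the start of the play session

def export_snapshot(cubes, path):
    """Serialise one stage of a block construction so it can be replayed in a
    virtual 3D environment or analysed statistically later."""
    with open(path, "w") as f:
        json.dump([asdict(c) for c in cubes], f, indent=2)

# Example: a single 3 cm cube resting on the table at the origin.
cube = DetectedCube(
    corners=[(x, y, z) for x in (0, 3) for y in (0, 3) for z in (0, 3)],
    timestamp=12.5,
)
export_snapshot([cube], "construction_snapshot.json")
```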



Figure 2: A sketch of 8 cubes being used to model a garden seat in an 1855 Kindergarten guide (left); a photo of a reconstruction of the sketched model with wooden cubes (centre); and a screenshot of a prototype 3D model generated from the reconstruction with photogrammetry app 3D Scanner (right). (The 3D model is viewable here: http://3d-viewer.xplorazzi.com/model-viewer/index.html?modelId=629e943a3aaf2b171525a9b5 )

With funding from the BDFI we were able to form a small project team of two researchers in the School of Education, and a software developer and the head of a local primary school, in order to develop an app to trial with children in the school.

Technical Challenges

The problem of capturing the positions and orientations of blocks digitally almost immediately became more challenging than we had anticipated. Initially we had hypothesised that detection of straight edges would be a relatively simple computer vision task; however, in practice traditional edge-detection algorithms proved unreliable in detecting the edges and extrapolating cube positions, with multiple confounding issues including lighting, shadows, orientation, variations in perspective and vertical position, variations in wood texture and colour, and hidden edges under stacked blocks. One approach we attempted was to paint each block in a different colour to aid recognition, but this too was unsuccessful.
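
For illustration, the kind of classical pipeline we experimented with looked roughly like the sketch below, shown here with OpenCV’s Canny edge detector and contour approximation (the code and thresholds are indicative rather than our actual implementation). It is exactly this stage that shadows, low-contrast wooden edges and stacking tend to defeat.

```python
import cv2

def find_candidate_block_faces(frame):
    """Classical (non-AI) attempt: look for roughly quadrilateral contours
    that might be the visible faces of cubes. Thresholds are illustrative."""
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(grey, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    faces = []
    for contour in contours:
        approx = cv2.approxPolyDP(contour, 0.02 * cv2.arcLength(contour, True), True)
        # A visible cube face should appear as a reasonably large quadrilateral.
        if len(approx) == 4 and cv2.contourArea(approx) > 500:
            faces.append(approx.reshape(-1, 2))
    return faces

# One frame from a webcam pointed at the tabletop.
capture = cv2.VideoCapture(0)
ok, frame = capture.read()
if ok:
    print(f"{len(find_candidate_block_faces(frame))} candidate faces found")
capture.release()
```

In practice, shadows between adjacent blocks and low-contrast wooden edges produce spurious or missing quadrilaterals, which is part of what pushed us towards a learned detector.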

Figure 3. The move from plain wooden blocks to painted blocks to Cuisenaire rods to aid recognition

Finding ourselves stuck in terms of successful block recognition, we decided on two radical changes in direction: (a) to move from traditional edge-detection to AI-based computer vision algorithms, such as Mask-RCNN, and (b) to drastically simplify the recognition problem by focusing on Cuisenaire rods – standard classroom manipulatives which are 1 cm to 10 cm long, each in a distinct colour, and typically arranged flat on the table, avoiding the issue of stacked blocks (Figure 3).

Our developer found that a gaming laptop equipped with a GPU was powerful enough to run Mask-RCNN and, with sufficient training on approximately 150 images, could detect the positions of Cuisenaire rods in an image from a live webcam feed within 2-3 seconds of processing time, which we felt was acceptable from a usability point of view.
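
As a rough indication of how such a detection step can be prototyped, the sketch below fine-tunes an off-the-shelf Mask R-CNN from torchvision, resizing its heads for the rod classes and loading weights trained on the annotated images. This is a hedged illustration rather than the project’s actual code or toolchain; the class count and checkpoint path are assumptions.

```python
import cv2
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

NUM_CLASSES = 11  # background + the ten Cuisenaire rod lengths (assumed labelling)

def load_rod_detector(checkpoint_path):
    """Mask R-CNN with its box and mask heads resized for the rod classes,
    loaded with weights fine-tuned on the annotated tabletop images."""
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)
    mask_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(mask_channels, 256, NUM_CLASSES)
    model.load_state_dict(torch.load(checkpoint_path, map_location="cpu"))
    return model.eval()

def detect_rods(model, frame_bgr, score_threshold=0.7):
    """Run one webcam frame through the detector; return confident boxes and labels."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        output = model([tensor])[0]
    keep = output["scores"] > score_threshold
    return output["boxes"][keep], output["labels"][keep]
```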

With a feasible solution implemented for rod detection, the developer could relatively easily add code which generated images and sounds associated with each rod, such as displaying a graphical image of it on screen and speaking its colour or length. We trialled the app with Year 1 children in a local primary school, and produced a paper about the trial for the British Society for Research into Learning Mathematics.
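
A simplified sketch of the audio feedback step described above is shown below, using the standard Cuisenaire colour-to-length mapping and the pyttsx3 text-to-speech library; the function name and phrasing are illustrative rather than taken from the app itself.

```python
import pyttsx3

# Standard Cuisenaire colour-to-length mapping (lengths in centimetres).
ROD_LENGTHS = {
    "white": 1, "red": 2, "light green": 3, "purple": 4, "yellow": 5,
    "dark green": 6, "black": 7, "brown": 8, "blue": 9, "orange": 10,
}

engine = pyttsx3.init()

def announce_rod(colour):
    """Speak the colour and length of a newly detected rod."""
    length = ROD_LENGTHS[colour]
    engine.say(f"A {colour} rod. It is {length} centimetres long.")
    engine.runAndWait()

announce_rod("yellow")  # "A yellow rod. It is 5 centimetres long."
```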

Figure 4. The experimental set-up as used in the initial trial in a primary school

Lessons learnt

As educational researchers with little experience of developing apps such as this, we have learned many lessons. One is the value of iterative, so-called ‘Agile’ approaches which enable rapid experimentation and pivoting of direction in order to solve problems that inevitably arise in developing novel technologies.

Another is the value of the ecosystem of open-source libraries, shared expertise and documentation which grows over time around any novel technology, and in particular around complex open-source AI algorithms and tools such as Google’s TensorFlow and Facebook’s Detectron. Occasionally, a novel technology we tried looked attractive in terms of affordability – for example the OAK-D camera with built-in AI processing – but was so new at the time that the supporting knowledge ecosystem had not yet developed, which effectively made it unfeasible to develop for in the short term.

And a third lesson learned is the critical importance of training data for AI computer vision algorithms. For example, to recognise blocks placed on a school desk in daylight, the algorithm should be trained on images from as similar an environment as possible, but randomised sufficiently to avoid ‘overfitting’. This process of training AI algorithms also provided us with rich insights, from an educational conceptual perspective, into current neural network models and neuroscientific theories of how human brains learn – as well as some of the power and limitations of these theories.
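
As an indication of what ‘randomised sufficiently’ can look like in practice, training images (and their annotations) can be perturbed with small photometric and geometric changes so the detector does not latch onto one particular desk, lighting condition or camera angle. This is a generic torchvision sketch, not our exact training pipeline.

```python
import torch
from torchvision.transforms import v2

# Small, randomised perturbations so the detector generalises beyond the exact
# desk, lighting and camera position in the training photos.  The v2 transforms
# update bounding boxes and masks alongside the image.
train_augmentations = v2.Compose([
    v2.ToImage(),
    v2.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.2),  # lighting changes
    v2.RandomHorizontalFlip(p=0.5),                                # mirrored layouts
    v2.RandomRotation(degrees=10),                                 # slight camera tilt
    v2.ToDtype(torch.float32, scale=True),
])

# Applied per training sample, e.g.:
#   image, target = train_augmentations(image, target)
```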

Future challenges

With a prototype now delivered which can successfully recognise Cuisenaire rods, running on a GPU-equipped laptop and webcam, we are now looking towards potential future phases of development. We’d like to revisit recognising plain cubes, and to make the app accessible on other devices like low-spec computers or mobile phones, allowing us to gather data on block play more widely from schools, as well as enabling children and their families to use the app at home.

We would also like to develop an AI app to analyse the block play data and recognise patterns, for example symmetries in constructions, or commonalities and differences across settings or over time, or compared with digital block play. Currently assessment of children’s activities in pre-school is often, like the curriculum, very different from primary school, and an app that could gather and showcase a portfolio of children’s real-world block play – potentially in virtual worlds if they wish – might enable more continuity in formative assessment across the transition from pre-school to primary.
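
To give a flavour of the kind of pattern analysis we have in mind, the toy function below checks whether a set of block positions (snapped to a unit grid) is mirror-symmetric about a vertical axis. It is purely illustrative and not part of the current app.

```python
def is_mirror_symmetric(positions):
    """Toy check: are these (x, y) block positions symmetric about the
    vertical line through the middle of the construction?"""
    cells = set(positions)
    xs = [x for x, _ in cells]
    axis = (min(xs) + max(xs)) / 2            # the vertical mirror axis
    return all((2 * axis - x, y) in cells for x, y in cells)

# A simple 'arch': two columns of two blocks with one block bridging the top.
arch = [(0, 0), (0, 1), (2, 0), (2, 1), (1, 2)]
print(is_mirror_symmetric(arch))              # True

lopsided = [(0, 0), (0, 1), (1, 0)]
print(is_mirror_symmetric(lopsided))          # False
```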

Expanding the remit

We are also interested in the applications of a simple set of physical blocks as an interface, for example for playing musical notes, or modelling language, or atomic reactions in climate science, as well as for children with visual impairments who may not be able to see touch screens easily. And there is also the potential to translate the digital 3D models of children’s physical block constructions into current 3D online block metaverses such as Minecraft, to bridge the two worlds.

We are keen to work with partners across creative and technical disciplines who are interested in exploring opportunities to augment physical block play with multi-modal digital experiences. If you would be interested in learning more or a chat about the project please get in touch with us: alf.coles@bristol.ac.uk

Avon Street, Gas, and Bristol

We commissioned a report into the industrial and social histories of our new building at 65 Avon Street, known as The Sheds. In the heart of the new Temple Quarter Enterprise Campus, it was formerly the headquarters of the Bristol Gas Company.

Here one of the report authors, Dr James Watts, Lecturer in Public and Creative Histories, describes how the project has unfolded and how it has shed light on Bristol’s industrial heritage. Co-author on the report, Lena Ferriday, is continuing the research with a call for local people to come forward with their memories.

The Avon Street gasworks operated for nearly 150 years bringing light and heat to much of Bristol through the dangerous labour of those at the gasworks. Since April 2021 I have been researching this history for BDFI.

I was fortunate in beginning this project that research on the gasworks and their place in Bristol has been undertaken by others before me. Harold Nabb’s PhD thesis and pamphlet on the gasworks are invaluable, as is Mike Richardson’s Men of Fire: Work, Resistance and Organisation of Bristol Gasworkers in the Nineteenth Century, alongside work by Michael Painting and Mike Richardson. Material on Know Your Place and in Bristol Archives has also been very helpful in digging into the history of the gas industry in Bristol in greater detail.

drawing of the gasworks by Samuel Loxton
Figure 1 The Avon Street gasworks, Samuel Loxton 1919, Bristol Library J785. By Permission of Bristol Libraries

The research revealed many links and parallels between the historical use of the site as a gasworks and the revolutionary effect this had on the life of Bristol.

The gasworks and the many local people employed there also had a profound effect on the local community: the workforce was locally drawn and, along with employers like the Ironworks across Silverthorne Lane, the gasworks created a sense of community in this deeply industrial area of Bristol.

Surprises

I was continually surprised by how far-reaching the technologies of gas heating and lighting were. Gaslighting created and extended the night-time economy, especially in the winter months, meaning that the centre of Bristol was lit from 1820 onwards. The Old Vic was an early customer of the gas company and remained one of its largest customers for many years; in 1869 it was eligible for a special discount due to the volume it was using, which was more than 1 million cubic feet a year.

By the 1900s gas heating was also very common, and pre-paid gas meters allowed tighter budgeting and enabled the spread of gas heating and cooking into working-class households. There was also a large showroom on Colston Street in the city centre, built in 1935, to advertise and sell gas cookers. This was demolished in 2007 and the site now forms part of the Bristol Beacon.

gas showroom
Figure 2 Bristol Archives, Vaughan Postcard collection, 43207/35/1/2

The other thing that impressed itself on me was the sense of how much of a community the gasworks and the surrounding area seemed to be. The gasworks had football teams who were season champions in 1930-1 as well as a brass band. The solidarity of the workers in times of industrial action was remarkable as the gasworks were involved in the wave of strikes in Bristol in 1889.

gas workers football team
Figure 3 Bristol Gas Company Reserves Football Club, 5th Division Champions, Bristol and District League 1931, Bristol Archives, 28777/U/Ph/1/6

Hopes for the research and site

I think the Avon Street gasworks could act as an important example for the modern use of historic buildings. It is, for me, about respect: respecting the buildings themselves, but also an awareness of the people who made, used, worked in, and lived in them. I hope that the buildings’ new uses will reflect this history and help to educate others about the history of this industry and area. Those stories should not disappear but should be considered and reflected upon in the future uses of the buildings.

For instance, George Daniel Jones was a gas holder attendant during the 1940s. On March 11th 1941 ‘during an air raid two incendiary bombs lodged on the top of a large gas holder. Jones immediately climbed to the top of the holder and succeeded in knocking the bombs off the crown with his steel hat.’[1]

For his bravery that day he was awarded the George Medal. There is now a road named after George Jones, as well as a plaque on Folly Rd on the site of a gasholder close to Avon St that was also owned by the Bristol Gas Company.

My main hope for this research and the site is to find more stories and personal memories through the current outreach. What I want to know about the site are these personal stories: of someone’s grandfather who was a stoker, or who captained the gasworks’ football team.

It is personal stories that give the site its interest given the long history of work there.

We’ve created a short survey for anyone who might have memories, artefacts, documents or photographs from the gas industry in Bristol. Please get in touch to help us ensure the social and industrial heritage of BDFI’s new home is remembered and celebrated.

 

[1] Supplement to the London Gazette, 2 May, 1941. The recommendation is in the National Archives. https://discovery.nationalarchives.gov.uk/details/r/C14149725

Tackling an intelligence gap in 6G management and orchestration systems with HELICON

BDFI academic Xenofon Vasilakos, Lecturer in AI for Digital Infrastructures with the Department of Electrical and Electronic Engineering and a member of the Smart Internet Lab at the University of Bristol, discussed the intelligence gap in current orchestration systems for devising 6G network services at the IEEE International Conference on Communications this week. Below, he explains a Reinforcement Learning model-based orchestration approach, tested on Bristol’s 5G city testbed with a real use case, which tries to address this intelligence gap while aiming at multi-objective optimisation goals. Further, Xenofon explains how this work provides a basis for integrating and supporting sociotechnical aspects in the future, such as fair resource consumption by users and services.

Network softwarisation in the fifth and future sixth generation of wireless networks (5G, 6G) is characterised by significant flexibility and agility as a result of adopting the concepts of Software Defined Networking (SDN) and Network Function Virtualization (NFV). These have enabled scalable vertical industry services with strict performance requirements that need to be addressed by MANagement and Orchestration (MANO) systems. Nonetheless, today’s state of the art in MANO systems faces fundamental challenges regarding the highly complex problem of optimal user service function placement. MANO systems still lack Machine Learning (ML) intelligence, remaining largely dependent on rule-/heuristic-based solutions that focus exclusively on system-level resources according to predefined policies.

High-level HELICON architecture showing global RL (GRL) and local RL (LRL) modules (on top), and internal system component data and signal message exchange.

The above approach neglects critical technical aspects such as network dynamics and system-wide service-level performance objectives of both verticals and infrastructure providers, as expressed by Key Performance Indicators (KPIs) such as service latency or balanced resource utilisation. In addition, it neglects the potential of including Key Value Indicators (KVIs), such as fairness of user access to deployed services, in order to avoid user starvation.

To address these gaps, we propose and present our latest work, entitled “HELICON: Orchestrating low-latent & load-balanced Virtual Network Functions”, at the IEEE International Conference on Communications in May 2022, Seoul, South Korea (https://icc2022.ieee-icc.org/), during the “QoE And Network Systems” session of the “Next-Generation Networking and Internet (NGIN)” technical symposium.

HELICON stands for “Hierarchical rEinforcement LearnIng approach for OrChestratiNg” low-latent and load-balanced services. Though targeting purely technical, KPI-based objectives at the current stage, HELICON paves the way for also introducing KVI-based objectives into the MANO equation, thus setting the necessary technical background for supporting socio-technical aspects in current and future 6G MANO operations. In brief, HELICON:

  • Poses a novel distributed hierarchical Reinforcement Learning (RL) approach that can serve as a stand-alone online service placement solution as well as a module-based extension for the current state of the art.
  • Tackles a computationally/analytically difficult (NP-hard) problem with a tunable and lightweight Q-Learning scheme that, besides KPIs, can also support KVIs in the future, such as fair access to resources by both users and services. In its current pilot version, HELICON optimises either or both of (i) end-to-end service delay and (ii) load balancing among hosting service nodes (see the minimal sketch after this list).
  • Last, we provide a real-life testbed implementation and use case-driven validation – specifically, practical experimental results on a realistic 5G Smart City Safety (SCS) use case conducted over Bristol’s 5G city testbed, assuming an end-to-end (e2e) video transcoding application service.
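
To give a concrete, if heavily simplified, flavour of the approach, the sketch below shows a tabular Q-learning agent that chooses which node should host the next virtual network function, with a reward that weights end-to-end delay against load imbalance. It illustrates the general technique only; HELICON’s actual hierarchical, distributed design is described in the paper.

```python
import random
from collections import defaultdict

class PlacementAgent:
    """Minimal tabular Q-learning sketch for online service-function placement.
    State: a discretised summary of current node loads; action: the node chosen
    to host the next virtual network function."""

    def __init__(self, n_nodes, alpha=0.1, gamma=0.9, epsilon=0.1,
                 w_delay=0.5, w_balance=0.5):
        self.n_nodes = n_nodes
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.w_delay, self.w_balance = w_delay, w_balance
        self.q = defaultdict(float)  # (state, action) -> estimated value

    def choose_node(self, state):
        """Epsilon-greedy choice of the hosting node."""
        if random.random() < self.epsilon:
            return random.randrange(self.n_nodes)
        return max(range(self.n_nodes), key=lambda a: self.q[(state, a)])

    def reward(self, delay_ms, node_loads):
        """Higher reward for lower end-to-end delay and more even node loads;
        the weights select either or both optimisation objectives."""
        imbalance = max(node_loads) - min(node_loads)
        return -(self.w_delay * delay_ms + self.w_balance * imbalance)

    def update(self, state, action, reward, next_state):
        """Standard one-step Q-learning update."""
        best_next = max(self.q[(next_state, a)] for a in range(self.n_nodes))
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])
```

A KVI such as fair access could, in principle, enter the same reward function as an additional weighted term.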

Choosing the ‘high road’: major employer study reveals remote working challenges and opportunities

Jennifer Johns at the School of Management has been working with a major UK employer over the last year to examine how their working practices have responded to COVID-19 challenges. What does blended working mean, and how does it continue to impact on day-to-day business decisions? Here she explains her discoveries so far and the implications for the world of human resources.

Within organisations and across media channels there is currently much discussion about the ways in which we work. Terms like ‘remote’, ‘hybrid’ and ‘blended’ working are used to describe changing patterns of work, breaking the traditional assumption that we should work in an office location.  This is not a new trend.  Since the 1990s, increased use of communication technologies, particularly the Internet, has facilitated significant changes in the ways in which work is conducted.  Digital technology enables the multidimensional fragmentation of work – one form of fragmentation is spatial, as work can take place across smaller and more isolated work units.  What IS new is the degree to which more flexible forms of work have been taking place since the COVID global pandemic.

Before COVID, we saw a rise in the number of people working away from the office, typically from home.  This included full remote work (for example data processing, professional services) and part remote work (e.g. senior executives working from home two days a week). Academic work charted the rise of this work, but its increase was considered to be limited to a narrow range of job roles, predominantly low-skilled routine work that can be conducted online or, conversely, high-skilled ‘white collar’ professional work.  We recently argued that existing academic understandings of remote work were overly simplistic and that the relationship between employees and employers could take a ‘high road’, in which employee wellbeing increases, or a ‘low road’, in which working conditions deteriorate over time.

During COVID, the national lockdowns introduced by national governments required organisations to make working from home mandatory for as many job roles as possible.  This meant questioning some old assumptions about what work had to be based in the office.  Many organisations realised that the move to paperless offices had decoupled some forms of work from the office (receptionists, salespeople now using electronic brochures etc). In some sectors, this left a relatively narrow number of job roles that were required to physically be present in the office, typically those involving the maintenance of critical business infrastructure.

Following the move of many employees to home working, organisations have had to respond with modified working practices, policies around the return to work and debates around how much flexibility to continue to offer employees when/if they return to work.  On one hand, organisations can make cost savings by reducing their office space. On the other, many are discussing what types of activities must be co-located, acknowledging that some employees want to return to the office, and working out which functions could remain at home.

Alongside collaborator Rory Donnelly (University of Liverpool), I have been working with a major UK employer  since April 2021 on their blended working practices. The initial introduction to this company was made by Bristol Digital Futures Institute. This employer will remain anonymous in the research findings, once published. We have interviewed over fifty employees across three different sectors, highlighting the different needs of individual divisions in relation to flexible work. This employer has much to share with other organisations about their ongoing experience of flexible working, particularly as their group ranges from customer-facing contact centres to maintaining critical infrastructure.  The notion of having contact centre agents working from home would have been inconceivable to many organisations pre-COVID (and many academics too).  Yet, their contact centre agents have been working from home effectively, generating higher customer feedback scores during COVID.  This has been incredibly illuminating about how organisations can support staff to work flexibly and how they can adapt to dramatic shifts in the business environment. Retail staff, who typically worked in physical shop locations, were trained to work from home as contact centre staff.  This demonstrates an agility not typically seen in large multinational companies.  Our findings are being fed back to the company via company-wide seminars and workshops.  Our work will continue with this company and extend to include others from different industry sectors. We will be generating wider impact through policy recommendations and industry briefings.

Challenges remain, as for most businesses, around how to embed flexible working within organisational cultures and how to maintain innovation and employee wellbeing with staff working in the office and from home.  Here the role of human resources professionals becomes especially important within organisations. So too is the role of academics in offering guidance on how businesses can achieve a ‘high road’ approach which values employee well-being and job satisfaction. These lessons will be valuable as we seek to understand now work might further change as a result of digitalisation.