What’s Your AI Idea?

Are you a Sussex researcher in the arts, humanities or social sciences, with an idea about how AI might be used in your work? Are you looking for some expert advice, and the chance to explore some collaboration?

If this sounds like you, submit your idea here, and/or get in touch with j.c.walton@sussex.ac.uk.

jbustterr / Better Images of AI / A monument surrounded by piles of books / CC-BY 4.0

Sonic-Social Genre

Workshop on humanist and computer science approaches to musical genre at the Max Planck Institute for Empirical Aesthetics in Frankfurt, Germany.

by Mimi Haddon

This post shares some reflections on a four-day workshop at the Max Planck Institute for Empirical Aesthetics in Frankfurt, Germany in May 2023. The event was led and organised by the anthropologist, musicologist, and social scientist Professor Georgina Born, and it explored points of convergence and divergence between humanist and computer-science approaches to musical genre, with some attention to music-genre recognition.

A well-earned dinner after the four-day workshop. The real reason for European research projects.

I joined as part of the humanist contingent as a result of my work on genre in What is Post-Punk? Genre and Identity in Avant-Garde Popular Music, 1977-82 (University of Michigan Press, 2020 – out now in paperback!) and my chapter in a forthcoming collection titled, Music and Genre: New Directions (Duke University Press, forthcoming), edited by Born and David Brackett, who also participated in the workshop. We three were joined by Anton Blackburn, Eric Drott, Klaus Frieler, Owen Green, Christopher Haworth, Christabel Stirling, Lara Pearson, Will Straw, Bob Sturm, and Melanie Wald-Fuhrmann.

The conversations were open-ended, experimental, and informal. Our topics ranged from approaches to genre in musicology (specifically Brackett’s 2016 book, Categorizing Sound), Born’s work on music’s social mediations and Straw’s work on scenes, to the precise way Shazam functions, Bayesian statistics, and the potentialities of music data sets. I was asked to respond to two papers on internet genres, one by Haworth on microsound, hauntology, hypnagogic pop and vaporwave, and the other by Blackburn on hyperpop.

My reflections covered six loose areas: (1) (Il)legibility; (2) Temporality; (3) Levels and Context; (4) the Unconscious of Genres; (5) Subjectivity in relation to Born’s 2011 article; and (6) “Extra-Musicological Sprawl.” As someone interested in the social and discursive mediation of popular music genres, my comments drew on my expertise in musicology and cultural studies, and on my emergent interest in gendered knowledge formations. On illegibility, I was interested in genres that fail to take hold in popular music culture and genres that are illegible to outsiders, e.g. the seeming impenetrability of hyperpop’s symbolic system to millennials, Gen X, and boomers. On temporality and levels, I observed the way the internet potentially speeds up a genre’s process of emergence and how, perhaps unconsciously invoking Fredric Jameson, internet genres dissolve and tangle historical time. I wondered, too, how internet-based genres were nested and interrelated.

On the unconscious of musical genres, I was very much drawn to a sentence in Born and Haworth’s 2017 article, in which they describe what wasn’t allowed to be discussed on the microsound listserv, insofar as it “[discouraged] exchanges on equipment, record collecting, music-making.” This suggested to me that genres could have a discursive unconscious. Can computer science cope with this and, if so, how? Finally, in listening to work by Haworth and Blackburn, I was struck by the extra-musicological/extra-musical connotations of both genres. In the case of hyperpop, video games and social media appear imbricated with the genre in complex ways, and in the case of hauntology, I was struck by how reminiscent it was (in retrospect) of concurrent television programmes “about” media archaeology, such as the comedy series Look Around You (2002–2005). In short, how do data sets and computer scientists manage, represent, and understand such unpredictable but rich cultural contingencies?

In all, this was a stimulating and productive four days contemplating the challenges ahead for the humanities and computer sciences.

Shout out to what we nicknamed the “sky bar” at the Fleming’s Hotel… (it wasn’t really a sky bar).

Follow-up reading

Blackburn, A. (2021). Queering accelerationism: Virtual ethnography, PC Music, and the temporal politics of queerness [Master’s thesis]. University of Oxford.

Born, G. (2011). Music and the materialization of identities. Journal of Material Culture, 16(4), 376–388. https://doi.org/10.1177/1359183511424196

Born, G. (forthcoming). Time and Musical Genre. In G. Born & D. Brackett (Eds.), Music and Genre: New Directions.

Born, G., & Haworth, C. (2017). From microsound to vaporwave: Internet-mediated musics, online methods, and genre. Music and Letters, 98(4), 601–647. https://doi.org/10.1093/ml/gcx095

Brackett, D. (2016). Introduction. In Categorizing Sound: Genre and Twentieth-Century Popular Music (pp. 1–40). University of California Press.

Drott, E. (2013). The End(s) of Genre. Journal of Music Theory, 57(1), 1–45. https://doi.org/10.1215/00222909-2017097

Drott, E. (forthcoming). Genre in the age of algorithms. In G. Born & D. Brackett (Eds.), Music and Genre: New Directions.

Frieler, K. (2018). A feature history of jazz improvisation. In W. Knauer (Ed.), Jazz @ 100 (Vol. 15, pp. 67–90). Wolke Verlag.

Hesmondhalgh, D. (2005). Subcultures, scenes or tribes? None of the above. Journal of Youth Studies, 8(1), 21–40. https://doi.org/10.1080/13676260500063652

Stirling, C. (2016). ‘Beyond the Dance Floor’? Gendered Publics and Creative Practices in Electronic Dance Music. Contemporary Music Review, 35(1), 130–149. https://doi.org/10.1080/07494467.2016.1176772

Straw, W. (1991). Systems of articulation, logics of change: Communities and scenes in popular music. Cultural Studies, 5(3), 368–388. https://doi.org/10.1080/09502389100490311

Sturm, B. L. (2014). The state of the art ten years after a state of the art: Future research in music information retrieval. Journal of New Music Research, 43(2), 147–172. https://doi.org/10.1080/09298215.2014.894533

AI and Archives

By Sharon Webb

The Sussex Humanities Lab was delighted to welcome speakers and delegates online and in-person to the AI and Archives Symposium last month (April 2023). The event, a collaboration between the Sussex Humanities Lab and two AHRC-IRC funded research projects, Full Stack Feminism in Digital Humanities and Women in Focus, was an opportunity to share interest in, knowledge of, and concerns with AI systems in archives and archival praxis.

I began by introducing the Lab itself, partly to provide context to the space in which the AI and Archives Symposium was being held, but also because it provided a useful foundation for thinking about the presenters’ many research areas. The AI and Archives Symposium spoke to SHL’s creative, critical, and collaborative approach to research: our programme has a wide disciplinary reach, from community archives to AI, media theory to conservation technology, critical heritage to intersectional feminism, digital humanities to experimental music technology and critical making. A perfect setting for this collision of past, present and futures: AI and Archives! What follows is an edited version of my introduction.


“AI and Archives” covers a lot of ground, and requires conversation that is critically engaged, creative and exploratory, and informed by feminist, queer and anti-racist ethics and approaches. The conversations around AI and Archives are collaborative across and within disciplinary boundaries – much like how SHL brings those elements uniquely together under one programme.  

The foundations of the symposium lie not in the current hype around AI but in dialogue to address or redress the historic and chronic under- and mis-representation of specific histories related to women, people of colour, and LGBTQI+ communities, as well as the histories of individuals and communities who exist across multiple lines of identity politics and their intersections. Work in both projects, Women in Focus and Full Stack Feminism, considers how “archives”, as places of knowledge, knowledge exchange, power, disempowerment and empowerment, play a pivotal role in acknowledging and celebrating the existence of alternative histories to those which form the traditional historic canon (mostly covering the domain of cis-white men, the victors and their conquests, trade and empire). Both projects, and indeed other projects related to SHL’s research cluster, Critical Digital Humanities and Archives, view archives as places where identity lives, and where we can exemplify the contributions of those who have often been ignored, sidelined, and/or silenced, explicitly or otherwise.

A critical aspect of the Women in Focus project is revisiting records in the East Anglian Film Archive (EAFA) and the Irish Film Archive (IFA), editing metadata and thereby revealing and acknowledging the role of women amateur filmmakers. These records range from short “home movies” to longer travelogues, and cover a rich mix of topics. Revisiting and editing these records is labour-intensive. While the collection is relatively small (in comparison to other archival collections elsewhere), the task of reviewing each record and its associated object remains complex and time-consuming. The original metadata also speaks to this resource scarcity: some descriptions are limited in scope and missing crucial detail about contributors to the resource.

With this in mind, we thought about how computer vision might help generate descriptive metadata for moving-image archives. Of course, within digital humanities and computer science, there are many ongoing experiments exploring how AI might assist in identifying, describing, and/or tagging digital objects – the work of Lise Jaillant, our first speaker, attests to this – but the current hype also demands some immediate responses. As AI tools multiply and grow mainstream, and potentials turn into realities, such work becomes timely, not least because of the problems of bias and discrimination which such systems can replicate and perpetuate.
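To make this concrete, here is a minimal sketch of the kind of experiment we have in mind, using an off-the-shelf image-captioning model from Hugging Face. The model choice, file names and one-caption-per-frame workflow are illustrative assumptions, not a description of any pipeline actually in use at EAFA or IFA:

```python
# Sketch: drafting descriptive metadata for extracted film frames with an
# off-the-shelf captioning model. Model choice and frame paths are
# illustrative; any real output would need curatorial review, not least
# because of the biases such models can replicate and perpetuate.
from transformers import pipeline

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

def draft_captions(frame_paths):
    """Return a draft caption for each sampled film frame."""
    return {path: captioner(path)[0]["generated_text"] for path in frame_paths}

# e.g. frames sampled at one per minute from a digitised home movie
print(draft_captions(["frame_0001.png", "frame_0060.png"]))
```

Even in this toy form, the workflow makes the key point: the machine supplies a draft, and the archivist's judgment remains the decisive step.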

AI, machine learning, computer vision, and other automated methods have an allure for several reasons. For example, while the digitisation of historical records has developed considerably, archival methods of cataloguing (metadata descriptions) remain largely the same: manual and human. Automated methods promise resource savings and quick fixes for an industry or service which faces dwindling resources despite ever-increasing content.

The challenge we face is using AI for good – while anticipating, acknowledging and/or expecting elements which are or could be profoundly bad. A few years ago, Dr Ben Roberts, co-director of the lab, ran an AHRC Network grant called ‘Automation Anxiety’. It explored contemporary attitudes toward automation, and framed them within the longer history of ‘automation anxieties’ related to mechanical devices. A notable example is, of course, the Luddites. Often misrepresented as mere technophobes, the Luddites quite rightly argued for more critical reflection on the economic and social consequences of technological change within textile manufacturing. Luddite perspectives were revived by Langdon Winner in the automation anxiety wave of the 1970s, and are attracting attention once more today.

There are many contexts in which we could discuss the topic of AI and Archives. Using AI to generate descriptive metadata, whether to describe text, moving image, audio, etc., can itself help with the problem of discoverability, which might in turn help to address the problem of biased training sets. AI being used to improve archives, archives being used to improve AI: perhaps we are amid a new archival turn!  

However, the scale of implementing these systems is not trivial, and nor are the associated environmental costs, including the carbon impacts of training and running AIs, and of running the servers where archival data is stored. If AI removes the barrier of describing collections, does it also remove the barrier/process of appraising? We could possibly “save everything”, but this preservation strategy would likely be at odds with climate change strategies – what additional pressure might digital archiving activities impose on our already delicate global ecosystem?
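A back-of-envelope calculation makes the stakes easier to see. The figures below are illustrative assumptions only (real footprints vary enormously by data centre, storage medium, and grid mix), but they show how a “save everything” strategy compounds year on year:

```python
# Rough sketch of the annual carbon cost of keeping data online.
# Both default parameters are illustrative assumptions, not measurements.
def storage_emissions_kg(terabytes, kwh_per_tb_year=5.0, kg_co2e_per_kwh=0.2):
    """Approximate annual CO2e (kg) for storing `terabytes` of data."""
    return terabytes * kwh_per_tb_year * kg_co2e_per_kwh

# A hypothetical 500 TB "save everything" archive, duplicated for backup:
print(f"{storage_emissions_kg(500 * 2):.0f} kg CO2e per year")
```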

Like most people, I did ask ChatGPT to write something: ‘a 200 word abstract about AI and Archives’. Among its list of “benefits” it stated that “AI has the potential to automate many of the tedious tasks associated with archival work, such as cataloging and indexing”. And while this may of course be of benefit, we must question what the fallout of such an approach is. What does knowledge gain? What are the epistemological pitfalls of an approach which removes the human? Is an archive a mere store for documents, or does it serve a wider purpose? Conversations around AI and Archives, and digital material in general, require us to think critically about the social and cultural role of archives, in our communities and in our societies.

And to whom is this task “tedious”? This depends on the context – for community archives who are archiving their own histories, this work is not tedious; it is an essential activity in terms of knowledge production, community identity and historical acknowledgment. If knowledge is power, then what happens when our knowledge production processes are no longer mediated through us, but rather done for us? Machine talking to machine, rather than one human (the archivist) talking with a researcher.

Archiving in these contexts provides a close reading of the artefacts. The archival process itself is a meaningful exchange with history. This is not a “tedious task” but an important intergenerational passing of knowledge, of culture, of life experience. Wrapped in this context, we know ethics should be a driving force for any conversation or experiment with AI technologies. AI is maths after all – if we don’t calculate the parameters, who will?


A summary of the event and speakers will follow in the next few weeks, but perhaps the most enduring thought for me was that we do not experience these technologies equally. Nor do we experience heritage and archives in the same way – questioning representation, and questioning our individual experiences and responses, is imperative. How we “handle” heritage objects and their interpretation must be informed by collective, community-driven activities and dialogue. Archives and heritage objects are not merely objects: they represent stories, experiences, life, death, hurt, violence, trauma, and joy. Critically, they represent human sentience, and whether AI will read these objects in this way remains to be seen.

Democratising Digital Decarbonisation

This summer, the Sussex Humanities Lab, as part of our Experimental Ecologies cluster, is running a project focused on the challenges of decarbonising the digital. With a fabulous set of collaborators, including the Digital Humanities Climate Coalition, Hampshire Cultural Trust, Bloom & Wild, Greenly, and GreenPixie, we’re working to capture and share some best practice in measuring and reducing carbon emissions, with particular emphasis on digital carbon and cloud computing.

Key outputs will be:

  • A case study about Hampshire Cultural Trust’s sustainability journey, emphasising challenges and opportunities distinctive to the GLAM sector
  • A case study about Bloom & Wild’s sustainability journey, emphasising ICT and Cloud

Decarbonisation must operate on a tight timeframe. The IPCC has established that global carbon emissions would need to peak and begin an unprecedented fall within the 2020-2025 period to have a reasonable chance of aligning with the Paris Agreement.

Across many sectors, despite ambitious net zero commitments, it remains challenging to interpret what climate transition means for day-to-day decision-making, longer-term strategy, or the interests and agendas of different stakeholders. The legal and regulatory environment is rapidly evolving, along with standards, certifications, and informal sectoral norms. Organisations’ capacity to interpret and apply these to their own operations varies considerably, and where there are gaps, climate tech and climate consulting services are set to play a key role. It’s easy to get stuck in the phase of data collection, or trying to figure out what data to collect, and delay actual carbon reductions. Communicating climate risk and climate action with internal stakeholders is challenging too.

Unsurprisingly, we’re seeing a boom in commercially available climate-related expertise, on a variety of models. Buying in the right expertise can help clients to put evidence into practice. But there are plenty of questions around such engagements. Who can or can’t afford bespoke consulting engagements? To what extent can and should such expertise be automated and platformised? Can we reconcile tensions between a technical mindset (seeking optimal solutions) and a political mindset (making value judgments and trying to do the right thing)? How do state-of-the-art carbon analytics services fit into the broader picture of climate policy, climate justice, and the political economy of energy transition?

And, of course, what role might universities play in identifying digital decarbonisation methods and best practices, enriching these with interdisciplinary expertise, and getting them to the decision-makers who need them? How do we ensure that the legitimacy that academics can generate and confer is being deployed responsibly, within the wider ecology of decarbonisation knowledge and practice? Who is an expert on who is an expert?

In 2019, the University of Sussex joined many others in declaring a climate emergency. Democratising Digital Decarbonisation aims to balance action research with an exploratory ethos. In other words, we plan to make tangible contributions to net zero goals in the short term. At the same time, we fully expect to emerge from the project not only with a few new answers, but also with a few new questions. And we’re not just interested in producing new knowledge or new research agendas. We’re interested in knowledge exchange in the richest sense: in learning from one another; in nurturing learning within networks of collaboration, care and solidarity; and in valuing that knowledge not just with our professional hats on, but with our human hats too. We think this is a good approach to doing research in the middle of an emergency!

Project team:

  • PI: Jo Walton (j.c.walton@sussex.ac.uk)
  • Co-I: Josephine Lethbridge
  • Olivia Byrne
  • Alex Cline
  • Polina Levontin
  • Matthew McConkey

Funding:

  • HEIF
  • ESRC IAA (scoping: Mainstreaming Climate Transition)
  • GreenPixie
  • Greenly

Research Assistant: Sustainable Software and Digital Infrastructures

This is a fixed-term 0.5 FTE appointment for three months (May to July). All work can be conducted remotely. Interested candidates may apply here before 25 April. Enquiries: j.c.walton@sussex.ac.uk.

The Sussex Humanities Lab is seeking a Research Assistant to explore the future of digital technology (especially cloud computing) in relation to net zero and climate justice.

Key requirements:

  • PhD in a relevant subject (e.g. Computer Science, Engineering), or comparable expertise
  • Good analysis and communication skills

Interest in any of the following will be beneficial:

  • Cloud computing
  • Machine Learning
  • Web3
  • Sustainable IT
  • Sustainability reporting and certification
  • Climate analytics
  • Carbon accounting
  • Energy transition
  • Science communication
  • Decision support tools
  • Digital Humanities
  • Climate justice

About the project

Globally, the amount of data we store and process has exploded in recent years, and is projected to keep rising. Exciting new AI tools hit the headlines almost daily, and look set to become an increasing part of everyday life. Meanwhile, countries and companies are scrambling to meet their net zero targets — with the IPCC warning that these targets are not yet ambitious enough. So how do we align the future of ICT with the future of our planet?

You will be part of a small team, whose interdisciplinary work focuses on bringing together cutting-edge evidence and creating accessible, actionable resources to support net zero and climate justice. This project investigates digital decarbonisation, focusing especially on cloud computing. There will be opportunities for dialogue with leading digital carbon consultancies, as well as organisations working on tracking and reducing their digital carbon.

Our work will expand the Digital Humanities Climate Coalition toolkit (https://sas-dhrh.github.io/dhcc-toolkit/), a free and open-source resource devoted to deepening and widening understandings of the impacts of digital technologies.

The research project will run May to July. The post will be paid at point 7.2 on the HE single pay spine.

Informal enquiries are welcome: j.c.walton@sussex.ac.uk.

Reference: 50001228

Bridging the gap between computational linguistics and concept analysis

by Eddie O’Hara Brown

“Bridging The Gap” by MSVG is licensed under CC BY 2.0.

One of our priorities at the Concept Analytics Lab is to utilise computational approaches in innovative explorations of linguistics. In this post I explore the disciplinary origins of computer science and linguistics and present areas in which computational methodologies can make meaningful contributions to linguistic studies.

The documented origins of computer science and linguistics place the fields in different spheres. Computer science developed out of mathematics and mechanics. Many of the pioneers of the field, the likes of Charles Babbage and Ada Lovelace, were first and foremost mathematicians. Major players in the development of the field in the twentieth century were often mathematicians and engineers, such as John von Neumann and Claude Shannon. Linguistics, on the other hand, developed out of philology and traditionally took a comparative and historical outlook. It was not until the early twentieth century, and the work of thinkers such as Ferdinand de Saussure, that major explorations into synchronic linguistics began.

The distinct origins of computer science and linguistics are still visible in academia today. For example, in the UK and other western universities, computer science is situated among STEM subjects, while linguistics often finds a home with the humanities and social sciences. These different academic homes often pose a structural barrier to interdisciplinary study and to the creation of synergies between the two disciplines.

Recent research shows that merging linguistic knowledge with computer science has clear applications for the field of computer science. For example, the language model BERT (Devlin et al., 2018) has been used by Google Search to process almost every English-based query since late 2020. But we are only just beginning to take advantage of computational techniques in linguistic research. Natural language processing harnesses the power of computers and neural networks to swiftly process and analyse large amounts of text. This analysis complements traditional linguistic approaches that involve close reading of texts, such as narrative analysis of language, discourse analysis, and lexical semantic analysis.

One particularly impressive application of computational linguistics to the analysis of semantic relations is the word2vec model (Mikolov et al., 2013). word2vec converts words into numerical vectors and positions them in a vector space, grouping semantically and syntactically similar words and distancing dissimilar ones. Through this process, corpora consisting of millions of words can be analysed to identify semantic relations within hours. However, this information, as rich as it is, still needs to be meaningfully interpreted. This is where the expertise of a linguist comes in. For instance, word2vec may identify pairs of vectors between which the distance increased across different time periods. As linguists, we can infer that the words these vectors represent must have changed semantically or syntactically over time. We can rely on knowledge from historians and historical linguists to offer explanations as to why that change has occurred. We may notice further that similar changes occurred amongst only one part of speech, or note that the change first occurred in the language of a particular author or group of writers. In this way, the two fields of computer science and linguistics necessarily rely on each other for efficient, robust, and insightful research.
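As a minimal sketch of this workflow, the following uses the gensim library to train word2vec on two time-sliced corpora and compare word similarities. The corpus files, the naive loader, and the “broadcast” example are all illustrative assumptions; a real study would also need to align the two vector spaces (e.g. via Procrustes) before cross-period distances are strictly comparable:

```python
# Sketch: comparing word similarities across two time-sliced corpora.
# File names and the example word pairs are placeholders.
from gensim.models import Word2Vec

def load_tokenised(path):
    """Naive loader: one sentence per line, whitespace-tokenised."""
    with open(path, encoding="utf-8") as f:
        return [line.lower().split() for line in f]

early = Word2Vec(load_tokenised("corpus_1900s.txt"),
                 vector_size=100, window=5, min_count=5)
late = Word2Vec(load_tokenised("corpus_2000s.txt"),
                vector_size=100, window=5, min_count=5)

# Within each period's model, cosine similarity reflects distributional
# kinship; a shift between periods is the quantitative cue a linguist
# then interprets (e.g. "broadcast": sowing seed -> radio).
print("1900s:", early.wv.similarity("broadcast", "sow"))
print("2000s:", late.wv.similarity("broadcast", "radio"))
```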

At the Concept Analytics Lab, we promote the use of computational and NLP methods in linguistic research, exploring benefits brought by the convergence of scientific and philological approaches to linguistics. 

References

Devlin, Jacob, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova (2018) ‘Bert: Pre-training of deep bidirectional transformers for language understanding.’ Available at https://arxiv.org/abs/1810.04805

Mikolov, Tomas, Kai Chen, Greg Corrado, and Jeffrey Dean. (2013) ‘Efficient estimation of word representations in vector space.’ Available at https://arxiv.org/abs/1301.3781

Introducing The Big Reveal (WT)

Tim Hopkins

Project aims

How can AI and adaptive technology be used to create narrative fiction in audio form, responsive to location, in AR experiences for audiences using headphones and mobile devices?

Imagine an audience member at point A, wearing headphones, listening to pre-scripted narration that reveals creative spirits at large – talking to listeners, drawing them into a story.  Other scripted text awaits at locations B, C, D etc.

The user moves unpredictably between them – as they do, AI generates spoken text that bridges the story from one place to another, regardless of the route, with compelling narrative traction.   The thresholds between AI and not-AI may be undetectable, or may announce themselves, like doors to a different realm…

The Big Reveal (WT) is a project researching how to make this possible, bringing together colleagues from different disciplines: Tim Hopkins, Sam Ladkin, Andrew Robertson, Kat Sinclair and David Weir. We’ve also had very welcome support from Jo Walton and other colleagues, and have arranged a future contribution from Victor Shepardson (voice synthesis).

New developments in generative pre-trained transformer (GPT) technology extend uses of deep learning to produce text. This is a branch of machine learning/AI that has many potential uses and societal impacts.  The project explores this using AI and AR as a creative space for collaboration between engineers and artists.  

We are interested in affording practitioners across these disciplines a space to learn by making something together, approaching AI as a new medium for creative writing in a multidimensional expressive space, and making something that might offer the public an engaging way to experience and reflect on their surroundings and the growing presence and impact of AI.

The project envisages three elements, each with its own development stage.

1) a text-generation system that is adaptive to context in a way that sustains / plays with suspension of disbelief

2) voice synthesis that can translate text into convincing speech / narrative voices

3) a platform combining software which can fuse detection of user activity and location with adaptive delivery on a mobile device

Progress so far

This has focused on element 1, as this will define the scope of an application for a large project grant supporting research phases 2 and 3.

Our method has been to develop a lab simulation combining some key technology, functionality and artistry as a proof-of-concept.

We come from different disciplines. One of the inherent stimulations of the project is navigating differences between how we conceive and share what we are trying to do. For example, David and Andrew (Informatics) have provided essential insights and guidance on current work with language models, helping to induct Tim, Sam and Kat (MAH) into a complex and huge field. Tim, Sam and Kat have experience of writing/creating for related spaces (e.g. games, adaptive systems, branching narratives), as well as more traditional contexts, but the concepts and engineering potentially underpinning our project call for new understandings. Similarly, discussions of language features at work in creative writing (e.g. complex implications of syntax) may test the functionality and limits of existing automated language models.

A central task has been to look for an approach that can respond significantly to what might be minimal input (prompts) from the user. In contrast to some game formats, where players’ active choices overtly steer subsequent events, we are interested in an experience where users might not perceive any instructional dialogue with a system at all, but feel as if they are immersed in a narrative that recognises what experience they are having, and is able to determine what they should hear next. This needs to happen according to a given narrative world and its bespoke generative language model, adapting to information detected by a mobile device about location, orientation, direction and speed.

A series of discussions (April–November 2022) each led to tests based on existing text2text approaches, whereby text is put into a language model that infers suitable responses based on a range of data. Although ultimately there may be no apparent text prompt from the user themselves, there is nonetheless a need for an underlying element of this kind in order to stimulate a generated response. ‘Text’ in this case may be adaptive written text users actually hear, or an equivalent input determined by their behaviour, generating ‘text’ or prompts that may be hidden from users’ perception. Our tests involved texts/prompts written by Andrew, Kat, Tim and Sam, fed through a number of text generation processes (on https://huggingface.co/, a prominent platform for AI projects).
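For readers curious what such a test looks like in code, here is a minimal sketch using the Hugging Face transformers pipeline. The model, the prompt template, and the idea of assembling a hidden prompt from device signals are illustrative assumptions, not our actual test materials:

```python
# Sketch: a hidden prompt assembled from what the device knows about the
# listener, fed to an off-the-shelf generative model. Prompt wording and
# model choice are illustrative only.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

def bridge_narration(location, heading, pace):
    # The listener never sees this text; it exists only to steer generation.
    prompt = (f"The listener walks {pace} {heading} towards {location}. "
              "The voice in their headphones says:")
    out = generator(prompt, max_new_tokens=60, do_sample=True, temperature=0.9)
    return out[0]["generated_text"]

print(bridge_narration("the old pier", "west", "briskly"))
```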

Rather than expanding short prompts into longer responses, many of these processes were originally designed to achieve different kinds of results, such as inferring summaries of information from input texts. This tended to result in short outputs that were not quite right for our purposes in a variety of ways. Extending prompts did not flex responses more effectively. Varying the character of prompts, for example by imitating strongly-flavoured genres, had some perceivable impacts, but not decisively. We needed to develop functionality towards richer responses. This suggested adjusting our approach, along two current directions.

Next steps

Firstly, we continue to explore core needs: text2text generation, and training a GPT2-like model. However, we’re focussing on getting a ‘good start’ (DW) to an automated response of the kind we might want, rather than on the length of response (which can be addressed later). We are also identifying specific corpora to fine-tune a model. Andrew has been experimenting, for example, using ‘film reviews’ as inputs (recently using https://huggingface.co/distilgpt2). Kat is supplying examples of poetry (including her own) and, shortly, the larger corpora needed to train a classifier – something that can distinguish between kinds of input in the way we need. Andrew is now working on building a test model based on GPT2 to be fine-tuned with this.

Secondly, we are exploring the creation of some kind of ranking machine. For example:

a) we give the same input to create a range of candidate texts (e.g. 100); a machine ranks them according to merit and randomly chooses from the top of the pile

b) we have two blocks of text, one visible, one not. We insert candidate third blocks between the visible and the hidden, and rank the insertions according to how well they work between the two. (This discussion also included ‘similarity metrics’ and BERT representation – ‘Bidirectional Encoder Representations from Transformers’.)

c) we compare prompts with two corpora of texts – one has features we are looking for (e.g. of genre or form), the other is overtly more neutral (e.g. an informational website like BBC News) – and the machine ranks highest those closest to the first
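As a sketch of option (c), the following assumes sentence embeddings as the similarity measure, using the sentence-transformers library. The model name, the example texts, and the idea of averaging each corpus into a single reference vector are our illustrative assumptions, not a settled design:

```python
# Sketch of option (c): score candidates by how much closer they sit to a
# genre corpus than to a neutral one. Model and corpora are placeholders.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def rank_candidates(candidates, genre_corpus, neutral_corpus):
    cand = model.encode(candidates, convert_to_tensor=True)
    genre = model.encode(genre_corpus, convert_to_tensor=True).mean(dim=0, keepdim=True)
    neutral = model.encode(neutral_corpus, convert_to_tensor=True).mean(dim=0, keepdim=True)
    # Higher score = more genre-like, less newsroom-neutral.
    scores = (util.cos_sim(cand, genre) - util.cos_sim(cand, neutral)).squeeze(1)
    return sorted(zip(candidates, scores.tolist()), key=lambda x: x[1], reverse=True)

ranked = rank_candidates(
    ["The fog remembered our names.", "Trains run every ten minutes."],
    genre_corpus=["The house dreamed in its old bones."],        # placeholder
    neutral_corpus=["The council approved the new bus route."],  # placeholder
)
print(ranked)
```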

In the new year (2023) we will pick up on these, aiming to make our proof-of-concept model in the spring. Towards our grant application, we will also start scoping Phase 2, on voice synthesis, with input from Victor Shepardson (Iceland/Dartmouth; delayed this year due to COVID-19 impacts). We will look at challenges and consequences of translating text responses into persuasive speech simulation, and practical issues around processing – since the outcome envisages accompanying users across ‘borders’ in real time, between recorded adaptive narration and AI-assisted/generated narration.

What is Concept Analytics and who are we?

Concept Analytics Lab (CAL) gathers linguists, AI engineers, and historians, and is aligned with the Sussex Humanities Lab within the Critical Digital Humanities and Archives research cluster. The principal mission behind Concept Analytics is to understand human thinking by analysing conceptual layering in texts. We overcome the divides between humanities, AI, and data science by harnessing the power of computational linguistics without losing sight of close linguistic analysis.

Although CAL was formally set up in 2021, its existence is the culmination of research energies over the previous few years and our desire for a stable space to explore concept-related ideas with like-minded scholars.  Establishing the Lab has provided us with a platform from which we showcase our research expertise to researchers and other external partners. CAL has grown and changed through 2022, during which time we have counted on a team of six researchers at a range of stages in their careers, from undergraduate to postdoctoral level. The team is led by Dr Justyna Robinson. 

CAL has so far partnered with research groups within Sussex, e.g. SSRP, as well as ones further afield, e.g. the Westminster Centre for Research on Ageing. We have worked closely with archives such as the Mass Observation Archive and the Proceedings of the Old Bailey, as well as with non-academic organisations.

What were the highlights of the past year? 

Our activities in the past year centred around exploring the content of the Mass Observation Project (MOP) and their Archive of May 12 Diaries, with the aim of identifying conceptual changes that happened during Covid-19. We have completed two main research projects. CAL was awarded funding through the UKRI/HEIF/SSRP call Covid-19 to Net Zero, in collaboration with industry partner Africa New Energies, to identify the impact of Covid-19 on people’s perceptions and habits in the context of household recycling and energy usage. CAL was also commissioned by the PETRA project (Preventing Disease using Trade Agreements, funded by UKPRP/MRC) to discover key themes and perceptions the public holds towards post-Brexit UK trade agreements. Keep reading for summaries of the findings of these research projects, as well as our other achievements this year.

Household recycling with Africa New Energies (ANE)

Through this project we identified that respondents to the MOP Household Recycling 2021 directive were deeply committed to recycling, but that these feelings were coupled with doubt and cynicism about the effectiveness of the current system. MOP writers pointed to a perceived lack of transparency and standardisation in recycling processes and systems. Lack of transparency and standardisation have also been identified as obstacles to recycling adherence and efficacy in more policy-based analytical surveys (Burgess et al., 2021; Zaharudin et al., 2022). Changes in recycling habits among the UK population were identified as resulting from external factors, such as Covid-19 and reduced services, as well as from lack of knowledge about how and what can be recycled. This research has significantly impacted the way our grant partner ANE approaches its operations in terms of gaining energy from organic waste content. The research results also led ANE to start work on gamifying the waste classification process, which aims to encourage recycling compliance by replacing the current sanction-based system with a more rewards-based one. This research shows that CAL already has a track record of establishing commercial routes of impact for our research, and we see extending the scope of this impact as a critical next step in CAL’s research programme. Further details on the collaboration with ANE can be found in this blog post.

We are seeking further HEIF funding to expand on the work already done with the Household Recycling directive, maximising policy impact by processing the handwritten answers and the 2022 12 May Diaries for insight into the impact of the current energy crisis on respondents’ behaviour and attitudes to energy. As part of this project we would hold an exhibition to showcase our work, inviting various stakeholders including policy makers.

MOP UK Trade Deals 

We were commissioned by the PETRA project’s lead, Prof. Paul Kingston from the University of Chester, to perform a conceptual linguistic analysis of the MOP UK Trade Deals directive. We used our approach to identify hidden patterns and trends in the answers to the directive questions. The conceptual analysis allows us to combine quantitative with qualitative methods and to identify otherwise unperceived patterns. The main themes that arose related to the perceived quality of trade deals and concerns about animal welfare and ethical standards. We also performed an analysis linked to people’s knowledge, beliefs and desires. The results of the analysis will inform policy makers in their decisions regarding trade deals. Additionally, this piece of work has attracted some interest from public health bodies, with whom we are preparing a potential grant for future research.

Papers and presentations 

In 2022 Justyna Robinson and Julie Weeds both presented the work they did in the context of the Old Bailey archives, and their paper on that work has been published in the Transactions of the Philological Society. In this paper they describe a novel approach to analysing texts, in which computational tools turn traditional texts into a corpus of syntactically-related concepts. Justyna Robinson and Rhys Sandow have also authored a paper forthcoming in 2023, ‘Diaries of regulation: Mass Observing the first Covid-19 lockdown’. This research will be presented at Mass Observation’s 85th Anniversary Festival, Mass Observation Archive, The Keep, 23rd April 2023.
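As a generic illustration of what turning text into syntactically related concepts can involve (this sketch uses spaCy’s dependency parser and is ours, not the method of the paper itself; the example sentence is invented):

```python
# Generic sketch, not the published method: a dependency parser turns
# running text into head-relation-dependent triples, the raw material
# for a corpus of syntactically related concepts.
import spacy

nlp = spacy.load("en_core_web_sm")

def concept_triples(text):
    doc = nlp(text)
    return [(tok.head.lemma_, tok.dep_, tok.lemma_)
            for tok in doc
            if tok.dep_ in ("nsubj", "dobj", "amod")]

# An invented, Old Bailey-flavoured example sentence:
print(concept_triples("The prisoner stole a silver watch."))
# e.g. [('steal', 'nsubj', 'prisoner'), ('steal', 'dobj', 'watch'),
#       ('watch', 'amod', 'silver')]
```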

Website 

As part of the SSRP/HEIF funding we received earlier this year, we have also developed a website, which can be found at conceptanalytics.org.uk, where we post blogs with news pieces and short research insights.

Embedding Sustainability in the Curriculum

Presenting the Media Arts and Humanities Sustainability Educator Toolkit.

Or if you prefer, here is the Google doc (feel free to leave comments).

This toolkit is aimed at supporting educators (at Sussex and beyond) to build themes, concepts and practices related to sustainability into our teaching. It’s a grab-bag of inspirations, provocations, and helpful signposts.

It covers:

  • Sustainable Development Goals
  • Planetary boundaries
  • Climate change and climate justice
  • Ecocentrism
  • Indigenous knowledges
  • Degrowth and postgrowth

There is a focus on media, arts and humanities, and some focus on the University of Sussex. But we hope it will be useful much more widely.

This toolkit is complemented by a crowdsourced living document of links and resources. We have been inspired by the commitment to decolonising the curriculum in the past few years. Now it is time to embed sustainability – and acknowledge the deep relationship between the two.

The toolkit is published under a CC license.

Other related resources: