Create a green website

Would you like a website of your own, or to revamp your existing website?

Would you like to make your website greener, so that it hardly contributes at all to climate change?

Would you like to understand more about how websites are built, and pick up some basic coding skills?

If you answered yes to any of the above, then this workshop could be for you. Over 2 days, you’ll create your own website, using our free low carbon website building platform. If you come with text and images ready to go, the whole website should be finished by the end of the second day.

Once you have finished your website, it will be hosted for free on a server powered by renewable energy. The only costs your website will incur will be the domain name and a CDN where videos and images will be stored. You can choose your own providers for these, or we will offer suggestions. (SHL Digital can also provide small grants to participants, to cover you for the first 2-3 years at least).

The workshop will be led by two folk from Fast Familiar, who created the platform and will be there to support you with the process every step of the way. 

Where will the workshop happen?

19 June: University of Sussex Campus (closest station Falmer), in the Sussex Digital Humanities Lab, Silverstone Building

26 June: University of Sussex Campus (closest station Falmer), room TBC

When will the workshop happen?

19 June, 10:00-16:00 (lunch provided)

26 June, 10:00-16:00 (lunch provided)

How do I sign up?

You can apply to participate here.

What do I need to bring?

Please bring with you as much as you can of the text for the website pages you want to create, and the images and videos you want to use. There is a week between the two days of the workshop, so don’t worry if you don’t yet have everything you will eventually need – you can write it between day 1 and day 2. It would also be helpful to bring your own laptop to work on, as that is what you will use to edit the site later.

Below is some information about why we built a low carbon website builder and why it matters:

Why do low carbon websites matter?

These days, every artist, creative or organisation needs a web presence – it’s how we communicate what we do, demonstrate track record or legitimacy, and make ourselves available to be offered work.

Platforms like Squarespace, WordPress and Wix have made it possible for people without coding skills to build their own site – which is great. Often these platforms have capacities or affordances built into them that we don’t use if we’re basically just looking to put a portfolio of work and contact details online. These affordances and other aspects of the off-the-peg solutions mean that the site is using more energy than it needs to – which means the server it sits on and the devices of the people accessing it are using more electricity than they need to as well. Currently not all of our electricity comes from renewable sources, so this means the site is contributing to climate change.

The impact of digital activity on climate change is often less obvious than something like driving a car or heating your house with a gas boiler because the activity takes place thousands of miles away, on server farms, many of which are in ‘Data Center Alley’ in Virginia, a state powered by fossil fuels.

You might be reading this thinking, ‘my tiny website is a drop in the ocean compared to aviation/ international shipping/ the activities of fossil fuel giants like BP and Shell’ or ‘a carbon footprint is a made-up thing, a concept invented by aforementioned fossil fuel giants to make climate change an individual problem, rather than something they take responsibility for.’ You’d be right on both counts.

FF absolutely recognises the need for structural change, regulation, investment in new technologies and other ‘big picture’ interventions. But that doesn’t mean we don’t also want to reflect on how the choices we make contribute to climate change – and how we could take climate action instead. (In one of the interviews we did for a past project, The Networked Condition, artist Memo Akten talks about his thoughts on individual responsibility in a far more articulate way than we could.)

So we’ve been working on a tool to let artists, creatives and small organisations build a website which doesn’t contribute to climate change.

FAQs

How have you made it zero-carbon?

In a nutshell, 100% of the electricity used in the process of building and hosting the site comes from carbon neutral sources, e.g. wind or hydro-electric power.

The website builder itself is really lightweight and can be hosted on micro-servers – smaller servers that use a fraction of the energy a typical server uses – so building and hosting the sites takes less electricity. Plus we’ve chosen providers where 100% of that electricity is green.

We’ve also only used third parties who are verifiably carbon neutral and have committed to sustaining carbon neutral or carbon negative practices. If you are interested, these are Hetzner and Microsoft.

It’s all very well getting your own house – or site – in order, but you don’t have any control over what electricity the visitors to your site are using for their device and router. So the websites that the site builder makes are ‘static websites’, which means that visitors to the site use less bandwidth (and therefore less electricity) to view them. Static here doesn’t mean that you can’t have moving things and videos; it means your site is a bunch of code which doesn’t require fancy databases or server-side processing.

You say it’s a prototype – what happens if I put a load of work into using it to build a site and then you discontinue it, will I lose all my work?

Not at all. Our site builder is built to be data-redundant – we just provide a user interface to make editing the files easier. Behind the scenes the site builder uses an open-source website framework called Jekyll, paired with GitHub Pages for hosting. If we discontinue the website builder, your website will continue to work on GitHub Pages and you can still edit the files there.
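To make that concrete: a Jekyll site is just a folder of plain-text files, each with a small block of settings at the top. A minimal page might look something like this (the layout name and text here are illustrative, not taken from our builder):

```markdown
---
layout: default
title: About me
---

# About me

I make things. Here is some text about the things I make.
```

When Jekyll builds the site, a file like this becomes a plain HTML page – no database, no server-side code – which is part of what keeps the hosting so lightweight.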

Why is it free? Where’s the catch?

There isn’t one – honestly. It’s free because it should be and because we’re lucky enough to have an arts funder in this country who supported its creation (Arts Council England).

What do Fast Familiar get out of it?

Nothing. It’s something we wanted to make happen.

I’m really not a technical person, will I be able to use the tool?

We’ll be honest, it is more complicated to use than an off-the-peg solution – because we haven’t had the funding to build a slick user interface. BUT that is why we are running the workshops, to help you through any bits that are slightly less intuitive. And, as a bonus, you will learn a little bit about basic coding along the way.

Are there tutorial documents?

Yes there are – so when you want to tweak things after the workshops, the information you need to do the things you want to do will be easily available.

Will my site look like everyone else’s who uses the tool? I’m a unique iNdIViduAL you know.

No, the tool lets you use a range of different themed templates. If you’re more confident with coding, you can also significantly adapt these themes to suit your needs, but we hope that even without that, everyone should be able to find a theme that suits them.

Yeah, but you haven’t answered my question, I have a different question 

Drop an email to Dan and Jo at dan(at)fastfamiliar(dot)com and j(dot)c(dot)walton(at)sussex(dot)ac(dot)uk.

Generative AI and HE assessment: What do we need to research?

By Jo Walton

Would you like to collaborate on something?

There are a lot of fascinating ongoing conversations about the use of generative AI within HE, and especially the issues it raises for assessment. Kelly Coate, writing for WonkHE, characterizes it as a moral panic, but still a potentially useful one:

If in academia we value things like authenticity, integrity, and originality, we need to be able to articulate why those values remain important in the age of generative AI. Doing this can only help students to make meaning from their higher education learning experience – in fact, it’s really what we should have been doing all along.

Our sense is that there are now many particular well-defined questions which could do with being studied in a more rigorous way, to move these conversations forward. How good are educators at identifying AI-generated text? How good are educators at evaluating their own ability to identify such text? What differences emerge across different assessment designs? How about different disciplines? If AI is deliberately included in the assessment, how accurately can assessors evaluate whether it has been used appropriately? How does the variation in grades assigned compare across non-AI-assisted work and various types of AI-assisted work? These are all questions we can address from our own experience and miniature experiments. But it would be valuable to conduct some studies at scale, and have some data to debate.

SHL Digital would be keen to hear from colleagues, at Sussex or beyond, who are working on these issues or would be interested in collaborating. Get in touch with j.c.walton@sussex.ac.uk.

New (-ish) name, same Lab

Last month the University of Sussex announced 12 new Centres of Excellence at a reception in the House of Commons at Westminster hosted by Caroline Lucas MP. We are proud to be counted as part of this new cohort of flagship centres which ‘carry out innovative and world-leading research’. The application phase for Centres of Excellence provided time to reflect upon the way in which our name, Sussex Humanities Lab, works or doesn’t work for our members, networks, and associated schools and departments.

Following consultation with core members and various heads of school, we have added ‘Digital’ to our name to emphasise our shared interest in the impacts and opportunities associated with digital transformations in culture and society.

Addressing these changes requires interdisciplinary research and we see the increasing use of digital methods and scholarship as an opportunity for the humanities to interact in new ways with other disciplines; Sussex Digital Humanities Lab creates a space to nurture those interactions at Sussex.

Sharon Webb, Co-Director SHL Digital

For simplicity’s sake – and to emphasise continuity with a successful past – the short form of our name will now be SHL Digital.

Although we are now formally a Centre of Excellence, we have retained the word ‘Lab’ in our title. This was a deliberate and essential choice, illustrating how SHL Digital is experimental, practice-based and centred on collaborative work and methods. For us, a ‘Lab’ extends beyond a mere technological space. SHL represents the community of diverse individuals who occupy it.

Our name change, we hope, reflects more accurately our community and our cross-campus, multidisciplinary nature. SHL Digital will continue to investigate the interactions between computational technology, culture, society, and the environment, working towards more sustainable and just futures for all.

For more details on our current research please visit our website.

For any questions, please contact shl@sussex.ac.uk  

Sonic-Social Genre

Workshop on humanist and computer science approaches to musical genre at Max Planck Institute for Empirical Aesthetics in Frankfurt, Germany.

by Mimi Haddon

This post shares some reflections on a four-day workshop at the Max Planck Institute for Empirical Aesthetics in Frankfurt, Germany in May 2023. The event was led and organised by anthropologist, musicologist, and social scientist Professor Georgina Born, and its focus was the points of convergence and divergence between humanist approaches to musical genre and those in computer science, with some attention to music-genre recognition.

A well-earned dinner after the four-day workshop. The real reason for European research projects.

I joined as part of the humanist contingent as a result of my work on genre in What is Post-Punk? Genre and Identity in Avant-Garde Popular Music, 1977-82 (University of Michigan Press, 2020 – out now in paperback!) and my chapter in a forthcoming collection titled, Music and Genre: New Directions (Duke University Press, forthcoming), edited by Born and David Brackett, who also participated in the workshop. We three were joined by Anton Blackburn, Eric Drott, Klaus Frieler, Owen Green, Christopher Haworth, Christabel Stirling, Lara Pearson, Will Straw, Bob Sturm, and Melanie Wald-Fuhrmann.

The conversations were open-ended, experimental, and informal. Our topics ranged from approaches to genre in musicology (specifically Brackett’s 2016 book, Categorizing Sound), Born’s work on music’s social mediations and Straw’s work on scenes, to the precise way Shazam functions, Bayesian statistics, and the potentialities of music data sets. I was asked to respond to two papers on internet genres, one by Haworth on microsound, hauntology, hypnagogic pop and vaporwave, and the other by Blackburn on hyperpop.

My reflections covered six loose areas: (1) (Il)legibility; (2) Temporality; (3) Levels and Context; (4) the Unconscious of Genres; (5) Subjectivity in relation to Born’s 2011 article; and (6) “Extra-Musicological Sprawl.” As someone who is interested in the social and discursive mediation of popular music genres, my comments responded in ways that aligned with my musicology and cultural studies interests and expertise, and my emergent interest in gendered knowledge formations. On illegibility, I was interested in genres that fail to take hold in popular music culture and genres that are illegible to outsiders, e.g. the seeming impenetrability of the symbolic system in hyperpop to millennials, gen X, and boomers. On temporality and levels, I observed the way the internet potentially speeds up a genre’s process of emergence and how, perhaps unconsciously invoking Fredric Jameson, internet genres dissolve and tangle historical time. I wondered, too, how internet-based genres were nested and interrelated.

On the unconscious of musical genres, I was very much drawn to a sentence in Born and Haworth’s 2017 article, in which they describe what wasn’t allowed to be discussed on the microsound listserv, insofar as it “[discouraged] exchanges on equipment, record collecting, music-making.” This suggested to me that genres could have a discursive unconscious. Can computer science cope with this and, if so, how? Finally, in listening to work by Haworth and Blackburn, I was struck by the extra-musicological/extra-musical connotations of both genres. In the case of hyperpop, video games and social media appear imbricated with the genre in complex ways, and in the case of hauntology, I was struck by how reminiscent it was (in retrospect) of concurrent television programmes “about” media archaeology, such as the comedy series Look Around You (2002-2005). In short, how do data sets and computer scientists manage, represent, and understand such unpredictable but rich cultural contingencies?

In all, this was a stimulating and productive four days contemplating the challenges ahead for the humanities and computer sciences.

Shout out to what we nicknamed the “sky bar” at the Fleming’s Hotel… (it wasn’t really a sky bar).

Follow-up reading

Blackburn, A. (2021). Queering accelerationism: Virtual ethnography, PC Music, and the temporal politics of queerness [Master’s thesis]. Oxford.

Born, G. (2011). Music and the materialization of identities. Journal of Material Culture, 16(4), 376–388. https://doi.org/10.1177/1359183511424196

Born, G. (forthcoming). Time and Musical Genre. In G. Born & D. Brackett (Eds.), Music and Genre: New Directions.

Born, G., & Haworth, C. (2017). From microsound to vaporwave: Internet-mediated musics, online methods, and genre. Music and Letters, 98(4), 601–647. https://doi.org/10.1093/ml/gcx095

Brackett, D. (2016). Introduction. In Categorizing Sound: Genre and Twentieth-Century Popular Music (pp. 1–40). University of California Press.

Drott, E. (2013). The End(s) of Genre. Journal of Music Theory, 57(1), 1–45. https://doi.org/10.1215/00222909-2017097

Drott, E. (forthcoming). Genre in the age of algorithms. In G. Born & D. Brackett (Eds.), Music and Genre: New Directions.

Frieler, K. (2018). A feature history of jazz improvisation. In W. Knauer (Ed.), Jazz @ 100 (Vol. 15, pp. 67–90). Wolke Verlag.

Hesmondhalgh, D. (2005). Subcultures, scenes or tribes? None of the above. Journal of Youth Studies, 8(1), 21–40. https://doi.org/10.1080/13676260500063652

Stirling, C. (2016). ‘Beyond the Dance Floor’? Gendered Publics and Creative Practices in Electronic Dance Music. Contemporary Music Review, 35(1), 130–149. https://doi.org/10.1080/07494467.2016.1176772

Straw, W. (1991). Systems of articulation, logics of change: Communities and scenes in popular music. Cultural Studies, 5(3), 368–388. https://doi.org/10.1080/09502389100490311

Sturm, B. L. (2014). The state of the art ten years after a state of the art: Future research in music information retrieval. Journal of New Music Research, 43(2), 147–172. https://doi.org/10.1080/09298215.2014.894533

Research Assistant: Sustainable Software and Digital Infrastructures

This is a fixed-term 0.5 FTE appointment for three months (May to July). All work can be conducted remotely. Interested candidates may apply here before 25 April. Enquiries: j.c.walton@sussex.ac.uk.

The Sussex Humanities Lab is seeking a Research Assistant to explore the future of digital technology (especially cloud computing) in relation to net zero and climate justice.

Key requirements:

  • PhD in a relevant subject (e.g. Computer Science, Engineering), or comparable expertise
  • Good analysis and communication skills

Interest in any of the following will be beneficial:

  • Cloud computing
  • Machine Learning
  • Web3
  • Sustainable IT
  • Sustainability reporting and certification
  • Climate analytics
  • Carbon accounting
  • Energy transition
  • Science communication
  • Decision support tools
  • Digital Humanities
  • Climate justice

About the project

Globally, the amount of data we store and process has exploded in recent years, and is projected to keep rising. Exciting new AI tools hit the headlines almost daily, and look set to become an increasing part of everyday life. Meanwhile, countries and companies are scrambling to meet their net zero targets — with the IPCC warning that these targets are not yet ambitious enough. So how do we align the future of ICT with the future of our planet?

You will be part of a small team, whose interdisciplinary work focuses on bringing together cutting-edge evidence and creating accessible, actionable resources to support net zero and climate justice. This project investigates digital decarbonisation, focusing especially on cloud computing. There will be opportunities for dialogue with leading digital carbon consultancies, as well as organisations working on tracking and reducing their digital carbon.

Our work will expand the Digital Humanities Climate Coalition toolkit (https://sas-dhrh.github.io/dhcc-toolkit/), a free and open-source resource devoted to deepening and widening understandings of the impacts of digital technologies.

The research project will run May to July. The post will be paid at point 7.2 on the HE single pay spine.

Informal enquiries are welcome: j.c.walton@sussex.ac.uk.

Reference: 50001228

Introducing The Big Reveal (WT)

Tim Hopkins

Project aims

How can AI and adaptive technology be used to create narrative fiction in audio form, responsive to location, in AR experiences for audiences using headphones and mobile devices?

Imagine an audience member at point A, wearing headphones, listening to pre-scripted narration that reveals creative spirits at large – talking to listeners, drawing them into a story.  Other scripted text awaits at locations B, C, D etc.

The user moves unpredictably between them – as they do, AI generates spoken text that bridges the story from one place to another, regardless of the route, with compelling narrative traction.   The thresholds between AI and not-AI may be undetectable, or may announce themselves, like doors to a different realm…

The Big Reveal (WT) is a project researching how to make this possible, bringing together colleagues from different disciplines: Tim Hopkins, Sam Ladkin, Andrew Robertson, Kat Sinclair and David Weir. We’ve had very welcome support also from Jo Walton and other colleagues, and have arranged future contributions from Victor Shepardson (voice synthesis).

New developments in generative pre-trained transformer (GPT) technology extend uses of deep learning to produce text. This is a branch of machine learning/AI that has many potential uses and societal impacts.  The project explores this using AI and AR as a creative space for collaboration between engineers and artists.  

We are interested in affording practitioners across these disciplines a space to learn by making something together, treating AI as a new medium for creative writing in a multidimensional expressive space, and making something that might offer the public an engaging way to experience and reflect on their space and the growing presence and impact of AI.

The project envisages three elements, each with its own development stage.

1) a text-generation system that is adaptive to context in a way that sustains / plays with suspension of disbelief

2) voice synthesis that can translate text into convincing speech / narrative voices

3) a platform combining software which can fuse detection of user activity and location with adaptive delivery on a mobile device

Progress so far

This has focused on element 1 (as this will define the scope of an application for a large project grant supporting research phases 2 and 3).

Our method has been to develop a lab simulation combining some key technology, functionality and artistry as a proof-of-concept.

We come from different disciplines.  One of the inherent stimulations for the project is navigating differences between how we conceive and share what we are trying to do.  For example, David and Andrew (Informatics) have provided essential insights and guidance on current work with language models, to help induct Tim, Sam and Kat (MAH) into a complex and huge field.   T, S and K have experience of writing / creating for related spaces (e.g. games, adaptive systems, branching narratives), as well as more traditional contexts, but the concepts and engineering potentially underpinning our project ask for new understandings.  Similarly, discussions of language features at work in creative writing (e.g. complex implications of syntax) may test the functionality and limits of existing automated language models.

A central task has been to look for an approach that can respond significantly to what might be minimal input (prompts) from the user.  In contrast to some game formats, where players’ active choices overtly steer subsequent events, we are interested in an experience where users might not perceive any instructional dialogue with a system at all, but feel as if they are being told or are immersed in a narrative that recognises what experience they are having, and is able to determine what they should hear next.  This needs to happen according to a given narrative world and its bespoke generative language model – adapting to information detected by a mobile device as to location, orientation, direction, speed.

A series of discussions (April-November 2022) each led to tests based on existing text2text approaches, whereby text is put into a language model that infers suitable responses based on a range of data. Although ultimately in our user’s experience there may be no apparent text prompt from the user themselves, there is nonetheless a need for an underlying element of this kind in order to stimulate a generated response. ‘Text’ in this case may be adaptive written text users actually hear, or an equivalent input determined by their behaviour, generating ‘text’ or prompts that may be hidden from users’ perception. Our tests involved texts / prompts written by Andrew, Kat, Tim and Sam, fed through a number of text generation processes (on https://huggingface.co/, a prominent platform for AI projects).

Many of these processes were originally designed to achieve different kinds of results – such as inferring summaries of information from input texts – rather than expanding shorter prompts into longer responses. This tended to result in short outputs that were not quite right for our purposes in a variety of ways. Extending prompts did not flex responses more effectively. Varying the character of prompts, for example imitating strongly-flavoured genres, had some perceivable impact, but not decisively. We needed to develop functionality towards richer responses. This suggested adjusting our approach, in two current directions.

Next steps

Firstly, we continue to explore core needs – text2text generation, and training a GPT2-like model. However, we’re focussing on getting a ‘good start’ (DW) to an automated response of the kind we might want, rather than on the length of the response (which can be addressed later). We are also identifying specific corpora to fine-tune a model. Andrew has been experimenting, for example, using ‘film reviews’ as inputs (recently using https://huggingface.co/distilgpt2). Kat is supplying examples of poetry (including her own) and, shortly, the larger corpora needed to train a classifier – something that can distinguish between kinds of input in the way we need. Andrew is now working on building a test model based on GPT2 to be fine-tuned with this.
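The basic shape of the task – a model that, given some input text, infers a plausible continuation – can be illustrated with a deliberately tiny stand-in. The sketch below is not our system and not a GPT model; it is a toy bigram chain, included only to show the text-in, text-out pattern that the real language models above scale up enormously:

```python
import random
from collections import defaultdict

def train_bigram(corpus_words):
    """Record which words follow which - a toy stand-in for training a language model."""
    model = defaultdict(list)
    for a, b in zip(corpus_words, corpus_words[1:]):
        model[a].append(b)
    return model

def generate(model, prompt_word, length=8, seed=0):
    """Continue a prompt by repeatedly sampling an observed next word."""
    rng = random.Random(seed)
    out = [prompt_word]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:  # dead end: no observed continuation
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the door opens onto a garden and the garden opens onto a river".split()
model = train_bigram(corpus)
print(generate(model, "the"))
```

A real GPT-style model replaces the word-pair counts with a neural network trained on vast corpora, which is why fine-tuning it on a chosen corpus (film reviews, poetry) can shift the flavour of its continuations.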

Secondly, we are exploring the creation of some kind of ranking machine. For example:

a) we give the same input to create a range of candidate texts (e.g. 100), a machine ranks them according to merit, and randomly chooses from the top of the pile

b) we have two blocks of text – one visible, one not. We insert candidate third blocks between the visible and the hidden, and rank the insertions according to how well they work between the two. (This discussion also included ‘similarity metrics’ and BERT representation – ‘Bidirectional Encoder Representations from Transformers’.)

c) we compare prompts with two corpora of texts – one has features we are looking for (e.g. of genre or form), the other is overtly more neutral (e.g. informational website like BBC news) – the machine ranks highest those closest to the first.
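Options (a) and (c) can be sketched in a few lines. The snippet below is a crude illustration only: it scores candidates against a reference text using bag-of-words cosine similarity, a much simpler stand-in for the BERT-based similarity metrics mentioned above, then picks at random from the top of the pile:

```python
import math
import random
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def rank_candidates(candidates, reference, top_k=3, seed=0):
    """Score each candidate against a reference corpus, keep the
    top of the pile, and choose one of those at random."""
    ref_bag = Counter(reference.lower().split())
    scored = sorted(candidates,
                    key=lambda c: cosine(Counter(c.lower().split()), ref_bag),
                    reverse=True)
    return random.Random(seed).choice(scored[:top_k])

reference = "mist gathers over the river and a door opens in the dark"
candidates = [
    "quarterly sales figures rose by three percent",
    "the door opens and mist drifts over the dark river",
    "click here to subscribe to our newsletter",
    "a pale light gathers where the river meets the dark",
]
print(rank_candidates(candidates, reference, top_k=2))
```

The informational candidates score near zero while the atmospheric ones rank highest – the same shape as option (c), where a genre-flavoured corpus outranks a neutral one.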

In the new year (2023) we will pick up on these, aiming to make our proof-of-concept model in the Spring. Towards our grant application, we will also start scoping Phase 2 – on voice synthesis – with input from Victor Shepardson (Iceland, Dartmouth; delayed this year due to COVID-19 impacts). We will look at challenges and consequences of translating text responses into persuasive speech simulation, and practical issues around processing – since the outcome envisages accompanying users across ‘borders’ in real time, between recorded adaptive narration and AI-assisted/generated narration.

Tune in to International Dawn Chorus Day online

By Alice Eldridge

This weekend is International Dawn Chorus Day, a worldwide celebration of nature’s great symphony. Not everyone is of the requisite constitution to get up in time to witness the majesty of the spring dawn chorus, but fear not — you can listen in from the comfort of your own bed. As part of ongoing research into the art, science and technology of wild soundscapes we have installed a high-fidelity, DIY, off-grid live audio transmitter at Knepp Wilding Project in West Sussex.

Dawn mist over the hammer pond at Knepp Rewilding project.

Our live feed is part of a global broadcast, linking the dawn chorus of Sussex to a network of open microphones around the world. Over the weekend of Dawn Chorus Day each year, our project partners Sound Camp curate a live global dawn chorus transmission, Reveil. By mixing live feeds from around the globe they create one continuous 24-hour dawn chorus, following the path of the rising sun around the planet as our feathered friends awaken and warm up their virtuosic syrinxes.

Reveil is a 24-hour live broadcast of the dawn chorus as it circumnavigates the globe.

You are invited to listen to the Knepp soundscapes both above and below water. One ear is up in an oak tree, roosting with the turtle doves, cuckoos, owls and nightingales that have come to breed, evidence of the astonishing success of the rewilding of this arable farm over the last 20 years. The other ear takes you under water into a little stream where you can variously hear the tinkle of a babbling brook, the splashing of a duck bathing, a pig drinking, or the subtle munching of an as-yet-unidentified freshwater invertebrate.

The soundscape from the canopy of an oak tree is transmitted via a microphone, sound card and 3G dongle perched in the tree.

This technical and artistic experiment complements ongoing scientific and ethnographic research into cultural and natural soundscapes, including the potential to use sound to monitor ecological status. We now recognise that we are on the edge of the sixth great extinction. Various national, European and global strategies such as Biodiversity Net Gain, EU Biodiversity strategy 2030 or the UN Decade on Restoration, aim to halt or reverse biodiversity loss. Such schemes require evidence to monitor progress and inform decision making, but traditional survey methods of ecological health assessment are prohibitively time-consuming. Our previous research, alongside that of an increasingly active international community of ecoacousticians, demonstrates that listening in to ecosystems can provide valuable information about ecological status, biodiversity, and even behavioural changes in certain species.

The research cannot progress within a single discipline. Even within Sussex University over the last few years our research into cultural and natural soundscapes has involved collaborations across disciplines including conservation biology, international development, anthropology, AI, complexity science, neuroscience and music, partnering with artists in London, indigenous communities in Ecuador, fishers in Indonesia, parabiologists in Papua New Guinea, tourism operators in Sweden, anthropologists in Finland, ecoacousticians in Italy and geographers in France. Working together across and beyond disciplines enables technical and methodological innovation alongside ethnographic, cultural and ethical insights, which not only stimulate methodological and theoretical advances in conservation technologies, but bring other voices into the conversation. In this way we aim to contribute to social and ecological sustainability through creating cost-effective monitoring tools and advancing equitable conservation agendas.

If the soundscape acts as a transdisciplinary nexus for research, it also connects across species boundaries. As you listen to the exquisite nightingale trios in the late evening, the sound of ducks paddling or tiny insects feeding, I defy you to maintain a strong sense of human exceptionalism. Intimately witnessing the moment-to-moment details of the lives of these other beings unfold is a strong, sensory reminder of our interdependence — of the fact that human well-being and that of all other living organisms are inseparable. And a reminder that we need to act fast to ensure that all our songs continue long into the future.

— — —

Bringing you 24 hours of dawn chorus around the earth, Reveil runs 5am London time (UTC+1) on Saturday 30 April to 6am on Sunday 1 May 2022. Listen live on the Reveil platform

The live stream from Knepp is a long-term experiment in Rewilding Soundscapes – perhaps the ultimate slow radio. It is funded by Sussex Humanities Lab, Experimental Ecologies strand and is a collaboration between Alice Eldridge and arts cooperative Sound Camp.

You can listen to the live stream from Knepp day and night for years to come here.

Scoping out the best site for a long-term soundscape stream with Grant and Dawn of Sound Camp

Coming soon: An exciting announcement which explains the motivation for and development of this long term audio streaming project …

Utopia on the Tabletop

Utopia on the Tabletop is a forthcoming collection of scintillating interventions and essays about tabletop roleplaying games and their relation to utopian theory and practice. It is being edited by Jo Lindsay Walton and supported by the Sussex Humanities Lab’s Open Practice group. Contributors include:

  • Emma French
  • Francis Gene-Rowe
  • Grant Brewer
  • Kelsey Paige Mason
  • Lesley Guy
  • Vivek Santayana
  • Felix Rose Kawitzky
  • Nicholas Stefanski
  • Maurits W. Ertsen
  • Rafael Carneiro Vasques, Vitor Celli Caria, and Matheus Capovilla Romanetto
  • Benjamin Platt
  • Grace A.T. Worm
  • Allan Hughes and Mark Jackson
  • Jess Wind
  • Nicholas J. Mizer
  • Jo Lindsay Walton
  • Rok Kranjc
  • Kellyn Wee

More to be confirmed!

The collection will launch in early 2023 [UPDATE: OK, late 2023]. It will be a collaboration with MAH’s new imprints Ping Press and Both Are Worse. In the meanwhile, you can keep an eye on Vector, where you’ll be able to get a sneak peek at some of the chapters and interviews. See also the related Applied Hope Games Jam.