Greening the Digital Humanities

We had, I think, a very good workshop.

To coincide with COP26, the Greening the Digital Humanities workshop was hosted by the Edinburgh Centre for Data, Culture & Society, the University of Southampton Digital Humanities, the Sussex Humanities Lab, and the Humanities & Data Science Turing interest group. It was a chance for Digital Humanities groups across the UK and Northern Europe to come together to consider what DH communities should do to rise to the urgent challenges of a changing climate and a just climate transition.

It was a summit of unprecedented scope and determination, and probably long overdue. Before the day itself, we had a couple of months’ worth of drumroll, so we were able to start by sharing insights from the various scattered dialogues and surveys that preceded it. Video here and slides here.

Building on this early engagement, four-ish main action themes emerged during the workshop:

  • Compiling a toolkit for DH researchers to do what we do more sustainably — finding out what’s already out there and signposting it, finding out what isn’t and inventing it.
  • Improving our knowledge, especially about how to measure our own impacts. This could definitely inform that toolkit, but it came up so much it deserves its own theme.
  • Nurturing a community of interest around just transitions — climate action is about decolonisation, about feminism, about anti-racism, about diversity and democracy. Many of us felt we wanted to deepen our understandings of climate justice, to share in one another’s research, and to reach out to colleagues and fellow travellers outside of DH.
  • Lobbying, influencing, and offering support and expertise — especially within our universities, and in our relationships with major funders. There was also some interest in other stakeholder groups (key suppliers, green investor coalitions, people responsible for league tables and excellence frameworks, etc.).

My own breakout rooms focused mostly on that final theme. We spent quite a lot of the conversation on funders (some of whose representatives were in attendance). We all acknowledged the need for a collaborative and joined-up approach, feeding our perspectives into the work funders are already doing.

At the same time, there is also a fairly clear short-term ask here: we want prominent assurances that bids are not going to be disadvantaged for devoting some of their precious word counts to environmental impacts, and that budget lines related to mitigating environmental impact are legitimate. Everybody’s hunch is that this is already the case, but it’s good to have it said out loud, while medium-term processes such as updating funder guidelines grind into gear. There is plenty to figure out. But the next few years are crucial from a climate perspective, and bids going in today or tomorrow shape what we will be doing in 2022–2025. To keep them aligned with 1.5°C ambitions, some interim incentives will be handy.

As we flowed from our break-out groups into plenary discussion, another theme that emerged was work. We’re long past the point where managing climatic impacts could be seen as a ‘nice to have’ piece of work bolted onto the side of business-as-usual, if there happens to be some extra time and energy to devote to it. But at the same time, we need to be sensitive to diverse levels of capacity. We need to watch out for replicated or otherwise unnecessary work. Where possible, activities should be folded into things that already exist. Progress can be made asynchronously to reflect busy calendars. And where we can, we should tune into the ways this work can be collectively nourishing, fascinating, and energising.

So what are the next steps? Broadly, to sort ourselves into teams to try to action things over the next six months or so, and see how we get on with that. Also to continue to reach out to others. These activities probably need to be organised under an umbrella of some kind. How do you like the ring of a Digital Humanities Climate Coalition?

The workshop winds up. One by one they go back to their lives, till I am alone in the Zoom room. A surreptitious glance over my shoulder, then I gleefully get out my gas-guzzling vidaXL Petrol Backpack Leaf Blower and get the Google Jamboard in my gun sights. Post-its dance like confetti. One flies up that escaped my attention earlier.

“The world is burning. It is already too late without massive systematic top-down changes forced on us that no politician will want to do. Let’s all write nihilistic poetry and embrace the end.”

I feel that too. Of course it goes straight into the spreadsheet: WILLING TO LEAD OR CO-LEAD NIHILISTIC POETRY AND END-EMBRACING WORKING GROUP.

But it also drives home for me one last theme: the importance of mid-scale action. When we focus too much on what the individual can do — buying a zippy little electric car, or the Correct Broccoli — it fails to engage with the scale of the challenge. When we focus too much on the big big shifts — system change! Degrowth! An end to extractivist ontologies! — the concepts have all the necessary oomph, but the concrete actions prove elusive.

The middle scale, the often distinctly unpoetic activity of organising with a few others to influence an organisation, a sector, a community of practice, a regulation or practice, is often what goes missing. The small scale and the big scale are still important, of course! And climate actions at many different scales feed and reinforce one another. Nihilistic poetry and end-embracing can even be part of that …

But the reason it felt like a very good workshop was that it was satisfyingly in-the-middle. Hope can be a feeling, but hope isn’t exclusively a feeling. Hope is also what you do. And often it’s things you do with a few other people that most manifestly are hope. Interventions with two or three other collaborators, or a dozen, or twenty, exploring what might be accomplished, and multiplying the tales of the attempts.

If you are involved in any way with Digital Humanities and were not at the workshop, please feel free to reach out. Some ways to get involved: email j.w.baker@soton.ac.uk and ask about the Digital Humanities Climate Coalition; sign the Digital Humanities and the Climate Crisis manifesto; contribute to the growing crowdsourced list of resources (and wishlist).

This post is mirrored at Southampton and Edinburgh.

Communicating Climate Risk

Save the date: 1 October, 12:30-17:00

Register here.


If the goal of climate communication is to compel decision-makers to act, then for too long our methods haven’t worked. Many decision-makers desperately want to tackle the risks posed by climate change, but are confounded by mountains of complex, technical data.

So how can academics present climate risk in ways that are meaningful and effective for this audience?  How can they ensure communication is part of their thinking from the outset, not just at the end of a research project? Who exactly are the end-users of climate risk research, and what are their needs? 

This online afternoon workshop will be jointly delivered by UCL’s Climate Action Unit and the Analysis under Uncertainty for Decision-Makers network (AU4DM). We will draw on interdisciplinary expertise to equip participants with the critical skills to communicate on climate risk. 

Speakers will share insights across three broad topics: why risk communication is difficult, what decision-makers want (and need), and how to present climate risk information. A final, fourth session will invite participants to co-design communication tools for the future. 

We need the big stories, the stories that engage and inspire. 

At the same time, we also need tools to present more niche information. 

And throughout, we always need to be conscious of the politics of climate change communication: the ways our communications shape whose voices are heard, and whose decisions count.

Speakers and facilitators:

  • Martine Barons (Warwick)
  • Mark Workman (Imperial)
  • Polina Levontin (Imperial)
  • Jo Lindsay Walton (Sussex)
  • Freya Roberts (UCL)
  • Kris de Meyer (UCL)
  • Lucy Hubble Rose (UCL)

This is an open workshop that will be especially relevant to climate and environmental scientists, and others whose work involves communicating or relying on scientific knowledge about climate and the environment. It is part of the COP26 Universities Network’s climate risk conference.

Some Upcoming Events

Just a glimpse of some upcoming events from SHL:

Text Analysis with AntConc, with Andrew Salway. Wednesday, 24 February at 15:00 GMT. “This workshop is for researchers who would like to use automated techniques to analyse the content of one or more text data sets (corpora), and to identify their distinctive linguistic characteristics and reveal new potential lines of inquiry. The text data could comprise thousands to millions of words of e.g. news stories, novels, survey responses, social media posts, etc.” More info here. Part of the SHL Open Workshops Series. (For a tiny homespun taste of this kind of analysis, see the sketch after these listings.)

Dataset Publishing and Compliance, with Sharon Webb and Adam Harwood. Wednesday, 3 March at 15:00 GMT. “Funding bodies are placing increasing emphasis on data archiving in humanities research. The workshop will have a practical emphasis, aimed at helping you prepare data for deposit into a data archive or repository, to comply with grant application requirements.” More info here. Part of the SHL Open Workshops Series.

Reality is Radical: Queer, Avant-Garde, Utopian Gaming, with Bo Ruberg, Amanda Phillips, and Jo Lindsay Walton. Monday 8 March at 17:00 GMT. “The Sussex Humanities Lab and the Sussex Centre for Sexual Dissidence are pleased to welcome leading critical game studies scholars Amanda Phillips and Bo Ruberg to explore the politics of contemporary games. Games themselves are a major cultural form, and the ‘ludic turn’ in recent years has also seen game design thinking and critical play practices spill out into many areas of social and economic life.” More info here. Part of the SHL Seminar Series.

Coming to Terms with Data Visualization and the Digital Humanities, with Marian Dörk. “How can visualization research and design be inspired by concepts from cultural studies, sociology, and critical theory? In contrast to the epistemological hegemony that engineering and science has held over data visualization, humanistic engagements with data and interfaces suggest different kinds of concerns and commitments for the study and design of data visualizations. From collaborative research in the arts and humanities arises a need to support critical and creative engagements with data and visualization.” More info here. Part of the SHL Seminar Series.
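A quick coda on the AntConc workshop above, as promised. Here is a minimal sketch (standard-library Python only, and entirely my own illustration, not AntConc’s code or workshop material) of one core technique for finding a corpus’s ‘distinctive linguistic characteristics’: log-likelihood keyness, which scores the words of a target corpus against a reference corpus. The two miniature corpora are hypothetical placeholders.

```python
# A toy sketch of the kind of question AntConc automates: which words
# are distinctively frequent in one corpus relative to another?
# Scored here with log-likelihood ('keyness').
import math
from collections import Counter

def tokenize(text):
    # Crude tokenizer: lowercase, strip surrounding punctuation.
    words = (w.strip(".,!?;:\"'()").lower() for w in text.split())
    return [w for w in words if w]

def keyness(target, reference):
    """Log-likelihood (G2) score for each word in `target` vs `reference`."""
    t, r = Counter(tokenize(target)), Counter(tokenize(reference))
    nt, nr = sum(t.values()), sum(r.values())
    scores = {}
    for word, a in t.items():
        b = r.get(word, 0)
        e1 = nt * (a + b) / (nt + nr)  # expected frequency in target
        e2 = nr * (a + b) / (nt + nr)  # expected frequency in reference
        g2 = 2 * a * math.log(a / e1)
        if b:
            g2 += 2 * b * math.log(b / e2)
        scores[word] = g2
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Hypothetical placeholder corpora.
news = "the markets fell sharply as investors fled risky assets"
fiction = "she walked slowly through the garden thinking of the sea"
print(keyness(news, fiction)[:5])  # most distinctive words in `news`
```

Real corpora would run to thousands or millions of words, as the blurb says, and AntConc wraps this kind of comparison (along with concordancing, collocation, and much more) in a point-and-click interface; the sketch is just meant to show the shape of the underlying comparison.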

For more events, see the SHL website.

CfP: Utopia on the Tabletop

Edited by SHL’s Jo Lindsay Walton, Utopia on the Tabletop will explore tabletop roleplaying games (TTRPGs) in their intersections with utopianism. While often considered conspicuously “analog,” in distinction from their digital RPG cousins, TTRPGs actually have a much more complex relationship with the digital, shaped by gaming platforms and gaming social media such as Roll20, Twitch, Itch.io, and Discord, and encompassing a diverse array of digital project management, performance, and creativity tools. How might the utopianism of storytelling and play intersect with the utopianism of these (post-)digital affordances? Abstracts due 1 February: full CfP available here.

Opacity and Splaination

I’m just back from Beatrice Fazi’s seminar on ‘Deep Learning, Explainability and Representation.’ This was a fascinating account of opacity in deep learning processes, grounded in the philosophy of science but also ranging further afield.

Beatrice brought great clarity to a topic which — being implicated with the limits of human intelligibility — is by its nature pretty tough-going. The seminar talk represented work-in-progress building on her recently published book, Contingent Computation: Abstraction, Experience, and Indeterminacy in Computational Aesthetics, exploring the nature of thought and representation.

I won’t try to summarise the shape of the talk, but I’ll briefly pick up on two of the major themes (as advertised by the title), and then go off on my own quick tangent.

First, representation. Or more specifically, abstraction (from, I learned, the Latin abstrahere, ‘to draw away’). Beatrice persuasively distinguished between human and deep learning modes of abstraction. Models abstracted by deep learning, organised solely according to predictive accuracy, may be completely uninterested in representation and explanation. Such models are not exactly simplifications, since they may end up as big and detailed as the problems they account for. Such machinic abstraction is quasi-autonomous, in the sense that it produces representational concepts independent of the phenomenology of programmers and users, and without any shared nomenclature. In fact, even terms like ‘representational concept’ or ‘nomenclature’ deserve to be challenged.

This brought to my mind the question: so how do we delimit abstraction? What do machines do that is not abstraction? If we observe a machine interacting with some entity in a way which involves receiving and manipulating data, what would we need to know to decide whether it is an abstractive operation? If there is a deep learning network absorbing some inputs, is whatever occurs in the first few layers necessarily ‘abstraction,’ or might we want to tag on some other conditions before calling it that? And is there non-representational abstraction? There could perhaps be both descriptive and normative approaches to these questions, as well as fairly domain-specific answers.
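To make that question a shade more concrete, here is a toy sketch of my own (assuming PyTorch; nothing of the sort appeared in the talk). We train a tiny network purely for predictive accuracy, then read off what its ‘first few layers’ make of a single input. The activations are trivially easy to extract, but they arrive as bare numbers, organised for prediction rather than for human inspection.

```python
# A toy sketch (assuming PyTorch): train a tiny network for accuracy
# alone, then extract its early-layer 'abstraction' of a single input.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic task: classify 2D points by the sign of x * y (XOR-like).
X = torch.randn(512, 2)
y = ((X[:, 0] * X[:, 1]) > 0).long()

model = nn.Sequential(
    nn.Linear(2, 16), nn.ReLU(),   # the 'first few layers'
    nn.Linear(16, 16), nn.ReLU(),
    nn.Linear(16, 2),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Optimisation cares only about predictive accuracy; nothing here
# rewards the network for being legible to us.
for _ in range(300):
    opt.zero_grad()
    loss_fn(model(X), y).backward()
    opt.step()

# The early-layer representation of one input point: sixteen numbers,
# easy to read off, with no shared nomenclature attached to them.
with torch.no_grad():
    point = torch.tensor([[0.5, -1.2]])
    early = model[1](model[0](point))
print(early)
```

Whether those sixteen numbers constitute an ‘abstraction,’ or whether we should reserve that word for something else, is exactly the sort of question left open above.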

Incidentally, the distinction between machine and human abstraction also made me wonder if pattern-recognition actually belongs with terms such as generalization, simplification, reduction, and (perhaps!) conceptualization, and (perhaps even!) modelling, terms which pertain only in awkward and perhaps sort of metaphorical ways to machine abstraction. It also made me wonder how applicable other metaphors might be: rationalizing, performing, adapting, mocking up? Tidying? — like a machinic Marie Kondo, discarding data points that fail to spark joy?

The second theme was explanation. Beatrice explored the incommensurability between the abstractive operations of human and (some) machine cognition from a number of angles, including Jenna Burrell’s critical data studies work, ongoing experiments by DARPA, and the broader philosophical context of scientific explainability, such as Kuhn and Feyerabend’s influential clashes with conceptual conservatism. She offered translation as a broad paradigm for how human phenomenology might interface with zones of machinic opacity. However, to further specify appropriate translation techniques, and/or ways of teaching machine learning to speak a second language, we need to clarify what we want from explanation.

For example, we might want ways to better understand the impact of emerging machine learning applications on existing policy, ways to integrate machine abstractions into policy analysis and formation, ways to clarify lines of accountability which extend through deep learning processes, and ways to create legibility for (and of) human agents capable of bearing legal and ethical responsibility. These are all areas of obvious relevance to the Science Policy Research Unit, which hosted today’s seminar. But Beatrice Fazi’s project is at the same time fundamentally concerned with the ontologies and epistemologies which underlie translation, whether it is oriented to these desires or to others. One corollary of such an approach is that it will not reject in advance the possibility that (to repurpose Langdon Winner’s phrase) the black box of deep learning could be empty: it could contain nothing translatable at all.


For me, Beatrice’s account also sparked questions about how explanation could enable human agency, but also curtail it. Having something explained to you can be the beginning of something, but it can also be the end. How do we cope with this?

Might we want to mobilise various modes of posthumanism and critical humanism, to open up the black box of ‘the human’ as well? Might we want to think about who explanation is for, where in our own socio-economic layers explanation could insert itself, and what agencies it could exert from there? Think about how making automated processes transparent might sometimes place them beyond dispute, in ways which their opaque predecessors were not? Think about how to design institutions which — by mediating, distributing, and structuring it — make machinic abstraction more hospitable for human being, in ways relatively independent of its transparency or opacity to individual humans? Think about how to encourage a plurality of legitimate explanations, and to cultivate an agonistic politics in their interplay and rivalry?

Might we want to think about distinguishing explainable AI from splainable AI? The word mansplain has been around for about ten years. Rebecca Solnit’s ‘Men Explain Things To Me’ (2008), an essay that actually intersects with many of Rebecca Solnit’s interests, and which she has probably had recommended to her at parties, doesn’t use the word, but it does seem to have inspired it.


Splain has splayed a little, and nowadays a watered-down version might apply to any kind of pompous or unwelcome speech, gendered or not. However, just for now, one way to specify splaining might be: overconfident, one-way communication which relies on and enacts privilege, which does not invite the listener as co-narrator, nor even monitor via backchannels the listener’s ongoing consent. Obviously splained content is also often inaccurate, condescending, dull, draining, ominously interminable, and even dangerous, but I think these are implications of violating consent, rather than essential features of splaining: in principle someone could tell you something (a) that is true, (b) that you didn’t already know, (c) that you actually care about, (d) that doesn’t sicken or weary you, (e) that doesn’t impose on your time, and wraps up about when you predict, (f) that is harmless … and you could still sense that you’ve been splained, because there is no way this bloke could have known (a), (b), (c), (d), (e), and/or (f).

“Overconfident” could maybe be glossed a bit more: it’s not so much a state of mind as a rejection of the listener’s capacity to evaluate; a practiced splainer can even splain their own confusion, doubt, and forgetfulness, so long as they are acrobatically incurious about the listener’s independent perspective. So overconfidence makes possible the minimalist splain (“That’s a Renoir,” “You press this button”), but it also goes hand-in-hand with the impervious, juggernaut quality of longform splaining.

Splainable AI, by analogy, would be translatable into human phenomenology, without human phenomenology being translatable into it. AI which splains itself might well root us to the spot, encourage us to doubt ourselves, and insist we sift through vast swathes of noise for scraps of signal, and at a systemic level, devalue our experience, our expertise, and our credibility in bearing witness. I’m not really sure how it would do this or what form it would take: perhaps, by analogy with big data, big abductive reasoning? I.e. you can follow every step perfectly, there are just so many steps? Splainable AI might also give rise to new tactics of subversion, resistance, solidarity. Also, although I say ‘we’ and ‘us,’ there is every reason to suppose that splainable AI would exacerbate gendered and other injustices.

It is interesting, for example, that DARPA mention “trust” among the reasons they are researching explainable artificial intelligence. There is a little link here with another SHL-related project, Automation Anxiety. When AIs work within teams of humans, okay, the AI might be volatile and difficult to explain, evaluate, debug, veto, learn from, steer to alternative strategies, etc. … but the same is true of the humans. The same is particularly true of the humans if they have divergent and erratic expectations about their automated team-mates. In other words, rendering machine learning explainable is not only useful for co-ordinating the interactions of humans with machines, but also useful for co-ordinating the interactions of humans with humans in proximity to machines. Uh-oh. For that purpose, there only needs to be a consistent and credible, or perhaps even incontrovertible, channel of information about what the AI is doing. It does not need to be true. And in fact, a cheap way to accomplish such incontrovertibility is to make such a channel one-way, to reject its human collaborators as co-narrators. Maybe, after all, getting splAIned will do the trick.

JLW

Earlier: Messy Notes from the Messy Edge.

Messy Notes from the Messy Edge

By Jo Lindsay Walton

 “[…] had the it forow sene […]”

— John Barbour, The Brus (c.1375)

If this is an awful mess… then would something less messy make a mess of describing it?

— John Law, After Method (2004)

I am in the Attenborough Centre for the Creative Arts. It is my first time in the Attenborough Centre for the Creative Arts. It’s a pretty good Centre.

It’s Messy Edge 2018, part of Brighton Digital Festival. The name “Messy Edge,” I guess, is a play on “cutting edge.” It must be a rebuke to a certain kind of techno‑optimism – or more specifically, to the aesthetics which structure and enable that optimism. That is, the image of technology as something slick, even, and precise, which glides resistlessly onward through infinite possibility. If that slick aesthetic has any messiness at all, it’s something insubstantial, dispersed as shimmer and iridescence and lens flare.

My mind flicks to a chapter in Ruth Levitas’s Utopia as Method, where she explores the utopian presence that pervades the colour blue. Blue sky thinking, the blues. Levitas never mentions “blueprint,” and now I’m wondering if that’s deliberate? – an essay haunted, textured, structured, enabled, by its unuttered pun. Like how no one ever asks BoJack Horseman, “Why the long face?”

Utopia as Method – my copy anyway – is blue.

Many artists are really awful at talking about their art. Some artists, I suspect, do this deliberately. Or at least, their incompetence comes from stubborn adherence to something disordered and convoluted, to something in their work that would vanish from any punchy soundbite. I like them, these artists who are really awful at talking about their art. “Awful” – filled with awe?

By contrast, the digital artists at Messy Edge are, by and large, very good at talking about their art, and about the political context of their art.

OK, I like them too.


Welcome

Welcome to the sparkling new Sussex Humanities Lab blog!

What is this field, called the Digital Humanities? Broadly speaking, it’s about a dialogue – occasionally a tussle, or a two-part harmony – between all the things humanities scholars have traditionally done, and the new and emerging practices which digital technology enables. Here at the SHL, we’re organised into four strands: Digital History/Digital Archives, Digital Media/Computational Culture, Digital Technologies/Digital Performance, and Digital Lives/Digital Memories; we also collaborate extensively with the TAG (Text Analysis Group) Lab, and with the University of Sussex Library. Building innovative archival and analytic tools to reappraise literary and cultural heritage is part of what we do; so is thinking through the ethical implications of the changing nature of privacy and personal data; so is investigating fledgling or fleeting everyday cultural practices of social media users. The fore-edges of medieval manuscripts are in our wheelhouse; so are the memes of 4chan.

But even a permissive definition of the Digital Humanities risks falling short of the sheer richness and diversity of activity taking place under its rubric. The influence of the Digital Humanities spreads wide, as encounters with new cultural forms often cast fresh light on the familiar, revealing what was under our noses all along. Some scholars and artists have already started to prefer the term postdigital. Let’s not forget about that strong practice-led thread either: the Digital Humanities is not only critical and curatorial, but also creative.

Perhaps the best way to understand the Digital Humanities is to keep an eye on what Digital Humanities scholars are up to. That’s where this blog comes in. In the coming months we’ll be bringing you glimpses of dozens of exciting projects and initiatives with which the Sussex Humanities Lab is involved. We hope to make this blog a place of interest to researchers of all disciplines, and to the public at large.

For my part, I mostly work on the intersection of speculative fiction and economics. How are creators of speculative fiction imagining the impact of automation and Artificial Intelligence on society? Can speculative fiction inform the design of new institutions and policies, allowing us to meet the ecological challenges of the future? Or … maybe it can’t? Later in the year, I’ll be sharing my research more fully. But right now, I want to flag up two projects I’ve been involved with editorially, both hot off the presses. On the fiction side, there’s Strange Economics, an anthology from David Schultz, featuring original economics-themed short stories. The ebook will be free for a limited time. Then there’s Vector, the UK’s longest-running magazine of SF criticism. Vector #288, co-edited with Polina Levontin, is a special issue devoted to the future of economics. The magazine only goes to members of the British Science Fiction Association, but we’ll be featuring plenty of excerpts on the Vector website over the next few months.