Game Studies Event

Last week the Sussex Humanities Lab played host to a pretty special experiment. Alex Peverett and Andrew Duff assembled over twenty gaming systems, spanning over forty years of gaming history.

In attendance, says Alex, were such marvels and horrors as a Sinclair ZX81, Atari 2600, ColecoVision, Vectrex, Sinclair 128k, Commodore 64, Atari 800XL, Acorn BBC Model B Microcomputer, Windows/DOS laptop, Nintendo 64, Super Nintendo, Nintendo GameCube, Sony PlayStation 2, Sega Mega Drive, Sega Dreamcast, Nintendo Game Boy, Sony PSP, Sony PS4 and PSVR, and a Speak & Spell … that he can remember.

I am fairly sure there were at least one or two systems from a parallel universe’s timeline (but my memory is a bit blurred by vast steam clouds billowing from the polished brass and stained glass Sega Steamcast, so maybe not). 

Tim Jordan, Professor of Digital Cultures at Sussex, kicked things off with a highly suggestive, whistlestop tour of issues in gaming from a broadly media studies, sociological, and cultural studies perspective. Tim used the example of a massively multiplayer online roleplaying game to illuminate at least four themes. First, there was the negotiation of meaning and identity in gaming, especially thinking about race, gender, and sexuality. Second, the co-production of games by players, not only through their in-game behaviours, but also through player co-operation outside of games, and the modification and changing of games by players. Third, there was the media archaeology of games, both in the glittering array of stand-alone games surrounding us and on the servers of discontinued MMORPGs. Some digital ludic worlds have an uneasy, precarious existence, liable to have the plug pulled by the owners of the IP. Then Tim wrapped up by looking at some issues around gaming and the digital economy more generally. The older consoles in the room were built with the expectation that players would purchase and fully and permanently own each new game; more recently, gaming has been reshaped by the economics of free-to-play games (with in-game purchases), subscription models, and so on.

Then we played. I tried out a VR game for the first time (and am presumably still in the game, which was Moss). It wasn’t nearly as immersive as I’d been led to expect by commercial hype and popular culture portrayals of VR, but it was fetchingly unsettling. I liked the way sometimes an invisible object or person in the “real” world got in your way, and you just had to push it or them over to pursue your in-game objectives.

I drew some eldritch granite water lilies up from the deep, and helped my heartbreakingly brave protagonist (a needle-wielding champ with a real Reepicheep / Mattimeo vibe) hop across the water. In the background, the mountainous gloom shifted … it was a deer, stooping to drink.

But although I was there for almost the full three hours, I probably only spent ten minutes on the gizmos, because I just kept having nice chats with people! We talked about gaming — video games, board games, tabletop RPGs, gamification, art with game elements, worldbuilding and storytelling across games and other forms of culture. Networking was part of the event’s rationale:

This event will not only be a chance to explore SHL’s media archaeology resources, reflect on media archaeological theory and practice — and play some games! — but also an opportunity to meet others across the university involved in gaming, game studies, and game design, and to take stock of the state of the art and the future of game studies at Sussex. […] What are we already doing around games at Sussex? How can we bring together existing research and teaching around gaming to share resources, projects, ideas, and opportunities?

We were here to shoot zombie Nazis, but more fundamentally, to shoot the breeze.

For some reason, I did start to think of Sam Ladkin, Senior Lecturer in Critical and Creative Writing (English), as a sort of console. “You should give this one a go,” I told everyone. “It’s really weird.”

Among other things, Sam and I discussed the new elective undergraduate module he’ll be launching, Video Games: Critical and Creative Writing. It sounds like a really fascinating and exciting mix, which will look at games as both technical and cultural objects, will allow students to be assessed through creative work and/or their critical studies of games, and will place politics at its very heart.

With the launch of Video Games: Critical and Creative Writing, along with what’s already happening on the Games and Multimedia Environments BSc and elsewhere, and the outlines of SHL 2 beginning to wobble on the horizon, we could well be at the dawn of a golden age of game studies at Sussex.

(But then again, maybe I’m still in the game).

Beyond Numbers: Celebrating Ada Lovelace Day at the Brighton Digital Festival 2018

By Ioann Maria and Sharon Webb.

Held annually on the second Tuesday in October, Ada Lovelace Day recognises the accomplishments of women in Science, Technology, Engineering and Maths (STEM), while also celebrating Ada Lovelace herself as a pioneer of computer science. The overarching aims of the annual event are to increase the profile of women in STEM and to create and highlight role models, so that we can encourage diversity and representation in computer science, in software engineering, and in the sciences more broadly. In 2018 the Sussex Humanities Lab celebrated Ada Lovelace Day with an event called Beyond Numbers.

Ioann Maria and Sharon Webb

Ada Lovelace was born in 1815. She was encouraged by her mother, Annabella Byron, to study arithmetic, music, and French. It has been suggested that this strict study regime was a deliberate attempt to suppress Ada’s imagination: her mother was fearful of a ‘dangerous and potentially destructive’ imagination, given the eccentricities of Ada’s estranged father, Lord Byron (Essinger, 2014).

By the time she was thirteen, Ada Lovelace had already designed a mechanical bird. At the age of eighteen Lovelace formally met Charles Babbage, who would later be heralded as the father of computing science. She became intrigued with Babbage’s proposed “Difference Engine.” Over the years Ada Lovelace studied and translated the maths associated with both Babbage’s Difference Engine and its sequel, the Analytical Engine, as well as the Jacquard Loom. In 1843, translating and annotating Luigi Menabrea’s paper on Babbage’s Analytical Engine, she developed a formula for computing Bernoulli numbers. On the basis of this work — a program to be executed upon a machine that did not yet exist — Lovelace has been hailed as the world’s first computer programmer.

But unlike Babbage and Menabrea, who saw only the number-crunching potential of this machine, Ada Lovelace proposed that if a machine could manipulate numbers then it could do so for any type of “data.” Indeed, the ‘Enchantress of Numbers’ (as Babbage is credited with describing her) stated that the Analytical Engine ‘might act upon other things besides numbers,’ and that, for instance, it might ‘compose elaborate and scientific pieces of music of any degree of complexity or extent.’

The Beyond Numbers event, organised by Ioann Maria and Dr Sharon Webb, coincided with Ada Lovelace Day. It was specifically interested in exploring the potential identified by Ada Lovelace for machines to ‘act upon other things besides numbers.’ The aim of the event was to celebrate women, non-binary, and transgender scientists, artists, musicians, researchers and thinkers whose works are based on scientific, technological and/or mathematical methods. 

The event opened with Sharon Webb’s historical overview of the role of women in technology, entitled “When Computers Were People,” which also called out the current gender gap in computer science. She was followed by a session from Kate Howland (University of Sussex, Lecturer in Interactive Design) entitled “Talking Programming,” in which Kate gave an outline of her research on designing voice user interfaces for end-user programming in home automation. Cécile Chevalier, Lecturer in Media Practice at the University of Sussex, spoke on “Automata, Automatism and Instrument-Making Toward Computational Corporeal Expressions.” In thinking of the body, technology and expression in computational art, Cécile offered a retrospective of her own artwork. Brighton-based audio-visual artist Akiko Haruna gave a talk on the A/V and electronic music scene, “Self-Value in the Face of Ego,” focused on encouraging all women to explore the world of electronic music and audio-visual art. She spoke of her personal experiences and the many ways in which digital sound as a medium has liberated her work. Estela Oliva, London-based artist and curator, spoke on “Hybrid Worlds, New Realities,” presenting her new project CLON, in which she interrogates the possibilities of new spaces enabled by virtual and immersive technologies such as gaming, 3D video, and virtual reality. Irene Fubara-Manuel, in their talk “An Auto-Ethnographic Account of Virtual Borders,” presented the piece “Dreams of Disguise” (2018), a traversal of the virtual border through racialized biometric technologies: a project that blurs documentary truth with science fiction to reveal the ubiquitous surveillance of migrants and the rising desire for opacity. The event closed with Ioann Maria’s “Contra-Control Structures,” a talk on hacktivism, cyberactivism, and women, with an outline of her first-hand experience in creating physical DIY creative spaces.

The day was a fusion of science and creative arts. It reached beyond the “numerical” and provided a friendly space for the local community to find out about one another — a space to share, to engage, and to collaborate.


As a direct result of Beyond Numbers and the positive feedback the event received, FACT/// (Feminist Approaches to Computational Technology Network) was established by Cécile Chevalier, Sharon Webb, and Ioann Maria Stacewicz. In keeping with the aspirations and goals of Ada Lovelace Day, FACT/// seeks to promote dialogue and collaboration, and to support diverse voices in transdisciplinary computational thinking and environments. The first FACT/// forum was held on Thursday, 7th March at the Sussex Humanities Lab; for more details see fact.network. FACT/// is a CHASE Feminist Network Award and is also supported by the Sussex Humanities Lab.

#AdaBeyondNumbers on the web:

About Sharon:

Sharon Webb is a Digital Humanities Lecturer in the Sussex Humanities Lab and the School of History, Art History and Philosophy. Sharon is a historian of Irish associational culture and nationalism (eighteenth and nineteenth century) and a digital humanities practitioner, with a background in requirements/user analysis, digital preservation, digital archiving, text encoding and data modelling. Sharon also has programming and coding experience and has contributed to the successful development of major national digital infrastructures.

Sharon’s current research interests include community archives and identity, with a special interest in LGBTQ+ archives, social network analysis (method and theory), and research data management. She holds a British Academy Rising Star Engagement Award 2018 on the topic of community archives and digital preservation, working with a number of community projects, including Queer in Brighton.

Sharon is currently running a twelve-month project funded by the British Academy Rising Star Engagement Award (2018), ‘Identity, Representation and Preservation in Community Digital Archives and Collections’. This project is an intervention in three important areas: community archives, digital preservation, and content representation. For more details see www.preservingcommunityarchives.co.uk.

About Ioann Maria:

Ioann Maria is a new media artist, filmmaker, and computer scientist. Ioann’s work is focused on hacktivism, electronic surveillance, computer security, human-machine interaction, and interactive physical systems. In her solo and collaborative projects she explores new methods in real-time audio-visual performance.

Ioann is co-founder of the Edinburgh Hacklab, Scotland’s first hackerspace. She was formerly an Artistic Director of LPM Live Performers Meeting, the world’s largest annual meeting dedicated to live video performance and new creative technologies, and a Research Technician in Digital Humanities at the Sussex Humanities Lab, University of Sussex, which is dedicated to developing and expanding research into how digital technologies are shaping our culture and society.

And we’re off!

Sussex’s Dr Nicola Stylianou reflects on the launch of Making African Connections.

Suchi Chatterjee (researcher, Brighton and Hove Black History) and Scobie Lekhuthile (curator, Khama III Memorial Museum) discussing the project.

Last week was the first time that everybody working on the Making African Connections project was in the same room together. This was a very exciting moment for us and was no small feat: people travelled from Namibia, Botswana, Sudan and all across the UK to attend our first project workshop. We began by discussing the project together and then broke into three groups to discuss the three museum collections of African objects that are now in Kent and Sussex.

The first working group discussed a collection of Batswana artefacts donated to Brighton Museum by Revd Willoughby, a missionary. Staff at the museum will be working with researcher Winani Thebele (Botswana National Museums) and curator Scobie Lekhuthile (Khama III Memorial Museum), as well as Tshepo Skwambane (DCES), and Suchi Chatterjee and Bert Williams (Brighton and Hove Black History). The second case study focuses on a large collection of objects from South West Angola that are held at the Powell-Cotton Museum and were acquired in the 1930s. The objects are mainly Kwanyama, and this part of the project has as its advisor an expert in Kwanyama history, Napandulwe Shiweda (University of Namibia). Finally, the project will consider Sudanese objects held at the Royal Engineers Museum. Research for this part of the project is being conducted by Fergus Nicoll, Reem al Hilou (Shams AlAseel Charitable Initiative) and Osman Nusairi (intellectual).

The aim of the workshop was to decide together what the priorities for the project were. We will begin digitising objects for our online archive in April so we need to know which objects we want to work on first as some of the collections are very large. It will only be possible to create online records for a selection of objects.

Viewing the objects in the store room
Viewing galleries at the Royal Engineers Museum

Before the workshop on Wednesday we had arranged for all the participants to visit the relevant galleries and see objects in storage. This had led to some interesting and difficult conversations that we were able to build on during the workshop. Perhaps the clearest thing to come out of the meeting was the sheer amount of work still to be done to fully research these collections and to understand their potential to connect to audiences and to each other.

This post originally appeared on the Making African Connections project blog on 25 February 2019. Making African Connections is an AHRC-funded project.

Mending Dame Durrants’ Shoes

This week Louise Falcini gave us an update on the AHRC-funded project The Poor Law: Small Bills and Petty Finance 1700-1834.

The Old Poor Law in England and Wales, administered by the local parish, dispensed benefits to paupers, providing a uniquely comprehensive pre-modern system of relief. The law remained in force until 1834, supplying goods and services to keep the poor alive: each parish provided food, clothes, housing and medical care. This project will investigate the experiences of people across the social spectrum whose lives were touched by the Old Poor Law, whether as paupers or as poor-law employees or suppliers.

The project seeks to enrich our understanding of the many lives touched by the Old Poor Law. This means paupers, but it also means workhouse mistresses and other administrators, midwives, tailors, cobblers, butchers, bakers, and many others. Intricate everyday social and economic networks sprang up around the Poor Law, about which we still know very little.

To fill these gaps to bursting, the project draws on a previously neglected class of sources: thousands upon thousands of slips of paper archived in Cumbria, Staffordshire and East Sussex, often tightly folded or rolled, of varying degrees of legibility, and all in the perplexing loops and waves of an eighteenth-century hand …

Overseer note

These Overseers’ vouchers – similar to receipts – record the supply of food, clothes, healthcare, and other goods and services. Glimpse by glimpse, cross-reference by cross-reference, these fine-grained fragments glom together, revealing ever larger and more refined images of forgotten lives. Who was working at which dates? How did procurement and price fluctuate? What scale of income was possible for the suppliers the parish employed? What goods were stocked? Who knew whom, and when? Who had what? What broke or wore out when? As well as the digital database itself, the project will generate a dictionary of partial biographies, collaboratively authored by professional academics and volunteer researchers.

Louise took us through the data capture tool used by volunteer researchers. A potentially intimidating fifty-nine fields subtend the user-friendly front-end. The tool is equipped with several useful features. For example, it is possible to work remotely. The researcher has the option to “pin” the content of a field from one record to the next. The database automatically saves every iteration of each record. The controlled vocabulary is flexible enough to accommodate anomalies. It’s also relatively easy to flag records for conservation assessment or transcription assistance, or to go back and edit records. Right now they’re working on implementing automated catalogue entry creation, drawing on the Calm archive management system.
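The “pin” behaviour is easy to picture in code. Here is a minimal sketch in Python — entirely hypothetical, since the project’s actual tool is a database front-end and none of these names come from it — of how pinned fields might carry values forward from one record to the next:

```python
# Hypothetical sketch of a "pin" feature for sequential record entry.
# Pinned fields carry their values forward from record to record, so
# repeated text (e.g. the same parish or supplier across a run of
# vouchers) need not be re-transcribed.

class VoucherEntryForm:
    def __init__(self):
        self.pinned = {}   # field name -> value carried forward
        self.records = []  # completed records

    def pin(self, field, value):
        """Keep this field's value for all subsequent records."""
        self.pinned[field] = value

    def unpin(self, field):
        """Stop carrying this field forward."""
        self.pinned.pop(field, None)

    def new_record(self, **fields):
        """Start from the pinned values, then apply this record's own data."""
        record = dict(self.pinned)
        record.update(fields)
        self.records.append(record)
        return record

form = VoucherEntryForm()
form.pin("parish", "Brighton")
form.pin("supplier", "J. Durrant")
r1 = form.new_record(item="mending shoes", amount="6d")
r2 = form.new_record(item="worsted stockings", amount="1s 2d")
# Both records share the pinned parish and supplier without retyping.
```

The point of the sketch is only that the repetition is absorbed by the tool rather than by the transcriber’s fingers — the small design decision whose consequences for voluntary labour the following paragraphs dwell on.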


Personally, one of the things I find exciting about the project is how it engages both with the history of work and with the future of work. Part of its core mission is to illuminate institutions of disciplinarity, entrepreneurship, and precarity in eighteenth and early nineteenth century England. At the same time the project also involves, at its heart, questions about how we work in the twenty-first century.

Just take that pinning function, which means that researchers can avoid re-transcribing the same text if it’s repeated over a series of records. It almost feels inadequate to frame this as a “useful feature,” with all those overtones of efficiency and productivity! I’m not one of those people who can really geek out over user experience design. But most of us can relate to the experience of sustained labour in slightly the wrong conditions or using slightly the wrong tools. Most of us intuit that the moments of waste woven into such labour can’t really be expressed just in economic terms. And I’m pretty sure the moments of frustration woven into such labour can’t be expressed in purely psychological terms either. Those moments might perhaps be articulated in the language of metaethics and aesthetics? – or perhaps they need their very own (as it were) controlled vocabulary. But whatever they are, I think they manifest more clearly in voluntary labour, where it is less easy to let out that resigned sigh and think, “Whatever, work sucks. Come on Friday.”

I don’t have any first-hand experience of working with this particular data capture tool. But from the outside, the design certainly appears broadly worker-centric. I think digital work interfaces, especially those inviting various kinds of voluntary labour, can be useful sites for thinking more widely about how to challenge a productivity-centric division of labour with a worker-centric design of labour. At the same time, I guess there are also distinctive dangers to doing that kind of thinking in that kind of context. I wouldn’t be surprised if the digital humanities’ love of innovation, however reflexive and critical it is, tempts us to downplay the importance of the minute particularity of every worker’s experience, and the ways in which working practices can be made more hospitable and responsive to that particularity. (Demos before demos, that’s my demand).

I asked Louise what she thought motivated the volunteer researchers. Not that I was surprised – if something is worth doing there are people willing to do it, given the opportunity! – but I wondered what drew these particular people to this particular work? In the case of these parishes, it helps that there are good existing sources into which the voucher data can be integrated, meaning that individual stories are coming to life especially rapidly and richly-resolved. Beyond this? Obviously, the motives were various. And obviously, once a research community was established, it has the potential to become a motivating energy in itself. But Louise also reckoned that curiosity about these histories – about themes of class, poor relief and the prehistory of welfare, social and economic justice, and of course about work – played a huge role in establishing it in the first place.

Blake wrote in Milton about “a moment in each Day that Satan cannot find / Nor can his Watch Fiends find it.” I bet there is a moment within every rote task that those Watch Fiends have definitely stuck there on purpose. It’s that ungainly, draining, inimitable moment that can swell with every iteration till it somehow comes to dominate the task’s entire temporality. It is politically commendable to insist that these moments persist in any task designed fait accompli from a distance, by people who will never have to complete that task more than once or twice … no matter how noble or comradely their intentions. But even if we should be careful about any dogmatic redesign of labour, I think we should at least be exploring how to redesign the redesign of labour. Karl Marx wrote in his magnum opus The Wit and Wisdom of Karl Marx that, unlike some of his utopian contemporaries, he was not interested in writing recipes for the cook-shops of the future. In some translations, not recipes but receipts. It actually is definitely the future now. And some of us are hungry.

JLW


The Poor Law: Small Bills and Petty Finance 1700-1834 is an AHRC-funded project.

  • PI: Alannah Tomkins (Keele)
  • Co-I: Tim Hitchcock (Sussex)
  • Research Fellow: Louise Falcini (Sussex)
  • Research Associate: Peter Collinge (Keele)

Opacity and Splaination

I’m just back from Beatrice Fazi’s seminar on ‘Deep Learning, Explainability and Representation.’ This was a fascinating account of opacity in deep learning processes, grounded in the philosophy of science but also ranging further afield.

Beatrice brought great clarity to a topic which — being implicated with the limits of human intelligibility — is by its nature pretty tough-going. The seminar talk represented work-in-progress building on her recently published book, Contingent Computation: Abstraction, Experience, and Indeterminacy in Computational Aesthetics, exploring the nature of thought and representation.

I won’t try to summarise the shape of the talk, but I’ll briefly pick up on two of the major themes (as advertised by the title), and then go off on my own quick tangent.

First, representation. Or more specifically, abstraction (from, I learned, the Latin abstrahere, ‘to draw away’). Beatrice persuasively distinguished between human and deep learning modes of abstraction. Models abstracted by deep learning, organised solely according to predictive accuracy, may be completely uninterested in representation and explanation. Such models are not exactly simplifications, since they may end up as big and detailed as the problems they account for. Such machinic abstraction is quasi-autonomous, in the sense that it produces representational concepts independent of the phenomenology of programmers and users, and without any shared nomenclature. In fact, even terms like ‘representational concept’ or ‘nomenclature’ deserve to be challenged.

This brought to my mind the question: so how do we delimit abstraction? What do machines do that is not abstraction? If we observe a machine interacting with some entity in a way which involves receiving and manipulating data, what would we need to know to decide whether it is an abstractive operation? If there is a deep learning network absorbing some inputs, is whatever occurs in the first few layers necessarily ‘abstraction,’ or might we want to tag on some other conditions before calling it that? And is there non-representational abstraction? There could perhaps be both descriptive and normative approaches to these questions, as well as fairly domain-specific answers.
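To make the question concrete, here is a toy sketch — pure Python, not any real framework, with all names and numbers invented — of “whatever occurs in the first few layers”: a single dense layer turns an input into activations that carry no shared nomenclature, nothing tying any output unit to a human concept.

```python
import random

random.seed(0)  # reproducible toy example

def dense_layer(inputs, weights):
    """One fully connected layer with a ReLU: each output unit is a
    weighted sum of the inputs, clipped at zero. The outputs are just
    numbers; nothing labels unit 0 or unit 1 with a human concept."""
    return [max(0.0, sum(w * x for w, x in zip(row, inputs)))
            for row in weights]

# A made-up "observation": four input features.
inputs = [0.2, -1.0, 0.5, 0.9]

# Randomly initialised weights for three hidden units.
weights = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]

activations = dense_layer(inputs, weights)
# The intermediate "representation" is only a vector of floats.
```

Whether those three floats already constitute an ‘abstraction’ of the input, or whether some further condition must hold before we call them that, is exactly the question raised above.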

Incidentally, the distinction between machine and human abstraction also made me wonder if pattern-recognition actually belongs with terms such as generalization, simplification, reduction, and (perhaps!) conceptualization, and (perhaps even!) modelling, terms which pertain only in awkward and perhaps sort of metaphorical ways to machine abstraction. It also made me wonder how applicable other metaphors might be: rationalizing, performing, adapting, mocking up? Tidying? — like a machinic Marie Kondo, discarding data points that fail to spark joy?

The second theme was explanation. Beatrice explored the incommensurability between the abstractive operations of human and (some) machine cognition from a number of angles, including Jenna Burrell’s critical data studies work, ongoing experiments by DARPA, and the broader philosophical context of scientific explainability, such as Kuhn and Feyerabend’s influential clashes with conceptual conservatism. She offered translation as a broad paradigm for how human phenomenology might interface with zones of machinic opacity. However, to further specify appropriate translation techniques, and/or ways of teaching machine learning to speak a second language, we need to clarify what we want from explanation.

For example, we might want ways to better understand the impact of emerging machine learning applications on existing policy, ways to integrate machine abstractions into policy analysis and formation, to clarify lines of accountability which extend through deep learning processes, and to create legibility for (and of) human agents capable of bearing legal and ethical responsibility. These are all areas of obvious relevance to the Science Policy Research Unit, which hosted today’s seminar. But Beatrice Fazi’s project is at the same time fundamentally concerned with the ontologies and epistemologies which underlie translation, whether it is oriented to these desires or to others. One corollary of such an approach is that it will not reject in advance the possibility that (to repurpose Langdon Winner’s phrase) the black box of deep learning could be empty: it could contain nothing translatable at all.


For me, Beatrice’s account also sparked questions about how explanation could enable human agency, but could curtail human agency as well. Having something explained to you can be the beginning of something, but it can also be the end. How do we cope with this?

Might we want to mobilise various modes of posthumanism and critical humanism, to open up the black box of ‘the human’ as well? Might we want to think about who explanation is for, where in our own socio-economic layers explanation could insert itself, and what agencies it could exert from there? Think about how making automated processes transparent might sometimes place them beyond dispute, in ways which their opaque predecessors were not? Think about how to design institutions which — by mediating, distributing, and structuring it — make machinic abstraction more hospitable for human being, in ways relatively independent of its transparency or opacity to individual humans? Think about how to encourage a plurality of legitimate explanations, and to cultivate an agonistic politics in their interplay and rivalry?

Might we want to think about distinguishing explainable AI from splainable AI? The word mansplain has been around for about ten years. Rebecca Solnit’s ‘Men Explain Things To Me’ (2008), an essay that actually intersects with many of Rebecca Solnit’s interests, and which she has probably had recommended to her at parties, doesn’t use the word, but it does seem to have inspired it.


Splain has splayed a little, and nowadays a watered-down version might apply to any kind of pompous or unwelcome speech, gendered or not. However, just for now, one way to specify splaining might be: overconfident, one-way communication which relies on and enacts privilege, which does not invite the listener as co-narrator, nor even monitor via backchannels the listener’s ongoing consent. Obviously splained content is also often inaccurate, condescending, dull, draining, ominously interminable, and even dangerous, but I think these are implications of violating consent, rather than essential features of splaining: in principle someone could tell you something (a) that is true, (b) that you didn’t already know, (c) that you actually care about, (d) that doesn’t sicken or weary you, (e) that doesn’t impose on your time, and wraps up about when you predict, (f) that is harmless … and you could still sense that you’ve been splained, because there is no way this bloke could have known (a), (b), (c), (d), (e), and/or (f).

“Overconfident” could maybe be glossed a bit more: it’s not so much a state of mind as a rejection of the listener’s capacity to evaluate; a practiced splainer can even splain their own confusion, doubt, and forgetfulness, so long as they are acrobatically incurious about the listener’s independent perspective. So overconfidence makes possible the minimalist splain (“That’s a Renoir,” “You press this button”), but it also goes hand-in-hand with the impervious, juggernaut quality of longform splaining.

Splainable AI, by analogy, would be translatable into human phenomenology, without human phenomenology being translatable into it. AI which splains itself might well root us to the spot, encourage us to doubt ourselves, and insist we sift through vast swathes of noise for scraps of signal, and at a systemic level, devalue our experience, our expertise, and our credibility in bearing witness. I’m not really sure how it would do this or what form it would take: perhaps, by analogy with big data, big abductive reasoning? I.e. you can follow every step perfectly, there are just so many steps? Splainable AI might also give rise to new tactics of subversion, resistance, solidarity. Also, although I say ‘we’ and ‘us,’ there is every reason to suppose that splainable AI would exacerbate gendered and other injustices.

It is interesting, for example, that DARPA mention “trust” among the reasons they are researching explainable artificial intelligence. There is a little link here with another SHL-related project, Automation Anxiety. When AIs work within teams of humans, okay, the AI might be volatile and difficult to explain, evaluate, debug, veto, learn from, steer to alternative strategies, etc. … but the same is true of the humans. The same is particularly true of the humans if they have divergent and erratic expectations about their automated team-mates. In other words, rendering machine learning explainable is not only useful for co-ordinating the interactions of humans with machines, but also useful for co-ordinating the interactions of humans with humans in proximity to machines. Uh-oh. For that purpose, there only needs to be a consistent and credible, or perhaps even incontrovertible, channel of information about what the AI is doing. It does not need to be true. And in fact, a cheap way to accomplish such incontrovertibility is to make such a channel one-way, to reject its human collaborators as co-narrators. Maybe, after all, getting splAIned will do the trick.

JLW


Messy Notes from the Messy Edge

By Jo Lindsay Walton

 “[…] had the it forow sene […]”

— John Barbour, The Brus (c.1375)

I am in the Attenborough Centre for the Creative Arts. It is my first time in the Attenborough Centre for the Creative Arts.

It’s a pretty good Centre.

It’s Messy Edge 2018, part of Brighton Digital Festival. The name “Messy Edge,” I guess, must be a play on “cutting edge.” It must be a rebuke to a certain kind of techno‑optimism – or more specifically, to the aesthetics which structure and enable that optimism. That is, the image of technology as something slick, even, and precise, which glides resistlessly onward through infinite possibility. If this aesthetic has any messiness at all, it’s something insubstantial, dispersed as shimmer and iridescence and lens flare.

My mind flicks to a chapter in Ruth Levitas’s Utopia as Method, where she explores the utopian presence that pervades the colour blue. Blue sky thinking, the blues. She never mentions “blueprint,” and I wonder if that’s deliberate? – an essay haunted, textured, structured, enabled, by its unuttered pun. Like how no one ever asks BoJack Horseman, “Why the long face?”

Utopia as Method – my copy anyway – is blue.

Some artists are really awful at talking about their art. Some, I suspect, do this deliberately. Or at least, their incompetence comes from stubborn adherence to something disordered and convoluted, to something in their work that would vanish from any punchy soundbite. I like them, these artists who are really awful at talking about their art. “Awful” – filled with awe?

By contrast, the digital artists at Messy Edge are by and large very good at talking about their art, and about the political context of their art.

OK, I like them too.


Intelligent Futures: Automation, AI and Cognitive Ecologies

By Maisie Ridgway

Intelligent Futures was a postgraduate and ECR conference, supported by CHASE DTP and Sussex Humanities Lab. Over the course of two days, the conference challenged researchers to find original, philosophical and cultural approaches to Artificial Intelligence. The interdisciplinary explorations spanned the social sciences, informatics, psychology, art, literature and more, promoting critical and speculative engagements with technical cognition.

Thomas Nyckel of the Technical University of Braunschweig led the first panel, which engaged with AI from a philosophical standpoint. Nyckel’s paper offered an alternative approach to understanding the processes of digital devices and computation through the variable definitions of the ‘rule of thumb’. Nyckel identified two categorisations of the rule of thumb – Frederick Taylor’s conception, concerned with exact scientific results, and Alan Turing’s interest in approximate computational methodologies. What materialised was a synthesis of ideas that challenged the nature and myth of scientific exactitude and complicated the binary of workman and machine, or of the approximate rule of thumb and exact scientific methods. Mattia Paganelli followed Nyckel with a paper on the misnomer of ‘artificial’ when speaking of non‑human intelligence. Paganelli argued that the term shores up the binary between subject and object by attributing an a priori definition of the perimeter of possibility for a given system. Rather than ‘artificial intelligence’, Paganelli suggested the term ‘thinking machines’, understanding intelligence as a process characterised by learning, plasticity, and openness, among other key attributes.

The next panel united speakers under the common theme of ethics. Camilla Elphick of the University of Sussex spoke about her ongoing project to develop an AI chatbot, named Spot, as a means by which victims of workplace sexual harassment can report their experiences. Spot’s distinctly unhuman style of engagement meant that victims did not feel embarrassed, judged, scrutinised or pressured, garnering more accurate information. On an entirely different note, Marek Iwaniak’s paper explored how theologies and pre-technological religious imaginaries could engage with the novel challenge of AI. Iwaniak speculated on possible intersections between various religions and AI, such as a Buddhist approach to technical cognition whereby the ultimate aim of developers could be an AI consciousness of pure bliss.

The final panel of the day explored changing ideas of writing, asking whether an anthropocentric conception of literary creativity obscures other forms of nonhuman creative generativity from view. John Phelan of the Open University investigated whether AI could ever critically appreciate poetry or poetic significance beyond, for example, large-sample readings of rhyme schemes. Emma Newport, from the University of Sussex, considered digitised end-of-life writing via the unusual case of the popular Mumsnet contributor IamtheZombie, for whom fellow Mumsnet users’ posthumous in memoriam comments formed an innovative kind of obituary: a cellular tissue of text that democratised the death process and made end-of-life writing a collaborative act.

Joanna Zylinska’s keynote speech, entitled ‘Creative Computers, Art Robots and AI Dreams,’ drew together the prevalent themes of the day, excavating the myth of the robot to determine that humans, especially the great artists amongst us, have always been technological. Zylinska used various art-AI projects, including Taryn Southern’s AI-generated music and Microsoft’s The Next Rembrandt project (in which AI created an imitation Rembrandt painting), in order to interrogate the value of AI artistic production, delineate how our senses construct the world we inhabit, and ask what it means that seeing, for example, no longer requires a human looker. The result was a critique of AI as that which exponentially amplifies our desires and therefore marketability, as well as an optimism for the possible joys of robotic and AI art, described by Zylinska as ludic creation.

Day two began with a panel on epistemology. Emma Stamm of Virginia Tech presented a paper on the renaissance of psychedelic science and the implications this field could have for AI. Stamm described the importance of qualitative over quantitative methods of research, arguing that qualitative research into psychedelic drugs problematises the positivist and generalising principles of machine learning as a basis for AI. Juljan Krause from the University of Southampton followed with his thoughts on representations of quantum computing in popular science discourse. Krause offered an interesting overview of how the emerging technology of quantum computing functions differently from present modes of computation. He then explored media representations of quantum computation, which portray an ongoing quantum computational arms race (led by China).

The penultimate panel revolved around the theme of aesthetics. In his paper on art and artificial intelligence Michael Haworth reconfigured the relationship between human and machine as an equal coupling, a functional interdependence manifest in the example of AARON, the painting and drawing robot. Haworth relayed how the creativity of AARON rests in the relation between the program and programmer, an interplay that performs a structural shift from the human as tool user to the human as engineer and organiser. Following Haworth, Dominique Baron-Bonarjee presented a performative lecture during which she enacted Liquidity, an embodied, meditative practice that she designed to explore ideas of free time in an increasingly automated age. Baron-Bonarjee monitored the activity of her brain throughout the lecture using a MUSE headband which measured her brain waves.

Memory and time formed the focus of the final panel, which offered an eclectic range of thoughts on automation and redundancy, logistics, and inscribed narrative time. Kieran Brayford discussed the possible ramifications of mass technological unemployment and forced leisure time, coming to conclusions similar to Phelan’s regarding the limitations of automated agents, which are unable to explain significance. Eva-Maria Nyckel followed Brayford’s more general overview with a specific example of automated industry: Amazon’s anticipatory shipping method. Nyckel used a patent for the shipping method as the basis for unpacking the potentially huge impact it could have on the temporality of logistical processes. In a change of topic, Daniel Barrow of Birkbeck, University of London turned to contemporary experimental fiction and the technological, inhuman agency of Big Data, which finds its form in a static and autonomous narrative time.

To close the conference a number of notable Sussex-based academics and researchers came together for a round-table discussion. Taking part were Caroline Bassett, Peter Boxall, David M. Berry, Beatrice Fazi, Simon McGregor and Michael Jonik, each of whom was asked to choose one keyword that would explain their research. Boxall chose the word ‘artificiality’ as a recourse for exploring its supposed opposite: the human. He offered the idea of the Augustinian human as somewhere between beast and angel, a figure made legible according to the taxonomies that we map onto it. McGregor’s word was ‘alien’. As a cognitive scientist, McGregor explained that the world is full of minds that cannot be reduced to their materiality and are eternally separated by an unbridgeable void. Any attempt to understand these minds would aid efforts to predict their behaviour, but would also bring us further away from other humans. Other keywords included intelligence, contingency, mastery, and critical reason, with final thoughts settling on the infinitude of computation, the passive intellect of algorithmic infrastructure, and the potential for the world to continue in the absence of humans.

Over its two days, Intelligent Futures gathered a range of richly provocative critical interventions, and created spaces for stimulating discussion of Artificial Intelligence. It demonstrated the importance of further critical, interdisciplinary study of Artificial Intelligence, as it continues to inform, and to transform, the societies we live in.

Maisie Ridgway is a CHASE funded PhD student at the University of Sussex. Her research interests include ideas of pre-digital poetics from Joyce to the present, the viral vitality of language, and the points at which science and literature intersect and mutually inform each other.