Mending Dame Durrants’ Shoes

This week Louise Falcini gave us an update on the AHRC-funded project The Poor Law: Small Bills and Petty Finance 1700-1834.

The Old Poor Law in England and Wales, administered by the local parish, dispensed benefits to paupers, providing a uniquely comprehensive, pre-modern system of relief. The law remained in force until 1834, and each parish supplied the food, clothes, housing and medical care needed to keep the poor alive. This project will investigate the experiences of people across the social spectrum whose lives were touched by the Old Poor Law, whether as paupers or as poor-law employees or suppliers.

The project seeks to enrich our understanding of the many lives touched by the Old Poor Law. This means paupers, but it also means workhouse mistresses and other administrators, midwives, tailors, cobblers, butchers, bakers, and many others. Intricate everyday social and economic networks, about which we still know very little, sprang up around the Poor Law.

To fill these gaps to bursting, the project draws on a previously neglected class of sources: thousands upon thousands of slips of paper archived in Cumbria, Staffordshire and East Sussex, often tightly folded or rolled, of varying degrees of legibility, and all in the perplexing loops and waves of an eighteenth-century hand …

Overseer note

These Overseers’ vouchers – similar to receipts – record the supply of food, clothes, healthcare, and other goods and services. Glimpse by glimpse, cross-reference by cross-reference, these fine-grained fragments glom together, revealing ever larger and more refined images of forgotten lives. Who was working at which dates? How did procurement and price fluctuate? What scale of income was possible for the suppliers the parish employed? What goods were stocked? Who knew whom, and when? Who had what? What broke or wore out when? As well as the digital database itself, the project will generate a dictionary of partial biographies, collaboratively authored by professional academics and volunteer researchers.

Louise took us through the data capture tool used by volunteer researchers. A potentially intimidating fifty-nine fields subtend the user-friendly front-end. The tool is equipped with several useful features: it can be used remotely; the researcher can “pin” the content of a field so that it carries over from one record to the next; and the database automatically saves every iteration of each record. The controlled vocabulary is, hopefully, flexible enough to accommodate any anomalies. It’s also relatively easy to flag records for conservation assessment or transcription assistance, or to go back and edit records. Right now they’re working on implementing automated catalogue entry creation, drawing on the Calm archive management system.
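
(To make the pinning idea concrete, here is a tiny, purely hypothetical sketch of how such a feature might behave. I have no knowledge of how the project’s tool is actually built, and every field name and value below is invented.)

```python
# Hypothetical sketch of a "pin" feature: pinned fields carry their values
# forward from one voucher record to the next, so repeated details only
# need typing once. Field names and values are invented, not the project's.

class VoucherForm:
    def __init__(self):
        self.record = {}       # field values for the current record
        self.pinned = set()    # names of fields whose values persist

    def set_field(self, name, value, pin=False):
        self.record[name] = value
        if pin:
            self.pinned.add(name)

    def next_record(self):
        """Start a new record, carrying over only the pinned fields."""
        self.record = {name: value for name, value in self.record.items()
                       if name in self.pinned}

form = VoucherForm()
form.set_field("parish", "Example Parish", pin=True)
form.set_field("supplier", "J. Smith, cobbler")
form.next_record()
print(form.record)   # {'parish': 'Example Parish'} -- supplier must be re-entered
```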


Personally, one of the things I find exciting about the project is how it engages both with the history of work and with the future of work. Part of its core mission is to illuminate institutions of disciplinarity, entrepreneurship, and precarity in eighteenth- and early nineteenth-century England. At the same time the project also involves, at its heart, questions about how we work in the twenty-first century.

Just take that pinning function, which means that researchers can avoid re-transcribing the same text if it’s repeated over a series of records. It almost feels inadequate to frame this as a “useful feature,” with all those overtones of efficiency and productivity! I’m not one of those people who can really geek out over user experience design. But most of us can relate to the experience of sustained labour in slightly the wrong conditions or using slightly the wrong tools. Most of us intuit that the moments of waste woven into such labour can’t really be expressed just in economic terms. And I’m pretty sure the moments of frustration woven into such labour can’t be expressed in purely psychological terms either. Those moments might perhaps be articulated in the language of metaethics and aesthetics? – or perhaps they need their very own (as it were) controlled vocabulary. But whatever they are, I think they manifest more clearly in voluntary labour, where it is less easy to let out that resigned sigh and think, “Whatever, work sucks. Come on Friday.”

I don’t have any first-hand experience of working with this particular data capture tool. But from the outside, the design certainly appears broadly worker-centric. I think digital work interfaces, especially those inviting various kinds of voluntary labour, can be useful sites for thinking more widely about how to challenge a productivity-centric division of labour with a worker-centric design of labour. At the same time, I guess there are also distinctive dangers to doing that kind of thinking in that kind of context. I wouldn’t be surprised if the digital humanities’ love of innovation, however reflexive and critical it is, tempts us to downplay the importance of the minute particularity of every worker’s experience, and the ways in which working practices can be made more hospitable and responsive to that particularity. (Demos before demos, that’s my demand).

I asked Louise what she thought motivated the volunteer researchers. Not that I was surprised – if something is worth doing there are people willing to do it, given the opportunity! – but I wondered what drew these particular people to this particular work. In the case of these parishes, it helps that there are good existing sources into which the voucher data can be integrated, meaning that individual stories are coming to life especially rapidly and in especially rich resolution. Beyond this? Obviously, the motives were various. And obviously, once a research community is established, it can become a motivating energy in itself. But Louise also reckoned that curiosity about these histories – about themes of class, poor relief and the prehistory of welfare, social and economic justice, and of course about work – played a huge role in establishing it in the first place.

Blake wrote in Milton about “a moment in each Day that Satan cannot find / Nor can his Watch Fiends find it.” I bet there is a moment within every rote task that those Watch Fiends have definitely stuck there on purpose. It’s that ungainly, draining, inimitable moment that can swell with every iteration till it somehow comes to dominate the task’s entire temporality. It is politically commendable to insist that these moments persist in any task designed, fait accompli, from a distance by people who will never have to complete that task more than once or twice … no matter how noble or comradely their intentions. But even if we should be careful about any dogmatic redesign of labour, I think we should at least be exploring how to redesign the redesign of labour. Karl Marx wrote in his magnum opus The Wit and Wisdom of Karl Marx that, unlike some of his utopian contemporaries, he was not interested in writing recipes for the cook-shops of the future. In some translations, not recipes but receipts. It actually is definitely the future now. And some of us are hungry.

JLW


The Poor Law: Small Bills and Petty Finance 1700-1834 is an AHRC-funded project.

  • PI: Alannah Tomkins (Keele)
  • Co-I: Tim Hitchcock (Sussex)
  • Research Fellow: Louise Falcini (Sussex)
  • Research Associate: Peter Collinge (Keele)

Data Cleaning and Preparation

Earlier today Ben Jackson gave the first in this semester’s series of digital methods open workshops. Here are a few rough notes on what we covered. If you missed the workshop and want to try out some of it on your own, you can find the tasks here. For details of forthcoming workshops, go here. All workshops are free and open to everyone (but it helps if you register).

Ben started us off with a rapid hurtle through some of his recent and ongoing projects (slides), including his collaboration with Caroline Bassett exploring ways of analysing and visualising Philip K. Dick’s writing (counting electric sheep, baa charts), and work bringing to life the text data of the Old Bailey Online. (It’s a tremendously rich archive of nearly 200,000 trials heard at the Old Bailey between 1674 and 1913). Bringing to life, and also bringing to unlife: Ben uses a kind of estrangement effect to remind the observer of what the data isn’t telling us, populating his legal drama puppet show with a cast of spoopy skellingtons.

Ben Jackson puppet show

Most of the workshop was a free exploration of prompts and tools Ben pulled together. People basically tried out whatever they liked, while he glided from table to table rendering assistance.

Calibre is a free ebook manager that is also a bit of a Swiss army knife, and it just so happens one of its fold-out doohickeys is a very good ebook-to-plain-text converter. Ebook files (.AZW, .EPUB, .MOBI, etc.) are stuffed with all kinds of markup and metadata that usually need to be cleared away before you can do any analysis on the raw text itself. We also did something similar with another free tool, AntFileConverter, turning PDF into plain text. The lesson was that documents can be ornery and eccentric, and different converter tools will work differently and give rise to different glitches: there is “no single converter that will just magically work on every document.”
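
(For the curious, here is roughly what an ebook-to-text converter has to do under the hood, sketched in Python. An .epub is essentially a zip archive of XHTML files, so this just unzips one and strips out the markup; a real converter like Calibre handles reading order, footnotes and encodings far more carefully. The filename is invented.)

```python
# Rough sketch of ebook-to-plain-text conversion: unzip the .epub, strip the
# XHTML markup, keep only the visible text. Real tools do much more.
import zipfile
from html.parser import HTMLParser

class TextOnly(HTMLParser):
    """Collects text content and ignores all tags and attributes."""
    def __init__(self):
        super().__init__()
        self.chunks = []
    def handle_data(self, data):
        self.chunks.append(data)

def epub_to_text(path):
    parser = TextOnly()
    with zipfile.ZipFile(path) as book:
        for name in book.namelist():
            if name.endswith((".xhtml", ".html", ".htm")):
                parser.feed(book.read(name).decode("utf-8", errors="ignore"))
    return "\n".join(parser.chunks)

print(epub_to_text("my_novel.epub")[:500])   # hypothetical filename
```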

AntFileConverter is part of a family of tools. We also checked out TagAnt and AntConc. I feel like I only scratched the surface of these. TagAnt creates a copy of a text file with all the grammatical parts of speech tagged. So if you input something like “We waited for ages at Clapham Junction, with the guard complaining about people blocking the doors” you get something like “We_PP waited_VVD for_IN ages_NNS at_IN Clapham_NP Junction_NP ,_, with_IN the_DT guard_NN complaining_VVG about_IN people_NNS blocking_VVG the_DT doors_NNS ._SENT” as output. PP is a personal pronoun, VVD is a past-tense verb, IN is a preposition or subordinating conjunction, and so on. By itself this just seems to be an extremely pedantic form of vandalism. It does let you fairly easily find out if, for example, an author just loves adverbs. And tagging parts of speech could be the first step toward more interesting manipulations, for creative purposes (shuffle all the adverbs) and/or analytic purposes (genre analysis or authorship attribution).
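
(The tags above look like TreeTagger-style tags; if you would rather do the same thing in a script, here is a rough equivalent in Python using NLTK, which uses the Penn Treebank tagset instead: PRP, VBD, NNS and so on.)

```python
# Part-of-speech tagging with NLTK (a sketch; the tagset differs from TagAnt's).
import nltk
nltk.download("punkt", quiet=True)                        # resource names can
nltk.download("averaged_perceptron_tagger", quiet=True)   # vary by NLTK version

sentence = ("We waited for ages at Clapham Junction, with the guard "
            "complaining about people blocking the doors")
tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
print(" ".join(f"{word}_{tag}" for word, tag in tagged))
# e.g. We_PRP waited_VBD for_IN ages_NNS at_IN Clapham_NNP Junction_NNP ...

# Once tagged, "does this author just love adverbs?" is a one-liner:
adverb_share = sum(tag.startswith("RB") for _, tag in tagged) / len(tagged)
print(f"{adverb_share:.1%} of tokens are adverbs")
```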

AntConc allows you to create concordances. A concordance is (more or less) an alphabetical list of key terms in a text, each one nestled in a fragment of its original context. So it’s a useful way to browse an unreadably large corpus based on some particular word (and so, to some extent, some particular theme) that interests you. Sure, Augustine had stuff to say about sin and grace, but what did he think about, I don’t know, fingers?

Fingers
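
(A concordance is also easy enough to roll yourself. Here is a bare-bones keyword-in-context sketch in Python; AntConc does the same job with vastly more options, and the filename here is made up.)

```python
# Minimal keyword-in-context (KWIC) concordance: every hit for a term,
# padded with a little of its surrounding text.
import re

def concordance(text, term, width=40):
    lines = []
    for match in re.finditer(re.escape(term), text, flags=re.IGNORECASE):
        left = text[max(0, match.start() - width):match.start()].rjust(width)
        right = text[match.end():match.end() + width]
        lines.append(f"{left}  {match.group(0)}  {right}")
    return lines

with open("confessions.txt", encoding="utf-8") as f:   # hypothetical file
    text = f.read().replace("\n", " ")
for line in concordance(text, "finger"):
    print(line)
```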

So a concordance helps you to find sections you might want to read more thoroughly. But I guess it doesn’t just have to be used like that — like a kind of map, or a very comprehensive index — it could also be read in its own right, and that reading could constitute a legitimate way of encountering and gaining knowledge of the underlying text.

How might, for instance, reading every appearance of the word “light” constitute its own way of knowing how the term “light” is working within a text? Are such readings reliably productive of knowledge? Or is it more like you might get lucky and stumble on something intelligible, like how a particular word is being tugged in distinct, divergent directions by two different discourses it’s implicated in?

How do these tools actually work? Well, going by the name and the logo, a really fast, clever ant just does it for you. Thanks, ant!

AntConc screenshot


Voyant Tools is a web-based reading and analysis environment for digital texts. What does that mean in practice? When you feed it your text file, a bright little dashboard pops up with five resizable areas. Each one of these contains a tool, and you can swap different tools in and out. I’d guess there are about fifty or so tools, although I’m not sure how distinct they all are really.

Voyant Tools screenshot 1

At least one tool was very familiar: “Cirrus” in the top left corner makes a word cloud of the text you’ve inputted, with the most frequent words appearing the largest. Very common words like “a” and “the” are filtered out (in the lingo, they are “stopwords”). The bottom right tool, “Contexts,” was also pretty familiar, since it seems to be a concordance, like we’d just been doing in AntConc. “Summary” and “Trends” were pretty self-explanatory. “TermsBerry” required a bit more poking and prodding. It clusters the more frequent words near the middle, the rarer words round the edges. When you hover your mouse pointer over a word, some of the other drupelets light up to show you what other words tend to appear nearby. You can mess with the thresholds and decide exactly how close counts as “nearby.”
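
(The counting behind a word cloud like Cirrus is simple enough to sketch in a few lines of Python: tokenise, drop the stopwords, tally the rest. The stopword list below is a tiny stand-in for the much longer one Voyant actually uses, and the filename is invented.)

```python
# Word frequencies with a stopword filter -- the arithmetic behind "Cirrus".
import re
from collections import Counter

STOPWORDS = {"a", "an", "the", "and", "or", "of", "to", "in", "it", "is", "was"}

def top_terms(text, n=20):
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in STOPWORDS).most_common(n)

with open("my_text.txt", encoding="utf-8") as f:   # hypothetical file
    for word, count in top_terms(f.read()):
        print(f"{word:>15}  {count}")
```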

The “Topics” tool looks interesting. It starts with random seeds, builds a distinct word cluster around each seed based on co-occurrence, and then tries to work out how these clusters are distributed throughout the text. Each word cluster (or “topic”) technically contains all the words in the text, but each one is named after the top ten terms in the cluster. A few of these seem knitted together by some strong affect (“bed i’ve past lay depression writing chore couple suffering usually”) or a kind of prosody or soundscape (“it’s daily hope rope dropped round drain okay bucket bowls”). Others feel tantalisingly not-quite-arbitrary, resonant with linkages in the same way a surrealist painting is (“bike asda hard ago tried open bag surprisingly guy beard”). But I’m not sure how far I trust my instincts about these artefacts, and I definitely don’t yet know how they might be used to deepen my knowledge of a text, or how they relate to the various notions you might invoke in a close reading (theme, conceit, discourse, semantic field, layer, thread, note, tone, mood, preoccupation, etc.).
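
(Voyant doesn’t advertise exactly what is under the bonnet here, but “Topics” behaves like latent Dirichlet allocation. For comparison, here is what the textbook version looks like with scikit-learn, splitting a text into paragraph-sized chunks; the filename is invented.)

```python
# Topic modelling with LDA (scikit-learn) -- a sketch for comparison with
# Voyant's "Topics" tool; I don't know which algorithm Voyant itself uses.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

with open("my_text.txt", encoding="utf-8") as f:   # hypothetical file
    docs = [chunk for chunk in f.read().split("\n\n") if chunk.strip()]

vectoriser = CountVectorizer(stop_words="english")
counts = vectoriser.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=5, random_state=0)
doc_topics = lda.fit_transform(counts)   # how each chunk mixes the five topics

# Name each topic by its top ten terms, as Voyant does.
terms = vectoriser.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-10:][::-1]]
    print(f"topic {i}: {' '.join(top)}")
```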

The various tools on your Voyant dashboard also seemed to be linked, although I didn’t get round to fully figuring that out. Certainly, whenever I clicked on a word in the “Reader” tool, the other displays would change. Oh: and Voyant Tools seems to be pretty fussy, and didn’t want to run on some people’s laptops. I didn’t have any trouble though.

I got a bit sucked into trying to work out what the “Knot” tool does — it’s this strange rainbow claw waving at me — and didn’t spend much time on the last exercise, which was about regular expressions (or regex). Basically, these are conventions which let you do very fancy and complicated find-and-replace routines. You can search for something like a[a-z], which will match aa, ab, ac, ad, and so on. Or (one of Ben’s examples) by replacing <[^>]+> with nothing, you can clear out all the XML tags in a text document. Plain old Word offers something similar (tick the “Use wildcards” box in the find-replace dialogue, though its pattern syntax isn’t quite full regex), but regular expressions proper work better in a text editor like Atom or Sublime Text.
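
(For what it’s worth, here are both of those patterns in Python; the sample text is invented.)

```python
# The workshop's two regex examples: a[a-z] matches "a" followed by any
# lowercase letter; <[^>]+> matches anything shaped like an XML/HTML tag.
import re

text = '<p n="1">Abaft the beam, the captain waited.</p>'

print(re.findall(r"a[a-z]", text.lower()))   # ['ab', 'af', 'am', 'ap', 'ai', 'ai']
print(re.sub(r"<[^>]+>", "", text))          # Abaft the beam, the captain waited.
```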

“The purpose of this part of the task is to teach you how to use them, not to teach you how to write them.” Phew! For me, regular expressions never seem to stick around very long in my memory, but it’s very useful to know in broad terms what they’re capable of. Every now and then a task pops up in the form of, “Oh my God, I have to go through the whole thing and change every …” and that’s my cue to start puzzling and Googling and figuring out whether it can be done with regular expressions. If it can, it will probably be quicker and more accurate, and it will definitely be more satisfying.

So: plenty explored, plenty more to explore. And I’m looking forward to the next workshop, Archival Historical Research with Tropy, on 19 February.

JLW