The Art of Livetweeting

Let me begin by stating that while I have practiced this art, I certainly haven’t perfected it. However, after serving as an official livetweeter for the 2017 Annual Meeting of the Organization of American Historians, I thought I might offer some thoughts on how to livetweet – some mine, others drawn from fellow livetweeters.

(Nota Bene: I am aware of debates over the practice of livetweeting. This post assumes that you are attending a conference where livetweeting is acceptable and even encouraged.)

Give them the hook, not the worm

What’s the point of livetweeting? For me, it’s partly to give people who can’t attend the conference an idea of what’s happening, and partly to encourage people who see the tweets to explore the work of a panel or panelist if they’re intrigued.

The point is not to serve as a court reporter, offering a blow-by-blow accounting of every word said.

Sometimes I’ll tweet choice tidbits – a really good sentence or a particularly entertaining example. I try to distill the argument (this is easy when the presenter says “this paper will argue…”) but if I can’t quite capture it in 140 characters, I’ll say the paper is “discussing” the general topic and leave it at that.

To quote Blues Traveler, “the hook brings you back” – I see my tweets as a hook bringing viewers back to that scholar at a future date – to their blog, articles, op-eds, books, or twitter account. My duty to the scholar about whose paper I’m tweeting is to give the twitter audience the hook. The worm – that is, the whole work – comes later.

Tag it

Most conferences these days have a hashtag, so make sure you’re using the official one (it’s often in the conference program). In addition, some conferences may have hashtags for sessions; using these helps people following from outside the room (and outside the conference) keep track of conversations. This year’s NCPH program clearly explained hashtags for the conference, sessions, workshops, and even the plenary and keynote.

Tagging both the conference and the session also makes it simpler if you or someone else wants to go back and storify the tweets for a particular session.

You can also tag the speakers by using their twitter handles, if known. Sometimes people will put their handle on an introductory slide, or it might be in the conference program. More on why you should do that in the next bit:

Credit as credit is due

When livetweeting, it is your responsibility to represent the argument of the speaker to the best of your ability and to make it clear when you’re quoting (easy: use quotation marks), summarizing (no quotation marks), or adding in your own thoughts and/or commentary.

The best way I’ve found to make the distinction clear is to lead with the speaker. Compare these two tweets:

In the first I indicate that Edward is speaking; in the second I clearly indicate that the tweet is a “personal takeaway” and thus my thoughts, not any of the panelists’. Even if you don’t know whom to credit (someone from the audience asking a question, say), you can tweet “Q” at the beginning to make it clearer who is speaking.

Thread your tweets

All credit to Lindsay Chervinsky for this, although I have since noticed other people doing it.

Threading your tweets means composing each subsequent tweet as a reply to the previous one. I mostly work in TweetDeck or the apps on my phone and tablet, which make this fairly straightforward, and the new twitter reply format means you don’t have to worry about deleting the “@yourname” at the beginning of every tweet.

Why do this? It allows someone to click on a single tweet and open up the entire series of your tweets relating to a session. Chervinsky does this with the entire session (click through for an example) but I prefer to do it per paper, relying on session hashtags to do the rest of the work.

Copy and Paste

All of this is well and good, you may be thinking, but how do I capture key ideas and summarize them, with the right tags and @s when the conversation is going on around me, sometimes very quickly? Liz Covart, historian, podcaster and livetweeter extraordinaire, shared this tip with me: she takes notes in a separate application (Text Edit, Evernote, whatever), and then copies over the key ideas to her twitter client. One of the advantages of this is that it allows you to capture the big ideas without worrying about a character limit, and distill them down to the 120-odd characters you have to work with later.


Hopefully these ideas have helped demystify the process of livetweeting for those who might be interested. As always, respect the wishes of your conference and panelists regarding whether or not to tweet. Those with more experience than me: what would you add that I missed?

Community History and Public Memory

Yesterday afternoon I stood outside my parish church (ECUSA) with other residents of our town to witness the dedication of a plaque. The text of the plaque reads:

With gratitude and repentance we honor the Enslaved People whose skills and labor helped build the Falls Church.

It sits next to the plaque honoring the architect, James Wren, an equal distance from the church. The effort to commemorate the unnamed people who built the church (for which the town is named) started with members of the Tinner Hill Heritage Foundation over a decade ago, and the celebration yesterday included members of the Foundation, the parish, other local churches, local reenactors, and city residents. The dedication even made the local news.

Within the city of Falls Church, the plaque is part of efforts to increase the visibility of the African American community. The new complex on Tinner Hill preserved the stone arch commemorating the first rural branch of the NAACP and inscribed a timeline, of sorts, into the sidewalk along Lee Highway leading up to the arch. Given that Falls Church city became an independent municipality in the late 1940s largely to gain control of the schools, these efforts to create new sites of public memory are particularly important.

There are also currents within the Episcopal Church to recognize its involvement in the practice of racial slavery. The Diocese of Rhode Island, rather than closing a cathedral with declining attendance, has changed the space to a Center for Reconciliation, specifically acknowledging and exploring the involvement of the Diocese in the slave trade.

With much of the conversation about public monuments in the last few years dominated by questions of how to deal with memorials we may no longer celebrate, it is interesting to take a moment to think about how new memorials and monuments are created and how they, too, can shape public conversations about (community) history.

Seeking a system

I’ve been trying to find a way of organizing my manuscript sources for the dissertation that is readable, makes sense, and can be exported to do interesting things if the need arises.

At present, most of my sources are in a spreadsheet that I use to record documents as I photograph them in the archive, with columns for year, month, day, sender, receiver, gist, and archival information; I add the transcription later. I had set up what is essentially a vertical spreadsheet to capture essential data in addition to a large text field for the transcription, but that was in a rather old version of FileMaker Pro and only had enough records to serve as a test case.

(I used to be very good at building relational databases in FileMaker, making use of the special features and functions, etc. I have a fondness for it because as a kid I would play around with the database option in ClarisWorks, which FileMaker grew out of.)

A new version of FileMaker, even at the individual educator price point, is well above what I’m willing to pay, and it feels a bit odd to use FileMaker after years of hanging out with people who write their own code. Thus, I have been considering my options. Ideally I’d like a relational database with tables for individuals, letters, locations, and repositories (the last is negotiable), and the ability to export data as a CSV so I can plug the information into something like R to play around with it. If nothing else, I would like a large text field with formatting options for entering and reading transcriptions.

Airtable, which is hosted online and thus would be accessible from any computer (and backed up externally), does not have a good way to view large text blocks, as far as I can tell, so it’s out of the running.

I already use Zotero to keep track of secondary sources, and it is apparently possible to get metadata out of it and into a CSV. However, the notes field has always felt small to me, and there is not (currently) a place to track the locations of the author and recipient of a letter except within the transcription itself. It’s not ideal but is still an option.

Another option is Omeka S, which is still in beta. It has the ability to create resource templates with whatever elements you want from various existing vocabularies, has media storage and a mapping plugin, and it is possible to import from CSVs and output via an API. The main sticking points are coming up with a standardized “title” for the documents and working out a specific mapping of my current metadata fields to a mix of Dublin Core and Bibliographic Ontology elements. Of the options that don’t require me (re)learning how to code, Omeka S is the biggest contender after just continuing to use a spreadsheet.
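As a rough illustration of what such a crosswalk might look like: the left-hand names below are my spreadsheet columns, while the right-hand terms are plausible-sounding Dublin Core and BIBO choices of my own, not a vetted Omeka S mapping.

```python
# Illustrative crosswalk from spreadsheet columns to vocabulary terms.
# The vocabulary choices are guesses for the sake of example, not a
# checked mapping against the Dublin Core or BIBO specifications.
field_mapping = {
    "year/month/day": "dcterms:date",   # would be combined into one ISO date
    "sender": "dcterms:creator",
    "receiver": "bibo:recipient",
    "gist": "dcterms:abstract",
    "archival information": "dcterms:source",
    "transcription": "bibo:content",
}

for column, term in field_mapping.items():
    print(f"{column:22} -> {term}")
```

Even a toy table like this makes the “standardized title” problem visible: none of my existing columns maps cleanly onto dcterms:title.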

There is a forthcoming system that looks promising – Tropy – but it’s not yet in beta, and I need something sooner rather than later (maybe for the next project?).

I could, of course, try to remember my lessons in PHP and MySQL from the autumn of 2013, when I built a small but functional database with a sample set of data. Or I could try to learn Ruby or Django in order to fork and modify Project Quincy to suit my own needs. I’m not entirely sure about the cost/benefit of that in terms of time.

At this point, I’m sticking to my spreadsheet for metadata (currently using Google Sheets, although I know how fickle Google can be) and keeping transcriptions in a separate location. Any suggestions of alternate (well-documented) open source database solutions are welcome.