Thinking (and Reading) about Digital Humanities I

While the first two sections of my second reading list focus on archival theory, the last group of readings focuses on digital humanities. I added these readings because DH is a field I’ve gotten really interested in since taking a seminar on it during my first semester at William and Mary. Like the archival theory list, this section consists of two parts: one focusing on the appeal of digital humanities, and one addressing issues of representation and ethics, with a lot of overlap between them. There’s also a lot of overlap with the archival theory list, since many of the readings I selected focus on both digital archives and, more broadly, frameworks of power.

Today, I’d like to share my thoughts on this part of the list and its applicability to my own work.

Every chapter in this book was written collaboratively to emphasize the team-based methods of Digital Humanities.

As with the archival theory list, I started off with the works that focus on the allure of DH. For DH proponents, digital humanities isn’t just about putting humanities work on a computer; it’s a whole new approach to conducting scholarship through three interconnected qualities: collaboration, big data, and a shift away from text as the primary means of creating and expressing knowledge. In works like Digital Humanities, Anne Burdick and her co-authors juxtapose the traditional approach of solitary scholarship with a team-based method (the book itself was written collaboratively to demonstrate this point). To put it simply, DH encourages collaboration because computer-based scholarship is too expansive a field for any one person to completely master. The chances of finding someone who can code, create visualizations, and do humanities-based scholarship such as close textual readings are pretty slim, so more often than not you team up with other scholars with different skill sets. The potential of this cross-pollination, Burdick and others argue, is that you have scholarly dialogues occurring across the humanities/science divide, which opens up new research questions and project possibilities. Given that academia has long been accused of siloing itself off by department, such interdisciplinary collaboration has been regarded as a good thing.

This 3D printed abstract sculpture from historian David Staley represents the top 100 words used by the Florida Historical Quarterly over its 85-year run of back issues. The work interprets the words as a topographical map, with the height of the peaks representing frequency of usage. In addition to seeing overall word use, viewers can also see how the journal’s use of language has changed over time, reflecting changing ideas and concepts. You can read more about it here.

Another theme that comes through in this first part of the list is an emphasis on design. As Burdick, David Staley, and others have argued, written texts in the form of monographs and articles have been the primary means of creating and expressing knowledge in academia for centuries, a form that encourages solitary research and production. By privileging this form, other ways of making sense of the world, such as visual art, textiles, and other creative means, tend to get summarily dismissed as less serious than written texts and regarded simply as supplements to the printed word. Yet as digital projects in the form of graphs, spatial renderings of history, and other forms have demonstrated, visualizations are not just a means of illustrating textual knowledge, but a way of producing knowledge in its own right by exploring themes or ideas that text can’t render very well.

Related to the enthusiasm over visualization in these works is the excitement over big data, or the use of computers to process large amounts of information. As scholars such as Franco Moretti and Matthew Jockers argue in works such as Graphs, Maps, Trees and Macroanalysis, traditional humanities approaches that emphasize close readings of finite amounts of material (in their case, literature) result in a distorted understanding of a particular academic field. Since humans are only capable of reading a finite number of books, or looking at a finite number of paintings, we only see a small percentage of the total human output, resulting in an overemphasis on so-called masterpieces at the expense of all other works. What these authors argue for is adapting quantitative approaches to humanities scholarship through machine reading. Since computers can process far more text than we can, and much faster, we can write code that directs them to “read” large bodies of text and look for specific words or themes (for example, how often texts use articles such as “the”). In this way, we can better contextualize the so-called masterpieces we tend to highlight in our close readings within a broader literary landscape, and see how they fit in with the vast body of other lesser-known works.
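
To make this concrete, here’s a minimal sketch of that kind of machine reading in Python. It tallies word frequencies across a folder of plain-text files; the folder name “corpus” is a hypothetical stand-in for illustration, not anyone’s actual dataset or pipeline.

```python
from collections import Counter
from pathlib import Path
import re

def word_frequencies(corpus_dir):
    """Tally every word across all .txt files in corpus_dir."""
    counts = Counter()
    for path in Path(corpus_dir).glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore").lower()
        counts.update(re.findall(r"[a-z']+", text))
    return counts

# "corpus" is a hypothetical folder of plain-text novels.
counts = word_frequencies("corpus")
print(counts["the"])            # how often the corpus uses the article "the"
print(counts.most_common(100))  # the most frequent words overall
```

Even something this simple scales to thousands of novels at once, which is the whole point of reading at a distance rather than up close.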

Visualizations like this project on MoMA’s photography collection, created by Lev Manovich and the Cultural Analytics Lab, enable viewers to see large quantities of images and discern general trends about them. In this case, the photographs are sorted by tonal value.

I’ll admit, I found a lot of these readings exciting in their optimism, even if I know from my coursework and conference experiences that DH is, like every academic discipline, a flawed field. The years I spent in museums taught me that all projects are collaborative in one form or another, so seeing a field actively embrace teamwork with all its complexities and messiness is both exciting and a little daunting, especially for an introvert who sometimes finds networking overwhelming. I also found the readings on big data and visualizations exciting, particularly as someone coming from an art history background. One thing that has always irked me about the field is its emphasis on “greatest hits” at the expense of understanding works by lesser-known or less-skilled artists. While newer fields such as Visual Studies have attempted to address this issue by expanding the field of study beyond the so-called fine arts, art history remains a pretty conservative field in a lot of respects. As digital humanists such as Lev Manovich demonstrate in their visualizations, however, computer-based scholarship offers a way to process a large number of paintings or artworks and not only observe patterns or trends, but also underscore the diversity of human creativity.
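
For the curious, a tonal sort like the MoMA project pictured above can be approximated in a few lines. This is only a hedged sketch of the general idea, not the Cultural Analytics Lab’s actual method; it assumes the Pillow library is installed and that “photos” is a hypothetical folder of digitized photographs.

```python
from pathlib import Path
from PIL import Image, ImageStat  # Pillow, assumed installed

def mean_brightness(path):
    """Average grayscale value of an image, from 0 (black) to 255 (white)."""
    with Image.open(path) as img:
        return ImageStat.Stat(img.convert("L")).mean[0]

# "photos" is a hypothetical folder of digitized photographs.
scores = {p: mean_brightness(p) for p in Path("photos").glob("*.jpg")}

# Arrange dark-to-light, the axis the visualization sorts images along.
for path, value in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{value:6.1f}  {path.name}")
```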

I could potentially see such visualizations working well for my own research. As I mentioned in my CDHC talk, the logistical complexity of the Community Art Center Project makes it difficult to describe its operations through text, but mapping it digitally could offer a way to better discern its exhibition shipping patterns. Additionally, applying quantitative methods to the thousands of artworks shown through the program could offer insights into the program itself and the kind of work it selected for exhibition, whether the prominence of specific color palettes or preferences in subject matter. These kinds of distant readings, combined with a close reading of a specific exhibition or two, could be very useful.
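
As a thought experiment for the color palette idea, here’s a rough sketch of pulling a dominant palette from each digitized artwork using median-cut quantization. Everything here (the folder name “exhibition_scans”, the file format, a palette of five colors) is an illustrative assumption rather than actual Community Art Center Project data.

```python
from pathlib import Path
from PIL import Image  # Pillow, assumed installed

def dominant_colors(path, n=5):
    """Return the n most frequent (R, G, B) colors after reducing
    the image to an n-color palette via median-cut quantization."""
    with Image.open(path) as img:
        q = img.convert("RGB").quantize(colors=n)
    palette = q.getpalette()
    # getcolors() yields (pixel_count, palette_index) pairs; sort by count.
    counts = sorted(q.getcolors(), reverse=True)
    return [tuple(palette[i * 3 : i * 3 + 3]) for _, i in counts[:n]]

# "exhibition_scans" is a hypothetical folder of digitized artworks.
for p in sorted(Path("exhibition_scans").glob("*.jpg")):
    print(p.name, dominant_colors(p))
```

Aggregating these palettes across thousands of works would show whether the program favored particular color schemes, exactly the kind of distant reading described above.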

While these texts tend to paint digital humanities and its potential in a glowing, uncritical light, more recent texts have debated whether DH has actually revolutionized academia or simply replicated the biases of the university, and they offer their own suggestions for creating more inclusive scholarship. Big data and macroanalysis-based methods, in particular, have been questioned for their flattening effect and, in the case of contemporary texts or images from living people, for the ethics of using their works without permission or acknowledgment of their labor. The assault allegations against Moretti by graduate students also underscore the ongoing issue of ethics in scholarship.

We’ll take a look at some of these critiques next week.
