In August 2015, a minor furor broke out on the SIGCIS and Humanist discussion forums about the merits or otherwise of Tara McPherson’s essay “Why Are the Digital Humanities So White? or Thinking the Histories of Race and Computation.”
McPherson’s attempt to knit together “discussions of race (or other modes of difference) with our technological productions within the digital humanities (or in our studies of code)” depends on drawing parallels between the development of MULTICS (and then UNIX) and the more or less contemporaneous Civil Rights events of the 1960s. Her case rests heavily on the notions of modularity and encapsulation, which she presents as something akin to code apartheid. McPherson is careful to note, “I am not arguing that the programmers creating UNIX at Bell Labs and in Berkeley were consciously encoding new modes of racism and racial understanding into digital systems.” She states her purpose as showing “the ways in which the organization of information and capital in the 1960s powerfully responds—across many registers—to the struggles for racial justice and democracy that so categorized the United States at the time.” In pursuit of this goal, McPherson sketches two historical fragments from the 1960s: the first is a potted history of the development of UNIX, “well known to code junkies and computer geeks,” while the second, familiar to “scholars of culture, of gender, and of race like the members of the ASA,” concentrates on the “struggles over racial justice, [and] antiwar activism” going on at the same time.
McPherson’s suggestion is that, rather than being viewed as parallel but independent, these fragments should be seen as “deeply interdependent.” This would be a much more interesting suggestion were it accompanied by an evidence-based argument giving grounds for believing the developers of UNIX were in any way following a racist agenda, or were demonstrably responding to racist notions. McPherson’s case seems to rest entirely on the notion that because UNIX was developed in a time and place where racism was foregrounded, features of the operating system must reflect that wider social context. While not completely uncontentious, it is unremarkable to assert that all things may be thought of as interdependent. The interest lies in showing that the degree of ‘influence’ is more than marginal, and this requires evidence of a stronger kind than McPherson’s essay offers.
Given the intentionally provocative and polemical character of the original essay, together with the privileged status she gives to explanations that attribute covert or unconscious racism to the designers of MULTICS/UNIX, it is hardly surprising that some contributors to the forum conversations expressed themselves in somewhat intemperate tones.
I do not find McPherson’s ‘post hoc ergo propter hoc’ position persuasive. Her essay is underdeveloped, and fails to consider many other explanations for the single technical feature of UNIX onto which she latches. As it stands, she offers no more reason to believe UNIX is racist than that it is sexist, and while either or both of these claims may have merit, she does nowhere near enough to convince, but more than enough to provoke. She seems content to note ‘parallels,’ some of which appear contrived, without demonstrating linkage or causal connection. One of the issues is certainly that different disciplines see the world in different ways, and this extends to notions of what counts as an argument, or a ‘proof.’
The heated nature of the discussion set me thinking about the extent to which work in the ‘Digital Humanities’, which should be marked by interdisciplinarity and collegiality, is so often characterized by groups separated by a common cause. My own experience of working with people from different disciplines, as well as my personal interdisciplinary journey, has been almost entirely productive, and I have learned much from colleagues whose perspectives and intellectual directions of travel stand in marked contrast to my own.
By way of example, some years ago I was lucky enough to spend time exploring preservation issues with the contemporary artist Michael Takeo Magruder, and several of the examples that follow are drawn from his corpus of work.
A great deal of progress has been made over the last decade in the field of digital preservation, and to all intents and purposes we may regard the problem of simple bit storage as solved. However, the world does not stand still, and new problems arise while the old ones are being addressed. Simple digital objects such as ASCII files represent an ever-decreasing proportion of the overall problem space, and attention is still required in the domain of “complex digital object” preservation.
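To make the “solved” claim concrete: the workhorse of bit-level preservation is the fixity check, in which a cryptographic digest is recorded when an object is ingested and periodically re-verified thereafter. The following is a minimal sketch in Python; the file name is purely illustrative.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_fixity(path: Path, recorded_digest: str) -> bool:
    """Re-check a stored object against the digest recorded at ingest.
    A mismatch means the bits have silently changed and the object
    must be restored from a replica."""
    return sha256_of(path) == recorded_digest

# Illustrative usage: record at ingest, re-verify on a schedule.
#   digest = sha256_of(Path("artwork_master.wrl"))
#   ...months later...
#   assert verify_fixity(Path("artwork_master.wrl"), digest)
```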
As it happens, many of the problems digital preservation is currently facing are well illustrated by looking at the difficulties involved in preserving contemporary artworks having a digital component. Artists have always been early adopters of new technologies, and have taken inspiration from the opportunities digital computing has provided for novel forms of artistic expression. The result is that contemporary art now represents one of the ‘bleeding edges’ of digital preservation.
It is useful to begin by noting something that is obvious to artists, but perhaps not so much to technologists: even when fully digital, artworks are not ‘only’ digital. They are not to be understood merely in terms of hardware and software, and any preservation approach that does not acknowledge this is doomed to failure. In any attempt to preserve and make accessible the present for the future, it is vital to comprehend the full context of that which we are attempting to bequeath to future generations. This is often expressed in preservation circles in terms of preserving ‘significant’ properties for particular ‘stakeholder’ communities.
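In practice, this often amounts to recording, alongside the bits, a structured statement of which properties matter to which community. The record below is purely illustrative (the property lists and community names are invented for the example), but it shows the shape such a statement might take.

```python
# A purely illustrative 'significant properties' record, keyed to
# stakeholder communities; real preservation schemas are far richer.
significant_properties = {
    "work": "An example installation",
    "stakeholders": {
        "gallery_visitors": [
            "physical scale and presence of the piece",
            "sound and motion of its mechanical elements",
        ],
        "art_historians": [
            "relationship to its architectural and political context",
            "provenance of any live data it consumed",
        ],
        "conservators": [
            "hardware and software dependencies",
            "installation layout and lighting",
        ],
    },
}
```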
Of course, many contemporary artworks are not fully digital in the first place, but are combined with analogue elements. For example, Jose Carlos Martinat Mendoza’s sculpture, “Brutalismo,” is a scale model of the Peruvian military headquarters (itself an example of ‘brutalist’ architecture). The artwork incorporates a computer that searches the Web for references to ‘Brutalismo/Brutalism’. The search hits, which capture a variety of examples of political and architectural brutalism, are printed on small pieces of paper that are spat out onto the gallery floor. “Brutalismo” is a ‘hybrid’ object, drawing its expressive power, and its artistic value, from a combination of digital and analogue elements, neither of which can fully be understood in isolation. Not only do we need to be sensitive to the object itself when devising a preservation strategy, but attention must also be paid to how it was situated in the gallery. Placing the artwork against cheery primary colors conveys something different than does a muted grey environment. A preservation approach that preserves only the digital elements of “Brutalismo” (even if we extend this to include the computer hardware) simply misses the point. Similarly, the artwork is more than its physical embodiment, however arresting that may be. An additional set of difficulties arises from the incorporation of live data.
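The generative core of a piece like “Brutalismo” can be caricatured in a few lines of code. Everything here is hypothetical (the search endpoint and the printer stub are invented stand-ins, not the artist’s actual code); the point is only that the artwork’s output depends on a live, unrepeatable stream.

```python
import time
import requests

SEARCH_URL = "https://example.org/search"  # hypothetical stand-in for a web search API

def fetch_hits(term: str) -> list[str]:
    """Fetch current web snippets mentioning the term (illustrative only)."""
    response = requests.get(SEARCH_URL, params={"q": term}, timeout=10)
    response.raise_for_status()
    return [hit["snippet"] for hit in response.json()["results"]]

def print_to_slip(text: str) -> None:
    """Stand-in for the gallery printer driver: here, just stdout."""
    print(text)

def run() -> None:
    # The gallery-floor behavior: whatever the Web says about
    # 'brutalismo' right now is printed and discarded onto the floor.
    while True:
        for snippet in fetch_hits("brutalismo"):
            print_to_slip(snippet)
        time.sleep(60)
```

A record of the slips on the floor preserves one afternoon’s output; it does not preserve the behavior.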
Another striking example of the use of live data is Magruder’s “Data Plex” series of artworks, which use live data feeds from real-life scenarios to generate three-dimensional geometry and textures in real time, creating virtual realms that refract ever-changing, volatile forces in and upon the real world (see Figure 1). The technology that drives “Data Plex” is a combination of server-side Java and client-side Virtual Reality Modeling Language (VRML) and, in preservation terms, this makes it relatively easy to capture ‘screen grabs’ and to save the .wrl geometry files that might be thought of as constituting the complete environment.
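Given that architecture, the ‘snapshot’ strategy is easy to sketch: periodically copy the current geometry file into an archive, stamped and checksummed so later fixity checks are possible. The paths and cadence below are illustrative assumptions, not details of Magruder’s actual setup.

```python
import hashlib
import shutil
import time
from datetime import datetime, timezone
from pathlib import Path

LIVE_WRL = Path("dataplex/world.wrl")   # illustrative path to the generated geometry
ARCHIVE = Path("archive/dataplex")

def snapshot() -> Path:
    """Copy the current geometry into the archive with a UTC timestamp."""
    ARCHIVE.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    dest = ARCHIVE / f"world-{stamp}.wrl"
    shutil.copy2(LIVE_WRL, dest)
    # Record a digest alongside the copy so bit rot can be detected later.
    dest.with_suffix(".sha256").write_text(
        hashlib.sha256(dest.read_bytes()).hexdigest()
    )
    return dest

if __name__ == "__main__":
    while True:
        snapshot()
        time.sleep(3600)  # hourly; the right cadence is a curatorial decision
```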
However, it is far from clear that a series of snapshots is capable of fully capturing the artistic characteristics of the work. “Data Plex (economy)” was conceived in the immediate aftermath of the 2008 global financial crash, and was intended by the artist to allow audiences to interact with a live ‘embodiment’ of the financial market, and to witness its volatile fluctuations in real time. It might be possible to capture this unique moment in time by storing a record of the fluctuations in the Dow Jones Industrial Average that ‘drove’ the artwork in 2009, and preserving this information along with the rest. Regrettably, this approach is not entirely unproblematic, as the original installation interacted with the live Web, not with a fixed dataset, and the artwork would require reprogramming to do otherwise. In taking this approach to preservation, we would also be privileging one particular period in the ‘life’ of the artwork rather than capturing a history, some of which is still to be written.
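The reprogramming point can be made precise. Because the installation talks directly to the live Web, there is no seam at which a recorded dataset can be substituted for the live one. Had the data source been hidden behind an interface like the sketch below (an invented design, not the actual “Data Plex” code), replay would be a configuration change rather than surgery.

```python
import json
from pathlib import Path
from typing import Iterator, Protocol

def fetch_djia_quote() -> float:
    """Hypothetical stand-in for a live Dow Jones lookup."""
    raise NotImplementedError("replace with a real market-data call")

class MarketFeed(Protocol):
    def ticks(self) -> Iterator[float]: ...

class LiveFeed:
    """Poll the live market, as the 2009 installation effectively did."""
    def ticks(self) -> Iterator[float]:
        while True:
            yield fetch_djia_quote()

class RecordingFeed:
    """Wrap any feed, logging every tick so this period can be replayed."""
    def __init__(self, inner: MarketFeed, log: Path):
        self.inner, self.log = inner, log
    def ticks(self) -> Iterator[float]:
        with self.log.open("a") as f:
            for value in self.inner.ticks():
                f.write(json.dumps(value) + "\n")
                yield value

class ReplayFeed:
    """Drive the artwork from a preserved log instead of the live Web."""
    def __init__(self, log: Path):
        self.log = log
    def ticks(self) -> Iterator[float]:
        for line in self.log.read_text().splitlines():
            yield json.loads(line)
```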
Another of Magruder’s artworks, “Data Flower (Prototype I),” creates an endless cycle of ephemeral synthetic flowers using a combination of VRML to define the basic flower geometry, attenuated by (pseudo)randomized parameters that produce subtle mutations within the petal formations and ensure each flower develops differently (see Figure 2). Surface textures are then produced by sampling the 100 most recently uploaded Flickr photographs carrying the tag ‘flower’. These are stored in a temporary database, from which an image is (pseudo)randomly selected and applied across the developing floral geometry. The final on-screen appearance is therefore the result of a combination of algorithmic calculation, (pseudo)randomness, and an entirely unpredictable and ever-changing set of external images. As with “Brutalismo,” attempting to reconstruct even a fraction of “Data Flower’s” timeline is difficult because not all of the information is stored internally: the internal database is overwritten every day, and the source images may no longer be available externally at the time preservation is attempted.
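It is worth separating the two sources of variation here, because they have very different preservation profiles: the (pseudo)random mutations can in principle be replayed exactly if the seed is preserved, whereas the Flickr sample cannot be regenerated once the temporary database is overwritten. A sketch (the parameter names and ranges are invented for the example):

```python
import random

def petal_parameters(seed: int) -> dict:
    """Reproducible: the same seed always yields the same mutation.
    Parameter names and ranges are invented for this illustration."""
    rng = random.Random(seed)
    return {
        "petal_count": rng.randint(5, 13),
        "curl": rng.uniform(0.0, 1.0),
        "hue_shift": rng.uniform(-0.2, 0.2),
    }

# Preserving the seed preserves the geometry exactly...
assert petal_parameters(42) == petal_parameters(42)

# ...but the texture depends on whichever 100 'flower' images Flickr
# held at that moment. Once the temporary database is overwritten,
# that sample is gone; no seed can bring it back.
def sample_texture(todays_flickr_images: list[bytes], seed: int) -> bytes:
    rng = random.Random(seed)
    return rng.choice(todays_flickr_images)
```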
The situation becomes further complicated when some element of interactivity is incorporated, for example, when the behavior of an artwork is affected by the number of people who are within a gallery at a given moment, or the exact time of day when a viewer passes the artwork. In cases such as these, it is simply not feasible to preserve faithfully the actual behavior of an artwork for future examination.
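A toy illustration of why such interactivity defeats snapshotting: when an artwork’s state is a function of continuously varying, unlogged inputs, the trajectory of states can never be reconstructed after the fact. Everything below is invented for illustration.

```python
from datetime import datetime

def frame_brightness(visitor_count: int, now: datetime) -> float:
    """Invented example: the output depends on live inputs no archive holds.
    Unless every visitor count and timestamp were logged, the work's
    actual history of states is unrecoverable."""
    hour_factor = now.hour / 24.0
    return min(1.0, 0.2 + 0.1 * visitor_count + 0.5 * hour_factor)
```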
These, and similar, considerations should make us think very carefully about what exactly it is we are trying to preserve, and for which stakeholder community. It is not always easy to be sure what constitutes the object of preservation. While preserving a series of ‘snapshots’ may be perfectly acceptable to some stakeholder groups, it is unlikely to satisfy others.
Interactivity always presents the preservationist or conservator with difficulties, and may be considered a ‘problem case’ in its own right, or an additional complication when combined with other problems such as object hybridism.
The widespread and increasing use of social media adds yet another dimension. Platforms like Second Life offer interesting creative possibilities for artists to explore. In addition to capturing something of the current Zeitgeist, social media platforms permit artists to collaborate with each other and with users in ways that are otherwise not open to them. Of course, collaborative working can create problems in understanding clearly who “owns” an artwork, and how to acknowledge the contributions of everyone who played a part in the creative process. In this context, it is well worth reading Jerry McDonough et al.’s very instructive report on the Preserving Virtual Worlds project (http://hdl.handle.net/2142/17097), which draws attention, in Chapter 7, to the difficulties that arise in trying to preserve an ‘Island’ in Second Life. Even though the team had at their disposal tools that should have enabled them to achieve more or less complete preservation, they were, primarily as a result of ‘rights’ issues, only able to manage “extremely partial and static representations of the original.”
Working with contemporary artists has made clear to me that any inclination to believe that preserving artworks is primarily a matter of developing an appropriate set of software tools and workflows is quite mistaken. From the artistic perspective, the hardware and software aspects of artworks are clearly important; indeed, without them the works would not exist. However, the essence of the object of preservation lies somewhere beyond these components, and calls into question any technologically deterministic approach to preservation. This is a lesson well worth extending into other areas of preservation.
I have not hinted at how the challenges I have highlighted might be partially or fully addressed, but will return to this topic in a future column.
Figures
Figure 1. Michael Takeo Magruder’s “Data Plex” series of artworks utilizes live data feeds from real-life scenarios to generate three-dimensional geometry and textures in real time; http://www.takeo.org/nspace/ns031/.
Figure 2. Michael Takeo Magruder’s “Data Flower (Prototype I)” creates an endless cycle of ephemeral synthetic flowers; http://www.takeo.org/nspace/ns034/.