Eastwood on Appraisal

In Currents of Archival Thinking, Terry Eastwood delineates the history of archival appraisal in a similar fashion to how Blouin (1999) and Hohmann (2016) described the historical arc of the profession. Eastwood gives an overview of the very stark paradigm shift that occurred as traditional positivism and essentialism yielded to postmodernism. As Eastwood explains, over time the profession’s most respected figures gradually moved away from viewing history and truth as uniform and objective to understanding these concepts as being much more relative and destabilized in meaning.

Eastwood goes back further in time than Blouin or Hohmann by exploring nineteenth-century conceptions of archives. He demonstrates how records were couched in terms relating to natural law. Like Blouin, who noted that archivists have traditionally been neutral agents of nation-states complicit in perpetuating established powers, Eastwood shows that records were assumed to carry more authority the higher they sat on the hierarchical ladder. For example, organizational records were seen as more "authentic" than private papers, and public archives were deemed more important than private ones. There was, in short, a natural order to records that closely mimicked the hierarchical orders of society.

Eastwood also discusses the rampant growth of records and how this changed the practice of archival science. If the postwar era of Jenkinson's time was the first moment of fragmentation in documentary history, the next moment of major disruption, according to Eastwood, occurred with the advent of the welfare state. The first fragmentation resulted from heightened national-security efforts to work out how to optimally mobilize industrial resources and manpower, which created reams of records, usually in the form of memos and correspondence. The second fragmentation occurred as government intervened in nearly every area of human activity, creating paper trails for all citizens. Finally, this emphasis on human activity forced archivists to realize that all records are products of human activity, and that the use of records will change because human activity is mutable. This observation made it imperative for archivists to get hold of records practically from the moment of their creation in order to understand and describe their original purposes, because provenance itself changed meaning. Instead of having a fixed meaning as the result of some original and special dispensation, provenance became "mutable and multifaceted."

Eastwood demonstrates his allegiance to postmodernism by stating that archives are not sources of truth and that they have meaning only relative to the user or reader of the archives. In this way, archives represent "traces of thought, expression, and activity." Eastwood would have archivists give up any conception of records serving as evidence of reality (being of "evidentiary value") and instead focus on contextualizing the memories that are triggered when readers select and use the archives.


The Heart of Archivy and Social Memory

As an interested student, it is perhaps difficult to arrive at a study of archives when the current literature, written by some of the profession's most respected faculty, explicitly denies the authority of archival records. But many authors working from the postmodern perspective argue that such denial is necessary for becoming a good archivist. These authors also assert that, far from being nuggets of objective truth, archives are often sources of misinformation and intellectual deceit. As Francis X. Blouin, Jr. notes in Archivists, Mediation, and Constructs of Social Memory, archivists have for many generations been agents of the modern nation-state, complicit with dominant cultural and political aims. In more recent times, however – owing to the growth of the Annales school as a method of historical inquiry, as well as the emergence of counter-cultural thinking in the latter half of the twentieth century – academics have moved to study archives not as a place where study originates, but as an "object" of study.

In the modern era, archival thought was dominated by Sir Hilary Jenkinson and Jenkinsonian ideology. Jenkinson was against the idea of archivists determining the value of documentary records. In other words, he was against the appraisal process that is so important to the profession today. But Jenkinson was against appraisal for admittedly pragmatic reasons. For instance, he surmised that the bulk of records coming into archives from the many bureaucratic entities of the post-WWI era could not be adequately processed by archivists. It was, Jenkinson reasoned, far too much work for such a modest field and its practitioners. Therefore, instead of intentionally stripping archivists of a very important duty, Jenkinson attempted to alleviate a burden: the burden of record inundation and the administrative suffocation that would result. Still, Jenkinson held mistaken ideas about the nature of records. He believed that records were the "building blocks" of historical, objective truth. That is, historical truth was not to be found distilled in any single document, but would instead eventually be revealed through the aggregation of records. This was believed to be a natural sequence, requiring patience and good stewardship from archivists. Therefore, Jenkinson's attitude toward appraisal was considerably laissez-faire. The principle he exercised was absolute nonintervention in the war-tested process of records management. This positioning of the profession ultimately removed archivists from appraisal, a task deemed more suitable for records managers.

Another thought, offered by Paige Hohmann in On Impartiality and Interrelatedness: Reactions to Jenkinsonian Appraisal in the Twentieth Century, is that Jenkinson was simply a product of his time. In the postwar era, for example, nationalism was deeply rooted in society, together with a strong belief in governmental and presidential morality. This background made it easy to subscribe to a prevailing empirical positivism. The culture was also characterized by a nascent Weberian economy in which, as archival scholar Fiorella Foscarini puts it in Understanding the context of records creation and use: 'Hard' versus 'soft' approaches to records management, "labor was rationally divided and fixed sets of responsibilities were assigned to every individual office in accordance with written rules and regulations." In other words, this development in society allowed for an increasing divergence of "specializations" or "departments," in which people adopted rigid work roles and were meant to act as distinct working units in an essentially industrial machine.

The history of archives is very much the history of established powers. The elite groups of history have always been composed of select men and women. This leaves out a plethora of other histories, namely folk histories, which have escaped thousands of years of human documentation. But this did not go unnoticed in the latter half of the twentieth century. Leading postmodern figures such as Jean-François Lyotard and Jean Baudrillard, as well as poststructuralists like Michel Foucault and Jacques Derrida, moved away from viewing history as an empirical study, focusing instead on cultural institutions and social interactions with various power structures. The idea of social memory began to take precedence over authoritarian history, and more abstract notions of individual pasts were considered over and above any unchallenged acceptance of a uniform past. This allowed for the possibility of recontextualizing or imagining when thinking about history (an important concept when talking about the activation of archival records). Derrida, in particular, spoke of gaps in the historical record when he wrote about the absence of archive. To summarize this concept of absence: records that are not made are often more important than the records that are made, because gaps represent a wider conception of thoughts and feelings in historical time and thus a more accurate glimpse of social memory. It is also imperative to think about gaps in terms of "silences," demanding to know who has been silenced.

Power often obscures truth, and because archives have traditionally been the products of political power, archives cannot be assumed to be coterminous with social memory. If an archives is filled with records that obscure truth, it behooves the archivist to redirect the means of archival study. Instead of being neutral agents of obscurantism, archivists need to provide space for the effective mediation of records. Unfortunately, this is difficult in a capitalist economy where archivists exist within larger business structures and are forced into "dealing with corporate goals, standard requirements, and technological constraints on the one side, and records creators… on the other" (Foscarini). This problematizes the ability to provide uninhibited mediation. But archivists should be mediators. They should connect individuals with opportunities to activate records for recreation, social justice, rituals of healing and commemoration, and so on. If this objective comes up against corporate restraints, mediation becomes impossible, and archivists are made to exist in a vacuum.

Terry Cook has written in 'We Are What We Keep; We Keep What We Are': Archival Appraisal Past, Present and Future: "appraisal is the very heart of archivy, what gives it life, allows it to survive, from which all other functions follow, and that appraisal has been absent for too long from the archival corpus of ideas." Cook, as a stalwart defender of social memory and justice, maintains that archivists must not lose the right to act as appraisers of record history. For if anyone could determine the value of cultural products, there would inevitably be a culture war in which groups would seek to elevate themselves and destroy others. Because creators of the historical record have traditionally held power, archivists must stand as a last line of defense in tempering the kind of power that marginalizes and silences others, considering in its wake the politics of class and ethics, as well as principles of nondiscrimination and inclusive democracy. Indeed, archival records are not static or fixed. They are relational and suspect. Physical archives require activation. Archival absences require imagination. Where the latter is concerned, the process of imagining history needs to rely on an established critique of archival processes, which goes straight back to the appraisal question, a question thoroughly rooted in democratic philosophy.

Considering Web Classification

For those with a more traditional background in library science, or simply with experience in cataloging departments, I think it may be too easy to feel that cataloging has to be a manual process controlled by the human cataloger. This may be the case with books, because they have physical dimensions and cataloging-in-publication data which needs to be entered into a cataloging system, either through copy cataloging or original cataloging. Moreover, some libraries may take the liberty of adding subject headings to cataloging records that meet the criteria of their own hand-selected collections. However, web resources are a different beast. Classifying web resources can seem like a daunting task because there is such a proliferation of content on the Web, including not just static webpages but blogs, wikis, and videos. The discussion of cataloging web resources once revolved around deciding how to classify just webpages, but now it is a question of classifying web content, which relies increasingly on metadata standards like Dublin Core. The Dublin Core element set describes not only standard bibliographic attributes but also those suited to web resources, such as creator(s), format, and type of resource.
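To make that concrete, here is a minimal sketch of what a Dublin Core description of a web resource might look like, written as a simple Python mapping rather than any official serialization; the element names follow the standard Dublin Core element set, but the values and the URL are purely illustrative.

```python
# Illustrative Dublin Core description of a single web resource.
# Keys follow the Dublin Core element set; all values are hypothetical.
dublin_core_record = {
    "title": "Considering Web Classification",
    "creator": "Example Author",                      # hypothetical creator
    "date": "2016-04-01",                             # hypothetical publication date
    "type": "Text",
    "format": "text/html",
    "language": "en",
    "identifier": "https://example.org/posts/web-classification",  # placeholder URL
    "subject": ["cataloging", "web resources", "metadata"],
}
```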

I think for a while now we have seen a move away from Library of Congress Classification (LCC) and Dewey Decimal Classification (DDC), especially with regard to classifying the semantic web. In fact, I have not seen any earnest discussion of applying these classification schemes to web resources. The two projects that earnestly tried to apply LCC or DDC were the CyberStacks Project out of Iowa State University and OCLC's NetFirst, and both seem all but dead now. I think the reason is that applying the alphanumeric codes of LCC and DDC relies on human matching of subject disciplines, which is simply too much of a Sisyphean task when it comes to Web resources. In other words, it is still too difficult for artificial intelligence and machine learning to pin down subject disciplines based on keyword analysis. That being said, we are not without commercial computing tools to aid in the classification of web resources. There are automated tools which index just about anything they are programmed to index, such as web-based keywords or metatags.
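As a rough sketch of what that kind of automated indexing involves, the following Python snippet fetches one page and collects its title and metatags into a record that could feed a local index. It assumes the requests and beautifulsoup4 packages are installed, and the URL is only a placeholder; a real crawler would walk a list of seed URLs and follow links.

```python
import requests
from bs4 import BeautifulSoup

def index_page(url):
    """Return a dict of metatag names to content values for one webpage."""
    response = requests.get(url, timeout=10)
    soup = BeautifulSoup(response.text, "html.parser")
    record = {"url": url, "title": soup.title.string if soup.title else None}
    # Collect <meta> tags, including keyword, description, and dc.* style tags.
    for tag in soup.find_all("meta"):
        name = tag.get("name") or tag.get("property")
        if name and tag.get("content"):
            record[name.lower()] = tag["content"]
    return record

if __name__ == "__main__":
    # Hypothetical example page.
    print(index_page("https://example.org/"))
```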

These tools make the bibliographic management of the web possible. Bibliometric mapping of the Web can produce large databases of indexed material, which puts the Internet in the crosshairs of catalogers. So, ideally, the best "system" for classifying Web materials is to use the many tools available to digital librarians for taking bibliographic snapshots of the Web, such as web crawlers designed for that purpose.

As far as the ephemeral nature of the Web goes, I do not think LIS professionals need to concern themselves too much with cataloging Web material that eventually disappears due to link rot. Canonical webpages – that is, sponsored pages of content – will provide enough material for catalogers to work on. I see this as no different from cataloging books that have gone through the publication process. There has always been a certain authority that measures bibliographic worth. Of course, I am aware of the implications of leaving out self-created folk content. But the original purpose of cataloging was to capture the whole of knowledge as nearly as possible, and there is enough information out there to catalog, in print form and on the Web, to accomplish this objective.

At any rate, indexing the semantic web with automated products produces large and numerous digital libraries. My ideal system for classifying web resources would begin with a greater emphasis on this endeavor, along with the application of useful digital tools to aid the cataloger in matching content to a knowledge base.

The Infinite Archive

Eric Ketelaar’s paper, Archives as Spaces of Memory, struck me as an important contribution to the paradigmatic postmodern literature on archives. Ketelaar’s paper is divided into two main sections. In the first section, he discusses the differences between legal records and archival records. This discussion is framed by an interesting contextual history of the Nuremberg trials. The second section of Ketelaar’s paper focuses on the concept of Archives 2.0, in which the use of Web 2.0 technologies such as “annotation systems, wikis, clusters of blogs, social network visualisations, social recommender systems, and new ways of visualising conversations…” (18) can enliven the use and impact of archives on society. Throughout the paper, Ketelaar’s thesis remains clear. He argues that archival records – when opened up to a community for participatory interaction – can strengthen communal bonds which invariably heal societies that have undergone a traumatic experience or sequence of traumas.

When discussing the Nuremberg trials, Ketelaar argues that the law itself, even the successful service of justice through and by the law, is not enough to bring closure to the victims of an atrocity. He quotes Dutch psychologist Nico Frijda, who says: "the past for victims and survivors, and their families, is 'unfinished business': they go on searching for meaning how the humiliations, the cruelties, the systematic destruction could have come about" (13). In other words, when the trial is over, the perpetrators of a crime are dealt with accordingly by the justice system, but the memory of what happened – the trauma – continues to affect the victims. The courts, however, are impartial and unemotional, and as far as they are concerned, once guilt has been proven and criminals have been convicted, there is nothing left for them to do. Indeed, legal records in a trial are meant to be used by the prosecutors to serve an objective, finite end. Once the case is closed, the records are sealed away. As Ketelaar writes, "[t]he law aspires to a degree of finality, that neither History nor Memory does" (11).

Ketelaar's conception of the "infinite archive" suggests that records are meant to be used ad infinitum for purposes that are restorative and creative. He says that "[a] record is never finished, never complete, the record is 'always in a process of becoming'" (12). This is the main difference between the two record groups as discussed by Ketelaar. He would likely maintain that legal records, while very important in their own right and certainly able to be archived, are not infinitely archival. According to Ketelaar, archives can heal trauma because the records they contain have the power to serve what he refers to as "memory-justice" (13). Indeed, archival records, unlike legal records, can be used or "activated" by the victims of history. They can be tapped for their healing powers by victimized or marginalized groups of people. Legal records cannot.

I think this is an important consideration. Knowing that archival records can be used as therapeutic resources, it becomes imperative to discover new and effective ways of providing access to archives. This is why Ketelaar shifts his discussion to Archives 2.0. By now, it is obvious that new media and social networking have produced novel ways of engaging in cultural modes of thought and creation. Ketelaar brings up some important concepts in this section, such as "parallel provenance" and "co-creatorship." In terms of archives, these concepts support the Records Continuum Model of Frank Upward. Ketelaar writes, "the social and cultural phenomenon of co-creatorship entails a shift of the traditional paradigm of the organic nature of records and the principle of provenance" (15). Participatory archives are important, then, for the reasons mentioned above. Releasing the fixity of archives allows for processes of re-creation and reconciliation, which are vital for the health of society. Since emotional fixity can result in depression and dissociation from society, participatory archives can only be a good thing. Still, there are problems inherent in releasing archives for public use and activation. For instance, Archives 2.0 heightens the problem of ensuring data protection, consent, and privacy. Ketelaar admits that "[t]his needs a new generation of access policies, tools and practices, less collection driven, but directed towards archives as social spaces and records as social entities" (18). So despite the altruism Ketelaar exhibits in his call to release the archives, one can sense that new traumas could emerge in these social spaces.

Beginning thoughts on IR systems

Following the logic of Zavalina and Vassilieva in Understanding the Information Needs of Large-Scale Digital Library Users (2014), I think information retrieval (IR) systems should be informed by the information-seeking behaviors of the user community. This ensures that the IR system is designed with users in mind and that the main purpose of the system is to help users meet their information needs. As a principle of design, this is also necessary if the system is to have a democratizing effect. You want an IR system that empowers users, allowing them to navigate the interface easily and satisfy their needs through an intuitive and smart system. That seems pretty much like the ideal.

But saying an IR system should be "informed" by user behavior is different from saying that an IR system should "adapt" to user behavior. The former presupposes that the IR system's designers understand and can predict the searching habits of individuals. They would then try to accommodate a wide range of user search styles by implementing useful tools, like relevance rankings or contextual help. Adapting a system around users, however, would yield an IR system that looks something like Google, where popularity and site traffic dictate what gets optimized.

Of course, it is no secret among LIS professionals that search skills among the general population suffer from a lack of information literacy and of specific knowledge of IR systems and how they retrieve user-inputted keywords. Khapre and Basha, in A Theoretical Paradigm of Information Retrieval in Information Science and Computer Science (2012), mention the principle of least effort. While, from the design perspective, the principle of least effort is meant to optimize retrieval based on limited user knowledge, the phenomenon of least effort in information-seeking behavior is still problematic. In a matching program, where a user's query is analyzed and matched to documents by organized keywords, broad and unfocused keywords will yield fuzzy search results.
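A minimal sketch of that matching model, using a hypothetical three-document collection and made-up queries, shows why broad terms return a fuzzy, oversized result set while more specific queries narrow it:

```python
from collections import defaultdict

# Tiny hypothetical document collection.
documents = {
    1: "history of public archives and appraisal",
    2: "history of web classification systems",
    3: "archival appraisal and the principle of provenance",
}

# Build an inverted index: term -> set of document ids containing that term.
index = defaultdict(set)
for doc_id, text in documents.items():
    for term in text.lower().split():
        index[term].add(doc_id)

def match(query):
    """Return the ids of documents containing every query term."""
    terms = query.lower().split()
    results = [index.get(t, set()) for t in terms]
    return set.intersection(*results) if results else set()

print(match("history"))              # broad term: matches documents 1 and 2
print(match("archival appraisal"))   # narrower query: matches document 3 only
```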

Therefore, an IR system cannot adapt to users without sacrificing its capacity for precision. An IR system must be able to handle very specific intellectual queries at a very granular level. I think this question poses a central dilemma in the field of information retrieval and access. Indeed, there is a real disconnect between "man and machine," as it were. User expectations are far too high. People have become spoiled by the ease of performing Google searches and obtaining instant results for whatever research requirements they have. But I think it is important to realize that IR systems are sophisticated tools that require a sophisticated understanding of how to use them. In their article, Khapre and Basha point out that technology can change our thoughts and, importantly, that "technology is making it difficult for users to recognize that it is external, known only to the simple 'interface value'". This concept of interface value is an important one in human-computer interaction, because users have expectations of the IR system which they take at "interface value," while remaining largely unaware of the system's internal coding, which is considerably complex and based on algorithmic science that usually escapes the end user's interest or opportunity for study.