


Edited Version of Renear's Target Paper

Discussions of theoretical or philosophical issues in text encoding have always been important to our community, and throughout there has been a certain amount of published critique and engagement: Mamrak et al. (1988) and Raymond et al. (1993) respond to Coombs et al. (1987); Fortier (1995) responds to an early version of the TEI Guidelines; Sperberg-McQueen (1991) responds to Fortier (1995); Huitfeldt (1995) to DeRose et al. (1990) and Renear et al. (1992); and so on. I am particularly interested in how views of textuality have evolved from a kind of Platonic essentialism to positions that seem more Antirealist.

The notion of text encoding generally covers both the representation of linguistic content and the inclusion of additional information about a document's editorial structure or its format.

Early computer systems used ``format-based text processing.'' The data file typically consisted of (i) the linguistic character content (e.g., the letters of the alphabet, punctuation marks, digits, and symbols) and (ii) interspersed codes carrying information about that content. The identification of linguistic content is relatively unproblematic. Certain patterns of formatting codes tended to recur, and in the systematic use of these codes there was a natural tendency to identify a code not with its formatting effects but directly with a type of text element. In the 1970s a number of software designers and computer scientists came independently to the conclusion that the best way to design efficient and functional text processing systems was to base them on the view that certain features of texts are fundamental and salient, and that all processing of texts should be implemented indirectly, through the identification and processing of these features. These features have been called ``content objects'' (DeRose et al. 1990), and this approach to text processing ``content-based text processing.''
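To make the contrast concrete, a fragment of a format-based data file might look roughly like the following sketch; the particular codes are hypothetical and merely illustrative of the general style of interspersed formatting commands:

    .sk 3;.ce
    A CHAPTER HEADING
    .sk 2;.in 5
    The linguistic character content is stored together with codes
    that say only how it is to be laid out on the page.

When a pattern such as ``.sk 3;.ce'' recurs before every heading, it is natural to begin treating the code as standing for the element type ``heading'' rather than for its formatting effects, and that is precisely the shift toward content-based processing described above.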

The earliest endorsements of this approach came from software engineers, who offered a theoretical backing that both explained and predicted its efficiencies (Reid 1981, Goldfarb 1981). But in the course of the discussion some partisans of content-based text processing inevitably went further, claiming that alternative representational practices were inefficient because they were based on a ``false model'' of text (Coombs et al. 1987, DeRose et al. 1990).

The two approaches to text processing, format-based and content-based, seemed to locate their differences in the kinds of markup they deployed, and text encoding theorists explored the nature of markup, providing taxonomies and distinctions. Three types of markup in particular are typically identified: Descriptive Markup, which identifies a particular textual element, e.g., ``<paragraph>,'' ``<title>,'' ``<stanza>''; Procedural Markup, which specifies a procedure to be performed, typically a formatting procedure, e.g., ``.sk 3;.in 5;'' meaning ``skip 3 lines, indent 5 columns''; and Presentational Markup, which consists of graphic devices such as leading, font changes, word spaces, and the like.

The interplay between these three categories of markup during a typical instance of text processing suggests both that they mark salient aspects of text processing and that descriptive markup enjoys a certain priority. An effort to standardize markup systems began in the early 1980s and eventually resulted in an international standard for defining descriptive markup systems, ISO 8879: ``Information Processing, Text and Office Systems, Standard Generalized Markup Language'' (SGML). SGML is a ``meta-grammar'' for defining sets of markup tags. The technique for specifying these syntactic constraints is similar to the production-rule meta-grammars developed by Noam Chomsky to describe natural languages. The principal vehicle for the development and standardization of descriptive markup for the humanities is the ``Text Encoding Initiative'' (TEI). There are several advantages to this kind of content-based text processing: composition is simplified, writing tools are supported, alternative document views and links are facilitated, formatting can be generically specified and modified, apparatus construction can be automated, output device support is enhanced, portability is maximized, information retrieval is supported, and analytical procedures are supported.
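As a rough illustration of the sort of production-rule grammar SGML supports, a document type definition might declare that a poem consists of an optional title followed by one or more stanzas, and a stanza of one or more lines; the element names below are only examples chosen for this sketch, not part of the standard itself:

    <!ELEMENT poem   - - (title?, stanza+)>
    <!ELEMENT title  - O (#PCDATA)>
    <!ELEMENT stanza - O (line+)>
    <!ELEMENT line   O O (#PCDATA)>

Each declaration functions like a production rule: it names an element type and states what sequence of subelements or character data (#PCDATA) may constitute it, so that a parser can check whether a tagged document conforms to the grammar.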

Phase one of the textual ontology was a form of Platonism. The earliest arguments for the content-object approach to text processing were not intended to make an ontological point about ``what texts really are,'' but rather to promote a particular set of techniques and practices as more efficient and functional than the competing alternatives. The straightforward ontological question posed by DeRose et al. (1990), ``What is Text, Really?'', was given a straightforward ontological answer: ``text is a hierarchy of content objects,'' or, in a slogan and an acronym, text is an ``OHCO.'' The claim is that in some relevant sense of ``book,'' ``text,'' or ``document'' (perhaps ``qua intellectual objects'') these things ``are'' ordered hierarchies of content objects. They are ``hierarchical'' because these objects nest inside one another. They are ``ordered'' because there is a linear ordering of the objects: for any two objects within a book, one comes before the other. They are ``content objects'' because they organize text into natural units that are, in some sense, based on meaning and communicative intentions. In the writings and conversations of the text encoding community in the 1980s and early 1990s, at least five broad categories of argument that text is an ordered hierarchy of content objects can be discerned.
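The thesis is easiest to see in a small constructed example. Descriptive markup of the following kind, with element names that are again merely illustrative, exhibits both the nesting and the ordering the slogan appeals to:

    <chapter>
      <title>...</title>
      <section>
        <paragraph>...</paragraph>
        <paragraph>...</paragraph>
      </section>
    </chapter>

The paragraphs nest inside the section and the section inside the chapter (hierarchy); the first paragraph precedes the second (order); and every element marks a unit of content rather than of appearance.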

Pragmatic/Scientific: Texts modeled as OHCOs are easier to create, modify, analyze, etc. The comparative efficiency and functionality of treating texts ``as if'' they were OHCOs is best explained, according to this argument, by the hypothesis that texts ``are'' OHCOs.

Empirical/Ontological: Content objects and their relations figure very prominently in our talk about texts, e.g., titles, stanzas, lines, etc., and in our theorizing about texts and related subjects such as authorship, criticism, etc. If we resolve ontological questions by looking to the nominal phrases in our theoretical assertions, then we will conclude that such things exist and are the components of texts. The persuasiveness of these arguments is increased by the fact that theories from many diverse disciplines, and even conflicting theories, are committed to content objects as fundamental explanatory entities.

Metaphysical/Essentialist: This is the classic argument from hypothetical variation, used to distinguish essential from accidental properties in scholastic metaphysics or, in a more contemporary philosophical idiom, to establish ``identity conditions'' for objects. Here one argues that if a layout feature of a text changes, the ``text itself'' still remains essentially the same; but if the number or structure of the text's content objects changes, say the number of chapters varies or one paragraph is replaced by another, then it is no longer ``the same text.''

Productive Power: An OHCO representation of a text can mechanically generate competing representations, e.g., an OHCO representation can be formatted into a bitmap image, but none of these other representations can mechanically generate an OHCO representation. This is closely connected with the pragmatic/scientific arguments: OHCO representations are richer in information and are more effective for managing text processing.

Conceptual Priority: Understanding and creating text necessarily requires grasping the OHCO structure of a text, but does not essentially involve grasping any other structure. Therefore it is the OHCO structure that is essential to textuality.

If the foregoing arguments are good, then the OHCO thesis: explains the success of certain representational strategies; is logically implied by many important theories about text; matches our intuitions about what is essential and what accidental in textual identity; is richer in relevant content than competing models; and matches our intuitions about what is essential to textual production and consumption.

Phase two was pluralistic. When researchers from the literary and linguistic communities began using SGML in their work, the implicit assumption that every document could be represented as a single logical hierarchical structure quickly created practical problems for text encoding (Barnard et al. 1988). For example, a verse drama contains dialogue lines, metrical lines, and sentences, but these do not fit into a single hierarchy of non-overlapping objects. Taking one particular sense of textual identity had led the SGML community to assume that there was only one logical hierarchy for each document; researchers in the TEI, however, found that there seemed to be many ``logical'' hierarchies with equal claim to be constitutive of the text. Thus where the original OHCO Platonists and the designers of SGML took the editorial hierarchy of genre to be primary, the literary scholars of the TEI took the structures elicited by specialized disciplines and methodological practices to be equally important.
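The difficulty can be seen in a deliberately artificial fragment. If a sentence runs on across the boundary of a metrical line, as in enjambment, tags for sentences (here ``<s>'') and tags for verse lines (here ``<l>'') must cross, and no rearrangement will make either structure nest inside the other:

    <l><s>Here a first sentence begins</l>
    <l>and ends.</s> <s>A second starts here</l>
    <l>and runs to the end of the verse.</s></l>

The element names are invented for this sketch, but the phenomenon itself is routine in encoding verse and verse drama.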

The natural modification of OHCO Platonism was to see texts not as single ordered hierarchies, but as ``systems'' of ordered hierarchies. Each hierarchy corresponds to an ``aspect'' of the text, and these aspects are revealed by various ``analytical perspectives,'' where an analytical perspective is, roughly, a natural family of methodology, theory, and analytical practice. Each analytical perspective (AP) on a text, e.g., the prosodic perspective, does typically seem to determine a hierarchy of elements. The doctrine affirms the following hierarchy-saving principle:

AP-1: An analytical perspective on a text determines an ordered hierarchy of content objects.
AP-1 seems to reflect actual practice in the literary and linguistic text encoding communities. However, there are technical terms, such as ``enjambment'' and ``caesura,'' that refer specifically to relationships between objects from overlapping families. Because a technical vocabulary can be considered a sign of an analytical perspective, the existence of this terminology suggests that some analytical perspectives contain overlapping objects. One might, however, attempt to accommodate this counterexample with a revision still very much in the spirit of recognizing hierarchies as fundamentally important:
AP-2: For every distinct pair of objects x and y that overlap in the structure determined by some perspective P(1), there exist diverse perspectives P(2) and P(3) such that P(2) and P(3) are sub-perspectives of P(1), x is an object in P(2) and not in P(3), and y is an object in P(3) and not in P(2),
where ``x is a sub-perspective of y'' if and only if x is a perspective, y is a perspective, and the rules, theories, methods, and practices of x are all included in the rules, theories, methods, and practices of y, but not vice versa. Our simple Platonic model of text as an ordered hierarchy of content objects has thus given way to a system of concurrent perspectives. Moreover, Huitfeldt has pointed out that despite the apparent hierarchical tendency within analytical perspectives, not only is there no assurance that a decomposition into ultimate sub-perspectives without overlaps is possible, but we can also demonstrate that it is not always possible. Possible element tokens in some perspectives clearly overlap with other element tokens ``of the same type.'' Examples are strikeouts, versions, and phrases (in textual criticism), narrative objects in narratology, hypertext anchors and targets, and many others.
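A schematic picture shows why the retreat to sub-perspectives cannot always succeed. Suppose two strikeouts in a manuscript, both tokens of the very same element type within the perspective of textual criticism, cover overlapping stretches of text (the word positions below are of course invented):

    words:         w1   w2   w3   w4   w5
    strikeout A:   [----------]
    strikeout B:             [----------]

Because A and B belong to a single perspective and a single type, no division of that perspective into sub-perspectives distinguished by their rules, theories, methods, or practices will place them in separate non-overlapping hierarchies.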

Phase three was Antirealist. Modifying OHCO Platonism into Pluralism introduced the role that disciplinary methodologies and analytical practices play in text encoding. Some text encoding theorists see text encoding not as merely identifying the features of a text but as playing a more constitutive role. Pichler has made a number of statements that seem to be clear expressions of Antirealism, e.g.,

Machine-readable texts make it ... clear to us what texts are and what text editing means: Texts are not objectively existing entities which just need to be discovered and presented, but entities which have to be constructed. (Pichler 1995b, p.774)
Pluralistic Realism allowed that there are many perspectives on a text, but assumed that texts have the structures they have independently of our interests, our theories, and our beliefs about them. The Antirealist trend in text encoding rejects this view, seeing texts as the product of the theories and analytical tools we deploy when we transcribe, edit, analyze, or encode them. Just as Landow (1992), Bolter (1991), and Lanham (1993) have claimed that electronic textuality confirms certain tenets of post-modernism, Pichler and others suggest that texts do not exist independently and objectively, but are constructed by us. The passage from Pichler above is ontological, but he also endorses a companion Antirealism that is semantic:
... the essential question is not about a true representation, but: Whom do we want to serve with our transcriptions? Philosophers? Grammarians? Or graphologists? What is ``correct'' will depend on the answer to this question. And what we are going to represent, and how, is determined by our research interests...and not by a text which exists independently and which we are going to depict. (Pichler 1995a, p.690)

Our aim in transcription is not to represent as correctly as possible the originals, but rather to prepare from the original text another text so as to serve as accurately as possible certain interests in the text. (Pichler 1995a, p.6).

Huitfeldt (1995) also presented a number of criticisms of OHCO Platonism. His Antirealist tendencies are subtle for the most part:
... I have come to think that these questions [e.g., What is a text?] do not represent a fruitful first approach to our theme ... The answer to the question what is a text depends on the context, methods and purpose of our investigations. (Huitfeldt 1995, p.235)
But here and there they are unmistakable:
`devising a representational system that does not impose but only maps linguistic structures' (Coulmas 1989) is impossible (p. 238).
Both Huitfeldt and Pichler emphasize two particular claims about text and seem to see in them arguments for Antirealism. The first is that our understanding (representation, encoding, etc.) of a text is fundamentally interpretational: ``there are no facts about a text which are objective in the sense of not being interpretational'' (Huitfeldt 1995, p.237). Huitfeldt assures us, though, that this does not mean that all facts about a text are entirely subjective, since ``there are some things about which all competent readers agree'' (ibid.). The second is that there are many diverse methodological perspectives on a text: ``a text may have many different kinds of structure (physical, compositional, narrative, grammatical)'' (ibid.).

The first claim is that representation and transcription are interpretational at every level. Assuming that ``interpretation'' here means inference, what is the significance of the claim that our knowledge of text is inferential? The missing premise would be that entities that can in principle only be inferred are not real; that is, the argument depends on an extreme form of positivism. The second claim is that there are many diverse analytical perspectives on a text. Here the premise needed to reach the Antirealist conclusion would be that if what we find in the world is determined at least in part by our interests and by the methodologies and theories we choose to deploy in the course of inquiry, then what there is in the world is determined by our interests, theories, and methods.

The above comments pertain directly to ontological Antirealism. Epistemological Antirealism in text encoding probably derives in part from the ontological variety, so my response to ontological Antirealism also removes the support it gives to epistemological Antirealism. But since epistemological Antirealism can also draw support from other sources, we might consider these Antirealist claims independently. Consider again the quotation from Pichler:

Our aim in transcription is not to represent as correctly as possible the originals, but rather to prepare from the original text another text so as to serve as accurately as possible certain interests in the text. (Pichler 1995a, p.6).
I would argue that the apparently Antirealist formulations of this claim are either (1) non-Antirealist truisms, (2) covertly Realist truths, or (3) false. It is certainly true that our aim in transcription is to help researchers and that this guides our encoding decisions. If this is what Pichler is saying, it is a truism of encoding. But if he is saying that truth does not matter to encoders, then he is saying something false. Suppose a transcription has nothing at all to do with the text but helps the researcher win a prize. In such a case a (false) transcription would serve the researcher's interests quite well, but no one would claim that it is thereby a reasonable encoding, or one which is in any sense to be commended as a transcription.

In their articles both Huitfeldt and Pichler make very profound observations about text encoding, transcription, and texts. I have set aside coming to grips with what is deep and perhaps correct in their arguments in order to make my own points about their apparent Antirealism. But regardless of their intentions, I think that what they say, particularly against the current post-modern background, raises Antirealist questions about the nature of theories of text and text encoding.

