8. Beyond Author-Centricity in Scholarly Editing
© Hans Walter Gabler, CC BY 4.0 https://doi.org/10.11647/OBP.0120.08
Preliminary: document–text–work
Authorship and The Author are lodestars of literary criticism. They are specifically, too, the habitual points of orientation for textual criticism and scholarly editing. Here, where materially the very foundations of literary studies are laid, we find aggregating around the notions and concepts of ‘authorship’ and ‘author’ further terms, such as: authority; authorisation; the author’s will; the author’s intention. These form a dense and particularly forceful cluster in this field because here critics and editors confront texts in their diverse instantiations in and on documents. Given documents, some form of authoriality is always assumed behind them. Indeed, we commonly construe the relationship by defining documents as derivates, and thus as functions, of ‘authoriality’. Yet if we anchor the perspective in the materiality itself, the model may equally be reversed. Since it is from the materiality of the documents alone that the authoriality behind them may be discerned, we may legitimately declare ‘authoriality’ a function of the documents. The validity of such reversal, as well as its consequences in theory and practice, is what this essay attempts to explore.1
Documents constitute the ineluctably material supports for texts. Without the stone, clay, papyrus, parchment, or paper on which we find them inscribed, texts would have no material reality. Hence, in our age-old traditions of writing and the written, text and document live in a seemingly inseparable symbiosis, to the extent that we substitute one for the other in everyday speech, even in conception. Contracts, as well as wills, for instance, are formulated in language as texts. Yet it is customarily on the grounds that we possess and can show them as legal documents (signed, witnessed and sealed) that we declare them valid and binding. However, to define the material document as ‘the contract’ or ‘the will’ is a pragmatic shortcut, a negotiating of the everyday in a mode of speech-act symbolism. Logically, text and document are distinct and separable entities.2
To recognise that text and document are logically separable provides a basis for assessing or reassessing the value and weight of the terms in our opening cluster from a point of view of textual criticism and editing. In practice, and in our cultural experience, admittedly, we never encounter texts other than inscribed on, and carried by documents—or presented, as if on documents, on screens. Or hardly ever: for a poem or a narrative recited from memory, or composed on the spur of the moment, may still exemplify to us the primal invention and transmission of a text independently of any encoding on, and into, a material or virtual support. This has repercussions for differentiating ‘text’ and ‘work’. To paraphrase what I have developed at greater length elsewhere: works in language can be instantiated both materially and immaterially. As instantiated, we perceive works as texts. Any one given text instantiates the work. What binds the instantiations together is ‘the work’. The work exists immaterially, yet it is at the same time more than a mere notion. It possesses conceptual substance, for it constitutes the energizing centre of the entirety of its textual instantiations. Among the work’s many textual instantiations belong, too, texts as established in editions. An edited text may in fact be an instantiation optimally representing the work, even while it is never more—though commonly nothing less—than one considered textual representation of the work; or, a representation editorially preconsidered before being offered as a main textual foundation for a critical consideration of the work by interpreters and readers.3
Author–authorship–authority and the variable text
If in this manner the exercise ground for the thought and labour of the textual critic and editor lies in precincts of overlap between the immateriality of the work and the materiality of its textual instantiations, textual critics and editors must have clear and well-correlated conceptions of the forces here at play. A work is the outcome of its originator’s creativity; ‘by default’ we term its originator its author. An author, in the first instance, is, or was, an historical person, even though, in the second instance, a work may have originated with a team of authors, or else may be anonymous because its creator has gone unrecorded.
In relation to both works and authors, notions of authorship need to be taken into consideration. If and when they are, we discover that there is a pragmatically real as well as a conceptually abstract side to ‘authorship’. Authorship may be defined as the activity of real-world authors, singly or collectively. But in reverse, it may be defined from the perspective of a body of writing subsumed under the label of an author name. The Scandinavian languages possess the term ‘författarskap’ [Swedish], which translates into English most readily as ‘oeuvre’ or ‘works’, and into German as ‘das Werk’: the body of works that carries the label because it empirically originated with the author or authors supplying the labelling name. Although defined grammatically in the possessive case of the author name (Shakespeare’s oeuvre, Goethes Werke, Strindbergs författarskap), the ‘oeuvre’, ‘die Werke’, the ‘författarskap’ nonetheless most immediately comprises the (immaterial) works of these authors in the (material) manifestation of their texts.
Such lines of argument lead to conceiving clearly of the ‘author’ as not merely an historical personage. On closer reflection, our awareness is sharpened that the ‘author’ not only is, but has always been, too, a projection from the works under his or her name—such as they existed in the public realm as texts subsumed under the titles of these works. The works’ guarantors were, of old, Ovid, or Horace, or Seneca, or Cicero, or Aristotle—with the name not so much designating the historical personage as metonymically extrapolated from the work. For invoking the guarantors—the authorities—a paraphrase was as good as a verbatim citation, provided it expressed, and was considered true to, the author’s thought, such as it was by cultural consensus understood from his works. In such manner, a medieval writer (Geoffrey Chaucer, say) would cite an author from antiquity (Ovid) as his ‘authority’. As a matter of fact, we have to this day not abandoned treating authors’ names in like manner: we read Dante, Shakespeare, Goethe, Henry James or Virginia Woolf, or indeed ‘our Shakespeare’, ‘our Goethe’, which emphasises that we construct an author’s image subjectively from the works read.
Or, more precisely: in the reading of the author, we create the author image from the works through their texts. Such texts differ. Texts are, and have always been, variant. This is a fact of life, and is a consequence of the ineluctable materiality, as well as the ever-pervasive instincts of renewal, that characterise the world we live in, as they do our books and texts. The variability of texts may therefore be destructive in nature: the result of corruption or material decay; or it may be constructive: the outcome of renewed creative input, be it through revision, or through participatory editing, emendational or conjectural. One way or another: that texts are always variant is an ontological truth. Yet at the same time, it is a truth that has always been largely elided. Our cultural urge is for stable and immutable texts. Or, more cautiously put: our post-enlightenment urge is for stability and immutability as the sovereign qualities of texts. This has to do with a new cultural estimate, as well as a new self-estimate, of authors—a point to which I shall return.
It is worth following up, first, the circumstance that the limitless variability of texts has been elided, or else accepted, differently in different historical periods. From this follows, in turn, a reversal in the definition of ‘authority’. If it is accurate to say, as suggested, that medieval writers and audiences would cite ‘authority’ by author name, and in faithful reference mainly to the thought and ideas of given works, it seems to be a fact also that, as medievalist scholarship sees it, scribes and scriptoria in the Middle Ages, for all their endeavours to transmit ‘good’ texts, lived quite happily, at the same time, with, and in, the variability of the works’ texts, and indeed actively participated in further spawning that variability. Yet, unbeknownst to them, these medieval agents of textual transmission also worked towards the emergence of an idea of the textus receptus—historically, a humanist achievement (and culturally closely related, as it happens, to the medial shift from a manuscript-based to a print-based norm of communication and transmission). The establishment of the notion of the textus receptus marks a shift, too, from the canonising of works to a canonising of texts; or: of works as texts. This is the historical moment, furthermore, that marks the beginning of our own pervasive notion, or illusion, that in the shape of the text materially in our hands we have possession of the work, which this text can but represent, yet in truth cannot be.
It is at this point that the concept of ‘authority’ acquires a new definition. ‘Authority’ is no longer the author name that guarantees the genuineness of the thought and articulated ideas elicited from a reading memory of an author’s works. It is now what is sought so as to authenticate the establishing of authors’ texts with literatim accuracy. This is the view that textual criticism and editing still entertain today. It is, however, only seemingly self-evident. It subscribes to an understanding of ‘authority’ that is historically contingent and became fully codified only in the early nineteenth century. For even though the editing of surviving texts of works from classical antiquity had been carried forward in an unbroken tradition since the Age of Humanism, and a fresh tradition, moreover, of editing vernacular texts on the model of classical editing had latterly grown, it was only in the early nineteenth century that textual criticism and editing came into their own as scholarly disciplines.
Historicism and textual scholarship
That the notion of ‘authority’ in the way we understand it today became the focus of the newly instituted disciplines was a main outcome, too, of the central innovation of the age in thought and method: the rise and eventual dominance of historicism. Thinking historically meant the ability to think and reason backward through history. In terms of texts, this meant establishing causes for the state and shape they took in the documents from the past in which they materially survived. If in two surviving documents they were found to differ, the assumption was that at least one exemplified an error. Reason sought a cause for the presumptive error, and this cause was logically situated at a lost stage and in a lost document of transmission. Among preserved stages and documents, at the same time, the texts that survived, whether in whole or in part, were often widely spread over time and place. Texts from all extant documents could therefore be collated, and from evaluating collations critically it became possible to establish chronologies of document transmission as well as to arrange textual differences diachronically. It remained to infer what the differences signified, which meant, mainly, what they revealed about relationships among the extant documents and texts, as well as their relationship individually or in groups to lost antecedents.
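What collation yields at its simplest may be illustrated by a small Python sketch (a purely hypothetical illustration, with invented witness texts and sigla, not a reconstruction of any editor’s actual working method): the witnesses are aligned naively word by word, and every point at which they fail to agree is registered as a point of variation. Real collation must, of course, also handle omissions, additions and transpositions, which this toy alignment ignores.

    # Invented witness texts under invented sigla; a naive word-by-word
    # alignment that ignores omissions, additions and transpositions.
    witnesses = {
        "A": "the quality of mercy is not strained",
        "B": "the quality of mercy is not strain'd",
        "C": "the quality of mercie is not strained",
    }

    def collate(witnesses):
        """Return (position, {siglum: reading}) for every point at which
        the witnesses do not all agree."""
        tokenised = {sig: text.split() for sig, text in witnesses.items()}
        length = min(len(words) for words in tokenised.values())
        variants = []
        for i in range(length):
            readings = {sig: words[i] for sig, words in tokenised.items()}
            if len(set(readings.values())) > 1:  # not all witnesses agree
                variants.append((i, readings))
        return variants

    for position, readings in collate(witnesses):
        print(position, readings)
    # prints the two points of variation: 'mercy'/'mercie' at word 3 and
    # 'strained'/"strain'd" at word 6

It is from tables of exactly this kind, points of variation arranged across witnesses, that the chronological and genealogical reasoning just described proceeds.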
For the methods of analysis to be imposed on the patterns of presumptive relationships, a model was provided by enlightenment science. In the eighteenth century, the Swedish biologist Carl von Linné developed a binary-structured systematics of nature. This proved adaptable to the new text-critical thinking. Surviving texts differed among themselves as well as in relation to their lost antecedent states simply (it was assumed) in a binary fashion, by error or non-error. Under this assumption, they became relatable, moreover, in groupings apparently analogous to families, for which, consequently, family trees could be drawn. This move was the foundation of the stemmatic method in textual criticism.4 (Incidentally, it anticipated, in a manner, Charles Darwin’s genetically, hence historically, oriented adaptation of Carl von Linné’s historically ‘flat’ taxonomies.) As method, stemmatics is double-tiered. For the purposes of textual criticism, it operates on the historical givens, the documents and their texts. Critically analysed, the results from collating all extant document texts are schematised in a graph, the stemma. The process of collation thus strives to be inclusive. The ensuing operation of critical editing, by contrast, is predicated on exclusion. For on the grounds of reasoning that the stemma provides, every document text that fails to meet the validity criterion underlying the analysis of the collational variation can, for the labour of critically constituting the edited text, be left aside. This leaves, ideally, just one document text on which to build a critically edited text. If this base text features what the analysis of variants has deemed to be an error, the erroneous reading is emended by what has been critically assessed as a genuine reading from another document text; or else by a conjectural reading devised by the editor’s ingenuity. All other instances of variation from the body of collated document texts are recorded, if at all, in an apparatus (footnoted or appended).
The stemmatic method as a whole was (and, where practised, still is) predicated on the assumption that family trees could be established: the very idea of family relationships meant that extant documents and their texts descended from an inferentially, if not materially, recoverable ancestry. This ancestry not only could, but positively had to, be construed, if only to make sense of the collation evidence from the extant documents and their texts. In all material respects, admittedly, the fountainhead of a given text was irredeemably lost. To varying degrees, nonetheless, lost documents could be inferred by drawing logical conclusions from the variation between the texts of the extant ones. In fact, it was only by such inference that the missing links between the extant documents and their texts could be filled in, and thus the stemma as a graph of interconnections could be achieved at all. The lost documents were posited in terms of their presumptive texts: that is, they were furnished logically with text ‘cloned’ from the extant textual states. Ideally, a text could thus be diachronically reconstructed back to its very source, its (presumptive) one text of origin. And if the ideal—imagined, say, as that fountainhead, the very first manuscript to come from the author’s hand—proved irretrievable, a ‘real’ common ancestor of all extant derivative texts could rationally (that is, by critical assessment of the collations performed) still be arrived at: the archetype.
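The grouping step on which such family-tree reasoning rests can likewise be sketched, again purely illustratively and with invented sigla, readings and adjudications of ‘error’: witnesses that share the same presumptive errors against the rest are provisionally grouped under a common lost ancestor. An actual stemma naturally requires far more, above all the argument that the shared errors could not have arisen independently, and a direction of descent.

    # Invented variant places and an invented adjudication of 'error';
    # witnesses sharing presumptive errors become candidates for descent
    # from a common lost ancestor.
    from itertools import combinations

    # each entry: ({siglum: reading}, the reading adjudicated as the error)
    variant_places = [
        ({"A": "mercy", "B": "mercy", "C": "mercie", "D": "mercie"}, "mercie"),
        ({"A": "strained", "B": "strain'd", "C": "strained", "D": "strained"}, "strain'd"),
        ({"A": "droppeth", "B": "droppeth", "C": "falleth", "D": "falleth"}, "falleth"),
    ]

    def shared_errors(places):
        """Count, for every pair of witnesses, the presumptive errors they share."""
        sigla = sorted(places[0][0])
        counts = {pair: 0 for pair in combinations(sigla, 2)}
        for readings, error in places:
            guilty = sorted(sig for sig, reading in readings.items() if reading == error)
            for pair in combinations(guilty, 2):
                counts[pair] += 1
        return counts

    for pair, count in shared_errors(variant_places).items():
        print(pair, count)
    # witnesses C and D share two presumptive errors and would be grouped
    # under a posited lost common ancestor; no other pair shares any

In the toy data, the pairing of C and D is the kind of inference by which the missing links of the stemma are supplied; what counts as an error remains, as in the method itself, a matter of critical judgement.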
Real authors and stable texts
The rationale of stemmatics came at a price. It made no allowance for the ‘fact of life’ that variability is a natural condition of texts. Behind this blind spot lies the cultural assumption of a stable and finalised text. This notion in turn is rooted in the cultural role conceded to the author. As the editing of texts in the vernacular increased through the seventeenth and eighteenth centuries all over Europe, their authors came to be perceived no longer as abstract, even though nameable, ‘authorities’, as in earlier times. They were instead known to be, or to have been, real, historically situated individuals. Texts transmitted were both attributable to, and claimed by them. What is more, these texts came in printed editions of multiple copies. No longer was every copy of a text different from another, as throughout the eras before print. The public awareness of texts from real identifiable authors was thus that they were identical, and in practical terms invariant (at least throughout given book editions). The (printed) text in hand came therefore not only to stand in for, and materially to represent, the work (as is common understanding still today). It was (as it still is) taken to be the work. The underlying conception of ‘the work’, in other words, was, and is, one of a self-identical text manifestation, invariant and closed. The cultural notion of the invariant text published by an empirical author, furthermore, was seen to coincide with, and to reinforce, the earlier logical construct of (in stemmatic terms) an archetype, and a fortiori an original text (‘Urtext’), constituting, as a posited material text, the work of the auctor absconditus of the distant past.
Authors of the present as of the past came to be seen, and indeed defined, as canonical authors. This view, too, emerged from the rise of historicism. Its finest flower was the perception of the artist, and for our purposes specifically of the author, as an original genius. This mode of appreciation carried a double aspect. It conferred upon the author a societal recognition. It reciprocally shaped an author’s self-image and imbued the author with a sense of his or her public identity and role. Johann Wolfgang Goethe was probably Germany’s most exalted exponent of the new author type. He became Johann Wolfgang von Goethe, in fact, precisely in recognition of his eminent public role. He was seen along the canonical lines of the cultural tradition. Since he also saw himself in this light, he helped in person, too, to shape his public image for his time and for posterity.5 One means by which he did so was his editing, or his overseeing of the editing, of his work, that is: of his oeuvre. Behind such editing stood Goethe’s authority. Even allowing that this was to a significant degree the authority of the writer, it was fundamentally also the authority of the man, the citizen, the courtier, and the public figure.
The standing of the historical personage in life raises the question: what relation does this empirical, real-life authority bear to the concept of ‘authority’ in textual criticism and editing? The more immediate the real presence of authors has become to readers, as well as to societies, the stronger, naturally, has grown, on the one hand, their claim to authority over their work, together specifically with authority over the text(s) of that work; and, on the other hand, the readiness of society and the body politic to concede their claim. Such encompassing authority has in fact been legally codified. Real authors’ copyrights and moral rights are protected today virtually throughout the world. Yet this laudable acceptance of real-life authors and their personal rights in the societies in which we live obscures, rather than clarifies or resolves, the fundamental systemic problem of whether or how to relate the empirical and societal conceptions of ‘authority’ to the scholarly endeavour of securing the written cultural heritage of texts. In one respect, it must remain uncontested that authors can do whatever they wish with the material record of their authoring enterprises. Specifically, they can exercise practical authority over acts of copying and publication. Their wishes must carry weight in the endeavours of bringing their work as texts to their readers. Anywhere along the way, too, they are of course free to discard any number of traces of their work, for instance to throw away (or, in our digital age, attempt to erase) notes or drafts, or to shred typescripts or marked-up proofs. In another light, however, any such pragmatics in real-life situations bear but obliquely on assessments of textual authority.
The fallacies of document and textual authority
But what kind of animal, we should pause to reflect, is ‘textual authority’ at all? In devising the methodology of stemmatics, and in particular in the endeavour of critical analysis of patterns of text relationships revealed through collation, the aim, as we have noted, was to establish textual validity against errors of transmission. The texts constructed by hypothetical logic for the inferred documents—the archetype or, exceptionally, the fountainhead texts—could not meaningfully be seen as invested with authority, since they were mere retro-projections from their surviving descendants. Even less meaningfully could they so be seen, considering that there were not—not even for any posited originals—any public or legal, or private, let alone any manifest, writing acts of their authors on record from which to infer, or by which to confer, ‘authority’. Such considerations should lead us to discern the fallacy underlying the very concept of ‘textual authority’.6
To this end, we might profitably attempt to disentangle, for the benefit also of textual criticism and scholarly editing, the real-life author from the author function that, in terms of theory, texts both imply and indeed generate and constitute. If it can be said that Roland Barthes’s ‘death of the author’ has, as a slogan, generally tended to overshadow Michel Foucault’s significant elucidation of the ‘author function’,7 it would probably also be true that textual critics, and editors in particular, must be counted among those who still hold both tenets in scorn. (They will insist: ‘The author is real: look, these manuscripts are incontrovertible proof that the author is not dead—or was not when he wrote them!’) Seen with a colder eye, however, the proof of the author that manuscripts provide, in truth, only evidences (akin to footprints in the sand) that an author once (or, as the case may be, repeatedly) traced his hand and writing implement over the manuscript page. The real-life author, consequently, cannot honestly be conceded to be more—though also no less—than an empirical and legal authority over the documents carrying the texts of his works. To concede to him or her an overriding authority over those texts, and on top of that to consider those texts, as texts, themselves invested with an innate authority, amounts to performing an argumentative leap akin to what psychology terms a displacement. It is this that constitutes the fallacy suggested.
This brings us back, in passing, to our initial consideration of the contract and the will as legal documents. The validity of contracts and wills by civil and legal convention is attested by the material documents as such. Their texts are, as it were, by definition free from error,8 and particularly so as, and when, they accord with formulaic conventions. Signature and seal, moreover, reinforce that the document vouches absolutely for the text it contains. It appears that, from the formalisms that characterise this pragmatic model of negotiating legal states of authority, evolved the formalisations of authority and authorisation in the triangle relationship of text, document and author. Yet the purported analogy, for all that it has gone unquestioned for centuries, does not in truth hold. Texts in the cultural realms of transmission are by definition not faultless, but on the contrary prone to error. The documents that carry them are, in their great variety, ‘formless’, and they are private. As such, they exist outside societal conventions and laws. The creative subjectivity of authors, and indeed their freedom of will in making decisions, finally, cannot affect in their essence either documents or the texts they carry. Documents and texts are entities outside of authors as real-life individuals. Hence authors, even though they are pragmatically their agents, cannot themselves rise to a position of essential authority over them, so as to decree an authoritative status for documents and texts. At most, they can testify to, and attest their relative validity.
How the elision of the pragmatic and the essential came about can be historically retraced, too, in terms of the progression of a methodology for the emerging discipline of textual criticism and scholarly editing. Stemmatology was, as we have seen, the discipline’s early method for analysing and editing transmissions of texts from antiquity and the Middle Ages. These were distinctly transmissions of texts. On top of that, they were transmissions spread over unique document exemplars, individually variant among each other. They were, to use the technical term, radiating transmissions. The rules for regulating the correlation of the texts in a radiating stemma, even while text-centred, were at the same time influenced, admittedly, by the cultural ascendancy of the author in that age of historicism when the stemmatic method developed. For its a priori assumption was that of the past author’s presumptively one and only original text (or the archetype as its prophet). This, however, did not deflect, but on the contrary strengthened, the text-critical and editorial procedures aimed at the validation of transmitted text. The concomitant strategies of radical disregard for manuscript texts critically adjudicated as inferior resulted in choosing one document text that escaped such adjudication as the foundation for a critical text to represent a work.
Such selection by rational variant analysis, and thus from within the material of transmission itself, proved not to be feasible, however, given modern, and often actually contemporary, text situations and transmissions. Yet towards these the interest and engagement of textual criticism and editing increasingly turned. What they required were procedures to deal with a largely linear descent of texts in transmission, often combined, moreover, with processes of composition, and empirically controlled by real-life authors insistently present. Under the authorial eye, the decision on which, and from which, document and text to build a critical edition was no longer felt to be the editor’s responsibility alone. Though procedurally it belonged to the editor, it was conceptually deferred to the author. An alternative methodology to stemmatics was thus devised to support an ‘author-centric turn’. Methods in textual criticism and editing turned from being indigenously based on a critically established validity of text, to being exogenously predicated on (authorial) authority.9
As a basis for the procedures of scholarly editing, the new principles stipulated the ‘authorised document’. The text it carried was declared to possess ‘textual authority’. By embedding itself in cultural conventions, moreover, the method invested the real-life author with the power, the pragmatic authority, to declare both document authorisation and textual authority. Such, in outline, was the new methodological framework considered best suited to post-medieval textual and transmissional situations. The new principles also resituated the editor. Stemmatology, as said, had operated without a comparably encapsulating framework. Its methods, aimed at text validation, were essentially rooted in the editor’s critical judgement. The notions of authorisation and textual authority, by contrast, constituted and constitute a priori regulators for the establishing of edited texts.
Author-centricity versus the author function
Founded mainly on empirical and societal convention, the author-centric framework for scholarly editing is arbitrary and, as indicated, exogenous to texts. Its inherent difficulties, which are logical as well as methodological, have nonetheless been insistently elided, or (more generally) not even perceived. They can be made out, however, on at least two levels. Firstly, the empirical and arbitrary conferral of authority amounts to a set of vicarious gestures (on the part of real-life authors) and assumptions (on the part of textual critics and editors, not to mention the cultural environment at large). Secondly, and more essentially, that conferral depends on the assumption that texts not only represent finalised acts of will by authors, but are in themselves invariant, stable, (pre)determinant, and closed. Yet according to present-day positions taken by theorists of language and of literature, none of these a priori assumptions can in truth be upheld. Pivotal today is the insight that texts are variable and in principle always open. They are constant not in stability and closure, conferred by a finalising authorial fiat, but constant, if constant at all, only in that they are always capable of also being otherwise.
This recognition can be made operable, too, in terms of textual scholarship and editing; yet if so, it can be made operable from inside the material body of texts only. The key would seem to lie in the notion of the author function. As a theoretical tenet, it has amply proved its applicability to, for instance, the critical analysis of narrative. It can equally, I suggest, be utilised to deal analytically and critically with texts and their materials of composition and transmission. If in an ontological sense it is in the nature of texts to be variable, and if at the same time texts are the creations of authors, then variability is the mark that texts carry of their authors’ creativity, as well as of their own inexhaustible potential, as texts, for being otherwise.10 Systemically, therefore, in terms of the autonomy of texts, their variability is an expression of the author function, which is inscribed into them, and thus contributes to constituting texts as texts. Constituting texts in ways predicated on variability—the quality that is of their nature as it is of the nature of language, out of which texts are generated—is in terms of creativity the primary prerogative of authors. Secondarily, and in critical terms, such constituting should be acknowledged, too, as a goal of scholarly editing. From the perspective of today we should see it as incumbent on scholarly editions of the future not only to record variation of texts through their processes of past transmission. This is and will be, as it has hitherto been, the function of apparatus presentations of variants. Yet editions to come should equally endeavour to do justice to the variability within texts throughout the processes of their very creation. For this, as one may already dimly discern, it will not be sufficient to devise new formats for scholarly editions. The ways in which to embed textual criticism and scholarly editing in literary criticism and theory will themselves demand to be thought through with renewed attention. The reflections on authority (as it is conceded to real-life authors), on document authorisation, or on textual authority here entertained, with the suggestion of abandoning these concepts, may pave the way towards such rethinking.
The author’s intention rooted in copy-text editing
First, however, a concept that has become central still needs to be broached, and to be recognised as a hindrance to the progression—in theory as in pragmatics—of textual criticism and scholarly editing, literary criticism and literary theory together. This is the concept of the author’s intention. Invoking the author’s intention as the final arbiter for establishing scholarly editions is what gives the ultimate twist to that author-centred methodology, which (as we have argued) today appears untenable, since it is predicated on texts’ invariance, stability, (pre)determinacy, and closure. The notions herein of predeterminacy and final intention, in particular, disastrously reinforce each other. Firstly, they imply that a text, as achieved at the final point in time of its recorded development, not only represents but positively constitutes the work. Over and above this misperception, they imply, too, a teleological model for creative writing still unreflectively rooted in original-genius aesthetics.
The invocation of the author’s intentions has played a dominant role particularly in Anglo-American textual criticism and editing throughout most of the second half of the twentieth century. Here, intentionalist editing was codified as a result of the generalisation of the methods of copy-text editing that originated in Shakespearean textual scholarship.11
For the larger part of the twentieth century, Shakespearean textual scholarship was driven by twin forces of methodology. One was its submission to analytical and textual bibliography. The other, which concerns us here, was the transfer of procedures of text-critical treatment of the radial dispersion of texts in medieval manuscripts to the early post-Gutenberg linear transmissions of texts from manuscript printers’ copies to first and subsequent editions in book form. The main precepts of the methods applied to texts in print were developed by the eminent British textual scholar of the first half of the twentieth century, W. W. Greg. Greg’s strengths lay in the application of an all but unrivalled faculty of analytic logic to a wealth of archival observation and experience. They were rooted, moreover, in classical and medievalist methodologies of textual criticism. In his perception of texts and their transmission, he was at bottom a stemmatologist. Consequently, he understood how the extant earliest printings of Shakespeare’s texts naturally derived from lost manuscripts. At the same time, he recognised how close they were to their state and shape in those antecedent, if lost, scribal, or even autograph, documents. From this understanding, he pronounced rules for copy-text editing by which to constitute edited texts by reconstituting a textual state and shape critically inferred for the lost documents. It was archetype-directed text-critical and editorial thinking that thus claimed to be recovering a maximum, with luck even an optimum, of original Shakespeare text from the derivative witnesses-in-print to these texts.
However, this adaptation of a methodology originally devised for pre-Gutenberg manuscript transmissions had its pitfalls. For instance, Greg disastrously misjudged the textual situation for William Shakespeare’s King Lear. Here were two first printings—a Quarto single-play edition and the play’s rendering in the First Folio volume—that diverged widely. So strong was Greg’s stemmatological bent that he dogmatically refused to entertain the hypothesis that these two textual states reflected two distinct versions of the play. He held the variation between the two printings to be due entirely to errors of transmission. The alternative proposition is that the dramatist’s progressive development of the play in composition and revision may be captured from the divergence in variation between the two printed versions. This is the hypothesis that has since been thoroughly tested and validated in Shakespeare criticism and textual criticism.12
Where Greg recognised the need for adjustment to the inherited methodology, however, was with respect to conditions of transmission due to printing technology which were naturally unprecedented in the pre-Gutenberg manuscript era. Texts published in first editions, or in earlier editions more generally, could be observed to have been modified by their authors after publication. Technically, the authors had been given the opportunity to mark revisions on the earlier editions’ pages that were then worked in, in the printing house, into the resettings from those preceding editions. The existence of resettings of earlier printings that still contained authorial revisions puzzled R. B. McKerrow in his 1939 Prolegomena to a complete edition of Shakespeare that he was preparing to edit, though he did not live to realise the edition.13 While conceding that derivative editions would not only perpetuate errors generated in setting the first editions, but would also add to them their own errors, McKerrow saw no alternative to choosing the texts from the derivative editions as his copy-texts. He thus took two generations of error into the bargain, since it was these derivative editions that contained the genuine post-first-edition revisions. It was W. W. Greg who, posthumously for McKerrow, proposed a solution to the dilemma. By logically conceptualising materially evident text—printed text—under two aspects, an aspect of state (the text’s ‘substantive’ readings) and an aspect of shape (its ‘accidentals’, i.e., spellings, punctuation and the like), he devised rules for copy-text editing. They were published in 1950–1951,14 and triggered the so-called ‘copy-text theory of editing’, dominant in Anglo-American editorial scholarship from the 1960s onwards.
The rules stipulated that the first-edition text, or otherwise the earliest text, was always to be chosen as copy-text for a scholarly, or critical, edition. This would ensure that the edited text came as close as possible to the lost manuscript printer’s copy. It would do so in any case in its substance of readings that remained invariant throughout first and subsequent editions, but equally, and most particularly, also in its accidentals, regardless of whether these remained constant or varied in subsequent editions. Considering that accidentals were in early hand-printing largely left to the discretion of printers’ compositors, it was only in first editions, if at all (so Greg’s argument went), that the compositors might have followed copy and thus taken over its accidentals. In some cases that copy could actually be argued to have been an autograph. To this clear-cut ruling with regard to first-edition substantives and accidentals, there was, too, an important subsidiary rule. It stipulated that the (first-edition) copy-text was to be followed as well in cases of indifferent substantive variants. Such ‘indifferent variants’ naturally turned up among the body of substantive variants between first edition and revised edition. Since they were variants, they needed to be critically weighed as to whether or not they were revisions. If the assessment was inconclusive (indifferent) because they might easily be typesetters’ errors, the revised-edition variant was not to be admitted to the edited text.
It was essential in terms of Greg’s rulings, in other words, to isolate from the subsequent edition such readings as by their quality could be critically assessed as revisions. They, and they alone, were (text-)critically singled out from derivative but revised editions and were then editorially used to modify the copy-text into the critically edited text. Procedurally, the modification was done by way of emending the revisions into the copy-text. Within the texture of the (first-edition) copy-text the first-edition-state substantive readings were replaced by the corresponding substantive readings from the revised-state edition that had been critically assessed individually as revised readings.
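The emendation procedure just described lends itself to a schematic Python sketch (the sentence, the readings and the verdicts are invented, and the critical assessment of each variant is reduced here to a simple flag): the copy-text supplies the entire texture, accidentals included; only those substantive readings from the revised edition that have been judged revisions are emended in; and each emendation is recorded so that it can be reported in an apparatus.

    # Invented copy-text and invented substantive variants; the editor's
    # critical assessment of each variant is reduced to a flag.
    copy_text = "He walk'd slowly towards the olde house".split()

    # (word index, revised-edition reading, assessed as an authorial revision?)
    # Accidentals of the revised edition (spelling, punctuation) are ignored
    # altogether: the copy-text supplies them.
    substantive_variants = [
        (1, "strode", True),   # judged a revision: admit by emendation
        (2, "slow", False),    # indifferent, possibly a compositor's slip: reject
    ]

    def edit_copy_text(copy_text, variants):
        """Return the eclectic edited text (copy-text readings throughout,
        except where a later reading has been judged a revision) together
        with a record of the emendations made."""
        edited = list(copy_text)
        emendations = []
        for index, revised_reading, is_revision in variants:
            if is_revision and edited[index] != revised_reading:
                emendations.append((index, edited[index], revised_reading))
                edited[index] = revised_reading
        return " ".join(edited), emendations

    text, apparatus_record = edit_copy_text(copy_text, substantive_variants)
    print(text)              # He strode slowly towards the olde house
    print(apparatus_record)  # [(1, "walk'd", 'strode')]

Everything critical in the actual procedure lies, of course, in how the flag is arrived at, that is, in the editor’s judgement on each variant, not in the mechanical substitution.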
This was the first time in Anglo-American scholarly editing that not only textual variation-in-transmission was scrutinised: that is, variation originating with agents other than the author (inferred for lost, or evident at extant, stages of transmissions). This was variation of an extraneous nature, and hence, virtually by definition, variation as ‘error’. Now, by way of critically discerning and isolating revisions in (bibliographically, and thus transmissionally) derivative editions, variation in the progression of texts was taken account of, too—variation by definition not ‘error’, since integral to the text(s) of the work in question in its (their) evolution over time.15 Interestingly though, as we have seen, Greg, thanks to his analytical powers, found a way of bending such new departures in the concerns of textual criticism back onto the inherited patterns of reaching out behind the surviving manifestations of texts. The composite, critically eclectic edition text was the mirror image, as it were, of the successfully reconstructed archetype text. Gregian copy-text editing was thus still firmly modelled on archetype editing, even while, paradoxically, the infusion by emendation of post-copy-text revisions into the copy-text substratum of the edited text was allowed, although it had no imaginable connection, or rather: bore an imaginary relation only, to a given text’s pre-survival state, materially lost.
Greg’s rules provided the foundation for the specifically Anglo-American mode of critical eclecticism in scholarly editing. Critical eclecticism to construct, as edited text, a composite of readings early and late in a textual development, and before, as well as after, its first material manifestation in the state and shape of a public text, requires belief in a teleology of texts, coupled with confidence that telescoping a textual development over time into the one plane of the edited text is a legitimate procedure. The adjective ‘critical’ is the important face-saver, forestalling the negative view that the procedure contaminates. It has of old in scholarly editing been branded as ‘contamination’ to implant readings from one historical instantiation, one version, of a text into another. For this reason, ‘critical eclecticism’ has been generally viewed with acute suspicion outside the Anglo-American sphere.
Greg’s copy-text editing itself was rooted in origin-oriented textual criticism as inherited from stemmatology. Yet from assumptions of a teleology of texts, at the same time, it nodded towards author-centred textual criticism and editing. The author—William Shakespeare, no less—incontestably played a role in Greg’s devising of his rules. The lure of an autograph fair copy was simply irresistible, if not indeed the dramatist’s so-called foul papers, underlying, at the shortest transmissional distance, the surface of a play’s first manifestation in print. But even with Greg’s acute awareness of the author as factor and agent in the textual transmission, his text-critical procedures remained squarely bent on validating text. Even emending a first-edition text with the substantive revisions critically ascertained from a subsequent edition was understood as an editorial measure to validate the authorial text for the work. The copy-text editing rules were not aimed at fulfilling authorial intentions.
Their acute potential for being precisely so transmuted, however, was soon perceived. Fredson Bowers, an American textual scholar of the generation after Greg, not only saw, but capitalised on the intentionalist implications of Greg’s rules.16 The institution of these rules as the foundation for intention-oriented copy-text editing was Bowers’s doing. The fusion came to be known as the ‘Greg-Bowers theory’ of copy-text editing—not a ‘theory’ strictly speaking, perhaps, but unquestionably a set of strong principles for scholarly editing. Their base was Greg’s copy-text-editing rules generalised timelessly for the scholarly editing of texts (or at least of literary texts) of all kinds from all periods. Pragmatically, the generalisation was predicated on procedures of analytical and textual bibliography. The superstructure devised for the Greg-Bowers principles was the tenet that it was the ultimate task and duty of the critically eclectic scholarly edition to fulfil the author’s intentions, or the author’s final intentions, or the author’s latest intentions—by variant adjectives, the goal, as it was progressively argued, came to be variously modified.
What Bowers performed in thus giving an intentionalist turn to textual criticism and scholarly editing was something of a coup d’état, or usurpation. For it was precisely at the intellectual moment in the course of the twentieth century when New Criticism culminated in literary theory of the Wimsatt-Beardsley persuasion, which resoundingly proclaimed the intentional fallacy,17 that Bowers defined fulfilling the author’s intention as the ultimate goal of scholarly editing.18 With New Criticism in decline, and the critical invocation of intention banned as a fallacy, it was now the textual scholar and editor who bore through the throng the ‘well-wrought urn’ of the single, pristine, perfect text in the shape of the critically eclectic text fulfilling the author’s intentions. The ‘Urtext’ and archetype of past conception became transubstantiated into the absolute text of ideal finality.
Intentionalist editing: some problems of hermeneutics
Clearly, fulfilling the author’s intentions constitutes a fulfilment, too, of the author-centric orientation and dependency of textual criticism and scholarly editing of the past two centuries that we have been discussing. It goes beyond—indeed, it transgresses—the foundation in the materialities of transmissions that textual criticism and editing traditionally built and relied upon. For to realise the author’s intentions means to establish text that is precisely not inscribed in any material document. More specifically still: since it is alone from material documents that written authorial text may be read, the procedure of arriving at the text of the author’s intention must involve declaring what is written as somehow in error. This may be trivial, wherever, say, the mistake of a scribe, or a typist, or a printing-house compositor can be unambiguously made out and corrected—hardly an editorial measure, though, that would be weighty enough to lay claim to fulfilling an authorial intention. Otherwise, making out the written as in error involves deeper enquiry. In such cases the scrutiny of the text as documented becomes genuinely interpretative.
This ought to give rise to concerns about the role and expertise of the textual critic and editor. They have but seldom, it is true, been denied critical faculties; nor should they themselves ever abdicate them. The question, however, is in what modes they should opt to exercise and invest them. The analysis of documents or of collations of texts demands of textual critics and editors critical skills. Such skills, moreover, are absolutely called upon to validate texts and text readings for the purpose of accepting or rejecting them for the edited texts of scholarly editions. Even when, under the ascendancy of the author, an overall responsibility for editorial decisions and results was increasingly delegated to authors—real-life authors, no less—and editors consequently tended rather to hide behind the author, their text-specific expertise and skills remained a (usually) sufficiently secure foundation for professionally executed scholarly editions. But when it was further imposed upon editions that they should aspire to fulfil authors’ intentions, not only was the question left unexplored of what extension of expertise and skills this would entail; more fundamentally still, it appears that the intentionalist reconception of textual criticism and scholarly editing was proposed in unawareness of the very nature of the imposition. Yet if our critique holds, the Greg-Bowers principles clearly empower the editor not just, as by older dispensations of textual editing, to assess and adjudicate, out of a specialised professionalism, the extant material record of given transmissions. In addition, the principles invest the editor with a hermeneutic dominance over the work. For if under teleological premises the author’s final intentions enter integrally into configuring the meaning of a text (as the expression of a work), then it follows that it is the author’s final intentions as supplied editorially that provide the textual capstone to realising the work’s ultimate meaning. This conundrum has a theory dimension that awaits a solution—unless it is a genuine alternative simply to abandon the intentionalist stance in editorial scholarship.
Reconceptions
Beyond the point at which it culminated in the intentionalism of twentieth-century Anglo-American textual criticism and editing, the author-centric trend in nineteenth- and twentieth-century editorial scholarship began to recede. Spearheaded by some twenty years (in the 1980s and 1990s) of invigorating theoretical debate in the Society for Textual Scholarship and its yearbook TEXT,19 as well as by individual studies such as Jerome J. McGann’s Critique of Modern Textual Criticism (1983), there grew a diversification of concepts for textual and editorial scholarship that Peter L. Shillingsburg has since found categorisable into the (formal) ‘orientations’ he specifies20 alongside the ‘authorial orientation’ that we have singled out for our present reflections.
At virtually the same time in editorial history when the fulfilling of authorial intention was proclaimed the ultimate goal of scholarly editing in the Anglo-American domain, authorial intention was within the German editorial school declared outright unfit to provide a base for editors’ decisions towards establishing edition texts. The key pronouncement in the matter came from Hans Zeller, the Swiss-German Nestor of German textual scholarship: ‘A principle such as authorial intention cannot serve as a central criterion for the constitution of text [because it] remains a mere idea of the author on the part of the editor, and as such cannot be established reliably.’ Though so published in English only in 1995, the verdict in the German original dates from 1971.21 At the same time, however, the landmark collection of German essays on textual criticism Texte und Varianten of 1971 adheres to, and embraces, what is still the present-day consensus, namely the author-centric conceptions, attitudes and practices of inherited textual scholarship. The German variety of the discipline, it is true, has its own favourite problem areas, among which figure prominently the notion of the version (Fassung) and the textual fault (Textfehler). Yet the fundamental critique of author-centricity proposed here should apply as much to German textual scholarship as it does to its Anglo-American near-relation.
To return to Zeller’s pronouncement on intention: his salient specification is that ‘authorial intention cannot serve as a central criterion for the constitution of text.’ It thus does not rule out critical investigations of authorial intentions, be they manifestly expressed or inferable, nor does it disallow consideration, or even observance, of authorial intentions in establishing edited texts. Yet what it categorically denies is the usefulness of authorial intention as the ultimate arbiter and guide to editorial decisions in the critical constitution of edited texts. To differentiate so precisely is a stance from which to arrive at positive criteria for establishing edited texts. A scholarly edition, if and when referring to authorial intention, could under exceptional (meaning: particularly clear-cut) circumstances, introduce authorially intended readings, as critically recognised, into the edited text itself, but it would do this in the manner of conjectural emendation, strictly as the editor’s responsibility. Yet, commonly, an edition would present its editor’s critical assessment of authorial intention discursively in an editorial introduction and/or textual note. This would be the textual critic’s and editor’s ground from which to share in the hermeneutical exploration of a work through its texts. Conversely, the establishment as such of the edited text for a work would remain firmly grounded in the document-supported material evidence for the composition, revision, and transmission of the work’s text, or texts.
Renewed beginnings beyond author-centricity are possible and indeed conceivable for textual scholarship and critical editing. Summing up from what we have here considered and reflected upon, I would propose, simply, that texts themselves in their material manifestations in documents should again become the focus of textual criticism and scholarly editing. Here, the lodestar would no longer be ‘authority’ under the exogenous construction of authorisation and textual authority, superstructured moreover by deference to an authorial intention, in duty to be fulfilled by editors and editions. Textual criticism and scholarly editing will be well served to focus once more on textual validity, as erstwhile under the stemmatic dispensation. Ascertaining and establishing textual validity should thus constitute the core of a renewed methodology. Measures of textual validity would be gained from the author function. Pertaining ontologically to language composed as text, the author function is inscribed universally at any stage or moment of text composition or transmission.
This holds true even where real-life authors impinge most closely on textual traces, which is when we encounter them even physically (or at least in the mediate author-physicality of their handwriting or doodling) in documents of composition. Yet so distinctly, at the same time, is the author function as a compositional function present in drafts, that to edit writing from documents of composition means to edit from their material record not solely validated text as resulting from the acts of writing, but additionally a distinct authorship dimension emerging from the processes of that writing.
What this might mean should be as good a point of entry as any for sustained explorations of the hermeneutic dimension specific to textual criticism and scholarly editing. It is a question hitherto little considered. This is paradoxical, given the extent to which ‘meaning’ has demanded attention in recent theorisings of textual and editorial scholarship. Significantly, though, the need to reflect on ‘meaning’ has followed in the wake of explicating the notion of authorial intention,22 and debates have correspondingly been enacted at a middle, or even a total, distance from texts as materially evidenced. The considerations are, and have been, stimulating; in principle they concede to editorial scholarship an interpretative, and thus ultimately hermeneutic, dimension. What remains to be assessed, however, is the place and quality of interpretative criticism—the hermeneutic stance, in other words—as indissolubly tied back to the manifest materiality of texts and their transmissions.
As for texts as the materialisations of the authoring of works, I believe I have sufficiently indicated that the author dimension and perspective cannot, and must not be abandoned or sacrificed under renewed methodological tenets for the combined disciplines of textual criticism and editing. Yet here, in terms of text, ‘the author’ would cease to be an exogenous legislator and arbitrator, and instead be perceived from the inside, as it were, and thus as a systemically integrated text function within the body of the text-critical and editorial endeavour itself. In terms of textual transmission, the author would be definable as a function of the extant material documents. Simultaneously, though, it should go without saying that the existence of real-life authors would not be negated, nor would expressions of the will of empirical authors, nor would closely critical considerations of authorial intention, by dint of method, be anathematised. Text-critical investigations would continue to be directed towards them, and these would continue to be accounted for in introduction and commentary discourses of editions. In view, furthermore, of the future importance of editing distinct authorship dimensions for texts, considerations of authorial intention (which would similarly have their place in an edition’s discourses collateral to the edited text) should be matched by assessments of authorial responses to self-performed acts of writing (texts, once in the process of composition, will insist on ‘talking back’ to their authors—as anybody knows from everyday experience; and this basically dialogic situation of writing often enough leaves material traces in draft documents—which intrinsically is of both compositional and critical interest). A renewed methodology for textual criticism and scholarly editing, lastly, would as ever be geared towards the closest scrutiny of transmissions for exogenous error. In validating text against error it would still draw all that can be gained from subsidiary methods such as analytical and textual bibliography, palaeography, paper analysis, or digital imaging in all their highly advanced forms. The digital medium, finally, should itself, as no doubt it will, become the future home and environment for the scholarly edition. The present essay may be considered as contributing to reflections on principles towards a praxis of editions ultimately to live as digital scholarly editions.
1 What follows thus seeks to carry forward, and complement with propositions for the document-authoriality relationship, the argument begun with exploring the relationship between document and text in ‘The Primacy of the Document in Editing’, Ecdotica, 4 (2007), 197–207. [French version: ‘La prééminence du document dans l’édition’, in De l’hypertexte au manuscrit. L’apport et les limites du numérique pour l’édition et la valorisation de manuscrits littéraires modernes (Recherches & Travaux, n. 72), ed. by Françoise Leriche et Cécile Maynard (Grenoble: ELLUG, 2008), pp. 39–51.]
2 Hubert Best, international copyright lawyer, illuminatingly informs me (by private email) that ‘under Common Law […] the written contract is in fact only the evidence of the actual contract, which became a legally binding agreement when the parties entered into it. […] [W]ills and deeds […] require documentation and formalities (e.g. witnesses, in the case of a will).’ This reinforces my insistence on the logical distinction between document and text. At the same time, it exemplifies a cultural transition from the oral to the written. The legally binding agreement constituting a contract was entered into by two living partners through a performative speech act and handshake. A will such as we know it, by contrast, since it becomes meaningful only on the death of the person expressing the will, could not exist without the document ‘will’. Importantly, nonetheless, it is essentially not the material document, but the text contained in the document, that the witness testifies to.
3 Cf. the preceding essays ‘Editing Text—Editing Work’, and ‘Thoughts on Scholarly Editing’.
4 Interestingly, the first known graph of a family tree for documents and their texts is documented from Sweden (not coincidentally, perhaps, considering the Linné connection). It was drawn up by Carl Johan Schlyter, the country’s most penetratingly modern textual scholar of his time, who visualised for his 1827 edition of a legal codex, Westgötalagen, the relationship of the texts from ten extant and four inferred documents in a stemma he named ‘Schema Cognationis Codicum manusc’. Cf. Gösta Holm, ‘Carl Johan Schlyter and Textual Scholarship’, Saga och Sed. Kungliga Gustav Adolfs Akademiens årsbok (Uppsala: A. B. Lundequistska Bokhandeln, 1972), pp. 48–80. The Swedish precedence in the development of stemmatology that was soon to gather momentum in nineteenth- and twentieth-century classical and medieval textual scholarship appears hitherto to have gone unnoticed in Germany and elsewhere, where Karl Lachmann is, in the main, credited with its invention.
5 Klaus Hurlebusch, ‘Conceptualisations for Procedures of Authorship’, Studies in Bibliography, 41 (1988), 100–35, suggestively discusses the interaction between individual author and society in the forming of ‘author images’.
6 Peter L. Shillingsburg’s monograph, Scholarly Editing in the Computer Age: Theory and Practice. Third Edition (Ann Arbor: The University of Michigan Press, 1996), by contrast, is, from its opening sentence in ‘Part 1. Theory’ onwards, wholly predicated on ‘concepts of textual authority’.
7 Roland Barthes, ‘The Death of the Author’ [1967], in Image Music Text, essays selected and translated by Stephen Heath (London: Fontana Press, 1977), pp. 142–48, https://grrrr.org/data/edu/20110509-cascone/Barthes-image_music_text.pdf; Michel Foucault, ‘What is an Author?’ [1969], in The Foucault Reader, ed. by Paul Rabinow (New York: Pantheon, 1984), pp. 101–20.
8 Hubert Best (see footnote 2) adds the legal specification: ‘where the common law contract is merely evidence of the actual contract, if the document plainly does not conform with the actual agreement, it is set aside (doctrine of “mistake”).’
9 Peter L. Shillingsburg (see above, note 6) proposes, in ‘Chapter Two: Forms’, a set of ‘orientations’ for scholarly editing, one of which is the ‘authorial orientation’. My present argument is an attempt to give a historical depth perspective to Shillingsburg’s formal, and thus ‘flat’, taxonomy.
10 Such understanding provides the foundation for Roger Lüdeke’s theory of revision, developed in his Wi(e)derlesen. Revisionspraxis und Autorschaft bei Henry James (Tübingen: Stauffenburg, 2002).
11 Again, Peter L. Shillingsburg’s Scholarly Editing in the Computer Age may be cited here for its convenient overview, in the chapter ‘Intention’, pp. 29–39 in the book’s ‘Part I. Theory’, of the concept of authorial intention and its application to Anglo-American scholarly editing in the latter half of the twentieth century. Of greater complexity is David C. Greetham, Theories of the Text (Oxford: Oxford University Press, 1999), chapter 4: ‘Intention in the Text’, pp. 157–205.
12 The late 1970s and the 1980s saw the liveliest debates of the fresh, and distinctly critically motivated, views of the Lear question. Most diversified in its approaches is the book of essays The Division of the Kingdoms, ed. by Gary Taylor and Michael Warren (Oxford: Clarendon Press, 1983; [paperback] 1987).
13 Ronald B. McKerrow, Prolegomena for the Oxford Shakespeare. A Study in Editorial Method (Oxford: Clarendon Press, 1939).
14 W. W. Greg, ‘The Rationale of Copy-Text’, Studies in Bibliography, 3 (1950–1951), 19–36; reprinted in his Collected Papers, ed. by J. C. Maxwell (Oxford: Clarendon Press, 1966), pp. 374–91.
15 By contrast, landmark editions of German authors, as early as the late nineteenth century, had already given scope to the textual evolution of works under their authors’ hands.
16 It was Bowers who published Greg’s ‘Rationale of Copy-Text’ in the 1950–1951 volume of the annual Studies in Bibliography that he had begun to edit.
17 W. K. Wimsatt and Monroe Beardsley, ‘The Intentional Fallacy’, in W. K. Wimsatt, The Verbal Icon (Lexington: University of Kentucky Press, 1954), pp. 3–18.
18 The range of Fredson Bowers’s contributions to the forming of principles and practice of editorial scholarship in the second half of the twentieth century may be gauged from his collection Essays in Bibliography, Text, and Editing (Charlottesville: University Press of Virginia, 1975). Of particular relevance to our discussion here are the essays ‘Multiple Authority: New Concepts of Copy-Text’, pp. 447–87 (reprinted from The Library, 5/27 [1972], 81–115) and ‘Remarks on Eclectic Texts’, pp. 488–528 (reprinted from Proof, 4 [1974], 13–58).
19 TEXT: An Interdisciplinary Annual of Textual Studies. Published from 1984 (for 1981) to 2006, 16 volumes in all.
20 The ‘documentary, aesthetic, authorial, sociological, and bibliographic’ orientations: Shillingsburg, Scholarly Editing in the Computer Age, pp. 16 ff.
21 Hans Zeller, ‘Record and Interpretation’, in Contemporary German Editorial Theory, ed. by Hans Walter Gabler, George Bornstein, and Gillian Borland Pierce (Ann Arbor: The University of Michigan Press, 1995), pp. 17–59 (pp. 24–25). The original essay in German, ‘Befund und Deutung’, appeared in Texte und Varianten. Probleme ihrer Edition und Interpretation, ed. by Hans Zeller and Gunter Martens (München: C. H. Beck, 1971), pp. 45–89.
22 Representative samplings are to be found in Peter L. Shillingsburg, Scholarly Editing in the Computer Age, D. C. Greetham, Theories of the Text, and Paul Eggert, Securing the Past (Cambridge: Cambridge University Press, 2009). Dario Compagno, ‘Theories of Authorship and Intention in the Twentieth Century: an Overview’, Journal of Early Modern Studies, 1/1 (2012), 37–53, furthermore, helps to recognise how these investigations chime with the mainstream arguments in hermeneutics and literary theory.