Linking Fancy unto Fancy: Towards a Semantic Codex

Keywords

meaning, theme, organization, storytelling, summary

How to Cite

Winder, W. (2009). Linking Fancy unto Fancy: Towards a Semantic Codex. Digital Studies/le Champ Numérique, 1(1). DOI: http://doi.org/10.16995/dscn.139


Then, upon the velvet sinking, I betook myself to linking
Fancy unto fancy, thinking what this ominous bird of yore --
What this grim, ungainly, ghastly, gaunt and ominous bird of yore
Meant in croaking "Nevermore."

E.A. Poe "The Raven"

1. Cascading summaries

We will begin by considering Zholkovsky and Shcheglov's formalist interpretation of a fable by the Russian fabulist Krylov called “Trishka’s Caftan” (Figure 1). Their approach is a very general way of capturing and mapping out the meaning of texts. We will consider next the theory and practice of an alternate mapping framework, cascading summaries. Summary trees, cascading summaries, or Russian doll texts are a kind of stratified hypertext (Vandendorpe) composed of a set of cascading and interleaved summaries. Such summaries represent a novel framework for describing the structure of textual meaning, a framework which seems crucially dependent on the electronic medium.

Summarization alters the view we have of text and its meaning. In particular a cascading summary gives us a concrete representation of a hidden dimension of texts. The horizontal, linear surface of the text is explicit; we follow it left to right in our normal linear reading. Concordances show us a vertical dimension of text by stacking up or aligning concordant segments throughout the text (Wooldridge; see also Greimas and Courtés “Isotopy”, the term used in European semiotics). Summarization shows us yet a third dimension, the depth of the text, where text segments are grouped and aligned with a textual variant that is a more general, abstract expression of the subtexts' combined meaning. No part of the abstract variant need be found in the text segments explicitly; it is an expression of the meaning of the segments and must be generated through manual or automatic summarization.

We all possess the natural language competence that allows us to say more with less; it is fundamental to our use of language. Dictionaries are built on the principle that a single word (less) can be unpacked into an explicative definition (more). Dictionaries encode a general-purpose correspondence between words, not the specific correspondence that is progressively constructed through the vertical reading of text (linguists would say that definitions are part of langue, not parole). Yet narrative texts are on-the-fly dictionaries. To interpret them correctly, we must understand how they accumulate meaning in words as the text progresses. “Trishka's caftan” means little before the story; much at the end. How do texts accumulate meaning in symbols? Summaries, like the words they are composed of, mean more with less.

Figure 1: "Trishka's Caftan" by Ivan Krylov

The electronic medium is a prerequisite for reading textual depth effectively. Software is needed that manages textual depth as effectively as the codex manages horizontal text and concordances manage vertical text. The electronic medium offers us a kind of semantic codex that gives us an unusual insight into how meaning is organized in texts.

2. Weaving narrative in “Trishka's Caftan”

Trishka's parable describes the problem of compounding errors; the fixed expression “wearing Trishka’s caftan” has been lexicalized in Russian, in part because of its simple yet emergent plot structure. We certainly have no trouble following the overall logic of the narrative. Much as in Poe’s poem “The Raven”, which is articulated around the raven’s repeated “nevermore”, the plot here constantly comes back to the same narrative crossroads. The repetition is compelling because of a shimmering of meaning, i.e. in these texts, more so than in others, the same theme is viewed from different angles. The crossroads reflects the thematic repetition and at the same time undermines it. In Poe’s poem, the word “nevermore”, though repeated mechanically, changes meaning depending on which question it answers. It means one thing, then another: the narrator asks the bird’s name and the answer is “Nevermore”. He asks when the raven will leave, and the answer is nevermore; will he get over his lost Lenore?—nevermore, and so on. The narrator vainly tries to vary the questions he puts to the raven as a way to escape the word and the looming nothingness it evokes.

In a similar way, Trishka vainly tries to vary his response to a problem but each seemingly different response he invents inevitably means exactly the same thing: that his repairs are misdirected. Trishka is missing the point. The problem is not such that he can resolve it himself, with the means he has at his disposal. It requires an external solution.

Both “Trishka” and “The Raven” have the same general compounding mechanism: one is about compounding error, the other about compounding despair, though they are based on inverse positions of the questioner and the respondent. Poe's narrator varies his question but gets the same response; Trishka varies his response to the same question (how to repair the garment?), but gets the same answer: his garment is damaged. Repetition is foregrounded in both cases.

These texts are captivating because they clearly display the emergent quality of meaning. The dialog of events – i.e. the problems posed and the answers attempted – constitutes a cumulative meaning that is not found in each response separately. In other words, a single mistake does not make Trishka foolish, but not learning from his chain of mistakes does. Likewise, in “The Raven”, it is not any given answer that establishes the meaning, but rather the way all the narrator’s questions point unfailingly to the conclusion that his loss is permanent and profound. Emergent meaning is like compound interest at the bank: its mathematical properties are surprising. In these stories that general process is given form in repeated linguistic structures.

How meaning compounds and emerges is the crucial question, perhaps of psychological life in general, but at least of reading and literary analysis. Any reading is the process of constructing derived meaning from given meaning. If we want software to help us effectively describe and map the meaning of a story, we must formalize the process of deriving meaning from meaning.

The Russian formalists Yuri Shcheglov and Alexander Zholkovsky use the lesser machine of a diagram (Figure 2) to map out the meaning of the Trishka story according to their theory, the Poetics of Expressiveness (PE).

Figure 2: "Trishka" by Zholkovsky and Shcheglov (33)


Overall, the diagram explains in a step-by-step manner how the most general, abstract themes of box 3 (inane activity, paradoxicalness, symbolic character) are progressively embellished into the final surface text of box 10 (the verbal expression “Trishka's caftan”). Along the way themes are transformed by expressive devices (EDs). According to Zholkovsky and Shcheglov:

The metalanguage of elementary EDs makes it possible to record independently every minimal effect of increase in expressiveness en route from Θ [the theme] to T [the text]. In writing out a derivation, the scholar, as it were, ‘counts the tricks’, moving from Θ [the theme] to the T [the text] by gradual approximations, careful not to make any leaps, which would leave unexplained ‘how it turned out so well’ (38).

The first block, numbered 3, gives us the overall thematic structure of the tale. There is nothing artistic about these themes as such – themes are by definition devoid of artistry – but they become artistic by the way they are combined and instantiated in the story. In PE, there is a radical separation between thematic information and the way that information is presented via expressive devices. An expressive device takes general thematic information and reconfigures it into either a derivative thematic configuration or an artistic surface text. That foundational distinction between content and form, or data and display, is understandable because the goal of the theory of expressiveness is to explain how the two become inextricably – artistically – intertwined.

An expressive device is something that transforms a theme into something more compelling than the “upstream” abstract and unartistic formulation. Three expressive devices used in the Trishka map are combination (COMB), concretization (CONCR), and concordance (CONCD).

  • Concretization is a movement to a concrete example: from object to household object.
  • Concordance is the explicit aligning of two thematic configurations: repairing and damaging are given a parallel structure.
  • Combination is a condensing of two thematically distinct items into one: repairing and damaging are one.

These expressive devices belong to our interpretative lexicon, just as words make up our sentence lexicon. We know how to read stories because different stories share the same lexicon of expressive devices.[1]

Zholkovsky and Shcheglov's map of the story is not simple, and even so it is far from complete, both because it is purely synchronic (giving the final, global interpretation, rather than showing how the interpretation comes into being with each line of the story read) and because it does not map crucial dimensions of the text. Surprisingly, their analysis applies only to a truncated version of the story (Figure 3): it does not deal with the “compound interest” theme.

Trishka’s caftan got torn at the elbows.
What is there to spend a long time thinking about? He took a needle.
He cut a quarter off of each sleeve,
And patched the elbows. The caftan is all ready again.
Only the arms have become one quarter barer.
So what, is that a misfortune?
However, everybody is laughing at Trishka.
But Trishka says: Well I am no fool,
And I will remedy that trouble.
I will patch the sleeves and make them even longer than before.
Oh, Trishka is nobody's fool!
He cut off the tails and hems,
And made the sleeve endings longer.
And now my Trishka is happy.
Although he is wearing a caftan,
That is shorter than many an undershirt.
I have sometimes seen certain gentlemen
Who, having made a mess of their affairs,
Then correct them in a similar way;
And lo! They are sporting Trishka’s caftan.

Figure 3: The "Trishka" analyzed by Zholkovsky and Shcheglov

PE has the descriptive machinery to deal with dimensions of compounding, through constructs such as RED (reduction) and VAR (variation), which describe how several different instances of the same theme combine in a single effect. (Though, as we will see, one might argue that their model describes, but does not explain, emergence.) But in general their analysis will always be partial because it does not deal directly with the linguistic material of the source text. Theirs is a top-down analysis which, when it is projected onto the text, makes good sense, but crucially does not describe how the map itself comes into being in a bottom-up fashion, starting with the text. In other words, PE starts with abstract themes and goes to the text's expression level; it leaves unanswered the question of how to start with the text's expression level and go to the theme.[2]

The expression “Trishka's caftan” is a heading or title that summarizes and encapsulates the text it represents. There is an exchange of meaning between the title and its text – the text spells out the title; the title highlights the text. The difficult question is not so much how the title fits the text, but rather how it is derived from the linguistic matter of the text.

Cascading summaries are a bottom-up (text-to-theme) and perhaps simpler, more intuitive way to understand how themes are developed in a text. They represent a similar approach in that they can serve as a kind of map of how meaning is accumulated in texts, but they are certainly less technical, since they do not require the kind of theorization that PE embodies. In fact, summaries, whether cascading or not, are not part of a theory, but rather a natural language competence we all share. Readers are naturally able to summarize a text at different levels of granularity. Formalizing this natural behavior is a different kind of analysis.


3. From text structure to cascading summaries

Just like sentences, texts have both a syntactic (horizontal) and a constituent (depth) organization. The beginning of a story is syntactically linked to the middle and the middle to the end, but each of these syntactic units has parts that join themselves syntactically at a deeper constituent level; constituents have constituents and their syntax, and so on down the constituent hierarchy.

An outline (a simple syntactic tree) is a useful representation of the constituent and syntactic structure of a text. One kind of outline, a simple segment hierarchy, can be constructed by splitting a text into three fragments –a beginning, middle and end– and then dividing in turn each fragment into three parts. By continuing the splitting on each new fragment, we arrive at the deepest level of the hierarchy where each branch ends in a single sentence (or perhaps even a single clause, should we decide to analyze sentences into subjects and predicates):

  • 1 Introduction
    • 1.1 Beginning of introduction
      • 1.1.1 Beginning of beginning of introduction
        • 1.1.1.1 First statement beginning
        • 1.1.1.2 Middle of first statement
        • 1.1.1.3 End of first statement
      • 1.1.2 ...
      • 1.1.3 ...
    • 1.2 Middle of Introduction
      • 1.2.1 ...
      • 1.2.2 ...
      • 1.2.3 ....
    • 1.3 End of Introduction
      • 1.3.1 ....
      • 1.3.2 ....
      • 1.3.3 ....
  • 2 Body
    • 2.1 ....
    • 2.2 ....
    • 2.3 .....
  • 3 Conclusion ...

A segment hierarchy groups sentences and creates different logical generations of fragment topologies, as indicated by each column, and more directly represented by the legal numbering depth. Thus all the #.#.#. nodes are of the same generation, span the entire text, and chunk it at a unique level of granularity.[3]
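To make the construction concrete, here is a minimal sketch in Python (our own illustration; the function names and dictionary layout are assumptions, not an existing tool): a list of sentences is split into three contiguous parts, the splitting recurses until every branch ends in a single sentence, and legal-style numbers are assigned along the way.

def split_in_three(sentences):
    # Split a list of sentences into (up to) three roughly equal contiguous parts.
    n = len(sentences)
    a, b = round(n / 3), round(2 * n / 3)
    return [part for part in (sentences[:a], sentences[a:b], sentences[b:]) if part]

def build_node(sentences, number):
    # Terminal nodes hold a single sentence; the others hold their subparts.
    if len(sentences) == 1:
        return {"id": number, "sentence": sentences[0]}
    return {"id": number,
            "children": [build_node(part, f"{number}.{i + 1}")
                         for i, part in enumerate(split_in_three(sentences))]}

def build_hierarchy(sentences):
    # Top generation: introduction (1), body (2), conclusion (3).
    return [build_node(part, str(i + 1))
            for i, part in enumerate(split_in_three(sentences))]

All nodes whose identifiers have the same depth (1.1, 1.2, ..., 3.3) belong to the same generation and jointly span the whole text.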

Readers recognize and create such hierarchies implicitly. If different readers are asked to segment a text as just described, good readers will structure the text in very similar ways (major divisions in the same spots) and bad readers will not recognize certain boundaries properly. This is simply to say that good readers understand text grammar better than bad readers.

But readers do more than chunk the text. Outlines typically include headings. The segment hierarchy as we have just described it is a peculiar outline because it lacks meaningful headings. Just as naturally as readers divide the text, they also give (sub)titles to text segments that summarize and encapsulate subordinate branches. Such (sub)headings retell their segments in compact form; they summarize the text of the segments. (Note that we do not restrict headings to nominal groups: they can be one or more complete sentences.)

At the bottom of the segment hierarchy, at the terminal nodes, all the sentences of the text are aligned in their text order. If we insert at each superordinate node a heading that represents or summarizes what is in its subordinate nodes, we generate a set of heading generations that describe the whole text but that are, generation by generation towards the root, less and less elaborate, up to the top node, a single heading. Collectively each generation of headings is a summary of the whole text, interleaved and overlapping with the other generation headings of the hierarchy.
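Continuing the sketch above, and assuming each superordinate node has been given a heading (stored here under a hypothetical "heading" key, supplied manually, as in Figure 7 below), one generation of headings can be read off the hierarchy; where a branch bottoms out before the requested depth, its sentence stands in for the heading.

def generation(node, depth):
    # At the requested depth (or at a terminal node) emit the node's heading,
    # falling back to the sentence itself at terminal nodes.
    if depth == 0 or "sentence" in node:
        return [node.get("heading", node.get("sentence"))]
    return [h for child in node["children"] for h in generation(child, depth - 1)]

def generation_summary(hierarchy, depth):
    # Concatenate one whole generation: a complete summary of the text.
    return " ".join(h for node in hierarchy for h in generation(node, depth))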

This is the general structure of our cascading summaries: interleaved, overlapping summaries that are aligned on the same full text (See Figure 7 for a manually constructed cascading summary of “Trishka”).

Cascading summaries depend crucially on generating the proper headings, headings which with each generation must be coherent with each other. In fact, in the ideal cascading summary each generation of headings would be read seamlessly as a natural language summary. There are many ways to generate headings and each way offers insight into the nature of summaries in general and cascading summaries in particular.

4. Generating cascading summaries with Word's autosummarize

Microsoft Word has a function called autosummarize (under the “Tools” menu) that offers a crude tool for summarizing (of itself, it does not generate cascading summaries). It really should be called autoexcerpt, since it only picks out certain parts of a source text for its summary. The user can choose to extract a given percentage of the most important sentences of a document, ranked by their thematic importance using various measures. The ranking is based on very simple text features, like word frequency distribution, sentence position, positive and negative surface cues, and some salient discourse markers. (See Marcu 1999: 123 and bibliography for a list of markers. Autosummarize's algorithm is proprietary, but its results suggest that it uses traditional methods of analysis.)
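Though autosummarize's algorithm is proprietary, the style of ranking described above is easy to approximate. The sketch below is our own approximation, not Word's code: it scores each sentence by the corpus frequency of its words, normalized for sentence length, and extracts the top-ranked sentences restored to text order.

import re
from collections import Counter

def sentences_of(text):
    # A naive sentence splitter, adequate for a sketch.
    return re.split(r"(?<=[.!?])\s+", text.strip())

def rank(text):
    # Score each sentence by the summed corpus frequency of its words,
    # normalized by sentence length; return sentences best first.
    freq = Counter(re.findall(r"[\w']+", text.lower()))
    def score(sentence):
        tokens = re.findall(r"[\w']+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)
    return sorted(sentences_of(text), key=score, reverse=True)

def excerpt(text, percent):
    # Keep the top percent of sentences, restored to their text order.
    all_sentences = sentences_of(text)
    keep = set(rank(text)[:max(1, round(len(all_sentences) * percent / 100))])
    return [s for s in all_sentences if s in keep]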

(5%) Trishka’s caftan got torn at the elbows.
What is there to spend a long time thinking about? He took a needle.
He cut a quarter off of each sleeve,
And patched the elbows. The caftan is all ready again.
Only the arms have become one quarter barer.
So what, is that a misfortune?
However, everybody is laughing at Trishka.
But Trishka says: Well I am no fool,
And I will remedy that trouble.
(15%) I will patch the sleeves and make them even longer than before.
(10%) Oh, Trishka is nobody's fool!
(20%) He cut off the tails and hems,
(25%) And made the sleeve endings longer.
(30%) And now my Trishka is happy.
(35%) Although he is wearing a caftan,
That is shorter than many an undershirt.
I have sometimes seen certain gentlemen
Who, having made a mess of their affairs,
Then correct them in a similar way;
(40%) And lo!
(45%) They are sporting Trishka’s caftan.

Figure 4: “Trishka’s Caftan” (Autosummarize edition; each marked sentence enters the summary at the indicated percentage)

We can use Word to generate a series of summaries in 5% increments from 5% to 100%, and at each 5% increment the summary is that much closer to the original text (Figure 4). Such series of summaries are essentially generations of headings for the cascading summary. They are not aligned explicitly on the source text, but because the “headings” are all sentences Word has chosen from the source text, the alignment can be readily established: since we can locate each heading in the source text, it would be a trivial programming task to have Word segment, align and interleave each summary to automatically form a cascading summary.

Of course “Trishka’s Caftan” is not a fair test of the usefulness of autosummarize for pertinent summarization. Autosummarize is really only useful for long, well-structured documents, perhaps of at least a hundred pages. On longer texts, such as novels, the results are perhaps better, but are still very odd. (Autosummarizing Thoreau’s Walden is not particularly enlightening; the only noticeably interesting thing it does is to pick out some of the famous quotations, such as “the mass of men lead lives of quiet desperation”.) Our goal here is simply to show in concrete terms how one might establish a cascading summary automatically, i.e. a sequence of different summaries, each of which is a more concise version of the original than the preceding one in the sequence. Let us consider therefore an autosummarize cascading summary that is regular (which fragments the text in a regular fashion, something Word would never produce). In this kind of cascading summary the headings could be generated automatically from the source text.
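Under those assumptions, the interleaving is indeed a small programming task. Reusing the excerpt() sketch above, the fragment below builds the 5% to 100% series and records, for each source sentence, the most condensed level at which it first appears – which is all the alignment a cascading summary of excerpts requires.

def cascade(text, step=5):
    # One excerpt per percentage level, from most to least condensed.
    return {p: excerpt(text, p) for p in range(step, 101, step)}

def depth_of_first_appearance(text, step=5):
    # Map each source sentence to the smallest level that includes it;
    # every sentence appears at the 100% level, so the minimum exists.
    levels = cascade(text, step)
    return {s: min(p for p, level in levels.items() if s in level)
            for s in sentences_of(text)}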

If we tilt the autosummarize cascading summary 90 degrees counter-clockwise, we will see an outline as a tree “planted” in the single most representative sentence (Figure 6), which in the “Trishka” example was found at the 5% level. The top row is the set of all sentences of the text (i.e. the original text): all the other sentences in the tree are drawn from that common pool through the excerpting process. The sum of sentences on a given row makes a complete summary. It is a stratum or level of the hierarchy which we have called a heading generation, the topmost branches of the tree being the source text segmented by sentences.

Figure 5: Sentence selection

There are many ways to imagine how extracted sentences could be distributed in this tree. If the extraction were totally systematic, sentences would flow down the tree towards the root, as is done in sports ladders. So summarization would always be a question of choosing the best of two sentences. For example, Sa is selected to summarize the fragment composed of S1 and S2 and would be chosen among these two, and the same scenario for Sb, Sc and so on. Then St would be the winner between Sa and Sb; Su the winner between Sc and Sd, and so on.

Another way would be to redo the competition at each level with all the original, topmost sentences of that segment; sentences would not flow directly down the tree, but at each level would compete on equal footing for an ever-expanding pool of sentences. For example, Sa would summarize S1 and S2 and would be chosen among these two; St would summarize S1-4, and would be chosen from among those four, rather than on the basis of a competition between just the winners at the level below, Sa and Sb. In this scenario, extracting would be a question of selecting the sentence which is the most representative in an increasingly larger subset of sentences. Any sentence could therefore be found at a lower level, without it necessarily reappearing at intervening levels up to the top. So, for example, Sp might only appear at the root of the tree and nowhere else, except of course at the top, in the original text.

In fact, autosummarize follows a different method altogether: any sentence at a lower level will be found at all the levels above it. For example, Sp, because it is the root sentence, will be found in all the other summaries. This is because every sentence in Word is ranked against the total population of sentences at every level, not with respect to a given local chunk of the text. So the ranking happens once, globally, and the different percentage levels are chosen purely from the overall top-ranking sentences, with no local representation.

The three most general methods are therefore: 1) a simple sports ladder ranking, where the local “winners” compete with other local winners, and two “augmented” ladders: 2) an increasing pool of competition at each level, and 3) a single overall ranking.
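All three can be stated compactly in code. In this sketch (our formulation of the three methods, not an existing implementation), score is any function from a sentence to a number, such as the frequency score above; each function returns the successive generations, from the full sentence list down to a single root sentence.

def simple_ladder(sentences, score, width=2):
    # 1) Local winners advance and then compete only with other winners.
    levels = [sentences]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([max(prev[i:i + width], key=score)
                       for i in range(0, len(prev), width)])
    return levels

def expanding_pool(sentences, score, width=2):
    # 2) Each level reruns the competition over all the original sentences
    #    of an ever-wider segment, so earlier losers can still win.
    levels, span = [sentences], width
    while len(levels[-1]) > 1:
        levels.append([max(sentences[i:i + span], key=score)
                       for i in range(0, len(sentences), span)])
        span *= width
    return levels

def global_ranking(sentences, score):
    # 3) Autosummarize-style: rank once, globally; every lower level is
    #    a subset of the level above it.
    ranked = sorted(sentences, key=score, reverse=True)
    return [ranked[:k] for k in range(len(sentences), 0, -1)]

Note that with an absolute score such as sentence length, methods 1 and 2 select identical winners; they diverge only under the population-relative comparatives discussed in section 6.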

Figure 6: Autosummarize selection process


All these models have obvious problems (which are seen concretely in the output of autosummarize), the crucial one being that selection should be conditioned by discourse topology. For example, it is clear that the “Trishka” story ends with a kind of summary or metanarrative, which should be evaluated separately. None of these models would deal with that discourse topology effectively.

Even though Microsoft calls its feature summarization, it is really only extracting salient subtexts. The main criticism one could make of the system is that it does not segment texts into meaningful subcomponents, each of which would have an appropriate representation at the next summary level. In other words, in Word each sentence of the text is considered with respect to all the other sentences and there is no appreciation for the role and meaning of a sentence in a given subtext. The first and second models have the advantage of restricting competition to local regions and therefore making comparisons that are more appropriate. (The other obvious problem, which we are not considering at this point, is that all three models are excerpting models, not summarization models.)

To illustrate the problem, one could imagine an autosummarize applied to things, such as a bicycle, rather than to texts. In order to create a “summary” of a bicycle, it would compare all components together, such as a bolt, a wheel, handlebars, or an inner tube, and select one as the most emblematic of the whole bicycle, perhaps the handlebars. For the kind of summaries we are considering, this kind of extraction is simply not good enough. We want to extract a miniature bicycle, not any single part, however emblematic; we would like to have a bit of a wheel, a bit of the handlebars, a bit of the seat, etc.
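A sketch of that alternative, under the same assumptions as the earlier fragments: rank within each segment of a given generation, so that every part of the text contributes something to the extract.

def local_excerpt(segments, score):
    # Keep the best sentence from each segment: a bit of the wheel and a bit
    # of the handlebars, rather than the single most emblematic part overall.
    return [max(segment, key=score) for segment in segments]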

5. Reading a cascading “Trishka”: the stratified browser

Summaries are good guides to understanding stories. Properly used, they can teach us how a story should be read because they show how different levels of the story's details mesh; they describe in a step by step manner how a reader has teased out the layers of depth in the text.

Whether manually or automatically constructed, cascading summaries depend on the electronic environment. No doubt we could cite some pedagogical print predecessors, such as dictionaries, indices, parallel translations, marginal notes and cross references, but these are generally unwieldy and never display the same cascading granularity (one might argue that a dictionary does have that granularity, but it represents a unique kind of narrative). Paper simply cannot manage textual depth adequately, principally because the volume of repetitious text is simply too great. (Dictionaries face this problem and generally resort to their own technical abbreviations and conventions.)

But once having moved to the electronic medium, one can imagine many application areas for cascading summaries. Let us consider one briefly.


5.1 One application area: teaching reading through cascading summaries

Motivating language learners to read in their target foreign language presents a dilemma: on the one hand, students want to read, in the original, authors whose works they already know, perhaps as part of the literary canon in their native language (Shakespeare is an example for English as a Second Language; Hugo for French as a Second Language). Texts from such authors offer a rich and yet familiar conceptual framework for language learning and a natural transition to immersion in a foreign language and culture.

On the other hand, texts by well-known authors are often difficult to read in the original, even for native speakers. Thus though "Hamlet" is extremely well known, its archaic lexicon and syntax are such that it would make little sense to suggest this text as introductory reading for an ESL student, however motivating and useful prior knowledge of the plot might be.

One solution to that dilemma is to rely on scaled down versions of classic stories that are hand-crafted for language learners. Instead of the original Hamlet, one could imagine any number of simplified Hamlets written at different levels of linguistic complexity. These derivative Hamlets could serve as useful stepping stones towards a mature reading of the original. The student would start by first reading a simple but complete version of Hamlet at the appropriate level, with a simple syntax and lexicon, and then reread the entire story, but in a more complex version. By reading the same story several times at increasing levels of complexity, the student should be able to gain in a systematic manner the linguistic expertise needed to read the original, each layer of summaries serving as a kind of dictionary to the other layers.

The crucial step for such an application, however, is the special software that manages cascading summaries: what is needed is a new kind of “semantic codex”, where one can page through layers of meaning depth just as easily as one pages through horizontal text.


5.2 A prototype of a cascading summary browser

Figure 7 gives a manually constructed cascading summary of Trishka’s story: each subheading constitutes a more detailed and elaborate version of the higher levels, until we arrive at the lowest level: the original text.

People tend to repeat their errors.

		Someone tries to correct an error, but repeats it. 

		Trishka tried to repair his garment.

		Trishka's caftan needed mending.

		Trishka's caftan needed mending at the elbows.

		Trishka's caftan got torn at the elbows.

		He decided to repair it himself.

		Without thinking, he got to work.

		What is there to spend a long time thinking about? He took a needle.

		But he damaged another part while fixing the first.

		He made a patch out of one part and used it on another.

		He repaired them by cutting a bit off each sleeve and using what he cut off as patches.

		He cut a quarter off of each sleeve,

		And patched the elbows. The caftan is all ready again.

		But that left a gap where the patch was taken.

		The sleeves were however shorter.

						Only the arms have become one quarter barer.

			It was clearly a foolish solution.

				Everyone thought it was a foolish solution.

					But it seemed better to him.

						So what, is that a misfortune?

					Everybody thought Trishka was foolish.

						However, everybody is laughing at Trishka.

			He didn't want to accept failure so he persisted and made the same blunder yet again.

				But he persisted, using the same method again. 

				Trishka thought he was right and could fix the remaining problem (the same way). 

						But Trishka says: Well I am no fool, 

		And I will remedy that trouble.

				He decided to patch the sleeves. 

		I will patch the sleeves and make them even longer than before.

				He thought his solution was inventive.

		Oh, Trishka is nobody's fool!

				Trishka repaired the sleeves by cutting off the tails and hems and adding to the sleeves.

		He cut off the tails and hems,

		And made the sleeve endings longer. 

			Again, his solution was just as foolish as the first time. 

				He was satisfied with the result.

		and now my Trishka is happy. 

				But his caftan was too short

		Although he is wearing a caftan, 

		That is shorter than many an undershirt.

		Many people do that. 

		Many people compound their errors too.

		Many people are like Trishka: 

		Some people are like Trishka.

		I have sometimes seen certain gentlemen

			They compound their errors.

			They reproduce their original error when they try to fix things.

		Who, having made a mess of their affairs,

		Then correct them in a similar way;

		They are wearing Trishka's caftan.

		And lo! They are sporting Trishka's caftan.

Figure 7: Manual cascading summary of "Trishka" as presented in the stratified browser

Though outlines are routinely managed in most word processors, a cascading summary requires a special kind of interface to properly exploit the readings it offers. One illustration of such software can be found in the JavaScript prototype “Stratified Browser” (Winder). (Most XML editors, such as XML Copy Editor or Oxygen, can use XSLT to produce similar results.) The browser is designed to display any configuration of the cascade nodes. Most outline browsers will not allow a single column of an outline to be displayed, nor can arbitrary nodes be chosen and displayed alone on the page. The stratified browser does this in many different ways (Figure 8).

The functions in the upper command panel can be activated either through the menu to the left (B) or, for the most part, through the shortcuts to the right (D). In the middle is a drop down selection box with all the unique node number identifiers (C). The browser allows nodes to be displayed according to level or node. In Figure 8, the selected node is 0.0 – the topmost summary, which summarizes the whole text in one sentence (B). Levels and nodes can be independently opened, inserted or closed:

  • Open: shows a particular level or node alone on the page (in the Trishka example, there are 6 levels: levels 0 to 5);
  • Insert: inserts a level or node in whatever is already displayed;
  • Close: erases a given level or node.

Figure 8: Stratified browser interface

These functions are executed on the node/level above (up) or below (down) the selected node/level, so that repeated clicks will continuously expand or contract the outline. "Open all" gives the full outline; "clear all" clears the page. Node operations are limited to insert and close; they do not erase other displayed nodes. The abbreviations are therefore: All: o (open all), c (close all); Level: cup/cdn (close up/down), oup/odn (open up/down), iup/idn (insert up/down); Node: o (insert node), c (close node), oup/odn (insert up/down), cup/cdn (erase up/down). Different display styles which graphically indicate the hierarchy of the cascade can be selected from the menu (A): Plain font (same font throughout), Size (font size differences), Indent, and so on.
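The display logic just described reduces to set operations over node identifiers. The Python model below is our own reconstruction of that behavior, not the prototype's JavaScript: nodes are keyed by identifiers such as "3.2" (level 3, node 2) and the visible set is what gets rendered.

class StratifiedView:
    def __init__(self, nodes):
        # nodes: {node_id: (level, text)}, e.g. {"0.0": (0, "People tend ...")}
        self.nodes = nodes
        self.visible = set()

    def _at_level(self, level):
        return {i for i, (lv, _) in self.nodes.items() if lv == level}

    def open_level(self, level):      # show a level alone on the page
        self.visible = self._at_level(level)

    def insert_level(self, level):    # add a level to the current display
        self.visible |= self._at_level(level)

    def close_level(self, level):     # erase a level
        self.visible -= self._at_level(level)

    def insert_node(self, node_id):   # node operations are insert and close only
        self.visible.add(node_id)

    def close_node(self, node_id):
        self.visible.discard(node_id)

    def render(self):
        # Emit visible nodes in numeric identifier order; a full implementation
        # would interleave levels by text span rather than group them.
        order = sorted(self.visible, key=lambda i: tuple(map(int, i.split("."))))
        return [self.nodes[i][1] for i in order]

Displaying levels 3 and 4 together, as in Figure 9, is then open_level(3) followed by insert_level(4).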

Figure 9 is a screen shot of two levels (3 and 4) that are displayed simultaneously:

Figure 9: Display of levels 3 and 4 in the stratified browser

6. Compounding meaning in the semantic codex

Unlike autosummarize, the manual summary given in Figure 7 and used in the stratified browser is not based on extraction, but rather each summary level is a new text, and the links between parent and daughter texts are added after each summary has been created. Manual summaries are constructed in an intuitive way, with no formalised guiding principle. In fact, formulating explicit guidelines for creating summaries is a central theoretical issue for the field of automatic summarization. (See for example the Rhetorical Structure Theory approach in Marcu 2000.) If we did have clear guidelines for summaries, then rather than writing each summary level from scratch, we could build tools that would generate a daughter summary – perhaps not a fully automatic system like autosummarize, but rather a framework for generating summaries of a specified length.

We can see some of the problems that compounding presents us with by contrasting the different sports ladders we presented above (“Generating Cascading Summaries”).

When we produce an outline, we are segmenting the signifier, and only secondarily the signified. If meaning is anchored in context, and all local context is continuous with the global context, then we cannot properly segment meaning into discrete, combinable units. This problem seems most acute in the case of “compound interest”, which relies on the accumulation of effect through repetition. A simple hint of compounding is sufficient to evoke the effects of compounding, just as an infinite series of numbers can be evoked by only 3 numbers and an ellipsis (2, 4, 6 ...). Can the levels of an outline deal with the continuity of meaning? (See Fuchs and Victorri.)

Notice that the augmented ladders could degenerate into simple ladders, depending on the selection criteria of the “heats” between local sentences. There are many kinds of comparatives that could be used to determine the winner of a heat. For example, if sentences were compared using only the comparative “longer than” (more words than), there would be no difference between the simple and the first augmented sports ladder. The comparative “longer than” is an absolute transitive comparative: if A is longer than B, and B longer than C, then A is longer than C. For this kind of selection, losers in any heat would always remain losers, whether in the augmented or simple systems; the two ladders would collapse into one.

That would not be the case if gestalt comparatives[4] were used as selection criteria. These comparatives define an individual's qualities in function of the population it is being compared with. If the population changes, then the comparative quality changes. Thus, losers at one point in the ladder might be winners at a later level in the augmented ladder because in such ladders, unlike the simple ladder, the population shifts. For example, the descriptive quality “closer to the mean length of sentences” applied to one individual may not be applicable to the same individual as the population changes, because the reference value of the mean will shift as the population shifts. In one heat in which three sentences are compared, one sentence would be closest to the mean length, and the other two would therefore lose that heat. However, if the next heat included those same sentences plus a fourth, one of the former losers might win.
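A tiny worked example of such a population-relative selection, with “closest to the mean length” as the criterion (our own illustration): adding one sentence to the pool moves the mean and crowns a former loser.

def closest_to_mean(pool):
    # The winner is defined relative to the pool: the sentence whose word
    # count is nearest the pool's mean length.
    mean = sum(len(s.split()) for s in pool) / len(pool)
    return min(pool, key=lambda s: abs(len(s.split()) - mean))

three = "one two three"
six = "one two three four five six"
nine = "one two three four five six seven eight nine"
print(closest_to_mean([three, six, nine]))        # mean 6.0: the 6-word sentence wins
print(closest_to_mean([three, six, nine,
                       " ".join(["word"] * 30)])) # mean 12.0: the 9-word loser now wins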

Gestalt comparatives are linguistically rare (comparatives are generally absolute transitive comparatives, such as “bigger”, “more intelligent”, “more beautiful”), but topological position is a very broad gestalt comparative that underlies much of our thinking. We can no doubt explain the interest, difficulty and computational intractability of the otherwise simple game of Go as the effect of gestalt comparatives. Thus, in Go the winning position, as defined by “surrounding” the enemy, depends on whether there is connectivity between the surrounding pieces (for both opponents). Who is surrounding whom cannot be established until the population's connectivity has been established and assessed, and the crucial value of a position changes with each new board configuration. More generally, our understanding of space (for example, to be “left of”) depends on the point of reference; spatially, there is nothing intrinsic to the nature of an individual that will allow us to position it without its context.

Text meaning depends crucially on such topological considerations. Gestalt comparatives are a source of an important and computationally awkward dimension of text where the value of a textual item depends on its context – a context which evolves at each textual depth and with the expanding vertical and horizontal reading of the text. The meaning of a linguistic unit is not unchanging simply because the unit is at a fixed position in the text.

Meaning compounds simply through the virtual restriction and contraction of context. From this perspective text is a swarm of compounding layers, each at different developmental stages of the final text. Understanding text as generated by layers is the central challenge that the notion of cascading summaries begins to capture.

7. Conclusion

Whatever the complications that confront an automatic generation of summaries, the fact that we do summarize, usefully and effectively so, suggests that meaning can be chunked and diluted and that cascading summaries do offer the promise of allowing us to page through blocks of meaning rather than paging through blocks of signifiers, as we have done for centuries with the print codex.

The electronic medium offers a new framework for mapping out our understanding of text (see for example the database format in Liu and Smith, as well as the mainstream research on TEI and on XML and XSLT, critiqued in the same article). We have not considered here general, low-level issues such as the relation between grids (the fundamental data interface of spreadsheets), tree hierarchies (the fundamental data format of XML and cascading summaries), and the underlying abstract database model. For many questions, the differences in data structures are unimportant. What does seem a crucial, looming concern is merging the discrete structures of the computer with the more amorphous form of linguistic meaning. Capturing that difficult negotiation will rely pivotally on text generation.

Classical hypertexts are not defined in terms of text generation or compounding. So-called dynamic links are syntactic references that are pasted on top of texts. On the other hand, the links of a cascading summary are ideally generated or at least systematically constrained by the source text’s meaning. Cascading summaries have semantic links because such links are generated according to relevance. Summaries are by nature tightly bound to the source text’s meaning, and that is why a cascading summary has a special place when theorizing semantic links.

There is a fundamental divide between paper and computers: paper does not generate; the computer does. The strategic terrain for humanities computing is making text generation part of how we think about texts, both practically and theoretically. The meaning of texts is not static, but paper makes it seem so. The fact that computers instantiate the generative dimension of texts is perhaps the most significant shift that the new medium brings to our understanding of how symbols become infused with meaning.


Works Cited

Fuchs, Catherine, and Bernard Victorri, eds. Continuity in Linguistic Semantics. Philadelphia: John Benjamins, 1994.

Greimas, Algirdas Julien, and Joseph Courtés. Semiotics and Language: An Analytical Dictionary. Trans. Larry Crist et al. Bloomington: Indiana UP, 1982.

Liu, Yin, and Jeff Smith. “A Relational Database Model for Text Encoding.” Computing in the Humanities Working Papers. 2008. Web. <http://www2.arts.ubc.ca/chwp/CHC2007/Liu_Smith/Liu_Smith.htm>.

Marcu, Daniel. “Discourse Trees Are Good Indicators of Importance in Text.” Advances in Automatic Text Summarization. Eds. Inderjeet Mani and Mark T. Maybury. Cambridge, MA: MIT Press, 1999. 123-136.

——. The Theory and Practice of Discourse Parsing and Summarization. Cambridge, MA: MIT Press, 2000.

Shcheglov, Yuri, and Alexander Zholkovsky. Poetics of Expressiveness: A Theory and Applications. Philadelphia: John Benjamins, 1987.

Vandendorpe, Christian. “Variétés de l’hypertexte.” Astrolab: Encyclopédie. University of Ottawa. 2000. Web. <http://www.uottawa.ca/academic/arts/astrolabe/auteurs.htm>.

Winder, William. Stratified Browser v. 2. 2004. Web. <http://faculty.arts.ubc.ca/winder/me/mutanda/version2/t_browser_main.htm>.

Wooldridge, Russon. “Lectures-écritures verticales.” Oeuvres et critiques XIX.1 (1994): 115-22. Rpt. <http://www.chass.utoronto.ca/~wulfric/articles2/vertic94/index.html>.

Zholkovsky, Alexander. Themes and Texts: Towards a Poetics of Expressiveness. Ithaca: Cornell UP, 1984.



[1] Whether EDs are specific to a particular genre or culture is an interesting question. To the degree that art is universal, one might speculate that at least some EDs are universal.

[2] Zholkovsky recognizes these limitations:

Results so far obtained are mostly paradigmatic in nature: PE can claim relative success in discovering and formulating vertical links between themes and texts. Other problems –such as the syntagmatic links within levels, the very definition of level as a horizontal stage of a derivation, and some related problems, for example, the formalisation of the derivational process– remains open. Another possible direction of future efforts is to reformulate the problem of intertextuality in terms of the PE framework. (31, our emphasis)

Summarization is perhaps the simplest form of intertextuality.

[3] If one were to disregard the meaning of the text when fragmenting, such hierarchies would be regular in structure, so that all branches would have exactly the same ramified form. But if the choice of the beginning, middle and end of a fragment is based on the fragment's particular structure and meaning, some branches might be more ramified than others. For example, a text might be divided into a long introduction (with many subdivisions), a short body (with few subdivisions) and an average conclusion (between the former two). As well, the division of each fragment into precisely three parts is arbitrary – if divisions are based on meaning, some fragments might be divided into more parts than others. In general, not all segment hierarchies are regular; some are irregular in various ways. In general, however, we will consider only the simple case of regular segment hierarchies.

[4] This is our term for relative transitive comparatives, which are linguistic equivalents of gestalt optical illusions, such as the Hering illusion.

Authors

William Winder (University of British Columbia)