Saturday, April 23, 2011

Majority Text: True Power of the Probability Argument (pt V)

Some people may think that the argument in favor of the Majority text is simply this: that errors, being introduced later in the stream, will almost always be stuck in the minority of manuscripts.

This, however, is not the actual argument at all.  The possibility that a manuscript with a given error (or set of errors) could be copied more often than manuscripts without them is actually a given.  As Hodges notes:
"...of course an error might easily be copied more often than the original in any particular instance." (Pickering, Identity..., Appendix C, p. 165)
But the point is, this only works once.  Errors can't accumulate gradually in such a manner.  Let's see why.  We start with the Yellow Packet copies getting copied more often, which gives an initial false majority for the errors introduced by the Yellow master-copy:

Errors do indeed accumulate.  In the above diagram, all manuscripts copied from the first copy with the Yellow Packet will carry its errors.  Further down, an Orange, a Red, and a Purple Error Packet are added.  But the effect is obvious:
White (Pure)    - 10 / 25 = 40%  - minority reading (unfortunate)
Yellow Packet   - 15 / 25 = 60%  - majority - false ('lucky break')
Orange Packet   -  8 / 25 = 32%  - minority
Red Packet      -  3 / 25 = 12%  - minority
Purple Packet   -  1 / 25 =  4%  - minority
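As a quick check, the tallies above can be reproduced with a short sketch; the packet counts are simply those from the diagram:

```python
# Reading-support tallies for the 25 surviving manuscripts in the example.
# Counts are taken from the diagram in the text.
TOTAL = 25
packets = [
    ("White (Pure)", 10),
    ("Yellow Packet", 15),
    ("Orange Packet", 8),
    ("Red Packet", 3),
    ("Purple Packet", 1),
]
for name, count in packets:
    share = count / TOTAL
    status = "majority" if share > 0.5 else "minority"
    print(f"{name:14s} {count:2d}/{TOTAL} = {share:4.0%}  {status}")
```

Only the Yellow Packet crosses the 50% line; every later packet is pinned in a shrinking minority.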
It doesn't take a genius to see that, again, the natural tendency pins down most subsequent errors as minority readings.  This doesn't bode well for the Purple text.  The very manuscripts that support the Yellow Packet readings testify strongly against the Purple Packet readings.

(Secondly, although the 'White text' (pure text) as a unit is in the minority of MSS, its readings remain in 99% of cases perfectly safe, still safeguarded by majority readings.)

Even wiping out all earlier generations doesn't help.  This only stabilizes the percentages for each group of readings once normal or average copying is resumed:

White Packet    - 2 / 8 = 25%    - minority (in present / future)
Yellow Packet   - 6 / 8 = 75%    - majority - 1 false reading set
Orange Packet   - 4 / 8 = 50%    - neutral / uncertain
Red Packet      - 2 / 8 = 25%    - minority / true reading
Purple Packet   - 1 / 8 = 12.5%  - minority / true reading
Although this extreme case seems to undermine the reliability of majority readings, it does not.  Probabilities remain strongly in favor of majority readings.  Let's see why.  We need to remember that only a very small number of early and frequently copied readings will have a false majority rating (e.g., the Yellow Packet).
The majority of errors in the extreme texts (e.g., the Purple text) will have their reading-support all over the map, with very few false positives (e.g., Yellow); the bulk of errors will remain graduated minority readings.
Error Packets (and real errors) will still be identifiable because:
(1) These minority readings will still be strongly associated with the most corrupted and generationally later texts (e.g., Orange, Red, Purple).
(2) These texts will be easily identified, because (a) as texts or composite groups of error-packets they will remain minority texts, and (b) the differently supported packets allow us to use genealogical methods.

Typically, opponents of the Majority of MSS Model will reason that a process of uneven copying could occur repeatedly, boosting minority readings into majority readings on a larger scale.  Hodges showed the failure of this argument: cumulatively speaking, the probabilities of multiple accidents favoring a bad text plummet rapidly.  In discussing the case of a second error (or error packet) in a following generation, Hodges states:
"Now we have conceded that 'error A' is found in 4 copies while the true reading is only in 2.  But when 'error B' is introduced [in the next generation], its rival is found in 5 copies.  Will anyone suppose that 'error B' will have the same good fortune as 'error A', and be copied more than all the other branches combined?...but conceding this far less probable situation, ...will anyone believe the same for 'error C'? ...the probability is drastically reduced." (Appendix, p. 167)
These 'errors' would be equivalent to our Yellow, Orange, Red Packets respectively. 

Compounding Unlikely Events: Rapidly Decreasing Probability

We allowed for one catastrophe: the over-copying of the Yellow Packet.  Hodges' argument here is actually so powerful, it's clinching:

Probabilities are calculated by multiplication, with the probability of each event represented by a fraction less than 1.  A 50% chance of an error being over-copied (as an example) means it could happen 1/2 the time.  But the chance that a second error is also over-copied is (1/2) times (1/2) = 1/4, only 25%.  The chance of three equally probable events happening in a row is 12.5%.  This is the same as flipping a coin: for a fair and random coin-toss, the chance of tossing 7 'heads' in a row is less than 1%.
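The arithmetic can be laid out explicitly; the 50/50 odds per generation are the text's own working assumption:

```python
# Chance that an over-copying 'accident' (probability 1/2 each time)
# repeats in n consecutive generations: (1/2) ** n.
for n in range(1, 8):
    p = 0.5 ** n
    print(f"{n} consecutive lucky breaks: {p:.4f} ({p:.2%})")
# Seven in a row: (1/2)**7 = 1/128, about 0.78% -- under 1%.
```

Each additional 'lucky break' halves the remaining probability, which is why the compounded chance collapses so quickly.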

Likewise, even with 50/50 odds, seven generations of errors have almost no chance of being consistently copied more often than their rival readings in a random, undirected process.

But our observations here go far beyond even this argument: it's a case of the experiment being poisoned before it can even get off the ground.

The Defocussing Effect of Noise on Transmission

What has not been mentioned so far in any of these discussions is the fact that ALL scribes introduce errors, in every single copy.  Contrary to intuition, this actually also assists the Majority Reading Model, by further sabotaging false positives.

The scheme above isolates four Error-Packets for discussion, and the analysis is valid because they are 'independent' in the sense that errors normally won't overlap or interfere with each other in the early transmission.  It's like a needle in a haystack: the chances of two errors bumping into each other are nearly zero.

But with errors being added randomly and on average roughly equally with each copy, we have now introduced random noise into the signal at all points.  This random noise acts to mask the false signals as effectively as the true signals.

One can think of injected random noise as a 'damping factor': a bell rings clearly and long in the open air, but a mute or mask attenuates both the loudness of the bell and the duration of the note.  Imbalances (spikes and dips) in the transmission process are softened, evened out, and muted in a variety of ways, randomly.  This impedes the efficiency of transmission; the clarity and duration of false signals (errors), as well as of true ones, are attenuated.

However, the true signals have an enormous starting advantage: they are already 100% majority readings, and it takes a lot of accumulated noise in the transmission to disturb the original text enough to actually supplant a majority reading.  These are modern considerations, now well analyzed by engineers, but unknown to 19th-century textual critics relying on 'common sense' or intuitive guesses.
Although both true and false signals are attenuated and masked by noise, the much smaller error signals suffer the most relative damage from further random noise.  Anomalies in the error transmission are smoothed, masked, and truncated by random processes, which defocus unique and unusual signals in the mix.
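A toy model makes the starting-advantage concrete.  The copying scheme here (every manuscript copied twice per generation, each copy adding a fresh error of its own) is my own simplification for illustration, not Hodges' model:

```python
# Toy transmission model (an illustrative simplification, not from the text):
# each manuscript is copied twice per generation, and each copy introduces
# one fresh error of its own.  An error born in generation g is inherited
# by 2**(G - g) of the 2**G final copies, so later errors occupy
# ever-smaller minorities, while the original readings remain everywhere.
G = 7  # number of generations in this hypothetical tree
for g in range(1, G + 1):
    share = 2 ** (G - g) / 2 ** G
    print(f"error born in generation {g}: carried by {share:.2%} of final copies")
```

Even in this even-handed scheme, no error ever exceeds 50% support, and each later generation's errors are buried twice as deep in the minority, which is the 'defocussing' effect described above.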

