Friday, April 29, 2011

Chronology of Printed GNTs: 1500 - 1600

This is the first part of a series that deals with the history of printed texts of the New Testament. Consider if you will the following diagram:


1500 - 1600 (16th century)
The 16th century saw the beginning of printed critical Greek New Testaments: even the very first, that of Erasmus, was made by careful collation of as many manuscripts as were readily available, using both critical judgement and comparison with the standard Latin text. The first three editors/printers of the Greek NT laid the foundation for the great Protestant translations which quickly followed.

Erasmus (1516-22): The first printed Greek NT, with a fresh translation into Latin in parallel columns. Although he had only a half-dozen MSS available at the time of printing, he must also have consulted various MSS while in England preparing his new translation (1505-6). It is estimated that he consulted about a dozen MSS, mostly later Byzantine copies, as well as the Latin Vulgate. Even at this early date, Erasmus was aware of the issues surrounding the four most serious variants:
(1) Mark's Ending: Mk 16:9-20,
(2) Pericope de Adultera: Jn 7:53-8:11,
(3) The Johannine Comma: 1st Jn 5:7, and
(4) The Great Mystery: 1st Tim 3:16.
In fact, Erasmus left out the Johannine Comma in the first two printings. He also discussed variants like the Lord's Prayer (Matt. 6:13), the Rich Young Man (Matt. 19:17-22), the Angelic Song (Lk 2:14), and the Bloody Sweat (Lk 22:43-44). It is said that the copies of Revelation available to him at Basel lacked the final verses, and that he back-translated these from the Latin Vulgate; but Hoskier doubts this, and thinks he followed Codex 141.

Erasmus on several occasions preferred the Latin Vulgate reading where the Byzantine texts seemed deficient, the most important cases being:
(1) Matt. 10:8 'raise the dead', as in א B C D 1, Latin Vulg.
(2) Matt 27:35 'that it might be fulfilled..' - MS 1, Caes. MSS, Syr-Hark., OL/Vulg. Euseb.
(3) John 3:25 '...the Jews about purification' - MS 1, P66, א , Caes. MSS, OL/Vulg.
(4) Acts 8:37, - MS F, Iren. Cypr. OL/Vulg.
(5) Acts 9:5,6 - MS E, 431, OL/Vulg.
(6) Acts 20:28 'Church of God' - Vulgate, א B etc.
(7) Rom 16:25 - placed at end of chapt 16 as in א B C D etc.
(8) Rev 22:19 - 'book' MS F, Vulgate, Boh. Ambrose, Prim. Haym.

Robert Stephanus, 'Estienne' (1550): The third and most important of R. Stephanus' editions, known as the Editio Regia, was substantially based on the final lifetime recension of Erasmus. A collation against Stephanus' first edition of 1546 reveals that in 38 passages the editor here rejected the Complutensian reading in favor of that of Erasmus, whereas the converse occurs only twice. It was for this text that Stephanus devised the division into numbered verses, which he first put into print in his 1551 edition. These two Stephanus printings (1550, 1551) were utilized by the translators of the New Testament for the 1611 King James Bible, and came to be cited as the foundation of the 1633 "Textus Receptus" Greek New Testament printed by the Elzeviers (DM 4679) – "Est haec ipsa editio ex qua derivatur quem nostri textum receptum vulgo vocant, nomine rei minus bene aptato".

Theodore Beza (1565, 1582): a French Protestant theologian and scholar who played an important role in the early Reformation. A member of the monarchomaque movement, which opposed absolute monarchy, he was a disciple of John Calvin and lived most of his life in Switzerland.

In 1565 he issued an edition of the Greek New Testament, accompanied in parallel columns by the text of the Vulgate and a translation of his own (already published as early as 1556). Annotations were added, also previously published, but now he greatly enriched and enlarged them.

In the preparation of this edition of the Greek text, but much more in the preparation of the second edition which he brought out in 1582, Beza may have availed himself of the help of two very valuable manuscripts. One is known as the Codex Bezae, or Cantabrigiensis, and was later presented by Beza to the University of Cambridge; the second is the Codex Claromontanus, which Beza had found in Clermont (now in the National Library at Paris).

It was not, however, to these sources that Beza was chiefly indebted, but rather to the previous edition of the eminent Robert Estienne (1550), itself based in great measure upon one of the later editions of Erasmus. Beza's labors in this direction were exceedingly helpful to those who came after.
"Beza’s name will ever be most honorably associated with biblical learning. Indeed, to many students his services in this department will constitute his only claim to notice. Every one who knows anything of the uncial manuscripts of the Greek New Testament has heard of the Codex Bezae; and every one who knows anything of the history of the printed text of the New Testament has heard of Beza’s editions and of his Latin translation with notes. The Codex Bezae, known as D in the list of the uncials, also as Codex Cantabrigiensis, is a manuscript of the Gospels and Acts, originally also of the Catholic Epistles, dating from the sixth century. Its transcriber would seem to have been a Gaul, ignorant of Greek. Beza procured it from the monastery of St. Irenaeus, at Lyons, when the city was sacked by Des Adrets, in 1562, but did not use it in his edition of the Greek Testament, because it departed so widely from the other manuscripts, which departures are often supported by the ancient Latin and Syriac versions. He presented it to the University of Cambridge in 1581, and it is now shown in the library among the great treasures.

Beza was also the possessor of an uncial manuscript of the Pauline Epistles, also dating from the sixth century. How he got hold of it is unknown. He merely says (Preface to his 3d ed. of the N. T., 1582) that it had been found at Clermont, near Beauvais, France. It may have been another fortune of war. After his death it was sold, and ultimately came into the Royal (now the National) Library in Paris, and there it is preserved. Beza made some use of it. Both these manuscripts were accompanied by a Latin version of extreme antiquity.

Among the eminent editors of the Greek New Testament, Beza deserves prominent mention. He put forth four folio editions of Stephen’s Greek text, viz. 1565, 1582, 1589, and 1598, with a Latin version, the Latin Vulgate, and Annotations. He issued also several octavo editions with his Latin version and brief marginal notes (1565, 1567, 1580, 1590, 1604).

What especially interests the English Bible student is the close connection he had with the Authorized Version. Not only were his editions in the hands of King James’ revisers, but his Latin version with its notes was constantly used by them. He had already influenced the authors of the Genevan version (1557 and 1560), as was of course inevitable, and this version influenced the Authorized. As Beza was undoubtedly the best Continental exegete of the closing part of the sixteenth century, this influence of his Latin version and notes was on the whole beneficial. But then it must be confessed that he was also responsible for many errors of reading and rendering in the Authorized Version." *
* Ezra Abbot, the biblical textual critic, at Dr. Schaff’s request, made a very careful collation of the different editions of Beza with the Authorized Version, and found that "the Authorized Version agrees with Beza’s text of 1589 against Stephen’s of 1550 in about 90 places; with Stephen’s against Beza in about 40; and in from thirty to forty places, in most of which the variations are of a trivial character, it differs from both." - Schaff: The Revision of the English Version of the New Testament, New York, 1873 (Introd. p. xxviii). Cf. Farrar, History of Interpretation, p. 342, note 3.


Tuesday, April 26, 2011

Majority Text: The True Power of Probability (pt VII)

 The 'Catastrophe' Model

Now that we have the only viable genealogical stemma for a "catastrophe", it still isn't enough:  All the stemma does is provide an opportunity for a disaster to take place.

It's not a disaster by itself, for we have presumed ordinary copying at every step.  Some copyists will make more mistakes than others, and some copies will be better proof-read than others.  But these variations do not create any kind of catastrophe.  The text will accumulate normal errors generation by generation, and those errors will mostly remain minority readings in the early copies.

In fact, if the copyists have done an honest job, the final copy will be as good or better than any other late copy we might have chosen as a master-copy for future copies.  

It won't be until the final 'flowering' and rapid expansion of the last copy that we'll have the particular errors of this copy-line become a permanent feature of our 'majority text'.  

But that's no real problem at all.  When we examine the text recreated from this final exemplar, it will only be slightly different in flavor from a text based on some other copy, or even a large group of other copies.  The overall error count will not be significantly higher from a practical point of view.
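The 'normal transmission' picture above can be put in miniature. The following is a minimal sketch, assuming a simple binary copying tree (the model and the generation count are my own illustration, not the author's figures): every copy introduces one fresh, unique error, which all of its descendants inherit, and we ask what share of the final generation carries each error.

```python
# Toy model of 'normal' copying (assumptions mine): each manuscript is copied
# twice per generation, and each new copy introduces one unique error that
# all of its descendants inherit.

def error_share(generation_introduced: int, total_generations: int) -> float:
    """Fraction of final-generation copies descended from a single copy
    made at depth `generation_introduced` (1 = the first round of copies)."""
    # That copy heads a subtree with 2**(G - g) leaves out of 2**G total.
    G, g = total_generations, generation_introduced
    return 2 ** (G - g) / 2 ** G

G = 5  # assumed depth of the copying tree
shares = [error_share(g, G) for g in range(1, G + 1)]
# shares == [0.5, 0.25, 0.125, 0.0625, 0.03125]: even a first-generation
# error tops out at 50% support, and every later error shrinks
# geometrically -- the 'mostly minority readings' behavior described above.
```

Under these assumptions no error ever crosses the 50% line on its own; it takes deliberate master-copy selection (the 'disaster' discussed next) to promote one.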

Planning a Disaster

For our disaster we still need one more thing: a massive alteration of the text, all at one sitting, so that the new false readings become majority readings.  The most likely scenario has this happening all at one time, since it is a rare and unusual event.

 Since errors can only be injected in packets, the opportunity for a real disaster only occurs once per generation in this 'catastrophe' model.   In the example above, there are only three chances: Copy #10 (1st gen), copy #1 (2nd gen), and copy #253 (3rd gen).   Once mass-copying begins, the opportunity is gone.

This is exactly what the modern critics who follow Hort and the text of Aleph/B propose.  Hort claimed,
"An authoritative Revision at Antioch..was itself subjected to a second authoritative Revision carrying out more completely the purpose of the first.  At what date between 250 and 350 the first process took place is impossible to say...the final process was apparently completed by 350 A.D. or thereabouts." (Intro, p.137)
Hort tentatively suggested Lucian (c 300 A.D.) as the leader and some scholars subsequently became dogmatic about it.
 Thiessen claimed:
"...the Peshitta [Syriac] is now identified as the Byzantine text, which almost certainly goes back to the revision made by Lucian of Antioch about 300 A.D." (H. C. Thiessen, Introduction to the NT (Eerdmans, 1955), p. 54-55.)
All that we really know of Lucian was provided by Eusebius (c. 310 A.D.), and Jerome (c. 380 A.D.).  But the picture painted by this evidence is quite different from that proposed by modern critics.   Eusebius praises Lucian as virtuous, and Jerome later calls him talented; but James Snapp Jr. explains:
"In his Preface to the Gospels, Jerome had described the manuscripts which are associated with the names of Lucian and Hesychius without any sign of admiration. Specifically, Jerome had written:
"It is obvious that these writers [Lucian and Hesychius] could not emend anything in the OT after the labors of the Seventy [i.e., they could not improve upon the LXX]; and it was useless to correct the NT, for versions of Scripture already exist in the languages of many nations which show that their additions are false."
 Jerome suggests at least three popular 'revisions', each however being a regional text favored at its own major city-center.  Again James notes:
"Notice the setting of Jerome's comments. Jerome was, in 383, making a case for the superiority of the text-base which he had used as the basis of his revision of the Gospels. He had, he explained, supplemented [and corrected] the wildly-varying Latin copies by appealing to ancient Greek MSS, and he noted that he did not rely on MSS associated with Lucian and Hesychius. This implies that there were, at the time Jerome wrote, copies of the Gospels which were associated with Lucian's name."
Jerome does not here deny consulting Origen's copies of the NT.  But it is known that he went to Constantinople to use the oldest and best Greek copies there.  He specifically stated that he avoided using Lucian or Hesychius, preferring older copies.
In Jerome's Introduction to Chronicles, he mentioned three popular forms of the Greek OT text:
"Alexandria and Egypt in their LXX [copies] praise Hesychius as author; Constantinople as far as Antioch approves the copies of Lucian the martyr; the middle provinces between them read the Palestinian books edited by Origen, which Eusebius and Pamphilus published."
Also, addressing variants in Psalms, Jerome stated in his Epistle to Sunnias and Fretela (c. 403),
"You must know that there is one edition which Origen and Eusebius and all the Greek commentators call koine, that is common and widespread, and is by most people now called Lucianic; and there is another text, that of the LXX, which is found in the MSS of [Origen's] Hexapla, and has been faithfully translated by us into Latin."
Here Jerome clearly indicates that for the OT, he has avoided the Koine/Lucianic text, and used the text-critical work of Origen instead.

The historical data tell us several things:

(1) There was no wholesale destruction of MSS or competing texts.  At the time of Jerome's Vulgate (400), at least three major text-types were readily available, each being used and copied over wide regions and distributed from independent centers.  Additionally, Jerome was able to travel to centers like Constantinople to access even older copies, predating the 'recensions' known to him around 400 A.D.   Those manuscripts would have been older than Origen's copies (c. 250), Lucian's (c. 300), or Hesychius (c. 250-300).

(2)  Official Recensions were not readily accepted, and did not displace current texts.  Jerome's Latin Vulgate (a new translation of the Greek into Latin), meant to replace the Old Latin copies, which were too varied, was strongly opposed, especially in his attempt to conform the OT to the Hebrew text.  It was finally adopted only after many readings Jerome had introduced had been restored back to the traditional text!

(3)  The Latin Vulgate conforms strongly to the Byzantine text-type, sharing most readings.  This tells us that the ancient manuscripts Jerome consulted must have had the Byzantine text.  Jerome held this to be older than both the Lucianic and Hesychian recensions, and avoided those.  This can only imply that the Lucianic Recension cannot be the Byzantine text-type, or its source.  Jerome was able to distinguish it quite plainly from the Byzantine text, which he adopted.

(4)  The NT Takeover by the Lucianic Recension simply did not take place.  The association of the Lucianic text with the 'Koine' is in reference to the Old Testament versus Origen's version of the LXX.

(5)  The conditions for a 'Catastrophe' of the type proposed by Textual Critics did not exist, and no such drastic alteration to the text could have happened.  The Byzantine text is probably the result rather of a 'normal' transmission process.

(to be continued...)


Sunday, April 24, 2011

The Sabotage of the Christian O.T.: (1550-1700) The Hebrew text

We noted that the sabotage of the Christian O.T. began with Martin Luther, who adopted the medieval Hebrew text taken from European Jews in a misguided and botched attempt to convert them to Christianity.

This competition between the Greek O.T. used by the early Christians (later to be translated into Latin) and the Hebrew texts preferred by Jews really began in the 1st and 2nd centuries A.D., when Christians (many of whom were Jews) were actively engaged in debating with Jews and establishing their own legitimacy.   But that initial confrontation went nowhere, and Christians and Jews each went their own way, with the Jews rejecting as "cursed" the ancient (pre-Christian)  translation into Greek which was originally viewed as "blessed".   At first, several Jewish scholars tried to publish new Greek translations, but this also went nowhere, as the early Church stuck to the now traditional Greek O.T. text of the Septuagint (LXX).

The Reformation however, offered a new opportunity for Jews to 'correct' the Christian text, and they lost no time in providing help with both the Hebrew text and its translation.
Medieval Hebrew Scroll

Jews suffering from persecution by the Roman Catholic Church and previous European governments were initially open to assisting the Reformers like Luther in securing their own independent O.T., more in conformity with their own texts (and interpretations).   So in Protestant jurisdictions, many Jewish scholars began to help the Reformers translate directly from Jewish copies into their local languages.

The naive position the Reformers put themselves in became rather apparent by the end of the 19th century, and many Protestants wanted to see some correction of this undesirable alliance.   The situation as to the English O.T. was described succinctly in the Dictionary of the Bible, (Ezra Abbot, Hackett, Smith, NY, 1872) vol. 4, p. 3441:
"Still less had been done at the commencement of the 17th century for the text of the O.T.  The Jewish teachers, from whom Protestant divines derived their knowledge, had given currency to the belief that in the Massoretic text were contained the ipsissima verba of Revelation, free from all risks of error, from all casualties of transcription.   The conventional phrases, "the authentic Hebrew", "the Hebrew verity", were the expression of this undiscerning reverence. 1  They refused to apply the same rules of judgement here which they applied to the text of the N.T.   They assumed that the Masoretes were infallible, and were reluctant to acknowledge that there had been any variations since.  Even Walton did not escape being attacked as unsound by the great Puritan divine, Dr. John Owen, for having called attention to the fact of discrepancies (Proleg. cap vi.).   The materials for a revised text are, of course, scantier than with the N.T.; but the labors of Kennicott, De Rossi, J. H. Michaelis, and Davidson have not been fruitless, and here, as there, the older versions must be admitted as at least evidence of variations which once existed, but which were suppressed by the rigorous uniformity of the later Rabbis.  Conjectural emendations, such as Newcome, Lowth, and Ewald have so freely suggested, ought to be ventured on in such places only as are quite unintelligible without them."

1. The Judaizing spirit on this matter culminated in the Formula Consensus Helvetica, which pronounces the existing O.T. text to be "tum quoad consonas, tum quoad vocalia, sive puncta ipsa, sive punctorum potestatem, tum quoad res, tum quoad verba, θεοπνευστος."

The Dean

Saturday, April 23, 2011

Majority Text: The True Power of IMPOSSIBILITY (pt VI)

In the last post, we looked at the exploding unlikelihood of a sequence of individually unlikely events.   Specifically, in a copying series, we considered Error-Packets added generation by generation.

We discovered that even though the Error-Packets were 'independent' in some sense, the best-case scenario would be like a series of independent coin-tosses.  The chance of an unlucky circumstance falsely favoring a minority reading more than a few times in a row becomes progressively more and more unlikely.

Mutually Exclusive Events are Impossible, not Improbable!

But now we are going to look more closely at the situation, and discover something far more fatal to the theory of a build-up of minority readings: 
Accumulated groups of readings cannot occupy majority positions.

Consider the following diagram, which is much more realistic, but also dangerously favorable to minority readings:

The first Error-Packet, A, introduced here in first-generation copy #10, is multiplied, because that copy is chosen as a master-copy.   For our purposes, we may allow that most other first-generation copies (#0 - 255) are simply destroyed by the Romans.  Now the Error-Packet is found in an undisputed majority (80% or more) of manuscripts.

But now by definition and premise, copy # 10 must also be multiplied greatly, and its copies must stay in the copy-stream and be copied themselves, perpetually and in high numbers.  This is exactly what will allow Error Packet A to continue holding its majority-reading position.   If those too are destroyed, they were copied for nothing, and  Error Packet A effectively drops off the face of the earth, while copies without it carry on.

But now consider Error-Packet B, in second-generation copy #1:  We want it also to become a Majority Reading.  But this is impossible without destroying most other copies made from copy #10.   That is, if we again use the same trick, and multiply copies of manuscript #1 to beef up its readings down the line, and destroy the competing lines from copy #10, we have actually contradicted ourselves: the whole purpose of multiplying copies of copy #10 was to provide a high manuscript count, by keeping them in the copying stream and having them continually multiply in excess of all others.
In order to boost Error-Packet B, we have to abandon boosting other copies of Error-Packet A.   Since we want to boost both Error-Packets, we can only boost copies of second-generation copy #1, which contains both Error-Packets.
But this means that all the extra copies of earlier generations in this line must be suppressed: either not copied, or else destroyed.  The net effect of this strategy will indeed guarantee that each error is a majority reading, and that all copies support all Error-Packets equally.  But now the fans of copies from each previous generation are erased, and we are allowed only one copy per generation!
Errors Accumulate in a sequential series, not a branching stream
 In order to keep each new error in a majority position, we have to prevent all fanning of generations.  Only the key stream can be perpetuated, and only the final copy can be multiplied.   Early branching is simply not allowed in significant numbers.

Even here, however, most errors can be identified and removed by the manuscript count alone, without comparing manuscripts to independent lines!  Early errors will be majority readings, but most errors, and especially later errors, will be minority readings.

It is trivially true that any copy down the line will have accumulated errors from multiple generations.  And it is also trivially true that only copies along this line will have all the errors we are accumulating.  But it is also true that even now, even with a completely pruned genealogical tree, we still can't get evenly distributed errors as majority readings.   The later errors will simply not be present in the earlier copies.  The only genealogical tree which allows the majority of errors to become majority readings is as follows:

 This scenario is the only 'catastrophe' that can possibly generate a large number of errors as false majority readings, and only those errors in the copying line can become majority readings.   Two simultaneous events must occur:

(1)  Most previous copies must be destroyed, to remove good readings.

(2)  Copies must be mass-produced only at the final stage of transmission.

This is what the modern critical model is really proposing. 


Majority Text: True Power of the Probability Argument (pt V)

Some people may think that the argument in favor of the Majority text is simply this:  That errors, being introduced later in the stream, will almost always be stuck in the minority of manuscripts. 

This, however, is not the actual argument at all.  The possibility that a manuscript with a given error (or set of errors) could be copied more often than manuscripts without the error(s) is actually a given.   As Hodges notes:
"...of course an error might easily be copied more often than the original in any particular instance." (Pickering, Identity..., Appendix C p 165).  
But the point is, this only works once.   Errors can't accumulate gradually in such a manner.  Let's see why.   We start with the Yellow Packet copies getting copied more often, and this gives us an initial false majority for the errors introduced by the Yellow master-copy:

Errors do indeed accumulate.   In the above diagram, all manuscripts copied from the first copy with the Yellow Packet will have its errors.  Further down, an Orange, Red, and Purple Error Packet are added.   But the effect is obvious:
White (Pure)     - 10 / 25  = 40% minority reading (unfortunate)
Yellow Packet   - 15 / 25  =  60% majority - false ('lucky break')
Orange Packet  -  8 / 25  =  32% minority 
Red Packet        -  3 / 25  =  12% minority
Purple Packet   -   1 / 25  =   4% minority
It doesn't take a genius to see that again, the natural tendency pins down most subsequent errors as minority readings.  This doesn't bode well for the Purple text.   The very manuscripts that support the Yellow Packet readings testify strongly against the Purple Packet readings. 

(Secondly, although the 'White text' (pure text) as a unit is in the minority of MSS, its readings remain, in 99% of cases, perfectly safe, still secured as majority readings.)
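The arithmetic for the diagram can be re-run directly. A small sketch (the manuscript counts are the ones quoted above; the dictionary layout is my own):

```python
# Reading support for each Error-Packet, using the manuscript counts
# given in the post's diagram (25 surviving copies in all).
total = 25
support = {"White": 10, "Yellow": 15, "Orange": 8, "Red": 3, "Purple": 1}

percent = {name: 100 * n / total for name, n in support.items()}
majority = [name for name, p in percent.items() if p > 50]
# Only Yellow crosses the 50% line; Orange, Red, and Purple stay pinned
# in the minority by the very copies that carried Yellow into the majority.
```

The computed shares match the table above: Yellow 60% (the one 'lucky break'), Orange 32%, Red 12%, Purple 4%.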

Even wiping out all earlier generations doesn't help.   This only stabilizes the percentages for each group of readings, once normal or average copying is resumed:

White   Packet   -  2 / 8  =  25% - minority   (in present / future)
Yellow  Packet   -  6 / 8  =  75% - majority  - 1 false reading set 

Orange Packet  -  4 / 8   =  50% - neutral     / uncertain

Red       Packet  -  2 / 8   =  25% - minority   / true reading
Purple  Packet   -  1 / 8   =  12% - minority    / true reading
Although this extreme case seems to undermine the reliability of majority readings, this simply isn't the case.  Probabilities remain strongly in favor of majority readings.  Let's see why.   We need to remember that only a very small number of early and frequently copied readings will have a false majority rating (e.g. the Yellow Packet).
The majority of errors in the extreme texts (e.g. the Purple text) will have their reading-support scattered all over the map, with very few false positives (e.g. Yellow); the bulk of errors will remain graduated minority readings.
Error Packets (and real errors) will still be identifiable because:
(1) These minority readings will still be strongly associated with the most corrupted and generationally later texts (e.g. Orange, Red, Purple).
(2) These texts will be easily identified, because (a) as texts, or composite groups of error-packets, they will remain minority texts; (b) the differently supported packets allow us to use genealogical methods.

Typically, opponents of the Majority-of-MSS model will reason that a process of uneven copying could occur repeatedly, boosting minority readings into majority readings on a larger scale.   Hodges exposed the failure of this argument by showing that, cumulatively speaking, the probability of multiple accidents favoring a bad text quickly plummets.  In discussing the case of a second error (or error packet) in a following generation, Hodges states:
"Now we have conceded that 'error A' is found in 4 copies while the true reading is only in 2.  But when 'error B' is introduced [in the next generation], its rival is found in 5 copies.  Will anyone suppose that 'error B' will have the same good fortune as 'error A', and be copied more than all the other branches combined?...but conceding this far less probable situation, ...will anyone believe the same for 'error C'? ...the probability is drastically reduced.  " (Append. p. 167)
These 'errors' would be equivalent to our Yellow, Orange, Red Packets respectively. 

Compounding Unlikely Events: Rapidly Decreasing Probability

We allowed for one catastrophe: over-copying of the Yellow Packet.  Hodges' argument here is so powerful it is clinching:

Probabilities are calculated by multiplication, with the probability of each event represented by a fraction less than 1.  A 50% chance of an error being over-copied (for example) means it could happen half the time.  But the chance of the second error also being over-copied is (1/2) times (1/2) = 1/4, only a 25% chance.   The chance of three equally probable events in a row is 12.5%.   This is the same as flipping a coin: for a fair and random coin-toss, the chance of tossing 7 'heads' in a row is less than 1%.

Likewise, even with 50/50 odds, seven generations of errors have almost no chance of ever being consistently copied more often than their rival readings in a random undirected process.
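The compounding argument reduces to one line of arithmetic. A minimal sketch (the function name is mine):

```python
# Probability that an error wins the 'copied more often than its rivals'
# toss n generations running, at the same odds each time.
def run_probability(p_single: float, generations: int) -> float:
    return p_single ** generations

p3 = run_probability(0.5, 3)  # 0.125, the 12.5% quoted above
p7 = run_probability(0.5, 7)  # 0.0078125, i.e. under 1%
```

Even granting generous 50/50 odds at every step, seven consecutive wins happen less than once in a hundred trials, which is the whole force of Hodges' point.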

But our observations here go far beyond even this argument:  it's a case of the experiment being poisoned before it can even get off the ground.

The Defocussing Effect of Noise on Transmission

What is not being mentioned so far in any of the discussions is the fact that ALL scribes introduce errors, in every single copy.  Contrary to intuition, this actually also assists the Majority Reading Model, by sabotaging false positives further.

The scheme above isolates four Error-Packets for discussion, and the analysis is valid because they are 'independent' in the sense that errors normally won't overlap or interfere with each other in early transmission.  It's like a needle in a haystack:  the chances of two errors bumping into each other are nearly zero.

But with errors being added randomly and on average roughly equally with each copy, we have now introduced random noise into the signal at all points.  This random noise acts to mask the false signals as effectively as the true signals.

One can think of injected random noise as a 'damping factor':  a bell rings clearly and long in the air, but a mute or mask attenuates both the loudness of the bell and the duration of the note.   Imbalances (spikes and dips) in the transmission process are softened, evened out, and muted in a variety of ways, randomly.   This impedes the efficiency of transmission; the clarity and duration of false signals (errors), as well as of true ones, are attenuated.

However, the true signals have an enormous starting-advantage:  They are already 100% Majority readings, and it takes a lot of accumulated noise in the transmission to disturb the original text enough to actually supplant a majority reading.  These are modern considerations now well analyzed by engineers, but which were unknown to 19th century textual critics relying on 'common sense' or intuitive guesses. 
Although both true and false signals are attenuated and masked by noise, the much smaller error signals suffer the most relative damage from further random noise.  Anomalies in the error transmission are smoothed, masked, and truncated by random processes, which defocus unique and unusual signals in the mix.
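The claim that smaller signals suffer the greater relative damage is ordinary arithmetic. A sketch with illustrative numbers (the support figures and noise level are my own assumptions, not data from the post):

```python
# The same absolute amount of random noise does proportionally more damage
# to a small (minority/error) signal than to a large (majority/true) one.
noise = 0.05         # assumed share of copies disturbed by fresh random errors
true_signal = 0.95   # assumed support for an original majority reading
error_signal = 0.10  # assumed support for a stray minority error

relative_damage_true = noise / true_signal    # ~0.053: barely felt
relative_damage_error = noise / error_signal  # 0.5: half the signal's own size
```

The majority reading absorbs the noise almost unnoticed, while the same noise is half the size of the error signal itself, which is the 'defocussing' effect described above.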


Wednesday, April 20, 2011

Majority Text: True Power of the Probability Argument (pt IV)

Going Deeper into the 
Probability Argument

Our simple copying tree can illustrate a lot of other interesting questions regarding the probability argument.    One observation which has been bypassed so far in vague protests and discussions is exactly what kind of catastrophe could result in a false majority text, and what combination of features it would have to have.

For instance, an obvious objection would be that the Majority model presumes that all manuscripts are actually available to be counted.  In fact it requires nothing of the kind.  However, the question of adequate sampling of the copying stream is a legitimate issue, and poor sampling would naturally be expected to skew the results, and their confidence factor as well.
 Taking our copying tree, it is reasonable to assume the earliest copies would gradually be lost, not just for counting, but also for collating.
First two and a half generations lost...
 One immediate observation is that the loss of the earliest manuscripts will indeed benefit the % score of an Error Packet.  Here the Yellow Packet now holds 8/25 or 32%, up from 26%, for a gain of 6%.   The Red Packet however, goes from 3/31 (10%) to 3/25 (12%) and only gains about 2%.   Not only is the payoff low, but such a moderate loss only significantly benefits the earliest minority readings, those with the highest initial percentage. 

How big a catastrophe is needed to flip a minority reading into a majority?

Three and a half generations lost...
 The score is now 6/18 = 33% for Yellow (only +1%!), and 17% for Red (+7%).   A minor surprise.  What is happening is that now the earlier Error Packet is losing votes along with the original readings, while the Red Packet is still gaining in voting power, because none of its voters has been affected by the catastrophe.   Yet it doesn't take much to see that no minority reading can really get much further ahead simply by the loss or destruction of earlier manuscripts.
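The effect of losing early generations can be checked with a short simulation. The sketch below (purely illustrative) uses a plain binary tree of 4 copying generations (31 MSS) rather than the exact tree in the diagram, so the percentages come out slightly different from those above, but the direction of change is the same:

```python
# A minimal sketch of the "lost generations" experiment, on a binary
# copying tree.  "Yellow" enters in generation 2, "red" enters on an
# independent branch in generation 3.
import itertools

def build_tree(max_gen=4, rate=2, inject=None):
    """Build a copying tree.  `inject` maps a generation number to
    (copy_index, error_name): the copy with that index within the
    generation acquires the error and passes it to all descendants."""
    inject = inject or {}
    ids = itertools.count()
    root = {"id": next(ids), "gen": 0, "errors": frozenset()}
    tree, frontier = [root], [root]
    for g in range(1, max_gen + 1):
        new = []
        for parent in frontier:
            for _ in range(rate):
                errs = parent["errors"]
                if g in inject and len(new) == inject[g][0]:
                    errs = errs | {inject[g][1]}
                ms = {"id": next(ids), "gen": g, "errors": errs}
                tree.append(ms)
                new.append(ms)
        frontier = new
    return tree

def support(mss, error):
    n = sum(1 for m in mss if error in m["errors"])
    return n, len(mss), round(100 * n / len(mss))

# Index 7 = the last gen-3 copy, which lies outside yellow's lineage.
tree = build_tree(inject={2: (0, "yellow"), 3: (7, "red")})
print(support(tree, "yellow"))             # full tree: 7/31 ≈ 23%
print(support(tree, "red"))                # full tree: 3/31 ≈ 10%

late = [m for m in tree if m["gen"] >= 3]  # generations 0-2 lost
print(support(late, "yellow"))             # 6/24 = 25% -- a modest gain
print(support(late, "red"))                # 3/24 ≈ 12% -- also modest
```

As in the diagram, dropping the earliest generations nudges the packet percentages up only modestly; no packet comes anywhere near a majority.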

Textual Disasters:

What we need is a REAL catastrophe.  The good manuscripts need to be specifically targeted, and eliminated with ruthless efficiency.   With those gone, at least some errors end up in a majority of surviving MSS.  The Yellow Packet readings are now in 6 out of 9 MSS (67%), a comfortable majority. 
But the Red Packet remains a reading-block with only minority support.   What happened?  Even though every good manuscript has been eliminated, the good readings in each of the remaining groups ensure that most readings, namely the later ones, are stuck with only minority support.  Remember that these Error Packets are not directly competing, but are independent groups of errors in different areas of the text. Any overlap will be very small, and the chances of the scribes making the exact same errors are smaller still.
It's clear that even the loss of the best early MSS alone cannot cause the dominance of any but a few of the very earliest errors.  This means, generally, that no amount of destruction of earlier manuscripts by itself could cause a minority text to become a majority text.  That is, the errors in the surviving manuscripts will include early, middle, and late errors.  All types of errors will be confined to this group, but not all can make it into a majority of surviving manuscripts.  Some must remain minority readings, even though they uniquely characterize the text-type and may be exclusive to it.
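The arithmetic of such a targeted purge is simple enough to spell out. The counts below are illustrative round numbers (a gen-2 packet in 7 of 31 MSS, an independent gen-3 packet in 3), not the diagram's exact figures:

```python
# Worked arithmetic for a targeted purge: destroy every clean MS and
# see what happens to two independent Error Packets.
yellow, red, total = 7, 3, 31
clean = total - yellow - red         # packets sit on disjoint branches
survivors = total - clean            # only infected copies survive
print(round(100 * yellow / survivors))   # 70 -- yellow becomes a majority reading
print(round(100 * red / survivors))      # 30 -- red is STILL a minority reading
```

Even with every clean manuscript destroyed, the later packet stays a minority, exactly as described above: the purge promotes only the earliest error.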

We need a very special kind of disaster, to pull off the kind of coup which is claimed for the Textus Receptus (TR, = Byzantine text-type).    Remember that almost ALL the readings unique to this text-type are rejected by critics, and ALL are claimed to be 'late' (not existing before the text-type).   Only a very small number of important readings are admitted to be ancient by critics, and these are said not to be unique or characteristic of the TR (or the Byzantine text). 

But this claim flies against the mechanics of transmission.  If this small group of readings really were ancient, they would be majority readings and characteristic of the Byzantine (Majority) text-type, not mere peripheral readings.  And if the bulk of the Byzantine readings really were late, they would mainly be minority readings within the text-type, and would not saturate every Byzantine manuscript.

(to be continued...)


Tuesday, April 19, 2011

Majority Text: True Power of the Probability Argument (pt III)

...Finishing off Hort
Before moving into a proper discussion of the Majority Reading Probability Model, we would like to finish off our discussion of some of Hort's assertions in the previous post. 
Hort insisted that 'majority readings' were only valid when it came to singular readings (with only 1 or 2 witnesses in support), because only these could in his view be almost certainly identified as errors by the actual scribe of the surviving manuscript.  But the line is nowhere near as clear and easy as this.

(1)  Many accidental omissions avoid detection because the text still makes sense, and the lost content isn't critical to the text.  Dittography errors (accidental repetitions) by contrast are easy to spot, and quickly and easily repaired.   As a result, omissions were copied repeatedly, since the most common error-correction was done against the very same master-copy with the errors.  

(2)  Many accidental errors were copied because of lax error-correction, especially in early times, before standardized practices were developed.  This helps explain why so many errors are very early. 

(3)  Many errors would be mistaken for the correct text, and would invade other copying streams through cross-correction and mixture.   As a result, often diverse copies can attest to rare minority readings. 

(4)  Some omissions of key material would make that material appear to be a deliberate addition for doctrinal purposes, and cause correctors to prefer the omission.

(5)  Some areas of the text were prone to accidental error from stretches of similar words, giving independent copyists many opportunities to make the exact same errors:

Click to Enlarge
(6) Many minority readings would have originated as singular readings in previous copies, and there is no reason to treat scribes whose work is now known only through copies differently than scribes we can directly access.  A large number of minority readings will have the same features and probable causes as singular readings, and to refuse to apply our knowledge of scribal habits to non-singular readings is not sensible.  

Accepting only singular readings is a good skeptical methodology when assessing both an individual scribe and gathering data on general scribal tendencies.   But once knowledge of scribal tendencies can be generalized, it needs to be applied to all parts of the copying stream, including ancestors and lost exemplars behind surviving documents.   

Because of all these well-known factors, extreme minority readings cannot be ignored simply because they are not 'singular'.  Variety and quantity of independent attestation to a reading still counts as an important factor in evaluating variants. 

Factors that Further Enhance the Probability Argument 
for Majority Readings

Before we critique the Probability argument, it is important to look at other well-known and understood factors that uniformly increase the reliability of the majority reading.

In the original model, we showed minimal manuscript reproduction: each manuscript was only copied twice.   In Hodges' original illustration, a reproduction rate of 3 copies per master was actually used: "each MS was copied three times, as in other generations..." (App. C, p. 162, footnote - online version).

Both of these rates however are extremely low and unrealistic.   In practice, it is almost certain that master-copies would usually be copied far more than just 2 or 3 times.  A good master-copy might be used dozens, or even scores of times over many years, until worn out or destroyed:

The result of actual practice will be a much bushier tree than the commonly seen binary branching of simplified models.

 Nonetheless, sparse trees with low reproduction rates can represent a "worst case" scenario to test the robustness of the model.  Consider the following model tree, with a few enhancements (4 copy generations, 30 copies):

Click to Enlarge

Here we've chosen a start-rate of about 3 copies per master (first 2 generations), followed by a slow-down (3rd generation) to slightly less than 3 per master, and finally 2 copies per master (4th generation).   We have also allowed that some copies will be dead-ends, and not copied at all.  This is a much more realistic picture of the probable beginnings of a copying run.  

Error Packets:

Multiple errors are added to a large book each time it is copied; once the obvious errors are caught, we can treat the rest as a single "Error Packet" which will be transported in bulk from copy to copy.   This packet will infect all future copies down the line.  Above, the Yellow Packet (a 2nd generation error) has passed to 8 copies (8/31 = 26%).  The Red Packet (3rd generation) has only spread to 3 copies (3/31 = 10%).   These low percentages show good reliability in the percentage indicators, provided basic conditions have held (moderately close copying rates in each generation).   A 4th generation error would drop to a 3% minority reading.
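For a tree with a uniform copy-rate, these packet percentages can be written in closed form. The sketch below is illustrative: the diagram's tree varies its rates, so its Yellow figure comes out a little higher than the uniform formula gives:

```python
# Closed-form Error Packet share: in a tree where every MS is copied
# r times for G copying generations, a packet introduced in generation
# g spreads to one whole subtree.
def packet_share(g, G, r):
    infected = sum(r ** k for k in range(G - g + 1))  # erring MS + descendants
    total = sum(r ** k for k in range(G + 1))         # the whole tree
    return infected / total

# Minimal-reproduction case (r = 2, G = 4, 31 MSS in all):
for g in (2, 3, 4):
    print(f"gen-{g} packet: {packet_share(g, 4, 2):.0%}")
```

With r = 2 this reproduces the 3/31 ≈ 10% third-generation figure and the ~3% fourth-generation figure quoted above; a bushier tree (r = 3) dilutes a given packet even further.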

Varying Copy-Rates:

Even significantly retarding the copying rate in following generations has not affected the basic result.  The early 'dead-end' copies in fact could be connected almost anywhere.  A strict rate of 2 copies per master would have put them under the middle two (uncopied) 3rd generation copies.    The white uninfected copies could be arranged in almost any independent manner, with the same result.  This shows the robustness of the model even with varying copy-rates.  A steadier copy-rate would have actually lowered the Yellow Packet score further to about 22% (a 4% loss in votes). 

In fact it is difficult to force the MS support of an Error Packet high enough to mislead.  Most random fluctuations in the copying stream do not enhance MS counts for Error Packets, but lower them further.  Since there are almost infinite combinations of such 'negative' events possible, and relatively few 'beneficial' variations that would cause a significantly high 'false positive', the odds are greatly against an Error Packet achieving a majority vote in a random, undirected process.
  Most often, even significant and very 'lucky' anomalies in the copying process will not affect the count enough to turn an Error Packet into a 'Majority Reading'.   Thus not only will all negative and 'neutral' variations leave Error Packets with low scores, so will positive variations that don't score high enough.  Most equally probable random variations then will leave Error Packets as minority readings.
 This is important, for it means that only a directed process, (e.g., a deliberately manipulated copying stream) could result in Error Packets becoming Majority readings.
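The claim that random fluctuations rarely help an Error Packet can be spot-checked with a crude Monte Carlo run. The copy-rates and tree depth below are illustrative assumptions, not figures from the diagram:

```python
# Vary the copy-rate randomly (1-3 copies per MS) over 4 generations,
# drop one error packet into a random generation-2 copy, and see how
# often the packet ends up a majority reading.
import random

def one_run(gens=4, rates=(1, 2, 3), err_gen=2, rng=random):
    levels = [[False]]                        # generation 0: the original
    for g in range(1, gens + 1):
        new = []
        for infected in levels[-1]:
            new.extend([infected] * rng.choice(rates))
        if g == err_gen:
            new[rng.randrange(len(new))] = True   # the packet enters here
        levels.append(new)
    flat = [x for lvl in levels for x in lvl]
    return sum(flat) / len(flat)              # packet's share of all MSS

random.seed(1)
shares = [one_run() for _ in range(10_000)]
majority = sum(s >= 0.5 for s in shares) / len(shares)
print(f"mean share: {sum(shares) / len(shares):.1%}")
print(f"runs reaching a majority: {majority:.1%}")
```

Even in these very small, very lopsided trees (the most favorable case for an error), most random runs leave the packet a minority reading; deeper or bushier trees make a false majority rarer still.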

(to be continued...)


Monday, April 18, 2011

Majority Text: True Power of the Probability Argument (pt II)

In the last post we examined the basic premise behind the idea that the majority of manuscripts would usually have the correct reading, and that any particular error introduced later on downstream would be a minority reading.

This was known long before the time of Hort, and those proposing minority readings were conscious of having to counter the a priori weight of the majority of manuscripts.
"Had we reason to believe that all these authorities were of equal value, our course would be a simple one....we should simply reckon up the number upon opposing sides, and pronounce our verdict according to the numerical majority.  ...however, ... in a court of justice ... evidence given by different witnesses differs [greatly].  ... to merely [count] our witnesses will not do. We must distinguish their individual values.  ...Were we to be guided by the number of witnesses [only] on either side, we would at once have to [decide in] favour of the Received Text." 
- W. Milligan ( The Words of the NT, 1873)
While insisting on the need for weighing witnesses, Milligan here openly concedes what everyone knows:  Most readings in the Textus Receptus (TR) are supported by an overwhelming majority of manuscripts. 

Milligan's own proposals avoid any direct attempt to debate the value of landslide majority readings.  Instead, he uses a crude procedure of dividing MSS into 'early' and 'late':  His fundamental axiom is that older manuscripts and their readings are better.   From this universal assumption, all 'early' MSS and their readings are simply given a priori preference over their numerically vastly superior, but mostly later rivals.  Assigning priority by fiat, he avoids having to deal with probability arguments regarding MS counts.

This arbitrary method however does nothing to actually refute the reasonable presumption that, all other things being equal, the majority reading is most probably original.

Hort himself knew the fallacy of Milligan's simplistic solution, for he insists,
"But the occasional preservation of comparatively ancient texts in comparatively modern MSS forbids confident reliance on priority of [MS] date unsustained by other marks of excellence." (Intro. p. 31)
Hort further conceded that for singular readings, the majority reading certainly did hold the probability of being correct:
"Where a minority consists of one document or hardly more, there is a valid presumption against the reading thus attested, because any one scribe is liable to err, whereas the fortuitous concurrence of a plurality of scribes in the same error is in most cases improbable;" (Ibid. p. 44)
Hort was certainly aware of the problem and power of a majority reading, and rather than dismiss it completely, he sought to severely limit its value.  He spends many pages presenting hypothetical arguments in an attempt to minimize and/or eliminate the validity of majority readings (e.g. Intro., pp. 40-46).   In order to override the weight of majority testimony, Hort mainly invokes the concept of genealogy.   His methods and arguments have been critiqued elsewhere, so we won't go into them here.

But the argument based on the majority of MSS is itself essentially a genealogical argument, a point for the most part ignored in the literature.
Here we will be free to explore both its strengths and weaknesses. 


Sunday, April 17, 2011

Majority Text: The True Power of the Probability Argument

While as far back as the mid-1800s textual critics had a natural sense of the value of the quantity as well as the quality of witnesses to the text, the concept wasn't put on firm mathematical ground until it was faced squarely by Wilbur F. Pickering in his book, The Identity of the NT Text (Nelson, 1977/80), in Appendix C, "The Implications of Statistical Probability...", actually written by Zane and David Hodges.   This book is freely available for viewing and download on Pickering's site.

There, Hodges argued that probability was decidedly in favor of the Majority text (the readings found in the majority of surviving manuscripts).
This was almost immediately challenged by D. A. Carson in his own appendix to The KJV Debate (Baker, 1979/80), a review of Pickering's book.

Although the original diagrams and equations are complex for ordinary readers, the gist of the argument can be simply illustrated as follows:

(1)  If each manuscript is copied more than once, then there will always be more copies in each following generation.   In the picture above, each row represents a copying generation, and the number of manuscripts doubles each generation.   The copies form a simple, ever-expanding genealogical tree.

(2)  An error cannot be copied backward in time, so each error can only influence the copies which come after it, not those already written.  Even the act of mixture cannot change this fundamental fact.

(3)  The manuscripts with the given error will be in the minority.   The later the error, the smaller the minority.  Even just 2 or 3 generations later, errors quickly become clear minority readings.   (The diagram above posits minimal reproduction, and a 3rd generation error is stuck with about 25% attestation.)

(4)  Error accumulation is a self-limiting process, and later errors have little chance of influencing the text at all, even when preferred and adopted.  For instance, by the 10th generation, it is impossible to introduce significant errors into the copying stream, even with minimal reproduction rates.
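Point (4) can be put in numbers. Assuming the minimal doubling tree described above (an illustrative choice; exact figures depend on depth and how generations are counted), the share of manuscripts carrying an error introduced in generation g follows a simple formula, and it collapses quickly for later errors:

```python
# Share of all MSS carrying an error introduced in copying generation g
# of an n-generation tree where every MS is copied exactly twice:
#     (2**(n - g + 1) - 1) / (2**(n + 1) - 1)
def error_share(g, n=10):
    return (2 ** (n - g + 1) - 1) / (2 ** (n + 1) - 1)

# Later errors shrink toward irrelevance:
for g in (1, 2, 3, 5, 10):
    print(f"generation {g:2d}: {error_share(g):6.2%}")
```

Even a first-generation error tops out just under 50%, and a 10th-generation error is carried by only 1 of 2,047 manuscripts: the self-limiting behaviour described in point (4).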

The Assumptions of the Model:

The basic assumptions of this model are that it is a reasonably "normal" copying process.  Very little regulation is required for the model to be overwhelmingly accurate regarding the basic process of error accumulation.    To be functional and predictive, the model only makes a few assumptions:
a)  Most manuscripts should be copied more than once.  It is not even necessary that all manuscripts be copied.  The process is very robust and allows for a wide variation in rates and numbers.

b)  The relative rates of copying should be moderately equal in each generation, for most branches.  That is, one manuscript should not be copied too many more times than the others.  Again the process is robust, and difficult to skew or break.

It is important to understand that this model of manuscript reproduction is just a scientific physical description, and completely neutral as to the causes of errors in the transmission stream.  For this discussion, "error" does not signify any intent or lack of same on the part of the copyist or editor.   It only signifies physical variance from the original text.   The model is not influenced in the least by the motives of copyists or editors, and does not concern itself at all with how a variation in the text is introduced.  It only makes universal assumptions about the mechanics of copying and errors in transmission.

Critiques and objections to this model center around whether or not the transmission of the NT text really was "normal" in the sense described by the model.   D. A. Carson's objections, for instance, rest on four points:
(a) Historical factors skewed the results, allowing the dominance of a less accurate text (the Byzantine).  He cites (1) the influence of Chrysostom, and (2) the restriction and displacement of the Greek language.   Because of these, he argues, the Byzantine text probably doesn't represent the original text.

(b)  The 'generational' argument fails because errors were not introduced "generation by generation, but wholesale, in the first 2 centuries".  Additionally, pressures to make the text uniform nullify the argument that most errors will be minority readings.

(c)  Catastrophes during transmission negate the predictions.  Carson uses the "flood" analogy to say that transmission trees can be 'restarted' from bad copies and previous evidence lost.   Presumably then, observations cannot be extrapolated back to pre-catastrophe conditions.

(d)  Early Christian copyists were inferior to Jewish scribes.  Carson argues that therefore they were careless with the text.   The majority of variants were early and accidental. 
Carson's objections, however, essentially fail.

(1)  The probability argument, as already stated, is independent of the motives or causes of corruption.  It is only a statement about the physical process itself.
  This fact eliminates the basic objections found in (a).

(2) Far from contradicting the fact that most errors with significant attestation are early, the model actually predicts this.   Scrivener and Colwell may have found it (psychologically) 'paradoxical', but a mathematician doesn't.   The model is unaffected by objections (b) and (d).  Even if more than one error at a time is introduced in each copy (very likely), this only means that each group of errors can be treated as a single corruption event or 'variation unit'.  It makes no difference to the model, or its general predictions.

(3)  If Carson is going to posit a 'catastrophic event' (such as a major recension accompanied by a slash-and-burn of all other contemporary copies), then he has to show historical evidence of such an event.  Even a major recension cannot significantly alter the model, unless we add the destruction of most other unedited copies, the cooperation of all parts of Christendom (in the 4th century, when presumably this must be placed), and also the large-scale reproduction of the new substitute text.   Neither Hort nor Carson has ever produced historical evidence that such a catastrophic event took place.

(4)  As to the 'carelessness' or lack of talent of early Christian scribes, this also has no effect on the model; it only affects the average rate of errors introduced to the text per generation.   Carson has failed to grasp the essential features of the model of normal transmission, which is unaffected by varying rates of error.

We will show the true problems, and limits of the probability model in a second post.

(to be continued...)


Friday, April 15, 2011

Early Critical Greek New Testaments

A look backward at the last 200 years or so of NT Textual Criticism is instructive.

Stephanus (Estienne) :  First with Numbered verses...

'The Infancy' (1450-1600)

1518 - First (?) Printed Bible - Aldus Manutius (Venice)

In what Edward Miller called 'The Infancy', he listed the first printed Greek New Testaments, which helped to spark and feed the original Reformation (A Guide to TC of NT, p.7 fwd):

The Fall of Constantinople (1453) seems to have caused many Greek scribes and manuscripts to have poured into Western Europe.

The first printed texts became the basis of many Reformation Bible translations:
The Complutensian Polyglott (Cardinal Ximenes, 1520),
1518 Polyglott: (multiple language edition)

Erasmus' Greek/Latin (1515-1535),
A Younger Erasmus

was quickly followed by those of
Robert Stephen (1546-1551) adding our modern verse-numbers,

Stephen (Estienne)
Stephen:  Special printing font with ligatures


Theodore Beza (1565-1598) with some noted readings from D.

A Young Theodore Beza
Beza's text with extensive notes

'The Childhood' (1600-1800)

the Elzevirs (1624-1633) then followed,


Brian Walton (1657) published a Polyglott, with collations from Bishop Ussher;
John Fell (1675) added collations from the ancient Memphitic and Gothic versions.

The text of Stephen was adopted by John Mill (1707), and this was generally taken in England as the standard or "Textus Receptus" (TR) for many years. Mill made the first thorough effort at collation, adding a remarkable 30,000 variant readings to his apparatus and introduction.
Toinard (1707), Roman Catholic, first proposed using only the 2 oldest (Vatican) MSS + Latin.

Richard Bentley (1716) planned a comparison of the most ancient Greek and Latin texts (i.e., Codices A, B and D), assisted by John Walker (Trinity College), but never finished his plan; a version was later published by Woide.
Mace (1729) published an edited NT. This was re-edited by Knapp (1797)
Bengel (1734) began the first attempt at a systematic textual criticism, with the grouping of MSS into families, and grading readings with a Greek letter (α, β, γ, δ, ε).

Wetstein (1751-1752) labeled the Uncials (A-O) and Cursives (1-112). He did extensive collations of MSS, versions and Early Christian writers (ECW). Bowyer (1763) republished Wetstein.
Harwood (1776), a Presbyterian Unitarian, made the first early critical 'W/H' style text.
Matthaei (1786) in Moscow also collated and published new MSS from Mt. Athos, while Alter worked in Vienna and Birch laboured in Italy, Germany, Spain, with Adler's help.
Geddes (1792) an ex-Catholic Priest & Unitarian activist also attempted a critical translation.
Griesbach (1775-1805), following Semler, divided MSS into three text-types (Western, Alexandrian, Byzantine), proposing each was a recension (the product of a formal revision), and giving each a 'vote'. He also provided citations of Origen independently of Wetstein in his Symbolae Criticae. Griesbach was republished by Whittaker (1823, 2nd ed) and Schulz (1827, 3rd ed).


Scholz (1830-36) continued Griesbach's work, collating another 616 cursives, but reduced the text-types to two, grouping Western and Alexandrian, with later assent from Scrivener.

This early period seems to have been summed up well by Samuel Davidson (c. 1848):
"We are thankful to the collators of MSS for their great labour. But it may be doubted whether they be often competent to make the best critical text out of existing materials. ... We should rather see the collator and the editor of the text dissociated. We should like to have one person for each department." (quoted by Tregelles, Printed Text p. 172).
It is remarkable, if not notorious, that the first person to propose using only the oldest Uncial manuscripts for 'correcting' the Reformation Bible text was a Roman Catholic priest, Toinard.
This was about 150 years after the RC Council of Trent had already established both the canon and text of the NT.
The next two texts to seriously depart from the Traditional text were those of Harwood and Geddes, both Unitarian radicals seeking to alter the mainstream doctrines of Christianity.

This idea of rejecting the standard common text for that of two obscure 4th century Uncials (Alexandrinus and Vaticanus) was not based on any scientific analysis or credible methodology. The majority of manuscripts (MSS) had not yet been discovered, let alone collated. No theory of 'text-types', genealogy or early 'recensions' had been invented.

The only reason for preferring two old manuscripts from the Vatican was the vague notion that older manuscripts might be more pure copies, or closer to the original copies. But since even the oldest MSS were 300 years away from the originals, and were artificially edited church texts compiled from multiple sources after generations of copying, there could be no credible claim that they were relatively 'pure' without claiming that the majority of manuscripts had been corrupted after that period.

How else could 4th and 5th century manuscripts be better, unless the bulk of later manuscripts had been corrupted since that time? But this would require either:

1. That the later manuscripts had descended from a later revision, for which there was no historical evidence. Hort later proposed a 'Lucian Recension' as the common ancestor to all the later copies, but this would have had to have taken place prior to Jerome (c. 390 A.D.). Lucian lived c. 240-312 A.D. - or else,

2. That the later manuscripts were corrupted from a long process of gradual accumulation of error or editing, but this contradicted the fact that the standard common text of these manuscripts clearly existed in the 4th century! The same basic text is found in the Old Latin, the early versions and quotations of the early Christian writers, and Jerome's Vulgate (c. 390 A.D.).
Since both of these notions are shown false by the existence of earlier copies of the traditional (common) text, such as Codex Alexandrinus and the Latin manuscripts etc., the only sensible conclusion is that there were competing text-types in the 4th century. If so, the preference for the two 4th century Uncials is dubious.

Thursday, April 14, 2011

List of Articles related to KJV and Modern Versions

P75:  Leaf 57 verso - Click to Enlarge

Papyri (Capital Script) Related Articles

Uncials:  Key Articles (on Uncials)

Minuscules & Lectionaries: (Cursive Script)

Saturday, April 9, 2011

Where the 'Historical-Critical Method' led the world

Where the 'Historical-Critical Method' led the world, according to those who bought into it.

It should be remembered that at the turn of the 19th/20th centuries, science as we know it now was in its infancy, and there was no universal scientific standard or method, or even any real grasp of what a truly scientific method might entail.

Even the longstanding "assured results" of the physical sciences were in a confused mess, as the early discoveries of quantum effects and the paradoxes of light-speed and the atom shook the very foundations of the Newtonian worldview.

While the less talented 'academics' of the soft sciences (historical, social) were busy embracing the materialistic, non-supernatural, deterministic universe, the real physicists were frantically abandoning it as completely unworkable in light of new discoveries.

Yet while the fields of Textual Criticism, Linguistics, History, etc. had hardly even begun to address what a 'scientific method' might entail, the vast amount of work ahead, or the rigorous logic required, their practitioners had already run off to the press with the "assured results of modern criticism" regarding the New Testament.

Textual Critics had convinced themselves and others that they had essentially solved all the important questions regarding the NT text. All that lay ahead was to weigh the consequences of their brilliant analysis:
"...while we never can predict what may not be brought out from the timeless sands of Egypt, there is little hope of ever securing that original text. ...
"Moreover, if we actually had an autograph manuscript, we could not be sure that no slips of the hasty pen of the writer had taken place...A perfect text must remain... the delusion of the ignorant.
"...What then is the conclusion? Evidently this - that in the 2nd century there was no general uniformity among the manuscripts. Most of them ...did not agree with one another. ...
"When we thus abandon the hope of securing a perfect text, and especially when we learn that the number of variations in existing manuscripts is roughly reckoned to be 200,000, we are tempted to despair of knowing what the original contents of the New Testament were.
...we must bear in mind that only "a very small proportion of the variations materially affects the sense, ...and no variation affects an article of faith or a moral precept.." (Vincent).
This was finely illustrated by the Revised Version (1882). When it first appeared, some persons who were not inclined to accept the teachings of the King James version hastened to examine it, hoping to find matters more to their taste. But though there was scarcely a verse that did not show some slight change, and though a few passages had been wholly omitted, it was the same New Testament after all.
...we had to give up 1st John 5:7 as a proof-text for the Trinity ...but there remained texts in abundance that could [support] the doctrine.
There was only one thing that had to be hopelessly abandoned, namely, any interpretation of Scripture which hinges upon the precise form of a particular word, finding deepest significance in the use of an aorist instead of an imperfect tense, and in the presence or absence of the Greek article. This kind of exegesis, at least in its extreme form, is no longer possible; and I think that we all feel that its passing is not to be deplored."
- W. B. Hill, The Present Problems of NT Study, (NY, 1903) p. 19 fwd.
Must a 'perfect' text remain the "delusion of the ignorant"?
Did the majority of manuscripts in the 2nd century really "not agree with one another" to the extent we can't determine the actual text?
Must we "abandon all hope" of securing an accurate and true text?

Must any and all precision regarding the word of God, the Holy Scripture, be "hopelessly abandoned"?

Should we really dismiss "any exegesis or interpretation of Scripture which hinges upon the precise form of a particular word" as a fraudulent illusion?

What then do we do with St. Paul, who bases an entire argument on a single letter of the Hebrew O.T.? (Gal 3:16)?
What shall we say when Jesus does the same with a single phrase? (Mark 12:26-27)
What happens to John 1:12?

The answer is that the "one thing" that the proponent of the "historical-critical method" wants us to "hopelessly abandon" turns out to be confidence and certainty regarding a whole lot of things, namely just about every precise and specific statement in Holy Scripture!

Because these all must now become 'doubtful forms' suggesting 'illusory precision'. The real 'word of God' is a nebulous paraphrase, represented by the wide and bland fuzzy renderings of everything from the 'Living Bible' and the 'Message' to the Jehovah's Witless translation.

The door swings wide, and bangs in the wind.  Every liberal heresy, every mediocre notion, every confused understanding is free to walk in and preach from the pulpit.   Every foundation of Biblical truth, every fundamental Christian doctrine, every sure word of prophecy must now, in complementary fashion, be released, sent into the forest as food for the wolves.

But it turns out however, that all this panic, all this frantic editing of the Holy word of God, all this chucking of hundreds of 'doubtful verses' on the basis of a handful of crappy Egyptian copies, was based on a fraudulent claim in the first place:
(1) That these 'experts' actually had a scientific method available;

(2) That they had actually done the massive preliminary work required;

(3) That they were honest and trustworthy men, worthy to shoulder the sacred task of editing Holy Scripture for the whole world.

But it can be proven that these men had no 'scientific method' that the rest of the world could openly inspect, or that any even among themselves could universally embrace:

It can be proven that in 1882 they had not done the massive amount of preparatory work needed.

And as a consequence it is apparent that they were not worthy to alter the Holy Scripture for all Christians, all future generations, and the whole world standing in need of salvation.

Three strikes and you're out.