
Tuesday, April 25, 2017

The Tors-Costa Debate, Part 1

The Tors-Costa Debate can be viewed at YouTube.
            Last week in Toronto, John Tors, an advocate for the Majority Text, won a debate against Tony Costa, who attempted to defend the Nestle-Aland compilation.  The stated purpose of the debate was to examine the Greek texts behind popular translations of the New Testament.  Today (and in the next two posts, God willing) I will summarize their debate, and offer comments in italics at various points.
            The moderator, Johnny Yao-Chung Chao, welcomed guests to the Toronto Free Presbyterian Church, introduced the debaters, and offered an opening prayer.  The debate was designed to consist of opening statements, followed by responses and a cross-examination period, followed by a time of spontaneous questions from the listeners. 
            Dr. Costa began with a standard summary of the purpose and materials of New Testament textual criticism, noting that the manuscripts contain many variants – mostly trivial, but not all – and he asked, in light of those variants, “How can we be sure and confident that we still have the New Testament today?” – “How do we get back to the original?”  Arguing for the Alexandrian Text, Costa stated that “During the first millennium, the Alexandrian manuscripts were actually in the majority.  The Byzantine manuscripts did not become the majority until the ninth century A.D.”
            At this point, Costa used a graphic that resembles the graph that one can consult on page 153 of James White’s book The King James Only Controversy (first edition).  (Costa’s claim, similarly, is found on the opposite page; White wrote that the Byzantine text “does not become the ‘majority’ until the ninth century.”  That is not a realistic appraisal of the implications of the evidence; perhaps I will write more about this another day.  White’s/Costa’s chart shows how many manuscripts have survived from each century.  That’s all.  One would think that those who object to the idea of “normal” transmission would also object to the idea of “normal” survival, but apparently not when attempting to bolster their case.)
            Costa proposed that the use of Alexandrian Greek manuscripts declined because of three factors:  (1) the destruction of manuscripts by Roman persecutors, (2) the shift from Greek to Latin, and (3) the expansion of Islam.  Costa proposed that in countries governed by shariah law, it was not safe to have scriptoriums.  (However, many monasteries in Egypt and other countries continued to produce New Testament manuscripts long after the territories in which they were situated came under Islamic rule.)
            Costa then asserted, “For the first 300 years of the history of the church, all of the church fathers quoted from the Alexandrian text-type manuscripts, not from the Byzantine text-type manuscripts.  At least not till about 350, with John Chrysostom.”  Costa argued that the majority can change, and that the majority is not always right – just look at the majority of Germans who supported the Nazis, for example.  
            As he concluded his opening remarks, Costa made a theological point and a criticism of the Byzantine Priority view:  “God usually works with the small remnants sometimes.”  (Gaffe:  which is it:  usually, or sometimes?)  And, “The Majority Text does not approach a uniform text.  Maurice Robinson openly admits this.  The Majority Text suffers textual corruption as well.” 

            Tors, in his opening statement, restated the basic question on which the debate was intended to center:  Which method of textual reconstruction should be used:  “reasoned eclecticism” or the Majority Reading Approach?  He began by addressing the much-repeated claim that it does not matter which method is used – the claim that the differences are trivial, and that no doctrine is affected by textual variants.  And then the gauntlet was hurled down:  “But that is not true.”
            The textual variant in First Timothy 3:16, Tors insisted, has an impact on doctrine.  And the variant in John 7:8, where P66 and P75 and Codex Vaticanus disagree with the Nestle-Aland compilation, also has an impact.  There are many, many more examples.  (One could wish that he had specified a few more, such as Matthew 27:49 and Mark 6:22.) The survival of at least one doctrine of the faith – inerrancy – depends on which approach is used.
            Tors then reviewed some text-critical guidelines, or canons, utilized by supporters of the Alexandrian Text.  He emphasized the premises behind them, such as: 
            (1)  Scribes were more prone to add than to omit. 
            (2)  Scribes were prone to correct errors. 
            (3) Scribes were prone to harmonize. 
            Tors also pointed out that the critical text depends very heavily on Vaticanus and Sinaiticus, even though they disagree, on average, four times in every five verses.  (Gaffe:  Vaticanus was not discovered in the 1800’s; its New Testament text was reliably edited and made available to researchers at that time; its existence had been known for centuries.)
            In addition, Tors protested that at the root of the critical text is the genealogical method proposed by Hort back in 1881 – a theoretical transmission-history that Hort never bothered to prove. (As Colwell put it:  “That Westcott and Hort did not apply this method to the manuscripts of the New Testament is obvious.  Where are the charts which start with the majority of late manuscripts and climb back through diminishing generations of ancestors to the Neutral and Western Texts?  The answer is that they are nowhere.”) 
            Tors also mentioned that advocates of the pro-Alexandrian school cite the discovery of Egyptian papyri as a basis for their position – but before investigating that further, he turned to the just-listed canons, and asked the audience to see if they could detect the basic idea that they express.  That idea, he said, is the foundation of the reasoned eclectic case:  the idea that scribes altered the text on purpose.  Griesbach – the scholar in the late 1700’s and early 1800’s who developed these canons – believed this, not because he had conducted thorough analytical research about scribal tendencies, but because he embraced a rationalistic philosophy, and these ideas simply seemed to make sense.  His assumptions were accepted by textual critics for 200 years.  However, Tors continued, those assumptions do not align with most of the evidence.  Patristic writers consistently denounced those who altered the text of Scripture.  And one research-study after another – such as James Royse’s – shows that the most common scribal error was omission.
            Tors then turned his attention to the early papyri.  He pointed out that contrary to what James White has claimed, the papyri do not all support the Alexandrian Text.  There is considerable mixture in the text of some of the early papyri.  Tors then showed a graphic, summarizing data from Pickering, who in turn had extracted data from research by Klijn:
            P66:  agreed with Aleph 14 times, agreed with B 29 times, and agreed with TR 33 times.
            P75:  agreed with Aleph 9 times, agreed with B 33 times, and agreed with TR 29 times.
(In the debate, it was not very clear what Tors’ comparison was actually comparing, so I will explain:  those numbers are not derived from a consideration of the entire text of P66 and P75; they are part of an analysis (conducted by Klijn, and used by Pickering) of just the parts of John in which P45, P66, and P75 are all extant.  These statistics shouldn’t be relied upon as anything but a demonstration that P66’s text is significantly different from P75 – which is still a significant point, since if P66 and P75, with this much variation, are both Alexandrian, the term doesn’t mean much.  But this evidence-bundle looks carefully picked.  Nevertheless, I don’t think anyone will contest Tors’ basic point that some of the papyri have texts that fail to display the patterns of readings displayed in the flagship-manuscripts of any text-type.  Aland acknowledged this, as Tors mentioned.)
            After mentioning that the early papyri (better:  some of the early papyri) are not strongly aligned with the Alexandrian Text, Tors pointed out that 150 distinctly Byzantine readings have been found in the papyri.  (This finding by Harry Sturz is often belittled by defenders of the Alexandrian Text; a typical response is that early Byzantine readings do not show the existence of an early Byzantine Text.  And that was Costa’s response, almost verbatim.  But such a reaction overlooks the chief implication of Sturz’s research, which is that wherever these readings came from, it was not from a simple amalgamation of readings drawn from Western and Alexandrian exemplars, at least not as we know them.  If there was any such amalgamation-work, it must have involved a third source of readings – in which case, Hort’s main reason to categorically reject distinct Byzantine readings falls to pieces.) 
            Approaching his conclusion, Tors stated that in view of all this – the research that has undermined the “prefer the shorter reading” canon, the analyses that have shown that most scribes simply aspired to accurately reproduce their exemplars’ contents, and the discoveries of early distinctly Byzantine readings that weigh in against Hort’s theory of the origin of the Byzantine Text – “Nestle-Aland is dead.  They don’t admit it, but it is.”
            Tors’ final argument for the Majority Reading Approach in his opening statement consisted of an appeal to statistical analysis.  Using a mathematical model, Tors showed that if a manuscript were copied five times, then, all things being equal, any error would have to appear in three of those five copies in order to become the majority reading.  And the number of copies in which an error would have to be reproduced in each subsequent copying-generation would necessarily increase.  Thus the probability of any error being the majority reading is staggeringly low.  Tors said, “This is based on real-life numbers.”
            (However, what are the numbers based on?  They are a hypothetical mathematical construct, not a reflection of historically verified circumstances – a grid, not a map.  Of course one can imagine a tree that grows 10 branches, with 10 twigs on each branch, and 10 fruits on each twig, but one can also walk outside and observe trees with branches hacked away, twigs broken off, widely different numbers of twigs on different branches, fruit plucked by birds and squirrels, and so forth.  If a reading’s status as part of a majority infallibly implied what Tors says it implies, the Eusebian Sections and chapter-divisions would also be part of the original text.  Still, aside from this, Tors presented some sound reasons why the Majority Reading Approach is more trustworthy than the pro-Alexandrian approach.)
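            (For readers who want to see the flavor of the arithmetic involved, here is a minimal sketch of that sort of idealized model.  It assumes that each manuscript is copied five times per generation, that each scribe independently introduces a particular error with some small probability, and that every copy survives; the 1% error-rate and the function names are my own illustrative choices, not Tors’ figures.)

```python
# A minimal sketch of an idealized copying model (illustrative assumptions only):
# each manuscript is copied five times per generation, and each scribe is assumed
# to introduce a particular error independently, with probability p.  For each
# generation's pool of copies, this computes (a) how many copies would have to
# contain the error for it to be the majority reading, and (b) the binomial-tail
# probability of that happening.  (Inheritance of errors from earlier generations
# is deliberately ignored, to keep the sketch simple.)
from math import comb

def majority_threshold(copies):
    """Smallest number of copies that constitutes a majority."""
    return copies // 2 + 1

def prob_error_is_majority(copies, p):
    """Probability that independently-made errors land in a majority of copies."""
    need = majority_threshold(copies)
    return sum(comb(copies, k) * p**k * (1 - p)**(copies - k)
               for k in range(need, copies + 1))

p = 0.01  # assumed chance that any given scribe makes this particular error
for generation, copies in enumerate([5, 25, 125], start=1):
    print(f"Generation {generation}: {copies} copies; majority requires "
          f"{majority_threshold(copies)}; P = {prob_error_is_majority(copies, p):.2e}")
```

            (Under those assumptions the required number of error-copies grows and the probability collapses with each copying-generation, which is the thrust of Tors’ argument – though, as just noted, the tidiness of the model is itself the weak spot, since real transmission-streams did not branch so evenly.)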

Thus ended the opening statements.  Next:  Part 2:  The Debaters Respond. 

  

3 comments:

Daniel Buck said...

"White’s/Costa’s chart shows how many manuscripts have survived from each century."
Are you sure? Or is it just a chart that shows which manuscripts Aland consulted in preparing the Nestle-Aland text?

mom said...

Thanks for summarizing it for us since I wouldn't likely be taking in the 3 hour debate right now.

Daniel Buck said...

There has been some confusion about this chart, but I'm now satisfied that it reflects the dates of ALL extant manuscripts--with one caveat. Most of the famous Alexandrian codices are in fact block-mixed, to the point that the Byzantine portion of Codex Vaticanus has been given an entirely new identity as a different manuscript altogether (conveniently because it is in minuscule script, thus deriving its identification as 1957). Were we to divide up all the uncial codices this way--say, take A and L and divide each of them at the line of block-mixing--we would end up with two Byzantine manuscripts and two Alexandrian manuscripts, rather than zero of the one and two of the other. The more this is done, the less would be the numerical dominance of the Alexandrian.