Quantitative comparative linguistics

Quantitative comparative linguistics is the use of quantitative analysis as applied to comparative linguistics.

History

Statistical methods have been used for the purpose of quantitative analysis in comparative linguistics for more than a century. During the 1950s, the Swadesh list emerged: a standardised set of lexical concepts found in most languages, as words or phrases, that allow two or more languages to be compared and contrasted empirically.

Probably the first published quantitative historical linguistics study was by Sapir in 1916,[1] while Kroeber and Chrétien in 1937 [2] investigated nine Indo-European (IE) languages using 74 morphological and phonological features (extended in 1939 by the inclusion of Hittite). Ross [3] in 1950 carried out an investigation into the theoretical basis for such studies. Swadesh, using word lists, developed lexicostatistics and glottochronology in a series of papers [4] published in the early 1950s, but these methods were widely criticised,[5] though some of the criticisms were seen as unjustified by other scholars. Embleton published a book on "Statistics in Historical Linguistics" in 1986, which reviewed previous work and extended the glottochronological method. Dyen, Kruskal and Black carried out a study of the lexicostatistical method on a large IE database in 1992.[6]

During the 1990s, there was renewed interest in the topic, based on the application of methods of computational phylogenetics and cladistics. Such projects often involved collaboration between linguistic scholars and colleagues with expertise in information science and/or biological anthropology. These projects often sought to arrive at an optimal phylogenetic tree (or network) to represent a hypothesis about the evolutionary ancestry of a set of languages and perhaps their contacts. Pioneers in these methods included the founders of the CPHL project (computational phylogenetics in historical linguistics): Donald Ringe, Tandy Warnow, Luay Nakhleh and Steven N. Evans.

In the mid-1990s a group at the University of Pennsylvania computerised the comparative method and used a different IE database with 20 ancient languages.[7] In the biological field several software programs were then developed which could have application to historical linguistics. In particular a group at the University of Auckland developed a method that gave controversially old dates for IE languages.[8] A conference on "Time-depth in Historical Linguistics" was held in August 1999 at which many applications of quantitative methods were discussed.[9] Subsequently, many papers have been published on studies of various language groups as well as comparisons of the methods.

Greater media attention was generated in 2003 after the publication by anthropologists Russell Gray and Quentin Atkinson of a short study on Indo-European languages in Nature. Gray and Atkinson attempted to quantify, in a probabilistic sense, the age and relatedness of modern Indo-European languages and, sometimes, the preceding proto-languages.

The proceedings of an influential 2004 conference, Phylogenetic Methods and the Prehistory of Languages were published in 2006, edited by Peter Forster and Colin Renfrew.

Studied language families

Computational phylogenetic analyses have been performed for:

Background

The standard method for assessing language relationships has been the comparative method. However this has a number of limitations. Not all linguistic material is suitable as input and there are issues of the linguistic levels on which the method operates. The reconstructed languages are idealized and different scholars can produce different results. Language family trees are often used in conjunction with the method and "borrowings" must be excluded from the data, which is difficult when borrowing is within a family. It is often claimed that the method is limited in the time depth over which it can operate. The method is difficult to apply and there is no independent test.[28] Thus alternative methods have been sought that have a formalised method, quantify the relationships and can be tested.

A goal of comparative historical linguistics is to identify instances of genetic relatedness amongst languages.[29] The steps in quantitative analysis are (i) to devise a procedure based on theoretical grounds, on a particular model or on past experience, etc.; (ii) to verify the procedure by applying it to some data where there exists a large body of linguistic opinion for comparison (this may lead to a revision of the procedure of stage (i), or at the extreme to its total abandonment); and (iii) to apply the procedure to data where linguistic opinions have not yet been produced, have not yet been firmly established or perhaps are even in conflict.[30]

Applying phylogenetic methods to languages is a multi-stage process: (a) the encoding stage - getting from real languages to some expression of the relationships between them in the form of numerical or state data, so that those data can then be used as input to phylogenetic methods; (b) the representation stage - applying phylogenetic methods to extract from those numerical and/or state data a signal that is converted into some useful form of representation, usually two-dimensional graphical ones such as trees or networks, which synthesise and "collapse" what are often highly complex multidimensional relationships in the signal; and (c) the interpretation stage - assessing those tree and network representations to extract from them what they actually mean for real languages and their relationships through time.[31]

Types of trees and networks

An output of a quantitative historical linguistic analysis is normally a tree or a network diagram. This allows summary visualisation of the output data but is not the complete result. A tree is a connected acyclic graph, consisting of a set of vertices (also known as "nodes") and a set of edges ("branches") each of which connects a pair of vertices.[32] An internal node represents a linguistic ancestor in a phylogenetic tree or network. Each language is represented by a path through the tree, the path showing the successive states of the language as it evolves. There is only one path between every pair of vertices. Unrooted trees plot the relationship between the input data without assumptions regarding their descent. A rooted tree explicitly identifies a common ancestor, often by specifying a direction of evolution or by including an "outgroup" that is known to be only distantly related to the set of languages being classified. Most trees are binary, that is, each parent has two children. A tree can always be produced even though it is not always appropriate. A different sort of tree is one based only on language similarities/differences. In this case the internal nodes of the graph do not represent ancestors but are introduced to represent the conflict between the different splits ("bipartitions") in the data analysis. The "phenetic distance" is the sum of the weights (often represented as lengths) along the path between languages. Sometimes an additional assumption is made that these internal nodes do represent ancestors.

When languages converge, usually through word adoption ("borrowing"), a network model is more appropriate. There will be additional edges to reflect the dual parentage of a language. These edges will be bidirectional if both languages borrow from one another. A tree is thus a simple network, but there are many other types of network. A phylogenetic network is one where the taxa are represented by nodes and their evolutionary relationships are represented by branches.[33] Another type is that based on splits, and is a combinatorial generalisation of the split tree. A given set of splits can have more than one representation, thus internal nodes may not be ancestors and are only an "implicit" representation of evolutionary history as distinct from the "explicit" representation of phylogenetic networks. In a splits network the phenetic distance is that of the shortest path between two languages. A further type is the reticular network, which shows incompatibilities (due, for example, to contact) as reticulations, and whose internal nodes do represent ancestors. A network may also be constructed by adding contact edges to a tree. The last main type is the consensus network formed from trees. These trees may result from bootstrap analysis or be samples from a posterior distribution.

Language change

Change happens continually to languages, but not usually at a constant rate,[34] with its cumulative effect producing splits into dialects, languages and language families. It is generally thought that morphology changes slowest and phonology quickest. As change happens, less and less evidence of the original language remains. Eventually any evidence of relatedness may be lost. Changes of one type may not affect other types, for example sound changes do not affect cognacy. Unlike biology, it cannot be assumed that languages all have a common origin, and establishing relatedness is necessary. In modelling it is often assumed for simplicity that the characters change independently, but this may not be the case. Besides borrowing, there can also be semantic shifts and polymorphism.

Analysis input

Data

Analysis can be carried out on the "characters" of languages or on the "distances" of the languages. In the former case the input to a language classification generally takes the form of a data matrix where the rows correspond to the various languages being analysed and the columns correspond to different features or characters by which each language may be described. These features are of two types: cognates or typological data. Characters can take one or more forms (homoplasy) and can be lexical, morphological or phonological. Cognates are morphemes (lexical or grammatical) or larger constructions. Typological characters can come from any part of the grammar or lexicon. If there are gaps in the data these have to be coded.

In addition to the original database of (unscreened) data, in many studies subsets are formed for particular purposes (screened data).

In lexicostatistics the features are the meanings of words, or rather semantic slots. Thus the matrix entries are a series of glosses. As originally devised by Swadesh the single most common word for a slot was to be chosen, which can be difficult and subjective because of semantic shift. Later methods may allow more than one meaning to be incorporated.

Constraints

Some methods allow constraints to be placed on language contact geography (isolation by distance) and on sub-group split times.

Databases

Swadesh originally published a 200 word list but later refined it into a 100 word one.[35] A commonly used IE database is that by Dyen, Kruskal and Black which contains data for 95 languages, though the original is known to contain a few errors. Besides the raw data it also contains cognacy judgements. This is available online.[36] The database of Ringe, Warnow and Taylor has information on 24 IE languages, with 22 phonological characters, 15 morphological characters and 333 lexical characters. Gray and Atkinson used a database of 87 languages with 2449 lexical items, based on the Dyen set with the addition of three ancient languages. They incorporated the cognacy judgements of a number of scholars. Other databases have been drawn up for African, Australian and Andean language families, amongst others.

Coding of the data may be in binary form or in multistate form. The former is often used but does result in a bias. It has been claimed that there is a constant scale factor between the two coding methods, and that allowance can be made for this. However, another study suggests that the topology may change.[37]
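
A minimal sketch in Python, with hypothetical data, of how a multistate cognate matrix can be recoded as binary presence/absence characters, one character per cognate class:

```python
# Hypothetical multistate matrix: for each meaning slot, each language is
# assigned a cognate-class label (judgements are normally made by linguists).
multistate = {
    "English": {"water": "A", "hand": "A", "dog": "A"},
    "German":  {"water": "A", "hand": "A", "dog": "B"},
    "French":  {"water": "B", "hand": "B", "dog": "C"},
}

# Collect every (meaning, class) pair; each becomes one binary character.
classes = sorted({(m, c) for states in multistate.values() for m, c in states.items()})

# Binary recoding: 1 if the language's word for that meaning belongs to the class.
binary = {
    lang: [1 if states.get(m) == c else 0 for (m, c) in classes]
    for lang, states in multistate.items()
}

for lang, row in binary.items():
    print(lang, row)
```

The binary characters derived from a single meaning slot are not independent of one another, which is one source of the bias mentioned above.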

Word lists

The word slots are chosen to be as culture- and borrowing-free as possible. The original Swadesh lists are most commonly used but many others have been devised for particular purposes. Often these are shorter than Swadesh's preferred 100-item list. Kessler has written a book on "The Significance of Word Lists",[38] while McMahon and McMahon carried out studies on the effects of reconstructability and retentiveness.[28] The effect of increasing the number of slots has been studied and a law of diminishing returns found, with about 80 being found satisfactory.[39] However some studies have used fewer than half this number.

Generally each cognate set is represented as a different character, but differences between words can also be measured as a distance based on sound changes. Distances may also be measured letter by letter.

Morphological features

Traditionally these have been seen as more important than lexical ones and so some studies have put additional weighting on this type of character. Such features were included in the Ringe, Warnow and Taylor IE database for example. However other studies have omitted them.

Typological features

Examples of these features include glottalised consonants, tone systems, accusative alignment in nouns, dual number, case-number correspondence, object-verb order, and first person singular pronouns. These are listed in the WALS database, though it is as yet only sparsely populated for many languages.[40]

Probabilistic models

Some analysis methods incorporate a statistical model of language evolution and use the properties of the model to estimate the evolution history. Statistical models are also used for simulation of data for testing purposes. A stochastic process can be used to describe how a set of characters evolves within a language. The probability with which a character changes can depend on the branch, but not all characters evolve together, nor is the rate identical on all branches. It is often assumed that each character evolves independently, but this is not always the case. Within a model, borrowing and parallel development (homoplasy) may also be modelled, as well as polymorphisms.
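
A minimal sketch of such a simulation, assuming binary (presence/absence) characters that are gained and lost independently along a branch; the rates, branch length and step count are purely illustrative:

```python
import random

def evolve_character(state, branch_length, gain_rate=0.1, loss_rate=0.3):
    """Evolve one binary character along a branch as a two-state Markov chain,
    approximated by small discrete time steps (illustrative rates only)."""
    steps = 100
    dt = branch_length / steps
    for _ in range(steps):
        if state == 0 and random.random() < gain_rate * dt:
            state = 1
        elif state == 1 and random.random() < loss_rate * dt:
            state = 0
    return state

# Simulate 1000 characters down a branch of length 2.0, starting from presence (1).
survivors = sum(evolve_character(1, 2.0) for _ in range(1000))
print(survivors, "of 1000 cognates retained")
```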

Effects of chance

Chance resemblances produce a level of noise against which the required signal of relatedness has to be found. A study was carried out by Ringe [41] into the effects of chance on the mass comparison method. This showed that chance resemblances were critical to the technique and that Greenberg's conclusions could not be justified, though the mathematical procedure used by Ringe was later criticised.

With small databases sampling errors can be important.

In some cases, with a large database, an exhaustive search of all possible trees or networks is not feasible because of running-time limitations. There is thus a chance that the optimum solution will not be found by heuristic solution-space search methods.

Detection of borrowing

Loanwords can severely affect the topology of a tree so efforts are made to exclude borrowings. However, undetected ones sometimes still exist. McMahon and McMahon [42] showed that around 5% borrowing can affect the topology while 10% has significant effects. In networks borrowing produces reticulations. Minett and Wang [43] examined ways of detecting borrowing automatically.

Split dating

Dating of language splits can be determined if it is known how the characters evolve along each branch of a tree. The simplest assumption is that all characters evolve at a single constant rate with time and that this is independent of the tree branch. This was the assumption made in glottochronology. However, studies soon showed that there was variation between languages, some probably due to the presence of unrecognised borrowing.[44] A better approach is to allow rate variation, and the gamma distribution is usually used because of its mathematical convenience. Studies have also been carried out that show that the character replacement rate depends on the frequency of use.[45] Widespread borrowing can bias divergence time estimates by making languages seem more similar and hence younger. However, this also makes the ancestor's branch length longer so that the root is unaffected.[46]
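
Under the constant-rate assumption of classical glottochronology, a divergence date follows directly from the proportion of shared cognates. A minimal sketch of the usual formula; the retention rate of roughly 86% per millennium conventionally quoted for the 100-item list is used here only for illustration:

```python
import math

def glottochronological_age(shared_fraction, retention_rate=0.86):
    """Divergence time in millennia under a constant rate of cognate
    replacement: t = ln(c) / (2 ln(r))."""
    return math.log(shared_fraction) / (2 * math.log(retention_rate))

# Two languages sharing 60% of the test list would be dated to roughly:
print(round(glottochronological_age(0.60), 2), "millennia")
```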

This aspect is the most controversial part of quantitative comparative linguistics.

Types of analysis

There is a need to understand how a language classification method works in order to determine its assumptions and limitations. It may only be valid under certain conditions or be suitable for small databases. The methods differ in their data requirements, their complexity and running time. The methods also differ in their optimisation criteria.

Character based models

Maximum parsimony and maximum compatibility

These two methods are similar but the maximum parsimony method's objective is to find the tree (or network) in which the minimum number of evolutionary changes occurs. In some implementations the characters can be given weights and then the objective is to minimise the total weighted sum of the changes. The analysis produces unrooted trees unless an outgroup or directed characters are used. Heuristics are used to find the best tree but optimality is not guaranteed. The method is often implemented using the programs PAUP or TNT.
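
For a fixed tree, the number of changes required by a single unordered character can be counted with Fitch's small-parsimony algorithm; a minimal sketch with a hypothetical tree and character states (a full parsimony analysis searches over many candidate trees):

```python
def fitch(tree, states):
    """Return (possible state set, number of changes) for one unordered
    character on a rooted binary tree given as nested tuples of leaf names."""
    if isinstance(tree, str):                  # leaf: known state, no changes
        return {states[tree]}, 0
    left, right = tree
    lset, lcost = fitch(left, states)
    rset, rcost = fitch(right, states)
    common = lset & rset
    if common:                                 # agreement: no extra change
        return common, lcost + rcost
    return lset | rset, lcost + rcost + 1      # conflict: one extra change

# Hypothetical tree ((A,B),(C,D)) and character states for four languages.
tree = (("A", "B"), ("C", "D"))
states = {"A": 1, "B": 1, "C": 0, "D": 1}
print(fitch(tree, states))                     # ({1}, 1): one change suffices
```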

Maximum compatibility also uses characters, with the objective of finding the tree on which the maximum number of characters evolve without homoplasy. Again the characters can be weighted, and when this occurs the objective is to maximise the sum of the weights of compatible characters. It also produces unrooted trees unless additional information is incorporated. There are no readily available heuristics that are accurate with large databases. This method has only been used by Ringe's group.[47]

In these two methods several trees are often found with the same score, so the usual practice is to find a consensus tree via an algorithm. A majority consensus tree contains the bipartitions that occur in more than half of the input trees, while a greedy consensus adds further bipartitions to the majority tree. The strict consensus tree is the least resolved and contains only those splits that are in every tree.

Bootstrapping (a statistical resampling strategy) is used to provide branch support values. The technique randomly resamples characters from the input data matrix and then repeats the analysis. The support value of a bipartition in the observed tree is the fraction of runs in which that bipartition appears. However, bootstrapping is very time consuming.
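
A minimal sketch of the resampling step, assuming the character matrix is held as a mapping from language to a list of character states; the tree-building step repeated on each replicate is whichever method is being supported:

```python
import random

def bootstrap_matrix(matrix):
    """Resample characters (columns) with replacement, keeping the languages (rows)."""
    n_chars = len(next(iter(matrix.values())))
    cols = [random.randrange(n_chars) for _ in range(n_chars)]
    return {lang: [row[c] for c in cols] for lang, row in matrix.items()}

# In a full analysis one would rebuild a tree from each replicate and record,
# for every bipartition of the original tree, the fraction of replicates
# containing it; that fraction is the branch's support value.
replicates = [bootstrap_matrix({"A": [1, 0, 1], "B": [1, 1, 0], "C": [0, 1, 1]})
              for _ in range(100)]
```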

Maximum likelihood and Bayesian analysis

Both of these methods use explicit evolution models. The maximum likelihood method optimises the probability of producing the observed data, while Bayesian analysis estimates the probability of each tree and so produces a probability distribution. A random walk is made through the "model-tree space". Both take an indeterminate time to run, and deciding when to stop can be arbitrary. However, both produce support information for each branch.

The assumptions of these methods are overt and are verifiable. The complexity of the model can be increased if required. The model parameters are estimated directly from the input data so assumptions about evolutionary rate are avoided.

Perfect Phylogenetic Networks

This method produces an explicit phylogenetic network having an underlying tree with additional contact edges. Characters can be borrowed but evolve without homoplasy. To produce such networks, a graph-theoretic algorithm [48] has been used.

Gray and Atkinson's method

The input lexical data is coded in binary form, with one character for each state of the original multi-state character. The method allows homoplasy and constraints on split times. A likelihood-based analysis method is used, with evolution expressed as a rate matrix. Cognate gain and loss is modelled with a gamma distribution to allow rate variation and with rate smoothing. Because of the vast number of possible trees with many languages, Bayesian inference is used to search for the optimal tree. A Markov Chain Monte Carlo algorithm [49] generates a sample of trees as an approximation to the posterior probability distribution. A summary of this distribution can be provided as a greedy consensus tree or network with support values. The method also provides date estimates.

The method is accurate when the original characters are binary, and evolve identically and independently of each other under a rates-across-sites model with gamma distributed rates; the dates are accurate when the rate of change is constant. Understanding the performance of the method when the original characters are multi-state is more complicated, since the binary encoding produces characters that are not independent, while the method assumes independence.

Nicholls and Gray's method

This method [50] is an outgrowth of Gray and Atkinson's. Rather than having two parameters for a character, this method uses three: the birth rate and death rate of a cognate, and its borrowing rate. The birth rate is a Poisson random variable with a single birth of a cognate class, but separate deaths of the cognate on different branches are allowed (Dollo parsimony). The method does not allow homoplasy but allows polymorphism and constraints. Its major problem is that it cannot handle missing data (an issue since resolved by Ryder and Nicholls).[51] Statistical techniques are used to fit the model to the data. Prior information may be incorporated and an MCMC search is made of possible reconstructions. The method has been applied to Gray and Nichol's database and seems to give similar results.

Distance based models

These use a triangular matrix of pairwise language comparisons. The input character matrix is used to compute the distance matrix either using the Hamming distance or the Levenshtein distance. The former measures the proportion of matching characters while the latter allows costs of the various possible transforms to be included. These methods are fast compared with wholly character based ones. However, these methods do result in information loss.
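
A minimal sketch of the first option, the Hamming distance (here the proportion of mismatching characters), on a hypothetical binary character matrix:

```python
from itertools import combinations

def hamming(a, b):
    """Proportion of character positions in which two languages differ."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

matrix = {"A": [1, 0, 1, 1], "B": [1, 1, 0, 1], "C": [0, 1, 0, 0]}

distances = {(p, q): hamming(matrix[p], matrix[q])
             for p, q in combinations(sorted(matrix), 2)}
print(distances)   # e.g. ('A', 'B'): 0.5
```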

UPGMA

The "Unweighted Pairwise Group Method with Arithmetic-mean" (UPGMA) is a clustering technique which operates by repeatedly joining the two languages that have the smallest distance between them. It operates accurately with clock-like evolution but otherwise it can be in error. This is the method used in Swadesh's original lexicostatistics.

Split Decomposition

This is a technique for dividing data into natural groups.[52] The data could be characters but is more usually distance measures. The character counts or distances are used to generate the splits and to compute weights (branch lengths) for the splits. The weighted splits are then represented in a tree or network based on minimising the number of changes between each pair of taxa. There are fast algorithms for generating the collection of splits. The weights are determined from the taxon to taxon distances. Split decomposition is effective when the number of taxa is small or when the signal is not too complicated.

Neighbor joining

This method operates on distance data, computes a transformation of the input matrix and then computes the minimum distance of the pairs of languages.[53] It operates correctly even if the languages do not evolve with a lexical clock. A weighted version of the method may also be used. The method produces an output tree. It is claimed to be the closest method to manual techniques for tree construction.
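
In the standard formulation of neighbor joining the transformation is the Q-matrix; a minimal sketch, with hypothetical distances, of computing it and selecting the first pair of languages to join:

```python
import numpy as np

def q_matrix(d):
    """Neighbor-joining criterion: Q(i,j) = (n-2)*d(i,j) - sum_k d(i,k) - sum_k d(j,k)."""
    n = d.shape[0]
    row_sums = d.sum(axis=1)
    q = (n - 2) * d - row_sums[:, None] - row_sums[None, :]
    np.fill_diagonal(q, 0.0)
    return q

d = np.array([[0.0, 0.3, 0.6, 0.7],
              [0.3, 0.0, 0.5, 0.6],
              [0.6, 0.5, 0.0, 0.2],
              [0.7, 0.6, 0.2, 0.0]])
q = q_matrix(d)
i, j = divmod(np.argmin(q + np.eye(len(d)) * 1e9), len(d))
print("join", i, j)   # the pair with the smallest Q value is joined first
```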

Neighbor-net

It uses a similar algorithm to neighbor joining.[54] Unlike Split Decomposition it does not fuse nodes immediately but waits until a node has been paired a second time. The tree nodes are then replaced by two and the distance matrix reduced. It can handle large and complicated data sets. However, the output is a phenogram rather than a phylogram. This is the most popular network method.

Network

This was an early network method that has been used for some language analysis. It was originally developed for genetic sequences with more than one possible origin.[55] Network collapses the alternative trees into a single network. Where there are multiple histories a reticulation (a box shape) is drawn. It generates a list of characters incompatible with a tree.

ASP

This uses a declarative knowledge representation formalism and the methods of Answer Set Programming.[56] One such solver is CMODELS, which can be used for small problems but larger ones require heuristics. Preprocessing is used to determine the informative characters. CMODELS transforms the problem into a propositional theory and uses a SAT solver to compute the models of this theory.

Fitch/Kitch

Fitch and Kitch are maximum likelihood based programs in PHYLIP that allow a tree to be rearranged after each addition, unlike NJ. Kitch differs from Fitch in assuming a constant rate of change throughout the tree while Fitch allows for different rates down each branch.[57]

Separation level method

Holm introduced a method in 2000 to deal with some known problems of lexicostatistical analysis. These are the "symplesiomorphy trap", where shared archaisms are difficult to distinguish from shared innovations, and the "proportionality trap", where later changes can obscure early ones. Later he introduced a refined method, called SLD, to take account of the variable word distribution across languages.[58] The method does not assume a constant rate of change.

Fast convergence methods

A number of fast converging analysis methods have been developed for use with large databases (>200 languages). One of these is the Disk Covering Method (DCM).[59] This has been combined with existing methods to give improved performance. A paper on the DCM-NJ+MP method is given by the same authors in "The performance of Phylogenetic Methods on Trees of Bounded Diameter", where it is compared with the NJ method.

Resemblance based models

These models compare the letters of words rather than their phonetics. Dunn et al. [60] studied 125 typological characters across 16 Austronesian and 15 Papuan languages. They compared their results to an MP tree and one constructed by traditional analysis. Significant differences were found. Similarly Wichmann and Saunders [61] used 96 characters to study 63 American languages.

Computerised mass comparison

A method that has been suggested for initial inspection of a set of languages, to see whether they are related, is mass comparison. However, this has been severely criticised and has fallen into disuse. Recently Kessler has resurrected a computerised version of the method, but using rigorous hypothesis testing.[62] The aim is to make use of similarities across more than two languages at a time. In another paper [63] various criteria for comparing word lists are evaluated. It was found that the IE and Uralic families could be reconstructed but there was no evidence for a joint super-family.

Nichol's method

This method uses stable lexical fields, such as stance verbs, to try to establish long-distance relationships.[64] Account is taken of convergence and semantic shifts to search for ancient cognates. A model is outlined and the results of a pilot study are presented.

ASJP

The Automated Similarity Judgment Program (ASJP) is similar to lexicostatistics, but the judgement of similarities is done by a computer program following a consistent set of rules.[65] Trees are generated using standard phylogenetic methods. ASJP uses 7 vowel symbols and 34 consonant symbols. There are also various modifiers. Two words are judged similar if at least two consecutive consonants in the respective words are identical while vowels are also taken into account. The proportion of words with the same meaning judged to be similar for a pair of languages is the Lexical Similarity Percentage (LSP). The Phonological Similarity Percentage (PSP) is also calculated. PSP is then subtracted from the LSP yielding the Subtracted Similarity Percentage (SSP) and the ASJP distance is 100-SSP. Currently there are data on over 4,500 languages and dialects in the ASJP database[66] from which a tree of the world's languages was generated.[67]
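
A worked sketch of the final arithmetic only; the similarity judgements themselves follow the ASJP rules described above, and the percentages below are hypothetical:

```python
def asjp_distance(lsp, psp):
    """ASJP distance from the Lexical and Phonological Similarity Percentages:
    SSP = LSP - PSP, distance = 100 - SSP."""
    ssp = lsp - psp
    return 100 - ssp

# For a hypothetical language pair with LSP = 28% and PSP = 6%:
print(asjp_distance(28.0, 6.0))   # 78.0
```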

Serva and Petroni's method

This measures the orthographical distance between words to avoid the subjectivity of cognacy judgements.[68] It determines the minimum number of operations needed to transform one word into another, normalised by the length of the longer word. A tree is constructed from the distance data by the UPGMA technique.
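
A minimal sketch of the normalised edit distance described above: the standard Levenshtein distance divided by the length of the longer word (the word pair is purely illustrative):

```python
def levenshtein(a, b):
    """Minimum number of insertions, deletions and substitutions turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def normalised_distance(a, b):
    return levenshtein(a, b) / max(len(a), len(b))

print(normalised_distance("hand", "main"))   # 0.75
```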

Phonetic evaluation methods

Heggarty has proposed a means of providing a measure of the degrees of difference between cognates, rather than just yes/no answers.[69] This is based on examining many (>30) features of the phonetics of the glosses in comparison with the protolanguage. This could require a large amount of work but Heggarty claims that only a representative sample of sounds is necessary. He also examined the rate of change of the phonetics and found a large rate variation, so that it was unsuitable for glottochronology. A similar evaluation of the phonetics had earlier been carried out by Grimes and Agard for Romance languages, but this used only six points of comparison.[70]

Evaluation of methods

Metrics

Standard mathematical techniques are available for measuring the similarity/difference of two trees. For consensus trees the Consistency Index (CI) is a measure of homoplasy. For one character it is the minimum conceivable number of steps on any tree (= 1 for a binary character) divided by the number of reconstructed steps on the tree in question. The CI of a tree is the sum of the character CIs divided by the number of characters.[71] It represents the proportion of patterns correctly assigned.

The Retention Index (RI) measures the amount of similarity in a character. It is the ratio (g - s) / (g - m) where g is the greatest number of steps of a character on any tree, m is the minimum number of steps on any tree, and s is the minimum steps on a particular tree. There is also a Rescaled CI which is the product of the CI and RI.
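
A minimal sketch of the two indices for a single character, following the definitions above, with hypothetical step counts:

```python
def consistency_index(m, s):
    """CI for one character: minimum conceivable steps m over reconstructed steps s."""
    return m / s

def retention_index(g, s, m):
    """RI for one character: (g - s) / (g - m)."""
    return (g - s) / (g - m)

# Hypothetical binary character: minimum 1 step, 3 steps on this tree,
# 4 steps on the worst possible tree.
m, s, g = 1, 3, 4
print(consistency_index(m, s), retention_index(g, s, m),
      consistency_index(m, s) * retention_index(g, s, m))  # CI, RI, rescaled CI
```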

For binary trees the standard way of comparing their topology is to use the Robinson-Foulds metric.[72] This distance is the average of the number of false positives and false negatives in terms of branch occurrence. R-F rates above 10% are considered poor matches. For other sorts of trees and for networks there is yet no standard method of comparison.
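
A simplified sketch for rooted trees written as nested tuples: each tree is reduced to the set of leaf sets (clades) below its internal nodes, and the count of clades found in only one of the two trees is reported; implementations differ in whether this count is halved or normalised:

```python
def clades(tree, out=None):
    """Collect the leaf set below every internal node of a nested-tuple tree."""
    if out is None:
        out = set()
    if isinstance(tree, str):
        return frozenset([tree])
    leaves = frozenset().union(*(clades(child, out) for child in tree))
    out.add(leaves)
    return leaves

def robinson_foulds(t1, t2):
    c1, c2 = set(), set()
    clades(t1, c1)
    clades(t2, c2)
    return len(c1 ^ c2)   # clades present in exactly one of the two trees

t1 = ((("A", "B"), "C"), "D")
t2 = ((("A", "C"), "B"), "D")
print(robinson_foulds(t1, t2))   # 2: the clades {A,B} and {A,C} disagree
```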

Lists of incompatible characters are produced by some tree producing methods. These can be extremely helpful in analysing the output. Where heuristic methods are used repeatability is an issue. However, standard mathematical techniques are used to overcome this problem.

Comparison with previous analyses

In order to evaluate the methods a well understood family of languages is chosen, with a reliable dataset. This family is often the IE one but others have been used. After applying the methods to be compared to the database, the resulting trees are compared with the reference tree determined by traditional linguistic methods. The aim is to have no conflicts in topology, for example no missing sub-groups, and compatible dates. The families suggested for this analysis by Nichols and Warnow [73] are Germanic, Romance, Slavic, Common Turkic, Chinese, and Mixe Zoque as well as older groups such as Oceanic and IE.

Use of simulations

Although the use of real languages does add realism and provides real problems, the above method of validation suffers from the fact that the true evolution of the languages is unknown. By generating a set of data from a simulated evolution, the correct tree is known. However, it will be a simplified version of reality. Thus both evaluation techniques should be used.

Sensitivity analysis

To assess the robustness of a solution it is desirable to vary the input data and constraints, and observe the output. Each variable is changed slightly in turn. This analysis has been carried out in a number of cases and the methods found to be robust, for example by Atkinson and Gray.[74]

Studies comparing methods

During the early 1990s, linguist Donald Ringe, with computer scientists Luay Nakhleh and Tandy Warnow, statistician Steven N. Evans and others, began collaborating on research in quantitative comparative linguistic projects. They later founded the CPHL project, the goals of which include: "producing and maintaining real linguistic datasets, in particular of Indo-European languages", "formulating statistical models that capture the evolution of historical linguistic data", "designing simulation tools and accuracy measures for generating synthetic data for studying the performance of reconstruction methods", and "developing and implementing statistically-based as well as combinatorial methods for reconstructing language phylogenies, including phylogenetic networks".[75]

A comparison of coding methods was carried out by Rexova et al. (2003).[76] They created a reduced data set from the Dyen database but with the addition of Hittite. They produced a standard multistate matrix where the 141 character states correspond to individual cognate classes, allowing polymorphism. They also produced a second matrix in which some cognate classes were joined, to reduce subjectivity, and in which polymorphic states were not allowed. Lastly they produced a binary matrix where each class of words was treated as a separate character. The matrices were analysed by PAUP. It was found that using the binary matrix produced changes near the root of the tree.

McMahon and McMahon (2003) used three PHYLIP programs (NJ, Fitch and Kitch) on the DKB dataset.[77] They found that the results produced were very similar. Bootstrapping was used to test the robustness of any part of the tree. Later they used subsets of the data to assess its retentiveness and reconstructability.[42] The outputs showed topological differences which were attributed to borrowing. They then also used Network, Split Decomposition, Neighbor-net and Splitstree on several data sets. Significant differences were found between the latter two methods. Neighbor-net was considered optimal for discerning language contact.

In 2005, Nakhleh, Warnow, Ringe and Evans carried out a comparison of six analysis methods using an Indo-European database.[78] The methods compared were UPGMA, NJ, MP, MC, WMC and GA. The PAUP software package was used for UPGMA, NJ, and MC as well as for computing the majority consensus trees. The RWT database was used but 40 characters were removed due to evidence of polymorphism. Then a screened database was produced excluding all characters that clearly exhibited parallel development, so eliminating 38 features. The trees were evaluated on the basis of the number of incompatible characters and on agreement with established sub-grouping results. They found that UPGMA was clearly worst but there was not a lot of difference between the other methods. The results depended on the data set used. It was found that weighting the characters was important, which requires linguistic judgement.

Saunders (2005) [79] compared NJ, MP, GA and Neighbor-Net on a combination of lexical and typological data. He recommended use of the GA method but Nichols and Warnow have some concerns about the study methodology.[80]

Cysouw et al. (2006) [81] compared Holm's original method with NJ, Fitch, MP and SD. They found Holm's method to be less accurate than the others.

In 2013, François Barbancon, Warnow, Evans, Ringe and Nakhleh studied various tree reconstruction methods using simulated data.[82] Their simulated data varied in the number of contact edges, the degree of homoplasy, the deviation from a lexical clock, and the deviation from the rates-across-sites assumption. It was found that the accuracy of the unweighted methods (MP, NJ, UPGMA, and GA) was consistent in all the conditions studied, with MP being the best. The accuracy of the two weighted methods (WMC and WMP) depended on the appropriateness of the weighting scheme. With low homoplasy the weighted methods generally produced the more accurate results, but inappropriate weighting could make these worse than MP or GA under moderate or high homoplasy levels.

Choosing the best model

Choice of an appropriate model is critical for the production of good phylogenetic analyses. Underparameterised or overly restrictive models may produce aberrant behaviour when their underlying assumptions are violated, while overly complex or overparameterised models require long run times and their parameters may be overfitted.[83] The most common method of model selection is the "Likelihood Ratio Test", which produces an estimate of the fit between the model and the data, but as alternatives the Akaike Information Criterion or the Bayesian Information Criterion can be used. Model selection computer programs are available.

Notes

  1. Sapir, Edward (1916). "Time Perspective in Aboriginal American Culture: A Study in Method". Geological Survey Memoir 90, No. 13. Anthropological Series. Ottawa: Government Printing Bureau.
  2. Kroeber, A. L.; Chrétien, C. D. (1937). "Quantitative Classification of Indo-European Languages". Language. 13 (2): 83–103. doi:10.2307/408715. JSTOR 408715.
  3. Ross, Alan S. C. (1950). "Philological Probability Problems". Journal of the Royal Statistical Society. Series B (Methodological). 12 (1): 19–59. doi:10.1111/j.2517-6161.1950.tb00040.x. JSTOR 2983831.
  4. Swadesh, Morris (1952). "Lexico-Statistic Dating of Prehistoric Ethnic Contacts: With Special Reference to North American Indians and Eskimos". Proceedings of the American Philosophical Society. 96 (4): 452–463. JSTOR 3143802.
  5. Bergsland, Knut; Vogt, Hans (1962). "On the Validity of Glottochronology". Current Anthropology. 3 (2): 115–153. doi:10.1086/200264. JSTOR 2739527.
  6. Dyen, Isidore; Kruskal, Joseph B.; Black, Paul (1992). "An Indoeuropean Classification: A Lexicostatistical Experiment". Transactions of the American Philosophical Society. 82 (5): iii–132. doi:10.2307/1006517. JSTOR 1006517.
  7. Ringe, Don; Warnow, Tandy; Taylor, Ann (2002). "Indo‐European and Computational Cladistics". Transactions of the Philological Society. 100: 59–129. doi:10.1111/1467-968X.00091.
  8. Initially announced in Gray, Russell D.; Atkinson, Quentin D. (2003). "Language-tree divergence times support the Anatolian theory of Indo-European origin". Nature. 426 (6965): 435–439. Bibcode:2003Natur.426..435G. doi:10.1038/nature02029. PMID 14647380.
  9. Published by Renfrew, McMahon and Trask in 2000
  10. Bouckaert, R.; Lemey, P.; Dunn, M.; Greenhill, S. J.; Alekseyenko, A. V.; Drummond, A. J.; Gray, R. D.; Suchard, M. A.; Atkinson, Q. D. (2012). "Mapping the Origins and Expansion of the Indo-European Language Family". Science. 337 (6097): 957–960. Bibcode:2012Sci...337..957B. doi:10.1126/science.1219669. PMC 4112997. PMID 22923579.
  11. Honkola, T.; Vesakoski, O.; Korhonen, K.; Lehtinen, J.; Syrjänen, K.; Wahlberg, N. (2013). "Cultural and climatic changes shape the evolutionary history of the Uralic languages". Journal of Evolutionary Biology. 26 (6): 1244–1253. doi:10.1111/jeb.12107. PMID 23675756.
  12. Hruschka, Daniel J.; Branford, Simon; Smith, Eric D.; Wilkins, Jon; Meade, Andrew; Pagel, Mark; Bhattacharya, Tanmoy (2015). "Detecting Regular Sound Changes in Linguistics as Events of Concerted Evolution". Current Biology. 25 (1): 1–9. doi:10.1016/j.cub.2014.10.064. PMC 4291143. PMID 25532895.
  13. Kolipakam, Vishnupriya; Jordan, Fiona M.; Dunn, Michael; Greenhill, Simon J.; Bouckaert, Remco; Gray, Russell D.; Verkerk, Annemarie (2018). "A Bayesian phylogenetic study of the Dravidian language family". Royal Society Open Science. 5 (3): 171504. Bibcode:2018RSOS....571504K. doi:10.1098/rsos.171504. PMC 5882685. PMID 29657761.
  14. Sidwell, Paul. 2015. A comprehensive phylogenetic analysis of the Austroasiatic languages. Presented at Diversity Linguistics: Retrospect and Prospect, 1–3 May 2015 (Leipzig, Germany), Closing conference of the Department of Linguistics at the Max Planck Institute for Evolutionary Anthropology.
  15. Gray, R. D.; Drummond, A. J.; Greenhill, S. J. (2009). "Language Phylogenies Reveal Expansion Pulses and Pauses in Pacific Settlement". Science. 323 (5913): 479–483. Bibcode:2009Sci...323..479G. doi:10.1126/science.1166858. PMID 19164742.
  16. Bowern, Claire and Atkinson, Quentin, 2012. Computational Phylogenetics and the Internal Structure of Pama-Nyungan. Language, Vol. 88, 817-845.
  17. Bouckaert, Remco R.; Bowern, Claire; Atkinson, Quentin D. (2018). "The origin and expansion of Pama–Nyungan languages across Australia". Nature Ecology & Evolution. 2 (4): 741–749. doi:10.1038/s41559-018-0489-3. PMID 29531347.
  18. Currie, Thomas E.; Meade, Andrew; Guillon, Myrtille; Mace, Ruth (2013). "Cultural phylogeography of the Bantu Languages of sub-Saharan Africa". Proceedings of the Royal Society B: Biological Sciences. 280 (1762): 20130695. doi:10.1098/rspb.2013.0695. PMC 3673054. PMID 23658203.
  19. Grollemund, Rebecca; Branford, Simon; Bostoen, Koen; Meade, Andrew; Venditti, Chris; Pagel, Mark (2015). "Bantu expansion shows that habitat alters the route and pace of human dispersals". Proceedings of the National Academy of Sciences. 112 (43): 13296–13301. Bibcode:2015PNAS..11213296G. doi:10.1073/pnas.1503793112. PMC 4629331. PMID 26371302.
  20. Kitchen, Andrew; Ehret, Christopher; Assefa, Shiferaw; Mulligan, Connie J. (2009). "Bayesian phylogenetic analysis of Semitic languages identifies an Early Bronze Age origin of Semitic in the Near East". Proceedings of the Royal Society B: Biological Sciences. 276 (1668): 2703–2710. doi:10.1098/rspb.2009.0408. PMC 2839953. PMID 19403539.
  21. Sicoli, Mark A.; Holton, Gary (2014). "Linguistic Phylogenies Support Back-Migration from Beringia to Asia". PLOS One. 9 (3): e91722. Bibcode:2014PLoSO...991722S. doi:10.1371/journal.pone.0091722. PMC 3951421. PMID 24621925.
  22. Wheeler, Ward C.; Whiteley, Peter M. (2015). "Historical linguistics as a sequence optimization problem: The evolution and biogeography of Uto-Aztecan languages" (PDF). Cladistics. 31 (2): 113–125. doi:10.1111/cla.12078.
  23. Atkinson, Q. D. (2006). From Species to Languages – a phylogenetic approach to human history. PhD thesis, University of Auckland, Auckland.
  24. Walker, Robert S.; Ribeiro, Lincoln A. (2011). "Bayesian phylogeography of the Arawak expansion in lowland South America". Proceedings of the Royal Society B: Biological Sciences. 278 (1718): 2562–2567. doi:10.1098/rspb.2010.2579. PMC 3136831. PMID 21247954.
  25. Michael, Lev, Natalia Chousou-Polydouri, Keith Bartolomei, Erin Donnelly, Vivian Wauters, Sérgio Meira, Zachary O'Hagan. 2015. A Bayesian Phylogenetic Classification of Tupí-Guaraní. LIAMES 15(2):193-221.
  26. Zhang, Menghan; Yan, Shi; Pan, Wuyun; Jin, Li (2019). "Phylogenetic evidence for Sino-Tibetan origin in northern China in the Late Neolithic". Nature. 569 (7754): 112–115. Bibcode:2019Natur.569..112Z. doi:10.1038/s41586-019-1153-z. PMID 31019300.
  27. Sagart, Laurent; Jacques, Guillaume; Lai, Yunfan; Ryder, Robin; Thouzeau, Valentin; Greenhill, Simon J.; List, Johann-Mattis (2019). "Dated language phylogenies shed light on the ancestry of Sino-Tibetan". Proceedings of the National Academy of Sciences of the United States of America. 116 (21): 10317–10322. doi:10.1073/pnas.1817972116. PMC 6534992. PMID 31061123.
  28. McMahon, April M. S.; McMahon, Robert (2005). Language Classification by Numbers. ISBN 978-0199279029.
  29. Harrison, S. P. (2003). "On the Limits of the Comparative Method". In Brian D. Joseph; Richard D. Janda (eds.). The Handbook of Historical Linguistics. Blackwell Publishing. doi:10.1002/9781405166201.ch2 (inactive 2020-04-05). ISBN 9781405166201.
  30. Embleton, Sheila M (1986). Statistics in Historical Linguistics. Brockmeyer. ISBN 9783883395371.
  31. Heggarty, Paul (2006). "Interdiscipline Indiscipline" (PDF). In Peter Forster; Colin Renfrew (eds.). Phylogenetic Methods and the Prehistory of Languages. McDonald Institute Monographs. McDonald Institute for Archaeological Research.
  32. Nichols, Johanna; Warnow, Tandy (2008). "Tutorial on Computational Linguistic Phylogeny". Language and Linguistics Compass. 2 (5): 760–820. doi:10.1111/j.1749-818X.2008.00082.x.
  33. Huson, Daniel H.; Bryant, David (2006). "Application of Phylogenetic Networks in Evolutionary Studies". Molecular Biology and Evolution. 23 (2): 254–267. doi:10.1093/molbev/msj030. PMID 16221896.
  34. Atkinson, Q. D.; Meade, A.; Venditti, C.; Greenhill, S. J.; Pagel, M. (2008). "Languages Evolve in Punctuational Bursts". Science. 319 (5863): 588. doi:10.1126/science.1149683. PMID 18239118.
  35. Swadesh, Morris (1955). "Towards Greater Accuracy in Lexicostatistic Dating". International Journal of American Linguistics. 21 (2): 121–137. doi:10.1086/464321. JSTOR 1263939.
  36. At http://www.idc.upenn.edu
  37. Rexova, K. (2003). "Cladistic analysis of languages: Indo-European classification based on lexicostatistical data". Cladistics. 19 (2): 120–127. doi:10.1016/S0748-3007(02)00147-0.
  38. CSLI Publications, 2001
  39. Holman, Eric W.; Wichmann, Søren; Brown, Cecil H.; Velupillai, Viveka; Müller, André; Bakker, Dik (2008). "Explorations in automated language classification". Folia Linguistica. 42 (3–4). doi:10.1515/FLIN.2008.331.
  40. Haspelmath et al., World Atlas of Language Structures, OUP 2005
  41. On calculating the factor of chance in language comparison, Transactions of the American Philosophical Society 82 (1992)
  42. Language Classification by Numbers
  43. On detection of borrowing, Diachronia 20/2 (2003)
  44. see for example Bergsland and Vogt
  45. For example, Pagel, Atkinson and Meade, Frequency of word-use predicts rates of lexical evolution throughout Indo-European history, Nature 449, 11 Oct 2007
  46. Atkinson and Gray, How old is the Indo-European family (in Phylogenetic Methods and the Prehistory of Languages, Forster and Renfrew, 2006
  47. Indo-European and Computational Cladistics, Transactions of the Philological Society 100/1 (2002)
  48. Nakhleh et al. Perfect Phylogenic networks, Language 81 (2005)
  49. Metropolis et al. 1953
  50. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.799.8282&rep=rep1&type=pdf
  51. Ryder, Robin; Nicholls, Geoff (2011), "Missing data in a stochastic Dollo model for cognate data, and its application to the dating of Proto-Indo-European", Journal of the Royal Statistical Society, Series C, 60 (1): 71–92, doi:10.1111/j.1467-9876.2010.00743.x
  52. Bandelt and Dress 1992
  53. Saitou and Nei (1987)
  54. Bryant and Moulton : Neighbor-net, an agglomerative method for the construction of phylogenetic networks - Molecular Biology and Evolution 21 (2003)
  55. Bandelt et al. 1995
  56. Brooks, Erdem, Minett and Ringe : Character-based cladistics and answer set programming
  57. McMahon and McMahon
  58. Holm : The new arboretum of Indo-European trees - Journal of Quantitative Linguistics 14 (2007)
  59. Nakhleh, Roshan, St John, Sun and Warnow : Designing fast converging phylogenetic methods - Bioinformatics, OUP 2001
  60. Structural Phylogenetics and the reconstruction of ancient language history, Science 309, 2072 (2005)
  61. How to use typological databases in historical linguistic research, Diachronica 24, 373 (2007)
  62. See for example The Mathematical Assessment of Long Range Linguistic Relationships - Language and Linguistics Compass 2/5 (2008)
  63. Kessler and Lehtonen : Multilateral Comparison and Significance Testing
  64. Nichols : Quasi-cognates and Lexical Type Shifts (in Phylogenetics and the Prehistory of Languages, Forster and Renfrew, 2006)
  65. Brown et al. : Automated classification of the world's languages, Sprachtypologie und Universalienforschung, 61.4: 285-308, 2008 Archived June 23, 2010, at the Wayback Machine
  66. ASJP processed languages Archived May 11, 2010, at the Wayback Machine (March 15, 2010)
  67. Müller, A., S. Wichmann, V. Velupillai et al. 2010. ASJP World Language Tree of Lexical Similarity: Version 3 (July 2010). Archived July 30, 2010, at the Wayback Machine
  68. Indo-European language tree by Levenshtein distance
  69. Quantifying change over time in phonetics (in Time-depth in Historical Linguistics, Renfrew, McMahon and Trask, 2001)
  70. Linguistic diversity in Romance Languages, Language 35 1959
  71. Kluge and Farris, Systematic Zoology 18, 1-32 (1969)
  72. Robinson and Foulds : Comparison of phylogenetic trees - Mathematical Biosciences - 53 (1981)
  73. Tutorial on Computational Linguistic Phylogeny, Language and Linguistic Compass 2/5 (2008)
  74. How old is the Indo-European language family? (in Phylogenetic Methods and the Prehistory of Languages, Forster and Renfrew, 2006)
  75. CPHL: Computational Phylogenetics in Historical Linguistics (homepage), 2009 (17 October 2017).
  76. Cladistic analysis of languages, Cladistics 19/2 (2003)
  77. Finding Families, quantitative methods in language classification. Transactions of the Philological Society 101 (2003)
  78. Nakhleh, Warnow, Ringe and Evans, "A Comparison of Phylogenetic Reconstruction Methods on an IE Dataset" (2005)
  79. Linguistic Phylogenetics for three Austronesian families, BA Thesis, Swarthmore College (2005)
  80. Tutorial on Computational Linguistic Phylogeny
  81. A critique of the separation base method for genealogical subgrouping, with data from Mixe-Zoquean, Journal of Quantitative Linguistics 13, 225 (2006)
  82. Barbancon, Warnow, Evans, Ringe and Nakhleh, An Experimental Study Comparing Linguistic Phylogenetic Reconstruction Methods
  83. Sullivan and Joyce, Model selection in phylogenetics, Annual Review of Ecology, Evolution and Systematics 36 (2005)
