Bigram

A bigram or digram is a sequence of two adjacent elements from a string of tokens, which are typically letters, syllables, or words. A bigram is an n-gram for n = 2. The frequency distribution of bigrams in a string is commonly used for simple statistical analysis of text in many applications, including computational linguistics, cryptography, and speech recognition.
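As a minimal sketch in Python (the bigrams helper and the example string are illustrative, not from any particular library), the bigrams of a string and their frequency distribution can be computed in one pass:

```python
from collections import Counter

def bigrams(tokens):
    """Return the adjacent pairs of a token sequence, in order."""
    return list(zip(tokens, tokens[1:]))

# Letter-level bigrams; the same code works on a list of word tokens.
counts = Counter(bigrams("statistics"))
print(counts.most_common(2))  # [(('s', 't'), 2), (('t', 'i'), 2)]
```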

Gappy bigrams or skipping bigrams are word pairs that allow gaps between the two words, perhaps to avoid connecting words, or to allow some simulation of dependencies, as in a dependency grammar.

Head word bigrams are gappy bigrams with an explicit dependency relationship.

Bigrams help provide the conditional probability of a token given the preceding token, when the relation of the conditional probability is applied:

P(W_n | W_{n-1}) = P(W_{n-1}, W_n) / P(W_{n-1})

That is, the probability P(W_n | W_{n-1}) of a token given the preceding token is equal to the probability of their bigram, or the co-occurrence of the two tokens P(W_{n-1}, W_n), divided by the probability of the preceding token P(W_{n-1}).
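A minimal sketch of this maximum-likelihood estimate from raw counts (the toy corpus and the function name are illustrative):

```python
from collections import Counter

corpus = "the cat sat on the mat the cat ran".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def p_next(prev, word):
    """MLE estimate: P(word | prev) = count(prev, word) / count(prev)."""
    return bigrams[(prev, word)] / unigrams[prev]

# "the" occurs 3 times and is followed by "cat" twice.
print(p_next("the", "cat"))  # 0.666...
```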

Applications

Bigrams are used in most successful language models for speech recognition.[1] They are a special case of n-grams.

Bigram frequency attacks can be used in cryptography to solve cryptograms. See frequency analysis.
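As an illustrative sketch of the idea (not a complete attack), candidate decryptions of a cryptogram can be ranked by how closely their letter bigrams match typical English frequencies; the frequency values and the floor probability below are assumptions for demonstration:

```python
import math

# A few common English bigram frequencies in percent (see the table below).
FREQ = {"th": 1.52, "he": 1.28, "in": 0.94, "er": 0.94, "an": 0.82, "re": 0.68}
FLOOR = 0.01  # assumed tiny frequency for bigrams missing from the table

def englishness(text):
    """Sum of log bigram frequencies; higher means more English-like."""
    pairs = (text[i:i + 2] for i in range(len(text) - 1))
    return sum(math.log(FREQ.get(p, FLOOR)) for p in pairs)

print(englishness("therein") > englishness("xqzkwv"))  # True
```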

Bigram frequency is one approach to statistical language identification.
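A minimal sketch of that approach, assuming one reference profile per language trained on far more text than the short strings shown here:

```python
from collections import Counter

def profile(text):
    """Normalized letter-bigram frequency distribution of a text."""
    counts = Counter(text[i:i + 2] for i in range(len(text) - 1))
    total = sum(counts.values())
    return {pair: n / total for pair, n in counts.items()}

def cosine(a, b):
    """Cosine similarity between two bigram profiles."""
    dot = sum(w * b.get(pair, 0.0) for pair, w in a.items())
    norm = lambda v: sum(w * w for w in v.values()) ** 0.5
    return dot / (norm(a) * norm(b))

# Identify a sample as the language whose profile it most resembles.
english = profile("the quick brown fox jumps over the lazy dog")
sample = profile("the rain in spain stays mainly in the plain")
print(cosine(english, sample))
```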

Some activities in logology or recreational linguistics involve bigrams. These include attempts to find English words beginning with every possible bigram,[2] or words containing a string of repeated bigrams, such as logogogue.[3]

Bigram frequency in the English language

The relative frequencies (in percent) of the most common letter bigrams in a small English corpus are:[4]

th 1.52       en 0.55       ng 0.18
he 1.28       ed 0.53       of 0.16
in 0.94       to 0.52       al 0.09
er 0.94       it 0.50       de 0.09
an 0.82       ou 0.50       se 0.08
re 0.68       ea 0.47       le 0.08
nd 0.63       hi 0.46       sa 0.06
at 0.59       is 0.46       si 0.05
on 0.57       or 0.43       ar 0.04
nt 0.56       ti 0.34       ve 0.04
ha 0.56       as 0.33       ra 0.04
es 0.56       te 0.27       ld 0.02
st 0.55       et 0.19       ur 0.02

Complete bigram frequencies for a larger corpus are available.[5][6]

References

  1. Collins, Michael John (1996-06-24). "A new statistical parser based on bigram lexical dependencies". Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics. pp. 184–191. arXiv:cmp-lg/9605012. doi:10.3115/981863.981888. Retrieved 2018-10-09.
  2. Cohen, Philip M. (1975). "Initial Bigrams". Word Ways. 8 (2). Retrieved 11 September 2016.
  3. Corbin, Kyle (1989). "Double, Triple, and Quadruple Bigrams". Word Ways. 22 (3). Retrieved 11 September 2016.
  4. Cornell Math Explorer's Project. "Substitution Ciphers".
  5. Jones, Michael N.; Mewhort, D. J. K. (August 2004). "Case-sensitive letter and bigram frequency counts from large-scale English corpora". Behavior Research Methods, Instruments, and Computers. 36 (3): 388–396. doi:10.3758/bf03195586. ISSN 0743-3808. PMID 15641428.
  6. "English Letter Frequency Counts: Mayzner Revisited or ETAOIN SRHLDCU". norvig.com. Retrieved 2019-10-28.