You can use this page to generate n-grams for your texts. This site was made by Alex Reuneker. For a short demonstration and how-to, see https://youtu.be/4aRD_NwuHW4. For questions, see the contact details at https://www.reuneker.nl. If you use this site for your research, please cite it as follows.
Reuneker, A. (2019). N-gram generator. Retrieved ..., from https://www.reuneker.nl/ngram.
Choose your preferred settings, or leave them at their defaults.
Paste a text to analyze below.
Results will be presented here after you click 'generate n-grams'. Please realise that large texts may take a while and may cause your browser to slow down. In such cases, it is wise not to set 'results per page' to 'unlimited'.
The n-gram function was written in vanilla JavaScript, and your text is not uploaded to any server; your computer does all the work. Small texts are processed very quickly, and even long texts are manageable: getting all bigrams from the King James Bible (4.2MB; almost 800.000 words) took my laptop three seconds with multiple tabs and other applications open. Please note that retrieving a long or (virtually) unlimited list of results may slow down or even crash your browser due to the memory limitations of your computer and browser.
An n-gram is simply a sequence of n words: 'cat' is a unigram, 'my cat' is a bigram, and 'my sleepy cat' is a trigram. There are more interesting things to know about n-grams. For a short explanation of frequencies, probabilities and my (rather unusual) 'strength' measure, click here.
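To make the idea concrete, here is a minimal sketch in plain JavaScript of how n-grams and their frequencies can be counted. It is not the site's actual code; the function name and the whitespace-based tokenization are my own simplifications.

    // Minimal sketch (not the site's actual code): count all n-grams in a text.
    function countNgrams(text, n) {
      const words = text.toLowerCase().split(/\s+/).filter(Boolean);
      const counts = new Map();
      for (let i = 0; i + n <= words.length; i++) {
        const gram = words.slice(i, i + n).join(' ');
        counts.set(gram, (counts.get(gram) || 0) + 1);
      }
      return counts;
    }

    countNgrams('my sleepy cat', 2);
    // Map { 'my sleepy' => 1, 'sleepy cat' => 1 }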
This site not only generates all n-grams and their frequencies from a text you provide, but also calculates the conditional probability of the n-gram given the first word. Let's take the text 'I see a cat. I see a cat and a dog.' The n-grams 'I see', 'see a' and 'a cat' all occur twice and have a frequency of 2. If we take the word 'I' in the text, it occurs twice and is followed twice by 'see'. I am thus quite certain that when I encounter the word 'I', the next word will be 'see'. The probability is therefore 1 and is calculated as the number of times 'I see' occurs, divided by the number of times 'I' occurs. 'I see' occurs two times and 'I' occurs two times, so 2/2=1. If we do the same for 'see' in 'see a', which both also occur two times, we get the same result: 2/2=1. But take 'a' in 'a cat'. We have already seen that 'a cat' occurs twice, but 'a' occurs three times ('a cat', 'a cat' and 'a dog'). We therefore divide 2 (the frequency of 'a cat') by 3 and get 2/3=0.67. This is the probability that, given the word 'a', the next word is 'cat'. The fun thing is that the remaining 1-0.67=0.33 is exactly the probability of 'dog' following 'a': 'a dog' occurs only once, while 'a' occurs three times, so 1/3=0.33.
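The same calculation in code, continuing the sketch above. Again a simplification: punctuation is stripped and sentence boundaries are ignored here, whereas the site lets you choose whether to respect them.

    // Conditional probability of 'second' given 'first':
    // count('first second') divided by count('first').
    function bigramProbability(text, first, second) {
      const words = text.toLowerCase().replace(/[.,!?;:]/g, '').split(/\s+/).filter(Boolean);
      let firstCount = 0, pairCount = 0;
      for (let i = 0; i < words.length; i++) {
        if (words[i] === first) {
          firstCount++;
          if (words[i + 1] === second) pairCount++;
        }
      }
      return firstCount === 0 ? 0 : pairCount / firstCount;
    }

    const text = 'I see a cat. I see a cat and a dog.';
    bigramProbability(text, 'a', 'cat'); // 2/3 ≈ 0.67
    bigramProbability(text, 'a', 'dog'); // 1/3 ≈ 0.33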
While there is lots more to n-grams than what I wrote above, I've added one more feature that I can introduce without too much theorizing. One of the problems of the aforementioned probabilities is that infrequent n-grams involving low-frequency words can have high probabilities. Take 'unladylike girls' in Alcott's 'Little Women'. The n-gram occurs only once, but the adjective 'unladylike' also occurs only once. This means the probability of 'girls' given 'unladylike' is 1. Now take 'Mrs. March', which occurs 141 times in the novel. There are other 'Mesdames' (the plural of 'Mrs.'), like 'Mrs. Gardiner' and 'Mrs. King', but none is as frequent as 'Mrs. March'. The probability, however, is 'only' 0.59, because of those other mesdames. I find this n-gram more interesting than an n-gram that has a high probability but occurs only once or a few times. Therefore, the strength measure I introduce here takes both frequency and probability into account: it takes the natural logarithm of the n-gram's frequency and multiplies it by its probability. The resulting number isn't really meaningful in itself, but only relative to that of the other n-grams, which makes it great for sorting and finding those n-grams that have the right balance between frequency and probability. Check, for instance, Dickens' 'A Christmas Carol'. Sorting n-grams by frequency places 'in the' on top, sorting by probability places 'piece of' on top, and sorting by strength places 'Tiny Tim' at number 1. As for informativeness, I'd take 'Tiny Tim'.
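In code, the measure is a single line. Note that the figure of roughly 239 occurrences of 'Mrs.' below is not taken from the novel itself; I derived it from the 0.59 probability quoted above (141/0.59 ≈ 239).

    // Strength = ln(n-gram frequency) × conditional probability.
    function strength(ngramFreq, firstWordFreq) {
      return Math.log(ngramFreq) * (ngramFreq / firstWordFreq);
    }

    strength(1, 1);     // 'unladylike girls': ln(1) × 1 = 0
    strength(141, 239); // 'Mrs. March': ln(141) × 0.59 ≈ 2.92

Note how 'unladylike girls' gets a strength of 0 despite its probability of 1, because ln(1) = 0; that is exactly why it sorts below the frequent, fairly predictable 'Mrs. March'.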
Update: Added option to exclude not only numbers, but also words containing numbers. (2024-01-29)
Update: Slight efficiency rewrite of output rendering. (2024-01-26)
Update: Added feature for respecting or ignoring sentence boundaries. (2024-01-25)
Update: Added feature for including or excluding numbers. (2024-01-25)
Update: Added top limits above 1.000 (2.000, 3.000, 4.000, 5.000, 10.000) for the number of results. (2024-01-25)
Update: Added feature for (virtually) unlimited results. (2024-01-22)
Update: Added feature for unigrams. (2024-01-22)
Update: Added feature to export results to CSV. (2022-08-31)
Update: Added feature to exclude words before processing text. (2022-08-31)
Update: Fixed newline problem resulting in n+1-grams. (2021-01-17)
Update: Added strength measure. (2019-12-11)
Update: Added conditional probabilities. (2019-12-10)