Natural Language Processing
Go to [[Week 2 - Introduction]] or back to the [[Main AI Page]]
Tagging of sentence parts forms the basis of most NLP functions.
Rule-based approaches don't scale to the enormous datasets available today, so statistical tagging and tokenisation are used instead.
Methods like the [[Hidden Markov Model]] (HMM) let the computer, without human intervention, work out the context of each word within a sentence and identify the appropriate part of speech, tag, or lemma for that word, depending on the task.
These HMMs use bigrams or n-grams of the words surrounding a masked word to estimate the probability of what that word might be. There are multiple masking strategies, some more effective than others. These models are especially useful in text generation:
>>> import nltk
>>> sentence = "the man we saw saw a saw"
>>> tokens = nltk.word_tokenize(sentence)
>>> list(nltk.bigrams(tokens))
[('the', 'man'), ('man', 'we'), ('we', 'saw'), ('saw', 'saw'),
('saw', 'a'), ('a', 'saw')]
>>>
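Bigram counts like the ones above can be turned into conditional probabilities by maximum-likelihood estimation: P(next word | current word) is the bigram count divided by the count of the current word. A minimal sketch in plain Python (the `bigram_probs` helper is hypothetical, not part of nltk):

```python
from collections import Counter

def bigram_probs(tokens):
    """Estimate P(next | current) from bigram counts (maximum likelihood)."""
    pairs = list(zip(tokens, tokens[1:]))          # adjacent word pairs
    pair_counts = Counter(pairs)                   # count each bigram
    left_counts = Counter(tokens[:-1])             # count each left-hand word
    return {(w1, w2): c / left_counts[w1] for (w1, w2), c in pair_counts.items()}

tokens = ["the", "man", "we", "saw", "saw", "a", "saw"]
probs = bigram_probs(tokens)
# "saw" is followed once by "saw" and once by "a", so the model splits evenly:
print(probs[("saw", "saw")])  # 0.5
print(probs[("saw", "a")])    # 0.5
print(probs[("the", "man")])  # 1.0
```

On a real corpus these estimates would be smoothed (e.g. add-one smoothing) so unseen bigrams don't get zero probability.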
Extension Pages
natural language processing
Go back to the [[AI Glossary]]
Natural Language Processing is the domain within machine learning dealing with the tasks a machine needs in order to work with natural language at, or close to, the level of abstraction at which humans use it.
These tasks include:
- tokenisation
- tagging
- lemmatisation
- dependency identification and parse trees
- shape identification
- part-of-speech recognition
- named entity recognition (NER)
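Two of these tasks, tokenisation and tagging, can be sketched in a few lines of plain Python. This toy version (the `tokenise` and `tag` helpers and the tiny lexicon are illustrative assumptions, not a real library API) also shows why simple lookup fails without context: every "saw" gets the same tag, which is exactly the ambiguity a statistical model like an HMM resolves.

```python
import re

def tokenise(text):
    # Toy tokeniser: words or single punctuation marks (illustrative only).
    return re.findall(r"\w+|[^\w\s]", text)

def tag(tokens, lexicon):
    # Toy lookup tagger: a tiny hand-built lexicon stands in for a
    # trained statistical model; unknown words default to noun ("NN").
    return [(t, lexicon.get(t.lower(), "NN")) for t in tokens]

lexicon = {"the": "DT", "a": "DT", "we": "PRP", "saw": "VBD"}
tokens = tokenise("The man we saw saw a saw.")
print(tag(tokens, lexicon))
# Every "saw" is tagged VBD, even the final noun use: context-free
# lookup cannot tell the verb from the noun.
```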
Technologies famous within NLP include:
#ToDo Find a way to use [[Natural Language Generation]] to generate my own [[articles]], most likely about [[Economics]].