[ACCEPTED] In natural language processing, what is the purpose of chunking? [nlp]
Chunking is also called shallow parsing, and it is basically the identification of parts of speech and short phrases (like noun phrases). Part-of-speech tagging tells you whether words are nouns, verbs, adjectives, etc., but it doesn't give you any clue about the structure of the sentence or the phrases in it. Sometimes it's useful to have more information than just the parts of speech of words, but you don't need the full parse tree that you would get from parsing.
An example of a task where chunking might be preferable is named entity recognition (NER). In NER, your goal is to find named entities, which tend to be noun phrases (though not always), so you would want to know that "President Barack Obama" is in the following sentence:
President Barack Obama criticized insurance companies and banks as he urged supporters to pressure Congress to back his moves to revamp the health-care system and overhaul financial regulations. (source)
But you wouldn't necessarily care that he is the subject of the sentence.
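As a toy illustration of the distinction, here is a minimal rule-based NP chunker over pre-tagged tokens. The grammar (treating any maximal run of determiner/adjective/noun tags as one NP) is a deliberate simplification made up for this answer; real chunkers use richer rules or trained models:

```python
# Tags we treat as allowed inside a noun phrase (simplified assumption).
NP_TAGS = {"DT", "JJ", "NN", "NNS", "NNP", "NNPS"}

def np_chunks(tagged):
    """Return NP chunks from a list of (word, POS-tag) pairs.

    Any maximal run of tokens whose tag is in NP_TAGS is grouped
    into a single chunk; everything else is skipped.
    """
    chunks, current = [], []
    for word, tag in tagged:
        if tag in NP_TAGS:
            current.append(word)
        elif current:
            chunks.append(" ".join(current))
            current = []
    if current:
        chunks.append(" ".join(current))
    return chunks

# The example sentence from above, already POS-tagged by hand:
tagged = [("President", "NNP"), ("Barack", "NNP"), ("Obama", "NNP"),
          ("criticized", "VBD"), ("insurance", "NN"), ("companies", "NNS"),
          ("and", "CC"), ("banks", "NNS")]
print(np_chunks(tagged))
# → ['President Barack Obama', 'insurance companies', 'banks']
```

Note that the chunker recovers "President Barack Obama" as one unit without building any parse tree, which is exactly the trade-off described above.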
Chunking has also been fairly commonly used as a preprocessing step for other tasks like example-based machine translation, natural language understanding, speech generation, and others.
For "text chunking" in natural language processing, see here (you probably want all the lectures in this series as a kind of "NLP 101"...): it spans a series of tasks such as finding noun groups, finding verb groups, and the complete partitioning of a sentence into chunks of several types. The lecture whose URL I quoted goes into more detail!
Grouping words into syntactically correlated phrases (chunks). NB: IOB labelling can be used to indicate chunk boundaries.
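A tiny sketch of how IOB labels might be assigned (B = begins a chunk, I = inside a chunk, O = outside any chunk). The NP tag set and the one-rule chunking decision are simplifying assumptions for illustration, not a standard implementation:

```python
# Tags allowed inside a noun phrase (simplified assumption for this sketch).
NP_TAGS = frozenset({"DT", "JJ", "NN", "NNS", "NNP", "NNPS"})

def iob_labels(tagged):
    """Label each (word, POS-tag) pair with B-NP, I-NP, or O.

    A token in NP_TAGS starts a chunk (B-NP) unless the previous
    token was also in a chunk, in which case it continues it (I-NP).
    """
    labels, inside = [], False
    for word, tag in tagged:
        if tag in NP_TAGS:
            labels.append("I-NP" if inside else "B-NP")
            inside = True
        else:
            labels.append("O")
            inside = False
    return labels

print(iob_labels([("President", "NNP"), ("Obama", "NNP"),
                  ("criticized", "VBD"), ("banks", "NNS")]))
# → ['B-NP', 'I-NP', 'O', 'B-NP']
```

The B/I distinction is what lets the labelling mark a boundary between two adjacent chunks, which a plain in/out flag could not express.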