A quick exercise on how I process a short text from Wikipedia using pre-trained English and Thai language models, breaking the text down into individual sentences and words (tokenization). I then use spaCy functions to perform Part-of-Speech (POS) tagging and syntactic dependency parsing, and visualize the results. Named Entity Recognition (NER) is also applied and the entities are labelled.
San-Maansson/NLP
About
Natural Language Processing with spaCy and pre-trained English and Thai language models