Involves building a search engine on the 2013 Wikipedia data dump (43 GB). Search results are returned in real time.
Corpus creator for Chinese Wikipedia
A complete Python text analytics package that allows users to search for a Wikipedia article, scrape it, conduct basic text analytics, and integrate it into a data pipeline without writing excessive code.
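A minimal sketch of that kind of search-scrape-analyze workflow, assuming the widely used `wikipedia` PyPI package and a hypothetical `top_terms` helper rather than this repository's own API:

```python
# Sketch of a search -> scrape -> basic analytics workflow.
# Assumes the third-party `wikipedia` package (pip install wikipedia);
# illustrative only, not necessarily how the package above works.
from collections import Counter

import wikipedia

def top_terms(query, n=10):
    """Fetch the best-matching article and return its most frequent words."""
    title = wikipedia.search(query)[0]      # take the top search hit
    text = wikipedia.page(title).content    # plain-text article body
    words = [w.lower() for w in text.split() if w.isalpha()]
    return Counter(words).most_common(n)

if __name__ == "__main__":
    print(top_terms("Natural language processing"))
```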
Practical ML and NLP with examples.
Reads the data from OPIEC, an Open Information Extraction corpus.
Python package for working with MediaWiki XML content dumps
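For context, a rough sketch of how MediaWiki XML content dumps are typically streamed with the standard library; the package above likely exposes a richer API, and the namespace URI and file name below are placeholders:

```python
# Streaming pages out of a MediaWiki XML dump with xml.etree.iterparse.
# Illustrative only; the export namespace version varies by dump.
import xml.etree.ElementTree as ET

NS = "{http://www.mediawiki.org/xml/export-0.10/}"  # check your dump's version

def iter_pages(path):
    """Yield (title, wikitext) pairs without loading the whole dump into memory."""
    for _event, elem in ET.iterparse(path, events=("end",)):
        if elem.tag == NS + "page":
            title = elem.findtext(NS + "title")
            text = elem.findtext(f"{NS}revision/{NS}text") or ""
            yield title, text
            elem.clear()  # free memory as the file is consumed

if __name__ == "__main__":
    for title, text in iter_pages("enwiki-latest-pages-articles.xml"):
        print(title, len(text))
        break
```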
Wikipedia text corpus for self-supervised NLP model training
Collects a multimodal dataset of Wikipedia articles and their images
Converts Chinese Wikipedia XML dumps into human-readable documents in Markdown and txt.
A desktop application that searches through a set of Wikipedia articles using Apache Lucene.
Code and data for the paper 'Unsupervised Word Polysemy Quantification with Multiresolution Grids of Contextual Embeddings'
(Ongoing module in development) Fetches the parsed content of Wikipedia articles. Created for building text corpora quickly and easily, but can be freely used for other purposes too.
Some Faroese language statistics taken from fo.wikipedia.org content dump
Builds Wikipedia corpora in I5 (a TEI-based format)
A search engine built on a 75 GB Wikipedia dump. Involves creating an index file and returns search results in real time.
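Real-time results over a dump that size usually come from a prebuilt inverted index rather than scanning text at query time. A toy sketch of the idea, with hypothetical helper names that are not this repository's code:

```python
# Toy inverted index: build once offline, then answer queries from the index.
# Names and structure are illustrative, not this repository's implementation.
from collections import defaultdict

def build_index(docs):
    """Map each token to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for token in text.lower().split():
            index[token].add(doc_id)
    return index

def search(index, query):
    """Return doc ids containing every query token (boolean AND)."""
    postings = [index.get(t, set()) for t in query.lower().split()]
    return set.intersection(*postings) if postings else set()

docs = {1: "Wikipedia data dump", 2: "search engine over the Wikipedia dump"}
idx = build_index(docs)
print(search(idx, "wikipedia search"))  # -> {2}
```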
Clustering of Spanish Wikipedia articles.
A search engine built from a corpus of Wikipedia articles to provide efficient query results.