Natural Language Processing Models

This repository collects links to my other repositories, which showcase implementations of natural language processing (NLP) models in Python using TensorFlow and Hugging Face's Transformers library.

What is Natural Language Processing?

Natural Language Processing is a field of artificial intelligence that focuses on the interaction between computers and humans through natural language. It involves the development of algorithms and models to enable computers to understand, interpret, and generate human language.

Implemented Models

The following are the natural language processing models that I have implemented to date (a brief usage sketch follows the list):

  1. Text Classification: Text classification is a common NLP task that assigns a label or class to a piece of text based on its content. It is used for tasks such as sentiment analysis, spam detection, and topic categorization.

  2. Token Classification: Token classification involves assigning labels to individual tokens in a sentence or sequence of text. It is commonly used for tasks such as named entity recognition, part-of-speech tagging, and syntactic chunking.

  3. Question Answering: Question answering models aim to generate a relevant answer given a question and context. They are used in applications such as chatbots, virtual assistants, and search engines.

  4. Causal Language Modeling: Causal language modeling predicts the next token in a sequence of tokens, with the model only attending to tokens to the left. These models are frequently used for text generation tasks, where maintaining coherence and context is essential. GPT-2 is an example of a causal language model.

  5. Translation: Translation models convert text from one language to another, enabling cross-lingual communication and content localization. Like summarization, translation is typically framed as a sequence-to-sequence (encoder-decoder) task.

  6. Summarization: Summarization models generate concise summaries of longer texts while retaining essential information. They are valuable for tasks such as document summarization, news aggregation, and content recommendation.
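
The snippet below is a minimal sketch of how each of these tasks can be run with Hugging Face's pipeline API. It uses the library's default checkpoints for illustration only; the actual models, datasets, and fine-tuning code live in the linked repositories and may differ.

```python
from transformers import pipeline

# 1. Text classification (e.g., sentiment analysis)
classifier = pipeline("text-classification")
print(classifier("I really enjoyed this movie!"))

# 2. Token classification (e.g., named entity recognition)
ner = pipeline("token-classification", aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))

# 3. Question answering over a given context
qa = pipeline("question-answering")
print(qa(question="Where is Hugging Face based?",
         context="Hugging Face is based in New York City."))

# 4. Causal language modeling / text generation with GPT-2
generator = pipeline("text-generation", model="gpt2")
print(generator("Natural language processing is", max_new_tokens=20))

# 5. Translation (English to French in this example)
translator = pipeline("translation_en_to_fr")
print(translator("Machine learning is changing the world."))

# 6. Summarization of a longer passage
summarizer = pipeline("summarization")
print(summarizer("Replace this placeholder with a longer article to summarize.",
                 max_length=60, min_length=20))
```

Each pipeline downloads a pretrained model on first use; for the fine-tuned TensorFlow and Transformers implementations of these tasks, see the corresponding repositories linked above.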

Contributions

Contributions to this repository are welcome. If you have any questions or suggestions, please do not hesitate to contact me.

Technological Stack

Python, TensorFlow, Hugging Face, Scikit-learn, Plotly

Contact

Gmail, LinkedIn, GitHub