Repository for the word reordering task in Sanskrit, for poetry and prose texts
Yet another TensorFlow implementation of "Attention Is All You Need" (a.k.a. the Transformer)
Multi-head attention for image classification
Attention Is All You Need (https://arxiv.org/abs/1706.03762)
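Since most of the entries below implement this paper, a minimal sketch of its core scaled dot-product attention, softmax(QK^T / sqrt(d_k))V, may help; this is written in PyTorch with illustrative shapes and is not taken from any of the listed repositories.

```python
# Minimal sketch of scaled dot-product attention from the paper:
# Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
import math
import torch

def scaled_dot_product_attention(q, k, v):
    # q, k, v: (batch, seq_len, d_k); all shapes here are illustrative
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)  # (batch, seq, seq)
    weights = torch.softmax(scores, dim=-1)            # rows sum to 1
    return weights @ v                                 # weighted sum of values

# Tiny usage example with random tensors
q = k = v = torch.randn(2, 5, 64)
out = scaled_dot_product_attention(q, k, v)  # shape (2, 5, 64)
```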
Implementation of the "Attention Is All You Need" paper by Vaswani et al.
A recurrent attention module consisting of an LSTM cell that can query its own past cell states by means of windowed multi-head attention. The formulas are derived from the BN-LSTM and the Transformer network. The LARNN cell with attention can be used inside a loop on the cell state, just like any other RNN cell. (LARNN)
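A minimal sketch of the idea that description outlines: an LSTM cell whose hidden state queries a window of its own past cell states through multi-head attention. This is an illustrative reconstruction in PyTorch, not the LARNN authors' code; the class name, window size, and head count are all hypothetical choices.

```python
# Sketch of an LSTM cell that queries a window of its own past cell states
# with multi-head attention (the idea behind the LARNN description above).
# Illustrative reconstruction, not the LARNN authors' code.
import torch
import torch.nn as nn

class WindowedAttentionLSTMCell(nn.Module):  # hypothetical name
    def __init__(self, input_size, hidden_size, num_heads=4, window=16):
        super().__init__()
        # hidden_size must be divisible by num_heads
        self.cell = nn.LSTMCell(input_size, hidden_size)
        self.attn = nn.MultiheadAttention(hidden_size, num_heads, batch_first=True)
        self.window = window

    def forward(self, x_seq):
        # x_seq: (batch, time, input_size)
        batch = x_seq.size(0)
        h = x_seq.new_zeros(batch, self.cell.hidden_size)
        c = x_seq.new_zeros(batch, self.cell.hidden_size)
        past_cells, outputs = [], []
        for t in range(x_seq.size(1)):
            h, c = self.cell(x_seq[:, t], (h, c))
            # keep only the most recent `window` cell states
            past_cells = (past_cells + [c])[-self.window:]
            memory = torch.stack(past_cells, dim=1)   # (batch, <=window, hidden)
            query = h.unsqueeze(1)                    # current state as the query
            attended, _ = self.attn(query, memory, memory)
            c = c + attended.squeeze(1)               # fold attention into the cell state
            outputs.append(h)
        return torch.stack(outputs, dim=1)            # (batch, time, hidden)

# Usage: out = WindowedAttentionLSTMCell(32, 64)(torch.randn(2, 10, 32))
```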
Simple TensorFlow implementation of the Transformer introduced in "Attention Is All You Need" (NIPS 2017)
Transformer Based SeqGAN for Language Generation
Pre-training of Deep Bidirectional Transformers for Language Understanding: pre-train TextCNN
A simple TensorFlow implementation of the Transformer
Implementation of the Transformer from the paper "Attention Is All You Need" in Keras
A PyTorch implementation of the Transformer model in "Attention is All You Need".
Attention Is All You Need
Implementation of the Transformer architecture described by Vaswani et al. in "Attention Is All You Need"
Code related to jigsaw-unintended-bias-in-toxicity-classification based on https://github.com/huggingface/pytorch-pretrained-BERT/