On the Compositional Skills of Sequence-to-Sequence Transformers

This repository implements a standard encoder-decoder transformer model for the SCAN task, as put forth in Brenden Lake & Marco Baroni, 2018.

Data

The data can be downloaded by running:

    chmod u+x ./download_data.sh
    ./download_data.sh
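The SCAN files store one example per line, pairing a command with its action sequence in the form `IN: <command> OUT: <actions>`. A minimal loading sketch, assuming that layout; the file path in the usage example is hypothetical:

```python
# Minimal sketch of loading a SCAN split; assumes each line has the
# form "IN: <command> OUT: <action sequence>".
def load_scan(path: str) -> list[tuple[list[str], list[str]]]:
    examples = []
    with open(path) as f:
        for line in f:
            command, actions = line.strip().removeprefix("IN: ").split(" OUT: ")
            examples.append((command.split(), actions.split()))
    return examples

# Hypothetical path; the download script may lay files out differently.
pairs = load_scan("data/simple_split/tasks_train_simple.txt")
```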

Implementation

Models were tuned with an informal hyperparameter search on the "simple" task. In the paper, the authors seem to use a batch size of 1 for training, so this is also the default in this repository. The best configuration was:

| Hyperparameter   | Value  |
|------------------|--------|
| lr               | 0.0003 |
| layers           | 2      |
| hidden_size      | 128    |
| attention_heads  | 2      |
| epochs           | 5      |

This gives a model of approximately 850,000 parameters, roughly the same size as the LSTM models used in the paper. The models run comfortably on CPU on an Apple M1.
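For reference, a minimal sketch of how such a configuration might be instantiated with PyTorch's `nn.Transformer`. The feed-forward width and the vocabulary sizes below are assumptions, not values taken from this repository, so the printed parameter count will only roughly match the ~850,000 figure:

```python
import torch.nn as nn

# Sketch of a SCAN-sized encoder-decoder transformer using the tuned
# hyperparameters above. dim_feedforward and the vocabulary sizes are
# assumptions (not stated in this README).
HIDDEN_SIZE, LAYERS, HEADS = 128, 2, 2
SRC_VOCAB, TGT_VOCAB = 16, 8  # SCAN vocabularies are tiny (assumed sizes)

src_embed = nn.Embedding(SRC_VOCAB, HIDDEN_SIZE)
tgt_embed = nn.Embedding(TGT_VOCAB, HIDDEN_SIZE)
transformer = nn.Transformer(
    d_model=HIDDEN_SIZE,
    nhead=HEADS,
    num_encoder_layers=LAYERS,
    num_decoder_layers=LAYERS,
    dim_feedforward=512,  # assumption; PyTorch's default is 2048
    batch_first=True,
)
out_proj = nn.Linear(HIDDEN_SIZE, TGT_VOCAB)

n_params = sum(
    p.numel()
    for m in (src_embed, tgt_embed, transformer, out_proj)
    for p in m.parameters()
)
print(f"{n_params:,} parameters")
```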

Results

Figure: Results on the "simple" task for different dataset sizes.

For the length productivity task, the model achieved an accuracy of 82.27%.
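SCAN is typically scored by exact-match sequence accuracy: a prediction counts as correct only if every output token matches the reference. A minimal sketch of that metric; this is an illustrative helper, not code from this repository:

```python
# Exact-match sequence accuracy, the usual SCAN metric: a prediction
# scores 1 only if the whole token sequence equals the reference.
def exact_match_accuracy(
    predictions: list[list[str]], references: list[list[str]]
) -> float:
    correct = sum(p == r for p, r in zip(predictions, references))
    return correct / len(references)

# Example: the second prediction differs, so accuracy is 0.5.
preds = [["I_JUMP", "I_JUMP"], ["I_WALK"]]
refs = [["I_JUMP", "I_JUMP"], ["I_RUN"]]
print(exact_match_accuracy(preds, refs))  # 0.5
```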
