Could you elaborate more on the extrinsic evaluation? #4

Open
kaniblu opened this issue Jul 31, 2020 · 3 comments

Comments

kaniblu commented Jul 31, 2020

You mentioned in the paper that you randomly sampled 1% of the training set, plus 5 examples per class for the validation set. I tried to replicate the baseline results on SST-2 by fine-tuning bert-base-uncased (as mentioned in the paper), but my results are much higher than the target numbers.

Your Paper: 59.08 (5.59) [15 trials]
My Attempt: 72.89 (6.36) [9 trials]

I could probably increase the number of trials to see if I was just unlucky, but it is unlikely that statistical variance alone could shift the numbers that much. Could you provide more details about your experiments? Did you sample the datasets with a different seed for each trial?
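
(For concreteness, this is the kind of per-trial subsampling I have in mind; it is only a sketch with made-up names, following the description in the paper rather than any released script.)

# Minimal sketch, not the paper's script: draw one trial's low-resource split,
# i.e. 1% of the training set plus 5 validation examples per class.
import random
from collections import defaultdict

def subsample(train, seed, frac=0.01, dev_per_class=5):
    """train: list of (text, label) pairs; returns (small_train, small_dev)."""
    rng = random.Random(seed)
    shuffled = list(train)
    rng.shuffle(shuffled)

    n_train = max(1, int(len(shuffled) * frac))
    small_train = shuffled[:n_train]

    # Small validation set: dev_per_class examples of each label,
    # drawn from the examples not used for training.
    by_label = defaultdict(list)
    for text, label in shuffled[n_train:]:
        by_label[label].append((text, label))
    small_dev = [ex for exs in by_label.values() for ex in exs[:dev_per_class]]
    return small_train, small_dev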

BTW I am using the dataset provided by the authors of CBERT (training set size 6,228). Thanks in advance.

kaniblu commented Aug 5, 2020

For SNIPS, the classification accuracy (bert-base-uncased) using a randomly sampled 1% of the training set is as follows.

{
  "acc": {
    "mean": 0.9297142857142857,
    "std": 0.02000145767282727,
    "raw": [
      0.9042857142857142,
      0.9285714285714286,
      0.9542857142857143,
      0.95,
      0.9414285714285714,
      0.9385714285714286,
      0.9085714285714286,
      0.9328571428571428,
      0.9171428571428571,
      0.9028571428571428,
      0.96,
      0.9085714285714286,
      0.9071428571428571,
      0.9457142857142857,
      0.9457142857142857
    ]
  },
  "scarcity": 0.01
}

varunkumar-dev commented Aug 12, 2020

Sorry for the late reply. Here is how we sampled the data.

We took the initial dataset and randomly sampled it 15 times (both the training and the dev set). With 1% of the data, 92% accuracy with a std of 0.02 for SNIPS looks too good to be true; you should observe a much larger variance in the 1% experiment.
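
(In other words, the protocol amounts to roughly the following sketch. Here subsample() is the per-trial split sketched in the first comment, and run_trial() is a hypothetical stand-in for fine-tuning bert-base-uncased on the small split and returning test accuracy; neither name comes from the released code.)

from statistics import mean, stdev

# Repeat the resampling 15 times with a different seed each time and report
# mean/std accuracy. full_train and run_trial() are assumed, illustrative names.
accuracies = []
for seed in range(15):
    small_train, small_dev = subsample(full_train, seed=seed)
    accuracies.append(run_trial(small_train, small_dev, seed=seed))

print(f"mean={mean(accuracies):.4f}  std={stdev(accuracies):.4f}")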

kaniblu commented Aug 19, 2020

Thanks for the reply. The SNIPS result of 92% accuracy at the 1% data level (around 20 examples per class) is definitely plausible. As indirect evidence, you can check out the FSI experiments in this paper, which claims that "BERT generalizes well with just 30 examples" and hence goes with 10 seed examples per class.
