- paddlepaddle: 2.5.2
- paddlepaddle-gpu: 2.5.2
- paddlenlp: 2.8.0
The UNIMO model's `resize_token_embeddings` method does not update the decoder's `vocab_size`, so the input embedding size and the output embedding size cannot be aligned.
```python
from paddlenlp.transformers import UNIMOLMHeadModel, UNIMOTokenizer

tokenizer = UNIMOTokenizer.from_pretrained('./unimo-text-1.0-large')
model = UNIMOLMHeadModel.from_pretrained('./unimo-text-1.0-large')
model.resize_token_embeddings(len(tokenizer))
# Expected: both shapes reflect len(tokenizer); actual: lm_head keeps the old vocab_size.
print(model.get_input_embeddings().weight.shape, model.lm_head.weight.shape)
```
The GPT-2 model had a similar problem, but it has already been fixed (see link). Modifying `unimo/modeling.py` in the same way resolves the issue for UNIMO as well; I will open a PR later.
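A minimal, framework-agnostic sketch of the invariant the fix should restore: when the token embedding matrix is resized, the output projection (`lm_head`) must be resized to the same vocabulary size, copying existing rows and initializing any new ones. This uses NumPy rather than Paddle purely for illustration; the function name `resize_embeddings` and the toy sizes are assumptions, not PaddleNLP APIs.

```python
import numpy as np

def resize_embeddings(weight: np.ndarray, new_num_tokens: int) -> np.ndarray:
    """Return a (new_num_tokens, hidden) matrix: existing rows are copied,
    any newly added rows are randomly initialized."""
    old_num, hidden = weight.shape
    resized = np.random.normal(0.0, 0.02, size=(new_num_tokens, hidden))
    num_to_copy = min(old_num, new_num_tokens)
    resized[:num_to_copy] = weight[:num_to_copy]
    return resized

# Toy model: input embeddings and an output projection of the same shape.
vocab, hidden = 10, 4
emb = np.random.normal(size=(vocab, hidden))
lm_head = emb.copy()

# The bug: resizing only the input side leaves lm_head at the old vocab size.
emb_resized = resize_embeddings(emb, 12)
# The fix: resize the output projection (and the decoder's vocab_size) too,
# so input and output embedding sizes stay aligned.
lm_head_resized = resize_embeddings(lm_head, 12)
assert emb_resized.shape == lm_head_resized.shape == (12, 4)
```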