
[Question] GPT-J, MPT support #518

Open
DongqiShen opened this issue Jul 5, 2023 · 0 comments
Hi there,
I looked through the README and didn't find out-of-the-box support for these models. Although they have a structure similar to GPT-2, it is still relatively hard for an LLM engineer to write CUDA. I tried FasterTransformer to speed up MOSS, which is extremely fast, and I look forward to using LightSeq.
Also, I think you should update the README, since I saw that LLaMA is supported. That is very important, because Baichuan has almost the same structure as LLaMA, which matters especially for Chinese open-source models.
One more question: is it possible to implement FlashAttention here so that more NVIDIA cards like the V100 are supported? I saw a collaborator comment saying that the V100 is not supported by the original implementation. From my naive understanding, FlashAttention is mainly an engineering problem, and the key is using shared memory?
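To illustrate what I mean, here is a rough NumPy sketch of my understanding of the tiling / online-softmax idea behind FlashAttention. This is only my own illustration, not LightSeq's kernel or the original CUDA implementation, and the function and variable names are made up:

```python
# Minimal sketch (my understanding only): FlashAttention avoids materializing
# the full N x N attention matrix by processing K/V in tiles that would fit in
# on-chip shared memory (SRAM), keeping running softmax statistics per query row.
import numpy as np

def flash_attention_reference(q, k, v, tile_size=64):
    """Single-head attention computed tile-by-tile with an online softmax.

    q, k, v: (seq_len, head_dim) arrays. On a GPU each K/V tile would be
    staged in shared memory; here the loop only mimics the algorithm.
    """
    seq_len, head_dim = q.shape
    scale = 1.0 / np.sqrt(head_dim)

    out = np.zeros_like(q)
    row_max = np.full(seq_len, -np.inf)   # running max of scores per query row
    row_sum = np.zeros(seq_len)           # running softmax denominator per row

    for start in range(0, seq_len, tile_size):
        k_tile = k[start:start + tile_size]           # "loaded into SRAM"
        v_tile = v[start:start + tile_size]

        scores = (q @ k_tile.T) * scale               # (seq_len, tile)
        tile_max = scores.max(axis=1)
        new_max = np.maximum(row_max, tile_max)

        # Rescale previously accumulated output and denominator to the new max.
        correction = np.exp(row_max - new_max)
        probs = np.exp(scores - new_max[:, None])

        row_sum = row_sum * correction + probs.sum(axis=1)
        out = out * correction[:, None] + probs @ v_tile
        row_max = new_max

    return out / row_sum[:, None]

# Sanity check against the naive O(N^2)-memory implementation.
rng = np.random.default_rng(0)
q = rng.standard_normal((128, 64))
k = rng.standard_normal((128, 64))
v = rng.standard_normal((128, 64))
scores = (q @ k.T) / np.sqrt(64)
probs = np.exp(scores - scores.max(axis=1, keepdims=True))
naive = (probs / probs.sum(axis=1, keepdims=True)) @ v
assert np.allclose(flash_attention_reference(q, k, v), naive, atol=1e-6)
```

If that understanding is right, the full score matrix never has to be written to HBM, which is why I think of it as mostly an engineering / memory-layout problem rather than something tied to a specific GPU generation.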
Thanks for your great work.
