Bug: Could not automatically map llama3 to a tokeniser. Please use tiktoken.get_encoding to explicitly get the tokeniser you expect. #365
Comments
I have been able to fix this with the following code update to entity_extraction_prompt.py:
Should I make a PR to fix the code in the project? Or would you address this in a different manner... My fix might be a bit on the rough side ;-) You will notice I also set the encoding to utf-8 because I ran into an error writing the prompt output without that.
@bmaltais
When trying to use graphrag.prompt_tune with `python -m graphrag.prompt_tune --root . --no-entity-types` and a settings.yaml that points at a local model, I get the error in the title. Looks like the use of a local model via ollama is not expected in the code.