
please how to call it locally #185

Open

NanshaNansha opened this issue Jul 4, 2024 · 1 comment

Comments

@NanshaNansha

[screenshot attachment]

@Siddharth-Latthe-07

@NanshaNansha To load a PEFT model locally, you need to make sure the base model and the adapter files are available on disk. Note that PeftModel.from_pretrained expects an already-loaded base model as its first argument, not a path string.
Try out these steps and let me know if it works:

  1. Install the dependencies: pip install transformers peft
  2. Prepare local paths: set the local paths to your pretrained base model, your adapter directory, and your cache directory.
    Example snippet:
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Define the local paths to the base model, the adapter weights, and the cache
base_model_path = 'path/to/your/base_model'
adapter_path = 'path/to/your/local_adapter_directory'
cache_dir = 'path/to/your/cache_directory'

# Load the base model first; swap in the Auto class that matches your task
# (AutoModelForCausalLM is used here as an example)
base_model = AutoModelForCausalLM.from_pretrained(base_model_path, cache_dir=cache_dir)

# Attach the locally stored adapter weights to the loaded base model;
# PeftModel.from_pretrained takes the model object, then the adapter path
model = PeftModel.from_pretrained(base_model, adapter_path, cache_dir=cache_dir)

# Example: use the model for inference
tokenizer = AutoTokenizer.from_pretrained(base_model_path)
input_text = "Your input text here"

inputs = tokenizer(input_text, return_tensors="pt")
outputs = model(**inputs)

print(outputs)
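
A plain forward pass only returns logits. If your base model is a causal language model (as in the AutoModelForCausalLM assumption above), a minimal generation sketch would look like this; max_new_tokens=50 is just an illustrative choice:

import torch

# Generate text with greedy decoding instead of inspecting raw logits;
# PeftModel forwards generate() to the underlying base model
with torch.no_grad():
    generated = model.generate(**inputs, max_new_tokens=50)

print(tokenizer.decode(generated[0], skip_special_tokens=True))

If the adapter is LoRA-based and you plan to run repeated inference, you can also call model.merge_and_unload() to fold the adapter weights into the base model and avoid the adapter overhead at runtime.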

Let me know if it works.
Thanks
