System Info
- `transformers` version: 4.43.0.dev0
- distributed_type: NO
- mixed_precision: no
- use_cpu: False
- debug: False
- num_processes: 1
- machine_rank: 0
- num_machines: 1
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: False
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
Who can help?
@muellerzr @SunMarc
Information

Tasks

An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)

Reproduction
Execute the following code using accelerate with the MPS accelerator available:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(output_dir="tmp_trainer")
print(training_args.device)
```
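Whether `training_args.device` resolves to `mps` ultimately depends on whether the installed PyTorch build reports the MPS backend as both built and available; a quick standalone check (assuming `torch` is installed) is:

```python
import torch

# is_built(): this PyTorch binary was compiled with MPS support
# is_available(): the current machine can actually use the MPS device
print(torch.backends.mps.is_built())
print(torch.backends.mps.is_available())
```

If both print `True` but the trainer still reports `cpu`, the problem is likely in transformers' device selection rather than in the local PyTorch setup.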
Expected behavior
I expect `mps` to be displayed. However, I see `cpu`.