Issues: pytorch/torchtune

Model merging scripts?
#1179 opened Jul 15, 2024 by suraj-srinivas
Mask eos token for packed dataset
#1177 opened Jul 15, 2024 by iankur
ALLGATHER_BASE timeout error
#1165 opened Jul 11, 2024 by aknvictor
[feature request] Saving / Loading packed dataset (labels: enhancement, help wanted)
#1149 opened Jul 8, 2024 by ScottHoang
generate is correct, but generate from the quantized model gets an error (labels: help wanted, question)
#1148 opened Jul 8, 2024 by artisanclouddev
Resize token embedding (labels: help wanted, question)
#1145 opened Jul 7, 2024 by hungphongtrn
safe_torch_load fails when resuming from checkpoint (labels: bug, question)
#1142 opened Jul 3, 2024 by ScottHoang
text_completion_dataset removed? (label: question)
#1140 opened Jul 3, 2024 by wiiiktor
Quantization for Llama-70b raises CUDA OOM (label: question)
#1128 opened Jun 27, 2024 by lulmer
Improve dataset documentation
#1123 opened Jun 26, 2024 by RdoubleA
Save intermediate checkpoints during training (label: question)
#1107 opened Jun 21, 2024 by l3utterfly
Want DoRA and NEFTune support (label: enhancement)
#1100 opened Jun 20, 2024 by jeffchy