
[Bug]: multiple LoRa in prompt with PonyXL #16150

Open
DenSckriva opened this issue Jul 4, 2024 · 2 comments
Labels
bug-report Report of a bug, yet to be confirmed

DenSckriva commented Jul 4, 2024

Checklist

  • The issue exists after disabling all extensions
  • The issue exists on a clean installation of webui
  • The issue is caused by an extension, but I believe it is caused by a bug in the webui
  • The issue exists in the current version of the webui
  • The issue has not been reported before recently
  • The issue has been reported before but has not been fixed yet

What happened?

Hello!

I'm coming back to you with a problem I can't resolve: I can only use one LoRA in my prompt.

If I put two, only the last LoRA in the prompt is applied, and I can no longer combine several LoRAs. I should point out that I'm currently using PonyXL models.

I have of course tried changing the weight of each one, but again, only the change made to the last LoRA has any effect.

I have no problem with SD1.5. If you have a solution or ideas, I'm interested!

Steps to reproduce the problem

1. Type your positive prompt, e.g.: text, text, text, <lora:yourlora:1>, text, text, text, <lora:yourlora2:1>, text
2. Type your negative prompt: text, text, text, (text:1.1), text
3. Generate
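
Since the webui was launched with --api (see the console log below), the same prompt can also be sent programmatically, which helps rule out a UI-side prompt-parsing issue. A minimal sketch, assuming the local URL from the log and placeholder LoRA names:

```python
# Minimal sketch (not from the report): send the same prompt over the webui API,
# which the launch arguments above enable via --api. LoRA names, image size and
# step count are placeholders; the URL matches the one in the console log.
import base64
import requests

payload = {
    "prompt": "text, text, text, <lora:yourlora:1>, text, text, <lora:yourlora2:1>, text",
    "negative_prompt": "text, text, text, (text:1.1), text",
    "steps": 40,
    "width": 1024,
    "height": 1024,
}

resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload, timeout=600)
resp.raise_for_status()

# The API returns base64-encoded images; decode and save the first one.
with open("txt2img_output.png", "wb") as f:
    f.write(base64.b64decode(resp.json()["images"][0]))
```

If only one LoRA takes effect here as well, the problem is in the backend LoRA patching rather than in the UI.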

What should have happened?

Generation of an image using both LoRAs mentioned in the prompt.

What browsers do you use to access the UI?

Other

Sysinfo

sysinfo-2024-07-04-20-57.json

Console logs

Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr  5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: v1.9.4
Commit hash: feee37d75f1b168768014e4634dcb156ee649c05
Launching Web UI with arguments: --medvram-sdxl --xformers --api --gradio-allowed-path G:\StabilityMatrix-win-x64\Data\Images
2024-07-04 23:09:33.756382: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
WARNING:tensorflow:From G:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\keras\src\losses.py:2976: The name tf.losses.sparse_softmax_cross_entropy is deprecated. Please use tf.compat.v1.losses.sparse_softmax_cross_entropy instead.

Loading weights [c762d00ce6] from G:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\models\Stable-diffusion\zavyfantasiaxlPDXL_v10.safetensors
Creating model from config: G:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\repositories\generative-models\configs\inference\sd_xl_base.yaml
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 11.4s (prepare environment: 1.8s, import torch: 2.4s, import gradio: 0.6s, setup paths: 4.4s, initialize shared: 0.2s, other imports: 0.4s, load scripts: 0.5s, create ui: 0.5s, gradio launch: 0.2s, add APIs: 0.1s).
G:\StabilityMatrix-win-x64\Data\Packages\Stable Diffusion WebUI\venv\lib\site-packages\huggingface_hub\file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
Applying attention optimization: xformers... done.
Model loaded in 6.0s (load weights from disk: 0.2s, create model: 1.6s, apply weights to model: 2.2s, apply fp8: 1.6s, calculate empty prompt: 0.3s).
100%|██████████| 40/40 [00:13<00:00,  2.93it/s]
Total progress: 100%|██████████| 40/40 [00:10<00:00,  3.75it/s]s]
100%|██████████| 40/40 [00:10<00:00,  3.82it/s]
Total progress: 100%|██████████| 40/40 [00:10<00:00,  3.69it/s]s]

Additional information

No response

DenSckriva added the bug-report label on Jul 4, 2024
@nekoworkshop

Sounds like #15995

@DenSckriva Could you try again with FP8 disabled?

@DenSckriva (Author)

It seems to work, but I have to switch the VAE decode method from "Full" to "TAESD". Without FP8 it generates NaN errors and nothing else.
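
For anyone hitting the same thing, a minimal sketch of how these two settings could be checked and flipped over the API's /sdapi/v1/options endpoint. The option key names used below (fp8_storage, sd_vae_decode_method) are assumptions that may differ between webui versions; the same settings can be changed in the UI under Settings → Optimizations and Settings → VAE.

```python
# Sketch only: read and flip the FP8 and VAE-decode settings through the options API.
# The key names below ("fp8_storage", "sd_vae_decode_method") are assumptions and may
# differ between webui versions; check the GET response to see what your install exposes.
import requests

base = "http://127.0.0.1:7860"

current = requests.get(f"{base}/sdapi/v1/options").json()
print(current.get("fp8_storage"), current.get("sd_vae_decode_method"))

resp = requests.post(f"{base}/sdapi/v1/options", json={
    "fp8_storage": "Disable",         # assumed key/value for the FP8 weight setting
    "sd_vae_decode_method": "TAESD",  # assumed key/value for the latent-decode method
})
resp.raise_for_status()
```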
