[BUG] Can't open inference server #354
Comments
I think you need a CUDA device...
Does that mean I can't run it without an Nvidia graphics card?
OK. I set the CUDA_PATH environment variable to "C:\Users\username\Downloads\fish-speech\fishenv\env\lib\site-packages\triton\backends\nvidia". Triton no longer shows that error. However, later I saw this error:
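For reference, an environment variable like this can also be set for the current process before launching the server. A minimal sketch, assuming the path from this report (it will differ on other machines):

```python
import os

# Point Triton's CUDA lookup at the bundled backend directory.
# This exact path comes from this bug report; adjust it for your install.
os.environ["CUDA_PATH"] = (
    r"C:\Users\username\Downloads\fish-speech\fishenv\env"
    r"\lib\site-packages\triton\backends\nvidia"
)
print(os.environ["CUDA_PATH"])
```

Setting it in-process like this only affects the launched server and its children, unlike a system-wide environment variable.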
I guess it does require an Nvidia graphics card. Can you confirm? If so, maybe you can use this issue to improve the requirements section of the documentation and mention this explicitly.
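To answer the question of whether a CUDA-capable GPU is even visible on a machine, one low-level check is to try loading the CUDA driver library itself. This is a hedged sketch using only the standard library; the library names are the standard Nvidia driver names, not anything fish-speech-specific, and a positive result does not guarantee that PyTorch or Triton were built with CUDA support:

```python
import ctypes

def has_cuda_driver() -> bool:
    """Return True if the Nvidia CUDA driver library can be loaded."""
    candidates = [
        "nvcuda.dll",    # Windows driver library
        "libcuda.so.1",  # Linux driver library
    ]
    for name in candidates:
        try:
            ctypes.CDLL(name)
            return True
        except OSError:
            continue
    return False

print(has_cuda_driver())
```

On a Windows 11 machine with only Intel integrated graphics, as in this report, this should print `False`.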
Describe the bug
Can't open inference server.
To Reproduce
Expected behavior
The inference web UI should be shown at http://127.0.0.1:7860
Actual behavior
No inference web UI is shown. The inference service is not running.
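One quick way to confirm whether the inference server is actually listening on the expected address is a plain TCP probe. A minimal sketch using only the standard library; the host and port are taken from the expected URL above:

```python
import socket

def is_listening(host: str = "127.0.0.1", port: int = 7860,
                 timeout: float = 1.0) -> bool:
    """Return True if something accepts TCP connections at host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        # connect_ex returns 0 on success instead of raising.
        return sock.connect_ex((host, port)) == 0

print(is_listening())  # False while the server is down
```

This distinguishes "the server never bound the port" from browser-side problems such as caching or a wrong URL.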
Screenshots / log
It seems Triton complains that it can't find the CUDA lib. However, according to Nvidia's doc (https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/getting_started/quickstart.html#run-on-cpu-only-system), Triton should be able to run without a GPU as well.
The Python stacktrace appears twice in the log below. The first occurs when starting the web UI, and I can see that webpage. But when I go to the "Inference" tab on the page and click "Open inference server", it shows the same stacktrace and the inference server webpage is not shown.
Additional context
Windows 11, Intel integrated graphics card, latest master code.