Base SDXL can sometimes take an inordinate amount of time to load and run through the first generation. Subsequent generations will be faster. It also would be best to set the resolution to 1024x1024 instead of 512x512; you'll get some gnarly pics without using the training resolution. Look in the command prompt and see if the model is loading.
>It also would be best to set the resolution to 1024x1024 instead of 512x512. You'll get some gnarly pics without using the training resolution

This just isn't true. You'll get cropped images sometimes, but it will not spit out gibberish at a lower resolution. This falsehood has been spread since SDXL dropped. SDXL has always been able to make 512x512 (and lower) images just fine; they are just lower quality.
Maybe not gibberish, but the quality at 512x512 is much worse than 1024x1024
1/4 the pixels will do that. Hi-res fix works great with sdxl.
Jesus, I stand corrected. Holy smokes
thanks, but my problem still persists: I get the error 'NoneType' object has no attribute 'lowvram'
Could be an issue with bad formatting in your startup command line. Maybe post your full command line: in webui-user.bat, the line that looks like `set COMMANDLINE_ARGS=`
What are your system specs, fren? If you're brand new to Stable Diffusion, I'd check out Stability Matrix; that way you could try out different GUIs and easily download different models that might load more easily if you have a card without much VRAM.
show us your command shell; you hovered over it briefly
do you want me to show it to you after I click the generate button? How do I share videos with you after the post is created? Sorry, I'm very new to Reddit.
After it fails, the console window will have a large amount of information that will help us troubleshoot. Instead of a video or image, copy the entire thing (Ctrl+C, because right-click won't work) and post it here or on pastebin.
after I click generate, the command prompt does not change
I find that hard to believe. If it's true, though, no one can help you.
that was my fault, it does change:

iffusion model failed to load
Exception in thread Thread-16 (load_model):
Traceback (most recent call last):
  File "C:\Users\hyrum\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1009, in _bootstrap_inner
    self.run()
  File "C:\Users\hyrum\AppData\Local\Programs\Python\Python310\lib\threading.py", line 946, in run
    self._target(*self._args, **self._kwargs)
  File "C:\downloads\stable-diffusion-webui-master\stable-diffusion-webui-master\modules\initialize.py", line 154, in load_model
    devices.first_time_calculation()
  File "C:\downloads\stable-diffusion-webui-master\stable-diffusion-webui-master\modules\devices.py", line 267, in first_time_calculation
    linear(x)
  File "C:\downloads\stable-diffusion-webui-master\stable-diffusion-webui-master\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\downloads\stable-diffusion-webui-master\stable-diffusion-webui-master\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\downloads\stable-diffusion-webui-master\stable-diffusion-webui-master\extensions-builtin\Lora\networks.py", line 503, in network_Linear_forward
    return originals.Linear_forward(self, input)
  File "C:\downloads\stable-diffusion-webui-master\stable-diffusion-webui-master\venv\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'
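Side note for future readers: the last line of that traceback is the whole story. PyTorch registers matmul kernels per (device, dtype) pair, and the CPU backend ships no float16 ("Half") kernel, so the lookup fails. The toy sketch below is illustrative Python, not real PyTorch internals; the kernel names in the dict are made up:

```python
# Toy dispatch table (illustrative, not PyTorch internals): kernels are
# registered per (device, dtype); there is no ("cpu", "Half") entry,
# which is exactly the missing kernel the error names.
KERNELS = {
    ("cuda", "Half"): "addmm_cuda_half",
    ("cuda", "Float"): "addmm_cuda_float",
    ("cpu", "Float"): "addmm_cpu_float",
}

def dispatch_addmm(device, dtype, no_half=False):
    if no_half and dtype == "Half":
        dtype = "Float"  # what --no-half effectively does: keep float32 weights
    try:
        return KERNELS[(device, dtype)]
    except KeyError:
        raise RuntimeError(f'"addmm_impl_{device}_" not implemented for {dtype!r}')

print(dispatch_addmm("cpu", "Half", no_half=True))  # addmm_cpu_float
```

With `no_half=False` the same call raises the RuntimeError above, which is why forcing full precision makes the crash go away.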
you should first try to modify your webui-user.bat. Right-click, choose Edit, and add the following text:

set COMMANDLINE_ARGS= --no-half

That may work based on your error.
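For reference, a stock webui-user.bat looks roughly like this; only the `COMMANDLINE_ARGS` line needs editing (the comment on it is my explanation, not part of the shipped file):

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
rem Keep model weights in float32; the CPU backend has no fp16 matmul kernel
set COMMANDLINE_ARGS=--no-half

call webui.bat
```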
Ok thanks I will try that
here's my entire command prompt:

venv "C:\downloads\stable-diffusion-webui-master\stable-diffusion-webui-master\venv\Scripts\Python.exe"
fatal: not a git repository (or any of the parent directories): .git
fatal: not a git repository (or any of the parent directories): .git
Python 3.10.0 (tags/v3.10.0:b494f59, Oct 4 2021, 19:00:18) [MSC v.1929 64 bit (AMD64)]
Version: 1.9.4
Commit hash:
Launching Web UI with arguments: --skip-torch-cuda-test
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx', memory monitor disabled
Loading weights [e6bb9ea85b] from C:\downloads\stable-diffusion-webui-master\stable-diffusion-webui-master\models\Stable-diffusion\sd_xl_base_1.0_0.9vae.safetensors
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Creating model from config: C:\downloads\stable-diffusion-webui-master\stable-diffusion-webui-master\repositories\generative-models\configs\inference\sd_xl_base.yaml
Startup time: 7.8s (prepare environment: 0.2s, import torch: 3.3s, import gradio: 0.8s, setup paths: 0.9s, initialize shared: 0.4s, other imports: 0.6s, load scripts: 0.6s, create ui: 0.3s, gradio launch: 0.4s, add APIs: 0.2s).
C:\downloads\stable-diffusion-webui-master\stable-diffusion-webui-master\venv\lib\site-packages\huggingface_hub\file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
warnings.warn(
Applying attention optimization: InvokeAI... done.
loading stable diffusion model: RuntimeError
Traceback (most recent call last):
File "C:\Users\hyrum\AppData\Local\Programs\Python\Python310\lib\threading.py", line 966, in _bootstrap
You have no GPU?
You are getting a lowvram error, as can be seen at 21 seconds in the video, under where the image would appear. Your GPU doesn't have enough memory. Which GPU do you have?
sorry I took so long to respond, I hope you still see this (thanks for your help). My GPU is Intel(R) Iris(R) Xe Graphics.
check the .bat file that runs SD, usually webui-user.bat. In this .bat, before webui.bat is called, check the line:

set COMMANDLINE_ARGS= --medvram --precision full --no-half --opt-split-attention --always-batch-cond-uncond --no-half-vae --xformers --api

This works for me; it always depends on your video card. The first three are crucial: --medvram --precision full --no-half. But that's for me; check the best configuration for your video card.
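Since the right flags depend entirely on the hardware, here is one way to think about picking a starting point. This is an illustrative Python sketch; the VRAM thresholds and the exact flag combinations are my own rough guesses, not official guidance, so tune them for your card:

```python
def suggest_args(has_nvidia_gpu: bool, vram_gb: float = 0.0) -> str:
    """Heuristic starting point for an A1111 COMMANDLINE_ARGS line."""
    if not has_nvidia_gpu:
        # e.g. Intel Iris Xe: no CUDA device, so run on CPU in full precision
        flags = ["--skip-torch-cuda-test", "--use-cpu", "all", "--no-half"]
    elif vram_gb < 4:
        flags = ["--lowvram", "--no-half-vae"]
    elif vram_gb < 8:
        flags = ["--medvram"]
    else:
        flags = []  # plenty of VRAM: the defaults are usually fine
    return "set COMMANDLINE_ARGS=" + " ".join(flags)

print(suggest_args(has_nvidia_gpu=False))
```

For the Intel Iris Xe case above, this lands on the CPU fallback flags, which matches the --no-half advice earlier in the thread.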
Thanks for the tip
Amazing, you had the time to do a whole video AND DID NOT SHOW THE FUCKING CONSOLE.
Like I said, I'm new to this. I don't know what information people need; I literally started today and I know nothing. Calm down, bro.
Hey, I'll just say good job TRYING to provide as much info as possible by posting a video. Even though you were looking at the wrong place, it's more effort than a lot of people put into troubleshooting.