I'm getting CUDA out of memory the second time I try to generate an image; it works the first time. What could it be? #293
Unanswered
dilectiogames asked this question in Q&A
Replies: 0 comments
- I start the webui, then I set up the ControlNet depth preprocessor + the control_sd15_depth model.
- I add the image and the prompts (txt2img mode), set the resolution to 576x768, and use Hires. fix to upscale to 960x1280.
- I hit Generate and it creates the image correctly.
- But if I try to generate again, it fails with CUDA out of memory until I restart the whole webui.
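For what it's worth, here is a minimal sketch of the kind of cleanup one might try between generations. `free_vram` is a hypothetical helper, not part of webui or ControlNet, and it is only a best-effort workaround, not a confirmed fix:

```python
import gc


def free_vram():
    """Best-effort release of cached GPU memory between generations.

    Hypothetical helper: guarded so the sketch also runs on machines
    without torch or without a CUDA device.
    """
    gc.collect()  # drop Python-side references first
    try:
        import torch
        if torch.cuda.is_available():
            # Return cached-but-unused blocks to the driver so the next
            # allocation is less likely to hit fragmentation.
            torch.cuda.empty_cache()
    except ImportError:
        pass


free_vram()
```

Whether this helps depends on whether the webui is actually holding references to the previous generation's tensors; if it is, `empty_cache()` alone cannot reclaim them.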
Sorry for the mess of code, but I have no idea why it pasted like this.
As far as I know I have everything updated as of today, both webui and ControlNet (but this error is not new).
```
Traceback (most recent call last):
  File "E:\AI\stable-diffusion-webui\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "E:\AI\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "E:\AI\stable-diffusion-webui\modules\txt2img.py", line 56, in txt2img
    processed = process_images(p)
  File "E:\AI\stable-diffusion-webui\modules\processing.py", line 486, in process_images
    res = process_images_inner(p)
  File "E:\AI\stable-diffusion-webui\modules\processing.py", line 636, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "E:\AI\stable-diffusion-webui\modules\processing.py", line 889, in sample
    samples = self.sd_model.get_first_stage_encoding(self.sd_model.encode_first_stage(decoded_samples))
  File "E:\AI\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "E:\AI\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "E:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 830, in encode_first_stage
    return self.first_stage_model.encode(x)
  File "E:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\autoencoder.py", line 83, in encode
    h = self.encoder(x)
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "E:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\model.py", line 526, in forward
    h = self.down[i_level].block[i_block](hs[-1], temb)
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "E:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\model.py", line 138, in forward
    h = self.norm2(h)
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\normalization.py", line 273, in forward
    return F.group_norm(
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2528, in group_norm
    return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 600.00 MiB (GPU 0; 6.00 GiB total capacity; 3.70 GiB already allocated; 0 bytes free; 3.96 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
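Working through the numbers in the error message shows why the allocation fails, and illustrates the `max_split_size_mb` hint the message itself gives. The `512` value below is a guess, not a recommended setting:

```python
import os

# Figures taken from the OutOfMemoryError message above.
total_gib = 6.00      # GPU 0 total capacity
allocated_gib = 3.70  # already allocated by live tensors
reserved_gib = 3.96   # reserved in total by PyTorch
request_mib = 600.00  # size of the failed allocation

# Reserved-but-unallocated memory is what fragmentation can strand:
slack_mib = (reserved_gib - allocated_gib) * 1024
print(f"reserved but unallocated: {slack_mib:.0f} MiB, requested: {request_mib:.0f} MiB")
# The ~600 MiB request exceeds the ~266 MiB of reserved slack, so the
# allocator cannot satisfy it even before asking the driver for more.

# The error message suggests capping split sizes. Hypothetically, one
# could set this before torch initializes CUDA (value is a guess):
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"
```

`PYTORCH_CUDA_ALLOC_CONF` must be set in the environment before the first CUDA allocation (e.g. exported before launching webui), so setting it from inside an already-running session has no effect.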