Add multi-gpu support #5997


Draft · wants to merge 41 commits into main

Conversation

@lstein (Collaborator) commented Mar 20, 2024

Summary

This adds support for systems with multiple GPUs. On CUDA systems, it automatically detects when more than one GPU is present and configures the model cache and the session processor to take advantage of them, keeping track of which GPUs are busy and which are available, and rendering batches of images in parallel. It works at the session-processor level: each session is placed into a thread-safe queue that is monitored by multiple threads. Each thread reserves a GPU on entry, processes the entire invocation, and then releases the GPU for use by other pending requests.
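The reserve/process/release cycle can be sketched roughly like this (a minimal illustration only, not the PR's actual code; `GpuPool`, `worker`, and `session.run` are hypothetical names):

```python
import queue
import threading

class GpuPool:
    """Track which devices are free; reserve() blocks until one is available."""
    def __init__(self, devices):
        self._free = queue.Queue()
        for d in devices:
            self._free.put(d)

    def reserve(self):
        # Blocks the calling thread while all GPUs are busy.
        return self._free.get()

    def release(self, device):
        self._free.put(device)

def worker(sessions, pool):
    """One of several threads draining the thread-safe session queue."""
    while True:
        session = sessions.get()
        if session is None:  # sentinel: shut the worker down
            break
        device = pool.reserve()   # reserve a GPU at entry
        try:
            session.run(device)   # process the entire invocation on it
        finally:
            pool.release(device)  # release the GPU for other pending requests
```

With several such workers running, sessions queue up behind whichever GPUs are busy and batches render in parallel.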

This PR is no longer being maintained. Multi-GPU support can be found in a forked repository at: https://github.com/lstein/InvokeAI-MGPU

Demo

cinnamon-2024-04-16T152651-0400.webm

How it works

In addition to the changes in the session processor, this PR adds a few calls to the model manager's RAM cache to reserve and release GPUs in a thread-safe way, and extends the TorchDevice class to support dynamic device selection without changing its API. The PR also improves how models are moved from RAM to VRAM, modestly increasing load speed. During debugging, I discovered that uuid.uuid4() does not appear to be thread-safe on Windows (https://stackoverflow.com/questions/2759644/python-multiprocessing-doesnt-play-nicely-with-uuid-uuid4), and this was breaking the latent caching system. I worked around it by adding the current thread ID to the cache object's name.
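The thread-ID workaround can be illustrated like this (a sketch only; `unique_cache_name` is a hypothetical helper, not the PR's code):

```python
import threading
import uuid

def unique_cache_name(prefix: str = "latents") -> str:
    """Name a cache object so it stays unique even if uuid.uuid4()
    returns duplicate values across threads (as observed on Windows)."""
    # The current thread ID disambiguates any cross-thread uuid collision.
    return f"{prefix}-{threading.get_ident()}-{uuid.uuid4().hex}"
```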

There are two new options for the config file:

  • max_threads -- the maximum number of session-processing threads that can run at the same time. If not defined, this is set equal to the number of GPU devices.
  • devices -- a list of devices to use for acceleration. If not defined, this is dynamically calculated to use all CUDA GPUs found.

Example:

max_threads: 3
devices:
  - cuda:0
  - cuda:1
  - cuda:4

Note that it is not a problem if max_threads does not match the number of GPU devices (even on single-GPU systems), but there is no benefit to defining more threads than GPUs.
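The defaulting rule for max_threads amounts to something like this (a sketch under the assumptions above; `resolve_max_threads` is a hypothetical name, not the PR's code):

```python
def resolve_max_threads(configured, devices):
    """Default max_threads to the number of rendering devices.

    Extra threads beyond len(devices) are harmless: they simply block
    waiting for a free GPU and add no throughput.
    """
    return configured if configured is not None else len(devices)
```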

The code is currently tested and working using multiple threads on a 6-GPU Windows machine.

To test

First, buy yourself two RTX 4090s :-).

Seriously, though, the best thing to do is to ensure that this doesn't crash single-GPU systems. Exercise the linear and graph workflows. Try different models, LoRAs, IP adapters, upscalers, etc. Run a couple of large batches and make sure that they can be paused, resumed and cancelled as usual.

If you have access to a system that has an integrated GPU as well as a discrete one, you can test out the multi-GPU processing simply by queueing up a series of 2 or more generation jobs.

QA Instructions

Merge Plan

Squash merge when approved.

Checklist

  • The PR has a short but descriptive title
  • Tests added / updated
  • Documentation added / updated

@lstein lstein marked this pull request as draft March 20, 2024 03:31
@github-actions bot added the python, backend, services and python-tests labels Mar 20, 2024
@psychedelicious (Collaborator)

Lincoln, please stop tempting me to buy another RTX 4090.

@github-actions bot added the invocations and docs labels Mar 31, 2024
psychedelicious and others added 7 commits April 1, 2024 07:45
Should be waiting on the resume event instead of checking it in a loop
Prefer an early return/continue to reduce the indentation of the processor loop. Easier to read.

There are other ways to improve its structure but at first glance, they seem to involve changing the logic in scarier ways.
@github-actions github-actions bot added the api label Apr 1, 2024
@makemefeelgr8

@lstein You're my hero! Can you hide it behind a checkbox, a setting, or an env variable? Just to get this feature merged and keep @psychedelicious from worrying too much.

@psychedelicious (Collaborator)

@makemefeelgr8 Sorry, but it's not that simple. This change needs to wait until we can allocate resources for thorough testing.

@lstein force-pushed the lstein/feat/multi-gpu branch from b6e026b to 589a795 on June 3, 2024 01:29
@lstein (Collaborator, Author) commented Jun 3, 2024

Wheel of commit 589a795: InvokeAI-4.2.3-py3-none-any.whl.zip

@GoldenWRaft

This is awesome! I've been trying to find an AI interface that has multi-GPU support. I have two 3070s, and I can only use one at a time.

I would like to see this implemented in Invoke in the future.

@raldone01 commented Jun 14, 2024

I built Invoke from 589a7959c019bc56e23ad3d989d015e443ffa20b.
After InvokeAI is started, the first model I choose always works.
However, sometimes when I switch the model, generation fails with: Cannot copy out of meta tensor; no data!.
If I change the model back to the first one used after Invoke was started, I can generate again.
No other model will work until Invoke is restarted.

I have observed lots of these warnings:

invoke_ai-1  |   warnings.warn(f'for {key}: copying from a non-meta parameter in the checkpoint to a meta '
invoke_ai-1  | /opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py:2025: UserWarning: for text_model.final_layer_norm.bias: copying from a non-meta parameter in the checkpoint to a meta parameter in the current model, which is a no-op. (Did you mean to pass `assign=True` to assign items in the state dictionary to their corresponding key in the module instead of copying them in place?)
invoke_ai-1  |   warnings.warn(f'for {key}: copying from a non-meta parameter in the checkpoint to a meta '

The following is the final error:

invoke_ai-1  |   warnings.warn(f'for {key}: copying from a non-meta parameter in the checkpoint to a meta '
invoke_ai-1  | /opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py:2025: UserWarning: for text_model.final_layer_norm.bias: copying from a non-meta parameter in the checkpoint to a meta parameter in the current model, which is a no-op. (Did you mean to pass `assign=True` to assign items in the state dictionary to their corresponding key in the module instead of copying them in place?)
invoke_ai-1  |   warnings.warn(f'for {key}: copying from a non-meta parameter in the checkpoint to a meta '
invoke_ai-1  | [2024-06-14 14:03:35,323]::[InvokeAI]::ERROR --> Error while invoking session 8a779a60-6986-428b-b2df-89dbf9171c09, invocation 80756805-3038-404e-b40c-bb436e6620b2 (compel): Cannot copy out of meta tensor; no data!
invoke_ai-1  | [2024-06-14 14:03:35,323]::[InvokeAI]::ERROR --> Traceback (most recent call last):
invoke_ai-1  |   File "/opt/invokeai/invokeai/app/services/session_processor/session_processor_default.py", line 135, in run_node
invoke_ai-1  |     output = invocation.invoke_internal(context=context, services=self._services)
invoke_ai-1  |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
invoke_ai-1  |   File "/opt/invokeai/invokeai/app/invocations/baseinvocation.py", line 289, in invoke_internal
invoke_ai-1  |     output = self.invoke(context)
invoke_ai-1  |              ^^^^^^^^^^^^^^^^^^^^
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
invoke_ai-1  |     return func(*args, **kwargs)
invoke_ai-1  |            ^^^^^^^^^^^^^^^^^^^^^
invoke_ai-1  |   File "/opt/invokeai/invokeai/app/invocations/compel.py", line 82, in invoke
invoke_ai-1  |     with (
invoke_ai-1  |   File "/opt/invokeai/invokeai/backend/model_manager/load/load_base.py", line 31, in __enter__
invoke_ai-1  |     return self._locker.lock()
invoke_ai-1  |            ^^^^^^^^^^^^^^^^^^^
invoke_ai-1  |   File "/opt/invokeai/invokeai/backend/model_manager/load/model_cache/model_locker.py", line 63, in lock
invoke_ai-1  |     model_in_gpu.to(self._execution_device)
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/transformers/modeling_utils.py", line 2724, in to
invoke_ai-1  |     return super().to(*args, **kwargs)
invoke_ai-1  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1152, in to
invoke_ai-1  |     return self._apply(convert)
invoke_ai-1  |            ^^^^^^^^^^^^^^^^^^^^
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 802, in _apply
invoke_ai-1  |     module._apply(fn)
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 802, in _apply
invoke_ai-1  |     module._apply(fn)
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 802, in _apply
invoke_ai-1  |     module._apply(fn)
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 825, in _apply
invoke_ai-1  |     param_applied = fn(param)
invoke_ai-1  |                     ^^^^^^^^^
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1150, in convert
invoke_ai-1  |     return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
invoke_ai-1  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
invoke_ai-1  | NotImplementedError: Cannot copy out of meta tensor; no data!
invoke_ai-1  | 
invoke_ai-1  | [2024-06-14 14:03:35,474]::[uvicorn.access]::INFO --> 172.24.13.33:13906 - "GET /api/v1/images/i/e32c9a5e-25fb-4da6-a119-ac10ac0b6302.png/thumbnail HTTP/1.1" 200
invoke_ai-1  | [2024-06-14 14:03:35,527]::[InvokeAI]::INFO --> Graph stats: 8a779a60-6986-428b-b2df-89dbf9171c09
invoke_ai-1  |                           Node   Calls   Seconds  VRAM Used
invoke_ai-1  |              main_model_loader       1    0.003s     1.864G
invoke_ai-1  |                      clip_skip       1    0.001s     1.864G
invoke_ai-1  |                         compel       1    0.183s     1.934G
invoke_ai-1  | TOTAL GRAPH EXECUTION TIME:   0.187s
invoke_ai-1  | TOTAL GRAPH WALL TIME:   0.194s
invoke_ai-1  | RAM used by InvokeAI process: 7.34G (+0.007G)
invoke_ai-1  | RAM used to load models: 0.23G
invoke_ai-1  | VRAM in use: 1.864G
invoke_ai-1  | RAM cache statistics:
invoke_ai-1  |    Model cache hits: 2
invoke_ai-1  |    Model cache misses: 2
invoke_ai-1  |    Models cached: 7
invoke_ai-1  |    Models cleared from cache: 0
invoke_ai-1  |    Cache high water mark: 2.22/64.00G
invoke_ai-1  | 
invoke_ai-1  | [2024-06-14 14:03:35,527]::[ModelManagerService]::INFO --> Released torch device cuda:0
invoke_ai-1  | [2024-06-14 14:03:35,547]::[ModelManagerService]::INFO --> Reserved torch device cuda:0 for execution thread 138839213475520
invoke_ai-1  | [2024-06-14 14:03:35,606]::[uvicorn.access]::INFO --> 172.24.13.33:13906 - "GET /api/v1/images/i/e32c9a5e-25fb-4da6-a119-ac10ac0b6302.png/metadata HTTP/1.1" 200
invoke_ai-1  | [2024-06-14 14:03:35,704]::[uvicorn.access]::INFO --> 172.24.13.33:13906 - "GET /api/v1/images/i/ef42ce9d-6f05-41ac-95aa-48a141b73544.png HTTP/1.1" 200
invoke_ai-1  | [2024-06-14 14:03:35,724]::[InvokeAI]::INFO --> Graph stats: 85531e5f-4773-496f-8d0a-f3258030ca5c
invoke_ai-1  |                           Node   Calls   Seconds  VRAM Used
invoke_ai-1  |              main_model_loader       1    0.001s     0.000G
invoke_ai-1  |                      clip_skip       1    0.001s     0.000G
invoke_ai-1  |                         compel       2    2.137s     0.314G
invoke_ai-1  |                        collect       2    0.001s     0.314G
invoke_ai-1  |                          noise       1    0.009s     0.314G
invoke_ai-1  |                denoise_latents       1   29.703s     1.864G
invoke_ai-1  |                  core_metadata       1    0.001s     1.864G
invoke_ai-1  |                            l2i       1    2.092s     1.934G
invoke_ai-1  | TOTAL GRAPH EXECUTION TIME:  33.945s
invoke_ai-1  | TOTAL GRAPH WALL TIME:  33.962s
invoke_ai-1  | RAM used by InvokeAI process: 7.34G (+6.567G)
invoke_ai-1  | RAM used to load models: 0.39G
invoke_ai-1  | VRAM in use: 1.864G
invoke_ai-1  | RAM cache statistics:
invoke_ai-1  |    Model cache hits: 6
invoke_ai-1  |    Model cache misses: 5
invoke_ai-1  |    Models cached: 5
invoke_ai-1  |    Models cleared from cache: 0
invoke_ai-1  |    Cache high water mark: 1.99/64.00G
invoke_ai-1  | 
invoke_ai-1  | [2024-06-14 14:03:35,725]::[ModelManagerService]::INFO --> Released torch device cuda:1
invoke_ai-1  | [2024-06-14 14:03:35,752]::[uvicorn.access]::INFO --> 172.24.13.33:13906 - "GET /api/v1/images/i/ef42ce9d-6f05-41ac-95aa-48a141b73544.png/full HTTP/1.1" 200
invoke_ai-1  | [2024-06-14 14:03:35,807]::[InvokeAI]::ERROR --> Error while invoking session e3567ef0-81cc-4b0f-bd71-3c801bca7f62, invocation e98b1208-a9d0-4b7e-a5c7-6a93586769e4 (compel): Cannot copy out of meta tensor; no data!
invoke_ai-1  | [2024-06-14 14:03:35,808]::[InvokeAI]::ERROR --> Traceback (most recent call last):
invoke_ai-1  |   File "/opt/invokeai/invokeai/app/services/session_processor/session_processor_default.py", line 135, in run_node
invoke_ai-1  |     output = invocation.invoke_internal(context=context, services=self._services)
invoke_ai-1  |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
invoke_ai-1  |   File "/opt/invokeai/invokeai/app/invocations/baseinvocation.py", line 289, in invoke_internal
invoke_ai-1  |     output = self.invoke(context)
invoke_ai-1  |              ^^^^^^^^^^^^^^^^^^^^
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
invoke_ai-1  |     return func(*args, **kwargs)
invoke_ai-1  |            ^^^^^^^^^^^^^^^^^^^^^
invoke_ai-1  |   File "/opt/invokeai/invokeai/app/invocations/compel.py", line 82, in invoke
invoke_ai-1  |     with (
invoke_ai-1  |   File "/opt/invokeai/invokeai/backend/model_manager/load/load_base.py", line 31, in __enter__
invoke_ai-1  |     return self._locker.lock()
invoke_ai-1  |            ^^^^^^^^^^^^^^^^^^^
invoke_ai-1  |   File "/opt/invokeai/invokeai/backend/model_manager/load/model_cache/model_locker.py", line 63, in lock
invoke_ai-1  |     model_in_gpu.to(self._execution_device)
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/transformers/modeling_utils.py", line 2724, in to
invoke_ai-1  |     return super().to(*args, **kwargs)
invoke_ai-1  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1152, in to
invoke_ai-1  |     return self._apply(convert)
invoke_ai-1  |            ^^^^^^^^^^^^^^^^^^^^
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 802, in _apply
invoke_ai-1  |     module._apply(fn)
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 802, in _apply
invoke_ai-1  |     module._apply(fn)
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 802, in _apply
invoke_ai-1  |     module._apply(fn)
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 825, in _apply
invoke_ai-1  |     param_applied = fn(param)
invoke_ai-1  |                     ^^^^^^^^^
invoke_ai-1  |   File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1150, in convert
invoke_ai-1  |     return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
invoke_ai-1  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
invoke_ai-1  | NotImplementedError: Cannot copy out of meta tensor; no data!
invoke_ai-1  | 
invoke_ai-1  | [2024-06-14 14:03:35,862]::[uvicorn.access]::INFO --> 172.24.13.33:13906 - "GET /api/v1/images/i/ef42ce9d-6f05-41ac-95aa-48a141b73544.png/thumbnail HTTP/1.1" 200
invoke_ai-1  | [2024-06-14 14:03:35,955]::[InvokeAI]::INFO --> Graph stats: e3567ef0-81cc-4b0f-bd71-3c801bca7f62
invoke_ai-1  |                           Node   Calls   Seconds  VRAM Used
invoke_ai-1  |              main_model_loader       1    0.014s     1.864G
invoke_ai-1  |                      clip_skip       1    0.001s     1.864G
invoke_ai-1  |                         compel       1    0.193s     1.934G
invoke_ai-1  | TOTAL GRAPH EXECUTION TIME:   0.209s
invoke_ai-1  | TOTAL GRAPH WALL TIME:   0.212s
invoke_ai-1  | RAM used by InvokeAI process: 7.34G (+0.000G)
invoke_ai-1  | RAM used to load models: 0.23G
invoke_ai-1  | VRAM in use: 1.864G
invoke_ai-1  | RAM cache statistics:
invoke_ai-1  |    Model cache hits: 2
invoke_ai-1  |    Model cache misses: 0
invoke_ai-1  |    Models cached: 7
invoke_ai-1  |    Models cleared from cache: 0
invoke_ai-1  |    Cache high water mark: 2.22/64.00G
invoke_ai-1  | 
invoke_ai-1  | [2024-06-14 14:03:35,955]::[ModelManagerService]::INFO --> Released torch device cuda:0
invoke_ai-1  | [2024-06-14 14:03:36,046]::[uvicorn.access]::INFO --> 172.24.13.33:13906 - "GET /api/v1/images/i/ef42ce9d-6f05-41ac-95aa-48a141b73544.png/metadata HTTP/1.1" 200

I have two P40s with 24 GB of VRAM each. My server has 250 GiB of RAM with lots of free space.

All models and switching work properly on the main branch.

EDIT after further testing:

To test the issue I switched the model between every invoke.
I also switched between sd-1 and sdxl.
I only created 512x512 images to avoid the high-res code.

I did not encounter the above error when I only queued up single invokes.
If the batch size is 2 or more, the error occurs.

Also, the GPU devices are not released properly when the above error occurs.

@lstein lstein requested a review from ebr as a code owner June 16, 2024 23:50
@raldone01 commented Jun 17, 2024

When the invokeai.yaml is migrated from 4.0.1 to 4.0.2, all user settings seem to be cleared.
(NotImplementedError: Cannot copy out of meta tensor; no data! still occurs when switching models with 7088d5610b6a7eb192f10842596d8d8f506757f9 on my machine.)

I have also just encountered the following error:

[2024-06-18 07:25:57,265]::[InvokeAI]::ERROR --> Error while invoking session dfb34112-6c21-4ee3-a14b-51728be42953, invocation 031544f1-4918-4a76-b7e4-584ef0dd8edc (esrgan): PytorchStreamReader failed reading zip archive: failed finding central directory
[2024-06-18 07:25:57,265]::[InvokeAI]::ERROR --> Traceback (most recent call last):
  File "/opt/invokeai/invokeai/app/services/session_processor/session_processor_default.py", line 135, in run_node
    output = invocation.invoke_internal(context=context, services=self._services)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/invokeai/invokeai/app/invocations/baseinvocation.py", line 289, in invoke_internal
    output = self.invoke(context)
             ^^^^^^^^^^^^^^^^^^^^
  File "/opt/invokeai/invokeai/app/invocations/upscale.py", line 105, in invoke
    upscaler = RealESRGAN(
               ^^^^^^^^^^^
  File "/opt/invokeai/invokeai/backend/image_util/realesrgan/realesrgan.py", line 70, in __init__
    loadnet = torch.load(model_path, map_location=torch.device("cpu"))
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/serialization.py", line 1005, in load
    with _open_zipfile_reader(opened_file) as opened_zipfile:
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/serialization.py", line 457, in __init__
    super().__init__(torch._C.PyTorchFileReader(name_or_buffer))
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory

It happened when I queued up many single invokes on different models at 1024x1024 with sd-1 models. The above error happens only rarely; most of the time it works. Something about the concurrent access to the upscaling models might not be 100% thread-safe.
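If the root cause really is concurrent reads of the same checkpoint file, one common mitigation is to serialize the loads behind a lock (a sketch only, not code from this PR; `load_checkpoint_threadsafe` is a hypothetical helper, and `load_fn` stands in for a loader such as torch.load):

```python
import threading

_load_lock = threading.Lock()

def load_checkpoint_threadsafe(load_fn, path):
    """Run load_fn(path) while holding a global lock so that
    concurrent threads never read the same checkpoint file at once."""
    with _load_lock:
        return load_fn(path)
```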

@lstein (Collaborator, Author) commented Jun 23, 2024

@raldone01 Thank you so much for giving the PR a try and for your valuable feedback. I think I know where the meta tensor bug is occurring and should have a fix soon.

@lstein (Collaborator, Author) commented Jun 23, 2024

@raldone01 I've fixed what I believe to be the bug with changing models. Unfortunately I don't have access to a multi-GPU system at the moment and have only tested it in a single-GPU environment. Give it a whirl and let me know how it goes.

@raldone01 commented Jun 23, 2024

I tested your branch again.
Model switching still sometimes errors out with Cannot copy out of meta tensor; no data!, but Invoke can now recover without having to restart the server.

Some new errors also appeared:

  • RuntimeError: Input type (float) and bias type (c10::Half) should be the same
  • Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1! (when checking argument for argument index in method wrapper_CUDA__index_select)
Full Logs
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.

0it [00:00, ?it/s]
0it [00:00, ?it/s]
[2024-06-23 18:33:54,188]::[InvokeAI]::INFO --> Patchmatch initialized
[2024-06-23 18:33:55,493]::[InvokeAI]::INFO --> Using torch device: Tesla P40
[2024-06-23 18:33:55,699]::[InvokeAI]::INFO --> cuDNN version: 8902
[2024-06-23 18:33:55,719]::[uvicorn.error]::INFO --> Started server process [7]
[2024-06-23 18:33:55,719]::[uvicorn.error]::INFO --> Waiting for application startup.
[2024-06-23 18:33:55,720]::[InvokeAI]::INFO --> InvokeAI version 4.2.4
[2024-06-23 18:33:55,720]::[InvokeAI]::INFO --> Root directory = /invokeai
[2024-06-23 18:33:55,720]::[InvokeAI]::INFO --> Initializing database at /invokeai/databases/invokeai.db
[2024-06-23 18:33:55,723]::[ModelManagerService]::INFO --> Using rendering device(s): cuda:0, cuda:1
[2024-06-23 18:33:55,755]::[ModelInstallService]::WARNING --> Missing model file: image_encoder at any/clip_vision/image_encoder
[2024-06-23 18:33:56,079]::[InvokeAI]::INFO --> Cleaned database (freed 0.12MB)
[2024-06-23 18:33:56,079]::[uvicorn.error]::INFO --> Application startup complete.
[2024-06-23 18:33:56,080]::[uvicorn.error]::INFO --> Uvicorn running on http://0.0.0.0:9090 (Press CTRL+C to quit)
[2024-06-23 18:33:56,330]::[uvicorn.access]::INFO --> 172.24.13.33:51705 - "GET / HTTP/1.1" 200
[2024-06-23 18:33:56,455]::[uvicorn.access]::INFO --> 172.24.13.33:51705 - "GET /assets/index--24GrIy3.js HTTP/1.1" 200
[2024-06-23 18:33:56,885]::[uvicorn.access]::INFO --> 172.24.13.33:51705 - "GET /assets/ThemeLocaleProvider-C00Wxn4y.js HTTP/1.1" 200
[2024-06-23 18:33:56,887]::[uvicorn.access]::INFO --> 172.24.13.33:51706 - "GET /assets/ThemeLocaleProvider-DzjsLZSc.css HTTP/1.1" 200
[2024-06-23 18:33:56,904]::[uvicorn.access]::INFO --> 172.24.13.33:51705 - "GET /assets/images/invoke-favicon.svg HTTP/1.1" 200
[2024-06-23 18:33:57,012]::[uvicorn.access]::INFO --> 172.24.13.33:51705 - "GET /locales/en.json HTTP/1.1" 200
[2024-06-23 18:33:57,046]::[uvicorn.access]::INFO --> 172.24.13.33:51706 - "GET /assets/App-DEu4J2pT.css HTTP/1.1" 200
[2024-06-23 18:33:57,056]::[uvicorn.access]::INFO --> 172.24.13.33:51705 - "GET /assets/App-D-nTCJ_n.js HTTP/1.1" 200
[2024-06-23 18:33:57,298]::[uvicorn.access]::INFO --> 172.24.13.33:51705 - "GET /assets/inter-latin-wght-normal-BgVq2Tq4.woff2 HTTP/1.1" 200
[2024-06-23 18:33:57,312]::[uvicorn.access]::INFO --> 172.24.13.33:51705 - "GET /ws/socket.io/?EIO=4&transport=polling&t=P16O4og HTTP/1.1" 200
[2024-06-23 18:33:57,383]::[uvicorn.access]::INFO --> 172.24.13.33:51705 - "GET /api/v1/app/invocation_cache/status HTTP/1.1" 200
[2024-06-23 18:33:57,387]::[uvicorn.access]::INFO --> 172.24.13.33:51706 - "GET /api/v1/queue/default/list HTTP/1.1" 200
[2024-06-23 18:33:57,389]::[uvicorn.access]::INFO --> 172.24.13.33:51705 - "GET /api/v1/app/version HTTP/1.1" 200
[2024-06-23 18:33:57,451]::[uvicorn.access]::INFO --> 172.24.13.33:51706 - "GET /api/v2/models/ HTTP/1.1" 200
[2024-06-23 18:33:57,455]::[uvicorn.access]::INFO --> 172.24.13.33:51705 - "GET /api/v1/images/intermediates HTTP/1.1" 200
[2024-06-23 18:33:57,457]::[uvicorn.access]::INFO --> 172.24.13.33:51707 - "GET /api/v1/queue/default/status HTTP/1.1" 200
[2024-06-23 18:33:57,470]::[uvicorn.access]::INFO --> 172.24.13.33:51708 - "GET /api/v1/boards/?all=true HTTP/1.1" 200
/opt/venv/invokeai/lib/python3.11/site-packages/fastapi/openapi/utils.py:207: UserWarning: Duplicate Operation ID get_image_full for function get_image_full at /opt/invokeai/invokeai/app/api/routers/images.py
warnings.warn(message, stacklevel=1)
[2024-06-23 18:33:58,478]::[uvicorn.access]::INFO --> 172.24.13.33:51709 - "GET /openapi.json HTTP/1.1" 200
[2024-06-23 18:33:58,480]::[uvicorn.access]::INFO --> 172.24.13.33:51710 - "GET /api/v1/app/config HTTP/1.1" 200
[2024-06-23 18:33:58,482]::[uvicorn.access]::INFO --> 172.24.13.33:51706 - "POST /ws/socket.io/?EIO=4&transport=polling&t=P16O4ph&sid=erSe8MROHdcXee6eAAAA HTTP/1.1" 200
[2024-06-23 18:33:58,497]::[uvicorn.access]::INFO --> 172.24.13.33:51707 - "GET /api/v1/app/app_deps HTTP/1.1" 200
[2024-06-23 18:33:58,499]::[uvicorn.error]::INFO --> ('172.24.13.33', 51711) - "WebSocket /ws/socket.io/?EIO=4&transport=websocket&sid=erSe8MROHdcXee6eAAAA" [accepted]
[2024-06-23 18:33:58,499]::[uvicorn.access]::INFO --> 172.24.13.33:51705 - "GET /ws/socket.io/?EIO=4&transport=polling&t=P16O4ph.0&sid=erSe8MROHdcXee6eAAAA HTTP/1.1" 200
[2024-06-23 18:33:58,500]::[uvicorn.error]::INFO --> connection open
[2024-06-23 18:33:58,562]::[uvicorn.access]::INFO --> 172.24.13.33:51705 - "GET /ws/socket.io/?EIO=4&transport=polling&t=P16O56L&sid=erSe8MROHdcXee6eAAAA HTTP/1.1" 200
[2024-06-23 18:33:58,570]::[uvicorn.access]::INFO --> 172.24.13.33:51705 - "POST /ws/socket.io/?EIO=4&transport=polling&t=P16O56N&sid=erSe8MROHdcXee6eAAAA HTTP/1.1" 200
[2024-06-23 18:33:58,584]::[uvicorn.access]::INFO --> 172.24.13.33:51705 - "GET /ws/socket.io/?EIO=4&transport=polling&t=P16O56d&sid=erSe8MROHdcXee6eAAAA HTTP/1.1" 200
[2024-06-23 18:33:58,585]::[uvicorn.access]::INFO --> 172.24.13.33:51706 - "POST /ws/socket.io/?EIO=4&transport=polling&t=P16O56d.0&sid=erSe8MROHdcXee6eAAAA HTTP/1.1" 200
[2024-06-23 18:34:38,733]::[uvicorn.error]::INFO --> Shutting down
[2024-06-23 18:34:38,734]::[uvicorn.error]::INFO --> connection closed
[2024-06-23 18:34:38,834]::[uvicorn.error]::INFO --> Waiting for application shutdown.
[2024-06-23 18:34:39,739]::[ModelInstallService]::INFO --> Installer thread 128739316860608 exiting
[2024-06-23 18:34:39,774]::[uvicorn.error]::INFO --> Application shutdown complete.
[2024-06-23 18:34:39,774]::[uvicorn.error]::INFO --> Finished server process [7]
[2024-06-23 18:34:51,360]::[InvokeAI]::INFO --> Patchmatch initialized
[2024-06-23 18:34:52,562]::[InvokeAI]::INFO --> Using torch device: Tesla P40
[2024-06-23 18:34:52,771]::[InvokeAI]::ERROR --> Can't start `--dev_reload` because jurigged is not found; `pip install -e ".[dev]"` to include development dependencies.
Traceback (most recent call last):
File "/opt/invokeai/invokeai/app/api_app.py", line 172, in invoke_api
  import jurigged
ModuleNotFoundError: No module named 'jurigged'
[2024-06-23 18:34:52,773]::[InvokeAI]::INFO --> cuDNN version: 8902
[2024-06-23 18:34:52,791]::[uvicorn.error]::INFO --> Started server process [7]
[2024-06-23 18:34:52,791]::[uvicorn.error]::INFO --> Waiting for application startup.
[2024-06-23 18:34:52,791]::[InvokeAI]::INFO --> InvokeAI version 4.2.4
[2024-06-23 18:34:52,791]::[InvokeAI]::INFO --> Root directory = /invokeai
[2024-06-23 18:34:52,792]::[InvokeAI]::INFO --> Initializing database at /invokeai/databases/invokeai.db
[2024-06-23 18:34:52,795]::[ModelManagerService]::INFO --> Using rendering device(s): cuda:0, cuda:1
[2024-06-23 18:34:52,826]::[ModelInstallService]::WARNING --> Missing model file: image_encoder at any/clip_vision/image_encoder
[2024-06-23 18:34:53,113]::[uvicorn.error]::INFO --> Application startup complete.
[2024-06-23 18:34:53,113]::[uvicorn.error]::INFO --> Uvicorn running on http://0.0.0.0:9090 (Press CTRL+C to quit)
[2024-06-23 18:34:53,329]::[uvicorn.access]::INFO --> 172.24.13.33:51728 - "GET / HTTP/1.1" 200
[2024-06-23 18:34:53,387]::[uvicorn.access]::INFO --> 172.24.13.33:51728 - "GET /assets/index--24GrIy3.js HTTP/1.1" 200
[2024-06-23 18:34:53,790]::[uvicorn.access]::INFO --> 172.24.13.33:51727 - "GET /assets/ThemeLocaleProvider-DzjsLZSc.css HTTP/1.1" 200
[2024-06-23 18:34:53,791]::[uvicorn.access]::INFO --> 172.24.13.33:51728 - "GET /assets/ThemeLocaleProvider-C00Wxn4y.js HTTP/1.1" 200
[2024-06-23 18:34:53,893]::[uvicorn.access]::INFO --> 172.24.13.33:51728 - "GET /locales/en.json HTTP/1.1" 200
[2024-06-23 18:34:53,919]::[uvicorn.access]::INFO --> 172.24.13.33:51727 - "GET /assets/App-DEu4J2pT.css HTTP/1.1" 200
[2024-06-23 18:34:53,929]::[uvicorn.access]::INFO --> 172.24.13.33:51728 - "GET /assets/App-D-nTCJ_n.js HTTP/1.1" 200
[2024-06-23 18:34:54,151]::[uvicorn.access]::INFO --> 172.24.13.33:51728 - "GET /assets/inter-latin-wght-normal-BgVq2Tq4.woff2 HTTP/1.1" 200
[2024-06-23 18:34:54,159]::[uvicorn.access]::INFO --> 172.24.13.33:51728 - "GET /ws/socket.io/?EIO=4&transport=polling&t=P16OIh0 HTTP/1.1" 200
[2024-06-23 18:34:54,210]::[uvicorn.access]::INFO --> 172.24.13.33:51728 - "GET /api/v1/app/version HTTP/1.1" 200
[2024-06-23 18:34:54,213]::[uvicorn.access]::INFO --> 172.24.13.33:51727 - "GET /api/v1/queue/default/status HTTP/1.1" 200
[2024-06-23 18:34:54,217]::[uvicorn.access]::INFO --> 172.24.13.33:51728 - "GET /api/v1/queue/default/list HTTP/1.1" 200
[2024-06-23 18:34:54,218]::[uvicorn.access]::INFO --> 172.24.13.33:51727 - "GET /api/v1/app/config HTTP/1.1" 200
[2024-06-23 18:34:54,221]::[uvicorn.access]::INFO --> 172.24.13.33:51729 - "GET /api/v1/app/invocation_cache/status HTTP/1.1" 200
[2024-06-23 18:34:54,240]::[uvicorn.access]::INFO --> 172.24.13.33:51730 - "GET /api/v1/boards/?all=true HTTP/1.1" 200
/opt/venv/invokeai/lib/python3.11/site-packages/fastapi/openapi/utils.py:207: UserWarning: Duplicate Operation ID get_image_full for function get_image_full at /opt/invokeai/invokeai/app/api/routers/images.py
warnings.warn(message, stacklevel=1)
[2024-06-23 18:34:55,195]::[uvicorn.access]::INFO --> 172.24.13.33:51731 - "GET /openapi.json HTTP/1.1" 200�[0m
[2024-06-23 18:34:55,230]::[uvicorn.access]::INFO --> 172.24.13.33:51732 - "GET /api/v2/models/ HTTP/1.1" 200�[0m
[2024-06-23 18:34:55,233]::[uvicorn.access]::INFO --> 172.24.13.33:51728 - "GET /api/v1/images/intermediates HTTP/1.1" 200�[0m
[2024-06-23 18:34:55,246]::[uvicorn.access]::INFO --> 172.24.13.33:51727 - "GET /api/v1/app/app_deps HTTP/1.1" 200�[0m
[2024-06-23 18:34:55,248]::[uvicorn.access]::INFO --> 172.24.13.33:51729 - "POST /ws/socket.io/?EIO=4&transport=polling&t=P16OIh-&sid=vknx2yxNLrM7QfzAAAAA HTTP/1.1" 200�[0m
[2024-06-23 18:34:55,249]::[uvicorn.access]::INFO --> 172.24.13.33:51730 - "GET /ws/socket.io/?EIO=4&transport=polling&t=P16OIh_&sid=vknx2yxNLrM7QfzAAAAA HTTP/1.1" 200�[0m
[2024-06-23 18:34:55,250]::[uvicorn.error]::INFO --> ('172.24.13.33', 51733) - "WebSocket /ws/socket.io/?EIO=4&transport=websocket&sid=vknx2yxNLrM7QfzAAAAA" [accepted]�[0m
[2024-06-23 18:34:55,251]::[uvicorn.error]::INFO --> connection open�[0m
[2024-06-23 18:34:55,443]::[uvicorn.access]::INFO --> 172.24.13.33:51728 - "GET /ws/socket.io/?EIO=4&transport=polling&t=P16OI_6&sid=vknx2yxNLrM7QfzAAAAA HTTP/1.1" 200�[0m
[2024-06-23 18:34:55,448]::[uvicorn.access]::INFO --> 172.24.13.33:51728 - "POST /ws/socket.io/?EIO=4&transport=polling&t=P16OI_8&sid=vknx2yxNLrM7QfzAAAAA HTTP/1.1" 200�[0m
[2024-06-23 18:34:55,463]::[uvicorn.access]::INFO --> 172.24.13.33:51728 - "GET /ws/socket.io/?EIO=4&transport=polling&t=P16OI_O&sid=vknx2yxNLrM7QfzAAAAA HTTP/1.1" 200�[0m
[2024-06-23 18:34:55,465]::[uvicorn.access]::INFO --> 172.24.13.33:51732 - "POST /ws/socket.io/?EIO=4&transport=polling&t=P16OI_O.0&sid=vknx2yxNLrM7QfzAAAAA HTTP/1.1" 200�[0m
[2024-06-23 18:35:13,738]::[uvicorn.access]::INFO --> 172.24.13.33:51738 - "GET /api/v1/images/?board_id=none&categories=general&is_intermediate=false&limit=0&offset=0 HTTP/1.1" 200
[... 58 similar board image-list and image GET requests at 18:35:13 omitted; all returned 200 ...]
[2024-06-23 18:35:15,984]::[uvicorn.access]::INFO --> 172.24.13.33:51745 - "GET /api/v1/images/?board_id=767b2ca6-cb69-4d37-8296-3557f73bdf36&categories=general&is_intermediate=false&limit=100&offset=0 HTTP/1.1" 200
[2024-06-23 18:35:16,530]::[uvicorn.access]::INFO --> 172.24.13.33:51745 - "GET /api/v1/images/i/9fe70033-7c45-4db1-9de6-2ab383c840f8.png/metadata HTTP/1.1" 200
[2024-06-23 18:35:22,827]::[uvicorn.access]::INFO --> 172.24.13.33:51747 - "GET /api/v1/images/i/e0b9b354-39b1-46cc-b8dd-d09fef54545e.png/metadata HTTP/1.1" 200
[2024-06-23 18:35:40,426]::[uvicorn.access]::INFO --> 172.24.13.33:51748 - "POST /api/v1/queue/default/enqueue_batch HTTP/1.1" 200
[2024-06-23 18:35:40,468]::[uvicorn.access]::INFO --> 172.24.13.33:51748 - "GET /api/v1/queue/default/status HTTP/1.1" 200
[2024-06-23 18:35:40,508]::[uvicorn.access]::INFO --> 172.24.13.33:51748 - "GET /api/v1/queue/default/list HTTP/1.1" 200
[2024-06-23 18:35:40,527]::[ModelManagerService]::INFO --> Reserved torch device cuda:0 for execution thread 132909407143616
[2024-06-23 18:35:40,736]::[ModelManagerService]::INFO --> Reserved torch device cuda:1 for execution thread 132909396657856
[2024-06-23 18:35:40,803]::[uvicorn.access]::INFO --> 172.24.13.33:51748 - "GET /assets/images/invoke-alert-favicon.svg HTTP/1.1" 200
[2024-06-23 18:35:46,940]::[uvicorn.access]::INFO --> 172.24.13.33:51750 - "POST /api/v1/queue/default/enqueue_batch HTTP/1.1" 200
[2024-06-23 18:35:47,036]::[uvicorn.access]::INFO --> 172.24.13.33:51750 - "GET /api/v1/queue/default/status HTTP/1.1" 200
[2024-06-23 18:35:47,103]::[uvicorn.access]::INFO --> 172.24.13.33:51752 - "GET /api/v1/queue/default/list HTTP/1.1" 200
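The "Reserved torch device" lines above show the reservation scheme described in the PR summary: each session-processor thread checks a device out of a thread-safe pool before running an invocation and returns it afterwards. A minimal sketch of that pattern, using Python's blocking `queue.Queue` (names such as `DevicePool` are illustrative, not the actual `ModelManagerService` API):

```python
import queue
import threading

class DevicePool:
    """Thread-safe pool of device names; reserve() blocks until one is free."""

    def __init__(self, devices):
        self._free = queue.Queue()
        for d in devices:
            self._free.put(d)

    def reserve(self):
        # Blocks the calling thread until a device is available,
        # so at most len(devices) invocations run at once.
        return self._free.get()

    def release(self, device):
        self._free.put(device)

def worker(pool, results, lock):
    device = pool.reserve()
    try:
        # Stand-in for executing the whole invocation on the reserved device.
        with lock:
            results.append(device)
    finally:
        pool.release(device)

if __name__ == "__main__":
    pool = DevicePool(["cuda:0", "cuda:1"])
    results, lock = [], threading.Lock()
    threads = [threading.Thread(target=worker, args=(pool, results, lock)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(sorted(set(results)))  # every job ran on one of the pooled devices
```

Reserving for the lifetime of an invocation (rather than per model load) is what keeps each session's tensors pinned to a single GPU.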
[2024-06-23 18:35:48,554]::[InvokeAI]::ERROR --> Error while invoking session 2325cf60-1525-43b3-93d5-60f2ee0b5d51, invocation c6b44ed5-e59c-44d7-8104-243781a28ca0 (compel): Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1! (when checking argument for argument index in method wrapper_CUDA__index_select)
[2024-06-23 18:35:48,554]::[InvokeAI]::ERROR --> Traceback (most recent call last):
File "/opt/invokeai/invokeai/app/services/session_processor/session_processor_default.py", line 135, in run_node
  output = invocation.invoke_internal(context=context, services=self._services)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/invokeai/invokeai/app/invocations/baseinvocation.py", line 289, in invoke_internal
  output = self.invoke(context)
           ^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
  return func(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^
File "/opt/invokeai/invokeai/app/invocations/compel.py", line 114, in invoke
  c, _options = compel.build_conditioning_tensor_for_conjunction(conjunction)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/compel/compel.py", line 186, in build_conditioning_tensor_for_conjunction
  this_conditioning, this_options = self.build_conditioning_tensor_for_prompt_object(p)
                                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/compel/compel.py", line 218, in build_conditioning_tensor_for_prompt_object
  return self._get_conditioning_for_flattened_prompt(prompt), {}
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/compel/compel.py", line 282, in _get_conditioning_for_flattened_prompt
  return self.conditioning_provider.get_embeddings_for_weighted_prompt_fragments(
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/compel/embeddings_provider.py", line 120, in get_embeddings_for_weighted_prompt_fragments
  base_embedding = self.build_weighted_embedding_tensor(tokens, per_token_weights, mask, device=device)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/compel/embeddings_provider.py", line 357, in build_weighted_embedding_tensor
  empty_z = self._encode_token_ids_to_embeddings(empty_token_ids)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/compel/embeddings_provider.py", line 390, in _encode_token_ids_to_embeddings
  text_encoder_output = self.text_encoder(token_ids,
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
  return self._call_impl(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
  return forward_call(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/transformers/models/clip/modeling_clip.py", line 807, in forward
  return self.text_model(
         ^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
  return self._call_impl(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
  return forward_call(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/transformers/models/clip/modeling_clip.py", line 699, in forward
  hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
  return self._call_impl(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
  return forward_call(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/transformers/models/clip/modeling_clip.py", line 219, in forward
  inputs_embeds = self.token_embedding(input_ids)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
  return self._call_impl(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
  return forward_call(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/sparse.py", line 163, in forward
  return F.embedding(
         ^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/functional.py", line 2237, in embedding
  return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1! (when checking argument for argument index in method wrapper_CUDA__index_select)
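The traceback above is the classic symptom of a cross-device mismatch: the token-id tensor was built on one GPU while the text-encoder weights sit on another, and `torch.embedding` requires both to share a device. A toy sketch of that invariant without needing CUDA (the `FakeTensor` class is a stand-in for `torch.Tensor`, not InvokeAI code):

```python
# Simulated same-device check mirroring what ops like torch.embedding enforce:
# the index tensor and the weight tensor must live on the same device.

class FakeTensor:
    def __init__(self, device):
        self.device = device

    def to(self, device):
        # Mirrors torch.Tensor.to(): returns a copy on the target device.
        return FakeTensor(device)

def embedding(weight, indices):
    if weight.device != indices.device:
        raise RuntimeError(
            f"Expected all tensors to be on the same device, but found "
            f"at least two devices, {weight.device} and {indices.device}!"
        )
    return FakeTensor(weight.device)

if __name__ == "__main__":
    weight = FakeTensor("cuda:0")  # encoder weights on the thread's reserved GPU
    tokens = FakeTensor("cuda:1")  # token ids accidentally built on the other GPU

    try:
        embedding(weight, tokens)  # reproduces the error in the log above
    except RuntimeError as e:
        print(e)

    # The fix: move inputs onto the model's device before the call.
    out = embedding(weight, tokens.to(weight.device))
    print(out.device)  # cuda:0
```

In the real code this means every tensor an invocation creates must target the device the thread reserved, rather than a globally cached default device.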
[2024-06-23 18:35:48,684]::[InvokeAI]::INFO --> Graph stats: 2325cf60-1525-43b3-93d5-60f2ee0b5d51
                        Node   Calls   Seconds  VRAM Used
           main_model_loader       1    0.002s     0.000G
                   clip_skip       1    0.002s     0.000G
                      compel       1    7.738s     0.687G
TOTAL GRAPH EXECUTION TIME:   7.742s
TOTAL GRAPH WALL TIME:   7.748s
RAM used by InvokeAI process: 2.38G (+1.600G)
RAM used to load models: 0.23G
VRAM in use: 0.687G
RAM cache statistics:
 Model cache hits: 3
 Model cache misses: 4
 Models cached: 2
 Models cleared from cache: 0
 Cache high water mark: 0.23/64.00G
[2024-06-23 18:35:48,684]::[ModelManagerService]::INFO --> Released torch device cuda:1
[2024-06-23 18:35:48,685]::[ModelManagerService]::INFO --> Reserved torch device cuda:1 for execution thread 132909396657856

[... interleaved tqdm progress output from two sessions denoising in parallel (100 steps each, ~3.4 it/s each; both complete 100/100) omitted; API requests logged during this window retained below ...]
[2024-06-23 18:36:02,833]::[uvicorn.access]::INFO --> 172.24.13.33:51753 - "GET /api/v2/models/i/e7ea11d3-d507-48ca-aaca-263dbf15d501 HTTP/1.1" 200
[2024-06-23 18:36:06,600]::[uvicorn.access]::INFO --> 172.24.13.33:51753 - "POST /api/v1/queue/default/enqueue_batch HTTP/1.1" 200
[2024-06-23 18:36:06,675]::[uvicorn.access]::INFO --> 172.24.13.33:51753 - "GET /api/v1/queue/default/status HTTP/1.1" 200
[2024-06-23 18:36:06,731]::[uvicorn.access]::INFO --> 172.24.13.33:51754 - "GET /api/v1/queue/default/list HTTP/1.1" 200
[2024-06-23 18:36:13,441]::[uvicorn.access]::INFO --> 172.24.13.33:51755 - "GET /api/v2/models/i/ab87673f-b021-45fd-acd0-4891522a1510 HTTP/1.1" 200
[2024-06-23 18:36:15,000]::[uvicorn.access]::INFO --> 172.24.13.33:51755 - "POST /api/v1/queue/default/enqueue_batch HTTP/1.1" 200
[2024-06-23 18:36:15,110]::[uvicorn.access]::INFO --> 172.24.13.33:51755 - "GET /api/v1/queue/default/status HTTP/1.1" 200
[2024-06-23 18:36:15,182]::[uvicorn.access]::INFO --> 172.24.13.33:51756 - "GET /api/v1/queue/default/list HTTP/1.1" 200
[2024-06-23 18:36:18,377]::[uvicorn.access]::INFO --> 172.24.13.33:51755 - "GET /api/v2/models/i/6f39b96c-c315-49d3-8c4f-c5672155285d HTTP/1.1" 200
[2024-06-23 18:36:19,266]::[uvicorn.access]::INFO --> 172.24.13.33:51755 - "POST /api/v1/queue/default/enqueue_batch HTTP/1.1" 200
[2024-06-23 18:36:19,339]::[uvicorn.access]::INFO --> 172.24.13.33:51755 - "GET /api/v1/queue/default/status HTTP/1.1" 200
[2024-06-23 18:36:19,370]::[uvicorn.access]::INFO --> 172.24.13.33:51756 - "GET /api/v1/queue/default/list HTTP/1.1" 200
[2024-06-23 18:36:23,743]::[uvicorn.access]::INFO --> 172.24.13.33:51755 - "POST /api/v1/queue/default/enqueue_batch HTTP/1.1" 200
[2024-06-23 18:36:23,845]::[uvicorn.access]::INFO --> 172.24.13.33:51755 - "GET /api/v1/queue/default/status HTTP/1.1" 200
[2024-06-23 18:36:23,908]::[uvicorn.access]::INFO --> 172.24.13.33:51756 - "GET /api/v1/queue/default/list HTTP/1.1" 200
[2024-06-23 18:36:29,221]::[uvicorn.access]::INFO --> 172.24.13.33:51758 - "GET /api/v2/models/i/85f4c348-b3f8-4381-8045-fa9c7084eafc HTTP/1.1" 200
[2024-06-23 18:36:30,125]::[uvicorn.access]::INFO --> 172.24.13.33:51758 - "POST /api/v1/queue/default/enqueue_batch HTTP/1.1" 200
[2024-06-23 18:36:30,204]::[uvicorn.access]::INFO --> 172.24.13.33:51758 - "GET /api/v1/queue/default/status HTTP/1.1" 200
[2024-06-23 18:36:30,257]::[uvicorn.access]::INFO --> 172.24.13.33:51759 - "GET /api/v1/queue/default/list HTTP/1.1" 200
[2024-06-23 18:36:34,988]::[uvicorn.access]::INFO --> 172.24.13.33:51759 - "GET /api/v1/images/i/063413b8-e1aa-456b-9294-ba83bdb37073.png HTTP/1.1" 200
[2024-06-23 18:36:35,453]::[uvicorn.access]::INFO --> 172.24.13.33:51759 - "POST /api/v1/queue/default/enqueue_batch HTTP/1.1" 200
[2024-06-23 18:36:35,609]::[uvicorn.access]::INFO --> 172.24.13.33:51759 - "GET /api/v1/queue/default/status HTTP/1.1" 200
[2024-06-23 18:36:35,662]::[uvicorn.access]::INFO --> 172.24.13.33:51760 - "GET /api/v1/queue/default/list HTTP/1.1" 200
Upscaling:   0% 0/4 [00:00<?, ?it/s]
[2024-06-23 18:36:37,589]::[InvokeAI]::ERROR --> Error while invoking session af8622ed-8dad-4d71-a4c8-a60f24439893, invocation 25a2cd3c-6004-4393-8292-b3baf3ba2b0b (esrgan): Input type (float) and bias type (c10::Half) should be the same
[2024-06-23 18:36:37,589]::[InvokeAI]::ERROR --> Traceback (most recent call last):
File "/opt/invokeai/invokeai/app/services/session_processor/session_processor_default.py", line 135, in run_node
  output = invocation.invoke_internal(context=context, services=self._services)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/invokeai/invokeai/app/invocations/baseinvocation.py", line 289, in invoke_internal
  output = self.invoke(context)
           ^^^^^^^^^^^^^^^^^^^^
File "/opt/invokeai/invokeai/app/invocations/upscale.py", line 110, in invoke
  upscaled_image = upscaler.upscale(cv2_image)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
  return func(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^
File "/opt/invokeai/invokeai/backend/image_util/realesrgan/realesrgan.py", line 231, in upscale
  self.tile_process()
File "/opt/invokeai/invokeai/backend/image_util/realesrgan/realesrgan.py", line 159, in tile_process
  output_tile = self.model(input_tile)
                ^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
  return self._call_impl(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
  return forward_call(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/invokeai/invokeai/backend/image_util/basicsr/rrdbnet_arch.py", line 118, in forward
  feat = self.conv_first(feat)
         ^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
  return self._call_impl(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
  return forward_call(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/conv.py", line 460, in forward
  return self._conv_forward(input, self.weight, self.bias)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/conv.py", line 456, in _conv_forward
  return F.conv2d(input, weight, bias, self.stride,
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Input type (float) and bias type (c10::Half) should be the same
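The esrgan failure above is the classic fp16/fp32 mismatch: the upscaler's weights are half precision while the input tile arrives as float32. A minimal sketch of the usual guard, casting the input to whatever dtype/device the model's weights use (`run_tile` is a hypothetical helper, not the actual RealESRGAN code):

```python
import torch
import torch.nn as nn

def run_tile(model: nn.Module, tile: torch.Tensor) -> torch.Tensor:
    # Match the input's dtype/device to the model's parameters before the
    # forward pass; a float32 tile fed to a half-precision model raises
    # "Input type (float) and bias type (c10::Half) should be the same".
    param = next(model.parameters())
    return model(tile.to(device=param.device, dtype=param.dtype))
```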
100% 100/100 [00:30<00:00,  3.32it/s]
[2024-06-23 18:36:37,739]::[InvokeAI]::INFO --> Graph stats: af8622ed-8dad-4d71-a4c8-a60f24439893
                        Node   Calls   Seconds  VRAM Used
           main_model_loader       1    0.076s     0.000G
                   clip_skip       1    0.022s     0.000G
                      compel       2    2.917s     0.314G
                     collect       2    0.001s     0.244G
                       noise       1    0.002s     0.244G
             denoise_latents       1   48.866s     2.313G
                         l2i       1    2.270s     2.902G
                      esrgan       1    2.720s     2.236G
TOTAL GRAPH EXECUTION TIME:  56.876s
TOTAL GRAPH WALL TIME:  56.967s
RAM used by InvokeAI process: 6.30G (+5.520G)
RAM used to load models: 2.22G
VRAM in use: 2.152G
RAM cache statistics:
 Model cache hits: 7
 Model cache misses: 4
 Models cached: 11
 Models cleared from cache: 0
 Cache high water mark: 3.82/64.00G
[2024-06-23 18:36:37,741]::[ModelManagerService]::INFO --> Released torch device cuda:0
[2024-06-23 18:36:37,741]::[ModelManagerService]::INFO --> Reserved torch device cuda:0 for execution thread 132909407143616
[2024-06-23 18:36:37,875]::[InvokeAI]::ERROR --> Error while invoking session 201bd03b-0296-4236-a1da-8b5590a0e90b, invocation dabdd2b1-d96b-4e31-81e2-7d9efcd81326 (denoise_latents): Cannot copy out of meta tensor; no data!
[2024-06-23 18:36:37,876]::[InvokeAI]::ERROR --> Traceback (most recent call last):
File "/opt/invokeai/invokeai/app/services/session_processor/session_processor_default.py", line 135, in run_node
  output = invocation.invoke_internal(context=context, services=self._services)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/invokeai/invokeai/app/invocations/baseinvocation.py", line 289, in invoke_internal
  output = self.invoke(context)
           ^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
  return func(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/contextlib.py", line 81, in inner
  return func(*args, **kwds)
         ^^^^^^^^^^^^^^^^^^^
File "/opt/invokeai/invokeai/app/invocations/denoise_latents.py", line 725, in invoke
  with (
File "/usr/lib/python3.11/contextlib.py", line 137, in __enter__
  return next(self.gen)
         ^^^^^^^^^^^^^^
File "/opt/invokeai/invokeai/backend/model_manager/load/load_base.py", line 77, in model_on_device
  locked_model = self._locker.lock()
                 ^^^^^^^^^^^^^^^^^^^
File "/opt/invokeai/invokeai/backend/model_manager/load/model_cache/model_locker.py", line 43, in lock
  self._cache.offload_unlocked_models(self._cache_entry.size)
File "/opt/invokeai/invokeai/backend/model_manager/load/model_cache/model_cache_default.py", line 301, in offload_unlocked_models
  self.move_model_to_device(cache_entry, self.storage_device)
File "/opt/invokeai/invokeai/backend/model_manager/load/model_cache/model_cache_default.py", line 356, in move_model_to_device
  raise e
File "/opt/invokeai/invokeai/backend/model_manager/load/model_cache/model_cache_default.py", line 352, in move_model_to_device
  cache_entry.model.to(target_device, non_blocking=True)
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1152, in to
  return self._apply(convert)
         ^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 802, in _apply
  module._apply(fn)
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 802, in _apply
  module._apply(fn)
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 825, in _apply
  param_applied = fn(param)
                  ^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1150, in convert
  return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
NotImplementedError: Cannot copy out of meta tensor; no data!
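The meta-tensor failure comes from `move_model_to_device` calling `.to(target_device, non_blocking=True)` on a model whose parameters were never materialized. A hedged sketch of the usual defense (`safe_to` is a hypothetical helper; `to_empty` only allocates storage on the target device, so real weights still have to be reloaded afterwards, e.g. via `load_state_dict`):

```python
import torch
import torch.nn as nn

def safe_to(model: nn.Module, device: torch.device) -> nn.Module:
    # A module whose parameters live on the "meta" device has no storage,
    # so .to() raises "Cannot copy out of meta tensor; no data!".
    # to_empty() allocates real (uninitialized) storage on the target device.
    if any(p.is_meta for p in model.parameters()):
        return model.to_empty(device=device)
    return model.to(device)
```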
[2024-06-23 18:36:38,206]::[InvokeAI]::INFO --> Graph stats: 201bd03b-0296-4236-a1da-8b5590a0e90b
                        Node   Calls   Seconds  VRAM Used
           main_model_loader       1    0.002s     2.152G
                   clip_skip       1    0.002s     2.152G
                      compel       2    0.001s     2.152G
                     collect       2    0.001s     2.152G
                       noise       1    0.003s     2.152G
             denoise_latents       1    0.064s     2.152G
TOTAL GRAPH EXECUTION TIME:   0.072s
TOTAL GRAPH WALL TIME:   0.080s
RAM used by InvokeAI process: 6.32G (+0.019G)
RAM used to load models: 1.60G
VRAM in use: 1.858G
RAM cache statistics:
 Model cache hits: 1
 Model cache misses: 1
 Models cached: 11
 Models cleared from cache: 0
 Cache high water mark: 3.82/64.00G
[2024-06-23 18:36:38,207]::[ModelManagerService]::INFO --> Released torch device cuda:0
[2024-06-23 18:36:38,222]::[ModelManagerService]::INFO --> Reserved torch device cuda:0 for execution thread 132909407143616
[2024-06-23 18:36:41,660]::[uvicorn.access]::INFO --> 172.24.13.33:51761 - "POST /api/v1/queue/default/enqueue_batch HTTP/1.1" 200
[2024-06-23 18:36:41,775]::[uvicorn.access]::INFO --> 172.24.13.33:51761 - "GET /api/v1/queue/default/status HTTP/1.1" 200
[2024-06-23 18:36:41,930]::[uvicorn.access]::INFO --> 172.24.13.33:51762 - "GET /api/v1/queue/default/list HTTP/1.1" 200
[2024-06-23 18:36:42,868]::[uvicorn.access]::INFO --> 172.24.13.33:51762 - "GET /api/v1/images/i/aa44969b-b2a5-4e17-96c1-ef6937f2f1c3.png HTTP/1.1" 200
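The Released/Reserved lines in this log reflect the reserve-a-GPU-per-thread scheme described in the PR summary: each worker thread blocks until a device is free, holds it for the whole invocation, then returns it to the pool. A stdlib-only sketch of that pattern (`DevicePool` is a hypothetical class name, not the ModelManagerService API):

```python
import threading
from contextlib import contextmanager

class DevicePool:
    """Thread-safe pool of device identifiers."""

    def __init__(self, devices):
        self._free = list(devices)
        self._cond = threading.Condition()

    @contextmanager
    def reserve(self):
        # Block until a device is free, pop it, and hand it to the caller.
        with self._cond:
            while not self._free:
                self._cond.wait()
            device = self._free.pop()
        try:
            yield device
        finally:
            # Return the device and wake one waiting thread.
            with self._cond:
                self._free.append(device)
                self._cond.notify()
```

With two GPUs, two sessions can hold `cuda:0` and `cuda:1` concurrently while a third blocks in `reserve()` until one is released.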

Upscaling:   0% 0/4 [00:00<?, ?it/s]
[2024-06-23 18:36:44,322]::[InvokeAI]::ERROR --> Error while invoking session b078e861-e127-4bac-b2bb-d473125dde0a, invocation bd8bd5f0-283b-42e7-9aaa-1a59fa9f117c (esrgan): Input type (float) and bias type (c10::Half) should be the same
[2024-06-23 18:36:44,322]::[InvokeAI]::ERROR --> Traceback (most recent call last):
File "/opt/invokeai/invokeai/app/services/session_processor/session_processor_default.py", line 135, in run_node
  output = invocation.invoke_internal(context=context, services=self._services)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/invokeai/invokeai/app/invocations/baseinvocation.py", line 289, in invoke_internal
  output = self.invoke(context)
           ^^^^^^^^^^^^^^^^^^^^
File "/opt/invokeai/invokeai/app/invocations/upscale.py", line 110, in invoke
  upscaled_image = upscaler.upscale(cv2_image)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
  return func(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^
File "/opt/invokeai/invokeai/backend/image_util/realesrgan/realesrgan.py", line 231, in upscale
  self.tile_process()
File "/opt/invokeai/invokeai/backend/image_util/realesrgan/realesrgan.py", line 159, in tile_process
  output_tile = self.model(input_tile)
                ^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
  return self._call_impl(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
  return forward_call(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/invokeai/invokeai/backend/image_util/basicsr/rrdbnet_arch.py", line 118, in forward
  feat = self.conv_first(feat)
         ^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
  return self._call_impl(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
  return forward_call(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/conv.py", line 460, in forward
  return self._conv_forward(input, self.weight, self.bias)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/conv.py", line 456, in _conv_forward
  return F.conv2d(input, weight, bias, self.stride,
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Input type (float) and bias type (c10::Half) should be the same
[2024-06-23 18:36:44,499]::[InvokeAI]::INFO --> Graph stats: b078e861-e127-4bac-b2bb-d473125dde0a
                        Node   Calls   Seconds  VRAM Used
           main_model_loader       1    0.001s     0.690G
                   clip_skip       1    0.000s     0.690G
                      compel       2    2.370s     0.956G
                     collect       2    0.001s     0.956G
                       noise       1    0.002s     0.956G
             denoise_latents       1   46.602s     2.236G
                         l2i       1    5.063s     2.107G
                      esrgan       1    1.511s     2.103G
TOTAL GRAPH EXECUTION TIME:  55.550s
TOTAL GRAPH WALL TIME:  55.582s
RAM used by InvokeAI process: 7.98G (+5.575G)
RAM used to load models: 2.06G
VRAM in use: 2.103G
RAM cache statistics:
 Model cache hits: 9
 Model cache misses: 5
 Models cached: 14
 Models cleared from cache: 0
 Cache high water mark: 4.05/64.00G
[2024-06-23 18:36:44,499]::[ModelManagerService]::INFO --> Released torch device cuda:1
[2024-06-23 18:36:44,503]::[ModelManagerService]::INFO --> Reserved torch device cuda:1 for execution thread 132909396657856
[2024-06-23 18:36:48,944]::[InvokeAI]::ERROR --> Error while invoking session 4c45b53f-737d-4d2e-8112-985f97db77a2, invocation efbeab90-8a3a-4a06-9db7-208c97cf2e3d (sdxl_compel_prompt): Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1! (when checking argument for argument index in method wrapper_CUDA__index_select)
[2024-06-23 18:36:48,944]::[InvokeAI]::ERROR --> Traceback (most recent call last):
File "/opt/invokeai/invokeai/app/services/session_processor/session_processor_default.py", line 135, in run_node
  output = invocation.invoke_internal(context=context, services=self._services)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/invokeai/invokeai/app/invocations/baseinvocation.py", line 289, in invoke_internal
  output = self.invoke(context)
           ^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
  return func(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^
File "/opt/invokeai/invokeai/app/invocations/compel.py", line 275, in invoke
  c2, c2_pooled = self.run_clip_compel(context, self.clip2, self.style, True, "lora_te2_", zero_on_empty=True)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/invokeai/invokeai/app/invocations/compel.py", line 219, in run_clip_compel
  c_pooled = compel.conditioning_provider.get_pooled_embeddings([prompt])
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/compel/embeddings_provider.py", line 243, in get_pooled_embeddings
  text_encoder_output = self.text_encoder(token_ids, attention_mask, return_dict=True)
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
  return self._call_impl(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
  return forward_call(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/transformers/models/clip/modeling_clip.py", line 1217, in forward
  text_outputs = self.text_model(
                 ^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
  return self._call_impl(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
  return forward_call(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/transformers/models/clip/modeling_clip.py", line 699, in forward
  hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
  return self._call_impl(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
  return forward_call(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/transformers/models/clip/modeling_clip.py", line 219, in forward
  inputs_embeds = self.token_embedding(input_ids)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
  return self._call_impl(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
  return forward_call(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/sparse.py", line 163, in forward
  return F.embedding(
         ^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/functional.py", line 2237, in embedding
  return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1! (when checking argument for argument index in method wrapper_CUDA__index_select)
[2024-06-23 18:36:49,299]::[InvokeAI]::INFO --> Graph stats: 4c45b53f-737d-4d2e-8112-985f97db77a2
                        Node   Calls   Seconds  VRAM Used
           sdxl_model_loader       1    0.001s     2.103G
          sdxl_compel_prompt       1    4.416s     2.114G
TOTAL GRAPH EXECUTION TIME:   4.417s
TOTAL GRAPH WALL TIME:   4.418s
RAM used by InvokeAI process: 9.31G (+1.248G)
RAM used to load models: 1.52G
VRAM in use: 1.313G
RAM cache statistics:
 Model cache hits: 5
 Model cache misses: 1
 Models cached: 15
 Models cleared from cache: 0
 Cache high water mark: 5.34/64.00G
[2024-06-23 18:36:49,300]::[ModelManagerService]::INFO --> Released torch device cuda:1
[2024-06-23 18:36:49,300]::[ModelManagerService]::INFO --> Reserved torch device cuda:1 for execution thread 132909396657856
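The sdxl_compel_prompt failure above shows a tensor produced on cuda:0 reaching a text encoder that this thread loaded on cuda:1. The usual fix is to move inputs onto the model's own device right before the call; a minimal sketch (hypothetical helper, not the compel API):

```python
import torch
import torch.nn as nn

def encode_on_model_device(text_encoder: nn.Module, token_ids: torch.Tensor):
    # When worker threads share cached tensors, ids created on one GPU can
    # reach an encoder reserved on another; moving inputs to the encoder's
    # device avoids "Expected all tensors to be on the same device".
    device = next(text_encoder.parameters()).device
    return text_encoder(token_ids.to(device))
```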

  0% 0/100 [00:00<?, ?it/s]
100% 100/100 [00:27<00:00,  3.67it/s]
  0% 0/100 [00:00<?, ?it/s]
 11% 11/100 [00:21<02:51,  1.92s/it]
[2024-06-23 18:38:03,607]::[uvicorn.access]::INFO --> 172.24.13.33:51779 - "GET /api/v1/images/i/543bf40d-4868-4073-907d-4564da83cc48.png HTTP/1.1" 200
 14% 14/100 [00:27<02:50,  1.98s/it]
Upscaling:   0% 0/4 [00:00<?, ?it/s]
[2024-06-23 18:38:06,193]::[InvokeAI]::ERROR --> Error while invoking session e9ea8285-bbec-489e-a4a6-f71f66be13bf, invocation 4872a6b4-37d0-4ef7-86b2-cab6a398dd93 (esrgan): Input type (float) and bias type (c10::Half) should be the same
[2024-06-23 18:38:06,193]::[InvokeAI]::ERROR --> Traceback (most recent call last):
File "/opt/invokeai/invokeai/app/services/session_processor/session_processor_default.py", line 135, in run_node
  output = invocation.invoke_internal(context=context, services=self._services)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/invokeai/invokeai/app/invocations/baseinvocation.py", line 289, in invoke_internal
  output = self.invoke(context)
           ^^^^^^^^^^^^^^^^^^^^
File "/opt/invokeai/invokeai/app/invocations/upscale.py", line 110, in invoke
  upscaled_image = upscaler.upscale(cv2_image)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
  return func(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^
File "/opt/invokeai/invokeai/backend/image_util/realesrgan/realesrgan.py", line 231, in upscale
  self.tile_process()
File "/opt/invokeai/invokeai/backend/image_util/realesrgan/realesrgan.py", line 159, in tile_process
  output_tile = self.model(input_tile)
                ^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
  return self._call_impl(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
  return forward_call(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/invokeai/invokeai/backend/image_util/basicsr/rrdbnet_arch.py", line 118, in forward
  feat = self.conv_first(feat)
         ^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
  return self._call_impl(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
  return forward_call(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/conv.py", line 460, in forward
  return self._conv_forward(input, self.weight, self.bias)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/conv.py", line 456, in _conv_forward
  return F.conv2d(input, weight, bias, self.stride,
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Input type (float) and bias type (c10::Half) should be the same
[2024-06-23 18:38:06,334]::[InvokeAI]::INFO --> Graph stats: e9ea8285-bbec-489e-a4a6-f71f66be13bf
                        Node   Calls   Seconds  VRAM Used
           main_model_loader       1    0.001s     1.313G
                   clip_skip       1    0.001s     1.313G
                      compel       2    8.120s     1.310G
                     collect       2    0.001s     0.041G
                       noise       1    0.002s     0.041G
             denoise_latents       1   63.182s     5.471G
                         l2i       1    2.830s     5.471G
                      esrgan       1    2.666s     5.471G
TOTAL GRAPH EXECUTION TIME:  76.804s
TOTAL GRAPH WALL TIME:  76.817s
RAM used by InvokeAI process: 18.24G (+8.909G)
RAM used to load models: 2.40G
VRAM in use: 5.030G
RAM cache statistics:
 Model cache hits: 8
 Model cache misses: 7
 Models cached: 23
 Models cleared from cache: 0
 Cache high water mark: 12.52/64.00G
[2024-06-23 18:38:06,335]::[ModelManagerService]::INFO --> Released torch device cuda:1
[2024-06-23 18:38:06,997]::[ModelManagerService]::INFO --> Reserved torch device cuda:1 for execution thread 132909396657856


 15% 15/100 [00:29<02:52,  2.03s/it]
  0% 0/100 [00:00<?, ?it/s]
100% 100/100 [00:27<00:00,  3.69it/s]
 28% 28/100 [00:54<02:22,  1.98s/it]
29% 29/100 [00:56<02:22,  2.00s/it]�[A�[38;20m[2024-06-23 18:38:37,211]::[uvicorn.access]::INFO --> 172.24.13.33:51781 - "GET /api/v1/images/i/2a34f58a-8093-46fb-9985-a03d8e0e2619.png HTTP/1.1" 200�[0m


30% 30/100 [00:58<02:21,  2.01s/it]�[A
Upscaling:   0% 0/4 [00:00<?, ?it/s]
[2024-06-23 18:38:38,959]::[InvokeAI]::ERROR --> Error while invoking session 5975bffd-00ff-4525-90ca-e00a33792c73, invocation 2fe56708-63cf-4c7a-afa3-419312700b26 (esrgan): Input type (float) and bias type (c10::Half) should be the same
[2024-06-23 18:38:38,960]::[InvokeAI]::ERROR --> Traceback (most recent call last):
File "/opt/invokeai/invokeai/app/services/session_processor/session_processor_default.py", line 135, in run_node
  output = invocation.invoke_internal(context=context, services=self._services)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/invokeai/invokeai/app/invocations/baseinvocation.py", line 289, in invoke_internal
  output = self.invoke(context)
           ^^^^^^^^^^^^^^^^^^^^
File "/opt/invokeai/invokeai/app/invocations/upscale.py", line 110, in invoke
  upscaled_image = upscaler.upscale(cv2_image)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
  return func(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^
File "/opt/invokeai/invokeai/backend/image_util/realesrgan/realesrgan.py", line 231, in upscale
  self.tile_process()
File "/opt/invokeai/invokeai/backend/image_util/realesrgan/realesrgan.py", line 159, in tile_process
  output_tile = self.model(input_tile)
                ^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
  return self._call_impl(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
  return forward_call(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/invokeai/invokeai/backend/image_util/basicsr/rrdbnet_arch.py", line 118, in forward
  feat = self.conv_first(feat)
         ^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
  return self._call_impl(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
  return forward_call(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/conv.py", line 460, in forward
  return self._conv_forward(input, self.weight, self.bias)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/conv.py", line 456, in _conv_forward
  return F.conv2d(input, weight, bias, self.stride,
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Input type (float) and bias type (c10::Half) should be the same
[2024-06-23 18:38:39,128]::[InvokeAI]::INFO --> Graph stats: 5975bffd-00ff-4525-90ca-e00a33792c73
                        Node   Calls   Seconds  VRAM Used
           main_model_loader       1    0.003s     5.030G
                   clip_skip       1    0.001s     5.036G
                      compel       2    0.001s     5.065G
                     collect       2    0.003s     5.103G
                       noise       1    0.004s     5.050G
             denoise_latents       1   28.466s     5.471G
                         l2i       1    1.650s     5.471G
                      esrgan       1    1.790s     5.471G
TOTAL GRAPH EXECUTION TIME:  31.917s
TOTAL GRAPH WALL TIME:  31.932s
RAM used by InvokeAI process: 18.24G (+0.000G)
RAM used to load models: 1.77G
VRAM in use: 4.900G
RAM cache statistics:
 Model cache hits: 4
 Model cache misses: 0
 Models cached: 23
 Models cleared from cache: 0
 Cache high water mark: 12.52/64.00G
[2024-06-23 18:38:39,128]::[ModelManagerService]::INFO --> Released torch device cuda:1
[2024-06-23 18:38:39,583]::[ModelManagerService]::INFO --> Reserved torch device cuda:1 for execution thread 132909396657856
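The recurring esrgan failure above is a plain torch dtype mismatch: the RRDBNet weights sit in float16 (`c10::Half`) while the input tile arrives as float32. A minimal sketch of the mismatch and the usual fix — casting the input to the module's parameter dtype before the forward pass. The names here (`conv`, `tile`) are illustrative, not InvokeAI's actual code:

```python
import torch
import torch.nn as nn

# Stand-in for the ESRGAN conv stack: weights loaded in half precision,
# as happens when the model is configured for fp16 inference.
conv = nn.Conv2d(3, 64, kernel_size=3, padding=1).half()

tile = torch.randn(1, 3, 8, 8)  # float32 input tile, e.g. fresh from cv2 preprocessing

err = None
try:
    conv(tile)  # mixed dtypes: raises "Input type ... should be the same"
except RuntimeError as e:
    err = e

# The fix: cast the input to the module's parameter dtype (and, on GPU, device) first.
tile16 = tile.to(next(conv.parameters()).dtype)
print(err)
print(tile16.dtype)  # torch.float16
```

With the cast in place, `conv(tile16)` runs in half precision instead of raising.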


[tqdm progress output elided: a second denoise run completes 100/100 at ~3.47 it/s while the batch run advances 31→56% at ~2.0 s/it]
[2024-06-23 18:39:29,401]::[uvicorn.access]::INFO --> 172.24.13.33:51792 - "GET /api/v1/images/i/df9cfe8b-c3c4-4e3c-809c-cb266da95509.png HTTP/1.1" 200
Upscaling:   0% 0/4 [00:00<?, ?it/s]
[2024-06-23 18:39:31,047]::[InvokeAI]::ERROR --> Error while invoking session 2250ddd0-0241-424c-b95f-edd1191eaa18, invocation 0c44d264-330e-44cf-9e8e-073f9a0fa206 (esrgan): Input type (float) and bias type (c10::Half) should be the same
[traceback identical to the first esrgan traceback above]
[2024-06-23 18:39:31,131]::[InvokeAI]::INFO --> Graph stats: 2250ddd0-0241-424c-b95f-edd1191eaa18
                        Node   Calls   Seconds  VRAM Used
           main_model_loader       1    0.002s     4.900G
                   clip_skip       1    0.001s     4.900G
                      compel       2    3.336s     5.471G
                     collect       2    0.001s     5.041G
                       noise       1    0.002s     5.103G
             denoise_latents       1   42.999s     5.471G
                         l2i       1    3.365s     5.471G
                      esrgan       1    1.699s     5.471G
TOTAL GRAPH EXECUTION TIME:  51.404s
TOTAL GRAPH WALL TIME:  51.415s
RAM used by InvokeAI process: 20.57G (+2.333G)
RAM used to load models: 2.22G
VRAM in use: 5.042G
RAM cache statistics:
 Model cache hits: 9
 Model cache misses: 5
 Models cached: 28
 Models cleared from cache: 0
 Cache high water mark: 14.51/64.00G
[2024-06-23 18:39:31,132]::[ModelManagerService]::INFO --> Released torch device cuda:1
[2024-06-23 18:39:31,732]::[ModelManagerService]::INFO --> Reserved torch device cuda:1 for execution thread 132909396657856


[tqdm progress output elided: a third denoise run completes 100/100 at ~3.47 it/s while the batch run advances 57→72% at ~2.0 s/it]
[2024-06-23 18:40:03,555]::[uvicorn.access]::INFO --> 172.24.13.33:51795 - "GET /api/v1/images/i/45e81fef-2890-43e6-bc41-a2c7132a5560.png HTTP/1.1" 200


Upscaling:   0% 0/4 [00:00<?, ?it/s]
[2024-06-23 18:40:05,437]::[InvokeAI]::ERROR --> Error while invoking session 551a36ec-9d1f-4fe1-9bbc-cf51491dc891, invocation 865680e0-0622-4f69-ac2b-9a6f3a9a0949 (esrgan): Input type (float) and bias type (c10::Half) should be the same
[traceback identical to the first esrgan traceback above]
[2024-06-23 18:40:05,591]::[InvokeAI]::INFO --> Graph stats: 551a36ec-9d1f-4fe1-9bbc-cf51491dc891
                        Node   Calls   Seconds  VRAM Used
           main_model_loader       1    0.001s     5.144G
                   clip_skip       1    0.000s     5.072G
                      compel       2    0.001s     5.062G
                     collect       2    0.001s     5.072G
                       noise       1    0.002s     5.072G
             denoise_latents       1   30.016s     5.471G
                         l2i       1    1.695s     5.471G
                      esrgan       1    1.945s     5.471G
TOTAL GRAPH EXECUTION TIME:  33.660s
TOTAL GRAPH WALL TIME:  33.671s
RAM used by InvokeAI process: 20.57G (+0.001G)
RAM used to load models: 1.76G
VRAM in use: 4.900G
RAM cache statistics:
 Model cache hits: 4
 Model cache misses: 0
 Models cached: 28
 Models cleared from cache: 0
 Cache high water mark: 14.51/64.00G
[2024-06-23 18:40:05,591]::[ModelManagerService]::INFO --> Released torch device cuda:1
[2024-06-23 18:40:06,322]::[ModelManagerService]::INFO --> Reserved torch device cuda:1 for execution thread 132909396657856


[tqdm progress output elided: a fourth denoise run completes 100/100 at ~3.47 it/s while the batch run advances 74→90% at ~1.9-2.0 s/it]
[2024-06-23 18:40:39,158]::[uvicorn.access]::INFO --> 172.24.13.33:51797 - "GET /api/v1/images/i/ccc28b4d-db99-4d82-8478-47e9bf0dab58.png HTTP/1.1" 200


Upscaling:   0% 0/4 [00:00<?, ?it/s]
[2024-06-23 18:40:40,882]::[InvokeAI]::ERROR --> Error while invoking session 1362c7bd-8777-40a9-a09f-9d6503ade50e, invocation 3dfe7678-642d-4ef4-86ff-cdd49b808d00 (esrgan): Input type (float) and bias type (c10::Half) should be the same
[traceback identical to the first esrgan traceback above]
[2024-06-23 18:40:41,014]::[InvokeAI]::INFO --> Graph stats: 1362c7bd-8777-40a9-a09f-9d6503ade50e
                        Node   Calls   Seconds  VRAM Used
           main_model_loader       1    0.001s     4.900G
                   clip_skip       1    0.001s     4.900G
                      compel       2    0.002s     4.901G
                     collect       2    0.002s     4.901G
                       noise       1    0.003s     4.902G
             denoise_latents       1   31.199s     5.471G
                         l2i       1    1.461s     5.471G
                      esrgan       1    1.815s     5.471G
TOTAL GRAPH EXECUTION TIME:  34.484s
TOTAL GRAPH WALL TIME:  34.497s
RAM used by InvokeAI process: 20.57G (+0.000G)
RAM used to load models: 1.76G
VRAM in use: 4.900G
RAM cache statistics:
 Model cache hits: 4
 Model cache misses: 0
 Models cached: 28
 Models cleared from cache: 0
 Cache high water mark: 14.51/64.00G
[2024-06-23 18:40:41,014]::[ModelManagerService]::INFO --> Released torch device cuda:1
[2024-06-23 18:40:41,709]::[ModelManagerService]::INFO --> Reserved torch device cuda:1 for execution thread 132909396657856
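The "Released torch device" / "Reserved torch device" pairs in the log show the per-thread GPU checkout described in the PR summary: each worker reserves a device on entry, runs the whole invocation, then releases it for the next pending session. A minimal sketch of such a thread-safe device pool, using a condition variable; class and variable names here are hypothetical, not the actual ModelManagerService implementation:

```python
import threading
from contextlib import contextmanager

class DevicePool:
    """Hand out exclusive devices to worker threads, blocking while all are busy."""

    def __init__(self, devices):
        self._free = list(devices)          # devices not currently reserved
        self._cond = threading.Condition()  # guards access to _free

    @contextmanager
    def reserve(self):
        with self._cond:
            while not self._free:
                self._cond.wait()           # sleep until another thread releases
            device = self._free.pop()
        try:
            yield device                    # caller runs its invocation on this device
        finally:
            with self._cond:
                self._free.append(device)
                self._cond.notify()         # wake one waiting thread

pool = DevicePool(["cuda:0", "cuda:1"])
used = []

def worker():
    with pool.reserve() as device:
        used.append(device)  # stand-in for processing a session on the device

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(used))  # all 4 invocations ran, never more than 2 concurrently
```

With more queued sessions than GPUs, extra workers simply block in `reserve()` until a device frees up, which matches the alternating Released/Reserved lines above.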


 92% 92/100 [03:03<00:15,  1.98s/it]
  0% 0/100 [00:00<?, ?it/s]
[... interleaved denoising progress trimmed ...]
100% 100/100 [03:19<00:00,  1.99s/it]
 73% 73/100 [00:21<00:08,  3.26it/s]
[2024-06-23 18:41:03,837]::[uvicorn.access]::INFO --> 172.24.13.33:51802 - "GET /api/v1/images/i/46b36a2e-da0b-4dc2-ac55-9fd644dea8e9.png HTTP/1.1" 200
[2024-06-23 18:41:03,875]::[InvokeAI]::INFO --> Graph stats: 11b307d0-edaa-4d9d-8a03-a51498d00c05
                        Node   Calls   Seconds  VRAM Used
           sdxl_model_loader       1    0.002s     1.858G
          sdxl_compel_prompt       2   12.660s     1.553G
                     collect       2    0.001s     1.310G
                       noise       1    0.005s     1.310G
             denoise_latents       1  246.877s     5.471G
               core_metadata       1    0.002s     4.899G
                         l2i       1    5.911s     4.899G
TOTAL GRAPH EXECUTION TIME: 265.458s
TOTAL GRAPH WALL TIME: 265.470s
RAM used by InvokeAI process: 20.74G (+14.554G)
RAM used to load models: 7.48G
VRAM in use: 0.312G
RAM cache statistics:
 Model cache hits: 11
 Model cache misses: 6
 Models cached: 29
 Models cleared from cache: 0
 Cache high water mark: 14.67/64.00G
[2024-06-23 18:41:03,875]::[ModelManagerService]::INFO --> Released torch device cuda:0
[2024-06-23 18:41:04,071]::[ModelManagerService]::INFO --> Reserved torch device cuda:0 for execution thread 132909407143616

 74% 74/100 [00:22<00:13,  2.00it/s]
[2024-06-23 18:41:04,705]::[uvicorn.access]::INFO --> 172.24.13.33:51803 - "GET /api/v1/boards/?all=true HTTP/1.1" 200
[2024-06-23 18:41:04,711]::[uvicorn.access]::INFO --> 172.24.13.33:51806 - "GET /api/v1/images/i/46b36a2e-da0b-4dc2-ac55-9fd644dea8e9.png/metadata HTTP/1.1" 200
[2024-06-23 18:41:04,717]::[uvicorn.access]::INFO --> 172.24.13.33:51804 - "GET /api/v1/images/i/46b36a2e-da0b-4dc2-ac55-9fd644dea8e9.png/thumbnail HTTP/1.1" 200
[2024-06-23 18:41:04,719]::[uvicorn.access]::INFO --> 172.24.13.33:51805 - "GET /api/v1/images/i/46b36a2e-da0b-4dc2-ac55-9fd644dea8e9.png/full HTTP/1.1" 200
[... denoising progress trimmed ...]
100% 100/100 [00:31<00:00,  3.13it/s]
[2024-06-23 18:41:15,159]::[uvicorn.access]::INFO --> 172.24.13.33:51812 - "GET /api/v1/images/i/a5bf9d3b-a2f9-444e-9a5f-7af470a1000d.png HTTP/1.1" 200
[2024-06-23 18:41:16,034]::[InvokeAI]::ERROR --> Error while invoking session a88e6329-e086-41e3-b250-b2b36ba16792, invocation 7e28c47e-7280-488b-8ac3-07cc48eb18cf (esrgan): Cannot copy out of meta tensor; no data!
[2024-06-23 18:41:16,035]::[InvokeAI]::ERROR --> Traceback (most recent call last):
File "/opt/invokeai/invokeai/app/services/session_processor/session_processor_default.py", line 135, in run_node
  output = invocation.invoke_internal(context=context, services=self._services)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/invokeai/invokeai/app/invocations/baseinvocation.py", line 289, in invoke_internal
  output = self.invoke(context)
           ^^^^^^^^^^^^^^^^^^^^
File "/opt/invokeai/invokeai/app/invocations/upscale.py", line 99, in invoke
  upscaler = RealESRGAN(
             ^^^^^^^^^^^
File "/opt/invokeai/invokeai/backend/image_util/realesrgan/realesrgan.py", line 78, in __init__
  self.model = model.to(self.device)
               ^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1152, in to
  return self._apply(convert)
         ^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 802, in _apply
  module._apply(fn)
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 802, in _apply
  module._apply(fn)
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 802, in _apply
  module._apply(fn)
[Previous line repeated 1 more time]
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 825, in _apply
  param_applied = fn(param)
                  ^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1150, in convert
  return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
NotImplementedError: Cannot copy out of meta tensor; no data!
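For context on the failure above: `Cannot copy out of meta tensor; no data!` is what PyTorch raises whenever `.to()` is called on a module whose parameters sit on the `meta` device (shape-only placeholders with no backing storage). A minimal sketch reproducing the same exception in isolation (hypothetical, not InvokeAI code):

```python
import torch

# Parameters created on the "meta" device have shapes but no storage,
# so .to() has nothing to copy and raises NotImplementedError.
layer = torch.nn.Linear(4, 4, device="meta")

try:
    layer.to("cpu")  # same failure mode as in the traceback above
except NotImplementedError as exc:
    print(type(exc).__name__, "-", exc)

# Module.to_empty() allocates fresh (uninitialized) storage on the
# target device instead of copying, so it works on meta modules:
layer = layer.to_empty(device="cpu")
print(layer.weight.device)  # cpu
```

This suggests a race in which one thread sees a model whose weights have already been demoted to meta/offloaded state by another thread's cache eviction, rather than a problem in ESRGAN itself.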
[2024-06-23 18:41:16,232]::[InvokeAI]::INFO --> Graph stats: a88e6329-e086-41e3-b250-b2b36ba16792
                        Node   Calls   Seconds  VRAM Used
           main_model_loader       1    0.001s     4.900G
                   clip_skip       1    0.000s     4.900G
                      compel       2    0.002s     4.901G
                     collect       2    0.001s     4.901G
                       noise       1    0.002s     4.940G
             denoise_latents       1   32.224s     1.852G
                         l2i       1    1.109s     1.855G
                      esrgan       1    0.930s     1.311G
TOTAL GRAPH EXECUTION TIME:  34.270s
TOTAL GRAPH WALL TIME:  34.281s
RAM used by InvokeAI process: 22.02G (+1.454G)
RAM used to load models: 1.76G
VRAM in use: 1.311G
RAM cache statistics:
 Model cache hits: 3
 Model cache misses: 0
 Models cached: 33
 Models cleared from cache: 0
 Cache high water mark: 16.19/64.00G
[2024-06-23 18:41:16,232]::[ModelManagerService]::INFO --> Released torch device cuda:1
[2024-06-23 18:41:16,233]::[ModelManagerService]::INFO --> Reserved torch device cuda:1 for execution thread 132909396657856
[2024-06-23 18:41:19,802]::[InvokeAI]::ERROR --> Error while invoking session b82f5a87-584c-4a52-bac9-f0e5e6c89eaa, invocation 6cf4ee6a-40e4-443e-bf0d-bd87e633b729 (denoise_latents): Cannot copy out of meta tensor; no data!
[2024-06-23 18:41:19,802]::[InvokeAI]::ERROR --> Traceback (most recent call last):
File "/opt/invokeai/invokeai/app/services/session_processor/session_processor_default.py", line 135, in run_node
  output = invocation.invoke_internal(context=context, services=self._services)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/invokeai/invokeai/app/invocations/baseinvocation.py", line 289, in invoke_internal
  output = self.invoke(context)
           ^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
  return func(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/contextlib.py", line 81, in inner
  return func(*args, **kwds)
         ^^^^^^^^^^^^^^^^^^^
File "/opt/invokeai/invokeai/app/invocations/denoise_latents.py", line 725, in invoke
  with (
File "/usr/lib/python3.11/contextlib.py", line 137, in __enter__
  return next(self.gen)
         ^^^^^^^^^^^^^^
File "/opt/invokeai/invokeai/backend/model_manager/load/load_base.py", line 77, in model_on_device
  locked_model = self._locker.lock()
                 ^^^^^^^^^^^^^^^^^^^
File "/opt/invokeai/invokeai/backend/model_manager/load/model_cache/model_locker.py", line 43, in lock
  self._cache.offload_unlocked_models(self._cache_entry.size)
File "/opt/invokeai/invokeai/backend/model_manager/load/model_cache/model_cache_default.py", line 301, in offload_unlocked_models
  self.move_model_to_device(cache_entry, self.storage_device)
File "/opt/invokeai/invokeai/backend/model_manager/load/model_cache/model_cache_default.py", line 356, in move_model_to_device
  raise e
File "/opt/invokeai/invokeai/backend/model_manager/load/model_cache/model_cache_default.py", line 352, in move_model_to_device
  cache_entry.model.to(target_device, non_blocking=True)
File "/opt/venv/invokeai/lib/python3.11/site-packages/transformers/modeling_utils.py", line 2724, in to
  return super().to(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1152, in to
  return self._apply(convert)
         ^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 802, in _apply
  module._apply(fn)
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 802, in _apply
  module._apply(fn)
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 802, in _apply
  module._apply(fn)
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 825, in _apply
  param_applied = fn(param)
                  ^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1150, in convert
  return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
NotImplementedError: Cannot copy out of meta tensor; no data!
[2024-06-23 18:41:19,915]::[InvokeAI]::INFO --> Graph stats: b82f5a87-584c-4a52-bac9-f0e5e6c89eaa
                        Node   Calls   Seconds  VRAM Used
           sdxl_model_loader       1    0.003s     0.312G
          sdxl_compel_prompt       2   10.533s     1.860G
                     collect       2    0.001s     1.852G
                       noise       1    0.005s     1.311G
             denoise_latents       1    4.572s     1.311G
TOTAL GRAPH EXECUTION TIME:  15.114s
TOTAL GRAPH WALL TIME:  15.711s
RAM used by InvokeAI process: 23.90G (+3.166G)
RAM used to load models: 3.05G
VRAM in use: 0.017G
RAM cache statistics:
 Model cache hits: 9
 Model cache misses: 5
 Models cached: 33
 Models cleared from cache: 0
 Cache high water mark: 16.19/64.00G
[2024-06-23 18:41:19,915]::[ModelManagerService]::INFO --> Released torch device cuda:0
[2024-06-23 18:41:19,975]::[ModelManagerService]::INFO --> Reserved torch device cuda:0 for execution thread 132909407143616

  0% 0/100 [00:00<?, ?it/s]
[... denoising progress trimmed ...]
100% 100/100 [00:29<00:00,  3.35it/s]


  1% 1/100 [00:02<03:20,  2.03s/it]
[2024-06-23 18:41:53,372]::[uvicorn.access]::INFO --> 172.24.13.33:51817 - "GET /api/v1/images/i/7137439b-3f7a-47b4-86aa-a5e7fabe5660.png HTTP/1.1" 200


  2% 2/100 [00:03<03:13,  1.98s/it]
Upscaling:   0% 0/4 [00:00<?, ?it/s]
[2024-06-23 18:41:55,125]::[InvokeAI]::ERROR --> Error while invoking session c4e56901-b8b5-493e-85ca-f98784b43df8, invocation 42ab349d-3029-48dc-aee4-cb0f1a63e453 (esrgan): Input type (float) and bias type (c10::Half) should be the same
[2024-06-23 18:41:55,125]::[InvokeAI]::ERROR --> Traceback (most recent call last):
File "/opt/invokeai/invokeai/app/services/session_processor/session_processor_default.py", line 135, in run_node
  output = invocation.invoke_internal(context=context, services=self._services)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/invokeai/invokeai/app/invocations/baseinvocation.py", line 289, in invoke_internal
  output = self.invoke(context)
           ^^^^^^^^^^^^^^^^^^^^
File "/opt/invokeai/invokeai/app/invocations/upscale.py", line 110, in invoke
  upscaled_image = upscaler.upscale(cv2_image)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
  return func(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^
File "/opt/invokeai/invokeai/backend/image_util/realesrgan/realesrgan.py", line 231, in upscale
  self.tile_process()
File "/opt/invokeai/invokeai/backend/image_util/realesrgan/realesrgan.py", line 159, in tile_process
  output_tile = self.model(input_tile)
                ^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
  return self._call_impl(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
  return forward_call(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/invokeai/invokeai/backend/image_util/basicsr/rrdbnet_arch.py", line 118, in forward
  feat = self.conv_first(feat)
         ^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
  return self._call_impl(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
  return forward_call(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/conv.py", line 460, in forward
  return self._conv_forward(input, self.weight, self.bias)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/conv.py", line 456, in _conv_forward
  return F.conv2d(input, weight, bias, self.stride,
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Input type (float) and bias type (c10::Half) should be the same
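For context on this one: `Input type (float) and bias type (c10::Half) should be the same` is the standard mismatch error when a float32 tensor is fed into a model whose weights were cast to half precision, which can happen here if the ESRGAN model's dtype and the incoming image tensor's dtype diverge across devices. A minimal sketch (hypothetical, not InvokeAI code):

```python
import torch

# A conv whose weights and bias were cast to half precision...
conv = torch.nn.Conv2d(3, 8, kernel_size=3).half()
# ...fed a default float32 input, as when the image tensor skips the cast:
x = torch.randn(1, 3, 16, 16)

try:
    conv(x)  # dtype mismatch error, as in the traceback above
except RuntimeError as exc:
    print(exc)

# Aligning the dtypes (either side) resolves it; here the model
# is cast back to float32 to match the input:
out = conv.float()(x)
print(out.dtype)  # torch.float32
```

The likely fix in `realesrgan.py` would be to cast the input tile to `self.model`'s parameter dtype before the forward pass (an assumption based on the traceback, not verified against the code).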
[2024-06-23 18:41:55,297]::[InvokeAI]::INFO --> Graph stats: c4e56901-b8b5-493e-85ca-f98784b43df8
                        Node   Calls   Seconds  VRAM Used
           main_model_loader       1    0.001s     0.017G
                   clip_skip       1    0.001s     0.017G
                      compel       2    0.002s     0.017G
                     collect       2    0.002s     0.017G
                       noise       1    0.004s     0.017G
             denoise_latents       1   31.934s     2.087G
                         l2i       1    1.279s     2.676G
                      esrgan       1    1.842s     1.974G
TOTAL GRAPH EXECUTION TIME:  35.065s
TOTAL GRAPH WALL TIME:  35.078s
RAM used by InvokeAI process: 29.97G (+5.957G)
RAM used to load models: 6.54G
VRAM in use: 1.926G
RAM cache statistics:
 Model cache hits: 6
 Model cache misses: 2
 Models cached: 35
 Models cleared from cache: 0
 Cache high water mark: 19.83/64.00G
[2024-06-23 18:41:55,297]::[ModelManagerService]::INFO --> Released torch device cuda:0
[2024-06-23 18:41:55,870]::[ModelManagerService]::INFO --> Reserved torch device cuda:0 for execution thread 132909407143616

  0% 0/100 [00:00<?, ?it/s]
[... interleaved denoising progress trimmed ...]
100% 100/100 [00:29<00:00,  3.36it/s]
[2024-06-23 18:42:27,790]::[uvicorn.access]::INFO --> 172.24.13.33:51820 - "GET /api/v1/images/i/0f4638c2-1f63-4312-add6-fb37da7d3aaa.png HTTP/1.1" 200


 19% 19/100 [00:36<02:37,  1.94s/it]
Upscaling:   0% 0/4 [00:00<?, ?it/s]
[2024-06-23 18:42:29,741]::[InvokeAI]::ERROR --> Error while invoking session 1f847b22-3d11-41df-b522-d84f593f31d3, invocation b8162278-48cf-4f64-a600-969d79ff5f6a (esrgan): Input type (float) and bias type (c10::Half) should be the same
[2024-06-23 18:42:29,741]::[InvokeAI]::ERROR --> Traceback (most recent call last):
File "/opt/invokeai/invokeai/app/services/session_processor/session_processor_default.py", line 135, in run_node
  output = invocation.invoke_internal(context=context, services=self._services)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/invokeai/invokeai/app/invocations/baseinvocation.py", line 289, in invoke_internal
  output = self.invoke(context)
           ^^^^^^^^^^^^^^^^^^^^
File "/opt/invokeai/invokeai/app/invocations/upscale.py", line 110, in invoke
  upscaled_image = upscaler.upscale(cv2_image)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
  return func(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^
File "/opt/invokeai/invokeai/backend/image_util/realesrgan/realesrgan.py", line 231, in upscale
  self.tile_process()
File "/opt/invokeai/invokeai/backend/image_util/realesrgan/realesrgan.py", line 159, in tile_process
  output_tile = self.model(input_tile)
                ^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
  return self._call_impl(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
  return forward_call(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/invokeai/invokeai/backend/image_util/basicsr/rrdbnet_arch.py", line 118, in forward
  feat = self.conv_first(feat)
         ^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
  return self._call_impl(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
  return forward_call(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/conv.py", line 460, in forward
  return self._conv_forward(input, self.weight, self.bias)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/nn/modules/conv.py", line 456, in _conv_forward
  return F.conv2d(input, weight, bias, self.stride,
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Input type (float) and bias type (c10::Half) should be the same


 20% 20/100 [00:38<02:35,  1.95s/it]
[2024-06-23 18:42:29,939]::[InvokeAI]::INFO --> Graph stats: 1f847b22-3d11-41df-b522-d84f593f31d3
                        Node   Calls   Seconds  VRAM Used
           main_model_loader       1    0.001s     1.926G
                   clip_skip       1    0.000s     1.926G
                      compel       2    0.001s     1.926G
                     collect       2    0.001s     1.926G
                       noise       1    0.003s     1.926G
             denoise_latents       1   30.159s     2.087G
                         l2i       1    1.649s     2.676G
                      esrgan       1    2.000s     1.974G
TOTAL GRAPH EXECUTION TIME:  33.814s
TOTAL GRAPH WALL TIME:  33.825s
RAM used by InvokeAI process: 29.86G (-0.111G)
RAM used to load models: 1.76G
VRAM in use: 1.926G
RAM cache statistics:
 Model cache hits: 4
 Model cache misses: 0
 Models cached: 35
 Models cleared from cache: 0
 Cache high water mark: 19.83/64.00G
[2024-06-23 18:42:29,939]::[ModelManagerService]::INFO --> Released torch device cuda:0
[2024-06-23 18:42:30,119]::[ModelManagerService]::INFO --> Reserved torch device cuda:0 for execution thread 132909407143616


 21% 21/100 [00:40<02:36,  1.98s/it]
  0% 0/100 [00:00<?, ?it/s]
[... interleaved denoising progress trimmed ...]
 88% 88/100 [02:50<00:23,  1.94s/it]
66% 66/100 [02:10<01:08,  2.01s/it]

89% 89/100 [02:52<00:21,  1.94s/it]�[A
67% 67/100 [02:12<01:06,  2.00s/it]

90% 90/100 [02:54<00:19,  1.94s/it]�[A
68% 68/100 [02:14<01:03,  2.00s/it]

91% 91/100 [02:56<00:17,  1.94s/it]�[A

92% 92/100 [02:58<00:15,  1.94s/it]�[A
69% 69/100 [02:16<01:01,  2.00s/it]

93% 93/100 [03:00<00:13,  1.94s/it]�[A
70% 70/100 [02:18<00:59,  2.00s/it]

94% 94/100 [03:02<00:11,  1.94s/it]�[A
71% 71/100 [02:20<00:57,  2.00s/it]

95% 95/100 [03:04<00:09,  1.94s/it]�[A
72% 72/100 [02:22<00:55,  2.00s/it]

96% 96/100 [03:06<00:07,  1.94s/it]�[A
73% 73/100 [02:24<00:53,  1.99s/it]

97% 97/100 [03:08<00:05,  1.94s/it]�[A
74% 74/100 [02:26<00:51,  1.99s/it]

98% 98/100 [03:09<00:03,  1.94s/it]�[A
75% 75/100 [02:28<00:49,  2.00s/it]

99% 99/100 [03:11<00:01,  1.94s/it]�[A
76% 76/100 [02:30<00:47,  1.99s/it]

100% 100/100 [03:13<00:00,  1.94s/it]�[A
100% 100/100 [03:13<00:00,  1.94s/it]

77% 77/100 [02:32<00:46,  2.01s/it]
78% 78/100 [02:35<00:46,  2.12s/it]
79% 79/100 [02:37<00:43,  2.08s/it]
80% 80/100 [02:39<00:41,  2.09s/it]�[38;20m[2024-06-23 18:45:12,411]::[uvicorn.access]::INFO --> 172.24.13.33:51853 - "GET /api/v1/images/i/bccab588-dd62-45a4-8560-94bdc83d5079.png HTTP/1.1" 200�[0m
[2024-06-23 18:45:12,430]::[InvokeAI]::INFO --> Graph stats: 50844b2c-5f3e-48cc-883e-1d407b442448
                        Node   Calls   Seconds  VRAM Used
           sdxl_model_loader       1    0.001s     1.311G
          sdxl_compel_prompt       2    0.005s     1.311G
                     collect       2    0.001s     1.311G
                       noise       1    0.005s     1.311G
             denoise_latents       1  229.194s     5.471G
               core_metadata       1    0.001s     4.900G
                         l2i       1    6.763s     5.471G
TOTAL GRAPH EXECUTION TIME: 235.971s
TOTAL GRAPH WALL TIME: 235.981s
RAM used by InvokeAI process: 28.92G (+6.737G)
RAM used to load models: 4.94G
VRAM in use: 5.031G
RAM cache statistics:
 Model cache hits: 2
 Model cache misses: 2
 Models cached: 36
 Models cleared from cache: 0
 Cache high water mark: 20.97/64.00G
[2024-06-23 18:45:12,430]::[ModelManagerService]::INFO --> Released torch device cuda:1
[2024-06-23 18:45:13,113]::[ModelManagerService]::INFO --> Reserved torch device cuda:1 for execution thread 132909396657856
[2024-06-23 18:45:13,169]::[uvicorn.access]::INFO --> 172.24.13.33:51854 - "GET /api/v1/boards/?all=true HTTP/1.1" 200
[2024-06-23 18:45:13,174]::[uvicorn.access]::INFO --> 172.24.13.33:51857 - "GET /api/v1/images/i/bccab588-dd62-45a4-8560-94bdc83d5079.png/metadata HTTP/1.1" 200
[2024-06-23 18:45:13,189]::[uvicorn.access]::INFO --> 172.24.13.33:51855 - "GET /api/v1/images/i/bccab588-dd62-45a4-8560-94bdc83d5079.png/thumbnail HTTP/1.1" 200
[2024-06-23 18:45:13,228]::[uvicorn.access]::INFO --> 172.24.13.33:51856 - "GET /api/v1/images/i/bccab588-dd62-45a4-8560-94bdc83d5079.png/full HTTP/1.1" 200
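The "Reserved/Released torch device" lines above reflect the PR's scheme of having each session-processor thread claim a GPU for the duration of an invocation. A minimal sketch of such a thread-safe device pool (hypothetical class and method names, not the PR's actual API) could look like this:

```python
import threading
from queue import Queue


class GPUPool:
    """Thread-safe pool of torch device names; a worker blocks until one is free."""

    def __init__(self, device_names):
        self._free = Queue()
        for name in device_names:
            self._free.put(name)

    def reserve(self):
        # Blocks until a device is available, then removes it from the pool.
        return self._free.get()

    def release(self, name):
        # Returns the device to the pool, waking one blocked reserve() call.
        self._free.put(name)


pool = GPUPool(["cuda:0", "cuda:1"])
results = []


def worker():
    device = pool.reserve()      # claim a GPU at entry
    try:
        results.append(device)   # the entire invocation would run on `device`
    finally:
        pool.release(device)     # always give it back, even on error


threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With two devices in the pool, at most two of the four jobs run concurrently; the rest block in `reserve()` until a device is released, which matches the reserve-at-entry/release-at-exit pattern visible in the logs.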

81% 81/100 [02:41<00:39,  2.10s/it]
82% 82/100 [02:43<00:37,  2.07s/it]

0% 0/100 [00:00<?, ?it/s]�[A
83% 83/100 [02:47<00:42,  2.49s/it]

1% 1/100 [00:04<07:52,  4.78s/it]�[A
84% 84/100 [02:50<00:45,  2.84s/it]

2% 2/100 [00:08<07:12,  4.42s/it]�[A
85% 85/100 [02:54<00:46,  3.13s/it]

3% 3/100 [00:13<06:56,  4.29s/it]�[A
86% 86/100 [02:58<00:47,  3.39s/it]

4% 4/100 [00:17<06:40,  4.18s/it]�[A
87% 87/100 [03:02<00:46,  3.61s/it]

5% 5/100 [00:21<06:30,  4.11s/it]�[A
88% 88/100 [03:06<00:44,  3.69s/it]

6% 6/100 [00:25<06:23,  4.08s/it]�[A
89% 89/100 [03:10<00:41,  3.78s/it]

7% 7/100 [00:29<06:18,  4.07s/it]�[A
90% 90/100 [03:14<00:38,  3.81s/it]

8% 8/100 [00:33<06:22,  4.15s/it]�[A
91% 91/100 [03:18<00:34,  3.82s/it]

9% 9/100 [00:37<06:12,  4.09s/it]�[A
92% 92/100 [03:22<00:31,  3.95s/it]

10% 10/100 [00:41<06:00,  4.00s/it]�[A
93% 93/100 [03:26<00:27,  3.96s/it]

11% 11/100 [00:45<05:53,  3.97s/it]�[A
94% 94/100 [03:30<00:24,  4.13s/it]

12% 12/100 [00:48<05:35,  3.81s/it]�[A
95% 95/100 [03:35<00:20,  4.15s/it]

13% 13/100 [00:52<05:32,  3.82s/it]�[A
96% 96/100 [03:39<00:16,  4.20s/it]

14% 14/100 [00:56<05:28,  3.81s/it]�[A

15% 15/100 [01:00<05:24,  3.81s/it]�[A
97% 97/100 [03:43<00:12,  4.18s/it]

16% 16/100 [01:03<05:15,  3.75s/it]�[A
98% 98/100 [03:48<00:08,  4.26s/it]

17% 17/100 [01:07<05:16,  3.82s/it]�[A
99% 99/100 [03:51<00:04,  4.17s/it]

18% 18/100 [01:11<05:21,  3.92s/it]�[A
100% 100/100 [03:55<00:00,  4.07s/it]
100% 100/100 [03:55<00:00,  2.36s/it]
[2024-06-23 18:46:32,475]::[uvicorn.access]::INFO --> 172.24.13.33:51862 - "GET /api/v1/images/i/8eb3b1cf-5d2b-43d0-bb5c-412df9cdba86.png HTTP/1.1" 200
[2024-06-23 18:46:32,487]::[InvokeAI]::INFO --> Graph stats: e35307e2-b63b-4508-aeb7-799ecd8de460
                        Node   Calls   Seconds  VRAM Used
           sdxl_model_loader       1    0.001s     1.926G
          sdxl_compel_prompt       2    0.001s     1.926G
                     collect       2    0.001s     1.926G
                       noise       1    0.004s     1.926G
             denoise_latents       1  238.227s     5.820G
               core_metadata       1    0.001s     5.021G
                         l2i       1    4.023s     8.332G
TOTAL GRAPH EXECUTION TIME: 242.258s
TOTAL GRAPH WALL TIME: 242.265s
RAM used by InvokeAI process: 28.92G (-0.943G)
RAM used to load models: 4.94G
VRAM in use: 5.332G
RAM cache statistics:
 Model cache hits: 3
 Model cache misses: 0
 Models cached: 36
 Models cleared from cache: 0
 Cache high water mark: 19.99/64.00G
[2024-06-23 18:46:32,487]::[ModelManagerService]::INFO --> Released torch device cuda:0
[2024-06-23 18:46:33,189]::[uvicorn.access]::INFO --> 172.24.13.33:51863 - "GET /api/v1/boards/?all=true HTTP/1.1" 200
[2024-06-23 18:46:33,192]::[uvicorn.access]::INFO --> 172.24.13.33:51866 - "GET /api/v1/images/i/8eb3b1cf-5d2b-43d0-bb5c-412df9cdba86.png/metadata HTTP/1.1" 200
[2024-06-23 18:46:33,207]::[uvicorn.access]::INFO --> 172.24.13.33:51865 - "GET /api/v1/images/i/8eb3b1cf-5d2b-43d0-bb5c-412df9cdba86.png/full HTTP/1.1" 200
[2024-06-23 18:46:33,209]::[uvicorn.access]::INFO --> 172.24.13.33:51864 - "GET /api/v1/images/i/8eb3b1cf-5d2b-43d0-bb5c-412df9cdba86.png/thumbnail HTTP/1.1" 200
[tqdm progress output: the remaining run advances 19%→100/100, speeding up from ~4.5 s/it to a steady ~2.0 s/it once the other GPU's job completes]
[2024-06-23 18:49:19,286]::[InvokeAI]::INFO --> Graph stats: 9953723f-cef2-4f25-8597-d2c2a79dbf16
                        Node   Calls   Seconds  VRAM Used
           sdxl_model_loader       1    0.001s     5.036G
          sdxl_compel_prompt       2    0.002s     5.084G
                     collect       2    0.001s     5.103G
                       noise       1    0.005s     5.042G
             denoise_latents       1  242.191s     8.332G
               core_metadata       1    0.001s     5.200G
                         l2i       1    3.857s     8.200G
TOTAL GRAPH EXECUTION TIME: 246.059s
TOTAL GRAPH WALL TIME: 246.071s
RAM used by InvokeAI process: 28.92G (+0.001G)
RAM used to load models: 4.94G
VRAM in use: 5.200G
RAM cache statistics:
 Model cache hits: 3
 Model cache misses: 0
 Models cached: 36
 Models cleared from cache: 0
 Cache high water mark: 19.99/64.00G
[2024-06-23 18:49:19,287]::[ModelManagerService]::INFO --> Released torch device cuda:1
[2024-06-23 18:49:19,302]::[uvicorn.access]::INFO --> 172.24.13.33:51897 - "GET /api/v1/images/i/8763c44b-18ea-4ac0-9940-445e31fe9b8b.png HTTP/1.1" 200
[2024-06-23 18:49:19,345]::[uvicorn.access]::INFO --> 172.24.13.33:51897 - "GET /api/v1/images/i/8763c44b-18ea-4ac0-9940-445e31fe9b8b.png/full HTTP/1.1" 200
[2024-06-23 18:49:19,371]::[uvicorn.access]::INFO --> 172.24.13.33:51898 - "GET /api/v1/boards/?all=true HTTP/1.1" 200
[2024-06-23 18:49:19,381]::[uvicorn.access]::INFO --> 172.24.13.33:51899 - "GET /api/v1/images/i/8763c44b-18ea-4ac0-9940-445e31fe9b8b.png/thumbnail HTTP/1.1" 200
[2024-06-23 18:49:19,386]::[uvicorn.access]::INFO --> 172.24.13.33:51900 - "GET /api/v1/images/i/8763c44b-18ea-4ac0-9940-445e31fe9b8b.png/full HTTP/1.1" 200
[2024-06-23 18:49:19,670]::[uvicorn.access]::INFO --> 172.24.13.33:51900 - "GET /api/v1/images/i/8763c44b-18ea-4ac0-9940-445e31fe9b8b.png/metadata HTTP/1.1" 200

@lstein
Collaborator Author

lstein commented Jun 24, 2024

I tested your branch again. Model switching still errors out, but only sometimes, with "Cannot copy out of meta tensor; no data!"; Invoke can recover without having to restart the server.

Also new errors appeared:

  • RuntimeError: Input type (float) and bias type (c10::Half) should be the same
  • Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1! (when checking argument for argument index in method wrapper_CUDA__index_select)

Full Logs

Thanks very much. I've got access to a multi-GPU system now and will be able to test more thoroughly. I'll let you know when there's a new version to look at.
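The "Expected all tensors to be on the same device" error reported above is the classic symptom of one tensor living on cuda:0 while another lives on cuda:1. A minimal sketch of the usual fix (plain PyTorch, not InvokeAI code; `safe_index_select` is a hypothetical helper name) is to move the incoming tensor onto the other tensor's device before the op:

```python
import torch


def safe_index_select(weights: torch.Tensor, indices: torch.Tensor) -> torch.Tensor:
    # Moving the index tensor onto the weight tensor's device avoids the
    # "Expected all tensors to be on the same device" RuntimeError that occurs
    # when the two were created on different GPUs (e.g. cuda:0 vs cuda:1).
    return weights.index_select(0, indices.to(weights.device))


# CPU-only demonstration of the same pattern (no GPU required):
w = torch.arange(12.0).reshape(4, 3)
idx = torch.tensor([0, 2])
rows = safe_index_select(w, idx)  # selects rows 0 and 2
```

In a multi-GPU session processor, the equivalent discipline is that every tensor entering an invocation must first be moved to the device the thread reserved.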

- temporarily disable vram cache
@lstein lstein force-pushed the lstein/feat/multi-gpu branch from c5eec1d to 2219e36 on June 24, 2024 15:41
@lstein
Collaborator Author

lstein commented Jul 18, 2024

@raldone01 I just committed a series of changes that should make the multi-GPU support more stable. If you have a chance to check it out, let me know how it works in your hands.

@raldone01

Awesome! I just built the image. I will try to get some proper testing in next week.

@raldone01

raldone01 commented Jul 19, 2024

@lstein Do you know what the default unload timer is?

After queuing up a few images, the GPUs are not fully released, which causes them to consume 50 W instead of 10 W per GPU.

I will try to just add one GPU to the container to see if it still occurs.
EDIT: A single GPU also does not get released properly.

No Python errors so far. 👍
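The unload-timer question above concerns when an idle GPU gets its models evicted. One common shape for such a mechanism, sketched here as a hypothetical stand-in (the PR's actual timeout value and class names, if any, may differ), is a resettable idle timer that fires an unload callback after a quiet period:

```python
import threading
import time


class IdleUnloader:
    """Calls `unload` after `timeout` seconds with no activity; touch() resets the clock."""

    def __init__(self, timeout: float, unload):
        self._timeout = timeout
        self._unload = unload
        self._lock = threading.Lock()
        self._timer = None

    def touch(self):
        # Called on every render; cancels any pending unload and re-arms the timer.
        with self._lock:
            if self._timer is not None:
                self._timer.cancel()
            self._timer = threading.Timer(self._timeout, self._unload)
            self._timer.daemon = True
            self._timer.start()


unloaded = []
u = IdleUnloader(0.1, lambda: unloaded.append(True))
u.touch()        # activity arms the timer
time.sleep(0.05)
u.touch()        # more activity resets the clock; nothing unloaded yet
time.sleep(0.3)  # idle past the timeout, so the callback fires
```

In InvokeAI the unload callback would presumably move cached models off the GPU and call `torch.cuda.empty_cache()`, which is what lets an idle card drop back to its low-power state.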

@lstein
Collaborator Author

lstein commented Sep 2, 2024

This PR is no longer being maintained. For InvokeAI Multi-GPU support, please see https://github.com/lstein/InvokeAI-MGPU

@lstein lstein marked this pull request as draft September 2, 2024 16:24
@SurvivaLlama

Super interested in this. Dual 4060 Tis ready to test.

Labels: api, backend, docs, invocations, python, python-tests, services

6 participants