Add Option to Evaluate Dynamic Prompts #1694
Conversation
In my rush to create this feature, I missed the existence of the actual dynamicprompts project (MIT licensed) that drives the parsing in the various other plugins, and instead recreated my own version from scratch. Here is the handling of structured wildcard files (json and yaml). I'm sure my own implementation is not as robust as this purpose-built library, so I will need to investigate when I get more time.
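For reference, here is roughly how the dynamicprompts library is used, based on my reading of its README; the exact API may differ between versions, so treat the names here as an assumption:

```python
# Sketch of driving the dynamicprompts library (MIT) directly.
# API names follow the project's README and may be out of date.
from pathlib import Path

from dynamicprompts.generators import RandomPromptGenerator
from dynamicprompts.wildcards import WildcardManager

wm = WildcardManager(Path("wildcards"))  # directory of wildcard/json/yaml files
gen = RandomPromptGenerator(wildcard_manager=wm)

# Produces 3 random variants, resolving {a|b|c} variants and __name__ wildcards.
print(gen.generate("a {red|green|blue} __animal__ in the snow", 3))
```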
It's generally not possible to have dependencies for the plugin - it runs inside Krita's embedded Python. I also feel like it's perhaps a bit overkill (or a lot) - I can see the usefulness of some of it. Do you use all of it? Or can there be a "90% of the value with 10% of the code" kind of deal?
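As a sense of what a "10% of the code" version might look like, here is a hypothetical, dependency-free evaluator that handles only the {a|b|c} variant syntax (a minimal sketch, not the code in this PR):

```python
import random
import re

# Matches the innermost {a|b|c} group, so nested variants resolve inside-out.
_VARIANT = re.compile(r"\{([^{}]*)\}")

def evaluate(prompt: str, rng: random.Random | None = None) -> str:
    """Replace every {a|b|c} group with one randomly chosen option."""
    rng = rng or random.Random()
    while (m := _VARIANT.search(prompt)) is not None:
        choice = rng.choice(m.group(1).split("|"))
        prompt = prompt[: m.start()] + choice + prompt[m.end():]
    return prompt

print(evaluate("a {red|green|blue} bird on a {branch|{mossy|bare} rock}"))
```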
Regarding where substitution happens, these are the stages:
Note that dynamic prompt evaluation could be applied in step 2 (in prepare()) - this would also allow adding the result to job metadata.
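A hypothetical sketch of that idea: evaluate during prepare() and keep both the template and the resolved text on the job, so the history can show either. JobParams and prepare are illustrative names here, not the plugin's real API:

```python
from dataclasses import dataclass

@dataclass
class JobParams:
    prompt: str    # evaluated text, what actually reaches the sampler
    template: str  # original text with {a|b} syntax, kept for the history

def prepare(positive: str) -> JobParams:
    evaluated = evaluate(positive)  # evaluate() from the sketch above
    return JobParams(prompt=evaluated, template=positive)
```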
I think it's just handling of […]
In some cases the positive prompt affects only parts of the image, while the style prompt affects the whole image.
Regions are just prompts that affect a certain part of the image. No, I don't think state needs to be shared between region prompts.
Personally I don't see the point; negative prompts have limited use. That doesn't stop some people from going ham on them, though...
Okay, moving it to prepare() works as you said, but the job info still shows the full prompt with template syntax instead of the final evaluated text. What would I need to modify to capture it properly in the job history? EDIT: Found the issue: model.py stores the positive prompt before prepare() and uses that stored value for the job info. Should I do this in model.py instead? EDIT2:
Okay, I think I finally got it working well in process_regions. I added another string member to the Region and RootRegion classes so the evaluated text doesn't modify the original positive property directly and force a UI update. Then in process_regions I evaluate the root and all regions and use the evaluated positive property instead, making sure to clear the evaluation before returning from the function, so that the next generation starts from scratch.
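A rough sketch of that approach; Region, process_regions, and the field names only mirror the plugin's structures, and the actual classes differ:

```python
from dataclasses import dataclass

@dataclass
class Region:
    positive: str = ""   # text bound to the UI, never modified here
    evaluated: str = ""  # transient evaluated text, cleared after use

    @property
    def effective_positive(self) -> str:
        return self.evaluated or self.positive

def process_regions(root: Region, regions: list[Region]) -> list[str]:
    """Evaluate root and all regions, consume the result, then reset."""
    all_regions = [root, *regions]
    try:
        for r in all_regions:
            r.evaluated = evaluate(r.positive)  # evaluate() from the sketch above
        return [r.effective_positive for r in all_regions]
    finally:
        for r in all_regions:
            r.evaluated = ""  # clear so the next generation starts from scratch
```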
Feature: Dynamic Prompts
Additions
Limitations