
SubstanceAI-GUI

A Gradio-based UI for accessing the Adobe Substance 3D API, focused on the "Generate 3D Object Composite" endpoint.

Screenshot of the UI

Installation

Important

This section assumes you have Git, Python, and Anaconda properly installed on your machine.

Follow these steps to install SubstanceAI GUI:

  1. git clone https://github.com/brayevalerien/SubstanceAI-GUI
  2. cd SubstanceAI-GUI
  3. conda create -n substanceai-gui -y python=3.12 && conda activate substanceai-gui
  4. pip install -r requirements.txt

Usage

Once you've installed the project, run python webui.py to start the UI. Follow these steps to prepare and send your request to the Substance API:

  1. Add your Bearer token if you aren't using Client credentials. If you are using Client credentials (client_id and client_secret), set the CLIENT_ID and CLIENT_SECRET environment variables or create a .env file, then restart the application.
  2. Load a 3D scene. IMPORTANT: your scene must have been exported as either a .usdz (best) or a .glb file.
  3. Write a prompt.
  4. Write the exact names of the hero object (your product) and of the camera you want to render with. Use the names as they appear in the scene; otherwise the API will not be able to find your objects.
  5. Choose an image generation model to render the background and lighting. Note that Image 4 Ultra gives higher-quality results but is still experimental, so it can currently only generate one image at a time.
  6. Choose the image count and the seed.
  7. Set your resolution (the resolution set in your scene is ignored).
  8. Optionally, choose a content class (the image style, either "photo" or "art") and/or a style image (adjust its strength with the slider).
  9. Send the request by hitting the "Generate" button.
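If you go the Client credentials route from step 1, the .env file could look like the sketch below. The variable names CLIENT_ID and CLIENT_SECRET are the ones listed above; the values are placeholders to replace with your own credentials.

```shell
# .env — placeholder values, replace with your own Adobe credentials.
# Remember to restart the application after creating or editing this file.
CLIENT_ID=your_client_id_here
CLIENT_SECRET=your_client_secret_here
```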

Generation takes anywhere from a few seconds to a couple of minutes, and the generated image will appear in the main image frame in the middle of the UI. Uploading a large file for the first time lengthens the process; if you reuse the same assets, they will not have to be re-uploaded and generation will be faster.

For debugging purposes, you can inspect what was sent to the API and what the response was by opening the "Data exchange (dev view)" panel.
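For orientation, the UI gathers the settings from the steps above into a single request. The sketch below is purely illustrative: the field names are assumptions, not the actual API schema — open the "Data exchange (dev view)" panel to see the real payload.

```python
# Hypothetical sketch of the parameters the UI collects before calling the
# Substance API. Field names are illustrative assumptions, NOT the real
# request schema — check the "Data exchange (dev view)" panel for the
# actual payload sent by the application.

def build_request(prompt, hero_object, camera, image_count=1, seed=0,
                  width=1024, height=1024, content_class=None,
                  style_image_strength=None):
    payload = {
        "prompt": prompt,
        "hero_object": hero_object,  # must match the object name in the scene
        "camera": camera,            # must match the camera name in the scene
        "image_count": image_count,
        "seed": seed,
        "resolution": {"width": width, "height": height},
    }
    # Optional style controls (step 8).
    if content_class is not None:
        payload["content_class"] = content_class  # "photo" or "art"
    if style_image_strength is not None:
        payload["style_image_strength"] = style_image_strength
    return payload

params = build_request("a marble pedestal at sunset", "Bottle", "Camera_main")
```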

About

A Gradio-based UI originally built for the private beta of the Adobe Substance API, focusing on the "Compose generative 2D content with 3D scenes" endpoint. It should now work with the public API.
