Merge pull request #4 from akatz-ai/akatz-dev

Update with workflows, assets, and readme.

akatz-ai authored Nov 14, 2024
2 parents 272b13e + 15dfbb5 commit 9f4d026
Showing 11 changed files with 602 additions and 23 deletions.
30 changes: 30 additions & 0 deletions .github/workflows/publish.yml
@@ -0,0 +1,30 @@
name: Publish to Comfy registry
on:
workflow_dispatch:
push:
branches:
- main
- master
paths:
- "pyproject.toml"
pull_request:
branches:
- main
types:
- closed # Only trigger when the pull request is closed, which includes when it is merged


jobs:
publish-node:
name: Publish Custom Node to registry
runs-on: ubuntu-latest
# Skip the workflow if this is a forked repository.
if: github.event.repository.fork == false && github.event.pull_request.merged == true
steps:
- name: Check out code
uses: actions/checkout@v4
- name: Publish Custom Node
uses: Comfy-Org/publish-node-action@main
with:
## Add your own personal access token to your GitHub repository secrets and reference it here.
personal_access_token: ${{ secrets.REGISTRY_ACCESS_TOKEN }}
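
Because the push trigger above only fires when pyproject.toml changes, publishing a new version typically amounts to bumping the version field there on main. Below is a minimal sketch of such a bump script; the script itself and the assumption of a standard `version = "X.Y.Z"` line are illustrative and not part of this PR.

```python
# Minimal sketch: bump the patch version in pyproject.toml so that a push to
# main touches the file and triggers the publish workflow above.
# Assumes a standard `version = "X.Y.Z"` line; not part of this repository.
import re
from pathlib import Path

def bump_patch(path: str = "pyproject.toml") -> str:
    text = Path(path).read_text()
    match = re.search(r'^version\s*=\s*"(\d+)\.(\d+)\.(\d+)"', text, re.MULTILINE)
    if match is None:
        raise ValueError('no version = "X.Y.Z" line found')
    major, minor, patch = (int(g) for g in match.groups())
    bumped = f'version = "{major}.{minor}.{patch + 1}"'
    Path(path).write_text(text[: match.start()] + bumped + text[match.end():])
    return f"{major}.{minor}.{patch + 1}"

if __name__ == "__main__":
    print(bump_patch())
```
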
3 changes: 1 addition & 2 deletions .gitignore
@@ -2,5 +2,4 @@ venv/
**/*.pyc
checkpoint/
outputs/
-.vscode/
-assets/
+.vscode/
14 changes: 14 additions & 0 deletions README.md
@@ -2,6 +2,20 @@

**Animate portraits with an input video and a reference image using X-Portrait in ComfyUI.**

[Original X-Portrait Repo](https://github.com/bytedance/X-Portrait)

<img src="assets/x-portrait-nodes.png" alt="x-portrait nodes" width="400"/>

- source_image: the reference image for generation; should be square and at most 512x512.
- driving_video: the driving video containing a face; should match the resolution of source_image and be at most 512x512.
- seed: the random seed used for generation.
- ddim_steps: the number of DDIM sampling steps used to generate the output; diminishing returns after 15.
- best_frame: the frame of the driving_video that most closely matches the source_image (these inputs are sketched as a node definition below).
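
For orientation, here is a minimal sketch of how inputs like these are typically declared in a ComfyUI custom node. The class name, defaults, and ranges are illustrative assumptions, not the repository's actual node code.

```python
# Hypothetical sketch of how the inputs documented above typically map onto a
# ComfyUI custom-node definition. Class name, defaults, and ranges are
# illustrative; this is not the actual X-Portrait node implementation.
class XPortraitSketch:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "source_image": ("IMAGE",),   # square reference image, max 512x512
                "driving_video": ("IMAGE",),  # frames at the same resolution as source_image
                "seed": ("INT", {"default": 0, "min": 0}),
                "ddim_steps": ("INT", {"default": 15, "min": 1}),
                "best_frame": ("INT", {"default": 0, "min": 0}),
            }
        }

    RETURN_TYPES = ("IMAGE",)
    FUNCTION = "generate"
    CATEGORY = "X-Portrait"

    def generate(self, source_image, driving_video, seed, ddim_steps, best_frame):
        # A real node would run the X-Portrait pipeline here and return frames.
        raise NotImplementedError
```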

## ⭐ Example Workflow (in examples/)

![Example workflow 1](assets/x-portrait-workflow.png)

## 🔧 Installation and Usage

1. ComfyUI Manager:
Binary file added assets/driving_video.mp4
Binary file added assets/source_image.png
Binary file added assets/x-portrait-nodes.png
Binary file added assets/x-portrait-workflow.png
7 changes: 0 additions & 7 deletions core/test_xportrait.py
@@ -375,15 +375,8 @@ def visualize_mm(args, name, batch_data, infer_model, nSample, local_image_dir,
output_img = torch.permute(output_img, [0, 2, 3, 1])

output_img = output_img.data.cpu().numpy()
-print(f"Output image shape: {output_img.shape}")
-print(f"Output image data peek: {output_img[:5, :5, :]}")
-# output_img = img_as_ubyte(output_img)
output_img = torch.from_numpy(output_img)
-print(f"Output image shape: {output_img.shape}")
output_img = output_img[:,:,:512]
-print(f"Output image shape: {output_img.shape}")
-# print(f"Output image data peek: {output_img[:5, :5, :]}")
-# imageio.mimsave(output_path, output_img[:,:,:512], fps=batch_data['fps'], quality=10, pixelformat='yuv420p', codec='libx264')

# After processing, clear any large temporary variables
del cond_img_cat, pose_cond_list, local_pose_cond_list
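
The surviving `del` of the conditioning tensors reflects a common PyTorch pattern for releasing large intermediates after an inference pass. A generic sketch of that pattern follows; the model/encode/generate names and the gc/empty_cache calls are illustrative assumptions, not code from this diff.

```python
# Generic PyTorch cleanup pattern after a large inference pass. Names
# (model.encode / model.generate) and the gc / empty_cache calls are
# illustrative assumptions, not code from this repository.
import gc
import torch

def run_step(model, batch):
    with torch.no_grad():
        cond = model.encode(batch)      # large conditioning tensors
        output = model.generate(cond)   # keep only the result we need
    del cond                            # drop references to the intermediates
    gc.collect()                        # reclaim host-side memory
    if torch.cuda.is_available():
        torch.cuda.empty_cache()        # return cached blocks to the CUDA allocator
    return output
```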
