Add unit 5 #7
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Great job with this! I mostly left some nits (some of these might not be applicable, as you might have covered them earlier).
Cheers!
units/en/conclusion/conclusion.mdx
To receive updates as the course releases, sign up for the course mailing list [here](https://mailchi.mp/911880bcff7d/ml-for-3d-course).

This has been a high-level overview of what's going on at the intersection of Machine Learning and 3D. For further exploration:

- Keep up with [3D Arena](https://huggingface.co/spaces/dylanebert/3d-arena) for the latest projects.
Could be cool to also add a Hub URL with a tag tracking all the open models, for example: https://huggingface.co/models?pipeline_tag=image-to-3d&sort=trending
![3D Arena](https://huggingface.co/datasets/dylanebert/ml-for-3d-course/resolve/main/3d-arena.png)

As the field evolves rapidly, it's easy to get overwhelmed. Stay tuned for more accessible tools and resources as these projects mature.
Is there a Discord channel/track where people can discuss the course? We could link that here; we did this with the audio course and people still use it to discuss things.
- Reference your model URL in the space README.

When completed, fill out this [form](https://forms.gle/rQedXFktHPYeikrt6) to receive your certificate.
It would help to give an explicit reference example or two here so that people can try it out. Note: a lot of people will do exactly what you show here, i.e. the minimum requirements, so it's good to leave it just open-ended enough that there's some creativity, but not so open-ended that they take the easiest way out.
So a good example would be a text-to-image-to-3D pipeline, or other setups where people sequentially stack a couple of different models/APIs together.
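For instance, something roughly along these lines (untested sketch, just to illustrate the idea; the Space id and `api_name` are assumptions, the real ones would come from the Space's "Use via API" panel):

```python
from huggingface_hub import InferenceClient
from gradio_client import Client, handle_file

# Step 1: text -> image with a hosted text-to-image model
# (uses the default recommended model; may require an HF token)
image = InferenceClient().text_to_image("a small potted cactus, studio lighting")
image.save("cactus.png")

# Step 2: image -> 3D by calling an existing image-to-3D Space
# ("dylanebert/LGM-tiny" and "/run" are placeholders here)
client = Client("dylanebert/LGM-tiny")
ply_path = client.predict(handle_file("cactus.png"), api_name="/run")
print(ply_path)  # path to the generated .ply file
```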
- Clone an existing open-source model and change the demo experience.
- Directly clone an open-source model and demo.

Check out [3D Arena](https://huggingface.co/spaces/dylanebert/3d-arena) for the latest image-to-3D demos to use as starting points.
Same as above, some examples or just a list of ideas would be quite helpful.
units/en/unit5/run-in-notebook.mdx
import gradio as gr
import numpy as np
import spaces
Do we need this dependency? We can simplify this without the `spaces` package stuff (so that people focus on the Space itself instead of black-magic packages).
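For example, something like this minimal sketch (not the actual course code, and the `run` body is a placeholder) would drop the dependency while keeping the same structure:

```python
import gradio as gr
import numpy as np

def run(image):
    # Placeholder body: convert to a numpy array, normalize to [0, 1],
    # run the model, and return the path to the generated .ply file.
    image = np.array(image, dtype=np.float32) / 255.0
    ...
    return "output.ply"

demo = gr.Interface(fn=run, inputs=gr.Image(), outputs=gr.Model3D())
demo.queue().launch()
```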
Define the run function that takes an image and returns a ply file.

1. Convert the image to a numpy array and normalize it to [0, 1].
Noob question: Why do we normalise? (might be good to add an explanation)
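Assuming the model follows the usual convention, the reason is that most vision models expect float inputs in [0, 1] rather than uint8 pixel values in [0, 255], so the step boils down to something like:

```python
import numpy as np
from PIL import Image

# uint8 pixels in [0, 255] -> float32 values in [0, 1]
pil_image = Image.open("input.png")
array = np.array(pil_image, dtype=np.float32) / 255.0
print(array.min(), array.max())  # both within [0.0, 1.0]
```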
allow_duplication=True,
)
demo.queue().launch()
```
Could we add a screenshot of what the demo would look like?
The instructions below are tested on an RTX 4090 on WSL2 Ubuntu 22.04. Instructions will differ and may not work, depending on your setup.

1. Install `git`, `python 3.10`, and `cuda` if not already installed.
Curious q: does this work locally on a Mac?
Note: people can also run the Space via Docker locally: https://huggingface.co/docs/hub/en/spaces-run-with-docker
Would be good to add that as an example too (it's okay to skip it if you think it adds too much complexity).
demo.queue().launch()
```

This will work on CPU, but relies on the original LGM-tiny instead of your custom model. However, if your focus is on UI/UX or downstream tasks, this may be acceptable.
This is a bit counterintuitive: we use `gradio_client` to run the same Space as another Space. Maybe we can add a section above just showing the use of `gradio_client`? That way it'll make it clear that they can use other models/Spaces as well.
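Something as small as this could work for that section (untested sketch; the Space id and `api_name` are assumptions to be replaced with the real values from the Space's "Use via API" panel):

```python
from gradio_client import Client, handle_file

# Call an existing image-to-3D Space as an API, without redeploying the model.
client = Client("dylanebert/LGM-tiny")  # any public Space id works here
ply_path = client.predict(handle_file("input.png"), api_name="/run")
print(ply_path)
```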
1. **Run in this notebook**: Validate the code quickly.
2. **Run locally**: Clone your space and run it locally.
3. **Community grant**: Building something cool? Apply for a community GPU grant in your space settings.
I'd add a mention of your Hub username so that you can explicitly see/filter the community grants if asked.