We release the rendered shadow dataset used in the following paper:
Shadow Generation with Decomposed Mask Prediction and Attentive Shadow Filling [arXiv]
Xinhao Tao, Junyan Cao, Yan Hong, Li Niu
Accepted by AAAI 2024
RdSOBA is a large-scale Rendered Shadow Generation dataset. Like the DESOBA dataset, it contains object-shadow pairs, covering 600 2D scenes and 788 3D foreground objects, which makes it useful for supervised shadow generation methods.
- 788 3D foreground objects
- 4 super-categories for foreground objects: "people", "animals", "vehicles", "plants"
- nearly 80,000 object-shadow pairs
- accurate object and shadow masks
- 30 3D scenes
- 20 viewpoints (2D scenes) for each 3D scene
We provide the full dataset at [Baidu_Cloud] (access code: ck81) and [OneDrive].
We use Unity-3D to create 3D scenes and render images. We gather 788 diverse 3D objects from CG websites and 30 representative scenes from Unity Asset Store and CG websites. These collections provide a strong foundation for generating varied rendered images.
For each scene, we select 20 open areas for placing 3D objects and choose 10 camera settings per area. After positioning the camera, we place a group of 1-5 3D objects in its view, repeating this for 10 different object groups per camera setting. Lastly, we render a set of 2D images under 5 different lighting conditions.
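Multiplying out the counts above gives the total number of rendered tuples before quality filtering (a quick sanity check of the figures in this README):

```python
# Counts taken from the dataset description above.
scenes = 30      # 3D scenes
areas = 20       # open areas selected per scene
cameras = 10     # camera settings per area
groups = 10      # object groups (1-5 objects each) per camera setting
lightings = 5    # lighting conditions per rendered set

tuples_before_filtering = scenes * areas * cameras * groups * lightings
print(tuples_before_filtering)  # 300000
```

After filtering out low-quality tuples, about 280,000 of these 300,000 remain, i.e. roughly 7% are discarded.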
After determining an open area, camera setting, group of 3D objects, and lighting condition in a 3D scene, we generate a set of images. First, we render an empty image of the background scene without the objects. Then, we place the group of 3D objects and render an image containing the objects but no shadows, along with their object masks. Finally, we render an image with both the objects and their cast shadows, along with the corresponding shadow masks.
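Such a tuple directly supports supervised training: the shadow-free input image can be assembled from the background render, the object render, and the object mask by simple alpha compositing, with the shadowed render as the target. A minimal sketch with NumPy (all array names here are illustrative stand-ins, not the dataset's actual file names):

```python
import numpy as np

# Tiny stand-in arrays: in practice these would be the 1080p renders of one tuple.
h, w = 4, 4
background = np.full((h, w, 3), 0.8)   # empty-scene render (no objects, no shadows)
objects    = np.full((h, w, 3), 0.2)   # render with objects but no shadows
mask       = np.zeros((h, w, 1))       # binary object mask
mask[1:3, 1:3] = 1.0

# Shadow-free composite: object pixels inside the mask, background elsewhere.
composite = background * (1.0 - mask) + objects * mask
print(composite[2, 2, 0], composite[0, 0, 0])  # 0.2 0.8
```

The shadowed render then serves as the ground-truth output the generator is trained to produce from this composite.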
After filtering out low-quality tuples, about 280,000 1080p tuples remain. For details such as how the images are named, please check the README.txt file at the links above.