[data] add dataloader for lance datasource #49459
base: master
Conversation
Force-pushed 3b19750 to cb50e72
Signed-off-by: jukejian <jukejian@bytedance.com>
Force-pushed f7ed8d1 to 05b0e09
else:
    return [item[0] for item in batch]

dataloader = DataLoader(
Out of curiosity, can you show an example of how this works with ray.train.torch.TorchTrainer? Currently it takes a Ray Dataset as input, not a Datasource.
It is indeed using Dataset.
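For context, here is a minimal sketch of how a Ray Dataset feeds ray.train.torch.TorchTrainer. This is not code from this PR; the range dataset is a stand-in for whatever the Lance datasource would produce, and the model code is omitted.

import ray
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer


def train_loop_per_worker():
    # Each worker consumes its shard of the Ray Dataset (not the Datasource).
    shard = ray.train.get_dataset_shard("train")
    for batch in shard.iter_torch_batches(batch_size=32):
        ...  # forward/backward pass goes here


# Stand-in Dataset; the Lance datasource would produce a Ray Dataset the
# same way, e.g. via ray.data.read_datasource(...).
train_ds = ray.data.range(1000)

trainer = TorchTrainer(
    train_loop_per_worker,
    datasets={"train": train_ds},
    scaling_config=ScalingConfig(num_workers=2),
)
trainer.fit()

TorchTrainer only accepts Ray Datasets through its datasets argument, which is why the Datasource itself never appears in the training loop.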
def to_torch_dataset(
    self,
) -> Dataset:
    return self.LanceDataset(ds=self)
In Ray Data, a Datasource is used to create a Ray Dataset via ray.data.read_datasource(...).
Users are not supposed to use the Datasource class directly for training ingestion.
If you want to do this, I think you can directly create a torch Dataset based on LanceDB without using Ray Data.
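A hedged sketch of the flow described above, assuming a LanceDatasource class with a uri argument along the lines of this PR (not an existing Ray API):

import ray

# The Datasource only exists to create a Ray Dataset.
ds = ray.data.read_datasource(LanceDatasource(uri="path/to/table.lance"))

# Training ingestion then goes through the Dataset API rather than the
# Datasource class itself, for example by iterating torch batches.
for batch in ds.iter_torch_batches(batch_size=32):
    ...  # feed the batch to the model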
From a theoretical perspective, directly creating a torch Dataset is also possible; the class already inherits from the torch Dataset. The reason for placing it here is mainly that the datasource can then be converted directly into a dataset, which facilitates the Ray Train + Ray Data mode.
WDYT?
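The proposed usage, as a rough sketch (the class name and constructor argument are assumptions; only to_torch_dataset appears in the diff above):

from torch.utils.data import DataLoader

# Build the datasource, then expose it as a torch Dataset for a plain DataLoader.
datasource = LanceDatasource(uri="path/to/table.lance")  # constructor args assumed
torch_dataset = datasource.to_torch_dataset()

dataloader = DataLoader(torch_dataset, batch_size=32)
for batch in dataloader:
    ...  # training step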
Sorry, I still don't think this makes sense, because "Users are not supposed to directly use the Datasource class for training ingestion."
This pull request has been automatically marked as stale because it has not had recent activity. It will be closed in 14 days if no further activity occurs. Thank you for your contributions.
Why are these changes needed?
The Lance storage format will be used in multimodal preprocessing and training. When used for training, it needs to be loaded by a DataLoader. Two characteristics are relatively important here:
Therefore, this PR shows how Lance can form a Dataset for the torch DataLoader, enabling the unification of Ray Train and Ray Data.
Related issue number
Checks
I've signed off every commit (by using the -s flag, i.e., git commit -s) in this PR.
I've run scripts/format.sh to lint the changes in this PR.
I've added any new APIs to the API Reference. For example, if I added a method in Tune, I've added it in doc/source/tune/api/ under the corresponding .rst file.