This repository has been archived by the owner on Nov 1, 2024. It is now read-only.

Does torcharrow support industry-level large scale data? #476

Open
circlecrystal opened this issue Aug 19, 2022 · 2 comments


circlecrystal commented Aug 19, 2022

I'm asking for myself and also for the algorithm team members at my company. We currently have PB-scale data, stored as Parquet files across different remote HDFS paths (one per day), that needs to be used for training.

I would really like an answer to this question: how well does TorchArrow perform on data at this scale in industry?

  • Does it download, store, and then release a few remote Parquet files at a time, or does it stream data over the network without any local caching?
  • How well does it handle enormously large datasets, from TB scale to PB scale, maybe even EB scale? Is it a performant solution compared to the alternatives?
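For reference, the first bullet describes a windowed "download, store, then release" pattern. Below is a minimal pure-Python sketch of that pattern; the `fetch` and `process` callbacks are hypothetical stand-ins for an HDFS client and a Parquet/TorchArrow reader, and none of this reflects TorchArrow's actual implementation:

```python
import os
import shutil
import tempfile

def stream_with_local_cache(remote_paths, fetch, process, window=2):
    """Keep at most `window` files cached locally: download ahead,
    process each file, then delete its local copy immediately.

    `fetch(remote_path, local_path)` and `process(local_path)` are
    hypothetical callbacks (e.g., an HDFS download and a Parquet read).
    """
    cache_dir = tempfile.mkdtemp(prefix="parquet_cache_")
    results = []
    try:
        pending = []  # files downloaded but not yet processed
        for remote in remote_paths:
            local = os.path.join(cache_dir, os.path.basename(remote))
            fetch(remote, local)
            pending.append(local)
            # Once more than `window` files are on disk, drain the oldest.
            while len(pending) > window:
                done = pending.pop(0)
                results.append(process(done))
                os.remove(done)  # release the local copy right away
        for done in pending:
            results.append(process(done))
            os.remove(done)
    finally:
        shutil.rmtree(cache_dir, ignore_errors=True)
    return results
```

The alternative in the question, streaming without local caching, would skip the temporary directory entirely and read record batches directly off the network connection.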
@circlecrystal circlecrystal changed the title Does torcharrow support industry-level massive data? Does torcharrow support industry-level large scale data? Aug 19, 2022
wenleix (Contributor) commented Aug 20, 2022

Thanks for the interest! We have an internal scalable distributed system called Data PreProcessing Service (DPP) [1] that executes traced TorchArrow programs at Meta scale.

It's an open question whether and how we can open-source the distributed mode, as DPP is deeply integrated into Meta's infrastructure. It may be possible to open-source just the tracer (think PyTorch FX Tracer) with separate integrations into the OSS big-data ecosystem.

In your use case, is there a preferred big-data stack you would like to integrate with to execute traced TorchArrow programs? (e.g., Spark, Kafka, Ray, or a custom distributed runtime?)

cc @dracifer, @msaroufim, @damianr99

[1] https://arxiv.org/pdf/2108.09373.pdf

circlecrystal (Author) commented Aug 24, 2022

> In your use case, is there a preferred big-data stack you would like to integrate with to execute traced TorchArrow programs? (e.g., Spark, Kafka, Ray, or a custom distributed runtime?)

Thanks for taking the time to answer my question. Our current stack mostly prefers Spark or Ray for executing distributed programs. The difficulty is that a solution is still missing if we aim to train a large model across multiple training containers, with large-scale training data, in the PyTorch framework.
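As a stopgap while no official distributed integration exists, per-day Parquet directories like those described above can be sharded across training containers by rank, using the same round-robin scheme PyTorch's DistributedSampler applies to sample indices. A minimal sketch (the path layout and function name are illustrative, not part of any library):

```python
def shard_daily_paths(daily_paths, world_size, rank):
    """Assign per-day Parquet paths to one training container by
    round-robin: container `rank` out of `world_size` reads every
    `world_size`-th path. `daily_paths` would come from listing the
    remote HDFS directories (one per day)."""
    if not 0 <= rank < world_size:
        raise ValueError("rank must be in [0, world_size)")
    return daily_paths[rank::world_size]
```

Each container can then open only its own shard of files, so the full dataset never has to fit on any single machine; note that shards differ in size by at most one path, which may matter for step-count synchronization across workers.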
