data.table spark/databases interface #1828
Thanks for the encouragement. Fully agree.
I really hope DT syntax is available someday. Personally, I really prefer the DT syntax and would like to use it consistently in Spark rather than cringe with dplyr.
I'm curious what people want out of this exactly. Just to be able to use `[` on an RDD like you would on a data.table (namely, i/j/by)? Certainly the full functionality is a ways away, but I imagine it wouldn't be too earth-shaking to make an idiom for filtering, grouping, even joining, by sending syntax within `[]` to the corresponding operations in SparkR. In particular, this would just amount to (in essence) aliasing SparkR functions in a syntax friendlier for data.table regulars. Is this what people have in mind?
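The aliasing idea above can be sketched in a few lines. Below is a minimal, self-contained illustration: a hypothetical `dtspark` class whose `[` method forwards data.table-style i/j/by arguments to backend verbs. The "backend" here is plain base R so the sketch actually runs; for Spark the same method would emit SparkR/sparklyr calls instead. All names (`as.dtspark`, `dtspark`) are invented for this sketch and are not part of any package.

```r
# Hypothetical wrapper class; for Spark, x$data would hold a remote table handle.
as.dtspark = function(x) structure(list(data = x), class = "dtspark")

"[.dtspark" = function(x, i, j, by) {
  d = x$data
  if (!missing(i))                                   # i -> backend filter
    d = d[eval(substitute(i), d), , drop = FALSE]
  if (missing(j)) return(d)
  jx = substitute(j)
  if (missing(by))
    return(eval(jx, d))                              # j alone -> select/compute
  bx = substitute(by)
  groups = split(d, eval(bx, d))                     # by -> backend group-by
  res = lapply(groups, function(grp)
    data.frame(by = eval(bx, grp)[1], V1 = eval(jx, grp)))
  do.call(rbind, res)                                # j per group -> aggregate
}

ds = as.dtspark(data.frame(a = 1:4, b = rep(1:2, 2)))
ds[a > 1, sum(a), b]   # roughly data.table's dt[a > 1, sum(a), b]
```

The point is only that the `[` surface syntax can be captured with `substitute()` and re-dispatched; a real implementation would translate the captured expressions rather than evaluate them locally.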
No updates; there is a lot to develop around data.table itself, so external interfacing is not that high a priority now. Instead of just Spark integration, it makes more sense to integrate with dplyr.
@jangorecki The problem with that is that it will also be slow, full of bugs, and very unstable, as the dplyr interface changes on an almost daily basis, and sometimes they change the whole idiom at once, like they did with lazyeval > rlang > tidyeval and G-d knows what else, as I lost track long ago. Not to mention that hadley, who once stated (with a "tongue in cheek") that data.table uses "cryptic shortcuts", now masks a few of these shortcuts and suddenly doesn't consider them so cryptic anymore. In short, creating such an API would be a full-time job IMO. I think migrating a few main functionalities from data.table, and adding more when there is time, would be much safer/easier.
@DavidArenburg Agreed, thus I would suggest waiting at least until dplyr 1.0 before starting any serious dev of such an API.
Have you arrived at a conceivable roadmap for a Spark integration project (reverse dtplyr or any other form) given that dplyr 1.0 has been released? It would be great to hear your thoughts now that some time has passed.
@jangorecki if the
While working on #4585, I also worked on functions to process the isub to their end points, but did not implement them in the PR because, with all the variables needed to process the isub, it was not clean. However, to implement a backend, a function to process the isub would be useful so that NSE would be processed consistently. Otherwise, it would be very easy for
That makes sense, but for that we have to make multiple new helpers to process the internal logic of understanding input arguments, and then export them so such a tool can easily mimic this logic. Describing our current API with the use of helpers is not that trivial a task. See related #852
@jangorecki
Proposed `DT[ subset|order, select, groupby ]`. That makes it much easier to deliver rather than trying to fit the translation inside `data.table` itself. Then possible usage could look like:

```r
library(data.table)
dt = data.table(a=1:4, b=1:2)
library(dplyr.table)
dp = as.dplyr.table(dt)
all.equal(
  dt[, sum(a), b],
  dp[, sum(a), b] |> as.data.table()
)
```

The latter one
I might be missing something (sorry: long, old thread with several hidden replies). But since we're ultimately talking syntax masking/mimicking, wouldn't it be easier in the long run to create something like a

Having a dedicated

(This might be Jan's point, so again apologies if I'm just quibbling over the name.) +1 on DuckDB, although I do think their SQL API is much better than the alternatives.
Integration of various sources/targets is a lot of dev and maintenance; therefore doing a single integration to dplyr, and via it having other backends, feels much more likely to be achieved. If we want to target only Spark, or only DuckDB, then I agree it's better to translate directly rather than via dplyr. Off topic: DT() has been pulled back from the exported API for the moment.
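For the "translate directly" route mentioned above (e.g. targeting DuckDB), the core of the work is turning captured i/j/by expressions into SQL. A rough, hedged sketch, assuming the expressions arrive unevaluated; `deparse()` is a crude stand-in for a real expression-to-SQL translator (which would need to handle operator differences, quoting, and identifiers), and `dt_to_sql` is an invented name:

```r
# Illustrative translator: DT[i, j, by] -> a SELECT statement.
dt_to_sql = function(table, i = NULL, j = NULL, by = NULL) {
  sel = if (is.null(j)) "*" else deparse(j)
  sql = paste0("SELECT ",
               if (!is.null(by)) paste0(deparse(by), ", "),  # by -> GROUP BY cols
               sel, " FROM ", table)
  if (!is.null(i))  sql = paste0(sql, " WHERE ", deparse(i)) # i  -> WHERE
  if (!is.null(by)) sql = paste0(sql, " GROUP BY ", deparse(by))
  sql
}

dt_to_sql("dt", i = quote(a > 1), j = quote(sum(a)), by = quote(b))
# "SELECT b, sum(a) FROM dt WHERE a > 1 GROUP BY b"
```

The resulting string could then be handed to any DBI backend; the hard part, as noted in the thread, is making the expression translation match data.table's semantics exactly.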
I agree that it would be possible and preferable to implement this in a separate package, which hopefully would get the seal of approval #5723
Also, based on the new governance, this is out of scope: "Functionality that is out of current scope...Manipulating out-of-memory data, e.g. data stored on disk or remote SQL DB, (as opposed e.g. to sqldf / dbplyr)". And consensus seems to be that this should be implemented in another package, so I am closing this issue. (Feel free to re-open if I have misunderstood.)
Bump for the "
data.table is awesome, but most people don't have 100GB of memory with which to handle really large data sets in memory.
Big progress has been made in making the Apache Spark framework available through R in the last couple of years. Two such projects are Apache's SparkR and RStudio's sparklyr. Both of these provide a dplyr-style interface to Spark's data processing engine.
As a heavy data.table user, it would be amazing if there were a data.table interface for Spark. That would make it incredibly easy for data scientists to migrate their projects from smaller CSV-style data sets to the huge data sets that Spark can process.
A classic data pipeline for me is
I want to be able to migrate this to
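The original code blocks for the two pipelines did not survive extraction, so here is a typical pipeline of the shape described: read a CSV with `fread`, filter and aggregate with `[`, write with `fwrite`, followed by a Spark version of the same shape. The Spark part is purely hypothetical: `spark_connect` is sparklyr's, but `spark_fread` and a data.table-style `[` method on a Spark table do not exist today.

```r
library(data.table)

# Local pipeline of the kind described: read, filter/aggregate, write.
tmp_in  = tempfile(fileext = ".csv")
tmp_out = tempfile(fileext = ".csv")
fwrite(data.table(a = 1:4, b = rep(1:2, 2)), tmp_in)  # stand-in input file

dt  = fread(tmp_in)
res = dt[a > 1, .(total = sum(a)), by = b]
fwrite(res, tmp_out)

# Hypothetical Spark version keeping the same shape (illustrative only;
# spark_fread and the `[` method below are invented for this sketch):
# sc  <- sparklyr::spark_connect(master = "local")
# sdt <- spark_fread(sc, "hdfs://.../huge.csv")
# res <- sdt[a > 1, .(total = sum(a)), by = b]
```

The appeal of the request is exactly this symmetry: only the read step would change when moving from an in-memory CSV to a Spark-backed table.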