MCBoarder289 commented Dec 12, 2025

This is a single PR that addresses multiple issues at once.

I'm closing my other PRs (see list below) because, once I started resolving each issue individually, other rendering issues cropped up, so it's easier to test all of the fixes together.

Main Goal
Taken together, all of the previous Spark issues boil down to the fact that the Spark reports were inaccurate when compared with pandas. So my main goal here was to bring the Spark output closer to parity with the pandas output.

Example of pandas output on a toy dataset (screenshots: Pandas_example_stats, Pandas_example_common_values)

Example of broken/pre-fix Spark output on the same toy dataset (screenshots: Spark_wrong_stats, Spark_wrong_common_values)

Fixed Spark Output (screenshots: Spark_fixed_stats_new, Spark_fixed_common_values_new)

Issues and Root Causes
There are a couple of commits in here that address the specific root causes of these discrepancies. Here are the summarized issues with their solutions:

  • Issue 1: pandas by default counts "NaN" values as null in summary stats, but Spark SQL does not, so we explicitly address that in one of the commits.

    • Resolution: Ensured that the numeric_stats_spark() method explicitly filters out nulls and NaNs to match pandas' default behavior (see the filtering sketch after this list).
  • Issue 2: Missing values were not being calculated correctly because NaN in Spark is not null, so those values weren't counted as missing when they should have been.

    • Resolution: Added NaN filters to the n_missing computation in the describe_spark_counts() method (also covered by the filtering sketch below).
  • Issue 3: Histogram counts and Common Values counts built from the summary["value_counts_without_nan"] Series were not summing counts correctly.

    • Resolution: Summing the count column, and removing the limit(200), brings everything to parity with the pandas output (see the value-counts sketch after this list).

    • NOTE: Since we're pre-aggregating the data for the value_counts, I don't think the limit(200) is necessary even with Spark. We're pulling this down into a pandas Series anyway, so if the data were too big, the process would fail explicitly instead of producing a misleading report. If you're running this in Spark, you're likely using a machine with a good bit of memory anyway.
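
For reference, here is a minimal sketch of the null/NaN filtering idea behind Issues 1 and 2. The DataFrame and function names are illustrative, not the exact ydata-profiling internals:

```python
from pyspark.sql import functions as F

def numeric_stats_excluding_nan(df, col_name):
    """Match pandas' default behavior: drop rows that are null OR NaN
    before computing min/max/mean/stddev."""
    cleaned = df.filter(
        F.col(col_name).isNotNull() & ~F.isnan(F.col(col_name))
    )
    return cleaned.select(
        F.min(col_name).alias("min"),
        F.max(col_name).alias("max"),
        F.mean(col_name).alias("mean"),
        F.stddev(col_name).alias("std"),
    ).first()

def n_missing(df, col_name):
    """Count null OR NaN rows as missing, the way pandas' isnull() does."""
    return df.filter(
        F.col(col_name).isNull() | F.isnan(F.col(col_name))
    ).count()
```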
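
And a sketch of the value-counts fix for Issue 3: the Spark DataFrame at this point is already aggregated (one row per distinct value plus a count column), so the pandas Series has to be built by summing that count column rather than counting rows of the summarized DataFrame. Again, the names here are illustrative:

```python
from pyspark.sql import functions as F

def value_counts_without_nan(df, col_name):
    """Pre-aggregate counts in Spark, then pull the small result to the
    driver as a pandas Series indexed by value."""
    counts_df = (
        df.filter(F.col(col_name).isNotNull() & ~F.isnan(F.col(col_name)))
        .groupBy(F.col(col_name).alias("value"))
        .count()
    )
    counts_pdf = counts_df.toPandas()
    return counts_pdf.set_index("value")["count"].sort_values(ascending=False)

# Downstream, the total number of observations is the SUM of the counts,
# not the number of rows in the aggregated DataFrame:
# n = int(value_counts.sum())
```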

Concluding Thoughts
While there is still some very slight variation in the computed stats because of how Spark handles nulls/NaNs differently than pandas, I think this new output is acceptably close to the pandas version, and any remaining differences are ultimately negligible. That is especially true when compared with the initial outputs, where the differences were misleading without these fixes, or where reports would not even complete/render in some edge cases (all-null numeric columns, etc.).

@fabclmnt - Apologies for all of the tags, and I'm still open to all feedback on this approach! I'm happy to discuss further, and hope this is helpful to anyone using the Spark backend.

Misc. Notes (summarized from the individual commits):

  • In the pandas implementation, numeric stats like min/max/stddev by default ignore null values. This commit updates the Spark implementation to match that more closely. The isnan() check is needed because pandas' isnull() check counts NaN as null, but Spark does not.

  • The previous counts calculation was counting rows of an already-summarized DataFrame, so it wasn't capturing the correct count for each instance of a value. This is fixed by summing the count values instead of performing a row-count operation. I discovered this edge case with real data, and still need to fix the rendering of an empty histogram. This change addresses ydataai#1602.

  • Computations in the summarize process produce floats when run against decimal columns. To solve this, we simply cast those columns to DoubleType when performing the numeric operations. This change addresses ydataai#1722.
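
For illustration, a minimal sketch of that cast, assuming a plain PySpark DataFrame rather than the exact profiling internals:

```python
from pyspark.sql import functions as F
from pyspark.sql.types import DecimalType, DoubleType

def cast_decimals_to_double(df):
    """Cast any DecimalType columns to DoubleType so the numeric
    summarize computations operate on plain floats."""
    for field in df.schema.fields:
        if isinstance(field.dataType, DecimalType):
            df = df.withColumn(field.name, F.col(field.name).cast(DoubleType()))
    return df
```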

  • Assembling a vector column in Spark with no numeric columns results in features with a NULL size, NULL indices, and an empty list of values, which causes an exception when computing correlations. The solution is to skip computing the correlation matrix when there are no interval (numeric) columns. This change addresses ydataai#1723. It also implements an "N/A" string as the default when formatting NoneType values, and gracefully handles completely null numeric columns and empty correlation sets/plots.
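
A rough sketch of the guard and the formatting fallback; interval_columns and format_none are illustrative names for this sketch, not the library's actual API:

```python
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.stat import Correlation

def compute_correlations(df, interval_columns, method="pearson"):
    """Skip the correlation matrix entirely when there are no numeric
    (interval) columns; assembling a vector from zero inputs otherwise
    yields NULL-sized features and raises downstream."""
    if not interval_columns:
        return None  # caller renders an empty/absent correlation section

    assembler = VectorAssembler(
        inputCols=interval_columns, outputCol="features", handleInvalid="skip"
    )
    features = assembler.transform(df).select("features")
    return Correlation.corr(features, "features", method).head()[0]

def format_none(value, default="N/A"):
    """Render NoneType values as "N/A" instead of failing while formatting."""
    return default if value is None else value
```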