
[SPARK-52401][SQL] Fix DataFrame.collect() cache invalidation after saveAsTable append; add regression test #51240


Open · wants to merge 2 commits into master
Conversation

ikshitiz


What changes were proposed in this pull request?

This PR fixes a cache invalidation bug in Spark SQL where DataFrame.collect() could return stale results after appending data to a table using saveAsTable with mode="append". The fix ensures that all DataFrame operations (collect(), count(), etc.) consistently reflect the latest table contents after append, overwrite, or similar operations.

Key changes:

  • In DataSourceV2Strategy.scala, the refreshCache function now uses uncacheQuery instead of recacheByPlan for append/overwrite operations, so the cache entry is dropped and subsequent DataFrame actions read fresh data from the table (see the sketch after this list).
  • Added a regression test suite (DataFrameCacheSuite.scala) to verify correct cache invalidation and DataFrame behavior after table updates.
  • Included a summary document (SPARK-52401_FIX_SUMMARY.md) and Python scripts for easy verification and review.
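
For review context, here is a minimal sketch of the change to refreshCache, assuming the simplified shape of the function in DataSourceV2Strategy.scala; the actual diff and surrounding match arms may differ:

```scala
// Simplified sketch of the refreshCache change (not the verbatim diff).
private def refreshCache(r: DataSourceV2Relation)(): Unit = {
  // Before: recacheByPlan re-executed the same logical plan, which could
  // leave a stale cached result visible to collect() after an append.
  // session.sharedState.cacheManager.recacheByPlan(session, r)

  // After: drop the cache entry (and dependent caches) so the next
  // DataFrame action re-reads the table from source.
  session.sharedState.cacheManager.uncacheQuery(session, r, cascade = true)
}
```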

Why are the changes needed?

Previously, after appending data to a table, .count() would return the correct row count, but .collect() could return outdated results (e.g., an empty list). The root cause was that the cache manager refreshed the cache by re-executing the same logical plan, so the cached result did not reflect the updated table data. This PR invalidates the cache instead, so all DataFrame operations see the latest data.
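
An illustrative repro of the reported behavior; the table name, data, and the explicit cache() call are assumptions for this example:

```scala
// spark-shell style repro of the stale-read scenario described above.
spark.range(0).toDF("id").write.saveAsTable("events")
val df = spark.table("events")
df.cache()
df.collect()                                   // materializes the (empty) cache

spark.range(3).toDF("id")
  .write.mode("append").saveAsTable("events")  // write path calls refreshCache

df.count()    // 3: correct row count after the append
df.collect()  // before the fix: could still return the stale empty result
```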

Does this PR introduce any user-facing change?

Yes. After this fix:

  • .collect(), .count(), and other DataFrame actions now reflect the current state of the table after append, overwrite, or similar operations.
  • This removes a source of confusion where cached query results and actual table contents could silently diverge.

How was this patch tested?

  • Added a new regression test suite: sql/core/src/test/scala/org/apache/spark/sql/DataFrameCacheSuite.scala (an illustrative sketch of its shape follows this list).
  • Included Python scripts (test_spark_52401.py, test_spark_52401_comprehensive.py) for manual and cross-language verification.
  • All tests verify that DataFrame operations return up-to-date results after table modifications.
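
A sketch of what such a regression test can look like, assuming Spark's standard test fixtures (QueryTest, SharedSparkSession, withTable); the actual DataFrameCacheSuite.scala in this PR may be structured differently:

```scala
package org.apache.spark.sql

import org.apache.spark.sql.test.SharedSparkSession

class DataFrameCacheSuite extends QueryTest with SharedSparkSession {

  test("SPARK-52401: collect() sees rows appended via saveAsTable") {
    withTable("t") {
      // Start with an empty table and a cached DataFrame over it.
      spark.range(0).toDF("id").write.saveAsTable("t")
      val df = spark.table("t")
      df.cache()
      assert(df.collect().isEmpty)

      // Append three rows through the DataFrameWriter path.
      spark.range(3).toDF("id").write.mode("append").saveAsTable("t")

      assert(df.count() === 3)           // was already correct before the fix
      assert(df.collect().length === 3)  // could be stale/empty before the fix
    }
  }
}
```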

Additional context

  • See the included SPARK-52401_FIX_SUMMARY.md for a detailed explanation of the issue, root cause, and solution.
  • This change only affects cache invalidation for operations that modify table data (append, overwrite, delete, etc.) and is backward compatible.

Kshitij and others added 2 commits June 21, 2025 13:19
…-cache-invalidation

[SPARK-52401][SQL] Fix DataFrame.collect() cache invalidation after s…
github-actions bot added the SQL label Jun 21, 2025