SPARK-52564: configuration changes should not require deleting the checkpoint #51264


Open · wants to merge 1 commit into master

Conversation

sohurdc

@sohurdc sohurdc commented Jun 24, 2025

What changes were proposed in this pull request?

Why are the changes needed?

Once checkpointing is enabled in Spark Streaming, configuration changes require deleting the checkpoint, which results in loss of state.

A typical driver looks like this:

```scala
// Get StreamingContext from checkpoint data or create a new one
val context = StreamingContext.getOrCreate(checkpointDirectory, functionToCreateContext _)

...

context.start()
context.awaitTermination()
```

I modified org.apache.spark.streaming.StreamingContext, updating its getOrCreate method so that the latest configuration is applied when recovering from a checkpoint. With this change, there is no need to delete the checkpoint for configuration changes to take effect.
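The PR diff itself is not shown here, so the recovery-with-override idea can only be sketched. The following is a minimal, self-contained Scala model (no Spark dependency); `CheckpointSketch`, `Checkpoint`, `Context`, and `overrides` are hypothetical names that mirror the shape of `StreamingContext.getOrCreate`, not the actual Spark API changed by this PR:

```scala
// Minimal model of checkpoint recovery that re-applies the latest
// configuration instead of freezing the one stored in the checkpoint.
// All names here are hypothetical; they only illustrate the idea.
object CheckpointSketch {
  // A "checkpoint" reduced to the configuration it captured.
  final case class Checkpoint(conf: Map[String, String])

  // A "context" reduced to its effective configuration.
  final case class Context(conf: Map[String, String])

  // If a checkpoint exists, recover from it but let `overrides`
  // (the configuration supplied at restart time) win over the
  // checkpointed values; otherwise build a fresh context.
  def getOrCreate(
      checkpoint: Option[Checkpoint],
      creatingFunc: () => Context,
      overrides: Map[String, String]): Context =
    checkpoint match {
      case Some(cp) => Context(cp.conf ++ overrides) // latest settings win
      case None     => creatingFunc()
    }
}
```

Recovering with, say, `overrides = Map("spark.executor.memory" -> "6g")` would then yield a context whose memory setting reflects the restart-time value rather than the one frozen in the checkpoint.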

Does this PR introduce any user-facing change?

No

How was this patch tested?

After modifying the Spark source, I replaced the client-side spark/jars/spark-streaming_2.12-4.0.0.jar. I then changed the runtime resources to --num-executors 4 --executor-memory 6G --executor-cores 3 and, without touching the checkpoint, restarted the Spark Streaming application and confirmed that the new settings took effect.

Was this patch authored or co-authored using generative AI tooling?

No
