
Conversation

@AMOOOMA (Contributor) commented Dec 15, 2025


Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:

  • Mention the appropriate issue in your description (for example: addresses #123), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment fixes #<ISSUE NUMBER> instead.
  • Update CHANGES.md with noteworthy changes.
  • If this contribution is large, please file an Apache Individual Contributor License Agreement.

See the Contributor Guide for more tips on how to make the review process smoother.

To check the build health, please visit https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md

GitHub Actions Tests Status (on master branch)

Build python source distribution and wheels
Python tests
Java tests
Go tests

See CI.md for more information about GitHub Actions CI or the workflows README to see a list of phrases to trigger workflows.

@AMOOOMA (Contributor Author) commented Dec 15, 2025

R: @damccorm

@gemini-code-assist (Contributor)
Summary of Changes

Hello @AMOOOMA, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the MultiProcessShared utility by introducing the capability to host shared objects within dedicated, spawned processes. This change provides more explicit control over the lifecycle of shared resources, allowing for direct termination and cleanup of the serving process, and includes mechanisms to prevent zombie processes and handle server startup failures gracefully.

Highlights

  • Process Spawning for Shared Objects: The MultiProcessShared utility now supports hosting shared objects within dedicated, spawned processes, controlled by a new spawn_process boolean parameter in its constructor.
  • Direct Object Deletion: A new unsafe_hard_delete method has been introduced, allowing for the explicit termination and cleanup of the shared object and its serving process, providing more granular control over resource lifecycle.
  • Robust Process Management: The newly spawned server processes include a 'Suicide Pact' monitor, ensuring they automatically terminate if their parent process dies. The server startup also incorporates robust error handling and logging.
  • Zombie Process Reaping: The acquire method now includes logic to sweep and reap any finished (zombie) child processes, improving overall resource management and preventing accumulation of defunct processes.
  • Serialization Support for Proxies: The _AutoProxyWrapper class now includes __setstate__ and __getstate__ methods, enhancing the serialization capabilities of proxy objects.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@github-actions (Contributor)

Stopping reviewer notifications for this pull request: review requested by someone other than the bot, ceding control. If you'd like to restart, comment "assign set of reviewers".

assert self._SingletonProxy_valid
self._SingletonProxy_valid = False

def unsafe_hard_delete(self):

Contributor:

Could you help me understand why we need the unsafe_hard_delete changes? It's not really clear to me what behavior this enables that we can't already achieve.

Contributor Author:

It's mainly because models are passed around directly as a _SingletonProxy instead of a _SingletonEntry, so we need a way to call delete directly on the _SingletonProxy.

Contributor:

Ok - let's at least give it a name like singletonProxy_unsafe_hard_delete. Otherwise we will run into issues if someone has an object with a function or property called unsafe_hard_delete, which seems like it could happen.
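
The collision the reviewer is worried about can be shown with a toy attribute-forwarding proxy (class names here are illustrative, not Beam's actual classes): Python only calls __getattr__ when normal attribute lookup fails, so a method defined on the proxy silently shadows any same-named method on the wrapped object.

```python
class Proxy:
    """Toy forwarding proxy; any method it defines shadows the target's."""
    def __init__(self, obj):
        self._obj = obj

    def unsafe_hard_delete(self):
        # The proxy's own control method.
        return "proxy delete"

    def __getattr__(self, name):
        # Only reached when normal lookup on the proxy fails.
        return getattr(self._obj, name)


class Model:
    def unsafe_hard_delete(self):
        # A user object that happens to define the same name.
        return "model delete"


result = Proxy(Model()).unsafe_hard_delete()  # → "proxy delete"
```

The user's method is unreachable through the proxy, which is why an unlikely-to-collide name like singletonProxy_unsafe_hard_delete is safer.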

self.__dict__.update(state)

def __getstate__(self):
return self.__dict__

Contributor:

I assume this is to make it pickleable, but is that valid? Normally I'd expect this not to be pickleable, since the proxy objects aren't necessarily valid in another context.

Contributor Author:

Yeah, this is exactly what was needed for pickling. It does seem to be valid in testing with the custom-built Beam version loaded on a custom container.

Contributor:

I think it would only be valid if you unpickle onto the same machine (and maybe even in the same process). Could you remind me what unpickling issues you ran into?
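
For reference, the __getstate__/__setstate__ pair under discussion makes pickling succeed mechanically, but it only round-trips the wrapper's __dict__. A toy sketch (the class name is a stand-in, not Beam's _AutoProxyWrapper):

```python
import pickle


class AutoProxyWrapper:
    """Toy stand-in for a proxy wrapper; only its __dict__ round-trips."""
    def __init__(self, target):
        self._target = target

    def __getstate__(self):
        return self.__dict__

    def __setstate__(self, state):
        self.__dict__.update(state)


restored = pickle.loads(pickle.dumps(AutoProxyWrapper('model')))
```

If _target were a live multiprocessing proxy holding a socket connection to a manager, the restored copy would only work where that manager's address is still reachable, which is exactly the reviewer's concern about unpickling in another context.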

"""Checks if parent is alive every second."""
while True:
try:
os.kill(parent_pid, 0)

Contributor:

Why are we sending a kill signal to the parent process? Isn't this the opposite of what we want?

Contributor Author:

This doesn't actually send a kill signal; passing signal 0 uses the same interface as a pure liveness check. The call raises OSError if parent_pid is dead, and does nothing if it's alive.

Contributor:

> if alive nothing happens.

Could you help me understand why this happens? https://www.geeksforgeeks.org/python/python-os-kill-method/ seems to say this will actually send the kill signal. Does the parent just ignore it?

self._get_manager().unsafe_hard_delete_singleton(self._tag)
try:
self._get_manager().unsafe_hard_delete_singleton(self._tag)
except (EOFError, ConnectionResetError, BrokenPipeError):

Contributor:

I'd typically expect the caller to catch/handle this. As it is, there is no indication passed back that this call failed.

Contributor Author:

Sounds good! Updated.

# Trigger a sweep of zombie processes.
# calling active_children() has the side-effect of joining any finished
# processes, effectively reaping zombies from previous unsafe_hard_deletes.
if self._spawn_process: multiprocessing.active_children()

Contributor:

Suggested change:

-if self._spawn_process: multiprocessing.active_children()
+if self._spawn_process:
+  multiprocessing.active_children()

style nit to be consistent with the rest of the repo.

Contributor Author:

Updated.

Comment on lines 286 to 287
t = threading.Thread(target=_monitor_parent, daemon=True)
t.start()

Contributor:

Might be better to start this after we've initialized our MPS object to avoid racy unsafe hard deletes

Contributor Author:

Updated.

@damccorm (Contributor)

/gemini review

@gemini-code-assist (bot) left a comment

Code Review

This pull request introduces a significant enhancement to MultiProcessShared by allowing it to spawn a dedicated server process and providing a mechanism for forceful deletion. The implementation is robust, incorporating features like a "suicide pact" for server process lifecycle management and detailed error reporting from the child to the parent process. The accompanying tests are thorough, covering various edge cases. I have a few suggestions to further improve the code, mainly around removing a redundant line of code, enhancing logging in exception handlers, and fixing a minor bug in the test setup.
