What does run-detach.js do? #630
-
Just wondering - is it doing something different than the standard detached option? var child = cp.spawn('myapp.js', [], { detached: true })
Replies: 6 comments
-
I think this is more of a discussion than an issue, ya? I'll convert this to a discussion, then reply ASAP.
-
Well, sort of. Cronicle's "detached" mode is more complicated than just that. First, it launches a "controller" (parent) script, which then launches the child job process. The parent is detached from the terminal, so it can keep running even if Cronicle itself exits, but the child job process is launched normally. This is because it needs to maintain and support the "JSON over STDIO" API interface.

So each detached job that is running has a "pair" of processes: the parent controller and the child job process. One is detached, and one is not.

If the child job process were "detached" using Node's cp.spawn detached mode, it couldn't write JSON to STDOUT (it would go to /dev/null), and Cronicle couldn't capture its STDERR either. Since the parent controller script is detached, it "converts" the JSON STDIO API interface into commands that are written to temp files, which Cronicle polls for. In this way, detached jobs can still report progress, etc.

Hope this helps.
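Here is a rough sketch of that controller/child pairing, for illustration only (this is not Cronicle's actual run-detach.js; the queue path and file naming are made up): the controller spawns the job with piped STDIO, reads JSON lines from its STDOUT, and writes each one as a unique file in a queue directory for the scheduler to pick up.

```js
const cp = require('child_process');
const fs = require('fs');
const path = require('path');

const QUEUE_DIR = '/opt/cronicle/queue'; // hypothetical queue location

// Spawn the child job normally (NOT detached) so its STDOUT/STDERR
// stay connected to this controller process.
const child = cp.spawn(process.execPath, ['my-job.js'], {
  stdio: ['pipe', 'pipe', 'pipe']
});

let buf = '';
child.stdout.on('data', (chunk) => {
  buf += chunk.toString();
  let idx;
  while ((idx = buf.indexOf('\n')) >= 0) {
    const line = buf.slice(0, idx).trim();
    buf = buf.slice(idx + 1);
    if (!line) continue;

    // Convert each JSON update from the child into a uniquely named
    // file in the queue dir, which the scheduler polls for.
    const file = path.join(QUEUE_DIR,
      'update_' + Date.now() + '_' + Math.random().toString(36).slice(2) + '.json');
    fs.writeFile(file, line + '\n', () => {});
  }
});

child.on('exit', (code) => {
  // Report completion the same way: drop a final "complete" file.
  const file = path.join(QUEUE_DIR, 'complete_' + Date.now() + '.json');
  fs.writeFileSync(file, JSON.stringify({ complete: 1, code: code || 0 }) + '\n');
});
```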
-
Ahh, I see. So even if Cronicle is stopped and then restarted, it will take over control of this detached job again. Smart. I really thought it would just keep running in the background, uncontrolled.
-
One more question. Detached job completion gets reported with a decent lag. Based on this line (line 127 in 996750a), it can take up to 1 minute. Wondering why so long? I see a comment about randomizing, but it's unclear why it's waiting for over 30 seconds. Moreover, if the child job crashed or was killed, it sounds like that could be reported immediately (if I manually kill the child, it still hangs around as active for quite a while).
-
Yeah, so, detached jobs were designed to be "long running" jobs. Like, multiple hours long. Honestly, it was designed for one particular job I have at my work, which takes 12 hours to import a feed. In those cases, a delay of 1 minute on updates is negligible. So, Cronicle core only reads the detached update files (in the queue dir) once per minute.

In addition to that, I didn't want to "bash" the filesystem if detached jobs were emitting JSON update events to their STDOUT too often, because each one becomes a unique JSON file in the queue dir. So that's why the writes are staggered by 30 seconds with some randomness.

In your fork of Cronicle, you are free to change these numbers. Make it 5 seconds if you want! I was just worried about customers that have literally tens of thousands of jobs, with hundreds of them active at any given moment. This was an effort to allow some scalability in the system.

If you want detached jobs to report updates (and completion) faster, you'll also have to change something in Cronicle core. This right here: https://github.com/jhuckaby/Cronicle/blob/master/lib/queue.js#L27-L28 would have to change to run every second. But please note that every time it checks the external queue, it globs the filesystem, listing every file in the queue dir. So in my opinion, doing this every second would be "bashing" the filesystem, which is why it is set to every minute. But again, in your fork, you can do whatever you want 😊
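For illustration, here is a simplified sketch (not the actual run-detach.js code) of the staggered writes described above: the controller keeps only the latest pending update and flushes it to the queue dir roughly every 30 to 60 seconds, with random jitter so many detached jobs don't all hit the filesystem at once. The queue path and exact timing values are assumptions.

```js
const fs = require('fs');
const path = require('path');

const QUEUE_DIR = '/opt/cronicle/queue'; // hypothetical
let pendingUpdate = null;                // newer updates replace older ones

function queueUpdate(update) {
  pendingUpdate = update;
}

function scheduleFlush() {
  // 30 second base delay, plus up to 30 seconds of random jitter.
  const delay = 30000 + Math.floor(Math.random() * 30000);
  setTimeout(() => {
    if (pendingUpdate) {
      const file = path.join(QUEUE_DIR, 'update_' + Date.now() + '.json');
      fs.writeFile(file, JSON.stringify(pendingUpdate) + '\n', () => {});
      pendingUpdate = null;
    }
    scheduleFlush(); // re-arm with fresh jitter
  }, delay);
}

scheduleFlush();
queueUpdate({ progress: 0.5 }); // example: written out on the next flush
```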
-
Oh, OK. It totally makes sense now. I guess the default settings are good enough. Thanks for the explanation.