feat(ai): add structured logging to task engine lifecycle #745
Tejeshyewale wants to merge 45 commits into
Conversation
Fixed wording, numbering, and documentation clarity in README.
- Add task_logger.py with JSON structured formatter
- Integrate _logger at 4 lifecycle points in task_engine.py:
  * start: STARTING state set
  * subprocess: process created with PID
  * error: startup failure with exception info
  * termination: cleanup complete with exit_code and final_state
📝 Walkthrough

The PR introduces a new ML sklearn node and structured task logging.

Changes:
- ML Sklearn Node Implementation
- Task Logging Infrastructure
Sequence Diagram

```mermaid
sequenceDiagram
    participant Node as ML Sklearn Node
    participant Global as IGlobal
    participant Inst as IInstance
    participant Proc as PreProcessor
    participant DS as Downstream
    Note over Node,DS: Node Initialization
    Node->>Global: validateConfig()
    Global->>Global: depends(requirements.txt)
    Node->>Global: beginGlobal()
    Global->>Global: Load dependencies
    Global->>Proc: Create PreProcessor(config)
    Global->>Global: preprocessor = instance
    Note over Node,DS: Request Processing
    Inst->>Inst: writeAnswers(question)
    Inst->>Inst: Validate preprocessor exists
    Inst->>Inst: Deep-copy question
    Inst->>Proc: process(text)
    Proc->>Proc: Run inference (stub)
    Proc-->>Inst: result text
    Inst->>Inst: question.text = result
    Inst->>DS: writeAnswers(question)
    Note over Node,DS: Cleanup
    Node->>Global: endGlobal()
    Global->>Global: preprocessor = None
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks: ✅ 5 passed
Actionable comments posted: 6
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@nodes/src/nodes/ml_sklearn/code.py`:
- Around line 16-28: The __init__ currently accepts config but doesn't store it
and lacks a return type; update the constructor signature to include the return
annotation (def __init__(self, config: dict) -> None:) and either store the
config on the instance (self._config = config) so future methods (e.g., reload
helpers) can access it, or rename the parameter to _config to signal intentional
non-use; ensure existing usage of self._model remains unchanged (self._model =
None) after adding the annotation and storing/renaming the config.
In `@nodes/src/nodes/ml_sklearn/IGlobal.py`:
- Around line 22-28: The duplicated requirements path expression used in
beginGlobal and validateConfig should be extracted to a single class-level
constant (e.g. REQUIREMENTS_PATH) so both methods reference that constant;
update the class (in IGlobal) to define REQUIREMENTS_PATH =
os.path.join(os.path.dirname(os.path.realpath(__file__)), 'requirements.txt')
and replace the inline expressions in beginGlobal and validateConfig with that
constant, keeping the existing try/except and depends(requirements) call
semantics.
In `@nodes/src/nodes/ml_sklearn/README.md`:
- Line 7: The README entry that documents the node input as "text (number as
string)" conflicts with the test fixture in services.json which currently
supplies "hello world"; update the test fixture(s) in services.json to provide a
numeric string (for example "250") so the fixture matches the documented input
contract described in README.md (also fix the other instances noted around lines
44-49).
In `@nodes/src/nodes/ml_sklearn/services.json`:
- Around line 9-16: The JSON uses a nested "pipe" -> "lanes" object with boolean
"in"/"out" for the "answers" lane; replace that with a top-level "lanes" object
where "answers" maps to an array (e.g., "answers": ["answers"]) and add a
top-level "input" array that routes from lane "answers" to output lane "answers"
per the other services' schema; specifically remove the "pipe" block and add the
top-level "lanes" and "input" keys for the ml_sklearn service so the registry
can parse "answers" correctly.
In `@packages/ai/src/ai/modules/task/task_engine.py`:
- Around line 1688-1696: The exception handling in the task startup path logs
termination before the error (call to await self._terminated() happens before
logging) and uses _logger.error(..., exc_info=True) instead of the idiomatic
_logger.exception; update the except block in TaskEngine.task startup code so
you first log the failure with _logger.exception('Task startup failed',
extra={...}) and call self.debug_message(...) (or include same context) and only
after logging await self._terminated(); keep the final raise to re-raise the
exception.
In `@packages/ai/src/ai/modules/task/task_logger.py`:
- Around line 36-45: The get_task_logger function currently never sets a logger
level and only sets propagate=False inside the "if not logger.handlers" guard,
causing INFO messages to be dropped and duplicate output in some setups; fix by
explicitly setting the logger level (e.g. logger.setLevel(logging.INFO) or
logger.setLevel(logging.DEBUG) as appropriate) so records pass level filtering,
move logger.propagate = False outside the handlers-creation guard so it is
always applied, and optionally ensure the StreamHandler has an appropriate level
(handler.setLevel or leave NOTSET) so the handler receives the records; these
changes should be made in get_task_logger.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: abe38803-d3f5-42ae-b26f-4f879fa67760
📒 Files selected for processing (9)
- nodes/src/nodes/ml_sklearn/IGlobal.py
- nodes/src/nodes/ml_sklearn/IInstance.py
- nodes/src/nodes/ml_sklearn/README.md
- nodes/src/nodes/ml_sklearn/__init__.py
- nodes/src/nodes/ml_sklearn/code.py
- nodes/src/nodes/ml_sklearn/requirements.txt
- nodes/src/nodes/ml_sklearn/services.json
- packages/ai/src/ai/modules/task/task_engine.py
- packages/ai/src/ai/modules/task/task_logger.py
```python
def __init__(self, config: dict):
    """
    Initialize the sklearn model.

    In a real deployment, you'd load a pickled model from a path
    specified in config. This stub returns text unchanged so the
    node is CI-safe without a pre-trained model artifact.
    """
    # Example: load a real model like this:
    # import joblib
    # model_path = config.get('model_path', '')
    # self._model = joblib.load(model_path)
    self._model = None  # Replace with actual model loading
```
`config` is not stored — silently drops runtime configuration; also missing `-> None` annotation.

Two related problems:

- Unused `config` (ARG002): The parameter is received and ignored — `self._config = config` is never set. The commented-out loading example works because `config` is in scope within `__init__`, but if any future method (e.g., a reload helper) needs it, it won't be accessible. At minimum, prefix the parameter `_config` to signal intentional non-use in the stub, or store it.
- Missing `-> None` annotation (ANN204): Ruff flags this; per the project's `ruff` lint requirement for `nodes/**/*.py`, `__init__` should be annotated with `-> None`.
🛠️ Proposed fix

```diff
-    def __init__(self, config: dict):
+    def __init__(self, config: dict) -> None:
         """
         Initialize the sklearn model.
         ...
         """
+        self._config = config  # Retained for real model loading
         # Example: load a real model like this:
         # import joblib
-        # model_path = config.get('model_path', '')
+        # model_path = self._config.get('model_path', '')
         # self._model = joblib.load(model_path)
         self._model = None  # Replace with actual model loading
```

🧰 Tools
🪛 Ruff (0.15.12)

[warning] 16-16: Missing return type annotation for special method `__init__`. Add return type annotation: `None` (ANN204)
[warning] 16-16: Unused method argument: `config` (ARG002)
```python
try:
    from depends import depends

    requirements = os.path.dirname(os.path.realpath(__file__)) + '/requirements.txt'
    depends(requirements)
except Exception as e:  # noqa: BLE001
    warning(str(e))
```
🧹 Nitpick | 🔵 Trivial | ⚡ Quick win
Requirements path is duplicated across validateConfig and beginGlobal.
The expression os.path.dirname(os.path.realpath(__file__)) + '/requirements.txt' appears verbatim in both methods. Extract it to a class-level constant to avoid divergence on future renames.
♻️ Proposed refactor

```diff
 class IGlobal(IGlobalBase):
     """Global state for the ml_sklearn node — holds the loaded sklearn model."""
     preprocessor: object = None  # The sklearn model/pipeline instance
+    _REQUIREMENTS = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'requirements.txt')

     def validateConfig(self):
         """Validate that scikit-learn and numpy are available."""
         try:
             from depends import depends
-            requirements = os.path.dirname(os.path.realpath(__file__)) + '/requirements.txt'
-            depends(requirements)
+            depends(self._REQUIREMENTS)
         except Exception as e:  # noqa: BLE001
             warning(str(e))

     def beginGlobal(self):
         """Load the sklearn model at runtime startup."""
         if self.IEndpoint.endpoint.openMode == OPEN_MODE.CONFIG:
             pass
         else:
             from depends import depends
-            requirements = os.path.dirname(os.path.realpath(__file__)) + '/requirements.txt'
-            depends(requirements)
+            depends(self._REQUIREMENTS)
```

Also applies to: 39-40
```markdown
## Input

- text (number as string)
```
Test fixture in services.json contradicts the documented input format.
The README specifies the input as text (number as string), but the test fixture in services.json supplies "hello world" — a non-numeric string. These should be consistent so the test fixture exercises the documented contract. Consider aligning the fixture to a numeric string (e.g., "250") to match the stated input type.
Also applies to: 44-49
| "pipe": { | ||
| "lanes": { | ||
| "answers": { | ||
| "in": true, | ||
| "out": true | ||
| } | ||
| } | ||
| }, |
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Description: Check what lane schema is used by other existing nodes to confirm
# whether ml_sklearn's pipe.lanes schema matches or deviates from the pattern.
rg -l '"pipe"' nodes/src/nodes --include='services.json' -n
echo "--- agent_langchain services.json ---"
fd 'services.json' nodes/src/nodes/agent_langchain --exec cat {}
echo "--- search_exa services.json ---"
fd 'services.json' nodes/src/nodes/search_exa --exec cat {}
echo "--- ml_sklearn services.json ---"
fd 'services.json' nodes/src/nodes/ml_sklearn --exec cat {}
```

Repository: rocketride-org/rocketride-server
Length of output: 7550

🏁 Script executed:

```shell
# Check if any other nodes use the "pipe" pattern
echo "=== Checking for 'pipe' usage in all services.json files ==="
find nodes/src/nodes -name 'services.json' -type f | while read file; do
  if grep -q '"pipe"' "$file"; then
    echo "Found 'pipe' in: $file"
  fi
done
echo ""
echo "=== Total services.json files in codebase ==="
find nodes/src/nodes -name 'services.json' -type f | wc -l
echo ""
echo "=== List all services.json file locations ==="
find nodes/src/nodes -name 'services.json' -type f
```

Length of output: 3314

🏁 Script executed:

```shell
# Search for service registry or parser code that validates/processes lanes
echo "=== Searching for service registry or parser related code ==="
fd -t f '(registry|parser|service)' --type f | grep -E '\.(ts|js|py)$' | head -20
echo ""
echo "=== Searching in src directory for registry/parser patterns ==="
find src -type f \( -name '*registry*' -o -name '*parser*' -o -name '*service*' \) 2>/dev/null | head -20
```

Length of output: 796

🏁 Script executed:

```shell
# Examine pipelineParser.ts to understand how it processes lanes
echo "=== Checking pipelineParser.ts ==="
wc -l apps/vscode/src/shared/util/pipelineParser.ts
# Check first portion to understand structure
head -100 apps/vscode/src/shared/util/pipelineParser.ts
```

Length of output: 3312

🏁 Script executed:

```shell
# Check registry.js for schema handling
echo "=== Checking registry.js ==="
cat scripts/lib/registry.js | head -200
```

Length of output: 4861

🏁 Script executed:

```shell
# Sample a standard node to confirm pattern
echo "=== Standard node: accessibility_describe ==="
cat nodes/src/nodes/accessibility_describe/services.json | head -50
```

Length of output: 1884

🏁 Script executed:

```shell
# Search for Python code that loads/parses services.json
echo "=== Searching for services.json loading in Python ==="
rg 'services\.json' packages/client-python packages/common --type py -B 2 -A 2 | head -100
echo ""
echo "=== Searching for lane/pipe parsing logic ==="
rg '(lane|pipe)' packages/ai/src/ai/modules/services/ --type py -B 2 -A 2 | head -150
```

Length of output: 839

🏁 Script executed:

```shell
# Broader search for services.json loading
echo "=== Searching for services.json file operations ==="
rg 'services\.json' --type py --type ts --type js -B 3 -A 3 | head -200
echo ""
echo "=== Searching for 'lanes' key handling ==="
rg '"lanes"' --type py --type ts --type js -B 2 -A 2 | head -150
```

Length of output: 12607

🏁 Script executed:

```shell
# Check the actual service loader in packages/ai
echo "=== Checking services.py in packages/ai ==="
find packages/ai -name 'services.py' -o -name '*service*.py' | head -10
```

Length of output: 168

🏁 Script executed:

```shell
# Examine the actual services module that loads and processes lanes
echo "=== Checking packages/ai/src/ai/modules/services/services.py ==="
wc -l packages/ai/src/ai/modules/services/services.py
# Check first 150 lines for import and lane handling
head -150 packages/ai/src/ai/modules/services/services.py
echo ""
echo "=== Look for 'lanes' key access ==="
grep -n 'lanes' packages/ai/src/ai/modules/services/services.py | head -20
```

Length of output: 1432

🏁 Script executed:

```shell
# Search for ml_sklearn usage in tests or code
echo "=== Searching for ml_sklearn references ==="
rg 'ml_sklearn' --type py --type ts --type js -B 2 -A 2 | head -100
echo ""
echo "=== Checking if there's lane schema validation ==="
rg '(lane.*schema|schema.*lane|validate.*lane)' --type py --type ts --type js -i | head -50
echo ""
echo "=== Look for any documentation on lane format ==="
find . -name '*.md' -type f -exec grep -l 'lane' {} \; | head -10
```

Length of output: 3505

🏁 Script executed:

```shell
# Check if ml_sklearn has any test or implementation files
echo "=== ml_sklearn implementation files ==="
find nodes/src/nodes/ml_sklearn -type f
echo ""
echo "=== Check test files for ml_sklearn ==="
find . -name '*test*' -type f | xargs grep -l 'ml_sklearn' 2>/dev/null | head -5
```

Length of output: 431

🏁 Script executed:

```shell
# Examine ml_sklearn's implementation to understand how it expects to receive input
echo "=== ml_sklearn code.py ==="
cat nodes/src/nodes/ml_sklearn/code.py
echo ""
echo "=== ml_sklearn README ==="
cat nodes/src/nodes/ml_sklearn/README.md
```

Length of output: 2089

🏁 Script executed:

```shell
# Check documentation on lane format and expected schema
echo "=== Checking ROCKETRIDE_COMPONENT_REFERENCE for lane schema ==="
grep -A 20 -B 5 'lane' docs/agents/ROCKETRIDE_COMPONENT_REFERENCE.md | head -100
```

Length of output: 3931
Lanes schema must follow the established pattern — replace nested pipe structure with top-level lanes object.

The ml_sklearn node is the only one (out of 71) that nests lanes under "pipe" and uses boolean in/out properties. All other nodes, including agent_langchain and search_exa, declare lanes at the top level with array values:

```json
"lanes": {
  "answers": ["output_lane"]
}
```

Additionally, all other nodes include an "input" array for routing configuration, which ml_sklearn lacks entirely. According to RocketRide documentation, the lanes object is "the definitive reference for data flow" and is expected at the top level with keys mapping to output lane arrays. The current structure will not be parsed correctly by the service registry, causing pipeline wiring failures at runtime.
Diff showing required changes

Current:

```json
"pipe": {
  "lanes": {
    "answers": {
      "in": true,
      "out": true
    }
  }
},
```

Should be:

```json
"lanes": {
  "answers": ["answers"]
},
"input": [
  {
    "lane": "answers",
    "output": [{ "lane": "answers" }]
  }
],
```
```python
except Exception as e:
    await self._terminated()
    _logger.error(
        'Task startup failed',
        extra={'task_id': self.id, 'step': 'error', 'error': str(e)},
        exc_info=True,
    )
    self.debug_message(f'Task startup failed: {e}')
    raise
```
Error log fires after _terminated() — "Task terminated" will always precede "Task startup failed" in the log stream, inverting the causal order.
Inside _terminated() (line 850), the termination event is logged unconditionally. Because await self._terminated() runs first (line 1689), every startup failure will produce log entries in this order:
{"step": "termination", "message": "Task terminated", ...} ← logged inside _terminated()
{"step": "error", "message": "Task startup failed", ...} ← logged after _terminated() returns
A developer querying for the root cause of a startup failure will see the termination message first and have to work backwards. Moving the error log before the _terminated() call restores the expected causal sequence.
Additionally, Ruff G201 flags the exc_info=True form — _logger.exception() is the idiomatic spelling and drops the redundant kwarg.
🐛 Proposed fix — log the error first, then clean up, use exception()

```diff
 except Exception as e:
-    await self._terminated()
-    _logger.error(
+    _logger.exception(
         'Task startup failed',
         extra={'task_id': self.id, 'step': 'error', 'error': str(e)},
-        exc_info=True,
     )
+    await self._terminated()
     self.debug_message(f'Task startup failed: {e}')
     raise
```

🧰 Tools
🪛 Ruff (0.15.12)
[warning] 1690-1690: Logging .exception(...) should be used instead of .error(..., exc_info=True)
(G201)
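As a quick sanity check of the G201 point — that `.exception(...)` records the traceback without an explicit `exc_info=True` — here is a minimal standalone sketch (the logger name and message are illustrative, not from the PR):

```python
import io
import logging

stream = io.StringIO()
logger = logging.getLogger('g201.demo')  # illustrative name
logger.addHandler(logging.StreamHandler(stream))
logger.setLevel(logging.ERROR)
logger.propagate = False

try:
    raise RuntimeError('boom')
except RuntimeError:
    # .exception() implies exc_info: the traceback is appended automatically.
    logger.exception('Task startup failed')

output = stream.getvalue()
assert 'Task startup failed' in output
assert 'RuntimeError: boom' in output  # traceback was captured
```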
```python
def get_task_logger(name: str) -> logging.Logger:
    logger = logging.getLogger(name)

    if not logger.handlers:
        handler = logging.StreamHandler()
        handler.setFormatter(_StructuredFormatter())
        logger.addHandler(handler)
        logger.propagate = False

    return logger
```
get_task_logger never sets a log level — all _logger.info() calls will be silently dropped in default configurations.
Python's effective-level resolution always traverses the logger hierarchy regardless of propagate. With propagate = False, records don't bubble up to parent handlers, but level filtering still uses getEffectiveLevel(), which walks up to the root until it finds a non-NOTSET level. The root logger's default level is WARNING, so every _logger.info(...) call added by this PR will be filtered out before it ever reaches the StreamHandler.
🐛 Proposed fix — set level so records reach the handler

```diff
 def get_task_logger(name: str) -> logging.Logger:
     logger = logging.getLogger(name)

     if not logger.handlers:
         handler = logging.StreamHandler()
         handler.setFormatter(_StructuredFormatter())
         logger.addHandler(handler)
-        logger.propagate = False
+    logger.setLevel(logging.DEBUG)  # pass everything through; let the handler/app filter
+    logger.propagate = False  # always disable propagation, not just on first setup

     return logger
```

Moving `propagate = False` outside the guard is also necessary: if the logger is already configured (e.g. by a test framework or an early `basicConfig` call that names this logger), the guard body is skipped and `propagate` stays `True`, causing duplicate output to parent handlers.
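The effective-level behavior this comment relies on can be checked in isolation. A standalone sketch (the logger name is illustrative):

```python
import logging

logger = logging.getLogger('task.level.demo')  # illustrative name
logger.addHandler(logging.StreamHandler())
logger.propagate = False  # stops record propagation, not level lookup

# No level set here: the effective level is inherited from the root
# logger (WARNING by default), so _logger.info(...) would be filtered
# out before it ever reaches the attached handler.
assert logger.level == logging.NOTSET
assert logger.getEffectiveLevel() == logging.root.level

# Setting an explicit level lets INFO records reach the handler.
logger.setLevel(logging.INFO)
assert logger.getEffectiveLevel() == logging.INFO
assert logger.isEnabledFor(logging.INFO)
```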
Summary
Adds a structured JSON logger to the task execution engine to improve
debuggability and observability of pipeline lifecycle events.
Changes
New File: packages/ai/src/ai/modules/task/task_logger.py

- `_StructuredFormatter` — formats every log record as single-line JSON
- `get_task_logger()` — factory function that returns a configured Logger
- Base fields: `timestamp`, `level`, `logger`, `message`
- `extra={}` adds `task_id`, `step`, and any additional context

Modified: packages/ai/src/ai/modules/task/task_engine.py

Four structured log points added — zero existing logic changed:

| Call site | When | `step` value |
| --- | --- | --- |
| `start_task()` | after STARTING state set | `start` |
| `start_task()` | after `create_subprocess_exec` | `subprocess` |
| `start_task()` | in `except` block | `error` |
| `_terminated()` | before final `debug_message`; includes `exit_code` + `final_state` | `termination` |

Why This Matters
`debug_message()` outputs free-form text that is hard to filter in log aggregators. The new structured lines emit JSON queryable by any log pipeline (Datadog, Loki, CloudWatch Insights):

```json
{"timestamp":"2026-05-03T10:22:01Z","level":"INFO","task_id":"abc-123","step":"subprocess","pid":9876}
```

Testing
```shell
python -c "from ai.modules.task.task_logger import get_task_logger; print('OK')"
```
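Beyond the import smoke test, the queryability claim can be illustrated with plain Python over sample lines. The records below are invented to match the format shown earlier; they are not real task output:

```python
import json

# Sample lines in the structured-logger format; values are made up.
lines = [
    '{"level":"INFO","task_id":"abc-123","step":"start"}',
    '{"level":"INFO","task_id":"abc-123","step":"subprocess","pid":9876}',
    '{"level":"ERROR","task_id":"def-456","step":"error"}',
]
records = [json.loads(line) for line in lines]

# Everything one task did, in order:
task_events = [r for r in records if r['task_id'] == 'abc-123']

# Every startup failure across tasks:
failures = [r for r in records if r['step'] == 'error']

print([r['step'] for r in task_events])  # → ['start', 'subprocess']
print(len(failures))                     # → 1
```

The same filters are one-liners in jq or any log aggregator's query language, which is the point of emitting one JSON object per line.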