Note: Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the CodeRabbit settings.
📝 Walkthrough

Adds optional embedded FalkorDB support via a new falkordb.lite subpackage, updates sync and async FalkorDB constructors to start and manage an EmbeddedServer when requested, wires Unix-domain-socket connection pools for embedded mode, adds lifecycle/context-manager methods, new utilities (binaries/config/server), a packaging extra, and tests/docs/example updates.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant App as Application
    participant Falkor as FalkorDB
    participant Lite as falkordb.lite
    participant Server as EmbeddedServer
    participant Redis as redis-server+module
    App->>Falkor: instantiate(embedded=True, db_path, ...)
    Falkor->>Lite: resolve binaries (falkordb-bin)
    Lite-->>Falkor: redis-server path, falkordb module path
    Falkor->>Server: start(db_path, config, startup_timeout)
    Server->>Redis: launch redis-server --loadmodule <falkordb_module>
    Server->>Redis: wait for unix socket + PING
    Redis-->>Server: ready
    Server-->>Falkor: unix_socket_path
    Falkor->>Falkor: create Unix-domain connection pool
    App->>Falkor: execute command
    Falkor->>Redis: send command via socket
    Redis-->>Falkor: response
    Falkor-->>App: result
    App->>Falkor: close() / exit context
    Falkor->>Server: stop()
    Server->>Redis: SHUTDOWN & cleanup
    Server-->>Falkor: stopped
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed
❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 8
🤖 Fix all issues with AI agents
In `@falkordb/asyncio/falkordb.py`:
- Around line 154-159: The close method calls the synchronous
EmbeddedServer.stop() from the async function close, which will block the event
loop; change the call to run the blocking stop on a thread (e.g. await
asyncio.to_thread(self._embedded_server.stop) or use loop.run_in_executor) and
then set self._embedded_server = None; keep the existing aclose for connection
as-is and ensure you import asyncio if not already.
- Around line 86-108: The BlockingConnectionPool is being created with
timeout=None which causes indefinite waiting; update the connection pool
creation (the connection_pool assignment that calls
redis.BlockingConnectionPool) to use a finite, configurable acquire timeout
instead of None — either add/use a parameter like connection_acquire_timeout
(default to a sensible value, e.g. 5s) and pass it as the timeout, or if you
want behavior consistent with the sync redis.ConnectionPool (raise immediately
when exhausted) set timeout=0 to make acquisition non-blocking; adjust any
callers/config to surface this new config and document the chosen default.
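The first comment's `asyncio.to_thread` fix can be sketched with a stand-in server class; the names here (`SlowServer`, `Client`) are illustrative, not the real `EmbeddedServer` API:

```python
import asyncio
import time

class SlowServer:
    """Stand-in for EmbeddedServer: stop() blocks the calling thread."""
    def __init__(self):
        self.stopped = False

    def stop(self):
        time.sleep(0.05)  # simulate a slow synchronous shutdown
        self.stopped = True

class Client:
    def __init__(self, server):
        self._embedded_server = server

    async def aclose(self):
        if self._embedded_server is not None:
            # Run the blocking stop() on a worker thread so the event
            # loop stays responsive for the duration of the shutdown.
            await asyncio.to_thread(self._embedded_server.stop)
            self._embedded_server = None

server = SlowServer()
asyncio.run(Client(server).aclose())
print(server.stopped)  # → True
```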
In `@falkordb/falkordb.py`:
- Around line 80-81: Remove the dead attribute assignment to self._embedded in
the class where self._embedded_server is set (remove the line "self._embedded =
embedded"); search for any references to self._embedded (there should be none)
and delete the assignment only, keeping self._embedded_server and related logic
intact so behavior is unchanged.
In `@falkordb/lite/config.py`:
- Around line 49-50: user_config is merged after setting loadmodule so a
provided {"loadmodule": "..."} can override the embedded server path; change the
merge order or guard the key: either apply user_config to config first and then
set config["loadmodule"] = embedded_path, or before calling
config.update(user_config) remove or reject the reserved "loadmodule" key (e.g.,
raise ValueError or log/warn) to prevent silent override; update the code around
config.update(user_config) and the loadmodule assignment to enforce this
protection using the symbols user_config, config.update, and the "loadmodule"
key.
- Around line 17-21: PERSISTENT_OVERRIDES currently stores the Redis "save"
rules as one space-separated string ("save": "900 1 300 10 60 10000"), which
produces an invalid single-line directive; change the "save" entry to a list
(e.g. list of tuples or list of strings) so each rule is represented separately,
and update the code that emits the redis.conf (the logic that iterates
PERSISTENT_OVERRIDES when writing the config) to expand the "save" key into
multiple lines (one "save <seconds> <changes>" per rule) rather than joining
them into one line; keep other keys like "appendonly" and "appendfsync"
serialized as single-line directives.
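Taken together, the two config fixes might look like the following sketch. The function and constant names (`build_config`, `serialize`, `RESERVED_KEYS`) are hypothetical, not the actual `falkordb/lite/config.py` API; only the redis.conf directive names are real:

```python
# Reserved directives the embedded server controls; users may not override them.
RESERVED_KEYS = {"loadmodule", "port"}

PERSISTENT_OVERRIDES = {
    "appendonly": "yes",
    "appendfsync": "everysec",
    # One (seconds, changes) pair per save rule, so each rule can be
    # emitted as its own "save <seconds> <changes>" line.
    "save": [(900, 1), (300, 10), (60, 10000)],
}

def build_config(module_path, user_config=None):
    config = dict(PERSISTENT_OVERRIDES)
    if user_config:
        reserved = RESERVED_KEYS & user_config.keys()
        if reserved:
            raise ValueError(f"reserved keys cannot be overridden: {sorted(reserved)}")
        config.update(user_config)
    # Set the protected directive after merging so it can never be clobbered.
    config["loadmodule"] = module_path
    return config

def serialize(config):
    lines = []
    for key, value in config.items():
        if key == "save" and isinstance(value, list):
            # Expand each rule into its own line instead of joining them.
            lines.extend(f"save {s} {c}" for s, c in value)
        else:
            lines.append(f"{key} {value}")
    return "\n".join(lines)
```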
In `@falkordb/lite/server.py`:
- Around line 52-57: The _start method currently passes stdout=PIPE and
stderr=PIPE to subprocess.Popen which can deadlock if redis-server writes enough
output; change _start to avoid piping both streams by redirecting stdout to
subprocess.DEVNULL (or a log file) and direct stderr to a file in self._tmpdir
(e.g., set self._stderr_path and open self._stderr_file) and pass that file
handle as stderr to Popen so the OS pipe cannot fill, keep self._process
assignment but ensure you close the stderr file and read the file on early
failure (where stderr is currently read) to preserve error diagnostics.
- Around line 88-106: The stop method's call to conn.shutdown() raises redis
ConnectionError as part of a normal shutdown, so change the exception handling
in stop to catch redis.exceptions.ConnectionError (or redis.ConnectionError)
separately: call conn.shutdown(...) and then either ignore a ConnectionError
(treat it as successful shutdown) while still ensuring conn.close() in a finally
block, but only call self._process.terminate() for unexpected exceptions; keep
the subprocess wait/kill logic intact and reference the stop method,
conn.shutdown, conn.close, and self._process.terminate to locate and update the
logic.
🧹 Nitpick comments (6)
tests/test_embedded.py (1)
10-19: Consider extracting DummyServer to a shared conftest.

Both `tests/test_embedded.py` and `tests/test_embedded_async.py` define nearly identical `DummyServer` classes (differing only in the socket path string). A shared fixture or helper in `conftest.py` would reduce duplication.

falkordb/lite/server.py (1)
69-76: Readiness check only catches `redis.ConnectionError` — consider broadening to `redis.RedisError`.

If the server is mid-startup, other transient errors (e.g., `redis.TimeoutError`) could bubble up and break the startup loop.

Proposed fix

```diff
 try:
     conn = redis.Redis(unix_socket_path=self._socket_path, decode_responses=True)
     conn.ping()
     conn.close()
     return
-except redis.ConnectionError:
+except (redis.ConnectionError, redis.TimeoutError):
     pass
```

falkordb/asyncio/falkordb.py (1)
110-136: Embedded mode skips cluster/sentinel validation — good, but consider guarding explicitly.

Lines 138–148 still run `Is_Cluster(conn)` on the embedded connection. If the FalkorDB module ever responds with cluster-like metadata, this could unexpectedly wrap the embedded connection in a cluster client. A simple early-return guard would make the intent explicit:

```python
if not embedded:
    if Is_Cluster(conn):
        ...
```

This is minor since an embedded single-node server is unlikely to trigger the cluster check, but it communicates intent clearly.
falkordb/falkordb.py (3)
83-117: Embedded server setup looks correct for the sync client.

The lazy import of `EmbeddedServer`, default `max_connections=16`, and pool construction via `UnixDomainSocketConnection` are well-structured. One note: the sync client uses `redis.ConnectionPool` (non-blocking) while the async variant uses `redis.BlockingConnectionPool`. When the pool is exhausted, the sync client will raise immediately whereas the async client waits. Consider aligning the behavior — `redis.BlockingConnectionPool` would be more appropriate here too, to avoid abrupt failures under concurrent load:

```diff
-connection_pool = redis.ConnectionPool(
+connection_pool = redis.BlockingConnectionPool(
     connection_class=redis.UnixDomainSocketConnection,
     ...
+    timeout=20,  # seconds to wait for a free connection
 )
```
166-182: Sentinel/Cluster checks run on embedded connections.

Same as noted for the async variant — `Is_Sentinel` and `Is_Cluster` will probe the embedded single-node server. While unlikely to trigger, an explicit guard makes the intent clear and avoids unnecessary round-trips:

Suggested fix

```diff
+if not embedded:
+    if Is_Sentinel(conn):
+        self.sentinel, self.service_name = Sentinel_Conn(conn, ssl)
+        conn = self.sentinel.master_for(self.service_name, ssl=ssl)
-if Is_Sentinel(conn):
-    self.sentinel, self.service_name = Sentinel_Conn(conn, ssl)
-    conn = self.sentinel.master_for(self.service_name, ssl=ssl)
-
-if Is_Cluster(conn):
+    if Is_Cluster(conn):
```
188-206: `close()` is not idempotent — double-close risk from `__exit__` + `__del__`.

`__exit__` calls `close()`, and later `__del__` calls `close()` again. The second call will invoke `self.connection.close()` on an already-closed connection. While `redis.Redis.close()` is generally safe to call twice, it's better to be explicit. Also, Ruff flags the bare `except Exception: pass` in `__del__` (S110/BLE001) — this is acceptable for destructors but can silently swallow real bugs during development.

Suggested fix — make close() idempotent

```diff
 def close(self) -> None:
     """Close the current DB connection and stop the embedded server if present."""
     if hasattr(self, "connection") and self.connection is not None:
         self.connection.close()
+        self.connection = None
     if self._embedded_server is not None:
         self._embedded_server.stop()
         self._embedded_server = None
```
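The double-close concern can be reduced to a standalone sketch, independent of redis (stand-in classes, not the real client):

```python
class FakeConnection:
    """Counts close() calls so a double-close would be observable."""
    def __init__(self):
        self.close_calls = 0

    def close(self):
        self.close_calls += 1

class Client:
    def __init__(self, connection, embedded_server=None):
        self.connection = connection
        self._embedded_server = embedded_server

    def close(self):
        # Null out each resource after releasing it so a second call
        # (e.g. __del__ after __exit__) becomes a harmless no-op.
        if getattr(self, "connection", None) is not None:
            self.connection.close()
            self.connection = None
        if self._embedded_server is not None:
            self._embedded_server.stop()
            self._embedded_server = None

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.close()

    def __del__(self):
        try:
            self.close()
        except Exception:
            pass  # never raise from a destructor

conn = FakeConnection()
with Client(conn):
    pass
print(conn.close_calls)  # → 1, even after __exit__ and __del__ both ran close()
```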
```python
if embedded:
    from ..lite.server import EmbeddedServer

    if max_connections is None:
        max_connections = 16

    server = EmbeddedServer(
        db_path=db_path,
        config=embedded_config,
        startup_timeout=startup_timeout,
    )
    self._embedded_server = server
    connection_pool = redis.BlockingConnectionPool(
        connection_class=redis.UnixDomainSocketConnection,
        path=server.unix_socket_path,
        max_connections=max_connections,
        timeout=None,
        socket_timeout=socket_timeout,
        socket_connect_timeout=socket_connect_timeout,
        socket_keepalive=socket_keepalive,
        socket_keepalive_options=socket_keepalive_options,
        encoding=encoding,
        encoding_errors=encoding_errors,
        decode_responses=True,
        retry_on_error=retry_on_error,
        retry=retry,
        health_check_interval=health_check_interval,
        client_name=client_name,
        lib_name=lib_name,
        lib_version=lib_version,
        username=username,
        password=password,
        credential_provider=credential_provider,
        protocol=protocol,
    )
```
Blocking synchronous calls inside async __init__ will stall the event loop.
EmbeddedServer() (line 80) is fully synchronous — it spawns a subprocess and polls with time.sleep() in a loop for up to startup_timeout seconds (server.py lines 52–82). Calling this directly in the __init__ of an async client blocks the running event loop for the entire startup duration.
Consider wrapping the server startup in asyncio.to_thread() (or loop.run_in_executor()) and providing an async factory method instead of doing this work in __init__:
Example: async factory method

```python
@classmethod
async def create(cls, *, embedded=False, db_path=None, embedded_config=None,
                 startup_timeout=10.0, max_connections=None, **kwargs):
    if embedded:
        import asyncio
        from ..lite.server import EmbeddedServer
        if max_connections is None:
            max_connections = 16
        server = await asyncio.to_thread(
            EmbeddedServer,
            db_path=db_path,
            config=embedded_config,
            startup_timeout=startup_timeout,
        )
        kwargs["connection_pool"] = redis.BlockingConnectionPool(
            connection_class=redis.UnixDomainSocketConnection,
            path=server.unix_socket_path,
            max_connections=max_connections,
            ...
        )
        instance = cls(**kwargs)
        instance._embedded_server = server
        return instance
    else:
        return cls(**kwargs)
```
Codecov Report

❌ Patch coverage is

Additional details and impacted files

```
@@           Coverage Diff            @@
##             main     #164    +/-  ##
========================================
+ Coverage   92.55%   93.32%   +0.76%
========================================
  Files          38       45       +7
  Lines        3009     3474     +465
========================================
+ Hits         2785     3242     +457
- Misses        224      232       +8
```

☔ View full report in Codecov by Sentry.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@falkordb/lite/config.py`:
- Around line 42-50: The code lets user_config override the port set for unix
sockets: after setting config["port"]="0" when unix_socket_path is provided,
config.update(user_config) can re-enable TCP; fix by treating "port" as a
reserved key just like "loadmodule"—in the branch that checks user_config,
validate that "port" is not in user_config (alongside "loadmodule") and raise
ValueError if found (references: unix_socket_path, config["port"], user_config,
config.update).
🧹 Nitpick comments (5)
falkordb/asyncio/falkordb.py (1)
76-114: Minor: collapse duplicate `if embedded:` guards into a single block.

The `embedded` flag is checked at line 76 and again at line 112. Since the second block directly depends on `connection_pool` being set by the first, these can be merged into one `if`/`else`.

♻️ Suggested simplification

```diff
 if embedded:
     from ..lite.server import EmbeddedServer
     if max_connections is None:
         max_connections = 16
     server = EmbeddedServer(
         db_path=db_path,
         config=embedded_config,
         startup_timeout=startup_timeout,
     )
     self._embedded_server = server
     connection_pool = redis.BlockingConnectionPool(
         connection_class=redis.UnixDomainSocketConnection,
         path=server.unix_socket_path,
         max_connections=max_connections,
         timeout=connection_acquire_timeout,
         ...
     )
-
-if embedded:
     conn = redis.Redis(connection_pool=connection_pool, single_connection_client=single_connection_client)
-else:
+else:
     conn = redis.Redis(host=host, port=port, ...)
```

falkordb/lite/server.py (2)
51-52: `atexit` handlers accumulate and are never unregistered.

Each `EmbeddedServer.__init__` registers `self.stop` via `atexit.register`, but `stop()` never calls `atexit.unregister(self.stop)`. If a user creates and tears down multiple embedded instances over a program's lifetime, stale handlers accumulate. While `stop()` is idempotent so this won't cause errors, it keeps dead `EmbeddedServer` objects alive (preventing GC) until interpreter exit, and runs redundant cleanup calls at shutdown.

♻️ Suggested fix — unregister in stop()

```diff
 def stop(self) -> None:
     """Stop embedded redis-server and cleanup temporary files."""
+    atexit.unregister(self.stop)
     if self._process is not None and self._process.poll() is None:
```
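The register/unregister pairing can be demonstrated in isolation; `ManagedServer` below is a stand-in class, not the real `EmbeddedServer`:

```python
import atexit

class ManagedServer:
    """Stand-in showing atexit.register paired with atexit.unregister."""

    def __init__(self):
        self.stop_calls = 0
        self._running = True
        # Safety net: stop at interpreter exit if the user forgets.
        atexit.register(self.stop)

    def stop(self):
        # Drop the exit hook first: once stopped explicitly, interpreter
        # shutdown must not re-run cleanup or keep this object alive.
        # atexit.unregister compares by equality, so a bound method works.
        atexit.unregister(self.stop)
        if not self._running:
            return
        self._running = False
        self.stop_calls += 1

server = ManagedServer()
server.stop()
server.stop()  # idempotent: second call is a no-op
print(server.stop_calls)  # → 1
```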
71-79: Minor: Redis connection leak in `_start()` if `ping()` raises a non-ConnectionError.

If `conn.ping()` raises an exception other than `redis.ConnectionError` (e.g., `redis.TimeoutError`), `conn.close()` on line 75 is skipped. Consider a `try`/`finally` or `with` pattern for the connection.

♻️ Suggested fix

```diff
 if os.path.exists(self._socket_path):
     try:
         conn = redis.Redis(unix_socket_path=self._socket_path, decode_responses=True)
-        conn.ping()
-        conn.close()
-        self._close_stderr_file()
-        return
+        try:
+            conn.ping()
+            self._close_stderr_file()
+            return
+        finally:
+            conn.close()
     except redis.ConnectionError:
         pass
```

falkordb/falkordb.py (1)
82-122: Same minor refactor opportunity: merge duplicate `if embedded:` checks.

Same pattern as the async client — the two `if embedded:` guards (lines 82 and 118) could be a single `if`/`else` block for clarity.

tests/test_embedded_async.py (1)
18-59: Async embedded tests cover the key lifecycle paths.

Pool defaults, custom acquire-timeout propagation, and context-manager cleanup are all verified. The `DummyServer` mock sidesteps the blocking `__init__` issue, keeping tests fast.

One gap: there's no async equivalent of the sync `test_embedded_passes_config_and_db_path` (parameter forwarding for `db_path`, `embedded_config`, `startup_timeout`). Since both clients delegate identically to `EmbeddedServer`, this is low risk, but adding a mirrored test would strengthen parity.
```python
if unix_socket_path:
    config["unixsocket"] = unix_socket_path
    config["unixsocketperm"] = "700"
    config["port"] = "0"

if user_config:
    if "loadmodule" in user_config:
        raise ValueError("'loadmodule' is reserved and cannot be overridden")
    config.update(user_config)
```
user_config can override port after unix socket setup forces it to 0.
Lines 42-45 set port to "0" when a unix socket is configured, but config.update(user_config) on line 50 runs afterward. A user passing {"port": "6379"} would re-enable TCP, which may be unexpected in embedded mode. If this is intentional (allowing advanced users to also expose TCP), consider documenting it; otherwise, add port to the reserved-key validation alongside loadmodule.
🧰 Tools
🪛 Ruff (0.15.0)
[warning] 49-49: Avoid specifying long messages outside the exception class
(TRY003)
🤖 Prompt for AI Agents
In `@falkordb/lite/config.py` around lines 42 - 50, The code lets user_config
override the port set for unix sockets: after setting config["port"]="0" when
unix_socket_path is provided, config.update(user_config) can re-enable TCP; fix
by treating "port" as a reserved key just like "loadmodule"—in the branch that
checks user_config, validate that "port" is not in user_config (alongside
"loadmodule") and raise ValueError if found (references: unix_socket_path,
config["port"], user_config, config.update).
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@tests/test_lite_server.py`:
- Around line 114-127: In test_start_timeout_calls_stop the monkeypatch of
time.monotonic returns a constant which makes the server._start loop never reach
its deadline; change the monkeypatch for falkordb.lite.server.time.monotonic to
return an increasing value (e.g., a closure or generator that increments on each
call so it eventually returns a value >= deadline) so the loop can exit and
raise EmbeddedServerError, keeping the rest of the test mocks (os.path.exists ->
False, DummyProcess/poll_result) unchanged; ensure the replacement is callable
from the test function (referenced symbol: test_start_timeout_calls_stop and
server._start).
🧹 Nitpick comments (2)
tests/test_lite_server.py (2)
52-62: Brittle `__new__` bypass — consider a note or assertion if attributes are added to `EmbeddedServer.__init__`.

Using `__new__` + manual attribute assignment to skip the constructor is a pragmatic approach for unit tests, but any new attribute added to `EmbeddedServer.__init__` will silently be missing here, potentially causing `AttributeError` in tests rather than a clear failure. This is fine for now, but worth being aware of.
177-186: Minor: file handle left open if `_read_stderr` or assertion fails.

`stderr_file` is opened on Line 179 but only closed on Line 186 after the assertion. If anything between those lines raises, the handle leaks. Consider using a `try`/`finally` or a `with` block.

Proposed fix

```diff
 def test_read_stderr_flushes_open_file(tmp_path):
     server = _make_server(tmp_path)
     stderr_file = open(tmp_path / "redis.stderr.log", "w", encoding="utf-8")
-    stderr_file.write("stderr-text")
-    server._stderr_file = stderr_file
-
-    output = server._read_stderr()
-    assert output == "stderr-text"
-
-    stderr_file.close()
+    try:
+        stderr_file.write("stderr-text")
+        server._stderr_file = stderr_file
+        output = server._read_stderr()
+        assert output == "stderr-text"
+    finally:
+        stderr_file.close()
```
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@tests/test_lite_server.py`:
- Around line 178-187: The test test_read_stderr_flushes_open_file opens
stderr_file and assigns it to server._stderr_file but may leak the file if the
assertion fails; wrap the file setup/use in a context manager or a try/finally
so stderr_file.close() always runs (e.g., create the file with with open(... )
as stderr_file: and assign server._stderr_file inside that block or ensure you
call stderr_file.close() in a finally block) ensuring the cleanup happens even
if server._read_stderr() assertion fails.
🧹 Nitpick comments (1)
tests/test_lite_server.py (1)
52-62: Fragile coupling to `EmbeddedServer` internals via `__new__`.

Bypassing `__init__` means any new attribute added to the constructor will silently be missing here, potentially causing `AttributeError` in tests rather than a clear failure. Consider adding a brief comment noting this coupling, or alternatively asserting the expected attributes exist on a real instance so that new fields are surfaced early.
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Resolve conflicts between main branch refactoring (DriverInfo, RedisError handling, close/aclose methods) and embedded lite mode additions.

- Use driver_info=DriverInfo() for non-embedded redis.Redis calls
- Combine embedded server cleanup with RedisError-safe close()
- Rename async close() to aclose() to match main's convention
- Fix mypy errors in lite module (type annotations)
- Fix ruff line-length violation in server.py

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
falkordb/falkordb.py (1)
1-407: ⚠️ Potential issue | 🟡 Minor

Ruff format check failure — file needs reformatting.

The CI lint pipeline reports this file would be reformatted by Ruff. Run `ruff format falkordb/falkordb.py` and commit the result.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@falkordb/falkordb.py` around lines 1 - 407, The file fails Ruff formatting; run an automatic reformat and commit the changes: run `ruff format falkordb/falkordb.py` (or your editor/IDE integration) to apply Ruff's style fixes across the FalkorDB class and its methods (e.g., __init__, from_url, select_graph, udf_load/udf_list/udf_flush/udf_delete) then stage and commit the rewritten file so the CI lint step passes.

falkordb/asyncio/falkordb.py (1)
1-384: ⚠️ Potential issue | 🟡 Minor

Ruff format check failure — file needs reformatting.

The CI lint pipeline reports this file would be reformatted by Ruff. Run `ruff format falkordb/asyncio/falkordb.py` and commit the result.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@falkordb/asyncio/falkordb.py` around lines 1 - 384, The file fails Ruff formatting — run the formatter and commit the changes; specifically, run `ruff format` (e.g. ruff format falkordb/asyncio/falkordb.py) to reformat the FalkorDB class and its methods (e.g. __init__, from_url, select_graph, config_get/config_set, udf_* methods), then add and commit the resulting changes so the CI lint pipeline passes.
🧹 Nitpick comments (1)
falkordb/asyncio/falkordb.py (1)
82-122: Two consecutive `if embedded:` blocks can be merged.

The second `if embedded:`/`else:` at line 118 is a direct continuation of the first block — `conn` is always derived from `connection_pool` built in the first block. Merging them removes the redundant guard:

♻️ Proposed refactor

```diff
 if embedded:
     from ..lite.server import EmbeddedServer
     if max_connections is None:
         max_connections = 16
     server = EmbeddedServer(...)
     self._embedded_server = server
     connection_pool = redis.BlockingConnectionPool(
         ...
     )
+    conn = redis.Redis(
+        connection_pool=connection_pool,
+        single_connection_client=single_connection_client,
+    )
-
-if embedded:
-    conn = redis.Redis(
-        connection_pool=connection_pool,
-        single_connection_client=single_connection_client,
-    )
-else:
+else:
     conn = redis.Redis(
         host=host,
         ...
     )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@falkordb/asyncio/falkordb.py` around lines 82 - 122, Merge the two consecutive "if embedded:" blocks by moving the redis.Redis(...) creation into the first embedded branch so the EmbeddedServer instantiation, assignment to self._embedded_server, creation of connection_pool (via redis.BlockingConnectionPool) and creation of conn (via redis.Redis) all live inside a single "if embedded:" branch; remove the redundant second guard, ensure connection_pool and conn remain in scope for later use, and keep references to EmbeddedServer, self._embedded_server, connection_pool, redis.BlockingConnectionPool, and redis.Redis to locate and update the code.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@falkordb/asyncio/falkordb.py`:
- Line 80: The class lacks a class-level default for _embedded_server which
leads to AttributeError when instances are created without running __init__
(f.e. in tests); add a class attribute declaration on the FalkorDB class like
"_embedded_server = None" so any instance (including ones created via
object.__new__ or mocks) always has this attribute defined, and ensure the
aclose() method still checks self._embedded_server before using it (referencing
FalkorDB, _embedded_server, and aclose).
In `@falkordb/falkordb.py`:
- Line 86: The class FalkorDB is missing a class-level default for
_embedded_server causing AttributeError when __init__ is bypassed; add a class
attribute _embedded_server = None on the FalkorDB class (so instances that skip
__init__ still have the attribute) and leave the existing instance assignments
in __init__ and the close() method unchanged; reference the FalkorDB class, the
_embedded_server attribute, the __init__ method, and the close method when
making this change.
In `@falkordb/lite/config.py`:
- Around line 54-63: The serialization loop in config.py currently emits a bare
"save " for an empty string; update the logic in the loop that builds lines (the
block handling key == "save" and the generic lines.append(f"{key} {value}")
path) so that when key == "save" and value == "" (empty string) you append the
explicit string 'save ""' instead of f"{key} {value}"; keep the existing
handling for list values in the key == "save" branch and leave other keys
unchanged.
In `@falkordb/lite/server.py`:
- Around line 73-83: The code creates conn = redis.Redis(...) but only closes it
on redis.ConnectionError paths; if conn.ping() raises a different exception the
connection and stderr file remain open. Wrap ping in a try/except/else/finally:
after creating conn call conn.ping() inside try, except redis.ConnectionError:
pass, else: call self._close_stderr_file() and return, and in a finally block
always call conn.close() to ensure the socket is closed on all error paths;
reference conn, redis.Redis, conn.ping, conn.close, and self._close_stderr_file
when making the change.
---
Outside diff comments:
In `@falkordb/asyncio/falkordb.py`:
- Around line 1-384: The file fails Ruff formatting — run the formatter and
commit the changes; specifically, run `ruff format` (e.g. ruff format
falkordb/asyncio/falkordb.py) to reformat the FalkorDB class and its methods
(e.g. __init__, from_url, select_graph, config_get/config_set, udf_* methods),
then add and commit the resulting changes so the CI lint pipeline passes.
In `@falkordb/falkordb.py`:
- Around line 1-407: The file fails Ruff formatting; run an automatic reformat
and commit the changes: run `ruff format falkordb/falkordb.py` (or your
editor/IDE integration) to apply Ruff's style fixes across the FalkorDB class
and its methods (e.g., __init__, from_url, select_graph,
udf_load/udf_list/udf_flush/udf_delete) then stage and commit the rewritten file
so the CI lint step passes.
---
Duplicate comments:
In `@falkordb/asyncio/falkordb.py`:
- Around line 88-92: The EmbeddedServer startup blocks the event loop because
FalkorDB().__init__ calls EmbeddedServer (which polls with time.sleep); fix by
adding an async factory classmethod (e.g., FalkorDB.create) that constructs a
non-embedded instance via cls(**kwargs) and, when embedded=True, creates the
EmbeddedServer off the event loop (use asyncio.to_thread(EmbeddedServer,
db_path=..., config=..., startup_timeout=...)), set instance._embedded_server,
build the redis BlockingConnectionPool pointing at server.unix_socket_path, and
assign instance.connection, instance.flushdb and instance.execute_command before
returning the instance; leave the synchronous __init__ untouched and document
callers to use FalkorDB.create for embedded starts.
In `@falkordb/lite/config.py`:
- Around line 42-50: The code allows user_config to override the port forced to
"0" for Unix-socket mode; update the logic in the block handling user_config so
that "port" is treated as a reserved key (in addition to "loadmodule") when
unix_socket_path is set: detect if "port" is present in user_config and either
raise ValueError("'port' is reserved and cannot be overridden") or remove/ignore
it before calling config.update(user_config), referencing unix_socket_path,
config, user_config, and the existing "loadmodule" check to locate where to add
this guard.
---
Nitpick comments:
In `@falkordb/asyncio/falkordb.py`:
- Around line 82-122: Merge the two consecutive "if embedded:" blocks by moving
the redis.Redis(...) creation into the first embedded branch so the
EmbeddedServer instantiation, assignment to self._embedded_server, creation of
connection_pool (via redis.BlockingConnectionPool) and creation of conn (via
redis.Redis) all live inside a single "if embedded:" branch; remove the
redundant second guard, ensure connection_pool and conn remain in scope for
later use, and keep references to EmbeddedServer, self._embedded_server,
connection_pool, redis.BlockingConnectionPool, and redis.Redis to locate and
update the code.
```python
    startup_timeout=10.0,
    connection_acquire_timeout=5.0,
):
    self._embedded_server = None
```
AttributeError: 'FalkorDB' object has no attribute '_embedded_server' — CI test failure.
Although __init__ sets self._embedded_server = None at line 80, the CI test fails with AttributeError at line 282, indicating instances are reaching aclose() without _embedded_server set (e.g., via object.__new__, a patched __init__, or mock-created instances in existing tests). Add a class-level declaration as a safe default:
🐛 Proposed fix
```diff
 class FalkorDB:
     """
     Asynchronous FalkorDB Class for interacting with a FalkorDB server.
     ...
     """
+    _embedded_server = None
+
     def __init__(
         self,
         ...
```

🧰 Tools
🪛 GitHub Actions: Lint
[error] Ruff format check would reformat this file.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@falkordb/asyncio/falkordb.py` at line 80, The class lacks a class-level
default for _embedded_server which leads to AttributeError when instances are
created without running __init__ (f.e. in tests); add a class attribute
declaration on the FalkorDB class like "_embedded_server = None" so any instance
(including ones created via object.__new__ or mocks) always has this attribute
defined, and ensure the aclose() method still checks self._embedded_server
before using it (referencing FalkorDB, _embedded_server, and aclose).
```python
    embedded_config=None,
    startup_timeout=10.0,
):
    self._embedded_server = None
```
AttributeError: 'FalkorDB' object has no attribute '_embedded_server' — CI test failure.
Same root cause as the async variant: instances reaching close() (line 204) without _embedded_server set, because __init__ was bypassed by test infrastructure. Add a class-level default:
🐛 Proposed fix
```diff
 class FalkorDB:
     """
     FalkorDB Class for interacting with a FalkorDB server.
     ...
     """
+    _embedded_server = None
+
     def __init__(
         self,
         ...
```

🧰 Tools
🪛 GitHub Actions: Lint
[error] Ruff format check would reformat this file.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@falkordb/falkordb.py` at line 86, The class FalkorDB is missing a class-level
default for _embedded_server causing AttributeError when __init__ is bypassed;
add a class attribute _embedded_server = None on the FalkorDB class (so
instances that skip __init__ still have the attribute) and leave the existing
instance assignments in __init__ and the close() method unchanged; reference the
FalkorDB class, the _embedded_server attribute, the __init__ method, and the
close method when making this change.
```python
lines = []
for key, value in config.items():
    if key == "save" and isinstance(value, list):
        for rule in value:
            if isinstance(rule, (tuple, list)) and len(rule) == 2:
                lines.append(f"save {rule[0]} {rule[1]}")
            else:
                lines.append(f"save {rule}")
        continue
    lines.append(f"{key} {value}")
```
save "" serialization omits quotes — Redis may not disable RDB snapshots correctly.
When DEFAULT_CONFIG's "save": "" is serialized via f"{key} {value}", it produces the bare line save (trailing space, no value). Redis's redis.conf requires the explicit form save "" (quoted empty string) to disable snapshots. A bare save with no argument is not guaranteed to have the same effect and may cause a config parse error depending on the Redis version.
🐛 Proposed fix

```diff
 lines = []
 for key, value in config.items():
     if key == "save" and isinstance(value, list):
         for rule in value:
             if isinstance(rule, (tuple, list)) and len(rule) == 2:
                 lines.append(f"save {rule[0]} {rule[1]}")
             else:
                 lines.append(f"save {rule}")
         continue
+    if isinstance(value, str) and value == "":
+        lines.append(f'{key} ""')
+        continue
     lines.append(f"{key} {value}")
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@falkordb/lite/config.py` around lines 54 - 63, The serialization loop in
config.py currently emits a bare "save " for an empty string; update the logic
in the loop that builds lines (the block handling key == "save" and the generic
lines.append(f"{key} {value}") path) so that when key == "save" and value == ""
(empty string) you append the explicit string 'save ""' instead of f"{key}
{value}"; keep the existing handling for list values in the key == "save" branch
and leave other keys unchanged.
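With the proposed fix applied, a serializer along these lines (the function name and the sample config values are illustrative, not the module's actual API) emits the quoted form Redis expects:

```python
def render_config(config):
    """Render a config dict into redis.conf-style lines."""
    lines = []
    for key, value in config.items():
        if key == "save" and isinstance(value, list):
            for rule in value:
                if isinstance(rule, (tuple, list)) and len(rule) == 2:
                    lines.append(f"save {rule[0]} {rule[1]}")
                else:
                    lines.append(f"save {rule}")
            continue
        if isinstance(value, str) and value == "":
            # Redis requires the quoted empty string to disable RDB snapshots.
            lines.append(f'{key} ""')
            continue
        lines.append(f"{key} {value}")
    return "\n".join(lines)

print(render_config({"save": "", "appendonly": "no"}))
# save ""
# appendonly no
```

Note that list-valued `save` rules still expand to one `save <seconds> <changes>` line each, matching redis.conf semantics.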
```python
if os.path.exists(self._socket_path):
    try:
        conn = redis.Redis(
            unix_socket_path=self._socket_path, decode_responses=True
        )
        conn.ping()
        conn.close()
        self._close_stderr_file()
        return
    except redis.ConnectionError:
        pass
```
conn is not closed when ping() raises a non-ConnectionError exception.
If conn.ping() raises anything other than redis.ConnectionError (e.g., a protocol/encoding error), the exception propagates out of the readiness loop without closing conn or self._stderr_file. Use a finally block:
🐛 Proposed fix

```diff
 if os.path.exists(self._socket_path):
     try:
         conn = redis.Redis(
             unix_socket_path=self._socket_path, decode_responses=True
         )
-        conn.ping()
-        conn.close()
-        self._close_stderr_file()
-        return
+        try:
+            conn.ping()
+            self._close_stderr_file()
+            return
+        finally:
+            conn.close()
     except redis.ConnectionError:
         pass
```
pass📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| if os.path.exists(self._socket_path): | |
| try: | |
| conn = redis.Redis( | |
| unix_socket_path=self._socket_path, decode_responses=True | |
| ) | |
| conn.ping() | |
| conn.close() | |
| self._close_stderr_file() | |
| return | |
| except redis.ConnectionError: | |
| pass | |
| if os.path.exists(self._socket_path): | |
| try: | |
| conn = redis.Redis( | |
| unix_socket_path=self._socket_path, decode_responses=True | |
| ) | |
| try: | |
| conn.ping() | |
| self._close_stderr_file() | |
| return | |
| finally: | |
| conn.close() | |
| except redis.ConnectionError: | |
| pass |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@falkordb/lite/server.py` around lines 73 - 83, The code creates conn =
redis.Redis(...) but only closes it on redis.ConnectionError paths; if
conn.ping() raises a different exception the connection and stderr file remain
open. Wrap ping in a try/except/else/finally: after creating conn call
conn.ping() inside try, except redis.ConnectionError: pass, else: call
self._close_stderr_file() and return, and in a finally block always call
conn.close() to ensure the socket is closed on all error paths; reference conn,
redis.Redis, conn.ping, conn.close, and self._close_stderr_file when making the
change.
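The key property of the try/finally form is that `close()` runs on every exit path, including exceptions other than `ConnectionError`. A stand-in sketch demonstrating this (the `FakeConn`, `FakeConnectionError`, and `probe` names are illustrative; no real redis client is involved):

```python
class FakeConnectionError(Exception):
    """Stand-in for redis.ConnectionError."""

class FakeConn:
    """Stand-in connection whose ping raises a non-ConnectionError."""
    def __init__(self):
        self.closed = False

    def ping(self):
        raise ValueError("unexpected protocol error")

    def close(self):
        self.closed = True

def probe(conn):
    try:
        try:
            conn.ping()
            return True        # server is ready
        finally:
            conn.close()       # runs on every exit path
    except FakeConnectionError:
        return False           # not ready yet; caller retries

conn = FakeConn()
try:
    probe(conn)
except ValueError:
    pass                       # the unexpected error still propagates...
print(conn.closed)             # ...but the connection was closed: True
```

Under the original code, the same `ValueError` would escape the readiness loop with `conn` still open, leaking the socket.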
Use getattr() to safely access _embedded_server in close() and aclose(), handling the case where FalkorDB objects are created via object.__new__() without calling __init__ (as done in test_close.py).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

Format PR files to match project's ruff line-length (88) and update uv.lock to reflect the new lite optional dependency.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

- Add class-level _embedded_server = None to both sync and async FalkorDB classes to prevent AttributeError when __init__ is bypassed
- Fix user_config overriding port after unix socket setup by re-asserting port=0 after user_config merge
- Fix save "" serialization to emit 'save ""' instead of bare 'save '
- Ensure redis conn is always closed via finally block in startup loop
- Add explanatory comment to empty except in __del__
- Replace explicit __del__() call with del+gc.collect() in test

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Summary

- Add a `lite` extra with the `falkordb-bin` dependency
- Add the `falkordb.lite` package for binary resolution, config generation, and embedded server lifecycle
- Add `embedded=True` support to sync and async `FalkorDB` constructors with pool defaults and lifecycle cleanup APIs

Details

- New constructor parameters: `embedded`, `db_path`, `embedded_config`, `startup_timeout`
- Embedded connection pools default to `max_connections=16`
- Sync: `close()`, `with` context manager, and destructor cleanup for embedded mode
- Async: `await close()` and `async with` cleanup for embedded mode

Validation

- `python3 -m py_compile` on all modified/new Python files
- `pytest`/`pytest-asyncio` are unavailable in the active runtime

Closes #163
Summary by CodeRabbit

- New Features
- Documentation
- Tests
- Chores