
Add embedded FalkorDB lite mode support#164

Open
gkorland wants to merge 14 commits into main from feat/163-embedded-lite-support

Conversation

@gkorland
Contributor

@gkorland gkorland commented Feb 14, 2026

Summary

  • add optional lite extra with falkordb-bin dependency
  • introduce falkordb.lite package for binary resolution, config generation, and embedded server lifecycle
  • add embedded=True support to sync and async FalkorDB constructors with pool defaults and lifecycle cleanup APIs
  • add embedded-focused tests and README usage/install documentation

Details

  • new constructor args: embedded, db_path, embedded_config, startup_timeout
  • embedded mode starts local redis-server + FalkorDB module via unix socket and uses a connection pool (default max_connections=16)
  • sync client now supports close(), with context manager, and destructor cleanup for embedded mode
  • async client now supports await close() and async with cleanup for embedded mode

Validation

  • syntax validation passed with python3 -m py_compile on all modified/new Python files
  • full pytest run could not be executed in this environment because pytest/pytest-asyncio are unavailable in the active runtime

Closes #163

Summary by CodeRabbit

  • New Features

    • In-process "embedded" FalkorDB mode for sync and async clients with lifecycle and context-manager support and sensible default connection sizing (pool=16).
  • Documentation

    • README adds embedded-mode installation (lite extra) and sync/async usage examples demonstrating embedded usage.
  • Tests

    • New tests covering embedded server startup, config generation, connection behavior, timeouts, lifecycle closing, and error paths.
  • Chores

    • Added optional "lite" extras for embedded binaries and utilities to manage embedded server and config.

@coderabbitai
Contributor

coderabbitai bot commented Feb 14, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Use the checkboxes below for quick actions:

  • ▶️ Resume reviews
  • 🔍 Trigger review
📝 Walkthrough


Adds optional embedded FalkorDB support via a new falkordb.lite subpackage, updates sync and async FalkorDB constructors to start/manage an EmbeddedServer when requested, wires Unix-domain socket connection pools for embedded mode, adds lifecycle/context-manager methods, new utilities (binaries/config/server), packaging extra, and tests/docs/example updates.

Changes

  • Core FalkorDB (sync) — falkordb/falkordb.py: constructor extended with embedded, db_path, embedded_config, startup_timeout; starts EmbeddedServer when embedded, uses a Unix-domain-socket ConnectionPool for embedded mode, adds close() and __enter__/__exit__ lifecycle methods, and ensures server shutdown on close.
  • Core FalkorDB (async) — falkordb/asyncio/falkordb.py: async constructor extended similarly, plus connection_acquire_timeout; starts EmbeddedServer for embedded mode and builds a BlockingConnectionPool over the server socket (default max_connections=16); adds async lifecycle methods close(), __aenter__, __aexit__; aclose() now stops the embedded server.
  • Embedded subpackage — falkordb/lite/__init__.py, falkordb/lite/binaries.py, falkordb/lite/config.py, falkordb/lite/server.py: new falkordb.lite implements binary resolution (raises BinaryNotFoundError with install guidance), generate_config() to produce redis.conf (persistence, unix-socket, loadmodule), and EmbeddedServer to write config, launch redis-server with the FalkorDB module, wait for readiness, expose unix_socket_path, perform graceful shutdown/cleanup, and surface startup errors.
  • Packaging & Docs — pyproject.toml, README.md: adds the optional extra lite = ["falkordb-bin>=1.4.0,<2.0.0"] and updates the README with embedded installation and sync/async embedded usage examples.
  • Tests (sync, async & lite) — tests/test_embedded.py, tests/test_embedded_async.py, tests/test_lite_binaries.py, tests/test_lite_server.py: new test suites covering embedded startup/shutdown, default/custom pool and timeouts, context-manager shutdown, config generation, binary-not-found handling, and extensive EmbeddedServer lifecycle and error conditions using monkeypatch/DummyProcess/DummyServer fixtures.

Sequence Diagram(s)

sequenceDiagram
    participant App as Application
    participant Falkor as FalkorDB
    participant Lite as falkordb.lite
    participant Server as EmbeddedServer
    participant Redis as redis-server+module

    App->>Falkor: instantiate(embedded=True, db_path, ...)
    Falkor->>Lite: resolve binaries (falkordb-bin)
    Lite-->>Falkor: redis-server path, falkordb module path

    Falkor->>Server: start(db_path, config, startup_timeout)
    Server->>Redis: launch redis-server --loadmodule <falkordb_module>
    Server->>Redis: wait for unix socket + PING
    Redis-->>Server: ready

    Server-->>Falkor: unix_socket_path
    Falkor->>Falkor: create Unix-domain connection pool
    App->>Falkor: execute command
    Falkor->>Redis: send command via socket
    Redis-->>Falkor: response
    Falkor-->>App: result

    App->>Falkor: close() / exit context
    Falkor->>Server: stop()
    Server->>Redis: SHUTDOWN & cleanup
    Server-->>Falkor: stopped

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Poem

🐰 I found a socket snug and round,

wrote configs, coaxed a tiny sound.
Modules hopped and PING replied,
pools assembled, servers sighed.
Close the burrow — all safe inside.

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 19.40%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (4 passed)

  • Description Check — ✅ Passed: check skipped; CodeRabbit’s high-level summary is enabled.
  • Title check — ✅ Passed: the PR title 'Add embedded FalkorDB lite mode support' clearly and concisely describes the main objective of the changeset: introducing embedded FalkorDB functionality via a new lite mode.
  • Linked Issues check — ✅ Passed: the PR fully implements the coding requirements from issue #163: embedded-mode constructor params, the falkordb.lite subpackage with binaries/config/server modules, EmbeddedServer lifecycle management, Unix-socket connections with pooling, and comprehensive test coverage.
  • Out of Scope Changes check — ✅ Passed: all changes are directly scoped to the objectives in issue #163: README updates for embedded usage, the new lite subpackage, constructor extensions, lifecycle management, an optional dependency, and embedded-focused tests, with no unrelated modifications.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch feat/163-embedded-lite-support

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 8

🤖 Fix all issues with AI agents
In `@falkordb/asyncio/falkordb.py`:
- Around line 154-159: The close method calls the synchronous
EmbeddedServer.stop() from the async function close, which will block the event
loop; change the call to run the blocking stop on a thread (e.g. await
asyncio.to_thread(self._embedded_server.stop) or use loop.run_in_executor) and
then set self._embedded_server = None; keep the existing aclose for connection
as-is and ensure you import asyncio if not already.
- Around line 86-108: The BlockingConnectionPool is being created with
timeout=None which causes indefinite waiting; update the connection pool
creation (the connection_pool assignment that calls
redis.BlockingConnectionPool) to use a finite, configurable acquire timeout
instead of None — either add/use a parameter like connection_acquire_timeout
(default to a sensible value, e.g. 5s) and pass it as the timeout, or if you
want behavior consistent with the sync redis.ConnectionPool (raise immediately
when exhausted) set timeout=0 to make acquisition non-blocking; adjust any
callers/config to surface this new config and document the chosen default.
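The to-thread fix for the blocking close can be sketched standalone. DummyEmbeddedServer below is a stand-in for the real EmbeddedServer (whose synchronous stop() shuts down a subprocess); Client mirrors only the relevant slice of the async FalkorDB client:

```python
import asyncio

# DummyEmbeddedServer stands in for falkordb.lite's EmbeddedServer, whose
# stop() is synchronous and may block while the subprocess shuts down.
class DummyEmbeddedServer:
    def __init__(self):
        self.stopped = False

    def stop(self):
        self.stopped = True  # the real stop() would SHUTDOWN redis and wait

class Client:
    def __init__(self):
        self._embedded_server = DummyEmbeddedServer()

    async def close(self):
        if self._embedded_server is not None:
            # Run the blocking stop() on a worker thread, not the event loop.
            await asyncio.to_thread(self._embedded_server.stop)
            self._embedded_server = None

client = Client()
server = client._embedded_server
asyncio.run(client.close())
```

With the real server, any work done between scheduling and completion of stop() stays responsive, since only the worker thread blocks.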

In `@falkordb/falkordb.py`:
- Around line 80-81: Remove the dead attribute assignment to self._embedded in
the class where self._embedded_server is set (remove the line "self._embedded =
embedded"); search for any references to self._embedded (there should be none)
and delete the assignment only, keeping self._embedded_server and related logic
intact so behavior is unchanged.

In `@falkordb/lite/config.py`:
- Around line 49-50: user_config is merged after setting loadmodule so a
provided {"loadmodule": "..."} can override the embedded server path; change the
merge order or guard the key: either apply user_config to config first and then
set config["loadmodule"] = embedded_path, or before calling
config.update(user_config) remove or reject the reserved "loadmodule" key (e.g.,
raise ValueError or log/warn) to prevent silent override; update the code around
config.update(user_config) and the loadmodule assignment to enforce this
protection using the symbols user_config, config.update, and the "loadmodule"
key.
- Around line 17-21: PERSISTENT_OVERRIDES currently stores the Redis "save"
rules as one space-separated string ("save": "900 1 300 10 60 10000"), which
produces an invalid single-line directive; change the "save" entry to a list
(e.g. list of tuples or list of strings) so each rule is represented separately,
and update the code that emits the redis.conf (the logic that iterates
PERSISTENT_OVERRIDES when writing the config) to expand the "save" key into
multiple lines (one "save <seconds> <changes>" per rule) rather than joining
them into one line; keep other keys like "appendonly" and "appendfsync"
serialized as single-line directives.
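Both config.py fixes above can be combined into one self-contained sketch: reserved keys are rejected before the merge, loadmodule is set after it, and the save rules live in a list that expands to one directive per line. The names render_config and RESERVED_KEYS are illustrative, not the module's actual API:

```python
# Illustrative sketch of the suggested generate_config fixes; names are assumed.
RESERVED_KEYS = {"loadmodule", "port"}

def render_config(module_path, unix_socket_path, user_config=None):
    config = {
        "appendonly": "yes",
        "appendfsync": "everysec",
        # Each save rule is a separate entry so it serializes one line apiece.
        "save": ["900 1", "300 10", "60 10000"],
        "unixsocket": unix_socket_path,
        "unixsocketperm": "700",
        "port": "0",  # disable TCP when a unix socket is used
    }
    if user_config:
        bad = RESERVED_KEYS & user_config.keys()
        if bad:
            raise ValueError(f"reserved keys cannot be overridden: {sorted(bad)}")
        config.update(user_config)
    # Set loadmodule *after* merging so user_config can never replace it.
    config["loadmodule"] = module_path

    lines = []
    for key, value in config.items():
        if isinstance(value, list):  # expand multi-valued directives like "save"
            lines.extend(f"{key} {v}" for v in value)
        else:
            lines.append(f"{key} {value}")
    return "\n".join(lines)
```

The output contains three separate `save <seconds> <changes>` lines, and passing `{"port": "6379"}` or `{"loadmodule": "..."}` raises ValueError instead of silently re-enabling TCP or swapping the module.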

In `@falkordb/lite/server.py`:
- Around line 52-57: The _start method currently passes stdout=PIPE and
stderr=PIPE to subprocess.Popen which can deadlock if redis-server writes enough
output; change _start to avoid piping both streams by redirecting stdout to
subprocess.DEVNULL (or a log file) and direct stderr to a file in self._tmpdir
(e.g., set self._stderr_path and open self._stderr_file) and pass that file
handle as stderr to Popen so the OS pipe cannot fill, keep self._process
assignment but ensure you close the stderr file and read the file on early
failure (where stderr is currently read) to preserve error diagnostics.
- Around line 88-106: The stop method's call to conn.shutdown() raises redis
ConnectionError as part of a normal shutdown, so change the exception handling
in stop to catch redis.exceptions.ConnectionError (or redis.ConnectionError)
separately: call conn.shutdown(...) and then either ignore a ConnectionError
(treat it as successful shutdown) while still ensuring conn.close() in a finally
block, but only call self._process.terminate() for unexpected exceptions; keep
the subprocess wait/kill logic intact and reference the stop method,
conn.shutdown, conn.close, and self._process.terminate to locate and update the
logic.
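The piping fix can be illustrated with a stdlib-only sketch: stdout goes to DEVNULL and stderr to a file in a temp directory, so the OS pipe can never fill, and the file is read back to preserve diagnostics on early failure. The child command here is a stand-in for redis-server:

```python
import os
import subprocess
import sys
import tempfile

tmpdir = tempfile.mkdtemp()
stderr_path = os.path.join(tmpdir, "redis.stderr.log")

# Redirect stdout to DEVNULL and stderr to a file handle, never to PIPE.
with open(stderr_path, "w", encoding="utf-8") as stderr_file:
    proc = subprocess.Popen(
        [sys.executable, "-c", "import sys; sys.stderr.write('boot failed')"],
        stdout=subprocess.DEVNULL,
        stderr=stderr_file,
    )
    proc.wait()

# On early failure, read the log file instead of a (possibly full) pipe.
with open(stderr_path, encoding="utf-8") as f:
    diagnostics = f.read()
```

In the real _start, the file handle would be kept on self (e.g. as the prompt's self._stderr_file) and closed in stop()/cleanup.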
🧹 Nitpick comments (6)
tests/test_embedded.py (1)

10-19: Consider extracting DummyServer to a shared conftest.

Both tests/test_embedded.py and tests/test_embedded_async.py define nearly identical DummyServer classes (differing only in the socket path string). A shared fixture or helper in conftest.py would reduce duplication.

falkordb/lite/server.py (1)

69-76: Readiness check only catches redis.ConnectionError — consider broadening to redis.RedisError.

If the server is mid-startup, other transient errors (e.g., redis.TimeoutError) could bubble up and break the startup loop.

Proposed fix
                 try:
                     conn = redis.Redis(unix_socket_path=self._socket_path, decode_responses=True)
                     conn.ping()
                     conn.close()
                     return
-                except redis.ConnectionError:
+                except (redis.ConnectionError, redis.TimeoutError):
                     pass
falkordb/asyncio/falkordb.py (1)

110-136: Embedded mode skips cluster/sentinel validation — good, but consider guarding explicitly.

Lines 138–148 still run Is_Cluster(conn) on the embedded connection. If the FalkorDB module ever responds with cluster-like metadata, this could unexpectedly wrap the embedded connection in a cluster client. A simple early-return guard would make the intent explicit:

if not embedded:
    if Is_Cluster(conn):
        ...

This is minor since an embedded single-node server is unlikely to trigger the cluster check, but it communicates intent clearly.

falkordb/falkordb.py (3)

83-117: Embedded server setup looks correct for the sync client.

The lazy import of EmbeddedServer, default max_connections=16, and pool construction via UnixDomainSocketConnection are well-structured. One note: the sync client uses redis.ConnectionPool (non-blocking) while the async variant uses redis.BlockingConnectionPool. When the pool is exhausted, the sync client will raise immediately whereas the async client waits. Consider aligning the behavior — redis.BlockingConnectionPool would be more appropriate here too, to avoid abrupt failures under concurrent load:

-            connection_pool = redis.ConnectionPool(
+            connection_pool = redis.BlockingConnectionPool(
                 connection_class=redis.UnixDomainSocketConnection,
                 ...
+                timeout=20,  # seconds to wait for a free connection
             )

166-182: Sentinel/Cluster checks run on embedded connections.

Same as noted for the async variant — Is_Sentinel and Is_Cluster will probe the embedded single-node server. While unlikely to trigger, an explicit guard makes the intent clear and avoids unnecessary round-trips:

Suggested fix
+        if not embedded:
+            if Is_Sentinel(conn):
+                self.sentinel, self.service_name = Sentinel_Conn(conn, ssl)
+                conn = self.sentinel.master_for(self.service_name, ssl=ssl)

-        if Is_Sentinel(conn):
-            self.sentinel, self.service_name = Sentinel_Conn(conn, ssl)
-            conn = self.sentinel.master_for(self.service_name, ssl=ssl)
-
-        if Is_Cluster(conn):
+            if Is_Cluster(conn):

188-206: close() is not idempotent — double-close risk from __exit__ + __del__.

__exit__ calls close(), and later __del__ calls close() again. The second call will invoke self.connection.close() on an already-closed connection. While redis.Redis.close() is generally safe to call twice, it's better to be explicit. Also, Ruff flags the bare except Exception: pass in __del__ (S110/BLE001) — this is acceptable for destructors but can silently swallow real bugs during development.

Suggested fix — make close() idempotent
     def close(self) -> None:
         """Close the current DB connection and stop the embedded server if present."""
         if hasattr(self, "connection") and self.connection is not None:
             self.connection.close()
+            self.connection = None
         if self._embedded_server is not None:
             self._embedded_server.stop()
             self._embedded_server = None

Comment on lines +74 to +108
if embedded:
    from ..lite.server import EmbeddedServer

    if max_connections is None:
        max_connections = 16

    server = EmbeddedServer(
        db_path=db_path,
        config=embedded_config,
        startup_timeout=startup_timeout,
    )
    self._embedded_server = server
    connection_pool = redis.BlockingConnectionPool(
        connection_class=redis.UnixDomainSocketConnection,
        path=server.unix_socket_path,
        max_connections=max_connections,
        timeout=None,
        socket_timeout=socket_timeout,
        socket_connect_timeout=socket_connect_timeout,
        socket_keepalive=socket_keepalive,
        socket_keepalive_options=socket_keepalive_options,
        encoding=encoding,
        encoding_errors=encoding_errors,
        decode_responses=True,
        retry_on_error=retry_on_error,
        retry=retry,
        health_check_interval=health_check_interval,
        client_name=client_name,
        lib_name=lib_name,
        lib_version=lib_version,
        username=username,
        password=password,
        credential_provider=credential_provider,
        protocol=protocol,
    )
Contributor


⚠️ Potential issue | 🟠 Major

Blocking synchronous calls inside async __init__ will stall the event loop.

EmbeddedServer() (line 80) is fully synchronous — it spawns a subprocess and polls with time.sleep() in a loop for up to startup_timeout seconds (server.py lines 52–82). Calling this directly in the __init__ of an async client blocks the running event loop for the entire startup duration.

Consider wrapping the server startup in asyncio.to_thread() (or loop.run_in_executor()) and providing an async factory method instead of doing this work in __init__:

Example: async factory method
@classmethod
async def create(cls, *, embedded=False, db_path=None, embedded_config=None,
                 startup_timeout=10.0, max_connections=None, **kwargs):
    if embedded:
        import asyncio
        from ..lite.server import EmbeddedServer

        if max_connections is None:
            max_connections = 16

        server = await asyncio.to_thread(
            EmbeddedServer,
            db_path=db_path,
            config=embedded_config,
            startup_timeout=startup_timeout,
        )
        kwargs["connection_pool"] = redis.BlockingConnectionPool(
            connection_class=redis.UnixDomainSocketConnection,
            path=server.unix_socket_path,
            max_connections=max_connections,
            ...
        )
        instance = cls(**kwargs)
        instance._embedded_server = server
        return instance
    else:
        return cls(**kwargs)

@codecov

codecov bot commented Feb 14, 2026

Codecov Report

❌ Patch coverage is 97.89916% with 10 lines in your changes missing coverage. Please review.
✅ Project coverage is 93.32%. Comparing base (8026173) to head (1b1b8ab).

Files with missing lines Patch % Lines
falkordb/lite/server.py 94.44% 5 Missing ⚠️
falkordb/lite/config.py 92.30% 3 Missing ⚠️
falkordb/falkordb.py 92.00% 2 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #164      +/-   ##
==========================================
+ Coverage   92.55%   93.32%   +0.76%     
==========================================
  Files          38       45       +7     
  Lines        3009     3474     +465     
==========================================
+ Hits         2785     3242     +457     
- Misses        224      232       +8     

☔ View full report in Codecov by Sentry.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@falkordb/lite/config.py`:
- Around line 42-50: The code lets user_config override the port set for unix
sockets: after setting config["port"]="0" when unix_socket_path is provided,
config.update(user_config) can re-enable TCP; fix by treating "port" as a
reserved key just like "loadmodule"—in the branch that checks user_config,
validate that "port" is not in user_config (alongside "loadmodule") and raise
ValueError if found (references: unix_socket_path, config["port"], user_config,
config.update).
🧹 Nitpick comments (5)
falkordb/asyncio/falkordb.py (1)

76-114: Minor: collapse duplicate if embedded: guards into a single block.

The embedded flag is checked at line 76 and again at line 112. Since the second block directly depends on connection_pool being set by the first, these can be merged into one if/else.

♻️ Suggested simplification
         if embedded:
             from ..lite.server import EmbeddedServer
 
             if max_connections is None:
                 max_connections = 16
 
             server = EmbeddedServer(
                 db_path=db_path,
                 config=embedded_config,
                 startup_timeout=startup_timeout,
             )
             self._embedded_server = server
             connection_pool = redis.BlockingConnectionPool(
                 connection_class=redis.UnixDomainSocketConnection,
                 path=server.unix_socket_path,
                 max_connections=max_connections,
                 timeout=connection_acquire_timeout,
                 ...
             )
-
-        if embedded:
             conn = redis.Redis(connection_pool=connection_pool,
                                single_connection_client=single_connection_client)
-        else:
+        else:
             conn = redis.Redis(host=host, port=port, ...)
falkordb/lite/server.py (2)

51-52: atexit handlers accumulate and are never unregistered.

Each EmbeddedServer.__init__ registers self.stop via atexit.register, but stop() never calls atexit.unregister(self.stop). If a user creates and tears down multiple embedded instances over a program's lifetime, stale handlers accumulate. While stop() is idempotent so this won't cause errors, it keeps dead EmbeddedServer objects alive (preventing GC) until interpreter exit, and runs redundant cleanup calls at shutdown.

♻️ Suggested fix — unregister in stop()
     def stop(self) -> None:
         """Stop embedded redis-server and cleanup temporary files."""
+        atexit.unregister(self.stop)
         if self._process is not None and self._process.poll() is None:

71-79: Minor: Redis connection leak in _start() if ping() raises a non-ConnectionError.

If conn.ping() raises an exception other than redis.ConnectionError (e.g., redis.TimeoutError), conn.close() on line 75 is skipped. Consider a try/finally or with pattern for the connection.

♻️ Suggested fix
             if os.path.exists(self._socket_path):
                 try:
                     conn = redis.Redis(unix_socket_path=self._socket_path, decode_responses=True)
-                    conn.ping()
-                    conn.close()
-                    self._close_stderr_file()
-                    return
+                    try:
+                        conn.ping()
+                        self._close_stderr_file()
+                        return
+                    finally:
+                        conn.close()
                 except redis.ConnectionError:
                     pass
falkordb/falkordb.py (1)

82-122: Same minor refactor opportunity: merge duplicate if embedded: checks.

Same pattern as the async client — the two if embedded: guards (lines 82 and 118) could be a single if/else block for clarity.

tests/test_embedded_async.py (1)

18-59: Async embedded tests cover the key lifecycle paths.

Pool defaults, custom acquire-timeout propagation, and context-manager cleanup are all verified. The DummyServer mock sidesteps the blocking __init__ issue, keeping tests fast.

One gap: there's no async equivalent of the sync test_embedded_passes_config_and_db_path (parameter forwarding for db_path, embedded_config, startup_timeout). Since both clients delegate identically to EmbeddedServer, this is low risk, but adding a mirrored test would strengthen parity.

Comment on lines +42 to +50
if unix_socket_path:
    config["unixsocket"] = unix_socket_path
    config["unixsocketperm"] = "700"
    config["port"] = "0"

if user_config:
    if "loadmodule" in user_config:
        raise ValueError("'loadmodule' is reserved and cannot be overridden")
    config.update(user_config)
Contributor


⚠️ Potential issue | 🟡 Minor

user_config can override port after unix socket setup forces it to 0.

Lines 42-45 set port to "0" when a unix socket is configured, but config.update(user_config) on line 50 runs afterward. A user passing {"port": "6379"} would re-enable TCP, which may be unexpected in embedded mode. If this is intentional (allowing advanced users to also expose TCP), consider documenting it; otherwise, add port to the reserved-key validation alongside loadmodule.

🧰 Tools
🪛 Ruff (0.15.0)

[warning] 49-49: Avoid specifying long messages outside the exception class

(TRY003)

🤖 Prompt for AI Agents
In `@falkordb/lite/config.py` around lines 42 - 50, The code lets user_config
override the port set for unix sockets: after setting config["port"]="0" when
unix_socket_path is provided, config.update(user_config) can re-enable TCP; fix
by treating "port" as a reserved key just like "loadmodule"—in the branch that
checks user_config, validate that "port" is not in user_config (alongside
"loadmodule") and raise ValueError if found (references: unix_socket_path,
config["port"], user_config, config.update).

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@tests/test_lite_server.py`:
- Around line 114-127: In test_start_timeout_calls_stop the monkeypatch of
time.monotonic returns a constant which makes the server._start loop never reach
its deadline; change the monkeypatch for falkordb.lite.server.time.monotonic to
return an increasing value (e.g., a closure or generator that increments on each
call so it eventually returns a value >= deadline) so the loop can exit and
raise EmbeddedServerError, keeping the rest of the test mocks (os.path.exists ->
False, DummyProcess/poll_result) unchanged; ensure the replacement is callable
from the test function (referenced symbol: test_start_timeout_calls_stop and
server._start).
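The advancing-clock idea can be shown in isolation: a fake monotonic that increments on each call lets a deadline loop in the style of server._start actually expire. wait_until_ready and make_fake_monotonic below are illustrative, not the module's real names:

```python
import itertools

# A monotonically increasing fake clock: each call advances by `step`,
# so a deadline-based loop eventually sees clock() >= deadline.
def make_fake_monotonic(start=0.0, step=1.0):
    counter = itertools.count()
    def fake_monotonic():
        return start + next(counter) * step
    return fake_monotonic

# Deadline loop in the style of EmbeddedServer._start (illustrative).
def wait_until_ready(is_ready, timeout, clock):
    deadline = clock() + timeout
    while clock() < deadline:
        if is_ready():
            return True
    raise TimeoutError("server did not become ready")
```

With a constant clock() the while condition never becomes false; with the incrementing version the loop exits after roughly timeout/step iterations and the timeout path is exercised.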
🧹 Nitpick comments (2)
tests/test_lite_server.py (2)

52-62: Brittle __new__ bypass — consider a note or assertion if attributes are added to EmbeddedServer.__init__.

Using __new__ + manual attribute assignment to skip the constructor is a pragmatic approach for unit tests, but any new attribute added to EmbeddedServer.__init__ will silently be missing here, potentially causing AttributeError in tests rather than a clear failure. This is fine for now, but worth being aware of.


177-186: Minor: file handle left open if _read_stderr or assertion fails.

stderr_file is opened on Line 179 but only closed on Line 186 after the assertion. If anything between those lines raises, the handle leaks. Consider using a try/finally or a with block.

Proposed fix
 def test_read_stderr_flushes_open_file(tmp_path):
     server = _make_server(tmp_path)
     stderr_file = open(tmp_path / "redis.stderr.log", "w", encoding="utf-8")
-    stderr_file.write("stderr-text")
-    server._stderr_file = stderr_file
-
-    output = server._read_stderr()
-    assert output == "stderr-text"
-
-    stderr_file.close()
+    try:
+        stderr_file.write("stderr-text")
+        server._stderr_file = stderr_file
+        output = server._read_stderr()
+        assert output == "stderr-text"
+    finally:
+        stderr_file.close()

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@tests/test_lite_server.py`:
- Around line 178-187: The test test_read_stderr_flushes_open_file opens
stderr_file and assigns it to server._stderr_file but may leak the file if the
assertion fails; wrap the file setup/use in a context manager or a try/finally
so stderr_file.close() always runs (e.g., create the file with with open(... )
as stderr_file: and assign server._stderr_file inside that block or ensure you
call stderr_file.close() in a finally block) ensuring the cleanup happens even
if server._read_stderr() assertion fails.
🧹 Nitpick comments (1)
tests/test_lite_server.py (1)

52-62: Fragile coupling to EmbeddedServer internals via __new__.

Bypassing __init__ means any new attribute added to the constructor will silently be missing here, potentially causing AttributeError in tests rather than a clear failure. Consider adding a brief comment noting this coupling, or alternatively asserting the expected attributes exist on a real instance so that new fields are surfaced early.

gkorland and others added 2 commits February 14, 2026 21:35
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Resolve conflicts between main branch refactoring (DriverInfo,
RedisError handling, close/aclose methods) and embedded lite mode
additions.

- Use driver_info=DriverInfo() for non-embedded redis.Redis calls
- Combine embedded server cleanup with RedisError-safe close()
- Rename async close() to aclose() to match main's convention
- Fix mypy errors in lite module (type annotations)
- Fix ruff line-length violation in server.py

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
falkordb/falkordb.py (1)

1-407: ⚠️ Potential issue | 🟡 Minor

Ruff format check failure — file needs reformatting.

The CI lint pipeline reports this file would be reformatted by Ruff. Run ruff format falkordb/falkordb.py and commit the result.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@falkordb/falkordb.py` around lines 1 - 407, The file fails Ruff formatting;
run an automatic reformat and commit the changes: run `ruff format
falkordb/falkordb.py` (or your editor/IDE integration) to apply Ruff's style
fixes across the FalkorDB class and its methods (e.g., __init__, from_url,
select_graph, udf_load/udf_list/udf_flush/udf_delete) then stage and commit the
rewritten file so the CI lint step passes.
falkordb/asyncio/falkordb.py (1)

1-384: ⚠️ Potential issue | 🟡 Minor

Ruff format check failure — file needs reformatting.

The CI lint pipeline reports this file would be reformatted by Ruff. Run ruff format falkordb/asyncio/falkordb.py and commit the result.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@falkordb/asyncio/falkordb.py` around lines 1 - 384, The file fails Ruff
formatting — run the formatter and commit the changes; specifically, run `ruff
format` (e.g. ruff format falkordb/asyncio/falkordb.py) to reformat the FalkorDB
class and its methods (e.g. __init__, from_url, select_graph,
config_get/config_set, udf_* methods), then add and commit the resulting changes
so the CI lint pipeline passes.
🧹 Nitpick comments (1)
falkordb/asyncio/falkordb.py (1)

82-122: Two consecutive if embedded: blocks can be merged.

The second if embedded: / else: at line 118 is a direct continuation of the first block — conn is always derived from connection_pool built in the first block. Merging them removes the redundant guard:

♻️ Proposed refactor
         if embedded:
             from ..lite.server import EmbeddedServer

             if max_connections is None:
                 max_connections = 16

             server = EmbeddedServer(...)
             self._embedded_server = server
             connection_pool = redis.BlockingConnectionPool(
                 ...
             )
+            conn = redis.Redis(
+                connection_pool=connection_pool,
+                single_connection_client=single_connection_client,
+            )
-
-        if embedded:
-            conn = redis.Redis(
-                connection_pool=connection_pool,
-                single_connection_client=single_connection_client,
-            )
-        else:
+        else:
             conn = redis.Redis(
                 host=host,
                 ...
             )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@falkordb/asyncio/falkordb.py` around lines 82 - 122, Merge the two
consecutive "if embedded:" blocks by moving the redis.Redis(...) creation into
the first embedded branch so the EmbeddedServer instantiation, assignment to
self._embedded_server, creation of connection_pool (via
redis.BlockingConnectionPool) and creation of conn (via redis.Redis) all live
inside a single "if embedded:" branch; remove the redundant second guard, ensure
connection_pool and conn remain in scope for later use, and keep references to
EmbeddedServer, self._embedded_server, connection_pool,
redis.BlockingConnectionPool, and redis.Redis to locate and update the code.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@falkordb/asyncio/falkordb.py`:
- Line 80: The class lacks a class-level default for _embedded_server which
leads to AttributeError when instances are created without running __init__
(e.g. in tests); add a class attribute declaration on the FalkorDB class like
"_embedded_server = None" so any instance (including ones created via
object.__new__ or mocks) always has this attribute defined, and ensure the
aclose() method still checks self._embedded_server before using it (referencing
FalkorDB, _embedded_server, and aclose).

In `@falkordb/falkordb.py`:
- Line 86: The class FalkorDB is missing a class-level default for
_embedded_server causing AttributeError when __init__ is bypassed; add a class
attribute _embedded_server = None on the FalkorDB class (so instances that skip
__init__ still have the attribute) and leave the existing instance assignments
in __init__ and the close() method unchanged; reference the FalkorDB class, the
_embedded_server attribute, the __init__ method, and the close method when
making this change.

In `@falkordb/lite/config.py`:
- Around line 54-63: The serialization loop in config.py currently emits a bare
"save " for an empty string; update the logic in the loop that builds lines (the
block handling key == "save" and the generic lines.append(f"{key} {value}")
path) so that when key == "save" and value == "" (empty string) you append the
explicit string 'save ""' instead of f"{key} {value}"; keep the existing
handling for list values in the key == "save" branch and leave other keys
unchanged.

In `@falkordb/lite/server.py`:
- Around line 73-83: The code creates conn = redis.Redis(...) but only closes it
on redis.ConnectionError paths; if conn.ping() raises a different exception the
connection and stderr file remain open. Wrap ping in a try/except/else/finally:
after creating conn call conn.ping() inside try, except redis.ConnectionError:
pass, else: call self._close_stderr_file() and return, and in a finally block
always call conn.close() to ensure the socket is closed on all error paths;
reference conn, redis.Redis, conn.ping, conn.close, and self._close_stderr_file
when making the change.

---

Outside diff comments:
In `@falkordb/asyncio/falkordb.py`:
- Around line 1-384: The file fails Ruff formatting — run the formatter and
commit the changes; specifically, run `ruff format` (e.g. ruff format
falkordb/asyncio/falkordb.py) to reformat the FalkorDB class and its methods
(e.g. __init__, from_url, select_graph, config_get/config_set, udf_* methods),
then add and commit the resulting changes so the CI lint pipeline passes.

In `@falkordb/falkordb.py`:
- Around line 1-407: The file fails Ruff formatting; run an automatic reformat
and commit the changes: run `ruff format falkordb/falkordb.py` (or your
editor/IDE integration) to apply Ruff's style fixes across the FalkorDB class
and its methods (e.g., __init__, from_url, select_graph,
udf_load/udf_list/udf_flush/udf_delete) then stage and commit the rewritten file
so the CI lint step passes.

---

Duplicate comments:
In `@falkordb/asyncio/falkordb.py`:
- Around line 88-92: The EmbeddedServer startup blocks the event loop because
FalkorDB().__init__ calls EmbeddedServer (which polls with time.sleep); fix by
adding an async factory classmethod (e.g., FalkorDB.create) that constructs a
non-embedded instance via cls(**kwargs) and, when embedded=True, creates the
EmbeddedServer off the event loop (use asyncio.to_thread(EmbeddedServer,
db_path=..., config=..., startup_timeout=...)), set instance._embedded_server,
build the redis BlockingConnectionPool pointing at server.unix_socket_path, and
assign instance.connection, instance.flushdb and instance.execute_command before
returning the instance; leave the synchronous __init__ untouched and document
callers to use FalkorDB.create for embedded starts.

In `@falkordb/lite/config.py`:
- Around line 42-50: The code allows user_config to override the port forced to
"0" for Unix-socket mode; update the logic in the block handling user_config so
that "port" is treated as a reserved key (in addition to "loadmodule") when
unix_socket_path is set: detect if "port" is present in user_config and either
raise ValueError("'port' is reserved and cannot be overridden") or remove/ignore
it before calling config.update(user_config), referencing unix_socket_path,
config, user_config, and the existing "loadmodule" check to locate where to add
this guard.

---

Nitpick comments:
In `@falkordb/asyncio/falkordb.py`:
- Around line 82-122: Merge the two consecutive "if embedded:" blocks by moving
the redis.Redis(...) creation into the first embedded branch so the
EmbeddedServer instantiation, assignment to self._embedded_server, creation of
connection_pool (via redis.BlockingConnectionPool) and creation of conn (via
redis.Redis) all live inside a single "if embedded:" branch; remove the
redundant second guard, ensure connection_pool and conn remain in scope for
later use, and keep references to EmbeddedServer, self._embedded_server,
connection_pool, redis.BlockingConnectionPool, and redis.Redis to locate and
update the code.
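The blocking-startup concern raised in the duplicate comment above (EmbeddedServer polling with `time.sleep` inside an async constructor) can be addressed with an async factory that moves the blocking work off the event loop via `asyncio.to_thread`. This is a minimal sketch under that assumption; the `EmbeddedServer` stand-in and `create` signature are illustrative, not the real API:

```python
import asyncio
import time


class EmbeddedServer:
    """Stand-in for the real server; startup polls with a blocking sleep."""

    def __init__(self, startup_timeout=10.0):
        time.sleep(0.01)  # simulates the blocking readiness loop


class FalkorDB:
    _embedded_server = None

    def __init__(self, **kwargs):
        pass  # synchronous __init__ stays untouched (non-embedded path)

    @classmethod
    async def create(cls, embedded=False, **kwargs):
        # Construct a plain instance first, then do blocking work off-loop.
        instance = cls(**kwargs)
        if embedded:
            instance._embedded_server = await asyncio.to_thread(
                EmbeddedServer,
                startup_timeout=kwargs.get("startup_timeout", 10.0),
            )
        return instance


async def main():
    db = await FalkorDB.create(embedded=True)
    return db._embedded_server is not None
```

Callers would then use `await FalkorDB.create(embedded=True)` for embedded starts, keeping the event loop responsive while the server boots.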

        startup_timeout=10.0,
        connection_acquire_timeout=5.0,
    ):
        self._embedded_server = None
The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🔴 Critical

AttributeError: 'FalkorDB' object has no attribute '_embedded_server' — CI test failure.

Although __init__ sets self._embedded_server = None at line 80, the CI test fails with AttributeError at line 282, indicating instances are reaching aclose() without _embedded_server set (e.g., via object.__new__, a patched __init__, or mock-created instances in existing tests). Add a class-level declaration as a safe default:

🐛 Proposed fix
 class FalkorDB:
     """
     Asynchronous FalkorDB Class for interacting with a FalkorDB server.
     ...
     """
+    _embedded_server = None
+
     def __init__(
         self,
         ...
🧰 Tools
🪛 GitHub Actions: Lint

[error] Ruff format check would reformat this file.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@falkordb/asyncio/falkordb.py` at line 80, The class lacks a class-level
default for _embedded_server which leads to AttributeError when instances are
created without running __init__ (e.g. in tests); add a class attribute
declaration on the FalkorDB class like "_embedded_server = None" so any instance
(including ones created via object.__new__ or mocks) always has this attribute
defined, and ensure the aclose() method still checks self._embedded_server
before using it (referencing FalkorDB, _embedded_server, and aclose).
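The failure mode described here (instances created without running `__init__` lack instance attributes) and the class-level-default fix can be reproduced with a minimal sketch; `Client` is an illustrative stand-in, not the real FalkorDB class:

```python
class Client:
    """Stand-in for FalkorDB to show the class-level default pattern."""

    # Class-level default: found by attribute lookup even when __init__
    # never ran (object.__new__, patched __init__, mocks).
    _embedded_server = None

    def __init__(self):
        self._embedded_server = object()  # normal instance-level assignment

    def close(self):
        # Safe on every instance: reads fall back to the class attribute.
        if self._embedded_server is not None:
            self._embedded_server = None


# Test fixtures sometimes build instances without calling __init__:
bare = object.__new__(Client)
bare.close()  # no AttributeError, thanks to the class-level default
```

Without the class attribute, `bare.close()` would raise `AttributeError` exactly as the CI failure reports.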

        embedded_config=None,
        startup_timeout=10.0,
    ):
        self._embedded_server = None
The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🔴 Critical

AttributeError: 'FalkorDB' object has no attribute '_embedded_server' — CI test failure.

Same root cause as the async variant: instances reaching close() (line 204) without _embedded_server set, because __init__ was bypassed by test infrastructure. Add a class-level default:

🐛 Proposed fix
 class FalkorDB:
     """
     FalkorDB Class for interacting with a FalkorDB server.
     ...
     """
+    _embedded_server = None
+
     def __init__(
         self,
         ...
🧰 Tools
🪛 GitHub Actions: Lint

[error] Ruff format check would reformat this file.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@falkordb/falkordb.py` at line 86, The class FalkorDB is missing a class-level
default for _embedded_server causing AttributeError when __init__ is bypassed;
add a class attribute _embedded_server = None on the FalkorDB class (so
instances that skip __init__ still have the attribute) and leave the existing
instance assignments in __init__ and the close() method unchanged; reference the
FalkorDB class, the _embedded_server attribute, the __init__ method, and the
close method when making this change.

Comment on lines +54 to +63
    lines = []
    for key, value in config.items():
        if key == "save" and isinstance(value, list):
            for rule in value:
                if isinstance(rule, (tuple, list)) and len(rule) == 2:
                    lines.append(f"save {rule[0]} {rule[1]}")
                else:
                    lines.append(f"save {rule}")
            continue
        lines.append(f"{key} {value}")
The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

save "" serialization omits quotes — Redis may not disable RDB snapshots correctly.

When DEFAULT_CONFIG's "save": "" is serialized via f"{key} {value}", it produces the bare line save (trailing space, no value). Redis's redis.conf requires the explicit form save "" (quoted empty string) to disable snapshots. A bare save with no argument is not guaranteed to have the same effect and may cause a config parse error depending on the Redis version.

🐛 Proposed fix
     lines = []
     for key, value in config.items():
         if key == "save" and isinstance(value, list):
             for rule in value:
                 if isinstance(rule, (tuple, list)) and len(rule) == 2:
                     lines.append(f"save {rule[0]} {rule[1]}")
                 else:
                     lines.append(f"save {rule}")
             continue
+        if isinstance(value, str) and value == "":
+            lines.append(f'{key} ""')
+            continue
         lines.append(f"{key} {value}")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@falkordb/lite/config.py` around lines 54 - 63, The serialization loop in
config.py currently emits a bare "save " for an empty string; update the logic
in the loop that builds lines (the block handling key == "save" and the generic
lines.append(f"{key} {value}") path) so that when key == "save" and value == ""
(empty string) you append the explicit string 'save ""' instead of f"{key}
{value}"; keep the existing handling for list values in the key == "save" branch
and leave other keys unchanged.
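The fix this prompt describes can be sketched as a standalone serializer. The function name is illustrative; only the empty-string branch is new relative to the quoted loop:

```python
def serialize_config(config):
    """Render a config dict into redis.conf lines (illustrative sketch)."""
    lines = []
    for key, value in config.items():
        if key == "save" and isinstance(value, list):
            for rule in value:
                if isinstance(rule, (tuple, list)) and len(rule) == 2:
                    lines.append(f"save {rule[0]} {rule[1]}")
                else:
                    lines.append(f"save {rule}")
            continue
        if isinstance(value, str) and value == "":
            # Redis requires the quoted empty string ('save ""') to
            # disable RDB snapshots; a bare 'save ' is not equivalent.
            lines.append(f'{key} ""')
            continue
        lines.append(f"{key} {value}")
    return lines
```

With this branch, `serialize_config({"save": ""})` emits `save ""` rather than a bare `save ` line.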

Comment on lines +73 to +83
        if os.path.exists(self._socket_path):
            try:
                conn = redis.Redis(
                    unix_socket_path=self._socket_path, decode_responses=True
                )
                conn.ping()
                conn.close()
                self._close_stderr_file()
                return
            except redis.ConnectionError:
                pass
The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

conn is not closed when ping() raises a non-ConnectionError exception.

If conn.ping() raises anything other than redis.ConnectionError (e.g., a protocol/encoding error), the exception propagates out of the readiness loop without closing conn or self._stderr_file. Use a finally block:

🐛 Proposed fix
             if os.path.exists(self._socket_path):
                 try:
                     conn = redis.Redis(
                         unix_socket_path=self._socket_path, decode_responses=True
                     )
-                    conn.ping()
-                    conn.close()
-                    self._close_stderr_file()
-                    return
+                    try:
+                        conn.ping()
+                        self._close_stderr_file()
+                        return
+                    finally:
+                        conn.close()
                 except redis.ConnectionError:
                     pass
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

            if os.path.exists(self._socket_path):
                try:
                    conn = redis.Redis(
                        unix_socket_path=self._socket_path, decode_responses=True
                    )
                    try:
                        conn.ping()
                        self._close_stderr_file()
                        return
                    finally:
                        conn.close()
                except redis.ConnectionError:
                    pass
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@falkordb/lite/server.py` around lines 73 - 83, The code creates conn =
redis.Redis(...) but only closes it on redis.ConnectionError paths; if
conn.ping() raises a different exception the connection and stderr file remain
open. Wrap ping in a try/except/else/finally: after creating conn call
conn.ping() inside try, except redis.ConnectionError: pass, else: call
self._close_stderr_file() and return, and in a finally block always call
conn.close() to ensure the socket is closed on all error paths; reference conn,
redis.Redis, conn.ping, conn.close, and self._close_stderr_file when making the
change.
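The cleanup guarantee this fix is after can be demonstrated in isolation with a fake connection. `FakeConn` and `probe` are illustrative stand-ins (the builtin `ConnectionError` stands in for `redis.ConnectionError`):

```python
closed = []  # records every connection whose close() ran


class FakeConn:
    """Stand-in for redis.Redis to illustrate the try/finally pattern."""

    def __init__(self, fail_with=None):
        self.fail_with = fail_with

    def ping(self):
        if self.fail_with is not None:
            raise self.fail_with

    def close(self):
        closed.append(self)


def probe(conn):
    # The inner try/finally guarantees close() runs even when ping()
    # raises something other than ConnectionError.
    try:
        try:
            conn.ping()
            return True  # server is ready
        finally:
            conn.close()
    except ConnectionError:
        return False  # expected while the server is still starting


probe(FakeConn())                       # healthy path: closed, ready
probe(FakeConn(ConnectionError()))      # expected not-ready path: closed
try:
    probe(FakeConn(ValueError("protocol error")))  # unexpected error
except ValueError:
    pass                                # still closed, via finally
```

All three paths leave the connection closed, which is exactly what the original code failed to guarantee for the unexpected-exception case.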

gkorland and others added 2 commits February 22, 2026 08:38
Use getattr() to safely access _embedded_server in close() and
aclose(), handling the case where FalkorDB objects are created via
object.__new__() without calling __init__ (as done in test_close.py).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Format PR files to match project's ruff line-length (88) and update
uv.lock to reflect the new lite optional dependency.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
gkorland and others added 2 commits February 22, 2026 15:48
- Add class-level _embedded_server = None to both sync and async FalkorDB
  classes to prevent AttributeError when __init__ is bypassed
- Fix user_config overriding port after unix socket setup by re-asserting
  port=0 after user_config merge
- Fix save "" serialization to emit 'save ""' instead of bare 'save '
- Ensure redis conn is always closed via finally block in startup loop
- Add explanatory comment to empty except in __del__
- Replace explicit __del__() call with del+gc.collect() in test

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>


Development

Successfully merging this pull request may close these issues.

Extend the package with the falkordblite capabilities

1 participant