feat: add Excel export via execute_sql_xlsx tool #168
Conversation
- Add openpyxl dependency for Excel generation
- Create formatter.py with format_to_excel() function
- Add execute_sql_xlsx MCP tool for SQL-to-Excel export
- Dynamic registration based on access mode (like execute_sql)
- Add max_rows parameter (default 10000) with truncation warning
- Fix empty result handling (informational, not error)
- Add comprehensive unit tests (12 test cases)
- Update README tools table

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
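The `format_to_excel()` function mentioned above could look roughly like the following. This is a hypothetical sketch, not the PR's actual implementation: the real signature and whether it writes to disk or returns bytes are not shown in this thread, and the width heuristic here is only one plausible way to "auto-adjust column widths" with openpyxl.

```python
from io import BytesIO

from openpyxl import Workbook
from openpyxl.utils import get_column_letter


def format_to_excel(columns: list[str], rows: list[dict]) -> bytes:
    """Write rows (a list of dicts keyed by column name) to an in-memory
    .xlsx workbook and return its bytes.  Hypothetical sketch."""
    wb = Workbook()
    ws = wb.active
    ws.append(columns)  # header row
    for row in rows:
        ws.append([row.get(col) for col in columns])
    # Auto-adjust each column width to the longest rendered value.
    for idx, col in enumerate(columns, start=1):
        longest = max(
            len(str(col)),
            *(len(str(r.get(col, ""))) for r in rows),
        ) if rows else len(str(col))
        ws.column_dimensions[get_column_letter(idx)].width = longest + 2
    buf = BytesIO()
    wb.save(buf)
    return buf.getvalue()
```

Returning bytes (rather than writing a path) keeps the formatter testable without touching the filesystem; the tool layer can then decide where to persist the file.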
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 9be2590b51
| """Executes a SQL query and exports results to an Excel file.""" | ||
| try: | ||
| sql_driver = await get_sql_driver() | ||
| rows = await sql_driver.execute_query(sql) # type: ignore |
Enforce max_rows during query execution
execute_sql_xlsx fetches the full result set before truncating (rows[:max_rows]), so max_rows does not actually protect the server from large queries. In this codebase, SqlDriver._execute_with_connection materializes all rows with cursor.fetchall() (src/postgres_mcp/sql/sql_driver.py), so a query returning millions of rows can still consume large memory/CPU and fail even when max_rows is small. Apply the row cap while executing (e.g., limit/stream/fetchmany) instead of after full fetch.
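One way to apply the cap during execution, as the review suggests, is to wrap the user's query in a subselect with a `LIMIT` before handing it to the driver. The helper below is a hypothetical sketch (`apply_row_limit` is not a name from this PR); fetching `max_rows + 1` lets the caller detect truncation without materializing the full result set.

```python
def apply_row_limit(sql: str, max_rows: int) -> str:
    """Wrap a query so the database returns at most max_rows + 1 rows.

    The extra row signals truncation: if len(rows) > max_rows, the
    caller knows the result was cut and can emit the warning.
    Hypothetical sketch; assumes a single SELECT statement.
    """
    inner = sql.rstrip().rstrip(";")
    return f"SELECT * FROM ({inner}) AS _q LIMIT {max_rows + 1}"
```

Alternatively, a streaming approach (`cursor.fetchmany(max_rows + 1)` on the server-side cursor) avoids rewriting SQL at all, at the cost of changes to `SqlDriver`.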
```python
        row_dicts = [r.cells for r in rows[:max_rows]]
        columns = list(row_dicts[0].keys())
```
Validate max_rows as a positive integer
max_rows currently allows 0 or negative values, and those inputs make row_dicts empty while the next line indexes row_dicts[0], which raises an exception and returns a generic error response. This makes the tool fail for a validly-typed but invalid semantic input; add a ge=1 constraint (or equivalent guard) and return a clear validation error.
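With Pydantic-typed tool parameters, the `ge=1` constraint would typically be expressed as `Annotated[int, Field(ge=1)]` on the parameter. An equivalent explicit guard, shown here as a hypothetical sketch, returns a clear validation error instead of the generic failure from indexing `row_dicts[0]`:

```python
def validate_max_rows(max_rows: int) -> int:
    """Reject 0 and negative values with an explicit message.
    Hypothetical guard; the PR may use a Field(ge=1) constraint instead."""
    if max_rows < 1:
        raise ValueError(f"max_rows must be >= 1, got {max_rows}")
    return max_rows
```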
```python
        timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        filename = f"query_{timestamp}.xlsx"
        filepath = os.path.join(output_dir, filename)
```
Generate unique export filenames
The Excel path is derived from a timestamp with second-level precision, so concurrent/rapid exports in the same second will target the same filename and overwrite each other. This can return a stale/incorrect file to one caller and lose prior output; use a uniqueness source (UUID, monotonic suffix, or tempfile APIs) when constructing the filename.
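A minimal fix along the lines the review suggests keeps the human-readable timestamp but appends a random suffix so same-second exports cannot collide. Sketch only; the helper name is hypothetical:

```python
import os
import uuid
from datetime import datetime


def unique_export_path(output_dir: str) -> str:
    """Timestamp keeps names readable; the UUID suffix makes
    concurrent/rapid exports collision-free."""
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    return os.path.join(output_dir, f"query_{timestamp}_{uuid.uuid4().hex[:8]}.xlsx")
```

`tempfile.NamedTemporaryFile(delete=False, suffix=".xlsx", dir=output_dir)` is another option when readable names are not required, since it also creates the file atomically.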
- P1: inject LIMIT at SQL level to protect server memory (not after fetchall)
- P1: serialize dict/list (json/jsonb/array cols) to JSON strings before writing
- P2: add ge=1 constraint on max_rows, reject 0 and negative values
- P2: add UUID suffix to filename to prevent concurrent collision
- Add 4 new test cases for reviewer fixes

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
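The second P1 item addresses the fact that openpyxl cannot write dict or list values to a cell, which is what Postgres json/jsonb/array columns deserialize to. A cell-conversion helper for that fix might look like this (hypothetical sketch; the PR's actual helper name is not shown):

```python
import json


def to_cell_value(value):
    """Serialize dict/list cells (json, jsonb, array columns) to JSON
    strings, since openpyxl rejects them as cell values; pass scalars
    through unchanged.  default=str covers nested non-JSON types such
    as Decimal or datetime."""
    if isinstance(value, (dict, list)):
        return json.dumps(value, default=str)
    return value
```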
Summary
- Add `execute_sql_xlsx` MCP tool that executes a SQL query and exports results to an Excel (.xlsx) file
- Create `formatter.py` with `format_to_excel()` function (openpyxl-based, auto-adjusts column widths)
- Register with `destructiveHint=True` (unrestricted) vs `readOnlyHint=True` (restricted), consistent with the `execute_sql` pattern
- Add `max_rows` parameter (default 10000) with truncation warning to prevent excessive output

Changes

- `pyproject.toml`
- `src/postgres_mcp/formatter.py`
- `src/postgres_mcp/server.py` — `execute_sql_xlsx` tool + dynamic registration
- `tests/unit/test_excel_export.py`
- `README.md`

Test plan

- `pytest tests/unit/test_excel_export.py -v` — 12/12 passed
- `ruff check` — no errors

🤖 Generated with Claude Code