
feat: add Excel export via execute_sql_xlsx tool#168

Open
ccclucky wants to merge 2 commits into crystaldba:main from ccclucky:feature/excel-export

Conversation


@ccclucky commented Apr 15, 2026

Summary

  • Add execute_sql_xlsx MCP tool that executes a SQL query and exports results to an Excel (.xlsx) file
  • Create formatter.py with format_to_excel() function (openpyxl-based, auto-adjusts column widths)
  • Dynamic tool registration based on access mode: destructiveHint=True (unrestricted) vs readOnlyHint=True (restricted), consistent with execute_sql pattern
  • Add max_rows parameter (default 10000) with truncation warning to prevent excessive output
  • Fix empty result handling: informational response instead of error response
  • Add comprehensive unit tests (12 test cases covering formatter and tool logic)
  • Update README tools table
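The auto-adjusted column widths mentioned for format_to_excel() could follow a heuristic like the one below (a pure-Python sketch; the function name, padding constant, and 60-character cap are illustrative assumptions, not the PR's actual code):

```python
def compute_column_widths(columns, rows, max_width=60):
    """Per-column width: longest cell (or header), padded, capped at max_width."""
    widths = []
    for i, header in enumerate(columns):
        longest = max([len(str(header))] + [len(str(row[i])) for row in rows])
        widths.append(min(longest + 2, max_width))  # +2 chars of breathing room
    return widths
```

With openpyxl, widths like these would typically be applied via `ws.column_dimensions[get_column_letter(i + 1)].width` before saving the workbook.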

Changes

| File | Change |
| --- | --- |
| pyproject.toml | Add openpyxl dependency |
| src/postgres_mcp/formatter.py | New file: Excel formatting logic |
| src/postgres_mcp/server.py | New execute_sql_xlsx tool + dynamic registration |
| tests/unit/test_excel_export.py | New file: 12 unit tests |
| README.md | Document new tool in tools table |

Test plan

  • pytest tests/unit/test_excel_export.py -v — 12/12 passed
  • ruff check — no errors

🤖 Generated with Claude Code

- Add openpyxl dependency for Excel generation
- Create formatter.py with format_to_excel() function
- Add execute_sql_xlsx MCP tool for SQL-to-Excel export
- Dynamic registration based on access mode (like execute_sql)
- Add max_rows parameter (default 10000) with truncation warning
- Fix empty result handling (informational, not error)
- Add comprehensive unit tests (12 test cases)
- Update README tools table

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

@chatgpt-codex-connector (bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 9be2590b51


Comment thread: src/postgres_mcp/server.py (Outdated)

    """Executes a SQL query and exports results to an Excel file."""
    try:
        sql_driver = await get_sql_driver()
        rows = await sql_driver.execute_query(sql)  # type: ignore


P1: Enforce max_rows during query execution

execute_sql_xlsx fetches the full result set before truncating (rows[:max_rows]), so max_rows does not actually protect the server from large queries. In this codebase, SqlDriver._execute_with_connection materializes all rows with cursor.fetchall() (src/postgres_mcp/sql/sql_driver.py), so a query returning millions of rows can still consume large memory/CPU and fail even when max_rows is small. Apply the row cap while executing (e.g., limit/stream/fetchmany) instead of after full fetch.

Useful? React with 👍 / 👎.
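One way to push the cap into the database rather than truncating after fetchall() is to wrap the user's query in a subselect with a LIMIT. A sketch (cap_query is a hypothetical helper; fetching max_rows + 1 lets the tool distinguish "exactly max_rows" from "truncated"):

```python
def cap_query(sql: str, max_rows: int) -> str:
    """Wrap a SELECT so PostgreSQL enforces the row cap server-side.

    Fetching one extra row lets the caller detect truncation: if
    max_rows + 1 rows come back, the result was cut off.
    """
    inner = sql.strip().rstrip(";")
    return f"SELECT * FROM ({inner}) AS capped_result LIMIT {max_rows + 1}"
```

An alternative that leaves the SQL untouched is streaming with a server-side cursor and `cursor.fetchmany(max_rows + 1)`, which also avoids materializing the full result set.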

Comment thread: src/postgres_mcp/server.py (Outdated)
Comment on lines +575 to +576
row_dicts = [r.cells for r in rows[:max_rows]]
columns = list(row_dicts[0].keys())

P2: Validate max_rows as a positive integer

max_rows currently allows 0 or negative values, and those inputs make row_dicts empty while the next line indexes row_dicts[0], which raises an exception and returns a generic error response. This makes the tool fail for a validly-typed but invalid semantic input; add a ge=1 constraint (or equivalent guard) and return a clear validation error.

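A minimal guard along these lines would reject invalid values before the query runs (a sketch with a hypothetical function name; in a FastMCP/pydantic tool signature the same constraint could be expressed declaratively, e.g. `Field(ge=1)`):

```python
def validate_max_rows(max_rows: int) -> int:
    """Reject zero/negative caps with a clear error instead of an IndexError."""
    # bool is a subclass of int, so exclude it explicitly
    if not isinstance(max_rows, int) or isinstance(max_rows, bool) or max_rows < 1:
        raise ValueError(f"max_rows must be a positive integer, got {max_rows!r}")
    return max_rows
```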

Comment on lines +30 to +32
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
filename = f"query_{timestamp}.xlsx"
filepath = os.path.join(output_dir, filename)

P2: Generate unique export filenames

The Excel path is derived from a timestamp with second-level precision, so concurrent/rapid exports in the same second will target the same filename and overwrite each other. This can return a stale/incorrect file to one caller and lose prior output; use a uniqueness source (UUID, monotonic suffix, or tempfile APIs) when constructing the filename.

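The UUID-suffix option could look like this (a sketch; unique_export_path is a hypothetical helper that keeps the readable timestamp and adds a uniqueness source):

```python
import os
import uuid
from datetime import datetime

def unique_export_path(output_dir: str) -> str:
    """Timestamp for readability, UUID suffix so concurrent exports never collide."""
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    suffix = uuid.uuid4().hex[:8]  # 8 hex chars is plenty for per-second uniqueness
    return os.path.join(output_dir, f"query_{timestamp}_{suffix}.xlsx")
```

The `tempfile.NamedTemporaryFile(delete=False)` route mentioned in the comment would also work, at the cost of less readable filenames.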

- P1: inject LIMIT at SQL level to protect server memory (not after fetchall)
- P1: serialize dict/list (json/jsonb/array cols) to JSON strings before writing
- P2: add ge=1 constraint on max_rows, reject 0 and negative values
- P2: add UUID suffix to filename to prevent concurrent collision
- add 4 new test cases for reviewer fixes

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
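The JSON-serialization fix in this commit addresses the fact that openpyxl cannot write dicts or lists directly to a cell, which is what json/jsonb/array columns come back as. A sketch of such a pre-write pass (to_cell_value is a hypothetical name, not necessarily the PR's):

```python
import json

def to_cell_value(value):
    """Serialize values openpyxl cannot write (dicts, lists) to JSON text."""
    if isinstance(value, (dict, list)):
        # default=str covers non-JSON-native members such as dates or Decimals
        return json.dumps(value, ensure_ascii=False, default=str)
    return value
```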