feat: Add native PDF export for items (#1387)
Add server-side PDF generation for inventory items, enabling
insurance-grade documentation exports directly from Homebox.
Backend:
- New PDF export service using gofpdf with themed report generation
- GET /api/v1/items/{id}/export/pdf — single item export
- GET /api/v1/items/export/pdf — all items export
- POST /api/v1/items/export/pdf — bulk export by IDs
- GET /api/v1/items/export/pdf/themes — list available themes
- Supports cover page, summary table, per-item detail pages with
photos, maintenance history, custom fields, and receipts
- Three built-in themes: classic (navy), modern (slate), forest (green)
Frontend:
- Export PDF option in item detail dropdown menu
- Export PDF Report button in collection tools page
- API client methods for all export endpoints
- i18n support for export labels
Closes discussion sysadminsmedia#735
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Walkthrough
Adds end-to-end PDF export: backend PDF generation service and new v1 HTTP handlers for single, bulk, and group exports plus theme listing; frontend API methods and UI actions to trigger downloads; and a PDF generation dependency.
Sequence Diagram

```mermaid
sequenceDiagram
    participant User as User/Browser
    participant Handler as API Handler
    participant Service as PDFExportService
    participant Repo as Repository
    participant Blob as Blob Storage
    User->>Handler: Request export (single/bulk/all, theme, photos, owner)
    Handler->>Handler: Parse params / validate UUIDs
    Handler->>Service: NewPDFExportService(repo) + Export...(ctx, groupID, itemIDs, opts)
    Service->>Repo: Load items (and maintenance records per item)
    loop per photo/attachment
        Service->>Blob: ReadAttachment(path)
        Blob-->>Service: Image bytes
    end
    Service->>Service: Render cover, summary, item pages
    Service-->>Handler: PDF bytes + filename
    Handler->>User: HTTP 200 with PDF body and Content-Disposition
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Actionable comments posted: 4
🧹 Nitpick comments (4)
backend/internal/core/services/pdf_export.go (2)
145-161: N+1 query pattern when loading multiple items.
ExportMultipleItems loads items one-by-one in a loop (line 151), resulting in N database queries for N items. For large exports, this could impact performance.

♻️ Consider batch loading items
If the repository supports it, loading items in a single query would be more efficient:

```go
// Ideal: Load all items in one query
items, err := svc.repo.Items.GetManyByGroup(ctx, groupID, itemIDs)
```

This would require adding a batch-fetch method to the repository if one doesn't exist. Given the current scope, this is a nice-to-have optimization.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/internal/core/services/pdf_export.go` around lines 145 - 161, ExportMultipleItems currently issues N separate repo calls in the loop, causing an N+1 query problem; replace the per-item lookup in ExportMultipleItems with a single batch fetch (e.g., add and call svc.repo.Items.GetManyByGroup(ctx, groupID, itemIDs)) that returns all found items and errors, then use that result (or if the repo cannot be changed, implement a new GetManyByGroup on the Items repository and call it from ExportMultipleItems); ensure you preserve the original behavior of skipping missing items and returning an error when no valid items are found, and surface/log any batch-fetch error via the existing logging pattern (log.Warn()/log.Error()) while still returning the proper (nil, "", error) or (bytes, filename, nil) response as before.
387-406: Memory usage scales with image size — consider size limits or thumbnail fallback.
Each photo is read fully into memory via readAttachment. For high-resolution images, this could consume significant memory, especially in multi-item exports with many photos.

Consider:
- Using thumbnail versions when available (the data model appears to support thumbnails)
- Adding a maximum image size limit
- Resizing images before embedding

🔧 Suggested thumbnail fallback

```diff
 // Find the primary photo attachment
 for _, att := range item.Attachments {
 	if att.Primary && att.Type == attachment.TypePhoto.String() {
-		imgBytes, imgType, err := svc.readAttachment(ctx, att)
+		// Prefer thumbnail if available to reduce memory usage
+		attachmentToRead := att
+		if att.Thumbnail != nil {
+			attachmentToRead = *att.Thumbnail
+		}
+		imgBytes, imgType, err := svc.readAttachment(ctx, attachmentToRead)
 		if err != nil {
 			log.Warn().Err(err).Msg("failed to read primary photo for PDF")
 			break
 		}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/internal/core/services/pdf_export.go` around lines 387 - 406, The code currently reads full-resolution photos via svc.readAttachment for each att in item.Attachments and embeds them, which can blow memory; change the flow in the opts.IncludePhotos block to (1) prefer a thumbnail version when present on the attachment (e.g., att.ThumbnailID or att.Thumbnail field) by calling svc.readAttachment for the thumbnail first, (2) if no thumbnail exists, call svc.readAttachment but enforce a maximum allowed size (define a constant like MaxImageBytes) and if the returned byte slice exceeds that or the image dimensions exceed a MaxImageWidth/Height, decode the image (image.Decode), resize it down to the max dimensions (using a resize library or Go image operations) and re-encode to a suitable format before passing to pdf.RegisterImageOptionsReader, and (3) keep the same registration and pdf.ImageOptions usage (pdf.RegisterImageOptionsReader, pdf.ImageOptions) and still set photoEmbedded = true; reference svc.readAttachment, item.Attachments, att.Primary, attachment.TypePhoto, pdf.RegisterImageOptionsReader, pdf.ImageOptions, and photoEmbedded when locating the code to change.

backend/app/api/handlers/v1/v1_ctrl_pdf_export.go (2)
130-191: Large inventory exports could cause memory pressure and timeouts.
HandleItemsExportAllPDF loads all items into memory, then generates the entire PDF in memory before responding. For users with large inventories (thousands of items with photos), this could:
- Cause high memory usage on the server
- Lead to request timeouts before the PDF is ready
- Block the request thread for extended periods
Consider implementing:
- Pagination/chunking: Process items in batches
- Item limit: Cap the maximum number of items per export
- Async generation: Generate PDFs in the background and notify when ready
- Streaming: Stream the PDF as it's generated (though this is complex with gofpdf)
For now, at minimum, consider adding a limit:
🔧 Add a maximum item limit
```diff
+const maxPDFExportItems = 500 // Prevent excessive memory usage
 if len(allItems.Items) == 0 {
 	return validate.NewRequestError(
 		fmt.Errorf("no items found to export"),
 		http.StatusNotFound,
 	)
 }
+if len(allItems.Items) > maxPDFExportItems {
+	return validate.NewRequestError(
+		fmt.Errorf("too many items to export (%d); maximum is %d", len(allItems.Items), maxPDFExportItems),
+		http.StatusBadRequest,
+	)
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/api/handlers/v1/v1_ctrl_pdf_export.go` around lines 130 - 191, HandleItemsExportAllPDF currently loads all items and builds the entire PDF in memory (via pdfSvc.ExportMultipleItems), which risks OOM/timeouts; enforce a maximum exportable items cap (e.g., MAX_EXPORT_ITEMS) and validate against it before generating the PDF: check the length of itemIDs (or limit the repo.Items.QueryByGroup call to PageSize = MAX_EXPORT_ITEMS) and return a clear HTTP 413/400 error if exceeded. Update the handler's logic around ctrl.repo.Items.QueryByGroup, the itemIDs collection, and the call to services.NewPDFExportService/ExportMultipleItems to ensure you never pass more than the cap; optionally add config/constant for MAX_EXPORT_ITEMS and surface a helpful message suggesting chunked/async export.
53-56: Consider sanitizing filenames to prevent header injection.
The filename is constructed from user data (asset ID) and embedded directly into the Content-Disposition header. While the current implementation uses quotes, filenames containing special characters (quotes, newlines, semicolons) could potentially cause issues.

🛡️ Suggested sanitization
```diff
+// sanitizeFilename removes or escapes characters that could cause issues in HTTP headers
+func sanitizeFilename(name string) string {
+	// Replace characters that could break header parsing
+	replacer := strings.NewReplacer(
+		"\"", "'",
+		"\n", "",
+		"\r", "",
+		";", "_",
+	)
+	return replacer.Replace(name)
+}
 // Set response headers for PDF file download
 w.Header().Set("Content-Type", "application/pdf")
-w.Header().Set("Content-Disposition", fmt.Sprintf("attachment; filename=\"%s\"", filename))
+w.Header().Set("Content-Disposition", fmt.Sprintf("attachment; filename=\"%s\"", sanitizeFilename(filename)))
 w.Header().Set("Content-Length", fmt.Sprintf("%d", len(pdfBytes)))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/api/handlers/v1/v1_ctrl_pdf_export.go` around lines 53 - 56, Sanitize the filename variable before embedding it in the Content-Disposition header to prevent header injection: in the handler that sets w.Header().Set("Content-Disposition", fmt.Sprintf("attachment; filename=\"%s\"", filename)) (and where filename is built from the asset ID), strip or replace control characters (CR/LF), quotes, semicolons and other unsafe characters, and fallback to a safe token if the result is empty; after sanitization use that sanitizedFilename in the Content-Disposition value and optionally add a percent-encoded filename* form (RFC 5987) or url.PathEscape to safely represent Unicode characters. Ensure you update all uses of filename in this handler (the variable named filename and the Content-Disposition header setting) to use the sanitized value.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/go.mod`:
- Line 17: Replace the deprecated GitHub dependency with the maintained Codeberg
fork by updating the module path in go.mod from "github.com/go-pdf/fpdf v0.9.0"
to "codeberg.org/go-pdf/fpdf v0.9.0" (or the latest compatible version), then
run module resolution (e.g., go get/code tidy) to fetch the new import and
verify builds; ensure any imports in the codebase referencing
github.com/go-pdf/fpdf are updated to codeberg.org/go-pdf/fpdf if present.
In `@backend/internal/core/services/pdf_export.go`:
- Around line 773-783: The MIME-to-image-type switch in pdf_export.go silently
defaults imgType to "jpg" for unknown MIME types (using att.MimeType), which
breaks WebP/HEIC/AVIF; update the switch to explicitly handle "webp",
"heic"/"heif", and "avif" by setting imgType to "webp", "heic"/"heif" or "avif"
as appropriate (matching the values expected by the fpdf registration code) and
add a default branch that logs a warning including att.MimeType and the
attachment ID so unsupported types are visible rather than being misregistered
as "jpg"; ensure you reference the same imgType variable and the surrounding
image registration code so behavior remains consistent.
- Around line 751-786: Although paths are server-generated, add a defensive
check in readAttachment to guard against DB-tampered paths: call filepath.Clean
on att.Path, ensure the cleaned path has no ".." components and that it begins
with the expected prefix (e.g., "*/documents/" or the deterministic
"{group-uuid}/documents/") before using repo.Attachments.GetFullPath(att.Path);
if the check fails, return an error. This change should be made inside
PDFExportService.readAttachment before opening the bucket/creating the reader
and can reuse normalizePath semantics for separator/trimming consistency.
In `@frontend/pages/collection/index/tools.vue`:
- Around line 157-161: The PDF export currently uses getExportPDF ->
api.items.exportAllPDFURL() which ignores prefs.value.collectionId and exports
all items; update getExportPDF to pass prefs.value.collectionId (like the CSV
export uses api.items.exportURL(prefs.value.collectionId ?? undefined)) and then
modify items.ts's exportAllPDFURL to accept an optional collectionId parameter
and include it in the generated URL; alternatively, if exporting all collections
is intentional, add a clear comment or UI note in getExportPDF explaining the
behavior instead of changing the API.
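The readAttachment path check described above could be sketched as follows. This is a minimal, hedged sketch: `validateAttachmentPath` is a hypothetical helper, and the `{group}/documents/...` layout is taken from the comment above, not verified against the repository code:

```go
package main

import (
	"errors"
	"fmt"
	"path/filepath"
	"strings"
)

// validateAttachmentPath cleans a stored attachment path, rejects any ".."
// components, and requires the assumed "<group>/documents/<file>" layout.
func validateAttachmentPath(path string) (string, error) {
	clean := filepath.ToSlash(filepath.Clean(path))
	for _, part := range strings.Split(clean, "/") {
		if part == ".." {
			return "", errors.New("attachment path contains '..' component")
		}
	}
	parts := strings.SplitN(clean, "/", 3)
	if len(parts) < 3 || parts[1] != "documents" {
		return "", errors.New("attachment path is outside the documents prefix")
	}
	return clean, nil
}

func main() {
	clean, err := validateAttachmentPath("group-uuid/documents/receipt.pdf")
	fmt.Println(clean, err)
}
```

Because `filepath.Clean` collapses `a/../b` to `b` before the prefix check runs, tampered paths that normalize outside the documents prefix are also rejected.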
⛔ Files ignored due to path filters (1)
- backend/go.sum is excluded by !**/*.sum

📒 Files selected for processing (8)
- backend/app/api/handlers/v1/v1_ctrl_pdf_export.go
- backend/app/api/routes.go
- backend/go.mod
- backend/internal/core/services/pdf_export.go
- frontend/lib/api/classes/items.ts
- frontend/locales/en.json
- frontend/pages/collection/index/tools.vue
- frontend/pages/item/[id]/index.vue
1. Replace deprecated github.com/go-pdf/fpdf with maintained codeberg.org/go-pdf/fpdf fork (v0.11.1)
2. Handle WebP/HEIC/AVIF image types explicitly — log warning and skip unsupported formats instead of silently misregistering
3. Add path traversal guard in readAttachment to reject paths containing ".." components
4. Pass collection filter (tenant) to PDF export in tools.vue so it respects the current collection view
5. Add max item limit (500) to HandleItemsExportAllPDF to prevent memory exhaustion and request timeouts
6. Sanitize filenames in Content-Disposition headers to prevent header injection via special characters in asset IDs
7. Add MaxImageBytes (10MB) constant — skip oversized images with a warning instead of consuming unbounded memory
8. Document N+1 query pattern in ExportMultipleItems as a known optimization opportunity for future batch-fetch support

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 1
🧹 Nitpick comments (3)
backend/internal/core/services/pdf_export.go (3)
256-275: Currency formatting assumes USD - potential i18n concern.
The total value and individual prices use a hardcoded $ prefix (e.g., $%.2f). If HomeBox is intended for international users, this may not reflect their local currency.

💡 Consider currency-aware formatting
If currency information is available on items or user preferences, consider using it for display. For now, documenting this assumption in the code would be helpful:

```diff
+// NOTE: Currency symbol is hardcoded to USD ($). If multi-currency support
+// is needed in the future, retrieve currency from user/group settings.
 if totalValue > 0 {
 	pdf.SetY(yPos + 20)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/internal/core/services/pdf_export.go` around lines 256 - 275, The PDF uses a hardcoded dollar sign when printing totals (see totalValue, items loop and the pdf.CellFormat calls), which breaks i18n; update the formatting to be currency-aware by using a currency value/symbol from the item's data or user preferences (e.g., check for a Currency field on items or a tenant/user setting) and replace the "$%.2f" formatting with a format that injects the resolved currency symbol or ISO currency formatting; if no currency info exists, either fall back to a documented default (add a comment near totalValue calculation indicating USD default) or integrate a currency formatting helper (using a standard library/package) and apply it to both the total and the insured-count line output.
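A currency-aware formatter could be as small as the sketch below. The symbol table and the USD fallback are assumptions for illustration; real code would resolve the currency from group/user settings and likely use a locale-aware library rather than a hand-rolled map:

```go
package main

import "fmt"

// currencySymbols is a deliberately tiny illustrative table.
var currencySymbols = map[string]string{
	"USD": "$", "EUR": "€", "GBP": "£", "JPY": "¥",
}

// formatMoney prefixes the amount with the symbol for the given ISO currency
// code, falling back to the documented USD default when the code is unknown.
func formatMoney(amount float64, currency string) string {
	sym, ok := currencySymbols[currency]
	if !ok {
		sym = "$" // documented default when no currency info exists
	}
	return fmt.Sprintf("%s%.2f", sym, amount)
}

func main() {
	fmt.Println(formatMoney(1234.5, "EUR"))
}
```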
150-203: N+1 query pattern is documented but worth tracking.
The comment on lines 156-159 acknowledges the N+1 query pattern. While acceptable for typical export sizes, this could become a performance concern for exports approaching the 500-item limit.

Would you like me to open an issue to track the future optimization of adding a batch-fetch method (GetManyByGroup) to the repository layer?

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/internal/core/services/pdf_export.go` around lines 150 - 203, The ExportMultipleItems function currently uses an N+1 pattern via svc.repo.Items.GetOneByGroup; open a tracking issue to add a batch-fetch method (e.g., Items.GetManyByGroup(ctx context.Context, groupID uuid.UUID, ids []uuid.UUID) ([]repo.ItemOut, error)) and in the code replace or annotate the existing comment with a TODO referencing that issue (include issue number/URL) so future work is tracked; also mention in the issue the need to update ExportMultipleItems to call GetManyByGroup and to handle partial/missing items consistently.
820-824: Unknown MIME types default to JPEG with a warning, but add validation for image format integrity.
For truly unknown MIME types (not explicitly webp/heic/avif), the code defaults to jpg and logs a warning. This is reasonable given fpdf's native format support and explicit rejection of unsupported formats. However, from a security perspective, consider validating that the actual image bytes match the detected MIME type — defaulting unknown types to JPEG without format verification could allow malicious uploads to bypass validation, potentially causing PDF corruption or DoS issues if the data isn't actually valid JPEG. The path traversal checks and size limits are good defensive measures, but format validation would provide defense-in-depth.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/internal/core/services/pdf_export.go` around lines 820 - 824, The default-to-jpg branch currently sets imgType = "jpg" for unknown MIME types (see the switch handling att.MimeType and the log warning using att.ID), but you must also validate the actual bytes match the chosen format before embedding; after determining imgType (or before falling back to "jpg"), inspect the attachment bytes (att.Data or whatever holds the image payload) and verify format integrity (e.g., use image/jpeg.DecodeConfig for JPEG or check magic bytes/header for JPEG/WEBP/AVIF/HEIC) and if the check fails return an error instead of proceeding; update the code around the switch and the image-embedding path to perform this validation and only set/accept imgType when the content verification succeeds.
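A lightweight version of the byte-level check suggested above can use the standard library's content sniffing. Note the limits of this sketch: `http.DetectContentType` recognizes JPEG/PNG/GIF/WebP magic bytes but not HEIC/AVIF, so those formats would still need explicit header checks; `verifyImageBytes` is a hypothetical helper name:

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

// verifyImageBytes sniffs the real content type from the leading bytes and
// checks it agrees with the format we are about to register with fpdf.
func verifyImageBytes(data []byte, expected string) error {
	sniffed := http.DetectContentType(data) // e.g. "image/jpeg"
	if expected == "jpg" && sniffed == "image/jpeg" {
		return nil // fpdf uses "jpg" while sniffing reports "jpeg"
	}
	if !strings.HasSuffix(sniffed, "/"+expected) {
		return fmt.Errorf("content sniffed as %q, expected %q", sniffed, expected)
	}
	return nil
}

func main() {
	jpeg := append([]byte{0xFF, 0xD8, 0xFF, 0xE0}, make([]byte, 16)...)
	fmt.Println(verifyImageBytes(jpeg, "jpg"))
}
```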
⛔ Files ignored due to path filters (1)
- backend/go.sum is excluded by !**/*.sum

📒 Files selected for processing (5)
- backend/app/api/handlers/v1/v1_ctrl_pdf_export.go
- backend/go.mod
- backend/internal/core/services/pdf_export.go
- frontend/lib/api/classes/items.ts
- frontend/pages/collection/index/tools.vue
✅ Files skipped from review due to trivial changes (1)
- backend/go.mod
🚧 Files skipped from review as they are similar to previous changes (1)
- frontend/pages/collection/index/tools.vue
```typescript
// exportBulkPDF triggers a POST request to download a PDF for multiple selected items.
// Returns a Blob that can be used to trigger a file download in the browser.
// Auth is handled by same-origin cookies (same mechanism as window.open for GET exports).
async exportBulkPDF(
  itemIds: string[],
  options: { theme?: string; photos?: boolean; owner?: string } = {}
): Promise<{ data: Blob | null; error: boolean }> {
  const params: Record<string, string> = {};
  if (options.theme) params.theme = options.theme;
  if (options.photos === false) params.photos = "false";
  if (options.owner) params.owner = options.owner;

  const url = route("/items/export/pdf", params);

  try {
    const response = await fetch(url, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      credentials: "same-origin",
      body: JSON.stringify({ itemIds }),
    });

    if (!response.ok) {
      return { data: null, error: true };
    }

    const blob = await response.blob();
    return { data: blob, error: false };
  } catch {
    return { data: null, error: true };
  }
}
```
Raw fetch() is necessary here, but consider adding Authorization header for API token support.
The code correctly uses raw fetch() because the Requests class cannot properly handle Blob responses—its response parsing returns ReadableStream, not Blob. However, the export endpoint currently only uses cookie-based authentication (credentials: "same-origin") and omits the Authorization header.
This design is intentional per the code comment (matching GET exports via window.open), but it means API token-based authentication is not supported for this endpoint. If the backend supports bearer token auth for exports, consider adding the Authorization header:
```diff
 const response = await fetch(url, {
   method: "POST",
   headers: {
     "Content-Type": "application/json",
+    ...(token && { "Authorization": token }),
   },
   credentials: "same-origin",
   body: JSON.stringify({ itemIds }),
 });
```

Verify with backend whether the /items/export/pdf endpoint supports both cookie and bearer token authentication, or if cookie-only is acceptable.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/lib/api/classes/items.ts` around lines 218 - 249, exportBulkPDF
currently uses fetch with credentials: "same-origin" only, which prevents API
token auth; update exportBulkPDF to include an Authorization: Bearer <token>
header when an API token is available (e.g., accept an optional token param or
read from the existing auth utility), while keeping credentials: "same-origin"
so cookie auth still works; add the header into the fetch headers (alongside
"Content-Type") only when token is non-empty so Blob handling and POST body
remain unchanged, and confirm with the backend that /items/export/pdf accepts
bearer tokens.
```go
const (
	// maxPDFExportItems caps the number of items in a single PDF export
	// to prevent excessive memory usage and request timeouts.
	maxPDFExportItems = 500
```
I would recommend making this a configurable option; those with weak systems might want it lower, and those with very powerful home labs might be able to go much higher.
```go
// addItemPages renders one or more pages of detailed information for a single item.
// Includes: header bar, primary photo, details, purchase/warranty/sold info,
// custom fields, notes, additional photos, receipts, and maintenance history.
func (svc *PDFExportService) addItemPages(
```
The cyclomatic complexity can be resolved by breaking this function down into several smaller functions and pieces.
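The decomposition suggested above might look like the skeleton below. All types and helper names here are placeholders standing in for gofpdf and the real item model; the point is only the shape, where addItemPages becomes a table of contents and each helper owns one section:

```go
package main

import "fmt"

// Placeholder types standing in for gofpdf and the real item model.
type PDF struct{ sections []string }
type Item struct{ Name string }
type Options struct{ IncludePhotos bool }
type PDFExportService struct{}

func (svc *PDFExportService) addItemHeader(p *PDF, it Item) {
	p.sections = append(p.sections, "header")
}
func (svc *PDFExportService) addPrimaryPhoto(p *PDF, it Item, o Options) {
	if o.IncludePhotos {
		p.sections = append(p.sections, "photo")
	}
}
func (svc *PDFExportService) addPurchaseInfo(p *PDF, it Item) {
	p.sections = append(p.sections, "purchase")
}
func (svc *PDFExportService) addCustomFields(p *PDF, it Item) {
	p.sections = append(p.sections, "fields")
}
func (svc *PDFExportService) addMaintenanceHistory(p *PDF, it Item) {
	p.sections = append(p.sections, "maintenance")
}

// addItemPages now reads as a sequence of single-purpose steps.
func (svc *PDFExportService) addItemPages(p *PDF, it Item, o Options) {
	svc.addItemHeader(p, it)
	svc.addPrimaryPhoto(p, it, o)
	svc.addPurchaseInfo(p, it)
	svc.addCustomFields(p, it)
	svc.addMaintenanceHistory(p, it)
}

func main() {
	p := &PDF{}
	(&PDFExportService{}).addItemPages(p, Item{Name: "Drill"}, Options{IncludePhotos: true})
	fmt.Println(p.sections)
}
```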
```go
defer bucket.Close()

// Read the full file from storage
reader, err := bucket.NewReader(ctx, svc.repo.Attachments.GetFullPath(att.Path), nil)
if err != nil {
	return nil, "", fmt.Errorf("failed to read attachment: %w", err)
}
defer reader.Close()
```
Errors from the close functions should be handled gracefully.
Summary
Adds server-side PDF generation for inventory items, enabling insurance-grade documentation exports directly from Homebox. This implements the feature requested in discussion #735.
- New PDF export service using gofpdf, with four API endpoints for single-item, all-items, bulk, and theme listing

What the PDF includes
API Endpoints
- GET /api/v1/items/{id}/export/pdf — single item export
- GET /api/v1/items/export/pdf — all items export
- POST /api/v1/items/export/pdf — bulk export by IDs
- GET /api/v1/items/export/pdf/themes — list available themes

All endpoints support ?theme=classic|modern|forest and ?photos=true|false query params.

Files Changed
- backend/internal/core/services/pdf_export.go
- backend/app/api/handlers/v1/v1_ctrl_pdf_export.go
- backend/app/api/routes.go
- backend/go.mod / go.sum — gofpdf dependency
- frontend/lib/api/classes/items.ts
- frontend/pages/item/[id]/index.vue
- frontend/pages/collection/index/tools.vue
- frontend/locales/en.json

Prior Art
Based on the standalone companion tool Homebox-Export, which served as the proof of concept for this feature.
Test Plan
- Themed exports (?theme=modern, ?theme=forest)
- Export without photos (?photos=false)

🤖 Generated with Claude Code