
feat: Add native PDF export for items #1387

Open

JonGaydos wants to merge 2 commits into sysadminsmedia:main from JonGaydos:feature/pdf-export

feat: Add native PDF export for items#1387
JonGaydos wants to merge 2 commits into
sysadminsmedia:mainfrom
JonGaydos:feature/pdf-export

Conversation

@JonGaydos

@JonGaydos JonGaydos commented Mar 20, 2026

Summary

Adds server-side PDF generation for inventory items, enabling insurance-grade documentation exports directly from Homebox. This implements the feature requested in discussion #735.

  • Backend: New PDF export service using gofpdf with four API endpoints for single-item, all-items, bulk, and theme listing
  • Frontend: Export PDF option added to item detail dropdown and collection tools page
  • Minimal footprint: 2 new files, 6 modified files (1.3% of codebase)

What the PDF includes

  • Cover page with title, owner name, date
  • Summary page with item table (multi-item exports)
  • Per-item detail pages: photo, location, serial/model/manufacturer, purchase info, warranty, custom fields, notes, maintenance history, receipt images, sold info
  • Three built-in color themes: classic (navy), modern (slate), forest (green)

API Endpoints

Method  Endpoint                         Description
GET     /api/v1/items/{id}/export/pdf    Export single item
GET     /api/v1/items/export/pdf         Export all items
POST    /api/v1/items/export/pdf         Export selected items by ID
GET     /api/v1/items/export/pdf/themes  List available themes

All endpoints support ?theme=classic|modern|forest and ?photos=true|false query params.

Files Changed

File                                               Change
backend/internal/core/services/pdf_export.go       New — PDF generation service
backend/app/api/handlers/v1/v1_ctrl_pdf_export.go  New — HTTP handlers
backend/app/api/routes.go                          Added 4 route registrations
backend/go.mod / go.sum                            Added gofpdf dependency
frontend/lib/api/classes/items.ts                  API client methods
frontend/pages/item/[id]/index.vue                 Export PDF dropdown option
frontend/pages/collection/index/tools.vue          Export PDF Report button
frontend/locales/en.json                           i18n keys

Prior Art

Based on the standalone companion tool Homebox-Export, which served as the proof of concept for this feature.

Test Plan

  • Single item PDF export from item detail page dropdown
  • All items PDF export from collection tools page
  • Bulk export via POST with selected item IDs
  • Theme switching (?theme=modern, ?theme=forest)
  • Photos toggle (?photos=false)
  • Items with no photos, no maintenance, no custom fields
  • Items with sold information
  • Large collections (50+ items)

🤖 Generated with Claude Code

Summary by CodeRabbit

  • New Features
    • Added PDF export for single items, selected items (bulk), and entire collections.
    • Exports support selectable themes, optional inclusion of photos, and owner metadata.
    • New UI actions: "Export PDF" on item pages and collection tools.
    • API endpoints and client helpers to download PDFs and fetch available themes.

Add server-side PDF generation for inventory items, enabling
insurance-grade documentation exports directly from Homebox.

Backend:
- New PDF export service using gofpdf with themed report generation
- GET /api/v1/items/{id}/export/pdf — single item export
- GET /api/v1/items/export/pdf — all items export
- POST /api/v1/items/export/pdf — bulk export by IDs
- GET /api/v1/items/export/pdf/themes — list available themes
- Supports cover page, summary table, per-item detail pages with
  photos, maintenance history, custom fields, and receipts
- Three built-in themes: classic (navy), modern (slate), forest (green)

Frontend:
- Export PDF option in item detail dropdown menu
- Export PDF Report button in collection tools page
- API client methods for all export endpoints
- i18n support for export labels

Closes discussion sysadminsmedia#735

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@coderabbitai

coderabbitai Bot commented Mar 20, 2026

Walkthrough

Adds end-to-end PDF export: backend PDF generation service and new v1 HTTP handlers for single, bulk, and group exports plus theme listing; frontend API methods and UI actions to trigger downloads; and a PDF generation dependency.

Changes

Cohort / File(s) Summary
PDF Export Service
backend/internal/core/services/pdf_export.go
New PDFExportService with theming, single/multi-item export methods, cover/summary/item pages, image/attachment handling, and exported types/constants (themes, options, MaxImageBytes).
API Handlers
backend/app/api/handlers/v1/v1_ctrl_pdf_export.go
New v1 handlers: HandleItemExportPDF, HandleItemsExportPDF, HandleItemsExportAllPDF, HandlePDFThemes; request validation, UUID parsing helper, filename sanitization, and streaming PDF responses with proper headers and error mapping.
API Routes
backend/app/api/routes.go
Registered authenticated routes: GET /items/{id}/export/pdf, POST /items/export/pdf, GET /items/export/pdf, GET /items/export/pdf/themes.
Go Module
backend/go.mod
Added PDF generation dependency: codeberg.org/go-pdf/fpdf v0.11.1.
Frontend API
frontend/lib/api/classes/items.ts
Added exportPDFURL(), exportAllPDFURL(), exportBulkPDF() (POST), and getPDFThemes() methods for PDF export flows and fetching themes.
Frontend UI & Localization
frontend/pages/collection/index/tools.vue, frontend/pages/item/[id]/index.vue, frontend/locales/en.json
Added "Export PDF" UI actions in collection tools and item actions; added English localization strings for PDF export labels.

Sequence Diagram

sequenceDiagram
    participant User as User/Browser
    participant Handler as API Handler
    participant Service as PDFExportService
    participant Repo as Repository
    participant Blob as Blob Storage

    User->>Handler: Request export (single/bulk/all, theme, photos, owner)
    Handler->>Handler: Parse params / validate UUIDs
    Handler->>Service: NewPDFExportService(repo) + Export...(ctx, groupID, itemIDs, opts)
    Service->>Repo: Load items (and maintenance records per item)
    loop per photo/attachment
        Service->>Blob: ReadAttachment(path)
        Blob-->>Service: Image bytes
    end
    Service->>Service: Render cover, summary, item pages
    Service-->>Handler: PDF bytes + filename
    Handler->>User: HTTP 200 with PDF body and Content-Disposition

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested labels

⬆️ enhancement, go

Suggested reviewers

  • tankerkiller125
  • katosdev
  • tonyaellie

Security Recommendations

  1. Validate and limit owner and other free-text query params (e.g., max length ~255) to prevent PDF bloat or DoS.
  2. Ensure blob/attachment reads in readAttachment are authorized and scoped to the requesting group to prevent cross-group data exposure.
  3. Apply rate limiting or export quotas for bulk/all export endpoints to mitigate resource exhaustion.
  4. Sanitize/validate filenames and Content-Disposition values (already added) and ensure header injection is impossible.
  5. Avoid leaking sensitive backend errors in responses for bulk operations; return minimal error messages and log details server-side.

"PDFs dispatched with care and theme,
Photos placed where they should seem,
One click to export the lot,
Neat pages, no bytes forgot —
Inventory served up like a dream ✨"

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Title check (✅ Passed): The title accurately summarizes the main change, adding native PDF export for items, and clearly identifies the feature being introduced.
  • Docstring Coverage (✅ Passed): Docstring coverage is 100.00%, which is sufficient; the required threshold is 80.00%.
  • Description check (✅ Passed): The PR description covers all required sections: type (feature), what it does with file-by-file changes, related issue (#735), and testing plan with checkboxes.



@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (4)
backend/internal/core/services/pdf_export.go (2)

145-161: N+1 query pattern when loading multiple items.

ExportMultipleItems loads items one-by-one in a loop (line 151), resulting in N database queries for N items. For large exports, this could impact performance.

♻️ Consider batch loading items

If the repository supports it, loading items in a single query would be more efficient:

// Ideal: Load all items in one query
items, err := svc.repo.Items.GetManyByGroup(ctx, groupID, itemIDs)

This would require adding a batch-fetch method to the repository if one doesn't exist. Given the current scope, this is a nice-to-have optimization.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/internal/core/services/pdf_export.go` around lines 145 - 161,
ExportMultipleItems currently issues N separate repo calls in the loop, causing
an N+1 query problem; replace the per-item lookup in ExportMultipleItems with a
single batch fetch (e.g., add and call svc.repo.Items.GetManyByGroup(ctx,
groupID, itemIDs)) that returns all found items and errors, then use that result
(or if the repo cannot be changed, implement a new GetManyByGroup on the Items
repository and call it from ExportMultipleItems); ensure you preserve the
original behavior of skipping missing items and returning an error when no valid
items are found, and surface/log any batch-fetch error via the existing logging
pattern (log.Warn()/log.Error()) while still returning the proper (nil, "",
error) or (bytes, filename, nil) response as before.

387-406: Memory usage scales with image size — consider size limits or thumbnail fallback.

Each photo is read fully into memory via readAttachment. For high-resolution images, this could consume significant memory, especially in multi-item exports with many photos.

Consider:

  1. Using thumbnail versions when available (the data model appears to support thumbnails)
  2. Adding a maximum image size limit
  3. Resizing images before embedding
🔧 Suggested thumbnail fallback
  // Find the primary photo attachment
  for _, att := range item.Attachments {
      if att.Primary && att.Type == attachment.TypePhoto.String() {
-         imgBytes, imgType, err := svc.readAttachment(ctx, att)
+         // Prefer thumbnail if available to reduce memory usage
+         attachmentToRead := att
+         if att.Thumbnail != nil {
+             attachmentToRead = *att.Thumbnail
+         }
+         imgBytes, imgType, err := svc.readAttachment(ctx, attachmentToRead)
          if err != nil {
              log.Warn().Err(err).Msg("failed to read primary photo for PDF")
              break
          }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/internal/core/services/pdf_export.go` around lines 387 - 406, The
code currently reads full-resolution photos via svc.readAttachment for each att
in item.Attachments and embeds them, which can blow memory; change the flow in
the opts.IncludePhotos block to (1) prefer a thumbnail version when present on
the attachment (e.g., att.ThumbnailID or att.Thumbnail field) by calling
svc.readAttachment for the thumbnail first, (2) if no thumbnail exists, call
svc.readAttachment but enforce a maximum allowed size (define a constant like
MaxImageBytes) and if the returned byte slice exceeds that or the image
dimensions exceed a MaxImageWidth/Height, decode the image (image.Decode),
resize it down to the max dimensions (using a resize library or Go image
operations) and re-encode to a suitable format before passing to
pdf.RegisterImageOptionsReader, and (3) keep the same registration and
pdf.ImageOptions usage (pdf.RegisterImageOptionsReader, pdf.ImageOptions) and
still set photoEmbedded = true; reference svc.readAttachment, item.Attachments,
att.Primary, attachment.TypePhoto, pdf.RegisterImageOptionsReader,
pdf.ImageOptions, and photoEmbedded when locating the code to change.
backend/app/api/handlers/v1/v1_ctrl_pdf_export.go (2)

130-191: Large inventory exports could cause memory pressure and timeouts.

HandleItemsExportAllPDF loads all items into memory, then generates the entire PDF in memory before responding. For users with large inventories (thousands of items with photos), this could:

  1. Cause high memory usage on the server
  2. Lead to request timeouts before the PDF is ready
  3. Block the request thread for extended periods

Consider implementing:

  • Pagination/chunking: Process items in batches
  • Item limit: Cap the maximum number of items per export
  • Async generation: Generate PDFs in the background and notify when ready
  • Streaming: Stream the PDF as it's generated (though this is complex with gofpdf)

For now, at minimum, consider adding a limit:

🔧 Add a maximum item limit
+const maxPDFExportItems = 500 // Prevent excessive memory usage

 if len(allItems.Items) == 0 {
     return validate.NewRequestError(
         fmt.Errorf("no items found to export"),
         http.StatusNotFound,
     )
 }

+if len(allItems.Items) > maxPDFExportItems {
+    return validate.NewRequestError(
+        fmt.Errorf("too many items to export (%d); maximum is %d", len(allItems.Items), maxPDFExportItems),
+        http.StatusBadRequest,
+    )
+}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/app/api/handlers/v1/v1_ctrl_pdf_export.go` around lines 130 - 191,
HandleItemsExportAllPDF currently loads all items and builds the entire PDF in
memory (via pdfSvc.ExportMultipleItems), which risks OOM/timeouts; enforce a
maximum exportable items cap (e.g., MAX_EXPORT_ITEMS) and validate against it
before generating the PDF: check the length of itemIDs (or limit the
repo.Items.QueryByGroup call to PageSize = MAX_EXPORT_ITEMS) and return a clear
HTTP 413/400 error if exceeded. Update the handler's logic around
ctrl.repo.Items.QueryByGroup, the itemIDs collection, and the call to
services.NewPDFExportService/ExportMultipleItems to ensure you never pass more
than the cap; optionally add config/constant for MAX_EXPORT_ITEMS and surface a
helpful message suggesting chunked/async export.

53-56: Consider sanitizing filenames to prevent header injection.

The filename is constructed from user data (asset ID) and embedded directly into the Content-Disposition header. While the current implementation uses quotes, filenames containing special characters (quotes, newlines, semicolons) could potentially cause issues.

🛡️ Suggested sanitization
+// sanitizeFilename removes or escapes characters that could cause issues in HTTP headers
+func sanitizeFilename(name string) string {
+	// Replace characters that could break header parsing
+	replacer := strings.NewReplacer(
+		"\"", "'",
+		"\n", "",
+		"\r", "",
+		";", "_",
+	)
+	return replacer.Replace(name)
+}

 // Set response headers for PDF file download
 w.Header().Set("Content-Type", "application/pdf")
-w.Header().Set("Content-Disposition", fmt.Sprintf("attachment; filename=\"%s\"", filename))
+w.Header().Set("Content-Disposition", fmt.Sprintf("attachment; filename=\"%s\"", sanitizeFilename(filename)))
 w.Header().Set("Content-Length", fmt.Sprintf("%d", len(pdfBytes)))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/app/api/handlers/v1/v1_ctrl_pdf_export.go` around lines 53 - 56,
Sanitize the filename variable before embedding it in the Content-Disposition
header to prevent header injection: in the handler that sets
w.Header().Set("Content-Disposition", fmt.Sprintf("attachment; filename=\"%s\"",
filename)) (and where filename is built from the asset ID), strip or replace
control characters (CR/LF), quotes, semicolons and other unsafe characters, and
fallback to a safe token if the result is empty; after sanitization use that
sanitizedFilename in the Content-Disposition value and optionally add a
percent-encoded filename* form (RFC 5987) or url.PathEscape to safely represent
Unicode characters. Ensure you update all uses of filename in this handler (the
variable named filename and the Content-Disposition header setting) to use the
sanitized value.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@backend/go.mod`:
- Line 17: Replace the deprecated GitHub dependency with the maintained Codeberg
fork by updating the module path in go.mod from "github.com/go-pdf/fpdf v0.9.0"
to "codeberg.org/go-pdf/fpdf v0.9.0" (or the latest compatible version), then
run module resolution (e.g., go get/code tidy) to fetch the new import and
verify builds; ensure any imports in the codebase referencing
github.com/go-pdf/fpdf are updated to codeberg.org/go-pdf/fpdf if present.

In `@backend/internal/core/services/pdf_export.go`:
- Around line 773-783: The MIME-to-image-type switch in pdf_export.go silently
defaults imgType to "jpg" for unknown MIME types (using att.MimeType), which
breaks WebP/HEIC/AVIF; update the switch to explicitly handle "webp",
"heic"/"heif", and "avif" by setting imgType to "webp", "heic"/"heif" or "avif"
as appropriate (matching the values expected by the fpdf registration code) and
add a default branch that logs a warning including att.MimeType and the
attachment ID so unsupported types are visible rather than being misregistered
as "jpg"; ensure you reference the same imgType variable and the surrounding
image registration code so behavior remains consistent.
- Around line 751-786: Although paths are server-generated, add a defensive
check in readAttachment to guard against DB-tampered paths: call filepath.Clean
on att.Path, ensure the cleaned path has no ".." components and that it begins
with the expected prefix (e.g., "*/documents/" or the deterministic
"{group-uuid}/documents/") before using repo.Attachments.GetFullPath(att.Path);
if the check fails, return an error. This change should be made inside
PDFExportService.readAttachment before opening the bucket/creating the reader
and can reuse normalizePath semantics for separator/trimming consistency.

In `@frontend/pages/collection/index/tools.vue`:
- Around line 157-161: The PDF export currently uses getExportPDF ->
api.items.exportAllPDFURL() which ignores prefs.value.collectionId and exports
all items; update getExportPDF to pass prefs.value.collectionId (like the CSV
export uses api.items.exportURL(prefs.value.collectionId ?? undefined)) and then
modify items.ts's exportAllPDFURL to accept an optional collectionId parameter
and include it in the generated URL; alternatively, if exporting all collections
is intentional, add a clear comment or UI note in getExportPDF explaining the
behavior instead of changing the API.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: a3570b27-f175-4b70-ae10-5e5015521d43

📥 Commits

Reviewing files that changed from the base of the PR and between 87f2696 and c399656.

⛔ Files ignored due to path filters (1)
  • backend/go.sum is excluded by !**/*.sum
📒 Files selected for processing (8)
  • backend/app/api/handlers/v1/v1_ctrl_pdf_export.go
  • backend/app/api/routes.go
  • backend/go.mod
  • backend/internal/core/services/pdf_export.go
  • frontend/lib/api/classes/items.ts
  • frontend/locales/en.json
  • frontend/pages/collection/index/tools.vue
  • frontend/pages/item/[id]/index.vue

1. Replace deprecated github.com/go-pdf/fpdf with maintained
   codeberg.org/go-pdf/fpdf fork (v0.11.1)
2. Handle WebP/HEIC/AVIF image types explicitly — log warning
   and skip unsupported formats instead of silently misregistering
3. Add path traversal guard in readAttachment to reject paths
   containing ".." components
4. Pass collection filter (tenant) to PDF export in tools.vue
   so it respects the current collection view
5. Add max item limit (500) to HandleItemsExportAllPDF to prevent
   memory exhaustion and request timeouts
6. Sanitize filenames in Content-Disposition headers to prevent
   header injection via special characters in asset IDs
7. Add MaxImageBytes (10MB) constant — skip oversized images with
   a warning instead of consuming unbounded memory
8. Document N+1 query pattern in ExportMultipleItems as a known
   optimization opportunity for future batch-fetch support

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@coderabbitai coderabbitai Bot added ⬆️ enhancement New feature or request go Pull requests that update Go code labels Mar 20, 2026

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (3)
backend/internal/core/services/pdf_export.go (3)

256-275: Currency formatting assumes USD - potential i18n concern.

The total value and individual prices use a hardcoded $ prefix (e.g., $%.2f). If Homebox is intended for international users, this may not reflect their local currency.

💡 Consider currency-aware formatting

If currency information is available on items or user preferences, consider using it for display. For now, documenting this assumption in the code would be helpful:

+	// NOTE: Currency symbol is hardcoded to USD ($). If multi-currency support
+	// is needed in the future, retrieve currency from user/group settings.
 	if totalValue > 0 {
 		pdf.SetY(yPos + 20)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/internal/core/services/pdf_export.go` around lines 256 - 275, The PDF
uses a hardcoded dollar sign when printing totals (see totalValue, items loop
and the pdf.CellFormat calls), which breaks i18n; update the formatting to be
currency-aware by using a currency value/symbol from the item's data or user
preferences (e.g., check for a Currency field on items or a tenant/user setting)
and replace the "$%.2f" formatting with a format that injects the resolved
currency symbol or ISO currency formatting; if no currency info exists, either
fall back to a documented default (add a comment near totalValue calculation
indicating USD default) or integrate a currency formatting helper (using a
standard library/package) and apply it to both the total and the insured-count
line output.

150-203: N+1 query pattern is documented but worth tracking.

The comment on lines 156-159 acknowledges the N+1 query pattern. While acceptable for typical export sizes, this could become a performance concern for exports approaching the 500-item limit.

Would you like me to open an issue to track the future optimization of adding a batch-fetch method (GetManyByGroup) to the repository layer?

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/internal/core/services/pdf_export.go` around lines 150 - 203, The
ExportMultipleItems function currently uses an N+1 pattern via
svc.repo.Items.GetOneByGroup; open a tracking issue to add a batch-fetch method
(e.g., Items.GetManyByGroup(ctx context.Context, groupID uuid.UUID, ids
[]uuid.UUID) ([]repo.ItemOut, error)) and in the code replace or annotate the
existing comment with a TODO referencing that issue (include issue number/URL)
so future work is tracked; also mention in the issue the need to update
ExportMultipleItems to call GetManyByGroup and to handle partial/missing items
consistently.

820-824: Unknown MIME types default to JPEG with a warning, but add validation for image format integrity.

For truly unknown MIME types (not explicitly webp/heic/avif), the code defaults to jpg and logs a warning. This is reasonable given fpdf's native format support and explicit rejection of unsupported formats. However, from a security perspective, consider validating that the actual image bytes match the detected MIME type—defaulting unknown types to JPEG without format verification could allow malicious uploads to bypass validation, potentially causing PDF corruption or DoS issues if the data isn't actually valid JPEG. The path traversal checks and size limits are good defensive measures, but format validation would provide defense-in-depth.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/internal/core/services/pdf_export.go` around lines 820 - 824, The
default-to-jpg branch currently sets imgType = "jpg" for unknown MIME types (see
the switch handling att.MimeType and the log warning using att.ID), but you must
also validate the actual bytes match the chosen format before embedding; after
determining imgType (or before falling back to "jpg"), inspect the attachment
bytes (att.Data or whatever holds the image payload) and verify format integrity
(e.g., use image/jpeg.DecodeConfig for JPEG or check magic bytes/header for
JPEG/WEBP/AVIF/HEIC) and if the check fails return an error instead of
proceeding; update the code around the switch and the image-embedding path to
perform this validation and only set/accept imgType when the content
verification succeeds.
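One way to approach that magic-byte check, as a sketch: the function name and the small set of signatures shown are assumptions, not the PR's actual validation code.

```go
package main

import (
	"bytes"
	"fmt"
)

// sniffImageType inspects the leading bytes of an image payload and returns
// a type string only when a known signature matches; unknown payloads return
// "" so the caller can skip them instead of assuming JPEG.
func sniffImageType(b []byte) string {
	switch {
	case len(b) >= 3 && bytes.Equal(b[:3], []byte{0xFF, 0xD8, 0xFF}):
		return "jpg"
	case len(b) >= 8 && bytes.Equal(b[:8], []byte("\x89PNG\r\n\x1a\n")):
		return "png"
	case len(b) >= 12 && bytes.Equal(b[:4], []byte("RIFF")) &&
		bytes.Equal(b[8:12], []byte("WEBP")):
		return "webp"
	default:
		return ""
	}
}

func main() {
	fmt.Println(sniffImageType([]byte{0xFF, 0xD8, 0xFF, 0xE0})) // jpg
}
```

Cross-checking the sniffed type against the MIME-derived one before embedding would give the defense-in-depth the reviewer describes.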
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@frontend/lib/api/classes/items.ts`:
- Around line 218-249: exportBulkPDF currently uses fetch with credentials:
"same-origin" only, which prevents API token auth; update exportBulkPDF to
include an Authorization: Bearer <token> header when an API token is available
(e.g., accept an optional token param or read from the existing auth utility),
while keeping credentials: "same-origin" so cookie auth still works; add the
header into the fetch headers (alongside "Content-Type") only when token is
non-empty so Blob handling and POST body remain unchanged, and confirm with the
backend that /items/export/pdf accepts bearer tokens.

---

Nitpick comments:
In `@backend/internal/core/services/pdf_export.go`:
- Around line 256-275: The PDF uses a hardcoded dollar sign when printing totals
(see totalValue, items loop and the pdf.CellFormat calls), which breaks i18n;
update the formatting to be currency-aware by using a currency value/symbol from
the item's data or user preferences (e.g., check for a Currency field on items
or a tenant/user setting) and replace the "$%.2f" formatting with a format that
injects the resolved currency symbol or ISO currency formatting; if no currency
info exists, either fall back to a documented default (add a comment near
totalValue calculation indicating USD default) or integrate a currency
formatting helper (using a standard library/package) and apply it to both the
total and the insured-count line output.
- Around line 150-203: The ExportMultipleItems function currently uses an N+1
pattern via svc.repo.Items.GetOneByGroup; open a tracking issue to add a
batch-fetch method (e.g., Items.GetManyByGroup(ctx context.Context, groupID
uuid.UUID, ids []uuid.UUID) ([]repo.ItemOut, error)) and in the code replace or
annotate the existing comment with a TODO referencing that issue (include issue
number/URL) so future work is tracked; also mention in the issue the need to
update ExportMultipleItems to call GetManyByGroup and to handle partial/missing
items consistently.
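A self-contained sketch of the proposed batch-fetch shape. `ItemOut`, the string IDs, and the in-memory `store` are stand-ins for the real repo types (the actual method would take a `context.Context`, a group `uuid.UUID`, and query the database once):

```go
package main

import "fmt"

// ItemOut stands in for repo.ItemOut; IDs are strings here instead
// of uuid.UUID to keep the sketch dependency-free.
type ItemOut struct {
	ID   string
	Name string
}

// store simulates the items table for a single group.
var store = map[string]ItemOut{
	"a": {ID: "a", Name: "Camera"},
	"b": {ID: "b", Name: "Laptop"},
}

// GetManyByGroup fetches all requested items in one pass and reports
// which IDs were missing, so ExportMultipleItems can decide whether a
// partial export is acceptable instead of failing item by item.
func GetManyByGroup(ids []string) (found []ItemOut, missing []string) {
	for _, id := range ids {
		if it, ok := store[id]; ok {
			found = append(found, it)
		} else {
			missing = append(missing, id)
		}
	}
	return found, missing
}

func main() {
	found, missing := GetManyByGroup([]string{"a", "b", "c"})
	fmt.Printf("found=%d missing=%v\n", len(found), missing) // found=2 missing=[c]
}
```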
- Around line 820-824: The default-to-jpg branch currently sets imgType = "jpg"
for unknown MIME types (see the switch handling att.MimeType and the log warning
using att.ID), but you must also validate the actual bytes match the chosen
format before embedding; after determining imgType (or before falling back to
"jpg"), inspect the attachment bytes (att.Data or whatever holds the image
payload) and verify format integrity (e.g., use image/jpeg.DecodeConfig for JPEG
or check magic bytes/header for JPEG/WEBP/AVIF/HEIC) and if the check fails
return an error instead of proceeding; update the code around the switch and the
image-embedding path to perform this validation and only set/accept imgType when
the content verification succeeds.
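A sketch of magic-byte sniffing for this check. The function name and format set are assumptions; gofpdf natively embeds JPEG, PNG, and GIF, so formats like WEBP/AVIF/HEIC would still need conversion or an explicit error rather than a silent "jpg" fallback:

```go
package main

import (
	"bytes"
	"fmt"
)

// sniffImageType inspects the leading bytes of an attachment instead
// of trusting its stored MIME type, returning a gofpdf image type
// string or an error for anything gofpdf cannot embed.
func sniffImageType(data []byte) (string, error) {
	switch {
	case len(data) >= 3 && bytes.Equal(data[:3], []byte{0xFF, 0xD8, 0xFF}):
		return "jpg", nil
	case len(data) >= 8 && bytes.Equal(data[:8], []byte{0x89, 'P', 'N', 'G', 0x0D, 0x0A, 0x1A, 0x0A}):
		return "png", nil
	case len(data) >= 6 && (bytes.Equal(data[:6], []byte("GIF87a")) || bytes.Equal(data[:6], []byte("GIF89a"))):
		return "gif", nil
	default:
		n := len(data)
		if n > 4 {
			n = 4
		}
		return "", fmt.Errorf("unsupported or corrupt image (leading bytes % x)", data[:n])
	}
}

func main() {
	imgType, err := sniffImageType([]byte{0xFF, 0xD8, 0xFF, 0xE0})
	fmt.Println(imgType, err) // jpg <nil>
}
```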

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 4762076c-103e-4024-927b-2ea34870c0b3

📥 Commits

Reviewing files that changed from the base of the PR and between c399656 and 51ffba8.

⛔ Files ignored due to path filters (1)
  • backend/go.sum is excluded by !**/*.sum
📒 Files selected for processing (5)
  • backend/app/api/handlers/v1/v1_ctrl_pdf_export.go
  • backend/go.mod
  • backend/internal/core/services/pdf_export.go
  • frontend/lib/api/classes/items.ts
  • frontend/pages/collection/index/tools.vue
✅ Files skipped from review due to trivial changes (1)
  • backend/go.mod
🚧 Files skipped from review as they are similar to previous changes (1)
  • frontend/pages/collection/index/tools.vue

Comment on lines +218 to +249
```ts
// exportBulkPDF triggers a POST request to download a PDF for multiple selected items.
// Returns a Blob that can be used to trigger a file download in the browser.
// Auth is handled by same-origin cookies (same mechanism as window.open for GET exports).
async exportBulkPDF(
  itemIds: string[],
  options: { theme?: string; photos?: boolean; owner?: string } = {}
): Promise<{ data: Blob | null; error: boolean }> {
  const params: Record<string, string> = {};
  if (options.theme) params.theme = options.theme;
  if (options.photos === false) params.photos = "false";
  if (options.owner) params.owner = options.owner;

  const url = route("/items/export/pdf", params);

  try {
    const response = await fetch(url, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      credentials: "same-origin",
      body: JSON.stringify({ itemIds }),
    });

    if (!response.ok) {
      return { data: null, error: true };
    }

    const blob = await response.blob();
    return { data: blob, error: false };
  } catch {
    return { data: null, error: true };
  }
}
```
⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Scripts executed (condensed; repository: sysadminsmedia/homebox):

```bash
#!/bin/bash
# How does this.http.post differ from raw fetch? Check the Requests class,
# custom headers/interceptors, and how other methods handle Blob responses.
rg -n "class Requests" frontend/lib/requests.ts -A 30
rg -n "headers|interceptor|Authorization|X-CSRF" frontend/lib/requests.ts -B 2 -A 2
rg -n "blob\(\)|Blob" frontend/lib/api/ -B 3 -A 3

# Locate and read the base API and Requests implementations.
find frontend/lib/api -type f -name "*.ts"
rg -n "class BaseAPI|this\.http" frontend/lib/api/ -B 2 -A 5
rg -n "class Requests|export.*Requests" frontend/lib/ --type ts --type js
cat -n frontend/lib/api/base/base-api.ts
cat -n frontend/lib/requests/requests.ts

# Check auth handling: credentials, Authorization headers, other raw
# fetch() calls, and any comments about export-endpoint auth.
rg -n "fetch\(" frontend/lib/api/ -B 3 -A 5
rg -n "credentials" frontend/lib/requests/
rg -n "Authorization" frontend/lib/ -B 2 -A 2
rg -n "export.*auth|cookie|same-origin" frontend/lib/api/ -B 2 -A 2
rg -n "export" frontend/lib/api/classes/items.ts -B 2 -A 10
rg -n "constructor" frontend/lib/requests/requests.ts -A 10
```


Raw fetch() is necessary here, but consider adding an Authorization header for API token support.

The code correctly uses raw fetch() because the Requests class cannot properly handle Blob responses: its response parsing returns a ReadableStream, not a Blob. However, the export endpoint currently relies on cookie-based authentication only (credentials: "same-origin") and omits the Authorization header.

This design is intentional per the code comment (matching GET exports via window.open), but it means API-token-based authentication is not supported for this endpoint. If the backend supports bearer token auth for exports, consider adding the header conditionally:

```diff
 const response = await fetch(url, {
   method: "POST",
   headers: {
     "Content-Type": "application/json",
+    ...(token && { Authorization: `Bearer ${token}` }),
   },
   credentials: "same-origin",
   body: JSON.stringify({ itemIds }),
 });
```

Verify with the backend whether the /items/export/pdf endpoint supports both cookie and bearer token authentication, or whether cookie-only is acceptable.


```go
const (
	// maxPDFExportItems caps the number of items in a single PDF export
	// to prevent excessive memory usage and request timeouts.
	maxPDFExportItems = 500
)
```

I would recommend making this a configurable option: those with weak systems might want it lower, and those with very powerful home labs might be able to go much higher.

```go
// addItemPages renders one or more pages of detailed information for a single item.
// Includes: header bar, primary photo, details, purchase/warranty/sold info,
// custom fields, notes, additional photos, receipts, and maintenance history.
func (svc *PDFExportService) addItemPages(
```

The cyclomatic complexity can be reduced by breaking this function down into several smaller, focused helper functions.
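A structural sketch of that decomposition. The `doc` and `Item` types are stand-ins so the example stays self-contained; the real helpers would take `*gofpdf.Fpdf` and `repo.ItemOut`:

```go
package main

import (
	"fmt"
	"strings"
)

// doc stands in for *gofpdf.Fpdf; section just records which
// renderer ran so the orchestration is observable.
type doc struct{ b strings.Builder }

func (d *doc) section(name string) { fmt.Fprintf(&d.b, "[%s]", name) }

// Item stands in for repo.ItemOut.
type Item struct {
	Name   string
	Notes  string
	Photos int
}

// addItemPages stays a thin orchestrator; each concern lives in its
// own helper, keeping per-function cyclomatic complexity low and the
// branches independently testable.
func addItemPages(d *doc, it Item) {
	addItemHeader(d, it)
	addItemDetails(d, it)
	if it.Notes != "" {
		addItemNotes(d, it)
	}
	if it.Photos > 0 {
		addItemPhotos(d, it)
	}
}

func addItemHeader(d *doc, it Item)  { d.section("header:" + it.Name) }
func addItemDetails(d *doc, it Item) { d.section("details") }
func addItemNotes(d *doc, it Item)   { d.section("notes") }
func addItemPhotos(d *doc, it Item)  { d.section("photos") }

func main() {
	d := &doc{}
	addItemPages(d, Item{Name: "Camera", Photos: 2})
	fmt.Println(d.b.String()) // [header:Camera][details][photos]
}
```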

Comment on lines +787 to +794

```go
defer bucket.Close()

// Read the full file from storage
reader, err := bucket.NewReader(ctx, svc.repo.Attachments.GetFullPath(att.Path), nil)
if err != nil {
	return nil, "", fmt.Errorf("failed to read attachment: %w", err)
}
defer reader.Close()
```
Errors from the deferred Close calls should be handled gracefully rather than silently discarded.
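One way to do this without cluttering every call site is a small helper for deferred closes. `closeChecked` and `errCloser` are illustrative names, not existing code:

```go
package main

import (
	"fmt"
	"io"
	"log"
)

// closeChecked runs Close and passes any error to handle instead of
// letting a bare `defer x.Close()` discard it. By the time a deferred
// Close fails the function's result can no longer change, so logging
// is usually the right handler.
func closeChecked(c io.Closer, handle func(error)) {
	if err := c.Close(); err != nil {
		handle(err)
	}
}

// errCloser simulates a reader whose Close fails (e.g. a storage
// bucket flushing on close).
type errCloser struct{}

func (errCloser) Close() error { return fmt.Errorf("flush failed") }

func main() {
	// A real call site would look like:
	//   defer closeChecked(reader, func(err error) {
	//       log.Printf("pdf export: closing attachment reader: %v", err)
	//   })
	closeChecked(errCloser{}, func(err error) { log.Printf("close: %v", err) })
}
```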

Labels

⬆️ enhancement New feature or request go Pull requests that update Go code
