
ALT-38: Desplegar agentic-core como sidecar K8s para matching de rescatistas#9

Merged
lapc506 merged 3 commits into main from
ALT-38-agent-ai-sidecar
Mar 28, 2026

Conversation


@lapc506 lapc506 commented Mar 28, 2026

Summary

  • Creates apps/agent-sidecar/: a Python app with a 4-node LangGraph StateGraph for intelligent rescuer matching
  • Pipeline: fetch_candidates (PostGIS/Haversine) -> enrich_from_graph (FalkorDB Cypher) -> score_candidates (weighted heuristic) -> rank_and_explain
  • gRPC server (MatchingService: FindBestRescuers, RecordRescueOutcome, HealthCheck)
  • NestJS MatchingModule with a gRPC client, GraphQL resolver, and DTOs
  • K8s manifests: sidecar container in the backend pod, ConfigMap, readiness/liveness probes
  • Makefile targets: dev-agent-sidecar-build, dev-agent-sidecar-deploy
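For orientation, the four pipeline stages listed above can be sketched as plain functions threaded over a shared state dict. This is an illustrative outline only: the real implementation wires these as LangGraph StateGraph nodes backed by PostGIS and FalkorDB, and every value, weight, and field name below is made up.

```python
from typing import Any

def fetch_candidates(state: dict[str, Any]) -> dict[str, Any]:
    # Stage 1 (sketch): nearby users within the urgency-based radius.
    state["candidates"] = [
        {"id": "r1", "distance_km": 2.0, "capacity": 3, "reputation": 0.9},
        {"id": "r2", "distance_km": 8.0, "capacity": 1, "reputation": 0.7},
    ]
    return state

def enrich_from_graph(state: dict[str, Any]) -> dict[str, Any]:
    # Stage 2 (sketch): graph-derived attributes such as specialization match.
    for c in state["candidates"]:
        c["specialization_match"] = 1.0 if c["id"] == "r1" else 0.0
    return state

def score_candidates(state: dict[str, Any]) -> dict[str, Any]:
    # Stage 3 (sketch): weighted heuristic; the weights are invented here.
    for c in state["candidates"]:
        c["score"] = (
            0.4 * (1.0 / (1.0 + c["distance_km"]))
            + 0.2 * min(c["capacity"], 5) / 5
            + 0.2 * c["reputation"]
            + 0.2 * c["specialization_match"]
        )
    return state

def rank_and_explain(state: dict[str, Any]) -> dict[str, Any]:
    # Stage 4 (sketch): sort by score and attach a short explanation each.
    ranked = sorted(state["candidates"], key=lambda c: c["score"], reverse=True)
    for c in ranked:
        c["explanation"] = f"{c['id']}: {c['distance_km']} km away, score {c['score']:.2f}"
    state["ranked"] = ranked
    return state

state: dict[str, Any] = {}
for node in (fetch_candidates, enrich_from_graph, score_candidates, rank_and_explain):
    state = node(state)

print(state["ranked"][0]["id"])  # best-scored candidate
```

In the real graph, each function would instead be registered with `add_node` and chained with `add_edge` before compiling.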

Also in agentic-core repo

  • LangGraph integration: working ReAct template, HandleMessage streaming, planner + actor nodes

Linear Issue

ALT-38

OpenSpec Change

openspec/changes/agent-ai-sidecar/ (8 phases; phases 1-6 implemented)

Files Changed (40)

  • 26 new in apps/agent-sidecar/ (Python: graphs, gRPC, config, proto, Dockerfile, tests)
  • 10 new/modified in apps/backend/src/matching/ (NestJS: module, service, resolver, DTOs, proto)
  • 2 K8s manifests (configmap-sidecar.yaml, deployment.yaml updated)
  • 2 Makefile targets added

Depends On

Test plan

  • Verify that the backend compiles: npm run build
  • Verify that the 32 backend tests pass
  • Verify the sidecar app structure: ls apps/agent-sidecar/graphs/nodes/
  • Verify the proto definition: cat apps/agent-sidecar/proto/matching.proto
  • Verify that the K8s deployment includes the sidecar container
  • Verify the Makefile targets: make help | grep sidecar
  • Deploy: make dev-agent-sidecar-build && make dev-agent-sidecar-deploy

Created by Claude Code on behalf of @lapc506

🤖 Generated with Claude Code

Summary by CodeRabbit

Release Notes

  • New Features

    • New rescuer matching system with geolocation-based search and automatic scoring.
    • Rescue alerts with full lifecycle management and search escalation.
    • Notification system with multi-device, multi-platform support.
    • Firebase Cloud Messaging integration for push notifications.
  • Infrastructure

    • New gRPC-based sidecar service for matching processing.
    • Kubernetes configuration and a dedicated container.


linear bot commented Mar 28, 2026


@greptile-apps greptile-apps bot left a comment

Your free trial has ended. If you'd like to continue receiving code reviews, you can add a payment method here.


lapc506 commented Mar 28, 2026

@greptile review


coderabbitai bot commented Mar 28, 2026

Warning

Rate limit exceeded

@lapc506 has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 6 minutes and 23 seconds before requesting another review.

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 6 minutes and 23 seconds.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 036b0f7b-0333-476e-aad2-2434e15c6cf4

📥 Commits

Reviewing files that changed from the base of the PR and between a056180 and e79987a.

⛔ Files ignored due to path filters (2)
  • apps/agent-sidecar/grpc/generated/__init__.py is excluded by !**/generated/**
  • apps/backend/package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (38)
  • Makefile
  • apps/agent-sidecar/Dockerfile
  • apps/agent-sidecar/agents/rescuer-matching.yaml
  • apps/agent-sidecar/config/__init__.py
  • apps/agent-sidecar/config/settings.py
  • apps/agent-sidecar/graphs/__init__.py
  • apps/agent-sidecar/graphs/nodes/__init__.py
  • apps/agent-sidecar/graphs/nodes/enrich_from_graph.py
  • apps/agent-sidecar/graphs/nodes/fetch_candidates.py
  • apps/agent-sidecar/graphs/nodes/rank_and_explain.py
  • apps/agent-sidecar/graphs/nodes/score_candidates.py
  • apps/agent-sidecar/graphs/rescuer_matching_graph.py
  • apps/agent-sidecar/graphs/state.py
  • apps/agent-sidecar/grpc/__init__.py
  • apps/agent-sidecar/grpc/handlers/__init__.py
  • apps/agent-sidecar/grpc/handlers/find_rescuers.py
  • apps/agent-sidecar/grpc/handlers/record_outcome.py
  • apps/agent-sidecar/grpc/interceptors.py
  • apps/agent-sidecar/grpc/server.py
  • apps/agent-sidecar/main.py
  • apps/agent-sidecar/proto/matching.proto
  • apps/agent-sidecar/pyproject.toml
  • apps/agent-sidecar/scripts/gen-proto.sh
  • apps/agent-sidecar/tests/__init__.py
  • apps/agent-sidecar/tests/test_grpc_server.py
  • apps/agent-sidecar/tests/test_matching_graph.py
  • apps/backend/nest-cli.json
  • apps/backend/package.json
  • apps/backend/src/app.module.ts
  • apps/backend/src/matching/dto/matching-input.input.ts
  • apps/backend/src/matching/dto/matching-result.output.ts
  • apps/backend/src/matching/dto/rescuer-candidate.output.ts
  • apps/backend/src/matching/matching.module.ts
  • apps/backend/src/matching/matching.resolver.ts
  • apps/backend/src/matching/matching.service.ts
  • apps/backend/src/matching/proto/matching.proto
  • k8s/base/backend/configmap-sidecar.yaml
  • k8s/base/backend/deployment.yaml
📝 Walkthrough


A Python agent-sidecar microservice is introduced that implements a LangGraph pipeline for rescuer matching. The NestJS backend integrates with it over gRPC. Notification, rescue, and matching modules are added to the backend, along with Kubernetes configuration.

Changes

Cohort / File(s) Summary
Agent-Sidecar - Base Configuration
Makefile, apps/agent-sidecar/Dockerfile, apps/agent-sidecar/pyproject.toml
New dev-agent-sidecar-build and dev-agent-sidecar-deploy targets to build and deploy the sidecar image; multi-stage Dockerfile on Python 3.12; project dependency definitions.
Agent-Sidecar - Configuration
apps/agent-sidecar/config/__init__.py, apps/agent-sidecar/config/settings.py
Pydantic Settings module that loads environment variables (SIDECAR_*); parameters for the DB, gRPC, FalkorDB, observability, and matching (radii, max candidates).
Agent-Sidecar - State and Types
apps/agent-sidecar/graphs/state.py, apps/agent-sidecar/agents/rescuer-matching.yaml
AnimalInfo, CandidateData, and MatchingState type definitions for the pipeline; rescuer-matching agent configuration using the Claude Sonnet 4.6 model.
Agent-Sidecar - Graph Nodes
apps/agent-sidecar/graphs/nodes/fetch_candidates.py, apps/agent-sidecar/graphs/nodes/enrich_from_graph.py, apps/agent-sidecar/graphs/nodes/score_candidates.py, apps/agent-sidecar/graphs/nodes/rank_and_explain.py, apps/agent-sidecar/graphs/nodes/__init__.py
Four sequential nodes: proximity-based candidate retrieval (Haversine), enrichment from FalkorDB (rescue history, specializations), heuristic scoring (distance, capacity, reputation, specialization), and ranking with explanations.
Agent-Sidecar - LangGraph Pipeline
apps/agent-sidecar/graphs/rescuer_matching_graph.py, apps/agent-sidecar/graphs/__init__.py
StateGraph definition with a sequential flow across the four nodes; export of the compiled matching_graph.
Agent-Sidecar - gRPC Server
apps/agent-sidecar/grpc/server.py, apps/agent-sidecar/grpc/interceptors.py, apps/agent-sidecar/grpc/handlers/find_rescuers.py, apps/agent-sidecar/grpc/handlers/record_outcome.py, apps/agent-sidecar/grpc/handlers/__init__.py
Async gRPC server with handlers for FindBestRescuers (runs the matching pipeline) and RecordRescueOutcome (records outcomes in FalkorDB); trace-ID propagation interceptor.
Agent-Sidecar - Protobuf and Entry Point
apps/agent-sidecar/proto/matching.proto, apps/agent-sidecar/scripts/gen-proto.sh, apps/agent-sidecar/main.py
RPC service definitions (FindBestRescuers, RecordRescueOutcome, HealthCheck); gRPC code-generation script; application bootstrap with Prometheus metrics, HTTP health, and gRPC servers.
Agent-Sidecar - Testing
apps/agent-sidecar/tests/test_grpc_server.py, apps/agent-sidecar/tests/test_matching_graph.py
Unit tests for the gRPC handlers and graph nodes (scoring, ranking, explanations).
Backend - Matching gRPC Integration
apps/backend/src/matching/matching.service.ts, apps/backend/src/matching/matching.resolver.ts, apps/backend/src/matching/matching.module.ts, apps/backend/src/matching/proto/matching.proto, apps/backend/src/matching/dto/matching-input.input.ts, apps/backend/src/matching/dto/matching-result.output.ts, apps/backend/src/matching/dto/rescuer-candidate.output.ts
gRPC client that invokes the sidecar; GraphQL resolvers for requestRescuerMatching (mutation) and matchingResult (query); typed input/output DTOs.
Backend - Notifications Module
apps/backend/src/notifications/notifications.service.ts, apps/backend/src/notifications/notifications.resolver.ts, apps/backend/src/notifications/notifications.module.ts, apps/backend/src/notifications/entities/notification.entity.ts, apps/backend/src/notifications/entities/device-token.entity.ts, apps/backend/src/notifications/dto/notification.output.ts, apps/backend/src/notifications/dto/register-device-token.input.ts, apps/backend/src/notifications/enums/notification-type.enum.ts, apps/backend/src/notifications/enums/device-platform.enum.ts, apps/backend/src/notifications/scalars/json.scalar.ts
Full notifications service with FCM (Firebase Cloud Messaging), device token management, read tracking, and cron cleanup; TypeORM entities; GraphQL resolver with register, read, and query operations; GraphQL types and scalars.
Backend - Rescues Module
apps/backend/src/rescues/rescues.service.ts, apps/backend/src/rescues/rescues.resolver.ts, apps/backend/src/rescues/rescues.module.ts, apps/backend/src/rescues/rescue-matching.service.ts, apps/backend/src/rescues/rescue-state-machine.ts, apps/backend/src/rescues/entities/rescue-alert.entity.ts, apps/backend/src/rescues/dto/create-rescue-alert.input.ts, apps/backend/src/rescues/dto/complete-rescue.input.ts, apps/backend/src/rescues/dto/update-rescue-progress.input.ts, apps/backend/src/rescues/enums/rescue-status.enum.ts, apps/backend/src/rescues/enums/rescue-urgency.enum.ts
Full rescue alert lifecycle: creation, acceptance, progress, transfer, completion; state machine for valid transitions; geospatial matching of helpers/rescuers; PostGIS queries; notifications integrated into key operations; GraphQL DTOs and enums.
Backend - Updated Modules
apps/backend/src/app.module.ts, apps/backend/src/abuse-reports/abuse-reports.module.ts, apps/backend/src/abuse-reports/abuse-reports.service.ts, apps/backend/src/captures/captures.module.ts, apps/backend/src/captures/captures.resolver.ts, apps/backend/src/subsidies/subsidies.module.ts, apps/backend/src/subsidies/subsidies.service.ts
NotificationsModule, RescuesModule, and MatchingModule wired into the main app; notifications added to existing flows (abuse reports, captures, subsidies); new TypeORM entities (DeviceToken, Notification, RescueAlert).
Backend - Configuration
apps/backend/.env.example, apps/backend/nest-cli.json, apps/backend/package.json
Environment variable template; Webpack assets added for protobuf; gRPC dependencies (@grpc/grpc-js, @grpc/proto-loader) and the Firebase Admin SDK.
Kubernetes
k8s/base/backend/configmap-sidecar.yaml, k8s/base/backend/deployment.yaml
ConfigMap with sidecar parameters (ports, radii); new agent-sidecar container in the Deployment with health probes, environment variables, and resources.
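The proximity stage described above filters candidates by great-circle distance. A standard Haversine sketch is shown below for reference; it is illustrative only (the production query runs in PostGIS, and the `alert`/`candidates` field names are hypothetical):

```python
import math

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in kilometers between two WGS84 points."""
    r = 6371.0  # mean Earth radius, km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def within_radius(alert: dict, candidates: list[dict], radius_km: float) -> list[dict]:
    # Keep only candidates inside the urgency-based search radius.
    return [
        c for c in candidates
        if haversine_km(alert["lat"], alert["lon"], c["lat"], c["lon"]) <= radius_km
    ]
```

One degree of longitude at the equator is roughly 111 km, which is a convenient sanity check for any implementation like this.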

Sequence Diagrams

sequenceDiagram
    participant Backend as Backend NestJS
    participant Sidecar as Agent-Sidecar<br/>(gRPC)
    participant PG as PostgreSQL
    participant FalkorDB as FalkorDB
    
    Backend->>Sidecar: FindBestRescuers(alertId, coords, urgency)
    activate Sidecar
    
    Sidecar->>PG: Fetch candidates by distance<br/>(radius by urgency)
    activate PG
    PG-->>Sidecar: List of nearby users
    deactivate PG
    
    Sidecar->>FalkorDB: Query rescue history,<br/>specializations per candidate
    activate FalkorDB
    FalkorDB-->>Sidecar: Enriched candidate data
    deactivate FalkorDB
    
    Sidecar->>Sidecar: Score candidates<br/>(distance, capacity,<br/>reputation, specialization)
    
    Sidecar->>Sidecar: Rank & explain<br/>top N candidates
    
    Sidecar-->>Backend: FindRescuersResponse<br/>(candidates, scores,<br/>explanations)
    deactivate Sidecar
    
    Backend->>Backend: Map to GraphQL<br/>RescuerCandidate DTOs
sequenceDiagram
    participant Client as GraphQL Client
    participant Backend as Backend NestJS
    participant Sidecar as Agent-Sidecar
    participant Notify as Notifications
    
    Client->>Backend: requestRescuerMatching(MatchingInput)
    activate Backend
    
    Backend->>Sidecar: gRPC FindBestRescuers
    activate Sidecar
    Sidecar-->>Backend: candidates, scores
    deactivate Sidecar
    
    Backend->>Notify: sendToRole(UserRole.HELPER,<br/>RESCUE_ALERT)
    activate Notify
    Notify-->>Backend: OK
    deactivate Notify
    
    Backend-->>Client: MatchingResult
    deactivate Backend

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title clearly describes the main change: deploying agentic-core as a Kubernetes sidecar for rescuer matching, which aligns with the actual changes (new apps/agent-sidecar, K8s integration, NestJS module).
Docstring Coverage ✅ Passed Docstring coverage is 83.33% which is sufficient. The required threshold is 80.00%.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 19

Note

Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
k8s/base/backend/deployment.yaml (1)

16-92: ⚠️ Potential issue | 🟠 Major

Minimal securityContext settings are missing, and this is already breaking the pipeline.

Both containers and the pod still run with the default context: root is allowed, allowPrivilegeEscalation is not disabled, and readOnlyRootFilesystem is not enabled. Beyond the Trivy/Checkov findings, this leaves the backend and the sidecar more exposed than necessary.

💡 Suggested change
     spec:
+      securityContext:
+        runAsNonRoot: true
+        seccompProfile:
+          type: RuntimeDefault
       containers:
         - name: backend
+          securityContext:
+            allowPrivilegeEscalation: false
+            readOnlyRootFilesystem: true
+            capabilities:
+              drop: ["ALL"]
           image: localhost/altrupets-backend:dev
           imagePullPolicy: IfNotPresent
@@
         - name: agent-sidecar
+          securityContext:
+            allowPrivilegeEscalation: false
+            readOnlyRootFilesystem: true
+            capabilities:
+              drop: ["ALL"]
           image: altrupets-agent-sidecar:dev
           imagePullPolicy: IfNotPresent
As per coding guidelines, `k8s/**`: Kustomize base/overlays pattern. Check for resource limits, probes, and security contexts.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@k8s/base/backend/deployment.yaml` around lines 16 - 92, add minimal security
contexts to the Pod and to both containers (names: backend and agent-sidecar) to
prevent running as root and to disable privilege escalation; concretely, add a
pod-level securityContext under spec with runAsNonRoot: true and an fsGroup (for
example 1000), and in each container (backend and agent-sidecar) add a
securityContext with runAsNonRoot: true, runAsUser: a non-0 UID,
allowPrivilegeEscalation: false, capabilities.drop: ["ALL"], and
readOnlyRootFilesystem: true (and make sure not to use privileged: true or leave
allowPrivilegeEscalation unset); update the corresponding resources/probes
blocks without changing container names to stay compatible with Kustomize
overlays.
🟡 Minor comments (8)
apps/agent-sidecar/grpc/interceptors.py-32-36 (1)

32-36: ⚠️ Potential issue | 🟡 Minor

trace_id_var is not reset after the request completes.

In async scenarios where contexts are reused, the trace ID can incorrectly persist across requests. Consider using a token to reset the context after the call.

🛡️ Proposed fix
     async def intercept_service(
         self,
         continuation: Callable,
         handler_call_details: grpc.HandlerCallDetails,
     ) -> Any:
         # Extract trace_id from invocation metadata
         metadata = dict(handler_call_details.invocation_metadata or [])
         incoming_trace_id = metadata.get("trace-id") or metadata.get("x-trace-id")
 
+        token = trace_id_var.set(incoming_trace_id)
-        if incoming_trace_id:
-            trace_id_var.set(incoming_trace_id)
-            logger.debug("trace_id_extracted", trace_id=incoming_trace_id)
+        if incoming_trace_id:
+            logger.debug("trace_id_extracted", trace_id=incoming_trace_id)
 
-        return await continuation(handler_call_details)
+        try:
+            return await continuation(handler_call_details)
+        finally:
+            trace_id_var.reset(token)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/agent-sidecar/grpc/interceptors.py` around lines 32 - 36, trace_id_var
is set but never reset, which can leak context between requests; modify the
interceptor to capture the token returned by trace_id_var.set(incoming_trace_id)
(e.g. token = trace_id_var.set(...)) and wrap the await
continuation(handler_call_details) call in a try/finally block that calls
trace_id_var.reset(token) in finally; use the same variables (trace_id_var,
incoming_trace_id, continuation, handler_call_details) and make sure to reset
only when the token was actually set.
apps/backend/src/notifications/scalars/json.scalar.ts-24-26 (1)

24-26: ⚠️ Potential issue | 🟡 Minor

Throw an explicit error instead of returning null for invalid literals in the JSON scalar.

The code at lines 24-26 returns null for unsupported AST types. Per the official graphql-js v16 documentation, custom scalars should validate the AST node type and throw a TypeError for unsupported types, rather than silently accepting invalid input.

Recommended change
     default:
-      return null;
+      throw new TypeError(`JSON cannot represent literals of type: ${ast.kind}`);
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/notifications/scalars/json.scalar.ts` around lines 24 - 26,
The JSON scalar's AST handling currently returns null for unsupported AST node
kinds; update the parseLiteral function in json.scalar.ts to validate the
node.kind and throw a TypeError instead of returning null for any unsupported
node type (include node.kind in the error message for clarity), so invalid
literal nodes are rejected per graphql-js v16 guidance.
apps/backend/src/rescues/dto/update-rescue-progress.input.ts-16-19 (1)

16-19: ⚠️ Potential issue | 🟡 Minor

Individual element validation is missing for the imageBase64s array.

@IsArray() only validates that the value is an array; it does not validate that each element is a string. Consider adding @IsString({ each: true }) to validate the individual elements.

🛡️ Proposed fix
   `@Field`(() => [String], { nullable: true })
   `@IsArray`()
+  `@IsString`({ each: true })
   `@IsOptional`()
   imageBase64s?: string[];
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/rescues/dto/update-rescue-progress.input.ts` around lines 16
- 19, The imageBase64s property currently only has `@IsArray`() and thus doesn't
validate element types; update the validation decorators on the imageBase64s
field (property name imageBase64s in update-rescue-progress.input) to include
per-item string validation by adding `@IsString`({ each: true }) alongside
`@IsArray`() and `@IsOptional`(), so each array element is validated as a string.
Makefile-768-772 (1)

768-772: ⚠️ Potential issue | 🟡 Minor

Missing namespace in the configmap kubectl apply.

The command on line 770 applies the configmap without specifying the -n altrupets-dev namespace, while the rollout restart does specify it. If the default kubectl context points at another namespace, the configmap will be created in the wrong place.

🔧 Proposed fix
 dev-agent-sidecar-deploy: ## Deploy agent-sidecar (rebuild + apply)
 	@$(MAKE) dev-agent-sidecar-build
-	`@kubectl` apply -f k8s/base/backend/configmap-sidecar.yaml
+	`@kubectl` apply -f k8s/base/backend/configmap-sidecar.yaml -n altrupets-dev
 	`@kubectl` rollout restart deployment/backend -n altrupets-dev
 	`@echo` "$(GREEN)Agent sidecar deployed$(NC)"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Makefile` around lines 768 - 772, the dev-agent-sidecar-deploy rule runs
kubectl apply -f k8s/base/backend/configmap-sidecar.yaml without specifying a
namespace, which can create the ConfigMap in the wrong namespace; update the
kubectl apply invocation inside dev-agent-sidecar-deploy to add -n altrupets-dev
(the same namespace used in kubectl rollout restart deployment/backend -n
altrupets-dev) so configmap-sidecar.yaml is always applied in the correct
namespace.
apps/agent-sidecar/grpc/server.py-59-67 (1)

59-67: ⚠️ Potential issue | 🟡 Minor

Possible connection leak in HealthCheck if fetchval fails.

If conn.fetchval() raises an exception, conn.close() will not run, leaving the connection open.

🛡️ Proposal to ensure the connection is closed
 # Check PostgreSQL
 try:
     import asyncpg
 
     conn = await asyncpg.connect(settings.database_url)
-    await conn.fetchval("SELECT 1")
-    await conn.close()
-    dependencies["postgresql"] = "OK"
+    try:
+        await conn.fetchval("SELECT 1")
+        dependencies["postgresql"] = "OK"
+    finally:
+        await conn.close()
 except Exception:
     dependencies["postgresql"] = "UNAVAILABLE"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/agent-sidecar/grpc/server.py` around lines 59 - 67, the HealthCheck
block can leak the connection if conn.fetchval() raises an exception; modify the
logic around asyncpg.connect/conn.fetchval to guarantee conn.close() is always
called: create the connection with asyncpg.connect in a conn variable, use
try/except/finally where the finally block checks that conn exists and is still
open and closes it (await conn.close()), and in the except block keep assigning
dependencies["postgresql"] = "UNAVAILABLE" and log the error if appropriate;
adjust the containing function to use this structure (references:
asyncpg.connect, conn.fetchval, conn.close, dependencies["postgresql"]).
apps/backend/src/rescues/entities/rescue-alert.entity.ts-127-130 (1)

127-130: ⚠️ Potential issue | 🟡 Minor

Inconsistency between nullable: false on the relation and an optional property.

The reportedBy relation has { nullable: false }, indicating it must always exist, but the TypeScript property uses ?, suggesting it can be undefined. This is confusing and can cause runtime errors.

🐛 Proposal to fix the inconsistency

If the reporter is required:

 `@Field`(() => User, { nullable: true })
 `@ManyToOne`(() => User, { nullable: false })
 `@JoinColumn`({ name: 'reportedById' })
-reportedBy?: User;
+reportedBy: User;

Or, if it is optional (unlikely given the business context):

 `@Field`(() => User, { nullable: true })
-@ManyToOne(() => User, { nullable: false })
+@ManyToOne(() => User, { nullable: true })
 `@JoinColumn`({ name: 'reportedById' })
 reportedBy?: User;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/rescues/entities/rescue-alert.entity.ts` around lines 127 -
130, the reportedBy property on the RescueAlert entity is inconsistent: the
`@ManyToOne`(() => User, { nullable: false }) decorator makes the relation
required, but the TypeScript property is marked optional (reportedBy?: User).
Fix the inconsistency by choosing one of the two options and applying it to the
reportedBy symbol and its decorator: if the reporter is required, remove the
question mark and keep nullable: false; if the reporter may be absent, change
nullable: false to nullable: true on `@ManyToOne` and keep reportedBy?: User;
make sure anything that uses reportedBy (constructors, DTOs, mappers) reflects
the same requiredness.
apps/backend/src/rescues/rescues.resolver.ts-97-106 (1)

97-106: ⚠️ Potential issue | 🟡 Minor

No range validation for location coordinates.

The updateMyLocation mutation accepts any latitude and longitude values without validating that they fall within valid ranges (-90 to 90 for latitude, -180 to 180 for longitude). Invalid coordinates could cause errors in distance calculations or spatial queries.

💡 Suggestion: add coordinate validation

You can add validation in the input DTO or directly in the resolver:

`@Mutation`(() => Boolean)
`@UseGuards`(JwtAuthGuard)
async updateMyLocation(
  `@Args`('latitude', { type: () => Float }) latitude: number,
  `@Args`('longitude', { type: () => Float }) longitude: number,
  `@GqlUser`() user: User,
): Promise<boolean> {
  if (latitude < -90 || latitude > 90) {
    throw new BadRequestException('Latitude must be between -90 and 90');
  }
  if (longitude < -180 || longitude > 180) {
    throw new BadRequestException('Longitude must be between -180 and 180');
  }
  await this.matchingService.updateUserLocation(user.id, latitude, longitude);
  return true;
}

Or better yet, create a DTO with class-validator validation decorators.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/rescues/rescues.resolver.ts` around lines 97 - 106, The
updateMyLocation resolver currently accepts any latitude/longitude; add range
validation before calling matchingService.updateUserLocation: ensure latitude is
between -90 and 90 and longitude between -180 and 180, and throw a
BadRequestException with a clear message if out of range (or replace the args
with a validated DTO using class-validator decorators for these bounds). Update
the method handling in updateMyLocation (and keep JwtAuthGuard/GqlUser usage) so
invalid coordinates are rejected early and only valid values are passed to
matchingService.updateUserLocation.
apps/backend/src/notifications/notifications.service.ts-66-70 (1)

66-70: ⚠️ Potential issue | 🟡 Minor

Possible error if the Firebase Admin SDK is already initialized.

If admin.initializeApp() is called more than once (for example, in tests or during hot reload), it will throw an error. You should check whether an app has already been initialized.

🐛 Proposed fix
       const serviceAccount = JSON.parse(serviceAccountJson);
+      
+      // Avoid re-initialization error in tests or hot-reload
+      if (admin.apps.length === 0) {
         admin.initializeApp({
           credential: admin.credential.cert(serviceAccount),
         });
+      }

       this.firebaseInitialized = true;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/notifications/notifications.service.ts` around lines 66 -
70, avoid calling admin.initializeApp() if a Firebase app is already
initialized: before initializing, check admin.apps && admin.apps.length (or try
admin.app() inside a try/catch) and only call admin.initializeApp({ credential:
admin.credential.cert(serviceAccount) }) when no previous app exists; reference
the serviceAccount and admin.initializeApp() symbols to locate and modify the
current block where serviceAccountJson is parsed and the app is initialized.
🧹 Nitpick comments (23)
apps/agent-sidecar/scripts/gen-proto.sh (1)

1-7: Improve script robustness with error handling.

The script should fail explicitly if cd fails or if proto generation fails.

♻️ Proposed fix
 #!/bin/bash
+set -euo pipefail
+
-cd "$(dirname "$0")/.."
+cd "$(dirname "$0")/.." || exit 1
+
+# Ensure output directory exists
+mkdir -p grpc/generated/
+
 python -m grpc_tools.protoc \
   -I proto/ \
   --python_out=grpc/generated/ \
   --grpc_python_out=grpc/generated/ \
   proto/matching.proto
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/agent-sidecar/scripts/gen-proto.sh` around lines 1 - 7, The gen-proto.sh
script should fail fast on errors: enable strict error handling (e.g., set -euo
pipefail or equivalent) at the top of the script and guard the cd and protoc
steps so failures are detected and cause an explicit exit; ensure the initial cd
"$(dirname "$0")/.." invocation is checked and that the python -m
grpc_tools.protoc invocation's exit code is propagated (exit with its non-zero
status) so the CI or caller sees generation failures.
apps/backend/src/notifications/dto/register-device-token.input.ts (1)

12-13: Consider adding @IsEnum validation for the platform field.

Although GraphQL validates the enum at the schema level, adding @IsEnum(DevicePlatform) provides consistent runtime validation with class-validator, aligning with the pattern used for token.

♻️ Suggested change
+import { IsNotEmpty, IsString, IsEnum } from 'class-validator';
-import { IsNotEmpty, IsString } from 'class-validator';
  `@Field`(() => DevicePlatform)
+ `@IsEnum`(DevicePlatform)
  platform: DevicePlatform;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/notifications/dto/register-device-token.input.ts` around
lines 12 - 13, Add the `@IsEnum`(DevicePlatform) validation to the platform
field so that runtime validation is consistent with the one used for token;
import IsEnum from class-validator and apply it next to `@Field`(() =>
DevicePlatform) in the class (the same place where the platform property is
declared) so that only DevicePlatform enum values are accepted during
validation.
apps/backend/src/notifications/entities/device-token.entity.ts (1)

34-36: Consider adding a length constraint to the token field.

Device tokens (FCM/APNs) can be long (~250+ characters). Specifying a maximum length avoids truncation issues and documents the expectation.

♻️ Suggested change
  `@Field`()
- `@Column`()
+ `@Column`({ length: 512 })
  token: string;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/notifications/entities/device-token.entity.ts` around lines
34 - 36, Add a length constraint to the device token column to avoid truncation:
update the DeviceToken entity's token field by changing the `@Column`() decorator
to include a length (e.g., `@Column`({ length: 512 })) and optionally add a
validation decorator like `@MaxLength`(512) from class-validator on the token
property to enforce it at the DTO/validation layer; keep the GraphQL `@Field`()
as-is. Ensure you modify the token property declaration in the DeviceToken
(device-token.entity) class to reflect these changes.
apps/agent-sidecar/proto/matching.proto (2)

14-14: Consider using an enum for urgency instead of a string.

Defining urgency as a Proto enum provides compile-time validation and explicit documentation of the valid values.

♻️ Suggested change
+enum Urgency {
+  URGENCY_UNSPECIFIED = 0;
+  LOW = 1;
+  MEDIUM = 2;
+  HIGH = 3;
+  CRITICAL = 4;
+}
+
 message FindRescuersRequest {
   string rescue_alert_id = 1;
   double latitude = 2;
   double longitude = 3;
-  string urgency = 4;  // LOW, MEDIUM, HIGH, CRITICAL
+  Urgency urgency = 4;
   string animal_type = 5;
   int32 max_candidates = 6;
   string trace_id = 7;
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/agent-sidecar/proto/matching.proto` at line 14, Replace the string-typed
field "urgency" with a Proto enum to enforce valid values: add an enum (e.g.,
"enum Urgency { URGENCY_UNSPECIFIED = 0; LOW = 1; MEDIUM = 2; HIGH = 3;
CRITICAL = 4; }") in matching.proto and change the field declaration from
"string urgency = 4;" to use that enum ("Urgency urgency = 4;"), then
regenerate any language-specific protobuf artifacts and update places that set
or read the urgency field to use the enum values.

1-2: Buf lint: directory structure does not match the package.

The Buf linter reports that files with package altrupets.matching should live under altrupets/matching/. In a monorepo this may be intentional, but consider configuring buf.yaml to ignore this rule, or reorganizing if Buf will be used for proto management.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/agent-sidecar/proto/matching.proto` around lines 1 - 2, The proto
package declaration (package altrupets.matching) doesn't match the directory
structure, which Buf lint flags; fix it by either moving the proto file so its
filesystem path mirrors the package (place the proto under altrupets/matching/
with the same filename, e.g., matching.proto) or update your buf.yaml to accept
the current layout (adjust module roots/build roots to include the current proto
directory or disable the directory-structure lint rule for this package) so Buf
stops reporting the mismatch.
apps/backend/.env.example (1)

1-28: Consider adding AGENT_GRPC_URL to document the sidecar integration.

The summary states that the K8s deployment sets AGENT_GRPC_URL for gRPC communication with the agent-sidecar. Documenting it here would help completeness.

📝 Suggestion
 # Environment
 NODE_ENV=development
+
+# Agent Sidecar gRPC (for rescuer matching)
+# AGENT_GRPC_URL=localhost:50051
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/.env.example` around lines 1 - 28, Add an AGENT_GRPC_URL entry
to the environment example so deployments that configure the agent-sidecar have
documented configuration; update the .env example by adding a commented or
default line for AGENT_GRPC_URL (e.g. AGENT_GRPC_URL=localhost:50051) and a
short comment explaining it is the gRPC endpoint for the agent-sidecar used by
the application to communicate with the agent.
apps/agent-sidecar/Dockerfile (1)

8-10: Consider pruning unnecessary files from the runtime image.

COPY . . pulls development files (tests, pyproject.toml, etc.) into the final image. Consider a .dockerignore, or copy only the files needed at runtime.

♻️ Example .dockerignore
# .dockerignore
__pycache__/
*.pyc
*.pyo
.pytest_cache/
tests/
.git/
.gitignore
*.md
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/agent-sidecar/Dockerfile` around lines 8 - 10, The final image is
copying the entire context with the `COPY . .` in the Dockerfile which pulls dev
files into the runtime image; add a `.dockerignore` excluding tests, VCS, build
artifacts and config files (e.g., tests/, __pycache__/, .git/, pyproject.toml,
*.md) or replace `COPY . .` with explicit runtime-only copies (e.g., copy the
installed site-packages and only the application package/entrypoint) so only
necessary runtime files are included alongside the existing `COPY --from=builder
/usr/local/lib/python3.12/site-packages` and `COPY --from=builder
/usr/local/bin`.
k8s/base/backend/configmap-sidecar.yaml (1)

5-5: Avoid pinning the environment namespace in the base.

On Line 5, namespace: altrupets-dev couples this resource to the dev environment; the base should omit the namespace and apply it in per-environment overlays.

♻️ Proposed adjustment
 apiVersion: v1
 kind: ConfigMap
 metadata:
   name: agent-sidecar-config
-  namespace: altrupets-dev
 data:

As per coding guidelines k8s/**: Kustomize base/overlays pattern. Check for resource limits, probes, and security contexts.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@k8s/base/backend/configmap-sidecar.yaml` at line 5, Remove the
"namespace: altrupets-dev" line from the base resource (configmap-sidecar.yaml)
to avoid coupling the base to the dev environment; instead, define the
namespace in the overlays (add "namespace: altrupets-dev" inside the
corresponding overlay's kustomization.yaml) and check that there are no other
occurrences of the same attribute in the base folder; also verify that the
overlays include the necessary resources/patches and that the base
kustomization.yaml ends up without a namespace, respecting the base/overlays pattern.
apps/backend/src/captures/captures.resolver.ts (1)

45-56: The fire-and-forget notification pattern is correct, but consider minimal logging.

Not blocking the main flow on notification failures is appropriate. However, swallowing errors entirely with .catch(() => {}) can make production issues hard to diagnose.

💡 Suggestion: add minimal logging for observability
         this.notificationsService
             .sendToRole({
                 role: UserRole.HELPER,
                 type: NotificationType.RESCUE_ALERT,
                 title: 'New Rescue Alert',
                 body: `A new ${input.animalType} rescue request has been reported`,
                 referenceId: capture.id,
                 referenceType: 'CaptureRequest',
             })
-            .catch(() => {
-                // Notification failures should not block the main flow
+            .catch((err) => {
+                // Notification failures should not block the main flow
+                console.warn('Notification send failed for capture', capture.id, err.message);
             });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/captures/captures.resolver.ts` around lines 45 - 56, The
fire-and-forget notificationsService.sendToRole call currently swallows errors
via .catch(() => {}) which hides failures; update the catch on
notificationsService.sendToRole (the RESCUE_ALERT notification to
UserRole.HELPER) to log a minimal warning/error including context (e.g.,
capture.id, input.animalType and NotificationType.RESCUE_ALERT) rather than an
empty handler so notification failures are visible but still don’t block the
main flow; use the resolver's existing logger or a lightweight logging helper to
record the error and any useful metadata in the catch block.
apps/agent-sidecar/main.py (1)

70-72: Possible module-name confusion with grpc.

The import from grpc.server import serve works because the project has a local grpc/ directory, but the name can be confused with the grpcio package, which also exposes a grpc module. If this file ever needs to import from grpcio, there will be conflicts.

Consider renaming the grpc/ directory to something more specific such as grpc_handlers/ or rpc/ to avoid shadowing the standard gRPC package.
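A minimal standard-library sketch of the shadowing effect; the package contents here are synthetic, and only the directory name mirrors the sidecar layout:

```python
import sys
import tempfile
from pathlib import Path

# Build a throwaway project root containing a local "grpc" package,
# mimicking apps/agent-sidecar/grpc/.
with tempfile.TemporaryDirectory() as tmp:
    pkg = Path(tmp) / "grpc"
    pkg.mkdir()
    (pkg / "__init__.py").write_text("")
    (pkg / "server.py").write_text("def serve():\n    return 'local serve'\n")

    sys.path.insert(0, tmp)            # same effect as running main.py from the project root
    sys.modules.pop("grpc", None)      # drop any previously imported grpc
    sys.modules.pop("grpc.server", None)

    # Resolves to the LOCAL package, shadowing grpcio's "grpc" module.
    from grpc.server import serve

    print(serve())  # -> local serve
    sys.path.remove(tmp)
```

Because the project root sits first on sys.path, the local directory always wins; renaming it is the only robust way to reach grpcio from this file.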

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/agent-sidecar/main.py` around lines 70 - 72, The local package directory
named "grpc/" shadows the external grpcio package; rename the directory (e.g.,
to "grpc_handlers" or "rpc") and update imports in this file to match (replace
the import statement that references grpc.server with the new package name) so
the serve symbol (await serve()) continues to resolve without conflicting with
the grpcio module.
apps/agent-sidecar/tests/test_grpc_server.py (1)

42-58: Don't make this test depend on real DNS/sockets.

Using nonexistent-host makes the result depend on resolver timeouts and the runner's network. For a unit test, mock FalkorDB or graph.query so it fails immediately.
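A sketch of the suggested mocking approach; this handle_record_outcome is a self-contained stand-in that only mirrors the described contract of the real handler ({"accepted": False} on persistence failure):

```python
from unittest.mock import MagicMock

def handle_record_outcome(graph) -> dict:
    """Toy stand-in for the real sidecar handler: persist the outcome via
    graph.query, answer {"accepted": False} on any failure."""
    try:
        graph.query("MERGE (:RescueOutcome {id: $id})")  # illustrative Cypher
        return {"accepted": True}
    except Exception:
        return {"accepted": False}

def test_record_outcome_fails_fast() -> None:
    graph = MagicMock()
    # Fail immediately instead of waiting on DNS resolution for a fake host.
    graph.query.side_effect = ConnectionError("connection refused")
    assert handle_record_outcome(graph) == {"accepted": False}

test_record_outcome_fails_fast()
```

The test now exercises only the error-handling branch and finishes in microseconds regardless of the runner's network.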

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/agent-sidecar/tests/test_grpc_server.py` around lines 42 - 58, La prueba
no debe depender de DNS/sockets reales; en lugar de usar "nonexistent-host" haga
que la llamada a la persistencia falle inmediatamente mediante un mock más
directo: patchée la clase/cliente FalkorDB o el método graph.query que usa
handle_record_outcome para que lance una excepción o devuelva un error
controlado, y verifique que handle_record_outcome devuelva {"accepted": False};
ubique el uso de FalkorDB/graph.query dentro de handle_record_outcome y
sustituya esa dependencia en la prueba con un mock que falle de inmediato (por
ejemplo configurar side_effect=Exception(...) o return_value apropiado).
apps/backend/src/rescues/entities/rescue-alert.entity.ts (1)

45-53: The any type for location weakens type-safety.

The location field is typed as any instead of the imported Point type, losing the benefits of static typing.

♻️ Proposal to use the Point type
 `@Field`(() => Point, { nullable: true })
 `@Index`({ spatial: true })
 `@Column`({
   type: 'geometry',
   spatialFeatureType: 'Point',
   srid: 4326,
   nullable: true,
 })
-location?: any;
+location?: Point;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/rescues/entities/rescue-alert.entity.ts` around lines 45 -
53, The location property is currently typed as any which breaks type-safety;
change its TypeScript type to the imported Point type (i.e., replace "location?:
any" with "location?: Point") in the RescueAlert entity and ensure the Point
import is the correct Point interface used across the file (or replace/augment
the import to use GeoJSON/Point if needed); keep the existing decorators
(`@Field`(() => Point...), `@Index`, `@Column`(...)) but update any related usages or
tests that assume any to match the stronger Point type.
apps/backend/src/notifications/entities/notification.entity.ts (1)

28-31: Consider specifying onDelete behavior on the ManyToOne relation.

The relation to User does not define onDelete. If a user is deleted, notifications will either be orphaned or cause FK errors, depending on the database configuration.

♻️ Proposal to handle user deletion
 `@Field`(() => User)
-@ManyToOne(() => User)
+@ManyToOne(() => User, { onDelete: 'CASCADE' })
 `@JoinColumn`({ name: 'userId' })
 user: User;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/notifications/entities/notification.entity.ts` around lines
28 - 31, The ManyToOne relation in the Notification entity (the user property /
JoinColumn name 'userId') does not specify onDelete behavior: decide on and
implement an explicit behavior (e.g., onDelete: 'SET NULL' to keep
notifications when users are deleted, or onDelete: 'CASCADE' to delete them
along with the user) by adding onDelete to the `@ManyToOne`(() => User, {
onDelete: 'SET NULL' }) decorator or equivalent, and adjust the nullability of
the user field / userId column (mark user?: User and the userId column as
nullable) to reflect the chosen option and avoid FK errors.
apps/agent-sidecar/graphs/nodes/enrich_from_graph.py (1)

64-71: A FalkorDB connection is created on every invocation.

A new FalkorDB connection is opened for each call to enrich_from_graph. For high request volume, consider a connection pool or a singleton.
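A minimal sketch of the lazy-singleton pattern; DummyFalkorDB is a stand-in so the snippet runs without a database, and get_falkordb_client is the hypothetical helper name used in the prompt below:

```python
from functools import lru_cache

class DummyFalkorDB:
    """Stand-in for the real FalkorDB client so the sketch runs anywhere."""
    instances = 0

    def __init__(self, host: str, port: int):
        DummyFalkorDB.instances += 1  # count how many connections get opened
        self.host, self.port = host, port

    def select_graph(self, name: str) -> str:
        return f"graph:{name}"

@lru_cache(maxsize=1)
def get_falkordb_client() -> DummyFalkorDB:
    # Lazily created on first use, then cached for every later call.
    return DummyFalkorDB(host="localhost", port=6379)

get_falkordb_client().select_graph("altrupets")
get_falkordb_client().select_graph("altrupets")
assert DummyFalkorDB.instances == 1  # a single connection is reused
```

lru_cache(maxsize=1) gives module-level lazy initialization without a global statement; a real pool would add health checks and reconnection on failure.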

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/agent-sidecar/graphs/nodes/enrich_from_graph.py` around lines 64 - 71,
A new FalkorDB connection is created on every call to enrich_from_graph; change
this to reuse a client instead of instantiating it each time: extract the
FalkorDB client creation logic out of enrich_from_graph (e.g., into a
get_falkordb_client function or a module-level singleton/pooled client) that
lazily initializes and caches the instance (see FalkorDB, db, and graph in the
diff), and in enrich_from_graph simply reuse that instance to call
select_graph("altrupets"); keep the except block that logs with logger.warning
if initialization fails, but avoid recreating connections on every invocation.
apps/backend/src/notifications/notifications.resolver.ts (1)

59-61: Consider capping the maximum limit value to prevent excessive queries.

The limit parameter is not validated. A client could request limit=100000, causing expensive queries.

♻️ Proposal to clamp `limit`
 `@Args`('limit', { type: () => Int, nullable: true, defaultValue: 20 }) limit: number,

In the service, or here in the resolver:

const sanitizedLimit = Math.min(Math.max(limit, 1), 100);
return this.notificationsService.getUserNotifications(user.id, {
  page,
  limit: sanitizedLimit,
  unreadOnly,
});
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/notifications/notifications.resolver.ts` around lines 59 -
61, Clamp the incoming limit parameter to a safe range before querying to
prevent expensive requests: in the notifications.resolver (where `@Args`('page')
page, `@Args`('limit') limit, `@Args`('unreadOnly') unreadOnly are read), compute a
sanitizedLimit (e.g., Math.min(Math.max(limit, 1), 100)) and pass that
sanitizedLimit into notificationsService.getUserNotifications(user.id, { page,
limit: sanitizedLimit, unreadOnly }); alternatively enforce the same clamp
inside notificationsService.getUserNotifications for centralized validation.
apps/backend/src/rescues/dto/create-rescue-alert.input.ts (1)

32-35: Consider validating the elements of the imageBase64s array.

@IsArray() validates that the value is an array, but not its individual elements. If each element must be a valid base64 string, add @IsString({ each: true }).

♻️ Proposal to validate elements
 `@Field`(() => [String], { nullable: true })
 `@IsArray`()
+@IsString({ each: true })
 `@IsOptional`()
 imageBase64s?: string[];
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/rescues/dto/create-rescue-alert.input.ts` around lines 32 -
35, The imageBase64s property uses `@IsArray`() but doesn't validate each element;
update the decorators on imageBase64s (the property in
create-rescue-alert.input.ts) to include element-level validation by adding
`@IsString`({ each: true }) (or `@IsBase64`({ each: true }) if you want
base64-specific checks) and import the corresponding validator from
class-validator so each array item is validated as a string/base64.
apps/backend/src/notifications/notifications.service.ts (2)

172-197: Sequential notification sending to multiple users.

sendToUsers processes users sequentially with for...of and await. For a large number of users this can be slow. Consider Promise.all with a concurrency limit to improve throughput.

♻️ Alternative with bounded concurrency
async sendToUsers(params: {
  userIds: string[];
  // ...
}): Promise<Notification[]> {
  const CONCURRENCY_LIMIT = 10;
  const results: Notification[] = [];
  
  for (let i = 0; i < params.userIds.length; i += CONCURRENCY_LIMIT) {
    const batch = params.userIds.slice(i, i + CONCURRENCY_LIMIT);
    const batchResults = await Promise.all(
      batch.map(userId => this.sendToUser({
        userId,
        type: params.type,
        title: params.title,
        body: params.body,
        data: params.data,
        referenceId: params.referenceId,
        referenceType: params.referenceType,
      }))
    );
    results.push(...batchResults);
  }
  
  return results;
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/notifications/notifications.service.ts` around lines 172 -
197, sendToUsers currently awaits sendToUser sequentially causing slow
performance for many userIds; change it to run sendToUser calls in parallel with
a controlled concurrency (e.g., define CONCURRENCY_LIMIT = 10) by batching
params.userIds into chunks, using Promise.all on each batch of
this.sendToUser(...) calls, collecting batch results into the results array
(preserving order if needed), and returning the aggregated results; update the
implementation inside sendToUsers to perform these batched Promise.all calls
instead of the for...of await loop.

57-80: Direct use of process.env instead of ConfigService.

The service reads process.env.FIREBASE_SERVICE_ACCOUNT_JSON directly, while other configuration in the project goes through ConfigService. This is inconsistent and makes testing and configuration management harder.

♻️ Suggested refactor
+import { ConfigService } from '@nestjs/config';

 `@Injectable`()
 export class NotificationsService implements OnModuleInit {
   private readonly logger = new Logger(NotificationsService.name);
   private firebaseInitialized = false;

   constructor(
     `@InjectRepository`(DeviceToken)
     private readonly deviceTokenRepository: Repository<DeviceToken>,
     `@InjectRepository`(Notification)
     private readonly notificationRepository: Repository<Notification>,
     private readonly entityManager: EntityManager,
+    private readonly configService: ConfigService,
   ) {}

   private initFirebase(): void {
     try {
-      const serviceAccountJson = process.env.FIREBASE_SERVICE_ACCOUNT_JSON;
+      const serviceAccountJson = this.configService.get<string>('FIREBASE_SERVICE_ACCOUNT_JSON');
       if (!serviceAccountJson) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/notifications/notifications.service.ts` around lines 57 -
80, The initFirebase method reads process.env.FIREBASE_SERVICE_ACCOUNT_JSON
directly; inject and use ConfigService instead to be consistent and testable:
add ConfigService to the NotificationsService constructor, replace
process.env.FIREBASE_SERVICE_ACCOUNT_JSON with
this.configService.get<string>('FIREBASE_SERVICE_ACCOUNT_JSON'), keep the same
null/empty handling and JSON.parse logic, and update any imports and unit tests
to provide a mocked ConfigService rather than mutating process.env; refer to
initFirebase and the NotificationsService constructor to locate the changes.
apps/agent-sidecar/graphs/state.py (1)

18-34: Mutable default values in a Pydantic model.

Although Pydantic v2 handles mutable defaults like list and dict correctly, it is good practice to use Field(default_factory=...) for clarity and to avoid surprises if the model is used outside a Pydantic context.

♻️ Suggested refactor
+from pydantic import BaseModel, Field
+
 class CandidateData(BaseModel):
     """Data for a single rescuer candidate throughout the matching pipeline."""

     user_id: str
     name: str
     distance_km: float
     available_capacity: int
     roles: list[str]
     # Enrichment fields (filled by enrich_from_graph)
     rescue_count: int = 0
-    species_specializations: list[str] = []
+    species_specializations: list[str] = Field(default_factory=list)
     vet_network_size: int = 0
     endorsement_count: int = 0
     # Scoring fields (filled by score_candidates)
     score: float = 0.0
-    score_breakdown: dict[str, float] = {}
+    score_breakdown: dict[str, float] = Field(default_factory=dict)
     explanation: str = ""
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/agent-sidecar/graphs/state.py` around lines 18 - 34, The CandidateData
class uses mutable lists and dicts as default values (species_specializations,
score_breakdown), which are better replaced with Field(...,
default_factory=...) for clarity and safety; update the mutable attributes in
CandidateData to use pydantic's Field with default_factory=list or
default_factory=dict as appropriate, keeping the current types and default
values (rescue_count, vet_network_size, endorsement_count, score, explanation
are unchanged).
apps/backend/src/rescues/rescues.service.ts (1)

150-153: null as any indicates a typing problem.

The assignment saved.auxiliarId = null as any is a hack to work around TypeScript errors. It suggests the RescueAlert entity declares auxiliarId as non-nullable when it should be optional.

♻️ Suggestion

Check the RescueAlert entity definition and make sure auxiliarId is declared as nullable:

// In rescue-alert.entity.ts
`@Column`({ nullable: true })
auxiliarId?: string | null;

Then you can assign directly:

-        saved.auxiliarId = null as any;
+        saved.auxiliarId = null;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/rescues/rescues.service.ts` around lines 150 - 153, The
assignment "saved.auxiliarId = null as any" is a typing hack; open the
RescueAlert entity (rescue-alert.entity.ts) and mark auxiliarId as
nullable/optional (e.g., Column({ nullable: true }) with type auxiliarId?:
string | null) so it can accept null, then remove the "as any" and assign null
directly in the method where "saved.auxiliarId" appears (dropping the cast)
and, if applicable, generate/run a migration to update the DB schema.
apps/backend/src/matching/proto/matching.proto (1)

1-2: The proto file's directory does not match the declared package.

Buf reports that files with package altrupets.matching should live in an altrupets/matching directory. The current file is in apps/backend/src/matching/proto/. While this can work, it violates standard proto organization conventions and may cause problems with some code-generation tooling.

Consider one of these options:

  1. Move the proto to proto/altrupets/matching/matching.proto at the project root
  2. Create a shared protos/ directory used by both the backend and the sidecar
  3. If the current location is intentional, document the reason and configure Buf to ignore this rule
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/matching/proto/matching.proto` around lines 1 - 2, The
declared protobuf package is altrupets.matching, but the file does not live in
the corresponding directory path; fix this by moving the proto containing
"package altrupets.matching;" to a directory that mirrors the package (e.g.,
proto/altrupets/matching/matching.proto), or create a shared protos/ directory
and move the file there, and if the current location is intentional add
documentation and configure Buf to ignore this rule; in short, ensure the file
containing "package altrupets.matching;" lives under altrupets/matching in your
proto tree, or update the Buf configuration to accept the current location.
apps/agent-sidecar/graphs/nodes/fetch_candidates.py (1)

61-65: Overly broad exception catch with no error context.

Catching Exception without recording the specific error type makes diagnosis harder. Consider logging more context about the connection error.

♻️ Suggested improvement
     try:
         conn: asyncpg.Connection = await asyncpg.connect(settings.database_url)
-    except Exception:
-        logger.exception("database_connection_failed")
+    except Exception as exc:
+        logger.exception("database_connection_failed", error_type=type(exc).__name__)
         return {**state, "candidates": [], "total_evaluated": 0, "error": "database_connection_failed"}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/agent-sidecar/graphs/nodes/fetch_candidates.py` around lines 61 - 65,
The try/except around asyncpg.connect is too broad and logs no error context;
modify the block to catch Exception as e (or a more specific connection error if
available), call logger.exception or logger.error with a descriptive message
plus the exception info (e.g., logger.exception("database_connection_failed:
%s", e) or logger.exception("database_connection_failed", exc_info=e)), and
include the error string or its type in the returned state (replace "error":
"database_connection_failed" with something like "error": str(e) or a tuple of
(type(e).__name__, str(e))) so callers get actionable details; update the
asyncpg.connect line and the return payload accordingly to reference the same
variables.
apps/backend/src/matching/matching.service.ts (1)

125-133: gRPC errors are logged but not propagated to the caller.

The current pattern returns empty results for both errors and the absence of candidates, making it hard to distinguish a sidecar failure from a legitimately empty result. Consider adding a status field or optionally propagating the error.

💡 Suggestion: add an error indicator to the return value
-  ): Promise<{ candidates: RescuerCandidate[]; totalEvaluated: number; durationMs: number }> {
+  ): Promise<{ candidates: RescuerCandidate[]; totalEvaluated: number; durationMs: number; error?: string }> {
     if (!this.client) {
       this.logger.warn('gRPC client not available, returning empty results');
-      return { candidates: [], totalEvaluated: 0, durationMs: 0 };
+      return { candidates: [], totalEvaluated: 0, durationMs: 0, error: 'grpc_unavailable' };
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/matching/matching.service.ts` around lines 125 - 133, When
the gRPC FindBestRescuers callback (the anonymous function that currently calls
this.logger.error(...) and resolve({ candidates: [], totalEvaluated: 0,
durationMs: 0 })) detects an error, it neither propagates it nor flags it in
the response; change that behavior so the promise resolves an object with a
failure indicator and error details (e.g., add fields like error: true,
errorMessage: error.message, errorStack: error.stack) instead of just an empty
array, and update the type/interface definition used by the method that invokes
the gRPC call (the result resolved in resolve(...)) to include these fields so
callers can distinguish between "no candidates" and "sidecar failure".

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: ab3f1e4a-d56e-4b8c-ad9e-a2cdc8b7f208

📥 Commits

Reviewing files that changed from the base of the PR and between 53c48af and a056180.

⛔ Files ignored due to path filters (2)
  • apps/agent-sidecar/grpc/generated/__init__.py is excluded by !**/generated/**
  • apps/backend/package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (66)
  • Makefile
  • apps/agent-sidecar/Dockerfile
  • apps/agent-sidecar/agents/rescuer-matching.yaml
  • apps/agent-sidecar/config/__init__.py
  • apps/agent-sidecar/config/settings.py
  • apps/agent-sidecar/graphs/__init__.py
  • apps/agent-sidecar/graphs/nodes/__init__.py
  • apps/agent-sidecar/graphs/nodes/enrich_from_graph.py
  • apps/agent-sidecar/graphs/nodes/fetch_candidates.py
  • apps/agent-sidecar/graphs/nodes/rank_and_explain.py
  • apps/agent-sidecar/graphs/nodes/score_candidates.py
  • apps/agent-sidecar/graphs/rescuer_matching_graph.py
  • apps/agent-sidecar/graphs/state.py
  • apps/agent-sidecar/grpc/__init__.py
  • apps/agent-sidecar/grpc/handlers/__init__.py
  • apps/agent-sidecar/grpc/handlers/find_rescuers.py
  • apps/agent-sidecar/grpc/handlers/record_outcome.py
  • apps/agent-sidecar/grpc/interceptors.py
  • apps/agent-sidecar/grpc/server.py
  • apps/agent-sidecar/main.py
  • apps/agent-sidecar/proto/matching.proto
  • apps/agent-sidecar/pyproject.toml
  • apps/agent-sidecar/scripts/gen-proto.sh
  • apps/agent-sidecar/tests/__init__.py
  • apps/agent-sidecar/tests/test_grpc_server.py
  • apps/agent-sidecar/tests/test_matching_graph.py
  • apps/backend/.env.example
  • apps/backend/nest-cli.json
  • apps/backend/package.json
  • apps/backend/src/abuse-reports/abuse-reports.module.ts
  • apps/backend/src/abuse-reports/abuse-reports.service.ts
  • apps/backend/src/app.module.ts
  • apps/backend/src/captures/captures.module.ts
  • apps/backend/src/captures/captures.resolver.ts
  • apps/backend/src/matching/dto/matching-input.input.ts
  • apps/backend/src/matching/dto/matching-result.output.ts
  • apps/backend/src/matching/dto/rescuer-candidate.output.ts
  • apps/backend/src/matching/matching.module.ts
  • apps/backend/src/matching/matching.resolver.ts
  • apps/backend/src/matching/matching.service.ts
  • apps/backend/src/matching/proto/matching.proto
  • apps/backend/src/notifications/dto/notification.output.ts
  • apps/backend/src/notifications/dto/register-device-token.input.ts
  • apps/backend/src/notifications/entities/device-token.entity.ts
  • apps/backend/src/notifications/entities/notification.entity.ts
  • apps/backend/src/notifications/enums/device-platform.enum.ts
  • apps/backend/src/notifications/enums/notification-type.enum.ts
  • apps/backend/src/notifications/notifications.module.ts
  • apps/backend/src/notifications/notifications.resolver.ts
  • apps/backend/src/notifications/notifications.service.ts
  • apps/backend/src/notifications/scalars/json.scalar.ts
  • apps/backend/src/rescues/dto/complete-rescue.input.ts
  • apps/backend/src/rescues/dto/create-rescue-alert.input.ts
  • apps/backend/src/rescues/dto/update-rescue-progress.input.ts
  • apps/backend/src/rescues/entities/rescue-alert.entity.ts
  • apps/backend/src/rescues/enums/rescue-status.enum.ts
  • apps/backend/src/rescues/enums/rescue-urgency.enum.ts
  • apps/backend/src/rescues/rescue-matching.service.ts
  • apps/backend/src/rescues/rescue-state-machine.ts
  • apps/backend/src/rescues/rescues.module.ts
  • apps/backend/src/rescues/rescues.resolver.ts
  • apps/backend/src/rescues/rescues.service.ts
  • apps/backend/src/subsidies/subsidies.module.ts
  • apps/backend/src/subsidies/subsidies.service.ts
  • k8s/base/backend/configmap-sidecar.yaml
  • k8s/base/backend/deployment.yaml

Comment on lines +23 to +40
falkordb_host: str = Field(default="localhost", description="FalkorDB hostname")
falkordb_port: int = Field(default=6379, description="FalkorDB port")

# gRPC
grpc_port: int = Field(default=50051, description="gRPC server listen port")

# Observability
metrics_port: int = Field(default=9090, description="Prometheus metrics port")
health_port: int = Field(default=8080, description="Health check HTTP port")

# Matching
max_candidates: int = Field(default=5, description="Max candidates to return")

# Radius overrides per urgency (km)
radius_low: float = Field(default=15.0, description="Search radius for LOW urgency")
radius_medium: float = Field(default=25.0, description="Search radius for MEDIUM urgency")
radius_high: float = Field(default=50.0, description="Search radius for HIGH urgency")
radius_critical: float = Field(default=100.0, description="Search radius for CRITICAL urgency")

⚠️ Potential issue | 🟠 Major

Basic bounds are missing for the ports and search parameters.

With ports like 0 or 70000, max_candidates <= 0, or negative radii, the sidecar boots with an impossible configuration and the error only surfaces much later. This model should reject those values at startup.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/agent-sidecar/config/settings.py` around lines 23 - 40, Add validation
constraints to the Settings model so impossible values are rejected at startup:
for all port fields (falkordb_port, grpc_port, metrics_port, health_port) set
integer bounds (>=1 and <=65535) via Field(ge=1, le=65535) or use conint, make
max_candidates require >=1 via Field(ge=1) or PositiveInt, and make radii
(radius_low, radius_medium, radius_high, radius_critical) require >=0 via
Field(ge=0); if the Settings class already uses Pydantic validators you can
alternatively add `@validator` methods on class Settings to enforce these rules
and raise ValueError with a clear message.
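
The constraints described in the prompt are one-liners with Pydantic's `Field(ge=..., le=...)`. As a library-agnostic illustration, here is a stdlib-only sketch of the same boot-time checks; the `SidecarSettings` dataclass mirrors the fields shown above but is itself hypothetical:

```python
from dataclasses import dataclass

@dataclass
class SidecarSettings:
    falkordb_port: int = 6379
    grpc_port: int = 50051
    metrics_port: int = 9090
    health_port: int = 8080
    max_candidates: int = 5
    radius_low: float = 15.0

    def __post_init__(self) -> None:
        # Fail fast at startup: ports must fit the TCP range.
        for name in ("falkordb_port", "grpc_port", "metrics_port", "health_port"):
            port = getattr(self, name)
            if not 1 <= port <= 65535:
                raise ValueError(f"{name}={port} is outside 1-65535")
        if self.max_candidates < 1:
            raise ValueError("max_candidates must be >= 1")
        if self.radius_low < 0:
            raise ValueError("radius_low must be >= 0")
```

With Pydantic v2 the equivalent is declarative, e.g. `grpc_port: int = Field(default=50051, ge=1, le=65535)` and `max_candidates: int = Field(default=5, ge=1)`.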

Comment on lines +3 to +4
COPY pyproject.toml .
RUN pip install --no-cache-dir .

⚠️ Potential issue | 🟠 Major

The build will fail because the source code is missing before pip install.

Only pyproject.toml is copied, not the project's source code. If the package defines local modules (such as graphs/ or grpc/), the install will fail.

🐛 Proposed fix
 FROM python:3.12-slim AS builder
 WORKDIR /app
-COPY pyproject.toml .
+COPY pyproject.toml README.md* ./
+COPY . .
 RUN pip install --no-cache-dir .

Alternatively, if you want to optimize layer caching:

 FROM python:3.12-slim AS builder
 WORKDIR /app
 COPY pyproject.toml .
+RUN pip install --no-cache-dir pip-tools
+RUN pip-compile pyproject.toml -o requirements.txt || true
+COPY . .
 RUN pip install --no-cache-dir .
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/agent-sidecar/Dockerfile` around lines 3 - 4, Only pyproject.toml is
copied before RUN pip install, so the install will fail because the source code
is missing; update the Dockerfile to also copy the package's source (for
example, local module directories such as graphs/ and grpc/) before running RUN
pip install, or use a layer-cache pattern (COPY pyproject.toml and the lockfile
first, install dependencies, then COPY . . and install the package) to keep the
dependency layer cached; locate the COPY pyproject.toml and RUN pip install
instructions in the Dockerfile and make sure all required code is copied before
the install step.

Comment on lines +13 to +50
async def _query_falkordb(graph, user_id: str) -> dict:
"""Run Cypher queries against FalkorDB to fetch enrichment data for one candidate."""

# Rescue count
result = graph.query(
"MATCH (r:Rescuer {id: $id})-[:RESCUED]->(a:Animal) RETURN count(a) AS cnt",
params={"id": user_id},
)
rescue_count = result.result_set[0][0] if result.result_set else 0

# Species specializations
result = graph.query(
"MATCH (r:Rescuer {id: $id})-[:RESCUED]->(a:Animal) "
"RETURN DISTINCT a.species AS species",
params={"id": user_id},
)
specializations = [row[0] for row in result.result_set] if result.result_set else []

# Vet network size
result = graph.query(
"MATCH (r:Rescuer {id: $id})-[:KNOWS]->(v:Vet) RETURN count(v) AS cnt",
params={"id": user_id},
)
vet_network_size = result.result_set[0][0] if result.result_set else 0

# Endorsements
result = graph.query(
"MATCH (r:Rescuer {id: $id})<-[:ENDORSED]-(u) RETURN count(u) AS cnt",
params={"id": user_id},
)
endorsement_count = result.result_set[0][0] if result.result_set else 0

return {
"rescue_count": rescue_count,
"species_specializations": specializations,
"vet_network_size": vet_network_size,
"endorsement_count": endorsement_count,
}

⚠️ Potential issue | 🟠 Major

Blocking synchronous calls inside an async function.

_query_falkordb is marked async, but graph.query() is a synchronous operation that will block the event loop. This can degrade the gRPC server's concurrency.

♻️ Proposal: run the queries in an executor
+import asyncio
+from functools import partial
+
-async def _query_falkordb(graph, user_id: str) -> dict:
+def _query_falkordb_sync(graph, user_id: str) -> dict:
     """Run Cypher queries against FalkorDB to fetch enrichment data for one candidate."""
-
     # Rescue count
     result = graph.query(
         "MATCH (r:Rescuer {id: $id})-[:RESCUED]->(a:Animal) RETURN count(a) AS cnt",
         params={"id": user_id},
     )
     # ... rest of the code unchanged ...
+
+async def _query_falkordb(graph, user_id: str) -> dict:
+    loop = asyncio.get_running_loop()
+    return await loop.run_in_executor(
+        None, partial(_query_falkordb_sync, graph, user_id)
+    )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/agent-sidecar/graphs/nodes/enrich_from_graph.py` around lines 13 - 50,
_query_falkordb is async but calls the synchronous graph.query which will block
the event loop; wrap each graph.query invocation in an executor (e.g., use
asyncio.get_running_loop().run_in_executor or loop.run_in_executor) or create a
small sync helper (e.g., _sync_query) and call that via run_in_executor for the
rescue count, species specializations, vet network size and endorsement count
queries so the function remains non-blocking and then reconstruct and return the
same dict with "rescue_count", "species_specializations", "vet_network_size",
and "endorsement_count".
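
The executor pattern from the proposal can be sketched end to end. `FakeGraph` below is a hypothetical stand-in for the synchronous FalkorDB client, used only to keep the example self-contained:

```python
import asyncio
from functools import partial

class FakeGraph:
    """Stand-in for a synchronous FalkorDB graph client (illustrative only)."""
    def query(self, cypher: str, params: dict) -> int:
        return 3  # pretend every count query returns 3

def _query_sync(graph: FakeGraph, user_id: str) -> dict:
    # Synchronous Cypher round-trips: safe to run in a worker thread.
    cnt = graph.query("MATCH ... RETURN count(*)", params={"id": user_id})
    return {"rescue_count": cnt}

async def query_enrichment(graph: FakeGraph, user_id: str) -> dict:
    # Off-load the blocking client onto the default thread pool so the
    # asyncio event loop (and the gRPC server) stays responsive.
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, partial(_query_sync, graph, user_id))

result = asyncio.run(query_enrichment(FakeGraph(), "u1"))
```

The same shape applies per query in `_query_falkordb`: keep the Cypher calls in a sync helper and await them via `run_in_executor`.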

Comment on lines +1 to +2
from grpc.handlers.find_rescuers import handle_find_rescuers
from grpc.handlers.record_outcome import handle_record_outcome

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Check for coexistence of the local grpc package with uses of the external grpc SDK.
fd -i "__init__.py" apps/agent-sidecar | rg -n "apps/agent-sidecar/grpc/__init__\.py" || true

# Look for imports of the grpc SDK (grpcio) in the sidecar.
rg -n --type=py -C2 '^(import grpc\b|from grpc\b)' apps/agent-sidecar

# Look for internal absolute imports that depend on the local top-level grpc.
rg -n --type=py -C2 '^from grpc\.handlers\.' apps/agent-sidecar

Repository: altrupets/monorepo

Length of output: 3137


🏁 Script executed:

# Verify the package structure in apps/agent-sidecar
find apps/agent-sidecar -name "__init__.py" -type f | head -20

# Check whether apps/agent-sidecar has a root __init__.py
test -f apps/agent-sidecar/__init__.py && echo "Root __init__.py exists" || echo "No root __init__.py"

# Show the relevant directory structure
tree -L 3 apps/agent-sidecar/grpc/ 2>/dev/null || find apps/agent-sidecar/grpc -type f -name "*.py" | sort

Repository: altrupets/monorepo

Length of output: 582


Use relative imports to avoid collisions with the external grpc package (grpcio).

This file imports from grpc.handlers while, in the same package, grpc/server.py imports the external SDK with import grpc. That coexistence creates a risk of ambiguous resolution depending on the execution context. Within the local package, relative imports avoid the namespace collision.

Proposed adjustment
-from grpc.handlers.find_rescuers import handle_find_rescuers
-from grpc.handlers.record_outcome import handle_record_outcome
+from .find_rescuers import handle_find_rescuers
+from .record_outcome import handle_record_outcome
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/agent-sidecar/grpc/handlers/__init__.py` around lines 1 - 2, Change the
absolute imports in __init__.py to relative imports to avoid colliding with the
external grpc package; replace the current imports of handle_find_rescuers and
handle_record_outcome from "grpc.handlers" with relative imports from the local
package (i.e. "from .find_rescuers import handle_find_rescuers" and "from
.record_outcome import handle_record_outcome"), making sure the
handle_find_rescuers and handle_record_outcome symbols are still exported from
the package.

Comment on lines +11 to +16
[RescueStatus.ASSIGNED]: [
RescueStatus.ACCEPTED,
RescueStatus.REJECTED,
RescueStatus.CANCELLED,
RescueStatus.EXPIRED,
],

⚠️ Potential issue | 🟠 Major

REJECTED makes a still-recoverable alert terminal.

With ASSIGNED -> REJECTED and REJECTED: [], the first rejection leaves the alert with no path to reassign it to the next candidate. In a matching flow, that kills the rescue even while rescuers are still available.

Also applies to: 31-32

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/rescues/rescue-state-machine.ts` around lines 11 - 16, The
state machine currently treats RescueStatus.REJECTED as terminal (empty
transitions), which prevents reassignment after the first reject; update the
rescue-state-machine mapping so RescueStatus.REJECTED includes a transition back
to RescueStatus.ASSIGNED (or whatever internal state you use for trying the next
candidate) instead of an empty array; specifically modify the
RescueStatus.REJECTED entry in the same map where [RescueStatus.ASSIGNED] is
defined (also apply the same fix to the duplicate occurrence around lines 31-32)
so a rejected alert can be routed to the next rescuer.
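
A minimal sketch of the repaired transition table, in Python for brevity; only the ASSIGNED and REJECTED rows come from the review, while the remaining states and their transitions are assumptions:

```python
# Allowed next states per current state. State names mirror the
# TypeScript RescueStatus enum; rows other than ASSIGNED/REJECTED
# are illustrative assumptions.
TRANSITIONS: dict[str, list[str]] = {
    "ASSIGNED": ["ACCEPTED", "REJECTED", "CANCELLED", "EXPIRED"],
    # REJECTED is no longer terminal: route back to ASSIGNED so the
    # matcher can offer the alert to the next candidate.
    "REJECTED": ["ASSIGNED", "CANCELLED", "EXPIRED"],
    "ACCEPTED": ["COMPLETED", "CANCELLED"],
    "COMPLETED": [],
    "CANCELLED": [],
    "EXPIRED": [],
}

def can_transition(current: str, target: str) -> bool:
    return target in TRANSITIONS.get(current, [])
```

With this shape, a rejected alert re-enters the matching loop instead of dead-ending after the first candidate declines.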

Comment on lines +334 to +342
this.notificationsService
.sendToUsers({
userIds: notifyUserIds,
type: NotificationType.RESCUE_TRANSFERRED,
title: 'Rescue Completed',
body: `Rescue ${alert.trackingCode} has been completed successfully`,
referenceId: saved.id,
referenceType: 'RescueAlert',
})

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify that RESCUE_COMPLETED exists in the NotificationType enum
echo "=== Looking for the NotificationType definition ==="
fd -t f "notification-type.enum" --exec cat {}

Repository: altrupets/monorepo

Length of output: 674


🏁 Script executed:

# Look for uses of RESCUE_TRANSFERRED in rescues.service.ts
rg "RESCUE_TRANSFERRED|completeRescue|acceptTransfer" apps/backend/src/rescues/rescues.service.ts -n -B 2 -A 2

Repository: altrupets/monorepo

Length of output: 744


🏁 Script executed:

# Look for additional context on completion notifications in other files
rg "RESCUE_COMPLETED|Rescue Completed|completeRescue" apps/backend/src --type ts -l

Repository: altrupets/monorepo

Length of output: 150


Wrong notification type for a completed rescue.

The notification in completeRescue uses NotificationType.RESCUE_TRANSFERRED instead of a completion-specific type. The same type is used correctly in acceptTransfer (line 294), creating a semantic conflict: the client cannot distinguish a transferred rescue from a completed one.

NotificationType.RESCUE_COMPLETED does not exist in the enum and must be added.

🐛 Proposed fix
  1. Add to the NotificationType enum:
export enum NotificationType {
  RESCUE_ALERT = 'RESCUE_ALERT',
  RESCUE_ACCEPTED = 'RESCUE_ACCEPTED',
  RESCUE_TRANSFERRED = 'RESCUE_TRANSFERRED',
+  RESCUE_COMPLETED = 'RESCUE_COMPLETED',
  SUBSIDY_CREATED = 'SUBSIDY_CREATED',
  ...
}
  2. Change rescues.service.ts line 337:
this.notificationsService
  .sendToUsers({
    userIds: notifyUserIds,
-    type: NotificationType.RESCUE_TRANSFERRED,
+    type: NotificationType.RESCUE_COMPLETED,
     title: 'Rescue Completed',
     body: `Rescue ${alert.trackingCode} has been completed successfully`,
     referenceId: saved.id,
     referenceType: 'RescueAlert',
  })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/rescues/rescues.service.ts` around lines 334 - 342, The
notification sent from completeRescue uses NotificationType.RESCUE_TRANSFERRED
instead of a type specific to rescue completion; add RESCUE_COMPLETED to the
NotificationType enum (where NotificationType is defined) and update the
notificationsService.sendToUsers call in rescues.service.ts to use
NotificationType.RESCUE_COMPLETED instead of NotificationType.RESCUE_TRANSFERRED
(reference: the completeRescue method / the sendToUsers call).

Comment on lines 91 to +124
async updateStatus(id: string, status: SubsidyRequestStatus): Promise<SubsidyRequest> {
const request = await this.findOne(id);
request.status = status;
return this.subsidyRepository.save(request);
const saved = await this.subsidyRepository.save(request);

// Send notification to the requester based on status change
const notificationMap: Partial<Record<SubsidyRequestStatus, { type: NotificationType; title: string; body: string }>> = {
[SubsidyRequestStatus.APPROVED]: {
type: NotificationType.SUBSIDY_APPROVED,
title: 'Subsidy Request Approved',
body: 'Your subsidy request has been approved',
},
[SubsidyRequestStatus.REJECTED]: {
type: NotificationType.SUBSIDY_REJECTED,
title: 'Subsidy Request Rejected',
body: 'Your subsidy request has been rejected',
},
};

const notificationConfig = notificationMap[status];
if (notificationConfig) {
this.notificationsService
.sendToUser({
userId: saved.requesterId,
type: notificationConfig.type,
title: notificationConfig.title,
body: notificationConfig.body,
referenceId: saved.id,
referenceType: 'SubsidyRequest',
})
.catch(() => {
// Notification failures should not block the main flow
});
}

⚠️ Potential issue | 🟠 Major

Avoid re-sending notifications when the status did not change.

If updateStatus receives the current value, it still calls save() and re-fires SUBSIDY_APPROVED/SUBSIDY_REJECTED. An idempotent client retry ends up spamming the requester even though no real transition occurred.

💡 Suggested change
  async updateStatus(id: string, status: SubsidyRequestStatus): Promise<SubsidyRequest> {
    const request = await this.findOne(id);
+   if (request.status === status) {
+     return request;
+   }
    request.status = status;
    const saved = await this.subsidyRepository.save(request);
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
async updateStatus(id: string, status: SubsidyRequestStatus): Promise<SubsidyRequest> {
const request = await this.findOne(id);
request.status = status;
return this.subsidyRepository.save(request);
const saved = await this.subsidyRepository.save(request);
// Send notification to the requester based on status change
const notificationMap: Partial<Record<SubsidyRequestStatus, { type: NotificationType; title: string; body: string }>> = {
[SubsidyRequestStatus.APPROVED]: {
type: NotificationType.SUBSIDY_APPROVED,
title: 'Subsidy Request Approved',
body: 'Your subsidy request has been approved',
},
[SubsidyRequestStatus.REJECTED]: {
type: NotificationType.SUBSIDY_REJECTED,
title: 'Subsidy Request Rejected',
body: 'Your subsidy request has been rejected',
},
};
const notificationConfig = notificationMap[status];
if (notificationConfig) {
this.notificationsService
.sendToUser({
userId: saved.requesterId,
type: notificationConfig.type,
title: notificationConfig.title,
body: notificationConfig.body,
referenceId: saved.id,
referenceType: 'SubsidyRequest',
})
.catch(() => {
// Notification failures should not block the main flow
});
}
async updateStatus(id: string, status: SubsidyRequestStatus): Promise<SubsidyRequest> {
const request = await this.findOne(id);
if (request.status === status) {
return request;
}
request.status = status;
const saved = await this.subsidyRepository.save(request);
// Send notification to the requester based on status change
const notificationMap: Partial<Record<SubsidyRequestStatus, { type: NotificationType; title: string; body: string }>> = {
[SubsidyRequestStatus.APPROVED]: {
type: NotificationType.SUBSIDY_APPROVED,
title: 'Subsidy Request Approved',
body: 'Your subsidy request has been approved',
},
[SubsidyRequestStatus.REJECTED]: {
type: NotificationType.SUBSIDY_REJECTED,
title: 'Subsidy Request Rejected',
body: 'Your subsidy request has been rejected',
},
};
const notificationConfig = notificationMap[status];
if (notificationConfig) {
this.notificationsService
.sendToUser({
userId: saved.requesterId,
type: notificationConfig.type,
title: notificationConfig.title,
body: notificationConfig.body,
referenceId: saved.id,
referenceType: 'SubsidyRequest',
})
.catch(() => {
// Notification failures should not block the main flow
});
}
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/subsidies/subsidies.service.ts` around lines 91 - 124, In
updateStatus, avoid saving and sending notifications when the status hasn't
changed: after retrieving the request via findOne(id) compare the current
request.status to the incoming status and if they are equal return the request
immediately (no subsidyRepository.save, no notificationsService.sendToUser). If
you still need to persist other fields in future, only call
subsidyRepository.save and build/dispatch notifications (using notificationMap
and notificationsService.sendToUser) when the incoming status differs from the
existing request.status.

Comment on lines +7 to +14
SIDECAR_GRPC_PORT: "50051"
SIDECAR_METRICS_PORT: "9090"
SIDECAR_HEALTH_PORT: "8080"
SIDECAR_MAX_CANDIDATES: "5"
SIDECAR_RADIUS_LOW: "15"
SIDECAR_RADIUS_MEDIUM: "25"
SIDECAR_RADIUS_HIGH: "50"
SIDECAR_RADIUS_CRITICAL: "100"

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

if [ -f k8s/base/backend/deployment.yaml ]; then
  rg -n -C4 'agent-sidecar|resources:|limits:|requests:|readinessProbe:|livenessProbe:|securityContext:' k8s/base/backend/deployment.yaml
else
  echo "k8s/base/backend/deployment.yaml not found"
fi

Repository: altrupets/monorepo

Length of output: 1942


Add a securityContext to the agent-sidecar container in the Deployment.

Verification confirms the Deployment has resource limits and probes correctly configured for the sidecar. However, no securityContext is defined for runtime hardening (privilege restriction, read-only filesystem, etc.). The k8s/** coding guidelines require validating limits, probes, and the security context.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@k8s/base/backend/configmap-sidecar.yaml` around lines 7 - 14, Add a
securityContext to the container named "agent-sidecar" in the Deployment: inside
the spec.template.spec.containers entry for the "agent-sidecar" container, add a
securityContext with runAsNonRoot: true, runAsUser (a non-root UID),
allowPrivilegeEscalation: false, readOnlyRootFilesystem: true,
capabilities.drop: ["ALL"], and optionally seccompProfile.type: "RuntimeDefault"
to harden the runtime; make sure not to modify the probes or resource limits
already present, and apply the same security semantics to any equivalent
initContainer if one exists.
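
A hardened container spec along the lines the prompt describes could look like the sketch below; the UID is illustrative and should match a non-root user baked into the image:

```yaml
containers:
  - name: agent-sidecar
    securityContext:
      runAsNonRoot: true
      runAsUser: 10001          # illustrative non-root UID
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]
      seccompProfile:
        type: RuntimeDefault
```

If the process writes temp files, pair `readOnlyRootFilesystem: true` with an `emptyDir` mount for the writable path.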

Comment on lines +29 to +31
env:
- name: AGENT_GRPC_URL
value: "localhost:50051"

⚠️ Potential issue | 🟠 Major

The sidecar's configurable ports are hardcoded in the Deployment.

The process can listen on SIDECAR_GRPC_PORT, SIDECAR_METRICS_PORT, and SIDECAR_HEALTH_PORT, but here AGENT_GRPC_URL, containerPort, and the probes stay fixed at 50051/9090/8080. As soon as an overlay changes those values, the backend and Kubernetes will talk to the wrong port.

Also applies to: 54-85

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@k8s/base/backend/deployment.yaml` around lines 29 - 31, The Deployment
hardcodes the ports; change AGENT_GRPC_URL, the containerPort fields, and the
readiness/liveness probes to derive their ports from the SIDECAR_GRPC_PORT,
SIDECAR_METRICS_PORT, and SIDECAR_HEALTH_PORT environment variables instead of
fixed values (reference: env name AGENT_GRPC_URL, env names
SIDECAR_GRPC_PORT/SIDECAR_METRICS_PORT/SIDECAR_HEALTH_PORT, the ports ->
containerPort blocks, and the readinessProbe/livenessProbe blocks). Compose the
host and SIDECAR_GRPC_PORT for AGENT_GRPC_URL and replace the literal
50051/9090/8080 everywhere (including probe httpGet ports and container ports)
so any overlay that changes the ports keeps working.
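
For the URL specifically, Kubernetes expands `$(VAR)` references in `env` values when the referenced variable is declared earlier in the list, so AGENT_GRPC_URL can track the configured port without an overlay patch; `containerPort` and probe ports cannot reference env vars and still need patching. The ConfigMap name below is illustrative:

```yaml
env:
  - name: SIDECAR_GRPC_PORT
    valueFrom:
      configMapKeyRef:
        name: backend-sidecar-config   # hypothetical ConfigMap name
        key: SIDECAR_GRPC_PORT
  - name: AGENT_GRPC_URL
    value: "localhost:$(SIDECAR_GRPC_PORT)"  # expanded at container start
```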

lapc506 and others added 3 commits March 28, 2026 12:13
…ver, and proto definition

Implements ALT-38 Phases 1-4:
- Phase 1: pyproject.toml, proto definition, gen-proto script, directory structure
- Phase 2: LangGraph matching pipeline (fetch, enrich, score, rank nodes)
- Phase 3: rescuer-matching persona YAML
- Phase 4: gRPC server, interceptors, handlers, settings, Dockerfile

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Use START constant instead of deprecated set_entry_point()
- Fix fetch_candidates SQL: roles is array column on users (not join table),
  rescue_alerts uses auxiliarId/rescuerId (not rescue_assignments)
- Use COALESCE for name fallback, correct column casing (isActive, firstName)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@lapc506 lapc506 force-pushed the ALT-38-agent-ai-sidecar branch from 0ef9416 to e79987a Compare March 28, 2026 18:15

@greptile-apps greptile-apps bot left a comment


Your free trial has ended. If you'd like to continue receiving code reviews, you can add a payment method here.

@lapc506 lapc506 merged commit 9ecb4e1 into main Mar 28, 2026
3 of 10 checks passed
@lapc506 lapc506 deleted the ALT-38-agent-ai-sidecar branch March 28, 2026 18:16