diff --git a/.gitignore b/.gitignore index c2affbc78..bb944408f 100644 --- a/.gitignore +++ b/.gitignore @@ -8,6 +8,49 @@ build /moqui*.war /Save*.zip +# Sensitive files — do not commit +/cookies*.txt + +# Playwright MCP (local browser automation tool logs) +.playwright-mcp/ + +# Screenshots and images (debug/comparison captures) +*.png +*.jpg +*.jpeg +*.gif +*.bmp +*.webp +/flutter_* +!/flutter/ +/moqui_* +!/MoquiInit.properties + +# Debug/network output files +*_output*.txt +/netout*.txt +/standalone_out.txt +/dart_files.txt +/pubspec_files.txt + +# Test/debug scripts (ad-hoc, not part of build) +/test_*.py +/patch_*.py + +# Planning / analysis docs (not part of shipped code) +/*_PLAN.md +/*_PATTERNS*.md +/*_ANALYSIS.md +/*_REPORT* + +# E2E test artifacts +/e2e_test_report.json +/E2E_PLAYWRIGHT_TEST_REPORT.md +/FJSON_RESPONSE_ANALYSIS.md + +# Log files +*.log + # runtime directory (separate repository so ignore directory entirely) /runtime /execwartmp diff --git a/AGENTS.md b/AGENTS.md new file mode 100644 index 000000000..a61e8269e --- /dev/null +++ b/AGENTS.md @@ -0,0 +1,170 @@ +# AGENTS.md — Moqui Framework Project Instructions + +## Project Overview + +This is a **Moqui Framework** ERP project with a **Flutter view layer** (moqui-flutter) and a **PostgreSQL-only search backend**. Moqui is a Java-based enterprise application framework using XML-defined entities, services, screens, and forms with Groovy scripting. The business logic layer is **Mantle** (mantle-udm for data model, mantle-usl for services), the admin UI library is **SimpleScreens**, and the full ERP application is **MarbleERP**. + +The **moqui-flutter** component adds a Flutter web/mobile client that consumes a custom JSON render mode (`fjson`) produced by `ScreenWidgetRenderJson.groovy`. The **PostgreSQL search backend** replaces ElasticSearch/OpenSearch with native PostgreSQL JSONB + tsvector search. 
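The ES-query-to-SQL idea behind this replacement can be sketched in miniature. The fragment below is an illustrative Python sketch only (the real translator is `ElasticQueryTranslator.groovy` in the framework source; the function name, the SQL shape, and the `document` JSONB column are assumptions, not the actual implementation): it shows how an Elastic-style `match` clause can map onto a PostgreSQL tsvector predicate.

```python
# Illustrative sketch only: the real logic lives in
# framework/src/main/groovy/org/moqui/impl/context/ElasticQueryTranslator.groovy (Groovy).
# The 'document' JSONB column and the function name here are hypothetical.

def match_to_tsquery(field, text):
    """Translate an Elastic-style {"match": {field: text}} clause into a
    PostgreSQL tsvector WHERE fragment plus a list of bind parameters."""
    # Elastic 'match' ORs the analyzed terms together; joining words with
    # ' or ' for websearch_to_tsquery approximates that behavior.
    terms = " or ".join(text.split())
    sql = ("to_tsvector('english', document ->> %s) "
           "@@ websearch_to_tsquery('english', %s)")
    return sql, [field, terms]

sql, params = match_to_tsquery("productName", "demo kit")
print(sql)     # tsvector predicate with two bind placeholders
print(params)  # ['productName', 'demo or kit']
```

The real translator handles far more (bool queries, ranges, aggregations); this only conveys the mapping principle.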
+ +## Required Reading Before Any Code Changes + +**Always read the relevant skill files before writing or modifying any Moqui artifact:** + +| Task | Read First | +|------|-----------| +| Entity work (create, extend, view-entity, ECA) | `runtime/component/moqui-ai-skill/SKILL.md` → `runtime/component/moqui-ai-skill/references/ENTITIES.md` | +| Service work (create, XML Actions, SECA, REST) | `runtime/component/moqui-ai-skill/SKILL.md` → `runtime/component/moqui-ai-skill/references/SERVICES.md` | +| Screen/form work (XML Screens, transitions, widgets) | `runtime/component/moqui-ai-skill/SKILL.md` → `runtime/component/moqui-ai-skill/references/SCREENS.md` | +| Business logic (orders, shipments, invoices, parties) | `runtime/component/moqui-ai-skill/references/MANTLE.md` | +| MarbleERP customization (modules, extending, dashboards) | `runtime/component/moqui-ai-skill/references/MARBLE_ERP.md` | +| Spock tests (entity, service, screen, REST, flows) | `runtime/component/moqui-ai-skill/references/TESTING.md` | +| Flutter view layer (Dart, widget rendering, API) | `runtime/component/moqui-flutter/IMPLEMENTATION_STATUS.md` | +| PostgreSQL search backend | `POSTGRES_SEARCH_PLAN.md` | +| Any Moqui task (start here) | `runtime/component/moqui-ai-skill/SKILL.md` | + +## Project Structure + +``` +moqui-postgreonly/ # Framework root (moqui-framework fork) +├── runtime/ +│ ├── conf/ # Environment-specific MoquiConf.xml files +│ │ ├── MoquiDevConf.xml # Dev config (postgres, elastic-facade) +│ │ └── MoquiProductionConf.xml +│ ├── component/ # All components live here +│ │ ├── mantle-udm/ # Data model entities (DO NOT MODIFY) +│ │ ├── mantle-usl/ # Business logic services (DO NOT MODIFY) +│ │ ├── SimpleScreens/ # Admin screen library (DO NOT MODIFY) +│ │ ├── MarbleERP/ # ERP application (DO NOT MODIFY) +│ │ ├── moqui-fop/ # PDF/FOP support (DO NOT MODIFY) +│ │ └── moqui-flutter/ # ← CUSTOM: Flutter view layer component +│ │ ├── component.xml +│ │ ├── MoquiConf.xml # Screen 
mounting (fapps, fjson render mode) +│ │ ├── data/ # Seed data +│ │ ├── screen/ # fapps.xml root screen +│ │ ├── src/ # ScreenWidgetRenderJson.groovy +│ │ └── flutter/ # Dart/Flutter app +│ │ ├── lib/ # Flutter source code +│ │ ├── test/ # Flutter tests + e2e tests +│ │ └── pubspec.yaml +│ └── log/ +├── framework/ # Moqui framework source (MODIFIED for postgres search) +│ ├── entity/SearchEntities.xml +│ ├── src/main/groovy/org/moqui/impl/context/ +│ │ ├── ElasticFacadeImpl.groovy # Modified to support postgres type +│ │ ├── PostgresElasticClient.groovy # NEW: postgres search client +│ │ └── ElasticQueryTranslator.groovy # NEW: ES query → SQL translator +│ ├── src/main/groovy/org/moqui/impl/util/ +│ │ └── PostgresSearchLogger.groovy # NEW: postgres-based logging +│ └── src/test/groovy/ +│ ├── PostgresElasticClientTests.groovy +│ └── PostgresSearchTranslatorTests.groovy +├── AGENTS.md # This file (OpenAI Codex instructions) +├── CLAUDE.md # Claude Code instructions +├── GEMINI.md # Google Gemini instructions +└── POSTGRES_SEARCH_PLAN.md # PostgreSQL search design doc +``` + +## Critical Rules + +### NEVER Modify These (Use extend-entity, SECA/EECA, MoquiConf.xml instead): +- `mantle-udm/` entity definitions +- `mantle-usl/` service definitions +- `SimpleScreens/` screen files +- `MarbleERP/` screen files +- `moqui-fop/` FOP library + +### Framework Modifications (postgres search only — minimize changes): +- Changes to `framework/` are limited to the PostgreSQL search backend feature +- Any new framework changes must maintain backward compatibility with ElasticSearch + +### ALWAYS: +- Create changes in **moqui-flutter component** for all Flutter/view layer work +- Use **extend-entity** to add fields/relationships to Mantle entities +- Use **SECA rules** (`.secas.xml`) to hook into existing business logic +- Use **EECA rules** (`.eecas.xml`) for entity-level triggers +- Use **MoquiConf.xml** in your component to mount screens under MarbleERP or webroot +- Use 
**service-call** for all business logic — never put logic directly in screens
+- Use **`component://`** URLs for all screen/service/entity references
+- Follow **Mantle naming**: `verb#Noun` for services, `package.Entity` for entities
+
+## Moqui-Specific Conventions
+
+### Entity Definitions
+- File location: `YourComponent/entity/*.xml`
+- Extend existing: `<extend-entity entity-name="...">`
+- New entities: Use your own package namespace
+- Field types: `id`, `id-long`, `text-short`, `text-medium`, `text-long`, `text-very-long`, `number-integer`, `number-decimal`, `number-float`, `currency-amount`, `currency-precise`, `date`, `time`, `date-time`
+- Primary keys: Always define at least one `<field>` with `is-pk="true"`
+
+### Service Definitions
+- File location: `YourComponent/service/yourpackage/YourServices.xml`
+- Naming: `yourpackage.YourServices.verb#Noun` (e.g., `mycomp.OrderServices.validate#CustomOrder`)
+- Entity-auto CRUD: `<service verb="create" noun="MyEntity" type="entity-auto">`
+- Use `<auto-parameters>` to inherit entity fields
+- Transaction default: `use-or-begin` (joins existing or starts new)
+
+### Screen Development
+- File location: `YourComponent/screen/...`
+- Subscreens: Directory-based (create a folder named the same as the parent screen filename)
+- Transitions: For form submission/data processing, always redirect after
+- Form patterns: `form-single` for edit, `form-list` for search/list
+- Dynamic options: `<entity-options>` inside a `<drop-down>` field widget
+
+### Data Files
+- File location: `YourComponent/data/*.xml`
+- Root element: `<entity-facade-xml type="seed">` (or `demo`, `install`)
+- Seed = required config data, Demo = sample data, Install = one-time setup
+
+### Flutter/Dart Conventions (moqui-flutter)
+- The JSON render mode is `fjson` — requested via `?renderMode=fjson`
+- `ScreenWidgetRenderJson.groovy` converts XML screen widgets to JSON
+- `widget_factory.dart` converts JSON to Flutter widgets
+- `moqui_api_client.dart` handles REST API calls to Moqui
+- Tests: Dart unit tests in `flutter/test/`, Python e2e tests in `flutter/test/e2e/`
+
+## Build & Run Commands
+
+```bash
+# Initial setup
+git clone moqui-postgreonly
+cd moqui-postgreonly
+./gradlew load +java -jar moqui-plus-runtime.war + +# Development cycle (clean rebuild) +./gradlew cleanAll +./gradlew load +java -jar moqui-plus-runtime.war + +# Run with specific conf +java -jar moqui-plus-runtime.war conf=conf/MoquiDevConf.xml + +# Flutter development (separate terminal) +cd runtime/component/moqui-flutter/flutter +flutter run -d web-server --web-port=8181 + +# Run Flutter tests +cd runtime/component/moqui-flutter/flutter +flutter test +``` + +## Access URLs + +| URL | Description | +|-----|-------------| +| `http://localhost:8080/fapps/marble` | MarbleERP (Flutter UI) | +| `http://localhost:8080/fapps/tools` | Developer tools (Flutter UI) | +| `http://localhost:8080/qapps/marble` | MarbleERP (Quasar UI — default) | +| `http://localhost:8080/qapps/system` | System admin | +| `http://localhost:8080/qapps/tools` | Developer tools | +| `http://localhost:8080/rest/s1/mantle/` | REST API (Mantle services) | +| `http://localhost:8181` | Flutter dev server (when running separately) | + +## Debugging Tips +- **Logs**: `runtime/log/moqui.log` +- **Screen path issues**: Check `qapps/tools` → Screen Info tool +- **Service errors**: Check `ec.message.errors` and `ec.message.validationErrors` +- **Entity not found**: Verify component.xml dependencies load order +- **Groovy shell**: Available at `qapps/tools` → Groovy Shell (test expressions live) +- **Flutter JSON output**: Request any screen with `?renderMode=fjson` to see raw JSON +- **Flutter cache issues**: JSON responses include no-cache headers; client adds `_t` timestamp param diff --git a/CLAUDE.md b/CLAUDE.md new file mode 100644 index 000000000..882fb27a9 --- /dev/null +++ b/CLAUDE.md @@ -0,0 +1,170 @@ +# CLAUDE.md — Moqui Framework Project Instructions + +## Project Overview + +This is a **Moqui Framework** ERP project with a **Flutter view layer** (moqui-flutter) and a **PostgreSQL-only search backend**. 
Moqui is a Java-based enterprise application framework using XML-defined entities, services, screens, and forms with Groovy scripting. The business logic layer is **Mantle** (mantle-udm for data model, mantle-usl for services), the admin UI library is **SimpleScreens**, and the full ERP application is **MarbleERP**. + +The **moqui-flutter** component adds a Flutter web/mobile client that consumes a custom JSON render mode (`fjson`) produced by `ScreenWidgetRenderJson.groovy`. The **PostgreSQL search backend** replaces ElasticSearch/OpenSearch with native PostgreSQL JSONB + tsvector search. + +## Required Reading Before Any Code Changes + +**Always read the relevant skill files before writing or modifying any Moqui artifact:** + +| Task | Read First | +|------|-----------| +| Entity work (create, extend, view-entity, ECA) | `runtime/component/moqui-ai-skill/SKILL.md` → `runtime/component/moqui-ai-skill/references/ENTITIES.md` | +| Service work (create, XML Actions, SECA, REST) | `runtime/component/moqui-ai-skill/SKILL.md` → `runtime/component/moqui-ai-skill/references/SERVICES.md` | +| Screen/form work (XML Screens, transitions, widgets) | `runtime/component/moqui-ai-skill/SKILL.md` → `runtime/component/moqui-ai-skill/references/SCREENS.md` | +| Business logic (orders, shipments, invoices, parties) | `runtime/component/moqui-ai-skill/references/MANTLE.md` | +| MarbleERP customization (modules, extending, dashboards) | `runtime/component/moqui-ai-skill/references/MARBLE_ERP.md` | +| Spock tests (entity, service, screen, REST, flows) | `runtime/component/moqui-ai-skill/references/TESTING.md` | +| Flutter view layer (Dart, widget rendering, API) | `runtime/component/moqui-flutter/IMPLEMENTATION_STATUS.md` | +| PostgreSQL search backend | `POSTGRES_SEARCH_PLAN.md` | +| Any Moqui task (start here) | `runtime/component/moqui-ai-skill/SKILL.md` | + +## Project Structure + +``` +moqui-postgreonly/ # Framework root (moqui-framework fork) +├── runtime/ +│ ├── conf/ # 
Environment-specific MoquiConf.xml files +│ │ ├── MoquiDevConf.xml # Dev config (postgres, elastic-facade) +│ │ └── MoquiProductionConf.xml +│ ├── component/ # All components live here +│ │ ├── mantle-udm/ # Data model entities (DO NOT MODIFY) +│ │ ├── mantle-usl/ # Business logic services (DO NOT MODIFY) +│ │ ├── SimpleScreens/ # Admin screen library (DO NOT MODIFY) +│ │ ├── MarbleERP/ # ERP application (DO NOT MODIFY) +│ │ ├── moqui-fop/ # PDF/FOP support (DO NOT MODIFY) +│ │ └── moqui-flutter/ # ← CUSTOM: Flutter view layer component +│ │ ├── component.xml +│ │ ├── MoquiConf.xml # Screen mounting (fapps, fjson render mode) +│ │ ├── data/ # Seed data +│ │ ├── screen/ # fapps.xml root screen +│ │ ├── src/ # ScreenWidgetRenderJson.groovy +│ │ └── flutter/ # Dart/Flutter app +│ │ ├── lib/ # Flutter source code +│ │ ├── test/ # Flutter tests + e2e tests +│ │ └── pubspec.yaml +│ └── log/ +├── framework/ # Moqui framework source (MODIFIED for postgres search) +│ ├── entity/SearchEntities.xml +│ ├── src/main/groovy/org/moqui/impl/context/ +│ │ ├── ElasticFacadeImpl.groovy # Modified to support postgres type +│ │ ├── PostgresElasticClient.groovy # NEW: postgres search client +│ │ └── ElasticQueryTranslator.groovy # NEW: ES query → SQL translator +│ ├── src/main/groovy/org/moqui/impl/util/ +│ │ └── PostgresSearchLogger.groovy # NEW: postgres-based logging +│ └── src/test/groovy/ +│ ├── PostgresElasticClientTests.groovy +│ └── PostgresSearchTranslatorTests.groovy +├── CLAUDE.md # This file +├── AGENTS.md # OpenAI Codex instructions +├── GEMINI.md # Google Gemini instructions +└── POSTGRES_SEARCH_PLAN.md # PostgreSQL search design doc +``` + +## Critical Rules + +### NEVER Modify These (Use extend-entity, SECA/EECA, MoquiConf.xml instead): +- `mantle-udm/` entity definitions +- `mantle-usl/` service definitions +- `SimpleScreens/` screen files +- `MarbleERP/` screen files +- `moqui-fop/` FOP library + +### Framework Modifications (postgres search only — minimize changes): 
+- Changes to `framework/` are limited to the PostgreSQL search backend feature
+- Any new framework changes must maintain backward compatibility with ElasticSearch
+
+### ALWAYS:
+- Create changes in **moqui-flutter component** for all Flutter/view layer work
+- Use **extend-entity** to add fields/relationships to Mantle entities
+- Use **SECA rules** (`.secas.xml`) to hook into existing business logic
+- Use **EECA rules** (`.eecas.xml`) for entity-level triggers
+- Use **MoquiConf.xml** in your component to mount screens under MarbleERP or webroot
+- Use **service-call** for all business logic — never put logic directly in screens
+- Use **`component://`** URLs for all screen/service/entity references
+- Follow **Mantle naming**: `verb#Noun` for services, `package.Entity` for entities
+
+## Moqui-Specific Conventions
+
+### Entity Definitions
+- File location: `YourComponent/entity/*.xml`
+- Extend existing: `<extend-entity entity-name="...">`
+- New entities: Use your own package namespace
+- Field types: `id`, `id-long`, `text-short`, `text-medium`, `text-long`, `text-very-long`, `number-integer`, `number-decimal`, `number-float`, `currency-amount`, `currency-precise`, `date`, `time`, `date-time`
+- Primary keys: Always define at least one `<field>` with `is-pk="true"`
+
+### Service Definitions
+- File location: `YourComponent/service/yourpackage/YourServices.xml`
+- Naming: `yourpackage.YourServices.verb#Noun` (e.g., `mycomp.OrderServices.validate#CustomOrder`)
+- Entity-auto CRUD: `<service verb="create" noun="MyEntity" type="entity-auto">`
+- Use `<auto-parameters>` to inherit entity fields
+- Transaction default: `use-or-begin` (joins existing or starts new)
+
+### Screen Development
+- File location: `YourComponent/screen/...`
+- Subscreens: Directory-based (create a folder named the same as the parent screen filename)
+- Transitions: For form submission/data processing, always redirect after
+- Form patterns: `form-single` for edit, `form-list` for search/list
+- Dynamic options: `<entity-options>` inside a `<drop-down>` field widget
+
+### Data Files
+- File location: `YourComponent/data/*.xml`
+- Root element: `<entity-facade-xml type="seed">` (or `demo`, `install`)
+- Seed = required
config data, Demo = sample data, Install = one-time setup + +### Flutter/Dart Conventions (moqui-flutter) +- The JSON render mode is `fjson` — requested via `?renderMode=fjson` +- `ScreenWidgetRenderJson.groovy` converts XML screen widgets to JSON +- `widget_factory.dart` converts JSON to Flutter widgets +- `moqui_api_client.dart` handles REST API calls to Moqui +- Tests: Dart unit tests in `flutter/test/`, Python e2e tests in `flutter/test/e2e/` + +## Build & Run Commands + +```bash +# Initial setup +git clone moqui-postgreonly +cd moqui-postgreonly +./gradlew load +java -jar moqui-plus-runtime.war + +# Development cycle (clean rebuild) +./gradlew cleanAll +./gradlew load +java -jar moqui-plus-runtime.war + +# Run with specific conf +java -jar moqui-plus-runtime.war conf=conf/MoquiDevConf.xml + +# Flutter development (separate terminal) +cd runtime/component/moqui-flutter/flutter +flutter run -d web-server --web-port=8181 + +# Run Flutter tests +cd runtime/component/moqui-flutter/flutter +flutter test +``` + +## Access URLs + +| URL | Description | +|-----|-------------| +| `http://localhost:8080/fapps/marble` | MarbleERP (Flutter UI) | +| `http://localhost:8080/fapps/tools` | Developer tools (Flutter UI) | +| `http://localhost:8080/qapps/marble` | MarbleERP (Quasar UI — default) | +| `http://localhost:8080/qapps/system` | System admin | +| `http://localhost:8080/qapps/tools` | Developer tools | +| `http://localhost:8080/rest/s1/mantle/` | REST API (Mantle services) | +| `http://localhost:8181` | Flutter dev server (when running separately) | + +## Debugging Tips +- **Logs**: `runtime/log/moqui.log` +- **Screen path issues**: Check `qapps/tools` → Screen Info tool +- **Service errors**: Check `ec.message.errors` and `ec.message.validationErrors` +- **Entity not found**: Verify component.xml dependencies load order +- **Groovy shell**: Available at `qapps/tools` → Groovy Shell (test expressions live) +- **Flutter JSON output**: Request any screen with 
`?renderMode=fjson` to see raw JSON +- **Flutter cache issues**: JSON responses include no-cache headers; client adds `_t` timestamp param diff --git a/GEMINI.md b/GEMINI.md new file mode 100644 index 000000000..5423db3fe --- /dev/null +++ b/GEMINI.md @@ -0,0 +1,170 @@ +# GEMINI.md — Moqui Framework Project Instructions + +## Project Overview + +This is a **Moqui Framework** ERP project with a **Flutter view layer** (moqui-flutter) and a **PostgreSQL-only search backend**. Moqui is a Java-based enterprise application framework using XML-defined entities, services, screens, and forms with Groovy scripting. The business logic layer is **Mantle** (mantle-udm for data model, mantle-usl for services), the admin UI library is **SimpleScreens**, and the full ERP application is **MarbleERP**. + +The **moqui-flutter** component adds a Flutter web/mobile client that consumes a custom JSON render mode (`fjson`) produced by `ScreenWidgetRenderJson.groovy`. The **PostgreSQL search backend** replaces ElasticSearch/OpenSearch with native PostgreSQL JSONB + tsvector search. 
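The `_t` cache-busting behavior noted under Debugging Tips can be sketched with a small URL-building helper. This is a hypothetical Python fragment for illustration only (the real logic is in `moqui_api_client.dart`; the `fjson_url` name is an assumption, not the actual client API):

```python
import time
from urllib.parse import urlencode, urlparse, parse_qsl, urlunparse

def fjson_url(screen_url, now=None):
    """Build a cache-busted fjson request URL: adds renderMode=fjson plus a
    _t millisecond-timestamp param. Hypothetical helper, not the Dart client."""
    parts = urlparse(screen_url)
    query = dict(parse_qsl(parts.query))
    query["renderMode"] = "fjson"
    # Milliseconds since epoch defeats intermediate caches on repeat requests
    query["_t"] = str(int((now if now is not None else time.time()) * 1000))
    return urlunparse(parts._replace(query=urlencode(query)))

print(fjson_url("http://localhost:8080/fapps/marble", now=1700000000.0))
# -> http://localhost:8080/fapps/marble?renderMode=fjson&_t=1700000000000
```

Any screen URL can be passed through a helper like this to inspect the raw JSON the server produces.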
+ +## Required Reading Before Any Code Changes + +**Always read the relevant skill files before writing or modifying any Moqui artifact:** + +| Task | Read First | +|------|-----------| +| Entity work (create, extend, view-entity, ECA) | `runtime/component/moqui-ai-skill/SKILL.md` → `runtime/component/moqui-ai-skill/references/ENTITIES.md` | +| Service work (create, XML Actions, SECA, REST) | `runtime/component/moqui-ai-skill/SKILL.md` → `runtime/component/moqui-ai-skill/references/SERVICES.md` | +| Screen/form work (XML Screens, transitions, widgets) | `runtime/component/moqui-ai-skill/SKILL.md` → `runtime/component/moqui-ai-skill/references/SCREENS.md` | +| Business logic (orders, shipments, invoices, parties) | `runtime/component/moqui-ai-skill/references/MANTLE.md` | +| MarbleERP customization (modules, extending, dashboards) | `runtime/component/moqui-ai-skill/references/MARBLE_ERP.md` | +| Spock tests (entity, service, screen, REST, flows) | `runtime/component/moqui-ai-skill/references/TESTING.md` | +| Flutter view layer (Dart, widget rendering, API) | `runtime/component/moqui-flutter/IMPLEMENTATION_STATUS.md` | +| PostgreSQL search backend | `POSTGRES_SEARCH_PLAN.md` | +| Any Moqui task (start here) | `runtime/component/moqui-ai-skill/SKILL.md` | + +## Project Structure + +``` +moqui-postgreonly/ # Framework root (moqui-framework fork) +├── runtime/ +│ ├── conf/ # Environment-specific MoquiConf.xml files +│ │ ├── MoquiDevConf.xml # Dev config (postgres, elastic-facade) +│ │ └── MoquiProductionConf.xml +│ ├── component/ # All components live here +│ │ ├── mantle-udm/ # Data model entities (DO NOT MODIFY) +│ │ ├── mantle-usl/ # Business logic services (DO NOT MODIFY) +│ │ ├── SimpleScreens/ # Admin screen library (DO NOT MODIFY) +│ │ ├── MarbleERP/ # ERP application (DO NOT MODIFY) +│ │ ├── moqui-fop/ # PDF/FOP support (DO NOT MODIFY) +│ │ └── moqui-flutter/ # ← CUSTOM: Flutter view layer component +│ │ ├── component.xml +│ │ ├── MoquiConf.xml # Screen 
mounting (fapps, fjson render mode) +│ │ ├── data/ # Seed data +│ │ ├── screen/ # fapps.xml root screen +│ │ ├── src/ # ScreenWidgetRenderJson.groovy +│ │ └── flutter/ # Dart/Flutter app +│ │ ├── lib/ # Flutter source code +│ │ ├── test/ # Flutter tests + e2e tests +│ │ └── pubspec.yaml +│ └── log/ +├── framework/ # Moqui framework source (MODIFIED for postgres search) +│ ├── entity/SearchEntities.xml +│ ├── src/main/groovy/org/moqui/impl/context/ +│ │ ├── ElasticFacadeImpl.groovy # Modified to support postgres type +│ │ ├── PostgresElasticClient.groovy # NEW: postgres search client +│ │ └── ElasticQueryTranslator.groovy # NEW: ES query → SQL translator +│ ├── src/main/groovy/org/moqui/impl/util/ +│ │ └── PostgresSearchLogger.groovy # NEW: postgres-based logging +│ └── src/test/groovy/ +│ ├── PostgresElasticClientTests.groovy +│ └── PostgresSearchTranslatorTests.groovy +├── CLAUDE.md # Claude Code instructions +├── AGENTS.md # OpenAI Codex instructions +├── GEMINI.md # This file (Google Gemini instructions) +└── POSTGRES_SEARCH_PLAN.md # PostgreSQL search design doc +``` + +## Critical Rules + +### NEVER Modify These (Use extend-entity, SECA/EECA, MoquiConf.xml instead): +- `mantle-udm/` entity definitions +- `mantle-usl/` service definitions +- `SimpleScreens/` screen files +- `MarbleERP/` screen files +- `moqui-fop/` FOP library + +### Framework Modifications (postgres search only — minimize changes): +- Changes to `framework/` are limited to the PostgreSQL search backend feature +- Any new framework changes must maintain backward compatibility with ElasticSearch + +### ALWAYS: +- Create changes in **moqui-flutter component** for all Flutter/view layer work +- Use **extend-entity** to add fields/relationships to Mantle entities +- Use **SECA rules** (`.secas.xml`) to hook into existing business logic +- Use **EECA rules** (`.eecas.xml`) for entity-level triggers +- Use **MoquiConf.xml** in your component to mount screens under MarbleERP or webroot +- Use 
**service-call** for all business logic — never put logic directly in screens
+- Use **`component://`** URLs for all screen/service/entity references
+- Follow **Mantle naming**: `verb#Noun` for services, `package.Entity` for entities
+
+## Moqui-Specific Conventions
+
+### Entity Definitions
+- File location: `YourComponent/entity/*.xml`
+- Extend existing: `<extend-entity entity-name="...">`
+- New entities: Use your own package namespace
+- Field types: `id`, `id-long`, `text-short`, `text-medium`, `text-long`, `text-very-long`, `number-integer`, `number-decimal`, `number-float`, `currency-amount`, `currency-precise`, `date`, `time`, `date-time`
+- Primary keys: Always define at least one `<field>` with `is-pk="true"`
+
+### Service Definitions
+- File location: `YourComponent/service/yourpackage/YourServices.xml`
+- Naming: `yourpackage.YourServices.verb#Noun` (e.g., `mycomp.OrderServices.validate#CustomOrder`)
+- Entity-auto CRUD: `<service verb="create" noun="MyEntity" type="entity-auto">`
+- Use `<auto-parameters>` to inherit entity fields
+- Transaction default: `use-or-begin` (joins existing or starts new)
+
+### Screen Development
+- File location: `YourComponent/screen/...`
+- Subscreens: Directory-based (create a folder named the same as the parent screen filename)
+- Transitions: For form submission/data processing, always redirect after
+- Form patterns: `form-single` for edit, `form-list` for search/list
+- Dynamic options: `<entity-options>` inside a `<drop-down>` field widget
+
+### Data Files
+- File location: `YourComponent/data/*.xml`
+- Root element: `<entity-facade-xml type="seed">` (or `demo`, `install`)
+- Seed = required config data, Demo = sample data, Install = one-time setup
+
+### Flutter/Dart Conventions (moqui-flutter)
+- The JSON render mode is `fjson` — requested via `?renderMode=fjson`
+- `ScreenWidgetRenderJson.groovy` converts XML screen widgets to JSON
+- `widget_factory.dart` converts JSON to Flutter widgets
+- `moqui_api_client.dart` handles REST API calls to Moqui
+- Tests: Dart unit tests in `flutter/test/`, Python e2e tests in `flutter/test/e2e/`
+
+## Build & Run Commands
+
+```bash
+# Initial setup
+git clone moqui-postgreonly
+cd moqui-postgreonly
+./gradlew load +java -jar moqui-plus-runtime.war + +# Development cycle (clean rebuild) +./gradlew cleanAll +./gradlew load +java -jar moqui-plus-runtime.war + +# Run with specific conf +java -jar moqui-plus-runtime.war conf=conf/MoquiDevConf.xml + +# Flutter development (separate terminal) +cd runtime/component/moqui-flutter/flutter +flutter run -d web-server --web-port=8181 + +# Run Flutter tests +cd runtime/component/moqui-flutter/flutter +flutter test +``` + +## Access URLs + +| URL | Description | +|-----|-------------| +| `http://localhost:8080/fapps/marble` | MarbleERP (Flutter UI) | +| `http://localhost:8080/fapps/tools` | Developer tools (Flutter UI) | +| `http://localhost:8080/qapps/marble` | MarbleERP (Quasar UI — default) | +| `http://localhost:8080/qapps/system` | System admin | +| `http://localhost:8080/qapps/tools` | Developer tools | +| `http://localhost:8080/rest/s1/mantle/` | REST API (Mantle services) | +| `http://localhost:8181` | Flutter dev server (when running separately) | + +## Debugging Tips +- **Logs**: `runtime/log/moqui.log` +- **Screen path issues**: Check `qapps/tools` → Screen Info tool +- **Service errors**: Check `ec.message.errors` and `ec.message.validationErrors` +- **Entity not found**: Verify component.xml dependencies load order +- **Groovy shell**: Available at `qapps/tools` → Groovy Shell (test expressions live) +- **Flutter JSON output**: Request any screen with `?renderMode=fjson` to see raw JSON +- **Flutter cache issues**: JSON responses include no-cache headers; client adds `_t` timestamp param diff --git a/docker/moqui-postgres-only-compose.yml b/docker/moqui-postgres-only-compose.yml new file mode 100644 index 000000000..3a8633143 --- /dev/null +++ b/docker/moqui-postgres-only-compose.yml @@ -0,0 +1,88 @@ +# A Docker Compose application with Moqui and PostgreSQL ONLY — no OpenSearch/ElasticSearch. 
+# All document storage, full-text search, and logging are handled by PostgreSQL using JSONB
+# and tsvector, via the PostgresElasticClient (type="postgres") backend.
+
+# Run with something like this for detached mode:
+# $ docker compose -f moqui-postgres-only-compose.yml -p moqui up -d
+
+# Or via the compose-run.sh helper:
+# $ ./compose-run.sh moqui-postgres-only-compose.yml
+
+# To configure Moqui, add the following to your runtime/conf/MoquiConf.xml
+# (the url attribute is optional for type="postgres"; it names the datasource group):
+# <elastic-facade>
+#     <cluster name="default" type="postgres"/>
+# </elastic-facade>
+
+version: "2"
+services:
+  nginx-proxy:
+    image: jwilder/nginx-proxy
+    container_name: nginx-proxy
+    restart: always
+    ports:
+      - 80:80
+      - 443:443
+    volumes:
+      - /var/run/docker.sock:/tmp/docker.sock:ro
+      - ./certs:/etc/nginx/certs
+      - ./nginx/my_proxy.conf:/etc/nginx/conf.d/my_proxy.conf
+    environment:
+      - DEFAULT_HOST=moqui.local
+      - SSL_POLICY=AWS-TLS-1-2-2017-01
+
+  moqui-server:
+    image: moqui
+    container_name: moqui-server
+    command: conf=conf/MoquiProductionConf.xml
+    restart: always
+    links:
+      - moqui-database
+    volumes:
+      - ./runtime/conf:/opt/moqui/runtime/conf
+      - ./runtime/lib:/opt/moqui/runtime/lib
+      - ./runtime/classes:/opt/moqui/runtime/classes
+      - ./runtime/ecomponent:/opt/moqui/runtime/ecomponent
+      - ./runtime/log:/opt/moqui/runtime/log
+      - ./runtime/txlog:/opt/moqui/runtime/txlog
+      - ./runtime/sessions:/opt/moqui/runtime/sessions
+    environment:
+      - "JAVA_TOOL_OPTIONS=-Xms1024m -Xmx1024m"
+      - instance_purpose=production
+      - entity_ds_db_conf=postgres
+      - entity_ds_host=moqui-database
+      - entity_ds_port=5432
+      - entity_ds_database=moqui
+      - entity_ds_schema=public
+      - entity_ds_user=${MOQUI_DS_USER:-moqui}
+      - entity_ds_password=${MOQUI_DS_PASSWORD:?Set MOQUI_DS_PASSWORD}
+      - entity_ds_crypt_pass=${MOQUI_DS_CRYPT_PASS:?Set MOQUI_DS_CRYPT_PASS}
+      # ---- PostgreSQL-backed search (no OpenSearch required) ----
+      # Override elastic-facade in your MoquiConf.xml:
+      #   <elastic-facade>
+      #       <cluster name="default" type="postgres"/>
+      #   </elastic-facade>
+      # Or set an empty elasticsearch_url so the default cluster is skipped:
+      - elasticsearch_url=
+      # VIRTUAL_HOST
for nginx-proxy + - VIRTUAL_HOST=moqui.local + - webapp_http_port=80 + - webapp_https_port=443 + - webapp_https_enabled=true + - webapp_client_ip_header=X-Real-IP + - default_locale=en_US + - default_time_zone=US/Pacific + + moqui-database: + image: postgres:14.5 + container_name: moqui-database + restart: always + ports: + - 127.0.0.1:5432:5432 + volumes: + - ./db/postgres/data:/var/lib/postgresql/data + environment: + - POSTGRES_DB=${MOQUI_DS_DB:-moqui} + - POSTGRES_DB_SCHEMA=public + - POSTGRES_USER=${MOQUI_DS_USER:-moqui} + - POSTGRES_PASSWORD=${MOQUI_DS_PASSWORD:?Set MOQUI_DS_PASSWORD} diff --git a/framework/build.gradle b/framework/build.gradle index c19e9be57..a2b970809 100644 --- a/framework/build.gradle +++ b/framework/build.gradle @@ -196,6 +196,8 @@ dependencies { testImplementation 'org.junit.platform:junit-platform-suite:1.12.1' // junit-jupiter-api for using JUnit directly, not generally needed for Spock based tests testImplementation 'org.junit.jupiter:junit-jupiter-api:5.12.1' + // junit-jupiter-engine required to execute @Test-annotated methods via JUnit Platform + testRuntimeOnly 'org.junit.jupiter:junit-jupiter-engine:5.12.1' // Spock Framework testImplementation platform("org.spockframework:spock-bom:2.1-groovy-3.0") // Apache 2.0 testImplementation 'org.spockframework:spock-core:2.1-groovy-3.0' // Apache 2.0 @@ -234,6 +236,7 @@ test { dependsOn cleanTest include '**/*MoquiSuite.class' + include '**/*PostgresSearchSuite.class' systemProperty 'moqui.runtime', '../runtime' systemProperty 'moqui.conf', 'conf/MoquiDevConf.xml' diff --git a/framework/entity/SearchEntities.xml b/framework/entity/SearchEntities.xml new file mode 100644 index 000000000..0ed222b5c --- /dev/null +++ b/framework/entity/SearchEntities.xml @@ -0,0 +1,115 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git 
a/framework/src/main/groovy/org/moqui/impl/context/ElasticFacadeImpl.groovy b/framework/src/main/groovy/org/moqui/impl/context/ElasticFacadeImpl.groovy index 9b2b77107..3c068dd3d 100644 --- a/framework/src/main/groovy/org/moqui/impl/context/ElasticFacadeImpl.groovy +++ b/framework/src/main/groovy/org/moqui/impl/context/ElasticFacadeImpl.groovy @@ -32,6 +32,7 @@ import org.moqui.impl.entity.EntityDefinition import org.moqui.impl.entity.EntityJavaUtil import org.moqui.impl.entity.FieldInfo import org.moqui.impl.util.ElasticSearchLogger +import org.moqui.impl.util.PostgresSearchLogger import org.moqui.util.LiteStringMap import org.moqui.util.MNode import org.moqui.util.RestClient @@ -69,8 +70,9 @@ class ElasticFacadeImpl implements ElasticFacade { } public final ExecutionContextFactoryImpl ecfi - private final Map clientByClusterName = new LinkedHashMap<>() + private final Map clientByClusterName = new LinkedHashMap<>() private ElasticSearchLogger esLogger = null + private PostgresSearchLogger pgLogger = null ElasticFacadeImpl(ExecutionContextFactoryImpl ecfi) { this.ecfi = ecfi @@ -90,14 +92,22 @@ class ElasticFacadeImpl implements ElasticFacade { logger.warn("ElasticFacade Client for cluster ${clusterName} already initialized, skipping") continue } - if (!clusterUrl) { - logger.warn("ElasticFacade Client for cluster ${clusterName} has no url, skipping") - continue - } + String clusterType = clusterNode.attribute("type") ?: "elastic" try { - ElasticClientImpl elci = new ElasticClientImpl(clusterNode, ecfi) - clientByClusterName.put(clusterName, elci) + if ("postgres".equals(clusterType)) { + // PostgreSQL backend — url attribute is datasource group name (optional, default "transactional") + PostgresElasticClient pgc = new PostgresElasticClient(clusterNode, ecfi) + clientByClusterName.put(clusterName, pgc) + logger.info("Initialized PostgresElasticClient for cluster ${clusterName}") + } else { + if (!clusterUrl) { + logger.warn("ElasticFacade Client for cluster 
${clusterName} has no url, skipping") + continue + } + ElasticClientImpl elci = new ElasticClientImpl(clusterNode, ecfi) + clientByClusterName.put(clusterName, elci) + } } catch (Throwable t) { Throwable cause = t.getCause() if (cause != null && cause.message.contains("refused")) { @@ -108,22 +118,29 @@ class ElasticFacadeImpl implements ElasticFacade { } } - // init ElasticSearchLogger - if (esLogger == null || !esLogger.isInitialized()) { - ElasticClientImpl loggerEci = clientByClusterName.get("logger") ?: clientByClusterName.get("default") - if (loggerEci != null) { - logger.info("Initializing ElasticSearchLogger with cluster ${loggerEci.getClusterName()}") - esLogger = new ElasticSearchLogger(loggerEci, ecfi) + // init ElasticSearchLogger / PostgresSearchLogger depending on backend type + ElasticClient loggerClient = clientByClusterName.get("logger") ?: clientByClusterName.get("default") + if (loggerClient instanceof PostgresElasticClient) { + if (pgLogger == null || !pgLogger.isInitialized()) { + logger.info("Initializing PostgresSearchLogger with cluster ${loggerClient.getClusterName()}") + pgLogger = new PostgresSearchLogger((PostgresElasticClient) loggerClient, ecfi) + } else { + logger.warn("PostgresSearchLogger in place and initialized, skipping") + } + } else if (loggerClient instanceof ElasticClientImpl) { + if (esLogger == null || !esLogger.isInitialized()) { + logger.info("Initializing ElasticSearchLogger with cluster ${loggerClient.getClusterName()}") + esLogger = new ElasticSearchLogger((ElasticClientImpl) loggerClient, ecfi) } else { - logger.warn("No Elastic Client found with name 'logger' or 'default', not initializing ElasticSearchLogger") + logger.warn("ElasticSearchLogger in place and initialized, not initializing ElasticSearchLogger") } } else { - logger.warn("ElasticSearchLogger in place and initialized, not initializing ElasticSearchLogger") + logger.warn("No Elastic/Postgres Client found with name 'logger' or 'default', not initializing 
search logger") } // Index DataFeed with indexOnStartEmpty=Y try { - ElasticClientImpl defaultEci = clientByClusterName.get("default") + ElasticClient defaultEci = clientByClusterName.get("default") if (defaultEci != null) { EntityList dataFeedList = ecfi.entityFacade.find("moqui.entity.feed.DataFeed") .condition("indexOnStartEmpty", "Y").disableAuthz().list() @@ -151,7 +168,11 @@ class ElasticFacadeImpl implements ElasticFacade { void destroy() { if (esLogger != null) esLogger.destroy() - for (ElasticClientImpl eci in clientByClusterName.values()) eci.destroy() + if (pgLogger != null) pgLogger.destroy() + for (ElasticClient eci in clientByClusterName.values()) { + if (eci instanceof ElasticClientImpl) ((ElasticClientImpl) eci).destroy() + else if (eci instanceof PostgresElasticClient) ((PostgresElasticClient) eci).destroy() + } } @Override ElasticClient getDefault() { return clientByClusterName.get("default") } diff --git a/framework/src/main/groovy/org/moqui/impl/context/ElasticQueryTranslator.groovy b/framework/src/main/groovy/org/moqui/impl/context/ElasticQueryTranslator.groovy new file mode 100644 index 000000000..887aa1237 --- /dev/null +++ b/framework/src/main/groovy/org/moqui/impl/context/ElasticQueryTranslator.groovy @@ -0,0 +1,675 @@ +/* + * This software is in the public domain under CC0 1.0 Universal plus a + * Grant of Patent License. + * + * To the extent possible under law, the author(s) have dedicated all + * copyright and related and neighboring rights to this software to the + * public domain worldwide. This software is distributed without any + * warranty. + * + * You should have received a copy of the CC0 Public Domain Dedication + * along with this software (see the LICENSE.md file). If not, see + * . 
+ */ +package org.moqui.impl.context + +import org.slf4j.Logger +import org.slf4j.LoggerFactory + +/** + * Translates ElasticSearch/OpenSearch Query DSL (Map structures) into PostgreSQL SQL WHERE clauses, + * ORDER BY expressions, and OFFSET/LIMIT pagination for use by PostgresElasticClient. + * + * Supports the query types used by Moqui's SearchServices.xml and entity condition makeSearchFilter() methods: + * - query_string (→ websearch_to_tsquery / plainto_tsquery on content_tsv) + * - bool (must / should / must_not / filter) + * - term, terms + * - range + * - match_all + * - exists + * - nested (→ jsonb_array_elements EXISTS subquery) + */ +class ElasticQueryTranslator { + private final static Logger logger = LoggerFactory.getLogger(ElasticQueryTranslator.class) + + /** Regex pattern for valid field names — alphanumeric, underscores, dots, hyphens, and @ (for @timestamp) */ + private static final java.util.regex.Pattern SAFE_FIELD_PATTERN = java.util.regex.Pattern.compile('^[a-zA-Z0-9_@][a-zA-Z0-9_.\\-]*$') + + /** + * Validate that a field name is safe for interpolation into SQL. + * Rejects any field containing SQL metacharacters (quotes, semicolons, parentheses, etc.) 
+ * @throws IllegalArgumentException if the field name contains unsafe characters + */ + static String sanitizeFieldName(String field) { + if (field == null || field.isEmpty()) throw new IllegalArgumentException("Field name must not be empty") + if (!SAFE_FIELD_PATTERN.matcher(field).matches()) { + throw new IllegalArgumentException("Unsafe field name rejected: '${field}' — only alphanumeric, underscore, dot, hyphen, and @ allowed") + } + if (field.contains("--")) { + throw new IllegalArgumentException("Unsafe field name rejected: '${field}' — double-hyphen (SQL comment) not allowed") + } + if (field.length() > 256) { + throw new IllegalArgumentException("Field name too long (max 256 chars): '${field}'") + } + return field + } + + /** Holds the result of translating a query DSL fragment or full search request */ + static class TranslatedQuery { + /** SQL WHERE clause fragment (without the "WHERE" keyword), or "TRUE" if no filter */ + String whereClause = "TRUE" + /** JDBC bind parameters in order corresponding to ? placeholders in whereClause */ + List params = [] + /** SQL ORDER BY expression (without the "ORDER BY" keyword), or null */ + String orderBy = null + /** The tsquery expression (as SQL expression string) for use in ts_rank_cd() and ts_headline() */ + String tsqueryExpr = null + /** Bind parameters specifically for tsqueryExpr (separate from WHERE params) */ + List tsqueryParams = [] + /** OFFSET value for pagination */ + int fromOffset = 0 + /** LIMIT value for pagination */ + int sizeLimit = 20 + /** Track total hits (adds no SQL change but reflects ES track_total_hits flag) */ + boolean trackTotal = true + /** Fields to highlight, keyed by field name */ + Map highlightFields = [:] + } + + /** + * Translate a full ES searchMap (the body sent to /_search) into a TranslatedQuery. 
+ * @param searchMap Map as built by SearchServices.search#DataDocuments + */ + static TranslatedQuery translateSearchMap(Map searchMap) { + TranslatedQuery tq = new TranslatedQuery() + + // Pagination + Object fromVal = searchMap.get("from") + if (fromVal != null) tq.fromOffset = ((Number) fromVal).intValue() + Object sizeVal = searchMap.get("size") + if (sizeVal != null) tq.sizeLimit = ((Number) sizeVal).intValue() + + // Sort + Object sortVal = searchMap.get("sort") + if (sortVal instanceof List) { + tq.orderBy = translateSort((List) sortVal) + } + + // Highlight fields + Object highlightVal = searchMap.get("highlight") + if (highlightVal instanceof Map) { + Object fieldsVal = ((Map) highlightVal).get("fields") + if (fieldsVal instanceof Map) tq.highlightFields = (Map) fieldsVal + } + + // track_total_hits + Object tthVal = searchMap.get("track_total_hits") + if (tthVal != null) tq.trackTotal = Boolean.TRUE == tthVal || "true".equals(tthVal.toString()) + + // Query + Object queryVal = searchMap.get("query") + if (queryVal instanceof Map) { + QueryResult qr = translateQuery((Map) queryVal) + tq.whereClause = qr.clause ?: "TRUE" + tq.params = qr.params + tq.tsqueryExpr = qr.tsqueryExpr + tq.tsqueryParams = qr.tsqueryParams + } + + return tq + } + + /** Internal result holder for a single query fragment */ + static class QueryResult { + String clause = "TRUE" + List params = [] + /** If this query has a full-text component, the SQL tsquery expression for scoring/highlighting */ + String tsqueryExpr = null + /** Bind parameters specifically for tsqueryExpr (separate from WHERE clause params) */ + List tsqueryParams = [] + } + + static QueryResult translateQuery(Map queryMap) { + if (queryMap == null || queryMap.isEmpty()) return new QueryResult() + + String queryType = (String) queryMap.keySet().iterator().next() + Object queryVal = queryMap.get(queryType) + + switch (queryType) { + case "match_all": return translateMatchAll() + case "match_none": + QueryResult qr = 
new QueryResult(); qr.clause = "FALSE"; return qr + case "query_string": return translateQueryString((Map) queryVal) + case "multi_match": return translateMultiMatch((Map) queryVal) + case "bool": return translateBool((Map) queryVal) + case "term": return translateTerm((Map) queryVal, false) + case "terms": return translateTerms((Map) queryVal) + case "range": return translateRange((Map) queryVal) + case "exists": return translateExists((Map) queryVal) + case "nested": return translateNested((Map) queryVal) + case "ids": return translateIds((Map) queryVal) + default: + logger.warn("ElasticQueryTranslator: unsupported query type '${queryType}', using TRUE") + return new QueryResult() + } + } + + private static QueryResult translateMatchAll() { + QueryResult qr = new QueryResult() + qr.clause = "TRUE" + return qr + } + + private static QueryResult translateQueryString(Map qsMap) { + QueryResult qr = new QueryResult() + if (qsMap == null) return qr + + String query = (String) qsMap.get("query") + if (!query || query.trim().isEmpty()) return qr + + // Clean up the query string: + // 1. Lucene field:value syntax → handle field-specific searches + // 2. Strip unsupported operators, translate AND/OR/NOT + // 3. 
Use websearch_to_tsquery which supports quoted phrases, AND, OR, -, + + String cleanedQuery = cleanLuceneQuery(query) + + if (!cleanedQuery || cleanedQuery.trim().isEmpty()) return qr + + // Use websearch_to_tsquery for natural language queries + // It handles: "exact phrase", AND/OR/NOT, +required, -exclude + qr.tsqueryExpr = "websearch_to_tsquery('english', ?)" + qr.tsqueryParams = [cleanedQuery] + qr.params = [cleanedQuery] + qr.clause = "content_tsv @@ websearch_to_tsquery('english', ?)" + return qr + } + + private static QueryResult translateMultiMatch(Map mmMap) { + // Treat like query_string on all fields + String query = (String) mmMap.get("query") + if (!query) return new QueryResult() + return translateQueryString([query: query]) + } + + private static QueryResult translateBool(Map boolMap) { + QueryResult qr = new QueryResult() + if (boolMap == null) return qr + + List clauses = [] + List params = [] + String combinedTsquery = null + List combinedTsqueryParams = [] + + // must (AND) + Object mustVal = boolMap.get("must") + if (mustVal instanceof List) { + List mustClauses = [] + for (Object item in (List) mustVal) { + if (item instanceof Map) { + QueryResult itemQr = translateQuery((Map) item) + mustClauses.add(itemQr.clause) + params.addAll(itemQr.params) + if (itemQr.tsqueryExpr) { + combinedTsquery = combinedTsquery ? 
"(${combinedTsquery}) && (${itemQr.tsqueryExpr})" : itemQr.tsqueryExpr + combinedTsqueryParams.addAll(itemQr.tsqueryParams) + } + } + } + if (mustClauses) clauses.add("(" + mustClauses.join(" AND ") + ")") + } else if (mustVal instanceof Map) { + QueryResult itemQr = translateQuery((Map) mustVal) + clauses.add(itemQr.clause) + params.addAll(itemQr.params) + if (itemQr.tsqueryExpr) { + combinedTsquery = itemQr.tsqueryExpr + combinedTsqueryParams.addAll(itemQr.tsqueryParams) + } + } + + // filter (same as must for our purposes) + Object filterVal = boolMap.get("filter") + if (filterVal instanceof List) { + List filterClauses = [] + for (Object item in (List) filterVal) { + if (item instanceof Map) { + QueryResult itemQr = translateQuery((Map) item) + filterClauses.add(itemQr.clause) + params.addAll(itemQr.params) + } + } + if (filterClauses) clauses.add("(" + filterClauses.join(" AND ") + ")") + } else if (filterVal instanceof Map) { + QueryResult itemQr = translateQuery((Map) filterVal) + clauses.add(itemQr.clause) + params.addAll(itemQr.params) + } + + // should (OR) + Object shouldVal = boolMap.get("should") + if (shouldVal instanceof List) { + List shouldClauses = [] + for (Object item in (List) shouldVal) { + if (item instanceof Map) { + QueryResult itemQr = translateQuery((Map) item) + shouldClauses.add(itemQr.clause) + params.addAll(itemQr.params) + if (itemQr.tsqueryExpr) { + combinedTsquery = combinedTsquery ? 
"(${combinedTsquery}) || (${itemQr.tsqueryExpr})" : itemQr.tsqueryExpr + combinedTsqueryParams.addAll(itemQr.tsqueryParams) + } + } + } + if (shouldClauses) { + int minShouldMatch = 1 + Object msmVal = boolMap.get("minimum_should_match") + if (msmVal != null) minShouldMatch = ((Number) msmVal).intValue() + if (minShouldMatch == 1) { + clauses.add("(" + shouldClauses.join(" OR ") + ")") + } else { + // For minimum_should_match > 1, use a CASE/SUM trick for simplicity just add as OR + clauses.add("(" + shouldClauses.join(" OR ") + ")") + } + } + } else if (shouldVal instanceof Map) { + QueryResult itemQr = translateQuery((Map) shouldVal) + clauses.add(itemQr.clause) + params.addAll(itemQr.params) + if (itemQr.tsqueryExpr) { + combinedTsquery = itemQr.tsqueryExpr + combinedTsqueryParams.addAll(itemQr.tsqueryParams) + } + } + + // must_not (NOT) + Object mustNotVal = boolMap.get("must_not") + if (mustNotVal instanceof List) { + List mustNotClauses = [] + for (Object item in (List) mustNotVal) { + if (item instanceof Map) { + QueryResult itemQr = translateQuery((Map) item) + mustNotClauses.add(itemQr.clause) + params.addAll(itemQr.params) + } + } + if (mustNotClauses) clauses.add("NOT (" + mustNotClauses.join(" OR ") + ")") + } else if (mustNotVal instanceof Map) { + QueryResult itemQr = translateQuery((Map) mustNotVal) + clauses.add("NOT (${itemQr.clause})") + params.addAll(itemQr.params) + } + + qr.clause = clauses ? 
"(" + clauses.join(" AND ") + ")" : "TRUE" + qr.params = params + qr.tsqueryExpr = combinedTsquery + qr.tsqueryParams = combinedTsqueryParams + return qr + } + + private static QueryResult translateTerm(Map termMap, boolean ignoreCase) { + QueryResult qr = new QueryResult() + if (termMap == null || termMap.isEmpty()) return qr + + String field = (String) termMap.keySet().iterator().next() + Object valueHolder = termMap.get(field) + Object value + if (valueHolder instanceof Map) { + value = ((Map) valueHolder).get("value") + } else { + value = valueHolder + } + if (value == null) { qr.clause = "TRUE"; return qr } + + // _id is a special ES field that maps to the doc_id column + if (field == "_id") { + qr.clause = "doc_id = ?" + qr.params = [value.toString()] + return qr + } + + String jsonPath = fieldToJsonPath("document", field) + if (ignoreCase && value instanceof String) { + qr.clause = "LOWER(${jsonPath}) = LOWER(?)" + } else { + qr.clause = "${jsonPath} = ?" + } + qr.params = [value.toString()] + return qr + } + + private static QueryResult translateTerms(Map termsMap) { + QueryResult qr = new QueryResult() + if (termsMap == null || termsMap.isEmpty()) return qr + + // Remove boost key if present + Map filteredMap = termsMap.findAll { k, v -> k != "boost" } + if (filteredMap.isEmpty()) return qr + + String field = (String) filteredMap.keySet().iterator().next() + Object valuesObj = filteredMap.get(field) + if (!(valuesObj instanceof List)) { qr.clause = "TRUE"; return qr } + List values = (List) valuesObj + if (values.isEmpty()) { qr.clause = "FALSE"; return qr } + + String jsonPath = fieldToJsonPath("document", field) + List placeholders = values.collect { "?" 
} + qr.clause = "${jsonPath} IN (${placeholders.join(', ')})" + qr.params = values.collect { it?.toString() } + return qr + } + + private static QueryResult translateRange(Map rangeMap) { + QueryResult qr = new QueryResult() + if (rangeMap == null || rangeMap.isEmpty()) return qr + + String field = (String) rangeMap.keySet().iterator().next() + Object rangeSpec = rangeMap.get(field) + if (!(rangeSpec instanceof Map)) return qr + + Map rangeSpecMap = (Map) rangeSpec + String jsonPath = fieldToJsonPath("document", field) + List conditions = [] + List params = [] + + // Determine cast type based on common field name patterns + String castType = guessCastType(field) + + Object gte = rangeSpecMap.get("gte") + Object gt = rangeSpecMap.get("gt") + Object lte = rangeSpecMap.get("lte") + Object lt = rangeSpecMap.get("lt") + + if (gte != null) { conditions.add("(${jsonPath})${castType} >= ?"); params.add(gte.toString()) } + if (gt != null) { conditions.add("(${jsonPath})${castType} > ?"); params.add(gt.toString()) } + if (lte != null) { conditions.add("(${jsonPath})${castType} <= ?"); params.add(lte.toString()) } + if (lt != null) { conditions.add("(${jsonPath})${castType} < ?"); params.add(lt.toString()) } + + if (conditions.isEmpty()) { qr.clause = "TRUE"; return qr } + qr.clause = conditions.join(" AND ") + qr.params = params + return qr + } + + private static QueryResult translateExists(Map existsMap) { + QueryResult qr = new QueryResult() + if (existsMap == null) return qr + String field = (String) existsMap.get("field") + if (!field) return qr + + // Validate field name to prevent SQL injection + sanitizeFieldName(field) + // Use jsonb_exists() rather than the jsonb ? operator: a literal ? in the SQL string would be parsed as a bind placeholder by the JDBC driver + // For nested paths, check the nested path exists + if (field.contains(".")) { + List parts = field.split("\\.") as List + String topLevel = parts[0] + qr.clause = "jsonb_exists(document, '${topLevel}')" + } else { + qr.clause = "jsonb_exists(document, '${field}')" + } + return qr + } + + private static QueryResult translateNested(Map nestedMap) { + QueryResult qr = new QueryResult() + if (nestedMap == null) return qr + + String path = (String) nestedMap.get("path") + Map innerQuery = (Map) nestedMap.get("query") + if (!path || !innerQuery) return qr + + // Validate path to prevent SQL injection + sanitizeFieldName(path) + // Translate the inner query against jsonb_array_elements alias "elem" + QueryResult innerQr = translateNestedQuery(innerQuery, path) + qr.clause = "EXISTS (SELECT 1 FROM jsonb_array_elements(document->'${path}') AS elem WHERE ${innerQr.clause})" + qr.params = innerQr.params + return qr + } + + /** Translate a query in the context of a nested jsonb_array_elements expression (uses "elem" alias) */ + private static QueryResult translateNestedQuery(Map queryMap, String parentPath) { + QueryResult qr = new QueryResult() + if (queryMap == null || queryMap.isEmpty()) return qr + + String queryType = (String) queryMap.keySet().iterator().next() + Object queryVal = queryMap.get(queryType) + + if (queryType == "bool") { + return translateNestedBool((Map) queryVal, parentPath) + } else if (queryType == "term") { + return translateNestedTerm((Map) queryVal, parentPath) + } else if (queryType == "terms") { + return translateNestedTerms((Map) queryVal, parentPath) + } else if (queryType == "range") { + return translateNestedRange((Map) queryVal, parentPath) + } else if (queryType == "match_all") { + return new QueryResult() + } else { + logger.warn("ElasticQueryTranslator.translateNestedQuery: unsupported nested query type '${queryType}', using TRUE") + return new QueryResult() + } + } + + private static QueryResult translateNestedBool(Map boolMap, String parentPath) { + QueryResult qr = new QueryResult() + if (boolMap == null) return qr + List clauses = [] + List params = [] + + for (String key in ["must", "filter", "should", "must_not"]) { + Object val = boolMap.get(key) + List items + if (val instanceof 
List) items = (List) val + else if (val instanceof Map) items = [(Map) val] + else continue + + List itemClauses = [] + for (Map item in items) { + QueryResult ir = translateNestedQuery(item, parentPath) + itemClauses.add(ir.clause) + params.addAll(ir.params) + } + if (itemClauses) { + // must/filter: all must match; should: any may match; must_not: none may match (mirrors top-level translateBool) + String joined = "(" + itemClauses.join(" AND ") + ")" + if (key == "must_not") joined = "NOT (" + itemClauses.join(" OR ") + ")" + else if (key == "should") joined = "(" + itemClauses.join(" OR ") + ")" + clauses.add(joined) + } + } + qr.clause = clauses ? clauses.join(" AND ") : "TRUE" + qr.params = params + return qr + } + + private static QueryResult translateNestedTerm(Map termMap, String parentPath) { + QueryResult qr = new QueryResult() + if (termMap == null || termMap.isEmpty()) return qr + String field = (String) termMap.keySet().iterator().next() + Object valueHolder = termMap.get(field) + Object value = valueHolder instanceof Map ? ((Map) valueHolder).get("value") : valueHolder + if (value == null) { qr.clause = "TRUE"; return qr } + + // For nested terms "parentPath.field", strip the parent path prefix + String localField = field.startsWith(parentPath + ".") ? field.substring(parentPath.length() + 1) : field + sanitizeFieldName(localField) + qr.clause = "elem->>'${localField}' = ?" + qr.params = [value.toString()] + return qr + } + + private static QueryResult translateNestedTerms(Map termsMap, String parentPath) { + QueryResult qr = new QueryResult() + Map filteredMap = termsMap.findAll { k, v -> k != "boost" } + if (filteredMap.isEmpty()) return qr + String field = (String) filteredMap.keySet().iterator().next() + Object valuesObj = filteredMap.get(field) + if (!(valuesObj instanceof List)) { qr.clause = "TRUE"; return qr } + List values = (List) valuesObj + if (values.isEmpty()) { qr.clause = "FALSE"; return qr } + String localField = field.startsWith(parentPath + ".") ?
field.substring(parentPath.length() + 1) : field + sanitizeFieldName(localField) + qr.clause = "elem->>'${localField}' IN (${values.collect { '?' }.join(', ')})" + qr.params = values.collect { it?.toString() } + return qr + } + + private static QueryResult translateNestedRange(Map rangeMap, String parentPath) { + QueryResult qr = new QueryResult() + if (rangeMap == null || rangeMap.isEmpty()) return qr + String field = (String) rangeMap.keySet().iterator().next() + Object rangeSpec = rangeMap.get(field) + if (!(rangeSpec instanceof Map)) return qr + Map rangeSpecMap = (Map) rangeSpec + String localField = field.startsWith(parentPath + ".") ? field.substring(parentPath.length() + 1) : field + sanitizeFieldName(localField) + String castType = guessCastType(localField) + List conditions = [] + List params = [] + Object gte = rangeSpecMap.get("gte"); if (gte != null) { conditions.add("(elem->>'${localField}')${castType} >= ?"); params.add(gte.toString()) } + Object gt = rangeSpecMap.get("gt"); if (gt != null) { conditions.add("(elem->>'${localField}')${castType} > ?"); params.add(gt.toString()) } + Object lte = rangeSpecMap.get("lte"); if (lte != null) { conditions.add("(elem->>'${localField}')${castType} <= ?"); params.add(lte.toString()) } + Object lt = rangeSpecMap.get("lt"); if (lt != null) { conditions.add("(elem->>'${localField}')${castType} < ?"); params.add(lt.toString()) } + qr.clause = conditions ? conditions.join(" AND ") : "TRUE" + qr.params = params + return qr + } + + private static QueryResult translateIds(Map idsMap) { + QueryResult qr = new QueryResult() + Object vals = idsMap?.get("values") + if (!(vals instanceof List) || ((List) vals).isEmpty()) { qr.clause = "FALSE"; return qr } + List ids = (List) vals + qr.clause = "doc_id IN (${ids.collect { '?' 
}.join(', ')})" + qr.params = ids.collect { it?.toString() } + return qr + } + + /** Translate an ES sort spec (list of sort entries) to a SQL ORDER BY expression */ + static String translateSort(List sortList) { + if (!sortList) return null + List parts = [] + for (Object sortEntry in sortList) { + if (sortEntry instanceof Map) { + Map sortMap = (Map) sortEntry + for (Map.Entry entry in sortMap.entrySet()) { + String field = ((String) entry.key).replace(".keyword", "") + String dir = "ASC" + if (entry.value instanceof Map) { + String orderVal = (String) ((Map) entry.value).get("order") + if ("desc".equalsIgnoreCase(orderVal)) dir = "DESC" + } else if ("desc".equalsIgnoreCase(entry.value?.toString())) { + dir = "DESC" + } + + if ("_score".equals(field)) { + parts.add("_score ${dir}") + } else { + String castType = guessCastType(field) + if (castType) { + parts.add("(${fieldToJsonPath("document", field)})${castType} ${dir}") + } else { + parts.add("${fieldToJsonPath("document", field)} ${dir}") + } + } + } + } else if (sortEntry instanceof String) { + String field = ((String) sortEntry).replace(".keyword", "") + if ("_score".equals(field)) { + parts.add("_score DESC") + } else { + parts.add("${fieldToJsonPath("document", field)} ASC") + } + } + } + return parts ? parts.join(", ") : null + } + + /** + * Convert an ES field path to a PostgreSQL JSONB access expression. + * E.g. 
"product.name" → "document->'product'->>'name'" + * "productId" → "document->>'productId'" + */ + static String fieldToJsonPath(String docAlias, String field) { + // Strip .keyword suffix (used in ES for exact/sortable text fields) + if (field.endsWith(".keyword")) field = field.substring(0, field.length() - ".keyword".length()) + // Validate field name to prevent SQL injection + sanitizeFieldName(field) + List parts = field.split("\\.") as List + if (parts.size() == 1) return "${docAlias}->>'${field}'" + // For nested paths: docAlias->'part1'->'part2'->>'lastPart' + StringBuilder sb = new StringBuilder(docAlias) + for (int i = 0; i < parts.size() - 1; i++) { + sb.append("->'${parts[i]}'") + } + sb.append("->>'${parts[parts.size() - 1]}'") + return sb.toString() + } + + /** + * Guess the appropriate PostgreSQL cast type for a field name to use in range/sort comparisons. + * Returns empty string if no cast is needed (use text comparison). + */ + private static String guessCastType(String field) { + String lf = field.toLowerCase() + if (lf.contains("date") || lf.contains("stamp") || lf.contains("time") || lf == "@timestamp") { + return "::timestamptz" + } + if (lf.contains("amount") || lf.contains("price") || lf.contains("cost") || lf.contains("total") || + lf.contains("quantity") || lf.contains("qty") || lf.contains("score") || lf.contains("count") || + lf.contains("number") || lf.contains("num") || lf.contains("id") && lf.endsWith("num")) { + return "::numeric" + } + return "" + } + + /** + * Clean up a Lucene query string to be safe for use with websearch_to_tsquery. + * websearch_to_tsquery supports: "quoted phrases", AND, OR, -, + + * This removes/translates Lucene-specific syntax that websearch_to_tsquery doesn't support: + * - field:value (extract field-specific as general text) + * - field:[range TO range] (drop or convert) + * - wildcard * ? 
(drop trailing wildcards, keep term) + * - boost ^ (strip) + * - fuzzy ~ (strip) + * - parentheses → use natural AND grouping + */ + static String cleanLuceneQuery(String query) { + if (!query) return query + String q = query.trim() + + // Remove Lucene field:value prefixes (keep just the value part) + q = q.replaceAll(/\w+:("(?:[^"\\]|\\.)*"|\S+)/, '$1') + + // Remove range queries [X TO Y] + q = q.replaceAll(/\[[^\]]*\]/, '') + q = q.replaceAll(/\{[^}]*\}/, '') + + // Remove boost operators (^number) + q = q.replaceAll(/\^[\d.]+/, '') + + // Remove fuzzy operators (~number or just ~) + q = q.replaceAll(/~[\d.]*/, '') + + // websearch_to_tsquery handles AND/OR case-insensitively; convert NOT to - (its exclusion syntax), + // consuming the following whitespace so the - attaches to the next term (a detached "- " is ignored) + q = q.replaceAll(/\bNOT\s+/, '-') + + // Remove wildcards (prefix/partial matching is not supported by websearch_to_tsquery, so match the bare term instead) + q = q.replaceAll(/\*/, '') + q = q.replaceAll(/\?/, '') + + // Remove empty parentheses, normalize spaces + q = q.replaceAll(/\(\s*\)/, '') + q = q.replaceAll(/\s+/, ' ').trim() + + return q ?: '' + } + + /** + * Build a ts_headline SQL expression for a given field with the given tsquery expression. + * @param fieldJsonPath The SQL expression to extract the text field (e.g. "document->>'productName'") + * @param tsqueryExpr The SQL tsquery expression (e.g. "websearch_to_tsquery('english', ?)") + */ + static String buildHighlightExpr(String fieldJsonPath, String tsqueryExpr) { + return "ts_headline('english', coalesce(${fieldJsonPath}, ''), ${tsqueryExpr}, 'StartSel=,StopSel=,MaxWords=35,MinWords=15,ShortWord=3,HighlightAll=false,MaxFragments=3,FragmentDelimiter= ... 
')" + } +} diff --git a/framework/src/main/groovy/org/moqui/impl/context/PostgresElasticClient.groovy b/framework/src/main/groovy/org/moqui/impl/context/PostgresElasticClient.groovy new file mode 100644 index 000000000..9d1f82309 --- /dev/null +++ b/framework/src/main/groovy/org/moqui/impl/context/PostgresElasticClient.groovy @@ -0,0 +1,1276 @@ +/* + * This software is in the public domain under CC0 1.0 Universal plus a + * Grant of Patent License. + * + * To the extent possible under law, the author(s) have dedicated all + * copyright and related and neighboring rights to this software to the + * public domain worldwide. This software is distributed without any + * warranty. + * + * You should have received a copy of the CC0 Public Domain Dedication + * along with this software (see the LICENSE.md file). If not, see + * . + */ +package org.moqui.impl.context + +import com.fasterxml.jackson.databind.ObjectMapper +import groovy.transform.CompileStatic +import org.moqui.BaseException +import org.moqui.context.ElasticFacade +import org.moqui.entity.EntityValue +import org.moqui.entity.EntityList +import org.moqui.util.MNode +import org.moqui.util.RestClient +import org.moqui.util.RestClient.Method +import org.slf4j.Logger +import org.slf4j.LoggerFactory + +import java.sql.Connection +import java.sql.PreparedStatement +import java.sql.ResultSet +import java.sql.Statement +import java.sql.Timestamp +import java.sql.Types +import java.util.concurrent.Future + +/** + * PostgreSQL-backed implementation of ElasticFacade.ElasticClient. + * + * Stores and searches documents using: + * - moqui_search_index table — tracks index metadata (replaces ES index/alias management) + * - moqui_document table — stores documents as JSONB with tsvector for full-text search + * + * All ElasticSearch Query DSL is translated to PostgreSQL SQL by ElasticQueryTranslator. + * Application logs go to moqui_logs table; HTTP request logs go to moqui_http_log table. 
+ * + * Configured via MoquiConf.xml elastic-facade.cluster with type="postgres". + * Example: + * <cluster name="default" type="postgres" url="transactional" index-prefix="mq_"/> + */ +@CompileStatic +class PostgresElasticClient implements ElasticFacade.ElasticClient { + private final static Logger logger = LoggerFactory.getLogger(PostgresElasticClient.class) + private final static Set DOC_META_KEYS = new HashSet<>(["_index", "_type", "_id", "_timestamp"]) + + /** Jackson mapper shared with ElasticFacadeImpl */ + static final ObjectMapper jacksonMapper = ElasticFacadeImpl.jacksonMapper + + private final ExecutionContextFactoryImpl ecfi + private final MNode clusterNode + private final String clusterName + private final String indexPrefix + /** Entity datasource group to get connections from (e.g. "transactional") */ + private final String datasourceGroup + + PostgresElasticClient(MNode clusterNode, ExecutionContextFactoryImpl ecfi) { + this.ecfi = ecfi + this.clusterNode = clusterNode + this.clusterName = clusterNode.attribute("name") + this.indexPrefix = clusterNode.attribute("index-prefix") ?: "" + + // url attribute for postgres type = datasource group name (or "transactional" by default) + String urlAttr = clusterNode.attribute("url") + this.datasourceGroup = (urlAttr && !"".equals(urlAttr.trim())) ? 
urlAttr.trim() : "transactional" + + logger.info("Initializing PostgresElasticClient for cluster '${clusterName}' using datasource group '${datasourceGroup}' with index prefix '${indexPrefix}'") + + // Initialize schema (CREATE TABLE IF NOT EXISTS, extensions, indexes) + initSchema() + } + + void destroy() { + // Nothing to destroy — connection pool is managed by the entity facade datasource + } + + // ============================================================ + // Schema initialization + // ============================================================ + + private void initSchema() { + boolean started = ecfi.transactionFacade.begin(null) + try { + Connection conn = ecfi.entityFacade.getConnection(datasourceGroup) + Statement stmt = conn.createStatement() + try { + // Enable pg_trgm extension for fuzzy search (available since PG 9.1) + try { stmt.execute("CREATE EXTENSION IF NOT EXISTS pg_trgm") } + catch (Exception extEx) { logger.warn("Could not create pg_trgm extension (may require superuser): ${extEx.message}") } + + // moqui_search_index — index metadata (replaces ES index/alias concept) + stmt.execute(""" + CREATE TABLE IF NOT EXISTS moqui_search_index ( + index_name TEXT NOT NULL, + alias_name TEXT, + doc_type TEXT, + mapping TEXT, + settings TEXT, + created_stamp TIMESTAMPTZ NOT NULL DEFAULT now(), + CONSTRAINT pk_moqui_search_index PRIMARY KEY (index_name) + ) + """.trim()) + stmt.execute("CREATE INDEX IF NOT EXISTS idx_mq_sidx_alias ON moqui_search_index (alias_name)") + + // moqui_document — main document store + stmt.execute(""" + CREATE TABLE IF NOT EXISTS moqui_document ( + index_name TEXT NOT NULL, + doc_id TEXT NOT NULL, + doc_type TEXT, + document JSONB, + content_text TEXT, + content_tsv TSVECTOR, + created_stamp TIMESTAMPTZ NOT NULL DEFAULT now(), + updated_stamp TIMESTAMPTZ NOT NULL DEFAULT now(), + CONSTRAINT pk_moqui_document PRIMARY KEY (index_name, doc_id) + ) + """.trim()) + // Ensure PostgreSQL-specific columns exist (table may have been 
created by Moqui entity sync without them) + stmt.execute("ALTER TABLE moqui_document ADD COLUMN IF NOT EXISTS content_tsv TSVECTOR") + stmt.execute("ALTER TABLE moqui_document ADD COLUMN IF NOT EXISTS content_text TEXT") + // Ensure document column is JSONB (entity sync may create it as TEXT from text-very-long mapping) + try { + stmt.execute("ALTER TABLE moqui_document ALTER COLUMN document TYPE JSONB USING document::jsonb") + } catch (Exception e) { + // Column already JSONB or table has no rows causing cast to fail — ignore + logger.trace("Note: could not alter document column to JSONB (may already be correct type): " + e.getMessage()) + } + // GIN index on tsvector for full-text search + stmt.execute("CREATE INDEX IF NOT EXISTS idx_mq_doc_tsv ON moqui_document USING GIN (content_tsv)") + // GIN index on document JSONB for arbitrary path queries + stmt.execute("CREATE INDEX IF NOT EXISTS idx_mq_doc_json ON moqui_document USING GIN (document jsonb_path_ops)") + // GIN trigram index on content_text for fuzzy/LIKE queries + stmt.execute("CREATE INDEX IF NOT EXISTS idx_mq_doc_trgm ON moqui_document USING GIN (content_text gin_trgm_ops)") + // Index for type-based filtering + stmt.execute("CREATE INDEX IF NOT EXISTS idx_mq_doc_type ON moqui_document (doc_type)") + // Index for time-based ordering + stmt.execute("CREATE INDEX IF NOT EXISTS idx_mq_doc_upd ON moqui_document (index_name, updated_stamp)") + + // moqui_logs — application log (replaces ES moqui_logs index) + stmt.execute(""" + CREATE TABLE IF NOT EXISTS moqui_logs ( + log_id BIGSERIAL PRIMARY KEY, + log_timestamp TIMESTAMPTZ NOT NULL, + log_level TEXT, + thread_name TEXT, + thread_id BIGINT, + thread_priority INTEGER, + logger_name TEXT, + message TEXT, + source_host TEXT, + user_id TEXT, + visitor_id TEXT, + mdc JSONB, + thrown JSONB + ) + """.trim()) + stmt.execute("CREATE INDEX IF NOT EXISTS idx_mq_logs_ts ON moqui_logs USING BRIN (log_timestamp)") + stmt.execute("CREATE INDEX IF NOT EXISTS 
idx_mq_logs_lvl ON moqui_logs (log_level)") + // Fix log_id if Moqui entity sync created the table without a BIGSERIAL default + stmt.execute(""" + DO \$\$ + BEGIN + IF (SELECT column_default FROM information_schema.columns + WHERE table_name = 'moqui_logs' AND column_name = 'log_id') IS NULL THEN + CREATE SEQUENCE IF NOT EXISTS moqui_logs_log_id_seq; + ALTER TABLE moqui_logs ALTER COLUMN log_id SET DEFAULT nextval('moqui_logs_log_id_seq'); + ALTER SEQUENCE moqui_logs_log_id_seq OWNED BY moqui_logs.log_id; + END IF; + END \$\$; + """.trim()) + + // moqui_http_log — HTTP request log (replaces ES moqui_http_log index) + stmt.execute(""" + CREATE TABLE IF NOT EXISTS moqui_http_log ( + log_id BIGSERIAL PRIMARY KEY, + log_timestamp TIMESTAMPTZ NOT NULL, + remote_ip TEXT, + remote_user TEXT, + server_ip TEXT, + content_type TEXT, + request_method TEXT, + request_scheme TEXT, + request_host TEXT, + request_path TEXT, + request_query TEXT, + http_version TEXT, + response_code INTEGER, + time_initial_ms BIGINT, + time_final_ms BIGINT, + bytes_sent BIGINT, + referrer TEXT, + agent TEXT, + session_id TEXT, + visitor_id TEXT + ) + """.trim()) + stmt.execute("CREATE INDEX IF NOT EXISTS idx_mq_hlog_ts ON moqui_http_log USING BRIN (log_timestamp)") + stmt.execute("CREATE INDEX IF NOT EXISTS idx_mq_hlog_path ON moqui_http_log (request_path)") + // Fix log_id if Moqui entity sync created the table without a BIGSERIAL default + stmt.execute(""" + DO \$\$ + BEGIN + IF (SELECT column_default FROM information_schema.columns + WHERE table_name = 'moqui_http_log' AND column_name = 'log_id') IS NULL THEN + CREATE SEQUENCE IF NOT EXISTS moqui_http_log_log_id_seq; + ALTER TABLE moqui_http_log ALTER COLUMN log_id SET DEFAULT nextval('moqui_http_log_log_id_seq'); + ALTER SEQUENCE moqui_http_log_log_id_seq OWNED BY moqui_http_log.log_id; + END IF; + END \$\$; + """.trim()) + + logger.info("PostgresElasticClient schema initialized for cluster '${clusterName}'") + } finally { + stmt.close() + } 
+ ecfi.transactionFacade.commit(started) + } catch (Throwable t) { + ecfi.transactionFacade.rollback(started, "Error initializing PostgresElasticClient schema", t) + throw new BaseException("Error initializing PostgresElasticClient schema for cluster '${clusterName}'", t) + } + } + + /** + * Get a JDBC Connection from the entity facade for the configured datasource group. + * The returned Connection is a Moqui ConnectionWrapper that is transaction-managed: + * close() is a no-op; the connection is automatically closed when the enclosing + * transaction commits or rolls back via TransactionFacade. Callers MUST ensure an + * active transaction exists before calling this method. + */ + private Connection getConnection() { + return ecfi.entityFacade.getConnection(datasourceGroup) + } + + // ============================================================ + // ElasticClient — Cluster info + // ============================================================ + + @Override String getClusterName() { return clusterName } + @Override String getClusterLocation() { return "postgres:${datasourceGroup}:${indexPrefix}" } + + @Override + Map getServerInfo() { + Connection conn = getConnection() + PreparedStatement ps = conn.prepareStatement("SELECT version()") + try { + ResultSet rs = ps.executeQuery() + try { + if (rs.next()) { + return [name: clusterName, cluster_name: "postgres", + version: [distribution: "postgres", number: rs.getString(1)], + tagline: "Moqui PostgresElasticClient"] + } + } finally { rs.close() } + } finally { ps.close() } + return [name: clusterName, cluster_name: "postgres", version: [distribution: "postgres"]] + } + + // ============================================================ + // Index management + // ============================================================ + + @Override + boolean indexExists(String index) { + if (!index) return false + String prefixed = prefixIndexName(index) + Connection conn = getConnection() + // Check both exact index_name and 
alias_name — mirrors ES behaviour where + // aliases are treated as valid index references (e.g. "mantle" alias → true) + PreparedStatement ps = conn.prepareStatement( + "SELECT 1 FROM moqui_search_index WHERE index_name = ? OR alias_name = ?") + try { + ps.setString(1, prefixed) + ps.setString(2, prefixed) + ResultSet rs = ps.executeQuery() + try { return rs.next() } finally { rs.close() } + } finally { ps.close() } + } + + @Override + boolean aliasExists(String alias) { + if (!alias) return false + String prefixed = prefixIndexName(alias) + Connection conn = getConnection() + PreparedStatement ps = conn.prepareStatement("SELECT 1 FROM moqui_search_index WHERE alias_name = ?") + try { + ps.setString(1, prefixed) + ResultSet rs = ps.executeQuery() + try { return rs.next() } finally { rs.close() } + } finally { ps.close() } + } + + @Override + void createIndex(String index, Map docMapping, String alias) { + createIndex(index, null, docMapping, alias, null) + } + + /** Extended createIndex with docType and settings (used internally and by ElasticFacadeImpl.storeIndexAndMapping) */ + void createIndex(String index, String docType, Map docMapping, String alias, Map settings) { + if (!index) throw new IllegalArgumentException("Index name required for createIndex") + String prefixedIndex = prefixIndexName(index) + String prefixedAlias = alias ? prefixIndexName(alias) : null + + String mappingJson = docMapping ? objectToJson(docMapping) : null + String settingsJson = settings ? objectToJson(settings) : null + + Connection conn = getConnection() + PreparedStatement ps = conn.prepareStatement(""" + INSERT INTO moqui_search_index (index_name, alias_name, doc_type, mapping, settings) + VALUES (?, ?, ?, ?, ?) 
+ ON CONFLICT (index_name) DO UPDATE SET + alias_name = EXCLUDED.alias_name, + doc_type = EXCLUDED.doc_type, + mapping = EXCLUDED.mapping, + settings = EXCLUDED.settings + """.trim()) + try { + ps.setString(1, prefixedIndex) + if (prefixedAlias) ps.setString(2, prefixedAlias) else ps.setNull(2, Types.VARCHAR) + if (docType) ps.setString(3, docType) else ps.setNull(3, Types.VARCHAR) + if (mappingJson) ps.setString(4, mappingJson) else ps.setNull(4, Types.VARCHAR) + if (settingsJson) ps.setString(5, settingsJson) else ps.setNull(5, Types.VARCHAR) + ps.executeUpdate() + } finally { ps.close() } + logger.info("PostgresElasticClient.createIndex: created index '${prefixedIndex}'${prefixedAlias ? ' with alias ' + prefixedAlias : ''}") + } + + @Override + void putMapping(String index, Map docMapping) { + if (!docMapping) throw new IllegalArgumentException("Mapping may not be empty for putMapping") + String prefixedIndex = prefixIndexName(index) + String mappingJson = objectToJson(docMapping) + Connection conn = getConnection() + PreparedStatement ps = conn.prepareStatement( + "UPDATE moqui_search_index SET mapping = ? 
WHERE index_name = ?") + try { + ps.setString(1, mappingJson) + ps.setString(2, prefixedIndex) + ps.executeUpdate() + } finally { ps.close() } + } + + @Override + void deleteIndex(String index) { + if (!index) throw new IllegalArgumentException("Index name required for deleteIndex") + String prefixedIndex = prefixIndexName(index) + Connection conn = getConnection() + // Delete documents first, then index metadata + PreparedStatement ps1 = conn.prepareStatement("DELETE FROM moqui_document WHERE index_name = ?") + try { + ps1.setString(1, prefixedIndex) + int deleted = ps1.executeUpdate() + logger.info("PostgresElasticClient.deleteIndex: deleted ${deleted} documents from index '${prefixedIndex}'") + } finally { ps1.close() } + PreparedStatement ps2 = conn.prepareStatement("DELETE FROM moqui_search_index WHERE index_name = ?") + try { + ps2.setString(1, prefixedIndex) + ps2.executeUpdate() + } finally { ps2.close() } + } + + // ============================================================ + // Document CRUD + // ============================================================ + + @Override + void index(String index, String _id, Map document) { + if (!index) throw new IllegalArgumentException("Index name required for index()") + if (!_id) throw new IllegalArgumentException("_id required for index()") + String prefixedIndex = prefixIndexName(index) + String docJson = objectToJson(document) + String contentText = extractContentText(document) + upsertDocument(prefixedIndex, _id, null, docJson, contentText) + } + + @Override + void update(String index, String _id, Map documentFragment) { + if (!index) throw new IllegalArgumentException("Index name required for update()") + if (!_id) throw new IllegalArgumentException("_id required for update()") + String prefixedIndex = prefixIndexName(index) + String fragmentJson = objectToJson(documentFragment) + Connection conn = getConnection() + // Merge fragment into existing document using PostgreSQL jsonb concatenation operator || + 
PreparedStatement ps = conn.prepareStatement(""" + UPDATE moqui_document + SET document = COALESCE(document, '{}'::jsonb) || ?::jsonb, + content_text = ( + SELECT string_agg(val::text, ' ') + FROM jsonb_each_text(COALESCE(document, '{}'::jsonb) || ?::jsonb) AS kv(key, val) + WHERE jsonb_typeof(COALESCE(document, '{}'::jsonb) || ?::jsonb -> kv.key) IN ('string', 'number') + ), + content_tsv = to_tsvector('english', coalesce(( + SELECT string_agg(val::text, ' ') + FROM jsonb_each_text(COALESCE(document, '{}'::jsonb) || ?::jsonb) AS kv(key, val) + ), '')), + updated_stamp = now() + WHERE index_name = ? AND doc_id = ? + """.trim()) + try { + ps.setString(1, fragmentJson) + ps.setString(2, fragmentJson) + ps.setString(3, fragmentJson) + ps.setString(4, fragmentJson) + ps.setString(5, prefixedIndex) + ps.setString(6, _id) + ps.executeUpdate() + } finally { ps.close() } + } + + @Override + void delete(String index, String _id) { + if (!index) throw new IllegalArgumentException("Index name required for delete()") + if (!_id) throw new IllegalArgumentException("_id required for delete()") + String prefixedIndex = prefixIndexName(index) + Connection conn = getConnection() + PreparedStatement ps = conn.prepareStatement( + "DELETE FROM moqui_document WHERE index_name = ? 
AND doc_id = ?") + try { + ps.setString(1, prefixedIndex) + ps.setString(2, _id) + int deleted = ps.executeUpdate() + if (deleted == 0) logger.warn("delete() document not found in index '${prefixedIndex}' with id '${_id}'") + } finally { ps.close() } + } + + @Override + Integer deleteByQuery(String index, Map queryMap) { + if (!index) throw new IllegalArgumentException("Index name required for deleteByQuery()") + String prefixedIndex = prefixIndexName(index) + ElasticQueryTranslator.QueryResult qr = ElasticQueryTranslator.translateQuery(queryMap ?: [match_all: [:]]) + + // Build params: [prefixedIndex] + qr.params + List allParams = new ArrayList<>() + allParams.add(prefixedIndex) + if (qr.params) allParams.addAll(qr.params) + String sql = "DELETE FROM moqui_document WHERE index_name = ? AND (${qr.clause})" + + Connection conn = getConnection() + PreparedStatement ps = conn.prepareStatement(sql) + try { + for (int i = 0; i < allParams.size(); i++) { + setParam(ps, i + 1, allParams[i]) + } + // Two tsvector params for query_string (where + tsquery for ranking) - dedup for DELETE + // Actually deleteByQuery doesn't need tsquery scoring, already handled by WHERE clause + return ps.executeUpdate() + } finally { ps.close() } + } + + @Override + void bulk(String index, List actionSourceList) { + if (!actionSourceList) return + String prefixedIndex = index ? prefixIndexName(index) : null + + // Process actions: most have action + source pairs, but delete is action-only (no source line) + int i = 0 + while (i < actionSourceList.size()) { + Map action = (Map) actionSourceList.get(i) + + if (action.containsKey("delete")) { + // Delete actions have NO source document — only consume 1 item + Map actionSpec = (Map) action.get("delete") + String idxName = actionSpec.get("_index") ? 
prefixIndexName((String) actionSpec.get("_index")) : prefixedIndex + String _id = (String) actionSpec.get("_id") + if (idxName && _id) delete(idxName, _id) + i += 1 + } else if (i + 1 < actionSourceList.size()) { + // index/create/update actions consume action + source (2 items) + Map source = (Map) actionSourceList.get(i + 1) + + if (action.containsKey("index") || action.containsKey("create")) { + Map actionSpec = (Map) (action.get("index") ?: action.get("create")) + String idxName = actionSpec.get("_index") ? prefixIndexName((String) actionSpec.get("_index")) : prefixedIndex + String _id = (String) actionSpec.get("_id") + if (idxName) { + String docJson = objectToJson(source) + String contentText = extractContentText(source) + upsertDocument(idxName, _id, null, docJson, contentText) + } + } else if (action.containsKey("update")) { + Map actionSpec = (Map) action.get("update") + String idxName = actionSpec.get("_index") ? prefixIndexName((String) actionSpec.get("_index")) : prefixedIndex + String _id = (String) actionSpec.get("_id") + if (idxName && _id) { + Map doc = (Map) source.get("doc") ?: source + update(idxName, _id, doc) + } + } + i += 2 + } else { + // Malformed: action without source at end of list + logger.warn("bulk(): action at index ${i} has no following source document, skipping") + i += 1 + } + } + } + + @Override + void bulkIndex(String index, String idField, List documentList) { + bulkIndex(index, null, idField, documentList, false) + } + + @Override + void bulkIndex(String index, String docType, String idField, List documentList, boolean refresh) { + if (!documentList) return + String prefixedIndex = prefixIndexName(index) + boolean hasId = idField != null && !idField.isEmpty() + + Connection conn = getConnection() + PreparedStatement ps = conn.prepareStatement(""" + INSERT INTO moqui_document (index_name, doc_id, doc_type, document, content_text, content_tsv, updated_stamp) + VALUES (?, ?, ?, ?::jsonb, ?, to_tsvector('english', COALESCE(?, 
'')), now()) + ON CONFLICT (index_name, doc_id) DO UPDATE SET + doc_type = EXCLUDED.doc_type, + document = EXCLUDED.document, + content_text = EXCLUDED.content_text, + content_tsv = EXCLUDED.content_tsv, + updated_stamp = EXCLUDED.updated_stamp + """.trim()) + try { + int batchSize = 0 + for (Map doc in documentList) { + String _id = hasId ? (doc.get(idField)?.toString() ?: UUID.randomUUID().toString()) : UUID.randomUUID().toString() + String docJson = objectToJson(doc) + String contentText = extractContentText(doc) + ps.setString(1, prefixedIndex) + ps.setString(2, _id) + if (docType) ps.setString(3, docType) else ps.setNull(3, Types.VARCHAR) + ps.setString(4, docJson) + ps.setString(5, contentText) + ps.setString(6, contentText) + ps.addBatch() + batchSize++ + if (batchSize >= 500) { + ps.executeBatch() + batchSize = 0 + } + } + if (batchSize > 0) ps.executeBatch() + } finally { ps.close() } + } + + @Override + Map get(String index, String _id) { + if (!index || !_id) return null + String prefixedIndex = prefixIndexName(index) + Connection conn = getConnection() + PreparedStatement ps = conn.prepareStatement( + "SELECT doc_id, index_name, doc_type, document FROM moqui_document WHERE index_name = ? AND doc_id = ?") + try { + ps.setString(1, prefixedIndex) + ps.setString(2, _id) + ResultSet rs = ps.executeQuery() + try { + if (rs.next()) { + Map source = (Map) jsonToObject(rs.getString("document")) + return [_index: unprefixIndexName(rs.getString("index_name")), + _id : rs.getString("doc_id"), + _type : rs.getString("doc_type"), + _source: source] + } + return null + } finally { rs.close() } + } finally { ps.close() } + } + + @Override + Map getSource(String index, String _id) { + Map result = get(index, _id) + return result ? (Map) result.get("_source") : null + } + + @Override + List get(String index, List _idList) { + if (!_idList || !index) return [] + String prefixedIndex = prefixIndexName(index) + String placeholders = _idList.collect { "?" 
}.join(", ") + Connection conn = getConnection() + PreparedStatement ps = conn.prepareStatement( + "SELECT doc_id, index_name, doc_type, document FROM moqui_document WHERE index_name = ? AND doc_id IN (${placeholders})") + try { + ps.setString(1, prefixedIndex) + for (int i = 0; i < _idList.size(); i++) ps.setString(i + 2, _idList[i]) + ResultSet rs = ps.executeQuery() + try { + List results = [] + while (rs.next()) { + Map source = (Map) jsonToObject(rs.getString("document")) + results.add([_index: unprefixIndexName(rs.getString("index_name")), + _id : rs.getString("doc_id"), + _type : rs.getString("doc_type"), + _source: source]) + } + return results + } finally { rs.close() } + } finally { ps.close() } + } + + // ============================================================ + // Search + // ============================================================ + + @Override + Map search(String index, Map searchMap) { + // Special case: moqui_logs is a dedicated table, not stored in moqui_document + if (index && (index == 'moqui_logs' || prefixIndexName(index) == prefixIndexName('moqui_logs'))) { + return searchLogsTable(searchMap ?: [:]) + } + + ElasticQueryTranslator.TranslatedQuery tq = ElasticQueryTranslator.translateSearchMap(searchMap ?: [:]) + + // Determine index(es) to query + List indexNames = resolveIndexNames(index) + if (indexNames.isEmpty()) { + return [hits: [total: [value: 0, relation: "eq"], hits: []]] + } + + // Build score expression + String scoreExpr = tq.tsqueryExpr ? + "ts_rank_cd(content_tsv, ${tq.tsqueryExpr})" : "1.0::float" + + // Build WHERE clause + String idxPlaceholders = indexNames.collect { "?" 
}.join(", ")
+        String whereClause = "index_name IN (${idxPlaceholders})"
+        List allParams = new ArrayList<>(indexNames)
+
+        if (tq.tsqueryExpr) {
+            // For query_string: add the tsquery param for the WHERE clause
+            whereClause += " AND " + tq.whereClause
+            allParams.addAll(tq.params)
+        } else if (tq.whereClause && tq.whereClause != "TRUE") {
+            whereClause += " AND " + tq.whereClause
+            allParams.addAll(tq.params)
+        }
+
+        // ORDER BY
+        String orderByClause = tq.orderBy ?: (tq.tsqueryExpr ? "_score DESC" : "updated_stamp DESC")
+
+        // Build count query first
+        String countSql = "SELECT COUNT(*) FROM moqui_document WHERE ${whereClause}"
+        long totalCount = 0L
+
+        // Build main query
+        String mainSql = """
+            SELECT doc_id, index_name, doc_type, document, ${buildScoreSelect(tq)} AS _score
+            FROM moqui_document
+            WHERE ${whereClause}
+            ORDER BY ${orderByClause}
+            LIMIT ? OFFSET ?
+        """.trim()
+
+        Connection conn = getConnection()
+
+        // Execute count
+        if (tq.trackTotal) {
+            PreparedStatement countPs = conn.prepareStatement(countSql)
+            try {
+                for (int i = 0; i < allParams.size(); i++) setParam(countPs, i + 1, allParams[i])
+                // The count query's SELECT has no score expression, so no tsquery score params are bound here
+                ResultSet rs = countPs.executeQuery()
+                try { if (rs.next()) totalCount = rs.getLong(1) } finally { rs.close() }
+            } finally { countPs.close() }
+        }
+
+        // Execute main query — params must follow SQL ? order:
+        // 1. score expression ? (in SELECT clause, comes before WHERE in SQL)
+        // 2. WHERE clause ?s (index names + query params already in allParams)
+        // 3.
LIMIT and OFFSET + List mainParams = [] + if (tq.tsqueryExpr) mainParams.addAll(tq.tsqueryParams) // score SELECT clause: only tsquery-specific params + mainParams.addAll(allParams) // WHERE clause: indexNames then query params + mainParams.add(tq.sizeLimit) + mainParams.add(tq.fromOffset) + + PreparedStatement ps = conn.prepareStatement(mainSql) + try { + for (int i = 0; i < mainParams.size(); i++) setParam(ps, i + 1, mainParams[i]) + ResultSet rs = ps.executeQuery() + try { + List hits = [] + while (rs.next()) { + String docJson = rs.getString("document") + Map source = docJson ? (Map) jsonToObject(docJson) : [:] + String docId = rs.getString("doc_id") + String idxName = unprefixIndexName(rs.getString("index_name")) + String docType = rs.getString("doc_type") + double score = rs.getDouble("_score") + + Map hit = [_index: idxName, _id: docId, _type: docType, + _score: score, _source: source] as Map + + // Add highlights if requested + if (tq.highlightFields && tq.tsqueryExpr) { + Map> highlights = buildHighlights(source, tq) + if (highlights) hit.put("highlight", highlights) + } + + hits.add(hit) + } + + return [hits: [total: [value: totalCount, relation: "eq"], hits: hits], + _shards: [total: 1, successful: 1, failed: 0]] + } finally { rs.close() } + } finally { ps.close() } + } + + /** Query the moqui_logs table directly and return ES-compatible response. */ + private Map searchLogsTable(Map searchMap) { + ElasticQueryTranslator.TranslatedQuery tq = ElasticQueryTranslator.translateSearchMap(searchMap) + + // ── Parse @timestamp range from the original query_string ── + // LogViewer sends queries like: @timestamp:[1740610800000 TO 1741215600000] AND (*) + // The translator strips these, so we extract them here for direct SQL use. 
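+        // Worked example (hypothetical values) of the extraction below: for a raw query_string of
+        //   @timestamp:[1700000000000 TO 1700003600000] AND (connection timeout)
+        // the range regex produces log_timestamp >= / <= conditions from the two epoch-millis
+        // bounds (a bound of * adds no condition), and the stripping logic reduces the remainder
+        // to "connection timeout", which becomes the websearch_to_tsquery('english', ?) text.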
+ String rawQuery = null + Map queryMap = (Map) searchMap?.get("query") + if (queryMap) { + Map qsMap = (Map) queryMap.get("query_string") + if (qsMap) rawQuery = (String) qsMap.get("query") + } + + List conditions = [] + List params = [] + + // Extract @timestamp range: @timestamp:[from TO to] + if (rawQuery) { + java.util.regex.Matcher m = (rawQuery =~ /@timestamp\s*:\s*\[\s*(\*|\d+)\s+TO\s+(\*|\d+)\s*\]/) + if (m.find()) { + String fromVal = m.group(1) + String toVal = m.group(2) + if (fromVal != '*') { + conditions.add("log_timestamp >= ?") + params.add(new java.sql.Timestamp(Long.parseLong(fromVal))) + } + if (toVal != '*') { + conditions.add("log_timestamp <= ?") + params.add(new java.sql.Timestamp(Long.parseLong(toVal))) + } + } + } + + // FTS WHERE clause against message + logger_name columns + // The original query_string may be: "@timestamp:[epoch TO epoch] AND ()" + // The translator's cleanLuceneQuery doesn't fully strip the @timestamp parts, + // leaving residue like "@ AND" as the tsquery text. We need to extract only the + // user's text query portion and apply FTS only if it's meaningful. + String userTextQuery = null + if (rawQuery) { + // Remove the @timestamp range clause and connectors + String stripped = rawQuery.replaceAll(/@timestamp\s*:\s*\[[^\]]*\]/, '') + stripped = stripped.replaceAll(/\bAND\b/, ' ').replaceAll(/\bOR\b/, ' ') + stripped = stripped.replaceAll(/[()]/, ' ').replaceAll(/\s+/, ' ').trim() + // After stripping, if only * or empty remains, it means "match all" — no FTS needed + stripped = stripped.replaceAll(/\*/, '').trim() + if (stripped) userTextQuery = stripped + } + if (userTextQuery) { + conditions.add("to_tsvector('english', coalesce(message, '') || ' ' || coalesce(logger_name, '')) @@ websearch_to_tsquery('english', ?)") + params.add(userTextQuery) + } + + String whereClause = conditions ? 
conditions.join(" AND ") : "TRUE" + + Connection conn = getConnection() + long totalCount = 0L + if (tq.trackTotal) { + PreparedStatement countPs = conn.prepareStatement("SELECT COUNT(*) FROM moqui_logs WHERE ${whereClause}") + try { + for (int i = 0; i < params.size(); i++) setParam(countPs, i + 1, params[i]) + ResultSet rs = countPs.executeQuery() + try { if (rs.next()) totalCount = rs.getLong(1) } finally { rs.close() } + } finally { countPs.close() } + } + + String mainSql = """ + SELECT log_id, log_timestamp, log_level, thread_name, thread_id, thread_priority, + logger_name, message, source_host, user_id, visitor_id, mdc::text, thrown::text + FROM moqui_logs + WHERE ${whereClause} + ORDER BY log_timestamp DESC + LIMIT ? OFFSET ? + """.trim() + + PreparedStatement ps = conn.prepareStatement(mainSql) + try { + int pIdx = 0 + for (int i = 0; i < params.size(); i++) setParam(ps, ++pIdx, params[i]) + ps.setInt(++pIdx, tq.sizeLimit) + ps.setInt(++pIdx, tq.fromOffset) + ResultSet rs = ps.executeQuery() + try { + List hits = [] + while (rs.next()) { + long logId = rs.getLong("log_id") + java.sql.Timestamp ts = rs.getTimestamp("log_timestamp") + Map source = [ + "@timestamp" : ts?.time, + level : rs.getString("log_level"), + thread_name : rs.getString("thread_name"), + thread_id : rs.getLong("thread_id"), + thread_priority : rs.getInt("thread_priority"), + logger_name : rs.getString("logger_name"), + message : rs.getString("message"), + source_host : rs.getString("source_host"), + user_id : rs.getString("user_id"), + visitor_id : rs.getString("visitor_id"), + ] as Map + String mdcStr = rs.getString("mdc") + if (mdcStr) source.put("mdc", jsonToObject(mdcStr)) + String thrownStr = rs.getString("thrown") + if (thrownStr) source.put("thrown", jsonToObject(thrownStr)) + hits.add([_index: "moqui_logs", _id: String.valueOf(logId), + _type: "LogMessage", _score: 1.0, _source: source] as Map) + } + return [hits: [total: [value: totalCount, relation: "eq"], hits: hits], + 
_shards: [total: 1, successful: 1, failed: 0]] + } finally { rs.close() } + } catch (Throwable t) { + logger.error("searchLogsTable error: " + t.message, t) + return [hits: [total: [value: 0, relation: "eq"], hits: []]] + } finally { ps.close() } + } + + @Override + List searchHits(String index, Map searchMap) { + Map result = search(index, searchMap) + return (List) ((Map) result.get("hits")).get("hits") + } + + @Override + Map validateQuery(String index, Map queryMap, boolean explain) { + // Best-effort validation: try to translate the query; if it throws return invalid + try { + ElasticQueryTranslator.QueryResult qr = ElasticQueryTranslator.translateQuery(queryMap ?: [match_all: [:]]) + return null // valid + } catch (Throwable t) { + return [valid: false, error: t.message] + } + } + + @Override + long count(String index, Map countMap) { + Map result = countResponse(index, countMap) + return ((Number) result.get("count"))?.longValue() ?: 0L + } + + @Override + Map countResponse(String index, Map countMap) { + if (!countMap) countMap = [query: [match_all: [:]]] + Map queryMap = (Map) countMap.get("query") + ElasticQueryTranslator.QueryResult qr = queryMap ? ElasticQueryTranslator.translateQuery(queryMap) : new ElasticQueryTranslator.QueryResult() + + List indexNames = resolveIndexNames(index) + if (indexNames.isEmpty()) return [count: 0L] + + String idxPlaceholders = indexNames.collect { "?" 
}.join(", ")
+        String whereClause = "index_name IN (${idxPlaceholders})"
+        List allParams = new ArrayList<>(indexNames)
+
+        if (qr.clause && qr.clause != "TRUE") {
+            whereClause += " AND " + qr.clause
+            allParams.addAll(qr.params)
+        }
+
+        // For tsvector queries the tsquery WHERE-clause param was already added above via qr.params
+
+        Connection conn = getConnection()
+        PreparedStatement ps = conn.prepareStatement("SELECT COUNT(*) FROM moqui_document WHERE ${whereClause}")
+        try {
+            for (int i = 0; i < allParams.size(); i++) setParam(ps, i + 1, allParams[i])
+            ResultSet rs = ps.executeQuery()
+            try {
+                if (rs.next()) return [count: rs.getLong(1)]
+                return [count: 0L]
+            } finally { rs.close() }
+        } finally { ps.close() }
+    }
+
+    // ============================================================
+    // Point-In-Time (PIT) — Keyset-based cursor
+    // ============================================================
+
+    @Override
+    String getPitId(String index, String keepAlive) {
+        // For postgres backend, return a synthetic PIT token: "pg::{indexPrefix}::{timestamp}"
+        // Actual cursor-based paging is left to the caller to handle via search_after
+        return "pg::${indexPrefix}::${System.currentTimeMillis()}"
+    }
+
+    @Override
+    void deletePit(String pitId) {
+        // No-op for postgres backend — cursor is stateless
+    }
+
+    // ============================================================
+    // Raw REST — Not supported on postgres backend
+    // ============================================================
+
+    @Override
+    RestClient.RestResponse call(Method method, String index, String path,
+                                 Map parameters, Object bodyJsonObject) {
+        throw new UnsupportedOperationException(
+                "Raw REST calls (call()) are not supported by PostgresElasticClient for cluster '${clusterName}'. 
" + + "Use the higher-level API methods instead, or switch to type=elastic for this cluster.") + } + + @Override + Future callFuture(Method method, String index, String path, + Map parameters, Object bodyJsonObject) { + throw new UnsupportedOperationException( + "Raw REST calls (callFuture()) are not supported by PostgresElasticClient for cluster '${clusterName}'.") + } + + @Override + RestClient makeRestClient(Method method, String index, String path, Map parameters) { + throw new UnsupportedOperationException( + "makeRestClient() is not supported by PostgresElasticClient for cluster '${clusterName}'.") + } + + // ============================================================ + // DataDocument helpers + // ============================================================ + + @Override + void checkCreateDataDocumentIndexes(String indexName) { + if (!indexName) return + // If any document type for this index exists, we consider the index ready + if (indexExists(indexName)) return + EntityList ddList = ecfi.entityFacade.find("moqui.entity.document.DataDocument") + .condition("indexName", indexName).disableAuthz().list() + for (EntityValue dd in ddList) { + storeIndexAndMapping(indexName, dd) + } + } + + @Override + void checkCreateDataDocumentIndex(String dataDocumentId) { + String idxName = ElasticFacadeImpl.ddIdToEsIndex(dataDocumentId) + String prefixed = prefixIndexName(idxName) + if (indexExists(prefixed)) return + + EntityValue dd = ecfi.entityFacade.find("moqui.entity.document.DataDocument") + .condition("dataDocumentId", dataDocumentId).disableAuthz().one() + if (dd == null) throw new BaseException("No DataDocument found with ID [${dataDocumentId}]") + storeIndexAndMapping((String) dd.getNoCheckSimple("indexName"), dd) + } + + @Override + void putDataDocumentMappings(String indexName) { + EntityList ddList = ecfi.entityFacade.find("moqui.entity.document.DataDocument") + .condition("indexName", indexName).disableAuthz().list() + for (EntityValue dd in ddList) 
storeIndexAndMapping(indexName, dd) + } + + @Override + void verifyDataDocumentIndexes(List documentList) { + Set indexNames = new HashSet<>() + Set dataDocumentIds = new HashSet<>() + for (Map doc in documentList) { + Object idxObj = doc.get("_index") + Object typeObj = doc.get("_type") + if (idxObj) indexNames.add((String) idxObj) + if (typeObj) dataDocumentIds.add((String) typeObj) + } + for (String idxName in indexNames) checkCreateDataDocumentIndexes(idxName) + for (String ddId in dataDocumentIds) checkCreateDataDocumentIndex(ddId) + } + + @Override + void bulkIndexDataDocument(List documentList) { + if (!documentList) return + + Connection conn = getConnection() + PreparedStatement ps = conn.prepareStatement(""" + INSERT INTO moqui_document (index_name, doc_id, doc_type, document, content_text, content_tsv, updated_stamp) + VALUES (?, ?, ?, ?::jsonb, ?, to_tsvector('english', COALESCE(?, '')), now()) + ON CONFLICT (index_name, doc_id) DO UPDATE SET + doc_type = EXCLUDED.doc_type, + document = EXCLUDED.document, + content_text = EXCLUDED.content_text, + content_tsv = EXCLUDED.content_tsv, + updated_stamp = EXCLUDED.updated_stamp + """.trim()) + try { + int batchCount = 0 + for (Map document in documentList) { + String _index = (String) document.get("_index") + String _type = (String) document.get("_type") + String _id = (String) document.get("_id") + + if (!_id) { + logger.warn("bulkIndexDataDocument: skipping document with null _id (type=${_type})") + continue + } + + // Derive the actual index name from _type (dataDocumentId) + String esIndexName = ElasticFacadeImpl.ddIdToEsIndex(_type ?: "unknown") + String prefixedIndex = prefixIndexName(esIndexName) + + // Clone document stripping metadata keys + Map cleanDoc = new LinkedHashMap<>(document) + for (String key in DOC_META_KEYS) cleanDoc.remove(key) + + String docJson = objectToJson(cleanDoc) + String contentText = extractContentText(cleanDoc) + + ps.setString(1, prefixedIndex) + ps.setString(2, _id) + if 
(_type) ps.setString(3, _type) else ps.setNull(3, Types.VARCHAR) + ps.setString(4, docJson) + ps.setString(5, contentText) + ps.setString(6, contentText) + ps.addBatch() + batchCount++ + + if (batchCount >= 500) { + ps.executeBatch() + batchCount = 0 + } + } + if (batchCount > 0) ps.executeBatch() + logger.info("bulkIndexDataDocument: indexed ${documentList.size()} documents") + } finally { ps.close() } + } + + // ============================================================ + // JSON serialization + // ============================================================ + + @Override String objectToJson(Object obj) { return ElasticFacadeImpl.objectToJson(obj) } + @Override Object jsonToObject(String json) { return ElasticFacadeImpl.jsonToObject(json) } + + // ============================================================ + // Index prefixing helpers + // ============================================================ + + String prefixIndexName(String index) { + if (!index) return index + index = index.trim() + if (!index) return index + // Handle comma-separated index names + return index.split(",").collect { String it -> + it = it.trim() + return (indexPrefix && !it.startsWith(indexPrefix)) ? indexPrefix + it : it + }.join(",") + } + + String unprefixIndexName(String index) { + if (!index || !indexPrefix) return index + index = index.trim() + return index.split(",").collect { String it -> + it = it.trim() + return (indexPrefix && it.startsWith(indexPrefix)) ? 
it.substring(indexPrefix.length()) : it + }.join(",") + } + + // ============================================================ + // Private helpers + // ============================================================ + + private void upsertDocument(String prefixedIndex, String docId, String docType, String docJson, String contentText) { + Connection conn = getConnection() + PreparedStatement ps = conn.prepareStatement(""" + INSERT INTO moqui_document (index_name, doc_id, doc_type, document, content_text, content_tsv, updated_stamp) + VALUES (?, ?, ?, ?::jsonb, ?, to_tsvector('english', COALESCE(?, '')), now()) + ON CONFLICT (index_name, doc_id) DO UPDATE SET + doc_type = EXCLUDED.doc_type, + document = EXCLUDED.document, + content_text = EXCLUDED.content_text, + content_tsv = EXCLUDED.content_tsv, + updated_stamp = EXCLUDED.updated_stamp + """.trim()) + try { + ps.setString(1, prefixedIndex) + ps.setString(2, docId ?: UUID.randomUUID().toString()) + if (docType) ps.setString(3, docType) else ps.setNull(3, Types.VARCHAR) + ps.setString(4, docJson) + ps.setString(5, contentText) + ps.setString(6, contentText) + ps.executeUpdate() + } finally { ps.close() } + } + + /** + * Extract all text values from a document Map recursively for full-text search indexing. + * Concatenates strings from all levels of the document, space-separated. 
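+ * For example (illustrative values only), a document like [productName: "Blue Widget", dims: [color: "blue"], qty: 3]
+ * yields "Blue Widget blue 3": nested maps and lists are walked depth-first, numbers and booleans
+ * are included as text, and only very long *Id keys (20 characters or more) are skipped as noise.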
+ */ + static String extractContentText(Map document) { + if (document == null || document.isEmpty()) return "" + StringBuilder sb = new StringBuilder() + extractTextFromValue(document, sb) + return sb.toString().trim() + } + + private static void extractTextFromValue(Object value, StringBuilder sb) { + if (value instanceof Map) { + for (Map.Entry entry in ((Map) value).entrySet()) { + Object k = entry.key + Object v = entry.value + // Skip non-text keys that are typically IDs or metadata + if (k instanceof String) { + String key = (String) k + // Include most fields except pure-ID fields that add noise + if (!key.endsWith("Id") || key.length() < 20) { + extractTextFromValue(v, sb) + } + } else { + extractTextFromValue(v, sb) + } + } + } else if (value instanceof List) { + for (Object item in (List) value) extractTextFromValue(item, sb) + } else if (value instanceof String) { + String s = (String) value + if (s.length() > 0) { + if (sb.length() > 0) sb.append(' ') + sb.append(s) + } + } else if (value instanceof Number || value instanceof Boolean) { + // Include numbers as text (useful for numeric search) + if (sb.length() > 0) sb.append(' ') + sb.append(value.toString()) + } + // Skip null, Timestamp, Date etc. — not useful for full-text + } + + /** + * Store index and mapping information for a DataDocument. + * This is the postgres equivalent of ElasticClientImpl.storeIndexAndMapping(). 
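+ * It builds the mapping via ElasticFacadeImpl.makeElasticSearchMapping(), optionally runs it through
+ * the DataDocument's manualMappingServiceName service, then creates the index (with the unprefixed
+ * indexName as its alias) or updates the mapping if the index already exists.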
+ */ + protected synchronized void storeIndexAndMapping(String indexName, EntityValue dd) { + String dataDocumentId = (String) dd.getNoCheckSimple("dataDocumentId") + String manualMappingServiceName = (String) dd.getNoCheckSimple("manualMappingServiceName") + String esIndexName = ElasticFacadeImpl.ddIdToEsIndex(dataDocumentId) + String prefixedIndex = prefixIndexName(esIndexName) + + boolean hasIndex = indexExists(prefixedIndex) + Map docMapping = ElasticFacadeImpl.makeElasticSearchMapping(dataDocumentId, ecfi) + Map settings = null + + if (manualMappingServiceName) { + Map serviceResult = ecfi.service.sync().name(manualMappingServiceName) + .parameter("mapping", docMapping).call() + docMapping = (Map) serviceResult.get("mapping") + settings = (Map) serviceResult.get("settings") + } + + if (hasIndex) { + logger.info("PostgresElasticClient: updating mapping for index '${prefixedIndex}' (${dataDocumentId})") + putMapping(prefixedIndex, docMapping) + } else { + logger.info("PostgresElasticClient: creating index '${prefixedIndex}' for DataDocument '${dataDocumentId}' with alias '${indexName}'") + createIndex(prefixedIndex, dataDocumentId, docMapping, indexName, settings) + } + } + + /** + * Resolve comma-separated or single index name(s) to prefixed list. + * Also handles cases where the index might be an alias. 
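+ * For example, resolveIndexNames("orders,invoices") prefixes each name and, when a prefixed name is
+ * registered as an alias in moqui_search_index, expands it to the concrete index names; otherwise
+ * the prefixed name is used as-is.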
+ */ + private List<String> resolveIndexNames(String index) { + if (!index) { + // Query all documents if no index specified + return getAllIndexNames() + } + List<String> result = [] + for (String part in index.split(",")) { + String trimmed = part.trim() + if (!trimmed) continue + String prefixed = prefixIndexName(trimmed) + // Try alias resolution first — an alias maps to one or more concrete indices + List<String> aliasResolved = resolveAlias(prefixed) + if (aliasResolved) { + result.addAll(aliasResolved) + } else { + // Not an alias — treat as an exact index name + result.add(prefixed) + } + } + return result + } + + private List<String> resolveAlias(String alias) { + Connection conn = getConnection() + PreparedStatement ps = conn.prepareStatement( + "SELECT index_name FROM moqui_search_index WHERE alias_name = ?") + try { + ps.setString(1, alias) + ResultSet rs = ps.executeQuery() + try { + List<String> names = [] + while (rs.next()) names.add(rs.getString("index_name")) + return names + } finally { rs.close() } + } finally { ps.close() } + } + + private List<String> getAllIndexNames() { + Connection conn = getConnection() + PreparedStatement ps = conn.prepareStatement("SELECT index_name FROM moqui_search_index") + try { + ResultSet rs = ps.executeQuery() + try { + List<String> names = [] + while (rs.next()) names.add(rs.getString("index_name")) + return names + } finally { rs.close() } + } finally { ps.close() } + } + + /** Build the SELECT expression for _score (depends on whether we have a tsquery or not) */ + private static String buildScoreSelect(ElasticQueryTranslator.TranslatedQuery tq) { + if (tq.tsqueryExpr) { + return "ts_rank_cd(content_tsv, ${tq.tsqueryExpr})" + } + return "1.0::float" + } + + /** Build highlight maps for a search result document */ + private static Map<String, List<String>> buildHighlights(Map source, ElasticQueryTranslator.TranslatedQuery tq) { + // highlights are expensive — compute them in-memory with simple string matching + // For full ts_headline support, would need a separate SQL call per field + 
Map<String, List<String>> highlights = [:] + if (!tq.tsqueryExpr || !tq.highlightFields) return highlights + // Extract simple query terms for basic highlighting + String firstParam = tq.params ? tq.params[0]?.toString() : null + if (!firstParam) return highlights + for (String field in tq.highlightFields.keySet()) { + Object fieldVal = source.get(field) + if (fieldVal instanceof String) { + String text = (String) fieldVal + // Simple highlight: surround occurrences of query terms with <em> tags (Elasticsearch's default highlight markup) + String highlighted = simpleHighlight(text, firstParam) + if (highlighted != text) highlights.put(field, [highlighted]) + } + } + return highlights + } + + private static String simpleHighlight(String text, String query) { + if (!text || !query) return text + // Extract individual terms from query (ignore operators and quoted phrases for simplicity) + List<String> terms = query.replaceAll(/["()+\-]/, ' ').split(/\s+/).findAll { it.length() > 2 } as List<String> + String result = text + for (String term in terms) { + result = result.replaceAll("(?i)\\b${java.util.regex.Pattern.quote(term)}\\b", "<em>\$0</em>") + } + return result + } + + private static void setParam(PreparedStatement ps, int idx, Object value) { + if (value == null) { + ps.setNull(idx, Types.VARCHAR) + } else if (value instanceof String) { + ps.setString(idx, (String) value) + } else if (value instanceof Long || value instanceof Integer) { + ps.setLong(idx, ((Number) value).longValue()) + } else if (value instanceof Double || value instanceof Float || value instanceof BigDecimal) { + ps.setDouble(idx, ((Number) value).doubleValue()) + } else if (value instanceof Timestamp) { + ps.setTimestamp(idx, (Timestamp) value) + } else { + ps.setString(idx, value.toString()) + } + } +} diff --git a/framework/src/main/groovy/org/moqui/impl/util/PostgresSearchLogger.groovy b/framework/src/main/groovy/org/moqui/impl/util/PostgresSearchLogger.groovy new file mode 100644 index 000000000..d300442e9 --- /dev/null +++ 
b/framework/src/main/groovy/org/moqui/impl/util/PostgresSearchLogger.groovy @@ -0,0 +1,244 @@ +/* + * This software is in the public domain under CC0 1.0 Universal plus a + * Grant of Patent License. + * + * To the extent possible under law, the author(s) have dedicated all + * copyright and related and neighboring rights to this software to the + * public domain worldwide. This software is distributed without any + * warranty. + * + * You should have received a copy of the CC0 Public Domain Dedication + * along with this software (see the LICENSE.md file). If not, see + * <http://creativecommons.org/publicdomain/zero/1.0/>. + */ +package org.moqui.impl.util + +import groovy.transform.CompileStatic +import org.apache.logging.log4j.Level +import org.apache.logging.log4j.core.LogEvent +import org.apache.logging.log4j.util.ReadOnlyStringMap +import org.moqui.BaseArtifactException +import org.moqui.context.ArtifactExecutionInfo +import org.moqui.context.LogEventSubscriber +import org.moqui.impl.context.ExecutionContextFactoryImpl +import org.moqui.impl.context.PostgresElasticClient +import org.slf4j.Logger +import org.slf4j.LoggerFactory + +import java.sql.Connection +import java.sql.PreparedStatement +import java.sql.Timestamp +import java.sql.Types +import java.util.concurrent.ConcurrentLinkedQueue +import java.util.concurrent.atomic.AtomicBoolean + +/** + * PostgreSQL-backed application log appender (replaces ElasticSearchLogger for postgres clusters). + * + * Consumes LogEvent objects from a queue and batch-inserts them into the moqui_logs table. + * The queue is flushed every 3 seconds by a scheduled task, identical to ElasticSearchLogger behaviour. 
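+ *
+ * Flow: a Log4j LogEvent is converted to a Map by PgLogSubscriber.process() and queued (bounded at
+ * QUEUE_LIMIT); PgLogQueueFlush then drains up to 200 messages per batch into moqui_logs with a
+ * single JDBC batch insert.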
+ */ +@CompileStatic +class PostgresSearchLogger { + private final static Logger logger = LoggerFactory.getLogger(PostgresSearchLogger.class) + + final static int QUEUE_LIMIT = 16384 + + private final PostgresElasticClient pgClient + private final ExecutionContextFactoryImpl ecfi + + private boolean initialized = false + private volatile boolean disabled = false + + final ConcurrentLinkedQueue<Map> logMessageQueue = new ConcurrentLinkedQueue<>() + final AtomicBoolean flushRunning = new AtomicBoolean(false) + + protected PgLogSubscriber subscriber = null + + PostgresSearchLogger(PostgresElasticClient pgClient, ExecutionContextFactoryImpl ecfi) { + this.pgClient = pgClient + this.ecfi = ecfi + init() + } + + void init() { + // moqui_logs table is created by PostgresElasticClient.initSchema() — no extra setup needed + + // Schedule flush every 3 seconds (same cadence as ElasticSearchLogger) + PgLogQueueFlush flushTask = new PgLogQueueFlush(this) + ecfi.scheduleAtFixedRate(flushTask, 10, 3) + + subscriber = new PgLogSubscriber(this) + ecfi.registerLogEventSubscriber(subscriber) + + initialized = true + logger.info("PostgresSearchLogger initialized for cluster '${pgClient.clusterName}'") + } + + void destroy() { disabled = true } + + boolean isInitialized() { return initialized } + + // ============================================================ + // Log subscriber — mirrors ElasticSearchSubscriber + // ============================================================ + + static class PgLogSubscriber implements LogEventSubscriber { + private final PostgresSearchLogger pgLogger + private final InetAddress localAddr = InetAddress.getLocalHost() + + PgLogSubscriber(PostgresSearchLogger pgLogger) { this.pgLogger = pgLogger } + + @Override + void process(LogEvent event) { + if (pgLogger.disabled) return + // Suppress DEBUG / TRACE (same rule as ElasticSearchLogger) + if (Level.DEBUG.is(event.level) || Level.TRACE.is(event.level)) return + // Back-pressure: if the queue is full, drop the new event (it is simply not enqueued, rather than evicting the oldest)
+ if (pgLogger.logMessageQueue.size() >= QUEUE_LIMIT) return + + Map msgMap = [ + '@timestamp' : event.timeMillis, + level : event.level.toString(), + thread_name : event.threadName, + thread_id : event.threadId, + thread_priority: event.threadPriority, + logger_name : event.loggerName, + message : event.message?.formattedMessage, + source_host : localAddr.hostName + ] as Map + + ReadOnlyStringMap contextData = event.contextData + if (contextData != null && contextData.size() > 0) { + Map mdcMap = new HashMap<>(contextData.toMap()) + String userId = mdcMap.remove("moqui_userId") + String visitorId = mdcMap.remove("moqui_visitorId") + if (userId) msgMap.put("user_id", userId) + if (visitorId) msgMap.put("visitor_id", visitorId) + if (mdcMap.size() > 0) msgMap.put("mdc", mdcMap) + } + Throwable thrown = event.thrown + if (thrown != null) msgMap.put("thrown", ElasticSearchLogger.ElasticSearchSubscriber.makeThrowableMap(thrown)) + + pgLogger.logMessageQueue.add(msgMap) + } + } + + // ============================================================ + // Scheduled flush task — drains queue into moqui_logs via JDBC + // ============================================================ + + static class PgLogQueueFlush implements Runnable { + private final static int MAX_BATCH = 200 + + private final PostgresSearchLogger pgLogger + + PgLogQueueFlush(PostgresSearchLogger pgLogger) { this.pgLogger = pgLogger } + + @Override + void run() { + if (!pgLogger.flushRunning.compareAndSet(false, true)) return + try { + while (pgLogger.logMessageQueue.size() > 0) { flushQueue() } + } finally { + pgLogger.flushRunning.set(false) + } + } + + void flushQueue() { + final ConcurrentLinkedQueue<Map> queue = pgLogger.logMessageQueue + List<Map> batch = new ArrayList<>(MAX_BATCH) + long lastTs = 0L + int sameCount = 0 + + while (batch.size() < MAX_BATCH) { + Map msg = queue.poll() + if (msg == null) break + // Ensure unique timestamps (same as ES logger behaviour) + long ts
= (msg.get("@timestamp") as long) ?: System.currentTimeMillis() + if (ts == lastTs) { + sameCount++ + ts += sameCount + msg.put("@timestamp", ts) + } else { + lastTs = ts + sameCount = 0 + } + batch.add(msg) + } + + if (batch.isEmpty()) return + + int retries = 3 + while (retries-- > 0) { + try { + writeBatch(batch) + return + } catch (Throwable t) { + System.out.println("PostgresSearchLogger: error writing log batch, retries left ${retries}: ${t}") + if (retries == 0) System.out.println("PostgresSearchLogger: dropping ${batch.size()} log records after repeated failures") + } + } + } + + private void writeBatch(List batch) { + boolean txStarted = pgLogger.ecfi.transactionFacade.begin(null) + try { + Connection conn = pgLogger.ecfi.entityFacade.getConnection(pgLogger.pgClient.datasourceGroup) + PreparedStatement ps = conn.prepareStatement(""" + INSERT INTO moqui_logs (log_timestamp, log_level, thread_name, thread_id, thread_priority, + logger_name, message, source_host, user_id, visitor_id, mdc, thrown) + VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?::jsonb, ?::jsonb) + """.trim()) + try { + for (Map msg in batch) { + // Timestamp: stored as epoch_millis long in the map + long tsMillis = (msg.get("@timestamp") as long) ?: System.currentTimeMillis() + ps.setTimestamp(1, new Timestamp(tsMillis)) + setStr(ps, 2, msg.get("level") as String) + setStr(ps, 3, msg.get("thread_name") as String) + setLong(ps, 4, msg.get("thread_id") as Long) + setInt(ps, 5, msg.get("thread_priority") as Integer) + setStr(ps, 6, msg.get("logger_name") as String) + setStr(ps, 7, msg.get("message") as String) + setStr(ps, 8, msg.get("source_host") as String) + setStr(ps, 9, msg.get("user_id") as String) + setStr(ps, 10, msg.get("visitor_id") as String) + // mdc and thrown as JSONB + Object mdcObj = msg.get("mdc") + setJsonb(ps, 11, mdcObj) + Object thrownObj = msg.get("thrown") + setJsonb(ps, 12, thrownObj) + ps.addBatch() + } + ps.executeBatch() + } finally { ps.close() } + 
pgLogger.ecfi.transactionFacade.commit(txStarted) + } catch (Throwable t) { + pgLogger.ecfi.transactionFacade.rollback(txStarted, "Error writing log batch to moqui_logs", t) + throw t + } + } + + private static void setStr(PreparedStatement ps, int i, String v) { + if (v == null) ps.setNull(i, Types.VARCHAR) else ps.setString(i, v) + } + private static void setLong(PreparedStatement ps, int i, Long v) { + if (v == null) ps.setNull(i, Types.BIGINT) else ps.setLong(i, v) + } + private static void setInt(PreparedStatement ps, int i, Integer v) { + if (v == null) ps.setNull(i, Types.INTEGER) else ps.setInt(i, v) + } + private static void setJsonb(PreparedStatement ps, int i, Object v) { + if (v == null) { + ps.setNull(i, Types.OTHER) + } else { + try { + ps.setString(i, PostgresElasticClient.jacksonMapper.writeValueAsString(v)) + } catch (Throwable t) { + ps.setNull(i, Types.OTHER) + } + } + } + } +} diff --git a/framework/src/main/groovy/org/moqui/impl/webapp/ElasticRequestLogFilter.groovy b/framework/src/main/groovy/org/moqui/impl/webapp/ElasticRequestLogFilter.groovy index 7878ba6eb..b3b9cc507 100644 --- a/framework/src/main/groovy/org/moqui/impl/webapp/ElasticRequestLogFilter.groovy +++ b/framework/src/main/groovy/org/moqui/impl/webapp/ElasticRequestLogFilter.groovy @@ -15,7 +15,7 @@ package org.moqui.impl.webapp import groovy.transform.CompileStatic import org.moqui.Moqui -import org.moqui.impl.context.ElasticFacadeImpl.ElasticClientImpl +import org.moqui.context.ElasticFacade import org.moqui.impl.context.ExecutionContextFactoryImpl import org.moqui.impl.context.UserFacadeImpl import org.slf4j.Logger @@ -38,7 +38,7 @@ class ElasticRequestLogFilter implements Filter { protected FilterConfig filterConfig = null protected ExecutionContextFactoryImpl ecfi = null - private ElasticClientImpl elasticClient = null + private ElasticFacade.ElasticClient elasticClient = null private boolean disabled = false final ConcurrentLinkedQueue requestLogQueue = new 
ConcurrentLinkedQueue<>() @@ -51,15 +51,11 @@ class ElasticRequestLogFilter implements Filter { ecfi = (ExecutionContextFactoryImpl) filterConfig.servletContext.getAttribute("executionContextFactory") if (ecfi == null) ecfi = (ExecutionContextFactoryImpl) Moqui.executionContextFactory - elasticClient = (ElasticClientImpl) (ecfi.elasticFacade.getClient("logger") ?: ecfi.elasticFacade.getDefault()) + elasticClient = (ecfi.elasticFacade.getClient("logger") ?: ecfi.elasticFacade.getDefault()) if (elasticClient == null) { logger.error("In ElasticRequestLogFilter init could not find ElasticClient with name logger or default, not starting") return } - if (elasticClient.esVersionUnder7) { - logger.warn("ElasticClient ${elasticClient.clusterName} has version under 7.0, not starting ElasticRequestLogFilter") - return - } // check for index exists, create with mapping for log doc if not try { diff --git a/framework/src/main/resources/MoquiDefaultConf.xml b/framework/src/main/resources/MoquiDefaultConf.xml index cb515655f..69dcfecd7 100644 --- a/framework/src/main/resources/MoquiDefaultConf.xml +++ b/framework/src/main/resources/MoquiDefaultConf.xml @@ -420,8 +420,13 @@ + + - + @@ -523,6 +531,7 @@ + diff --git a/framework/src/test/groovy/MoquiSuite.groovy b/framework/src/test/groovy/MoquiSuite.groovy index 3f3b7ec0b..76c3a0990 100644 --- a/framework/src/test/groovy/MoquiSuite.groovy +++ b/framework/src/test/groovy/MoquiSuite.groovy @@ -24,7 +24,8 @@ import org.moqui.Moqui @SelectClasses([ CacheFacadeTests.class, EntityCrud.class, EntityFindTests.class, EntityNoSqlCrud.class, L10nFacadeTests.class, MessageFacadeTests.class, ResourceFacadeTests.class, ServiceCrudImplicit.class, ServiceFacadeTests.class, SubSelectTests.class, TransactionFacadeTests.class, UserFacadeTests.class, - SystemScreenRenderTests.class, ToolsRestApiTests.class, ToolsScreenRenderTests.class]) + SystemScreenRenderTests.class, ToolsRestApiTests.class, ToolsScreenRenderTests.class, + 
PostgresSearchTranslatorTests.class, PostgresElasticClientTests.class]) class MoquiSuite { @AfterAll static void destroyMoqui() { diff --git a/framework/src/test/groovy/PostgresElasticClientTests.groovy b/framework/src/test/groovy/PostgresElasticClientTests.groovy new file mode 100644 index 000000000..bad902fab --- /dev/null +++ b/framework/src/test/groovy/PostgresElasticClientTests.groovy @@ -0,0 +1,701 @@ +/* + * This software is in the public domain under CC0 1.0 Universal plus a + * Grant of Patent License. + * + * Integration tests for PostgresElasticClient and PostgresSearchLogger. + * + * Requires a running PostgreSQL database configured in Moqui (transactional datasource). + * Bootstraps a Moqui EC directly — no web server needed. + * + * Run with: ./gradlew :framework:test --tests PostgresElasticClientTests + */ + +import org.moqui.Moqui +import org.moqui.context.ExecutionContext +import org.moqui.context.ElasticFacade +import org.moqui.impl.context.PostgresElasticClient +import org.moqui.impl.context.ExecutionContextFactoryImpl +import org.moqui.util.MNode +import org.junit.jupiter.api.AfterAll +import org.junit.jupiter.api.AfterEach +import org.junit.jupiter.api.BeforeAll +import org.junit.jupiter.api.BeforeEach +import org.junit.jupiter.api.DisplayName +import org.junit.jupiter.api.Test +import org.junit.jupiter.api.Assertions + +/** + * Integration tests for the PostgreSQL-backed ElasticClient. + * + * Spins up a real PostgresElasticClient against the configured datasource and exercises + * all major operations: schema init, createIndex, index, get, search, update, delete, + * bulk operations, and query translation. 
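+ *
+ * Each test creates its own suffixed index (for example pg_test_documents_crud), exercises it, and
+ * drops it in a finally block so the tests are order-independent against a shared database.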
+ */ +class PostgresElasticClientTests { + + static ExecutionContext ec + static PostgresElasticClient pgClient + static final String TEST_INDEX = "pg_test_documents" + static final String TEST_PREFIX = "_test_pg_" + + @BeforeAll + static void startMoqui() { + // Init Moqui pointing at the test database + ec = Moqui.getExecutionContext() + ExecutionContextFactoryImpl ecfi = (ExecutionContextFactoryImpl) ec.factory + + // Create a test PostgresElasticClient directly via a manually-built MNode + MNode clusterNode = new MNode("cluster", [name: "test-pg", type: "postgres", + url: "transactional", "index-prefix": TEST_PREFIX]) + pgClient = new PostgresElasticClient(clusterNode, ecfi) + } + + @AfterAll + static void stopMoqui() { + // Clean up test data + try { + if (pgClient != null) { + [ + "pg_test_documents_create", "pg_test_documents_with_alias", + "pg_test_documents_delete_test", "pg_test_documents_put_mapping", + "pg_test_documents_crud", "pg_test_documents_get_source", + "pg_test_documents_multi_get", "pg_test_documents_get_null", + "pg_test_documents_update", "pg_test_documents_doc_delete", + "pg_test_documents_bulkindex", "pg_test_documents_bulk_actions", + "pg_test_documents_search_all", "pg_test_documents_search_term", + "pg_test_documents_search_terms", "pg_test_documents_search_fts", + "pg_test_documents_search_bool", "pg_test_documents_search_page", + "pg_test_documents_searchhits", "pg_test_documents_count", + "pg_test_documents_countresp", "pg_test_documents_dbq", + "test_data_doc" + ].each { idx -> + try { pgClient.deleteIndex(idx) } catch (Throwable ignored) {} + } + } + } catch (Throwable ignored) {} + try { if (ec != null) ec.destroy() } catch (Throwable ignored) {} + } + + @BeforeEach + void beginTx() { + ec.transaction.begin(null) + ec.artifactExecution.disableAuthz() + } + + @AfterEach + void commitTx() { + ec.artifactExecution.enableAuthz() + if (ec.transaction.isTransactionInPlace()) ec.transaction.commit() + } + + // 
============================ + // clusterName / location + // ============================ + + @Test + @DisplayName("clusterName returns configured name") + void clusterName_returnsConfiguredName() { + Assertions.assertEquals("test-pg", pgClient.clusterName) + } + + @Test + @DisplayName("clusterLocation contains postgres keyword") + void clusterLocation_containsPostgresKeyword() { + Assertions.assertTrue(pgClient.clusterLocation.contains("postgres")) + } + + @Test + @DisplayName("getServerInfo returns postgres version map") + void getServerInfo_returnsPostgresVersionMap() { + Map info = pgClient.getServerInfo() + Assertions.assertNotNull(info) + Assertions.assertEquals("postgres", info.get("cluster_name")) + Object version = info.get("version") + Assertions.assertNotNull(version) + } + + // ============================ + // Index management + // ============================ + + @Test + @DisplayName("createIndex and indexExists") + void createIndex_andIndexExists() { + String idx = TEST_INDEX + "_create" + try { + pgClient.createIndex(idx, [properties: [name: [type: "text"]]], null) + Assertions.assertTrue(pgClient.indexExists(idx), "index should exist after createIndex") + } finally { + if (pgClient.indexExists(idx)) pgClient.deleteIndex(idx) + } + } + + @Test + @DisplayName("indexExists returns false for non-existent index") + void indexExists_returnsFalseForNonExistent() { + Assertions.assertFalse(pgClient.indexExists("pg_test_does_not_exist_xyz")) + } + + @Test + @DisplayName("createIndex with alias and aliasExists") + void createIndex_withAlias_aliasExists() { + String idx = TEST_INDEX + "_with_alias" + String alias = idx + "_alias" + try { + pgClient.createIndex(idx, null, alias) + Assertions.assertTrue(pgClient.indexExists(idx), "index should exist") + Assertions.assertTrue(pgClient.aliasExists(alias), "alias should exist") + } finally { + if (pgClient.indexExists(idx)) pgClient.deleteIndex(idx) + } + } + + @Test + @DisplayName("deleteIndex removes documents 
and metadata") + void deleteIndex_removesDocumentsAndMetadata() { + String idx = TEST_INDEX + "_delete_test" + pgClient.createIndex(idx, null, null) + pgClient.index(idx, "doc001", [name: "To Be Deleted"]) + pgClient.deleteIndex(idx) + Assertions.assertFalse(pgClient.indexExists(idx), "index should not exist after delete") + Assertions.assertNull(pgClient.get(idx, "doc001"), "document should not exist after index delete") + } + + @Test + @DisplayName("putMapping updates mapping on existing index") + void putMapping_updatesMappingOnExistingIndex() { + String idx = TEST_INDEX + "_put_mapping" + try { + pgClient.createIndex(idx, [properties: [name: [type: "text"]]], null) + // putMapping should succeed without throwing + pgClient.putMapping(idx, [properties: [name: [type: "text"], email: [type: "keyword"]]]) + } finally { + if (pgClient.indexExists(idx)) pgClient.deleteIndex(idx) + } + } + + // ============================ + // Document CRUD + // ============================ + + @Test + @DisplayName("index and get roundtrip") + void index_andGetRoundtrip() { + String idx = TEST_INDEX + "_crud" + try { + pgClient.createIndex(idx, null, null) + Map doc = [name: "Alice", email: "alice@example.com", age: 30] + pgClient.index(idx, "alice001", doc) + Map retrieved = pgClient.get(idx, "alice001") + Assertions.assertNotNull(retrieved, "document should be retrievable") + Map source = (Map) retrieved.get("_source") + Assertions.assertNotNull(source) + Assertions.assertEquals("Alice", source.get("name")) + Assertions.assertEquals("alice@example.com", source.get("email")) + } finally { + if (pgClient.indexExists(idx)) pgClient.deleteIndex(idx) + } + } + + @Test + @DisplayName("getSource returns only source map") + void getSource_returnsOnlySourceMap() { + String idx = TEST_INDEX + "_get_source" + try { + pgClient.createIndex(idx, null, null) + pgClient.index(idx, "bob001", [name: "Bob", role: "admin"]) + Map source = pgClient.getSource(idx, "bob001") + 
Assertions.assertNotNull(source) + Assertions.assertEquals("Bob", source.get("name")) + } finally { + if (pgClient.indexExists(idx)) pgClient.deleteIndex(idx) + } + } + + @Test + @DisplayName("get with _idList returns multiple docs") + void get_withIdList_returnsMultipleDocs() { + String idx = TEST_INDEX + "_multi_get" + try { + pgClient.createIndex(idx, null, null) + pgClient.index(idx, "m1", [name: "Multi1"]) + pgClient.index(idx, "m2", [name: "Multi2"]) + pgClient.index(idx, "m3", [name: "Multi3"]) + List docs = pgClient.get(idx, ["m1", "m3"]) + Assertions.assertEquals(2, docs.size()) + Set ids = docs.collect { (String) it.get("_id") }.toSet() + Assertions.assertTrue(ids.contains("m1")) + Assertions.assertTrue(ids.contains("m3")) + } finally { + if (pgClient.indexExists(idx)) pgClient.deleteIndex(idx) + } + } + + @Test + @DisplayName("get returns null for non-existent document") + void get_returnsNullForNonExistent() { + String idx = TEST_INDEX + "_get_null" + try { + pgClient.createIndex(idx, null, null) + Map result = pgClient.get(idx, "doesnotexist") + Assertions.assertNull(result) + } finally { + if (pgClient.indexExists(idx)) pgClient.deleteIndex(idx) + } + } + + @Test + @DisplayName("update merges fields") + void update_mergesFields() { + String idx = TEST_INDEX + "_update" + try { + pgClient.createIndex(idx, null, null) + pgClient.index(idx, "upd001", [name: "Carol", status: "active"]) + pgClient.update(idx, "upd001", [status: "inactive", rating: 5]) + Map source = pgClient.getSource(idx, "upd001") + Assertions.assertNotNull(source) + // original name field should still be there (merge) + Assertions.assertEquals("Carol", source.get("name")) + } finally { + if (pgClient.indexExists(idx)) pgClient.deleteIndex(idx) + } + } + + @Test + @DisplayName("delete removes document") + void delete_removesDocument() { + String idx = TEST_INDEX + "_doc_delete" + try { + pgClient.createIndex(idx, null, null) + pgClient.index(idx, "del001", [name: "To Delete"]) + 
Assertions.assertNotNull(pgClient.get(idx, "del001"), "doc should exist before delete") + pgClient.delete(idx, "del001") + Assertions.assertNull(pgClient.get(idx, "del001"), "doc should not exist after delete") + } finally { + if (pgClient.indexExists(idx)) pgClient.deleteIndex(idx) + } + } + + // ============================ + // Bulk operations + // ============================ + + @Test + @DisplayName("bulkIndex inserts multiple documents") + void bulkIndex_insertsMultipleDocuments() { + String idx = TEST_INDEX + "_bulkindex" + try { + pgClient.createIndex(idx, null, null) + List docs = (1..20).collect { i -> + [productId: "PROD_${String.format('%03d', i)}", name: "Product ${i}", + category: i % 2 == 0 ? "category_a" : "category_b", price: i * 10.0] + } + pgClient.bulkIndex(idx, "productId", docs) + + // Verify a sampling + Map p1source = pgClient.getSource(idx, "PROD_001") + Assertions.assertNotNull(p1source, "PROD_001 should exist after bulkIndex") + Assertions.assertEquals("Product 1", p1source.get("name")) + + Map p10source = pgClient.getSource(idx, "PROD_010") + Assertions.assertNotNull(p10source, "PROD_010 should exist") + } finally { + if (pgClient.indexExists(idx)) pgClient.deleteIndex(idx) + } + } + + @Test + @DisplayName("bulk with index and delete actions") + void bulk_withIndexAndDeleteActions() { + String idx = TEST_INDEX + "_bulk_actions" + try { + pgClient.createIndex(idx, null, null) + // First insert two docs + pgClient.index(idx, "ba001", [name: "To Keep"]) + pgClient.index(idx, "ba002", [name: "To Delete"]) + + // Bulk: index new + delete existing + List actions = [ + [index: [_index: "${TEST_PREFIX}${idx}", _id: "ba003"]], + [name: "New Doc"], + [delete: [_index: "${TEST_PREFIX}${idx}", _id: "ba002"]], + [:] // no source for delete + ] + pgClient.bulk(idx, actions) + + Assertions.assertNotNull(pgClient.get(idx, "ba001"), "ba001 should still exist") + // ba002 delete and ba003 index via raw bulk - verify ba001 still there + } finally { + if 
(pgClient.indexExists(idx)) pgClient.deleteIndex(idx) + } + } + + // ============================ + // Search + // ============================ + + @Test + @DisplayName("search - match_all returns all documents") + void search_matchAllReturnsAll() { + String idx = TEST_INDEX + "_search_all" + try { + pgClient.createIndex(idx, null, null) + pgClient.index(idx, "s1", [name: "Alpha Widget"]) + pgClient.index(idx, "s2", [name: "Beta Gadget"]) + pgClient.index(idx, "s3", [name: "Gamma Tool"]) + + Map result = pgClient.search(idx, [query: [match_all: [:]], size: 20]) + Assertions.assertNotNull(result) + Map hits = (Map) result.get("hits") + Assertions.assertNotNull(hits) + Map total = (Map) hits.get("total") + Assertions.assertNotNull(total) + long totalValue = ((Number) total.get("value")).longValue() + Assertions.assertEquals(3L, totalValue, "should return 3 documents") + + List hitList = (List) hits.get("hits") + Assertions.assertEquals(3, hitList.size()) + } finally { + if (pgClient.indexExists(idx)) pgClient.deleteIndex(idx) + } + } + + @Test + @DisplayName("search - term query filters documents") + void search_termQueryFilters() { + String idx = TEST_INDEX + "_search_term" + try { + pgClient.createIndex(idx, null, null) + pgClient.index(idx, "t1", [status: "ACTIVE", name: "Active Widget"]) + pgClient.index(idx, "t2", [status: "INACTIVE", name: "Inactive Gadget"]) + pgClient.index(idx, "t3", [status: "ACTIVE", name: "Active Tool"]) + + Map result = pgClient.search(idx, [ + query: [term: [status: "ACTIVE"]], + size: 20, + track_total_hits: true + ]) + Map hits = (Map) result.get("hits") + Map total = (Map) hits.get("total") + long totalValue = ((Number) total.get("value")).longValue() + Assertions.assertEquals(2L, totalValue, "only ACTIVE docs should be returned") + } finally { + if (pgClient.indexExists(idx)) pgClient.deleteIndex(idx) + } + } + + @Test + @DisplayName("search - terms query with IN") + void search_termsQueryWithIn() { + String idx = TEST_INDEX + 
"_search_terms" + try { + pgClient.createIndex(idx, null, null) + pgClient.index(idx, "tm1", [cat: "a"]) + pgClient.index(idx, "tm2", [cat: "b"]) + pgClient.index(idx, "tm3", [cat: "c"]) + + Map result = pgClient.search(idx, [ + query: [terms: [cat: ["a", "c"]]], + size: 10, + track_total_hits: true + ]) + Map total = (Map) ((Map) result.get("hits")).get("total") + Assertions.assertEquals(2L, ((Number) total.get("value")).longValue()) + } finally { + if (pgClient.indexExists(idx)) pgClient.deleteIndex(idx) + } + } + + @Test + @DisplayName("search - full-text query_string") + void search_fullTextQueryString() { + String idx = TEST_INDEX + "_search_fts" + try { + pgClient.createIndex(idx, null, null) + pgClient.index(idx, "fts1", [description: "The quick brown fox jumps over the lazy dog"]) + pgClient.index(idx, "fts2", [description: "A completely unrelated document about databases"]) + pgClient.index(idx, "fts3", [description: "Another fox story about running quickly"]) + + Map result = pgClient.search(idx, [ + query: [query_string: [query: "fox", lenient: true]], + size: 10, + track_total_hits: true + ]) + Map total = (Map) ((Map) result.get("hits")).get("total") + long count = ((Number) total.get("value")).longValue() + Assertions.assertTrue(count >= 2, "should find at least 2 fox documents, found ${count}") + } finally { + if (pgClient.indexExists(idx)) pgClient.deleteIndex(idx) + } + } + + @Test + @DisplayName("search - bool must and must_not") + void search_boolMustAndMustNot() { + String idx = TEST_INDEX + "_search_bool" + try { + pgClient.createIndex(idx, null, null) + pgClient.index(idx, "b1", [type: "order", status: "placed"]) + pgClient.index(idx, "b2", [type: "order", status: "cancelled"]) + pgClient.index(idx, "b3", [type: "invoice", status: "placed"]) + + Map result = pgClient.search(idx, [ + query: [bool: [ + must: [[term: [type: "order"]]], + must_not: [[term: [status: "cancelled"]]] + ]], + size: 10, + track_total_hits: true + ]) + Map total = (Map) 
((Map) result.get("hits")).get("total") + long count = ((Number) total.get("value")).longValue() + Assertions.assertEquals(1L, count, "only non-cancelled orders should match") + List hitList = (List) ((Map) result.get("hits")).get("hits") + Assertions.assertEquals("b1", hitList[0].get("_id")) + } finally { + if (pgClient.indexExists(idx)) pgClient.deleteIndex(idx) + } + } + + @Test + @DisplayName("search - pagination from and size") + void search_paginationFromAndSize() { + String idx = TEST_INDEX + "_search_page" + try { + pgClient.createIndex(idx, null, null) + (1..10).each { i -> + pgClient.index(idx, "p${i}", [name: "Doc ${i}", seq: i]) + } + + // Get page 2 (items 5-9 of 10, 5 per page) + Map result = pgClient.search(idx, [ + query: [match_all: [:]], + from: 5, size: 5, + track_total_hits: true + ]) + Map hits = (Map) result.get("hits") + long totalValue = ((Number) ((Map) hits.get("total")).get("value")).longValue() + List hitList = (List) hits.get("hits") + Assertions.assertEquals(10L, totalValue, "total should reflect all 10 docs") + Assertions.assertEquals(5, hitList.size(), "page size should be 5") + } finally { + if (pgClient.indexExists(idx)) pgClient.deleteIndex(idx) + } + } + + @Test + @DisplayName("searchHits returns list directly") + void searchHits_returnsListDirectly() { + String idx = TEST_INDEX + "_searchhits" + try { + pgClient.createIndex(idx, null, null) + pgClient.index(idx, "sh1", [x: 1]) + pgClient.index(idx, "sh2", [x: 2]) + List hits = pgClient.searchHits(idx, [query: [match_all: [:]], size: 10]) + Assertions.assertNotNull(hits) + Assertions.assertEquals(2, hits.size()) + } finally { + if (pgClient.indexExists(idx)) pgClient.deleteIndex(idx) + } + } + + // ============================ + // Count + // ============================ + + @Test + @DisplayName("count returns correct total") + void count_returnsCorrectTotal() { + String idx = TEST_INDEX + "_count" + try { + pgClient.createIndex(idx, null, null) + (1..7).each { i -> 
pgClient.index(idx, "c${i}", [seq: i]) } + + long cnt = pgClient.count(idx, [query: [match_all: [:]]]) + Assertions.assertEquals(7L, cnt) + } finally { + if (pgClient.indexExists(idx)) pgClient.deleteIndex(idx) + } + } + + @Test + @DisplayName("countResponse returns map with count key") + void countResponse_returnsMapWithCountKey() { + String idx = TEST_INDEX + "_countresp" + try { + pgClient.createIndex(idx, null, null) + pgClient.index(idx, "cr1", [v: 1]) + pgClient.index(idx, "cr2", [v: 2]) + Map resp = pgClient.countResponse(idx, [query: [match_all: [:]]]) + Assertions.assertNotNull(resp) + Assertions.assertTrue(resp.containsKey("count")) + Assertions.assertEquals(2L, ((Number) resp.get("count")).longValue()) + } finally { + if (pgClient.indexExists(idx)) pgClient.deleteIndex(idx) + } + } + + // ============================ + // deleteByQuery + // ============================ + + @Test + @DisplayName("deleteByQuery removes matching documents") + void deleteByQuery_removesMatchingDocuments() { + String idx = TEST_INDEX + "_dbq" + try { + pgClient.createIndex(idx, null, null) + pgClient.index(idx, "dbq1", [status: "STALE"]) + pgClient.index(idx, "dbq2", [status: "STALE"]) + pgClient.index(idx, "dbq3", [status: "KEEP"]) + + Integer deleted = pgClient.deleteByQuery(idx, [term: [status: "STALE"]]) + Assertions.assertNotNull(deleted) + Assertions.assertEquals(2, deleted.intValue()) + Assertions.assertNull(pgClient.get(idx, "dbq1"), "STALE doc should be deleted") + Assertions.assertNotNull(pgClient.get(idx, "dbq3"), "KEEP doc should remain") + } finally { + if (pgClient.indexExists(idx)) pgClient.deleteIndex(idx) + } + } + + // ============================ + // PIT (stateless for postgres) + // ============================ + + @Test + @DisplayName("getPitId returns non-null token") + void getPitId_returnsNonNullToken() { + String pit = pgClient.getPitId(TEST_INDEX, "1m") + Assertions.assertNotNull(pit) + Assertions.assertTrue(pit.startsWith("pg::")) + } + + @Test + 
@DisplayName("deletePit is a no-op") + void deletePit_isNoOp() { + // Should not throw + pgClient.deletePit("pg::test::12345") + } + + // ============================ + // validateQuery + // ============================ + + @Test + @DisplayName("validateQuery returns null for valid query") + void validateQuery_returnsNullForValidQuery() { + Map result = pgClient.validateQuery(TEST_INDEX, [term: [status: "active"]], false) + Assertions.assertNull(result, "valid query should return null") + } + + // ============================ + // bulkIndexDataDocument + // ============================ + + @Test + @DisplayName("bulkIndexDataDocument strips metadata and indexes documents") + void bulkIndexDataDocument_stripsMetadataAndIndexes() { + List docs = [ + [_index: "test", _type: "TestDataDoc", _id: "tdd001", + _timestamp: System.currentTimeMillis(), productId: "P001", name: "Test Product One"], + [_index: "test", _type: "TestDataDoc", _id: "tdd002", + _timestamp: System.currentTimeMillis(), productId: "P002", name: "Test Product Two"], + ] + try { + pgClient.bulkIndexDataDocument(docs) + + // Documents should be in 'test_data_doc' index (ddIdToEsIndex("TestDataDoc")) + String expectedIdx = "test_data_doc" + Map source1 = pgClient.getSource(expectedIdx, "tdd001") + Assertions.assertNotNull(source1, "tdd001 should be indexed") + Assertions.assertEquals("Test Product One", source1.get("name")) + // Metadata should be stripped + Assertions.assertFalse(source1.containsKey("_index"), "_index should be stripped") + Assertions.assertFalse(source1.containsKey("_type"), "_type should be stripped") + Assertions.assertFalse(source1.containsKey("_id"), "_id should be stripped") + } finally { + // cleanup + try { pgClient.deleteIndex("test_data_doc") } catch (Throwable ignored) {} + } + } + + // ============================ + // extractContentText + // ============================ + + @Test + @DisplayName("extractContentText collects all string values") + void 
extractContentText_collectsAllStringValues() { + Map doc = [ + name: "John Doe", + status: "active", + address: [city: "Atlanta", state: "GA"], + tags: ["enterprise", "premium"], + amount: 299.99 + ] + String text = PostgresElasticClient.extractContentText(doc) + Assertions.assertNotNull(text) + Assertions.assertTrue(text.contains("John Doe"), "should include name") + Assertions.assertTrue(text.contains("active"), "should include status") + Assertions.assertTrue(text.contains("Atlanta"), "should include nested city") + Assertions.assertTrue(text.contains("enterprise"), "should include list items") + } + + @Test + @DisplayName("extractContentText on empty map returns empty string") + void extractContentText_emptyMapReturnsEmpty() { + String text = PostgresElasticClient.extractContentText([:]) + Assertions.assertEquals("", text) + } + + @Test + @DisplayName("extractContentText on null returns empty string") + void extractContentText_nullReturnsEmpty() { + String text = PostgresElasticClient.extractContentText(null) + Assertions.assertEquals("", text) + } + + // ============================ + // JSON serialization + // ============================ + + @Test + @DisplayName("objectToJson and jsonToObject roundtrip") + void objectToJson_andJsonToObject_roundtrip() { + Map original = [name: "Test", values: [1, 2, 3], nested: [key: "value"]] + String json = pgClient.objectToJson(original) + Assertions.assertNotNull(json) + Map roundtripped = (Map) pgClient.jsonToObject(json) + Assertions.assertEquals("Test", roundtripped.get("name")) + Assertions.assertEquals([1, 2, 3], roundtripped.get("values") as List) + } + + // ============================ + // Unsupported REST operations throw + // ============================ + + @Test + @DisplayName("call throws UnsupportedOperationException") + void call_throwsUnsupportedOperation() { + Assertions.assertThrows(UnsupportedOperationException.class, { + pgClient.call(org.moqui.util.RestClient.Method.GET, TEST_INDEX, "/", null, null) 
+ }) + } + + @Test + @DisplayName("callFuture throws UnsupportedOperationException") + void callFuture_throwsUnsupportedOperation() { + Assertions.assertThrows(UnsupportedOperationException.class, { + pgClient.callFuture(org.moqui.util.RestClient.Method.GET, TEST_INDEX, "/", null, null) + }) + } + + @Test + @DisplayName("makeRestClient throws UnsupportedOperationException") + void makeRestClient_throwsUnsupportedOperation() { + Assertions.assertThrows(UnsupportedOperationException.class, { + pgClient.makeRestClient(org.moqui.util.RestClient.Method.GET, TEST_INDEX, "/", null) + }) + } +} diff --git a/framework/src/test/groovy/PostgresSearchSuite.groovy b/framework/src/test/groovy/PostgresSearchSuite.groovy new file mode 100644 index 000000000..5d7a0f8cf --- /dev/null +++ b/framework/src/test/groovy/PostgresSearchSuite.groovy @@ -0,0 +1,24 @@ +/* + * This software is in the public domain under CC0 1.0 Universal plus a + * Grant of Patent License. + */ + +import org.junit.jupiter.api.AfterAll +import org.junit.platform.suite.api.SelectClasses +import org.junit.platform.suite.api.Suite +import org.moqui.Moqui + +/** + * JUnit Platform Suite for PostgreSQL search backend tests. 
+ * + * Run all: ./gradlew :framework:test --tests PostgresSearchSuite + * Unit only: ./gradlew :framework:test --tests PostgresSearchTranslatorTests (requires MoquiSuite or separate runs) + */ +@Suite +@SelectClasses([PostgresSearchTranslatorTests.class, PostgresElasticClientTests.class]) +class PostgresSearchSuite { + @AfterAll + static void destroyMoqui() { + Moqui.destroyActiveExecutionContextFactory() + } +} diff --git a/framework/src/test/groovy/PostgresSearchTranslatorTests.groovy b/framework/src/test/groovy/PostgresSearchTranslatorTests.groovy new file mode 100644 index 000000000..060b66384 --- /dev/null +++ b/framework/src/test/groovy/PostgresSearchTranslatorTests.groovy @@ -0,0 +1,478 @@ +/* + * This software is in the public domain under CC0 1.0 Universal plus a + * Grant of Patent License. + * + * Unit tests for ElasticQueryTranslator — no database connection required. + * Tests that ES Query DSL is correctly translated to PostgreSQL SQL. + */ +import org.junit.jupiter.api.DisplayName +import org.junit.jupiter.api.Test +import org.junit.jupiter.api.Assertions +import org.moqui.impl.context.ElasticQueryTranslator +import org.moqui.impl.context.ElasticQueryTranslator.TranslatedQuery +import org.moqui.impl.context.ElasticQueryTranslator.QueryResult + +class PostgresSearchTranslatorTests { + + // ============================================================ + // TranslatedQuery / translateSearchMap + // ============================================================ + + @Test + @DisplayName("translateSearchMap - pagination defaults") + void translateSearchMap_paginationDefaults() { + TranslatedQuery tq = ElasticQueryTranslator.translateSearchMap([:]) + Assertions.assertEquals(0, tq.fromOffset, "default from should be 0") + Assertions.assertEquals(20, tq.sizeLimit, "default size should be 20") + Assertions.assertEquals("TRUE", tq.whereClause) + } + + @Test + @DisplayName("translateSearchMap - explicit pagination") + void translateSearchMap_explicitPagination() { 
+ TranslatedQuery tq = ElasticQueryTranslator.translateSearchMap([from: 40, size: 10]) + Assertions.assertEquals(40, tq.fromOffset) + Assertions.assertEquals(10, tq.sizeLimit) + } + + @Test + @DisplayName("translateSearchMap - query_string sets tsqueryExpr") + void translateSearchMap_queryStringSetstsqueryExpr() { + TranslatedQuery tq = ElasticQueryTranslator.translateSearchMap([ + query: [query_string: [query: "hello world", lenient: true]] + ]) + Assertions.assertNotNull(tq.tsqueryExpr, "tsqueryExpr should be set for query_string") + Assertions.assertTrue(tq.whereClause.contains("content_tsv"), "WHERE should use content_tsv") + } + + @Test + @DisplayName("translateSearchMap - sort spec") + void translateSearchMap_sortSpec() { + TranslatedQuery tq = ElasticQueryTranslator.translateSearchMap([ + sort: [[postDate: [order: "desc"]]] + ]) + Assertions.assertNotNull(tq.orderBy, "orderBy should be set") + Assertions.assertTrue(tq.orderBy.contains("DESC"), "orderBy should be DESC") + } + + @Test + @DisplayName("translateSearchMap - highlight fields extracted") + void translateSearchMap_highlightFieldsExtracted() { + TranslatedQuery tq = ElasticQueryTranslator.translateSearchMap([ + query: [match_all: [:]], + highlight: [fields: [title: [:], description: [:]]] + ]) + Assertions.assertTrue(tq.highlightFields.containsKey("title")) + Assertions.assertTrue(tq.highlightFields.containsKey("description")) + } + + // ============================================================ + // match_all + // ============================================================ + + @Test + @DisplayName("match_all translates to TRUE") + void matchAll_translatesTrue() { + QueryResult qr = ElasticQueryTranslator.translateQuery([match_all: [:]]) + Assertions.assertEquals("TRUE", qr.clause) + Assertions.assertTrue(qr.params.isEmpty()) + } + + // ============================================================ + // query_string + // ============================================================ + + @Test + 
@DisplayName("query_string translates to websearch_to_tsquery with tsqueryExpr") + void queryString_translatesWebsearchToTsquery() { + QueryResult qr = ElasticQueryTranslator.translateQuery([query_string: [query: "moqui framework"]]) + Assertions.assertTrue(qr.clause.contains("content_tsv"), "clause should use content_tsv") + Assertions.assertTrue(qr.clause.contains("@@"), "clause should have @@ operator") + Assertions.assertNotNull(qr.tsqueryExpr, "should produce tsqueryExpr for scoring") + Assertions.assertFalse(qr.params.isEmpty(), "should have at least one param for the query string") + } + + @Test + @DisplayName("query_string wildcard stripped for tsquery") + void queryString_wildcardStripped() { + QueryResult qr = ElasticQueryTranslator.translateQuery([query_string: [query: "moqui*"]]) + // Wildcard queries are cleaned to simple word for websearch_to_tsquery + Assertions.assertTrue(qr.clause.contains("content_tsv")) + } + + // ============================================================ + // term + // ============================================================ + + @Test + @DisplayName("term translates to field = value") + void term_translatesEquality() { + QueryResult qr = ElasticQueryTranslator.translateQuery([term: [status: "ACTIVE"]]) + Assertions.assertTrue(qr.clause.contains("->>'status'"), "should use ->>'status'") + Assertions.assertTrue(qr.clause.contains("="), "should use equality") + Assertions.assertEquals(1, qr.params.size()) + Assertions.assertEquals("ACTIVE", qr.params[0]) + } + + @Test + @DisplayName("term on _id translates to doc_id equality") + void term_onIdFieldUsesDocId() { + QueryResult qr = ElasticQueryTranslator.translateQuery([term: ["_id": "TEST_001"]]) + Assertions.assertTrue(qr.clause.contains("doc_id"), "should use doc_id for _id field") + Assertions.assertEquals("TEST_001", qr.params[0]) + } + + @Test + @DisplayName("term on nested field path uses JSONB path access") + void term_nestedFieldPath() { + QueryResult qr = 
ElasticQueryTranslator.translateQuery([term: ["address.city": "Atlanta"]]) + Assertions.assertTrue(qr.clause.contains("document->'address'->>'city'"), "should use nested path") + Assertions.assertEquals("Atlanta", qr.params[0]) + } + + // ============================================================ + // terms + // ============================================================ + + @Test + @DisplayName("terms translates to IN clause") + void terms_translatesInClause() { + QueryResult qr = ElasticQueryTranslator.translateQuery([terms: [statusId: ["ACTIVE", "PENDING", "DRAFT"]]]) + Assertions.assertTrue(qr.clause.contains("IN"), "should use IN operator") + Assertions.assertEquals(3, qr.params.size()) + Assertions.assertTrue(qr.params.containsAll(["ACTIVE", "PENDING", "DRAFT"])) + } + + @Test + @DisplayName("terms with empty list translates to FALSE") + void terms_emptyListTranslatesFalse() { + QueryResult qr = ElasticQueryTranslator.translateQuery([terms: [statusId: []]]) + Assertions.assertEquals("FALSE", qr.clause) + } + + // ============================================================ + // range + // ============================================================ + + @Test + @DisplayName("range with gte and lte") + void range_gteAndLte() { + QueryResult qr = ElasticQueryTranslator.translateQuery([range: [orderDate: [gte: "2024-01-01", lte: "2024-12-31"]]]) + Assertions.assertTrue(qr.clause.contains(">="), "should have >=") + Assertions.assertTrue(qr.clause.contains("<="), "should have <=") + Assertions.assertEquals(2, qr.params.size()) + } + + @Test + @DisplayName("range with gt only") + void range_gtOnly() { + QueryResult qr = ElasticQueryTranslator.translateQuery([range: [amount: [gt: "100"]]]) + Assertions.assertTrue(qr.clause.contains(">"), "should have >") + Assertions.assertFalse(qr.clause.contains(">="), "should not have >= if only gt") + Assertions.assertEquals(1, qr.params.size()) + Assertions.assertEquals("100", qr.params[0]) + } + + @Test + @DisplayName("range 
on date field gets timestamptz cast") + void range_dateFieldGetsTimestamptzCast() { + QueryResult qr = ElasticQueryTranslator.translateQuery([range: [orderDate: [gte: "2024-01-01"]]]) + Assertions.assertTrue(qr.clause.contains("::timestamptz"), "date fields should cast to timestamptz") + } + + @Test + @DisplayName("range on amount field gets numeric cast") + void range_amountFieldGetsNumericCast() { + QueryResult qr = ElasticQueryTranslator.translateQuery([range: [grandTotal: [gte: "0"]]]) + Assertions.assertTrue(qr.clause.contains("::numeric"), "amount fields should cast to numeric") + } + + // ============================================================ + // exists + // ============================================================ + + @Test + @DisplayName("exists translates to JSONB document ? field") + void exists_translatesJsonbHasKey() { + QueryResult qr = ElasticQueryTranslator.translateQuery([exists: [field: "email"]]) + Assertions.assertTrue(qr.clause.contains("document ?"), "should use JSONB ? 
operator") + Assertions.assertTrue(qr.clause.contains("email")) + } + + // ============================================================ + // bool + // ============================================================ + + @Test + @DisplayName("bool must translates to AND") + void boolMust_translatesAnd() { + QueryResult qr = ElasticQueryTranslator.translateQuery([bool: [must: [ + [term: [type: "ORDER"]], + [term: [status: "PLACED"]] + ]]]) + Assertions.assertTrue(qr.clause.contains("AND"), "must should generate AND") + Assertions.assertEquals(2, qr.params.size()) + } + + @Test + @DisplayName("bool should translates to OR") + void boolShould_translatesOr() { + QueryResult qr = ElasticQueryTranslator.translateQuery([bool: [should: [ + [term: [status: "PLACED"]], + [term: [status: "SHIPPED"]] + ]]]) + Assertions.assertTrue(qr.clause.contains("OR"), "should should generate OR") + } + + @Test + @DisplayName("bool must_not translates to NOT") + void boolMustNot_translatesNot() { + QueryResult qr = ElasticQueryTranslator.translateQuery([bool: [must_not: [ + [term: [status: "CANCELLED"]] + ]]]) + Assertions.assertTrue(qr.clause.toUpperCase().contains("NOT"), "must_not should generate NOT") + } + + @Test + @DisplayName("bool filter translates same as must") + void boolFilter_translatesSameAsMust() { + QueryResult qr = ElasticQueryTranslator.translateQuery([bool: [filter: [ + [term: [tenantId: "DEMO"]] + ]]]) + Assertions.assertTrue(qr.clause.contains("->>'tenantId'"), "filter should translate term like must") + } + + @Test + @DisplayName("bool combined must and must_not") + void boolCombinedMustAndMustNot() { + QueryResult qr = ElasticQueryTranslator.translateQuery([bool: [ + must: [[term: [type: "ORDER"]]], + must_not: [[term: [status: "CANCELLED"]]] + ]]) + Assertions.assertTrue(qr.clause.contains("AND"), "should have AND") + Assertions.assertTrue(qr.clause.toUpperCase().contains("NOT"), "should have NOT") + Assertions.assertEquals(2, qr.params.size()) + } + + // 
============================================================ + // nested + // ============================================================ + + @Test + @DisplayName("nested query translates to EXISTS subquery with jsonb_array_elements") + void nested_translatesExistsSubquery() { + QueryResult qr = ElasticQueryTranslator.translateQuery([nested: [ + path: "orderItems", + query: [term: ["orderItems.productId": "PROD_001"]] + ]]) + Assertions.assertTrue(qr.clause.contains("EXISTS"), "nested should use EXISTS subquery") + Assertions.assertTrue(qr.clause.contains("jsonb_array_elements"), "nested should use jsonb_array_elements") + Assertions.assertTrue(qr.clause.contains("orderItems"), "should reference the nested path") + } + + // ============================================================ + // ids + // ============================================================ + + @Test + @DisplayName("ids query translates to doc_id IN list") + void ids_translatesDocIdIn() { + QueryResult qr = ElasticQueryTranslator.translateQuery([ids: [values: ["ID1", "ID2", "ID3"]]]) + Assertions.assertTrue(qr.clause.contains("doc_id IN"), "ids should use doc_id IN") + Assertions.assertEquals(3, qr.params.size()) + } + + @Test + @DisplayName("ids with empty values translates to FALSE") + void ids_emptyValuesTranslatesFalse() { + QueryResult qr = ElasticQueryTranslator.translateQuery([ids: [values: []]]) + Assertions.assertEquals("FALSE", qr.clause) + } + + // ============================================================ + // translateSort + // ============================================================ + + @Test + @DisplayName("translateSort - map with order desc") + void translateSort_mapWithOrderDesc() { + String result = ElasticQueryTranslator.translateSort([[orderDate: [order: "desc"]]]) + Assertions.assertNotNull(result) + Assertions.assertTrue(result.contains("DESC")) + Assertions.assertTrue(result.contains("orderDate")) + } + + @Test + @DisplayName("translateSort - score special field") + 
void translateSort_scoreSpecialField() { + String result = ElasticQueryTranslator.translateSort([[_score: [order: "desc"]]]) + Assertions.assertNotNull(result) + Assertions.assertTrue(result.contains("_score"), "should produce _score sort entry") + } + + @Test + @DisplayName("translateSort - keyword suffix stripped") + void translateSort_keywordSuffixStripped() { + String result = ElasticQueryTranslator.translateSort([["statusId.keyword": [order: "asc"]]]) + Assertions.assertFalse(result.contains(".keyword"), ".keyword suffix should be stripped") + Assertions.assertTrue(result.contains("statusId")) + } + + @Test + @DisplayName("translateSort - string shorthand") + void translateSort_stringShorthand() { + String result = ElasticQueryTranslator.translateSort(["orderDate"]) + Assertions.assertNotNull(result) + Assertions.assertTrue(result.contains("orderDate")) + } + + @Test + @DisplayName("translateSort - null returns null") + void translateSort_nullReturnsNull() { + String result = ElasticQueryTranslator.translateSort(null) + Assertions.assertNull(result) + } + + @Test + @DisplayName("translateSort - empty list returns null") + void translateSort_emptyListReturnsNull() { + String result = ElasticQueryTranslator.translateSort([]) + Assertions.assertNull(result) + } + + // ============================================================ + // Security: sanitizeFieldName — SQL injection prevention + // ============================================================ + + @Test + @DisplayName("sanitizeFieldName rejects SQL injection via single quote") + void sanitizeFieldName_rejectsSingleQuote() { + Assertions.assertThrows(IllegalArgumentException) { + ElasticQueryTranslator.sanitizeFieldName("status'; DROP TABLE users;--") + } + } + + @Test + @DisplayName("sanitizeFieldName rejects SQL injection via semicolon") + void sanitizeFieldName_rejectsSemicolon() { + Assertions.assertThrows(IllegalArgumentException) { + ElasticQueryTranslator.sanitizeFieldName("field;DELETE FROM 
moqui_search_index") + } + } + + @Test + @DisplayName("sanitizeFieldName rejects parentheses") + void sanitizeFieldName_rejectsParentheses() { + Assertions.assertThrows(IllegalArgumentException) { + ElasticQueryTranslator.sanitizeFieldName("field()OR 1=1") + } + } + + @Test + @DisplayName("sanitizeFieldName rejects double dash comment") + void sanitizeFieldName_rejectsDoubleDash() { + Assertions.assertThrows(IllegalArgumentException) { + ElasticQueryTranslator.sanitizeFieldName("field--comment") + } + } + + @Test + @DisplayName("sanitizeFieldName rejects null field") + void sanitizeFieldName_rejectsNull() { + Assertions.assertThrows(IllegalArgumentException) { + ElasticQueryTranslator.sanitizeFieldName(null) + } + } + + @Test + @DisplayName("sanitizeFieldName rejects empty field") + void sanitizeFieldName_rejectsEmpty() { + Assertions.assertThrows(IllegalArgumentException) { + ElasticQueryTranslator.sanitizeFieldName("") + } + } + + @Test + @DisplayName("sanitizeFieldName rejects spaces") + void sanitizeFieldName_rejectsSpaces() { + Assertions.assertThrows(IllegalArgumentException) { + ElasticQueryTranslator.sanitizeFieldName("field name") + } + } + + @Test + @DisplayName("sanitizeFieldName rejects UNION SELECT injection") + void sanitizeFieldName_rejectsUnionSelect() { + Assertions.assertThrows(IllegalArgumentException) { + ElasticQueryTranslator.sanitizeFieldName("x' UNION SELECT * FROM pg_shadow--") + } + } + + @Test + @DisplayName("sanitizeFieldName accepts valid field names") + void sanitizeFieldName_acceptsValidNames() { + // These should NOT throw + Assertions.assertEquals("statusId", ElasticQueryTranslator.sanitizeFieldName("statusId")) + Assertions.assertEquals("order.items.quantity", ElasticQueryTranslator.sanitizeFieldName("order.items.quantity")) + Assertions.assertEquals("@timestamp", ElasticQueryTranslator.sanitizeFieldName("@timestamp")) + Assertions.assertEquals("field-name", ElasticQueryTranslator.sanitizeFieldName("field-name")) + 
Assertions.assertEquals("_id", ElasticQueryTranslator.sanitizeFieldName("_id")) + } + + @Test + @DisplayName("sanitizeFieldName rejects oversized field name") + void sanitizeFieldName_rejectsOversized() { + String longField = "a" * 257 + Assertions.assertThrows(IllegalArgumentException) { + ElasticQueryTranslator.sanitizeFieldName(longField) + } + } + + @Test + @DisplayName("term query with SQL injection field name is rejected") + void term_sqlInjectionFieldRejected() { + Assertions.assertThrows(IllegalArgumentException) { + ElasticQueryTranslator.translateQuery([term: ["status'; DROP TABLE x;--": "active"]]) + } + } + + @Test + @DisplayName("range query with SQL injection field name is rejected") + void range_sqlInjectionFieldRejected() { + Assertions.assertThrows(IllegalArgumentException) { + ElasticQueryTranslator.translateQuery([range: ["x' OR '1'='1": [gte: "2024-01-01"]]]) + } + } + + @Test + @DisplayName("exists query with SQL injection field name is rejected") + void exists_sqlInjectionFieldRejected() { + Assertions.assertThrows(IllegalArgumentException) { + ElasticQueryTranslator.translateQuery([exists: [field: "x'; DELETE FROM moqui_search_index;--"]]) + } + } + + // ============================================================ + // Full searchMap round-trip + // ============================================================ + + @Test + @DisplayName("full searchMap with bool query and sort") + void fullSearchMap_boolQueryAndSort() { + Map searchMap = [ + from: 0, size: 25, + sort: [[orderDate: [order: "desc"]]], + query: [bool: [ + must: [[term: [statusId: "OrderPlaced"]]], + filter: [[range: [orderDate: [gte: "2024-01-01"]]]] + ]], + highlight: [fields: [productName: [:]]] + ] + TranslatedQuery tq = ElasticQueryTranslator.translateSearchMap(searchMap) + Assertions.assertEquals(0, tq.fromOffset) + Assertions.assertEquals(25, tq.sizeLimit) + Assertions.assertNotNull(tq.orderBy) + Assertions.assertTrue(tq.whereClause.contains("AND")) + 
Assertions.assertTrue(tq.highlightFields.containsKey("productName")) + } +} diff --git a/framework/xsd/moqui-conf-3.xsd b/framework/xsd/moqui-conf-3.xsd index d060a6176..c2fb655a1 100644 --- a/framework/xsd/moqui-conf-3.xsd +++ b/framework/xsd/moqui-conf-3.xsd @@ -594,16 +594,29 @@ along with this software (see the LICENSE.md file). If not, see - + + Backend type for this cluster. Use 'elastic' (default) for ElasticSearch/OpenSearch via HTTP REST. + Use 'postgres' to store and search documents in PostgreSQL using JSONB and tsvector. + When type is 'postgres', url should be the entity datasource group name (e.g. 'transactional'). + + + + + + + + For type=elastic: full URL to ElasticSearch/OpenSearch (e.g. http://localhost:9200). + For type=postgres: the Moqui entity datasource group name to use (e.g. 'transactional'), or omit to use the default transactional group. + - Prefix added to all ES index names just before requests - Must follow ES index name requirements (lower case, etc) - No separator character is added, recommend ending with underscore (_) + Prefix added to all index names just before requests. + For type=elastic: Must follow ES index name requirements (lower case, etc). No separator character is added, recommend ending with underscore (_). + For type=postgres: Used as a prefix to the index_name column value in moqui_document table. - - + For type=elastic only: HTTP connection pool max size. + For type=elastic only: HTTP request queue size.
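The translator tests earlier in this diff pin down a consistent ES-DSL-to-SQL mapping: `term` becomes a JSONB `->>` equality, an empty `terms` list becomes `FALSE`, `_id` maps to the `doc_id` column, dotted paths become nested `->`/`->>` access, `must_not` wraps its clause in `NOT`, and field names are sanitized before ever reaching SQL text. As a rough illustration only — this is a minimal Python sketch of that mapping for a subset of query types, not the project's Groovy `ElasticQueryTranslator`, whose actual API and regex these names do not reflect:

```python
import re

# Hypothetical mini-translator; the regex and helper names are assumptions
# chosen to mirror the behaviors the Groovy tests above assert.
FIELD_NAME_RE = re.compile(r'^[@A-Za-z0-9_][A-Za-z0-9_.\-]*$')

def sanitize_field_name(field):
    # Reject anything that could smuggle SQL: quotes, semicolons,
    # parentheses, spaces, comments -- none match the whitelist regex.
    if not field or len(field) > 256 or not FIELD_NAME_RE.match(field):
        raise ValueError("invalid field name: %r" % (field,))
    return field

def json_path(field):
    # "_id" is the primary-key column; "address.city" becomes
    # document->'address'->>'city' (text extraction on the last segment).
    if field == "_id":
        return "doc_id"
    parts = sanitize_field_name(field).split(".")
    inner = "".join("->'%s'" % p for p in parts[:-1])
    return "document%s->>'%s'" % (inner, parts[-1])

def translate(query):
    # Returns (where_clause, params) using positional ? placeholders,
    # so values never appear in the SQL text itself.
    if "match_all" in query:
        return "TRUE", []
    if "term" in query:
        ((field, value),) = query["term"].items()
        return "%s = ?" % json_path(field), [value]
    if "terms" in query:
        ((field, values),) = query["terms"].items()
        if not values:
            return "FALSE", []  # empty IN list can never match
        placeholders = ", ".join("?" for _ in values)
        return "%s IN (%s)" % (json_path(field), placeholders), list(values)
    if "bool" in query:
        clauses, params = [], []
        # filter is scored differently in ES but translates like must here
        for sub in query["bool"].get("must", []) + query["bool"].get("filter", []):
            c, p = translate(sub)
            clauses.append(c); params.extend(p)
        for sub in query["bool"].get("must_not", []):
            c, p = translate(sub)
            clauses.append("NOT (%s)" % c); params.extend(p)
        return " AND ".join(clauses) or "TRUE", params
    raise ValueError("unsupported query type: %s" % list(query))

clause, params = translate({"bool": {
    "must": [{"term": {"type": "order"}}],
    "must_not": [{"term": {"status": "cancelled"}}]}})
print(clause)   # document->>'type' = ? AND NOT (document->>'status' = ?)
print(params)   # ['order', 'cancelled']
```

The key design point the security tests enforce is visible here: values travel only as bind parameters, and field names — the one thing that must be spliced into SQL text — pass a strict whitelist first, so `"status'; DROP TABLE users;--"` is rejected before any SQL is built.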