This repository contains the source code for the official website as well as the official app of conveniat27, built with Next.js and Payload CMS.
- Core Technologies
- Prerequisites
- Getting Started
- Project Structure
- Key Concepts
- Database Maintenance
- SSH Tunneling / Troubleshooting
- License
- Framework: Next.js (App Router)
- API: tRPC (for type-safe API routes)
- CMS: Payload CMS (Headless, Self-hosted)
- Language: TypeScript (with strict type checking)
- UI: React, shadcn/ui, Tailwind CSS, Headless UI
- Icons: Lucide React
- Database: MongoDB (self-hosted), MinIO (S3-compatible object storage, self-hosted), PostgreSQL (self-hosted)
- PWA: Serwist (for Service Worker management)
- Code Quality: ESLint, Prettier
- Development Environment: Docker (Devcontainer)
Ensure you have the following installed on your system:
- Git
- Docker & Docker Compose
- An IDE that supports Devcontainers (e.g., VS Code with the Dev Containers extension, WebStorm).
- Clone the repository.
- Copy the `.env.example` file to `.env` and fill in the empty values.
- Open the project using the provided devcontainer inside your IDE (VS Code and WebStorm are tested).
- Start developing using the following commands:
```bash
docker compose up --build
```

The above command launches a local development server with hot-reloading enabled. You can open the website at http://localhost:3000.
- Install Dependencies: `pnpm install`
- Start Development Server: `docker compose up --build` (this uses the default `dev` profile defined in `.env`)
- Stop Development Server: `docker compose down`
- Clear Database & Volumes: To completely reset the database and remove Docker volumes (useful for reseeding):

  ```bash
  docker compose down --volumes
  ```

  After running this, you'll need to restart the server with `docker compose up --build` to re-initialize and potentially re-seed the database based on Payload's configuration.
The project includes an optional observability stack (Prometheus, Grafana, Loki, Tempo). To start the project with these tools enabled locally:
```bash
docker compose --profile observability up --build
```

Once the development server is running, you can typically access the Payload CMS admin interface at http://localhost:3000/admin (or your configured admin route).
The project structure is influenced by Next.js App Router conventions and principles from Bulletproof React, emphasizing modularity and maintainability.
```
public/                   # Static assets (images, fonts, etc.)
src/
|
+-- app/                  # Next.js App Router: Layouts, Pages, Route Handlers
|   |-- (entrypoint)/     # Entrypoint for the APP (manually localized)
|   |-- (payload)/        # Routes related to Payload Admin UI
|   |-- (frontend)/       # Routes for the main website frontend / app
|
+-- components/           # Globally shared React components
|
+-- config/               # Global application configurations (e.g., exported env vars)
|
+-- features/             # Feature-based modules (self-contained units of functionality)
|   |-- service-worker/   # Serwist service worker logic
|   |-- payload-cms/      # Payload CMS specific configurations, collections, globals, hooks
|   +-- ...               # Other features
|
+-- hooks/                # Globally shared React hooks
|
+-- lib/                  # Globally shared utility functions, libraries, clients
|
+-- types/                # Globally shared TypeScript types and interfaces
|
+-- utils/                # Globally shared low-level utility functions
```
- Most application logic resides within the `src/features` directory.
- Each sub-directory in `src/features` represents a distinct feature (e.g., `chat`, `map`, `payload-cms`).
- Encapsulation: Code within a feature folder should primarily relate to that specific feature.
- Structure within Features: A feature can internally have its own `components`, `hooks`, `api`, `types`, and `utils` subdirectories, scoped to that feature.
- Import Restrictions: ESLint rules (`import/no-restricted-paths` in `eslint.config.mjs`) enforce unidirectional dependencies:
  - `app` can import from `features` and shared directories (`components`, `hooks`, etc.).
  - `features` cannot import from `app` or shared directories.
  - Features generally should not import directly from other features, promoting loose coupling. Exceptions are explicitly defined (e.g., `payload-cms` and `next-auth` can be imported more broadly).
  - Shared directories (`components`, `hooks`, `lib`, `types`, `utils`) should not import from `app` or `features`.
- Payload CMS Exception: The `payload-cms` feature is central and can be imported by other parts of the application, as it defines the core data structures / content types used throughout the app.
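These import restrictions map onto zones of the `import/no-restricted-paths` rule. The fragment below is a simplified illustration, not the project's exact configuration (the real `eslint.config.mjs` defines more zones and exceptions):

```js
// eslint.config.mjs — illustrative excerpt; zone paths are simplified examples
export default [
  {
    rules: {
      'import/no-restricted-paths': [
        'error',
        {
          zones: [
            // features may not reach back into app
            { target: './src/features', from: './src/app' },
            // shared code may not depend on features
            { target: './src/components', from: './src/features' },
          ],
        },
      ],
    },
  },
];
```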
This structure aids scalability, maintainability, and team collaboration by keeping concerns separated.
A core aspect of this project is that most frontend pages are generated based on data managed within Payload CMS.
- CMS Configuration (`src/features/payload-cms/payload.config.ts`, `src/features/payload-cms/settings`): Defines data structures (Collections, Globals) and their fields. Collections might represent page types, blog posts, etc.
- Routing (`src/app/(frontend)/[locale]/(payload-pages)/[...slugs]/page.tsx`): This dynamic route catches most frontend URL paths.
- Route Resolution: The application resolves the incoming URL (`slugs`) against Collections and Globals defined in Payload CMS (via `src/features/payload-cms/routeResolutionTable.ts`).
- Layout & Component Mapping: Once the corresponding CMS data is found for a URL, a specific page layout (`src/features/payload-cms/page-layouts`) is rendered. Complex CMS fields (like Blocks or Rich Text) are mapped to React components using converters (`src/features/payload-cms/converters`).
To improve performance, calls to Payload CMS are cached server-side using the Next.js `'use cache'` directive. In development mode (`NODE_ENV=development`), the custom cache handler (Redis/FileSystem) is disabled; Next.js uses its default in-memory cache instead.
This application utilizes Serwist (@serwist/next) to implement Service Worker
functionality, enabling PWA features:
- Offline Access: Pre-cached pages (like the `/~offline` page) and potentially other assets allow basic functionality when the user is offline.
- Caching: Improves performance by caching assets and network requests.
- Reliability: Provides a more resilient user experience on flaky networks.
By default, the Service Worker is disabled during local development (`docker compose up`) to prevent caching issues with Hot Module Replacement (HMR).
To enable the Service Worker locally and simulate a production-like environment (including file watching and rebuilding), use the `service-worker` profile:

```bash
docker compose --profile service-worker up --watch --build
```

This uses Docker Compose watch to sync file changes and trigger rebuilds, leveraging the Turbopack file system cache for faster subsequent builds.
Maintaining code quality and consistency is crucial.
The project enforces strict TypeScript settings (`tsconfig.json`), including `strict`, `strictNullChecks`, `noImplicitAny`, `noUncheckedIndexedAccess`, `exactOptionalPropertyTypes`, etc. This helps catch errors at compile time and improves code reliability.
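In `tsconfig.json` these options take roughly the following shape (excerpt only; the project's actual file contains additional settings):

```jsonc
{
  "compilerOptions": {
    "strict": true,
    "strictNullChecks": true,
    "noImplicitAny": true,
    "noUncheckedIndexedAccess": true,
    "exactOptionalPropertyTypes": true
  }
}
```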
- ESLint (`eslint.config.mjs`): Used for identifying and reporting on patterns in JavaScript/TypeScript code. Includes rules from `eslint:recommended`, `typescript-eslint`, `unicorn`, `react-hooks`, `next/core-web-vitals`, and custom rules for conventions and import restrictions.
- Prettier: Used for automatic code formatting to ensure a consistent style. Integrated via `eslint-plugin-prettier`.
- Run Checks (ensure these scripts exist in your `package.json`):

  ```bash
  # Run ESLint checks and fix issues
  pnpm run lint
  ```
As mentioned in the Project Structure section, ESLint rules strictly enforce module boundaries to
maintain a clean and understandable architecture. Path aliases (`@/*`, `@payload-config`) defined in `tsconfig.json` are used for cleaner imports.
- shadcn/ui: Provides beautifully designed, accessible components built on Radix UI and Tailwind CSS. Components are typically copied into the project (`src/components/ui`) rather than installed as a dependency.
- Headless UI: Used for unstyled, accessible UI components providing the underlying logic for elements like modals, dropdowns, etc.
- Lucide React: Provides a wide range of clean and consistent SVG icons.
- Configuration is managed via environment variables. `.env.example` serves as a template listing the required variables.
- Create a `.env` file (copied from `.env.example`) for local development. Never commit `.env` files to Git.
- Populate `.env` with the necessary credentials (database URLs, API keys, secrets, etc.).
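The workflow above can be sketched on a throwaway directory. The variable names below are hypothetical examples, not the project's real variables; the `grep` at the end is a small trick to list values you still need to fill in:

```shell
# Throwaway demo of the .env workflow (variable names are hypothetical)
cd "$(mktemp -d)"
cat > .env.example <<'EOF'
DATABASE_URI=mongodb://localhost:27017/conveniat27
PAYLOAD_SECRET=
EOF
cp .env.example .env             # local copy — never commit this file
grep -E '^[A-Z_]+=$' .env        # lists variables still left to fill in
```

In the real repository you would run `cp .env.example .env` in the project root instead.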
The easiest way to build the project into a production-ready bundle is to use the provided Docker Compose file. This builds the Next.js application and Payload CMS and prepares them for deployment.
```bash
docker compose -f docker-compose.prod.yml up --build
```

However, you can also build the application manually using the following commands. Please ensure that you have deleted `node_modules`, `src/lib/prisma/*`, and `.next` before running the commands to ensure a clean build. Also make sure that you DON'T have any `.env` file in the root of the project, as this will cause issues with the build process.
```bash
# Export environment variables
export $(grep -v '^#' .env | grep '^NEXT_PUBLIC_' | xargs)
export BUILD_TARGET="production"
export NODE_ENV="production"
export DISABLE_SERVICE_WORKER="true" # speeds up build process (optional)
export PRISMA_OUTPUT="src/lib/prisma/client/"

# Install dependencies
pnpm install

# Create build info file
bash create_build_info.sh

# Generate Prisma client
npx prisma generate --no-hints

# Build the Next.js application
pnpm next build
```

To analyze the bundle size of the Next.js application, you can use the `next-bundle-analyzer` package. You can run the following command to analyze the bundle size; it will generate a report and open it in your default browser.
```bash
ANALYZE=true pnpm build
```

We follow a standard Git workflow for managing changes:
- Branching: Create a new branch for each (bigger) feature or bug fix.
  - Use descriptive names (e.g., `feature/new-header`, `bugfix/fix-footer`).
  - For small changes, you can commit directly to the `dev` branch.
  - For larger features, create a feature branch from `dev` and merge it back when complete.
  - You are allowed to force push to your feature branches. However, if multiple developers are collaborating on the same feature branch, always coordinate and communicate with your teammates before force pushing, as it can overwrite others' work. Avoid force pushing to `dev`, and never force push to `main`.
- Pull Requests: When ready, open a pull request (PR) against the `dev` branch.
  - Ensure the PR description is clear about the changes made.
  - Request reviews from team members.
  - Once approved, we merge the PR into `dev`.
- Releases: When ready to deploy, merge `dev` into `main`.
  - We do not use squash merging for releases; instead, we use regular merging to preserve commit history.
  - After every release, we rebase the `dev` branch onto `main` to keep it up to date without introducing merge commits.
  - We may squash merge features into `dev` to keep the history clean.
- Hotfixes: For urgent fixes, create a hotfix branch from `main`, apply the fix, and merge it back into both `main` and `dev`. Hotfix branches should be named like `hotfix/fix-issue`.
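The branching part of this workflow can be sketched in a throwaway repository (branch names below are examples from the conventions above; the commit identity is a dummy):

```shell
set -e
# Throwaway repository to illustrate the branching flow
cd "$(mktemp -d)"
git init -q -b main
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
git switch -q -c dev                  # long-lived integration branch
git switch -q -c feature/new-header   # feature branch created from dev
git branch --show-current             # prints: feature/new-header
```

In the real repository you would branch off the existing `dev` branch and open a PR against it when done.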
The following commands are used to generate and apply migrations to the PostgreSQL database. An in-depth guide on how to use Prisma Migrate can be found in the Prisma documentation.
```bash
###############################################
# Generate and apply migrations to remote databases
###############################################

# 1. Establish SSH tunnel (separate terminal)
pnpm db:tunnel-dev # or pnpm db:tunnel-prod

# 2. Set password (for the remote database)
export DB_PASSWORD=

# 3. Connect via the tunnel on localhost:5433
# (Note: 5433 is used for the tunnel, 5432 is for your local instance)
export CHAT_DATABASE_URL="postgres://conveniat27:$DB_PASSWORD@localhost:5433/conveniat27"

# For konekta
pnpm db:tunnel-konekta
export DB_PASSWORD=
export CHAT_DATABASE_URL="postgres://konekta:$DB_PASSWORD@localhost:5433/konekta"

# Check status
npx prisma migrate diff --from-config-datasource --to-schema prisma/schema.prisma

# Create/apply migrations
npx prisma migrate dev --schema prisma/schema.prisma    # for dev
npx prisma migrate deploy --schema prisma/schema.prisma # for prod
```

If you see warnings about a collation version mismatch (e.g., `The database was created using collation version 2.36, but the operating system provides version 2.41`), you need to update the collation version to match the current OS.
Run the following SQL command against the database:
```sql
ALTER DATABASE conveniat27 REFRESH COLLATION VERSION;
```

SSH into the server and run the following command to execute the SQL statement. If asked for a password for the user `conveniat27`, use the production database password:

```bash
docker run --rm -it --network conveniat_backend-net postgres:17 \
  psql -h conveniat_postgres -U conveniat27 -d conveniat27 -c "ALTER DATABASE conveniat27 REFRESH COLLATION VERSION;"
```

On the development server, use the development database password instead:

```bash
docker run --rm -it --network conveniat-dev_backend-net postgres:17 \
  psql -h conveniat-dev_postgres -U conveniat27 -d conveniat27 -c "ALTER DATABASE conveniat27 REFRESH COLLATION VERSION;"
```

To open an interactive psql shell (instead of running a single command), simply omit the `-c` argument:
Production:
```bash
docker run --rm -it --network conveniat_backend-net postgres:17 \
  psql -h conveniat_postgres -U conveniat27 -d conveniat27
```

Development:

```bash
docker run --rm -it --network conveniat-dev_backend-net postgres:17 \
  psql -h conveniat-dev_postgres -U conveniat27 -d conveniat27
```

To directly connect to the database from your local machine, use the provided script to open an SSH tunnel. This tunnel supports both Postgres (local port 5433) and MongoDB (local port 27018).
The tunnel runs in the foreground and forwards traffic to the remote infrastructure.
```bash
pnpm db:tunnel-prod # For Production
pnpm db:tunnel-dev  # For Development
```

You can easily sync the entire database state (Postgres + MongoDB) between your local environment and the remote servers.
Copies the state from the remote database to your local Docker instance.
- Open a tunnel (see above).
- Run the pull command: `pnpm db:pull`
- Follow the prompts for the remote database passwords.
Updates the remote Development state with your local data. A safety confirmation is required.
- Open the dev tunnel: `pnpm db:tunnel-dev`
- Run the push command: `pnpm db:push-dev`
If you encounter errors when opening a tunnel, follow these steps:
This usually happens because a previous tunnel container is still running on the remote host. The updated commands above
automatically attempt to stop the existing container (db-tunnel-prod or db-tunnel-dev) before starting a new one.
If the automatic stopping fails, SSH into the host and run:
```bash
docker rm -f db-tunnel-prod # or db-tunnel-dev
```

The scripts use the `-t` flag in SSH and the `--init` flag in Docker to ensure that when you press Ctrl+C on your local machine, the signal is propagated correctly to the remote container, allowing it to exit and clean itself up (via `--rm`).
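The cleanup-on-signal behavior can be illustrated without Docker. The snippet below is only a local stand-in for the tunnel container: a subshell traps the termination signal and cleans up before exiting, analogous to what `--init`/`--rm` achieve for the real container when Ctrl+C is propagated through `ssh -t`:

```shell
# Stand-in for the tunnel container: trap the termination signal and clean up
sh -c 'trap "echo cleaned up; exit 0" TERM; sleep 5 & wait' > /tmp/tunnel-demo.log &
demo_pid=$!
sleep 1
kill -TERM "$demo_pid"    # stands in for Ctrl+C propagated via ssh -t
wait "$demo_pid"
cat /tmp/tunnel-demo.log  # prints: cleaned up
```

Without the trap (i.e., without a proper init process forwarding signals), the process would be killed without a chance to clean up, which is why a stale `db-tunnel-*` container can be left behind.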
How this works:
- The tunnel script starts a forwarder container on the manager node.
- It maps local ports (`5433` for Postgres, `27018` for MongoDB) to the manager, which in turn forwards to the internal network.
- The sync scripts use `pg_dump`/`psql` and `mongodump`/`mongorestore` to stream data over these ports.
- Your local machine tunnels to the manager's mapped ports.
This project is licensed under the MIT License — see the LICENSE file for details.