This guide explains the GeneWeb project's CI/CD pipeline, how it works, and how to use it.
The project uses GitHub Actions for continuous integration and continuous deployment:
- CI (Continuous Integration): Tests run automatically on every push and pull request
- CD (Continuous Deployment): Code is automatically deployed to staging once all tests pass
- Manual Production Deployment: Production requires manual approval
```
Push to feature branch
        ↓
[Lint & Format Check]    ← Must pass
        ↓
[Unit Tests]             ← Must pass (blocks PR if it fails)
        ↓
[Integration Tests]      ← Must pass (blocks PR if it fails)
        ↓
[Golden Master Tests]    ← Informational (doesn't block yet)
        ↓
[Build Docker Image]     ← For testing only, not pushed
```
Result: If all tests pass, PR can be merged after review approval.
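The merge gate above can be modeled as a small predicate. This is a hypothetical sketch of the policy, not the actual workflow configuration:

```python
# Sketch of the PR gate: required checks must pass and a review must be
# approved; golden tests are informational and never block.
# Hypothetical model only, not the real GitHub Actions config.
REQUIRED = {"lint", "unit", "integration"}
INFORMATIONAL = {"golden"}

def pr_can_merge(results: dict, approved: bool) -> bool:
    """results maps check name -> passed; approved is the review state."""
    return approved and all(results[name] for name in REQUIRED)

# A golden failure does not block the merge:
print(pr_can_merge(
    {"lint": True, "unit": True, "integration": True, "golden": False},
    approved=True,
))  # True
```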
```
Merge PR to main
        ↓
[All Tests Run]          ← Same as for a PR
        ↓
[Build & Push Docker Image]
        ↓
[Deploy to Staging]
        ↓
[Smoke Tests on Staging]
        ↓
Manual: Deploy to Production (click button)
```
```
Git tag v1.0.0
        ↓
[All Tests Run]
        ↓
[Build Docker Image]
        ↓
[Create GitHub Release]
        ↓
[Push to Docker Registry]
        ↓
Manual: Deploy to Production
```
Checks:
- Python code style (pylint)
- Code formatting (black)
- Import sorting (isort)
Result: ✅ Pass = clean code style
Failure Example:

```
E501 line too long (145/120 characters) (line-too-long)
C0304 final newline missing (missing-final-newline)
```
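The line-too-long rule behind that message can be illustrated with a few lines. This is a simplified sketch, not pylint itself; the 120-character limit is taken from the message above:

```python
# Simplified sketch of the line-too-long check with the project's
# 120-character limit. Not pylint itself; message format mimics the CI log.
MAX_LEN = 120

def check_line_lengths(source: str, max_len: int = MAX_LEN) -> list:
    """Return one message per line that exceeds max_len characters."""
    return [
        f"{lineno}: E501 line too long ({len(line)}/{max_len} characters)"
        for lineno, line in enumerate(source.splitlines(), start=1)
        if len(line) > max_len
    ]

print(check_line_lengths("short line\n" + "x" * 145))
# ['2: E501 line too long (145/120 characters)']
```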
Checks:
- 191 individual function tests
- Each function tested in isolation
- OCaml behavior verified against Python
- Code coverage >85%
Result: ✅ Pass = functions work correctly
Failure Example:

```
FAILED tests/python/unit/test_name_lower.py::test_name_lower
AssertionError: 'john' != 'JOHN'
```
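An individual unit test in this style looks roughly like the following sketch. The real suite imports `name_lower` from the Python port; the inlined stand-in here is only so the example is self-contained:

```python
# Sketch of an isolated unit test. The real suite imports name_lower
# from the Python port; this stand-in implementation is for illustration.
def name_lower(name: str) -> str:
    """Stand-in: lowercase a name the way the OCaml original does."""
    return name.lower()

def test_name_lower():
    # The OCaml reference produces 'john' for 'JOHN';
    # the Python port must match it exactly.
    assert name_lower("JOHN") == "john"

test_name_lower()
print("test_name_lower passed")
```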
Checks:
- Multi-component workflows (GEDCOM import/export)
- Database operations
- Concurrent requests
- Error handling
Result: ✅ Pass = system works end-to-end
Failure Example:

```
FAILED tests/python/integration/test_gedcom_roundtrip.py
RuntimeError: gwd failed to start on port 23179
```
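That `RuntimeError` typically comes from a readiness check on gwd's port before the tests run. A minimal sketch of such a check (the helper name `wait_for_port` is hypothetical):

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 10.0) -> bool:
    """Poll until a TCP server accepts connections on (host, port), or give up."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            time.sleep(0.2)
    return False

# In a test harness this would guard the gwd fixture, e.g.:
# if not wait_for_port("127.0.0.1", 23179):
#     raise RuntimeError("gwd failed to start on port 23179")
```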
Checks:
- Output matches known-good reference ("golden")
- Detects any unexpected changes
- 12 golden references (HTML pages, GEDCOM exports)
Status: ℹ️ Currently informational (doesn't block)
Example Output:

```
Golden test: GEDCOM export
Expected lines: 245
Actual lines: 245
✓ PASSED
```

If it fails:

```
Golden test: GEDCOM export
Expected lines: 245
Actual lines: 243
✗ FAILED - Output differs
--- expected
+++ actual
@@ -15,3 +15,2 @@
 DATE 1 JAN 1950
```
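Under the hood a golden comparison is just a line diff against the stored reference. A sketch using Python's `difflib` (the sample GEDCOM lines are illustrative):

```python
import difflib

def golden_diff(expected: str, actual: str) -> str:
    """Return a unified diff; empty string means the output matches the golden file."""
    return "".join(
        difflib.unified_diff(
            expected.splitlines(keepends=True),
            actual.splitlines(keepends=True),
            fromfile="expected",
            tofile="actual",
        )
    )

ref = "0 HEAD\n1 DATE 1 JAN 1950\n0 TRLR\n"
print(golden_diff(ref, ref) == "")                      # True: no diff
print("DATE" in golden_diff(ref, "0 HEAD\n0 TRLR\n"))   # True: dropped line shows up
```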
- Go to Actions tab in repository
- Click on the workflow run (shows commit message)
- See test results by stage:
  - ✅ Green checkmark = passed
  - ❌ Red X = failed
  - ⏭️ Skipped = not run
- Create a pull request
- GitHub automatically runs tests
- See the status below the PR title:
  - ✅ All checks passed → ready to merge
  - ❌ Some checks failed → fix and push again
Click on the failed check to see details:

```
FAILED tests/python/unit/test_sosa_kinship.py::test_sosa_generation_number
Expected:
Person 1 generation number: 0
Actual:
Person 1 generation number: -1
Help: Check that sosa_kinship() handles root person correctly
```
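For reference, in Sosa-Stradonitz numbering the root person is number 1 at generation 0, and the generation of number n is floor(log2(n)). A sketch of one correct way to compute it (not necessarily the project's implementation):

```python
def sosa_generation(sosa: int) -> int:
    """Generation of a Sosa number: root (1) is 0, parents (2-3) are 1,
    grandparents (4-7) are 2, i.e. floor(log2(sosa))."""
    if sosa < 1:
        raise ValueError("Sosa numbers start at 1")
    return sosa.bit_length() - 1

print([sosa_generation(n) for n in [1, 2, 3, 4, 7, 8]])  # [0, 1, 1, 2, 2, 3]
```

The failing test above suggests the implementation returns -1 for the root person instead of 0.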
```bash
# Auto-fix style issues
black .
isort .

# Push fixed code
git add .
git commit -m "fix: Format code"
git push
```

```bash
# Run failing test locally
pytest tests/python/unit/test_name_lower.py::test_name_lower -v

# Debug: add print statements
# Fix the bug

# Push fix
git add .
git commit -m "fix: Correct name_lower function"
git push
```

```bash
# Start gwd first
cd GeneWeb
./gw/gwd -hd ./gw -bd ./bases -p 23179 -lang en &
cd ..

# Run test
pytest tests/python/integration/test_gedcom_import.py -v

# Fix the issue

# Clean up
pkill -f gwd
```

If a golden test fails, the output might have changed intentionally:
```bash
# Review the diff
# If the changes are correct, regenerate the golden references
cd GeneWeb
./gw/gwd -hd ./gw -bd ./bases -p 23179 -lang en &
cd ..
./scripts/golden/run_golden.sh create
pytest tests/golden/ -v
pkill -f gwd
```

The main branch is protected:
- ✅ Unit tests must pass
- ✅ Integration tests must pass
- ✅ Code review approval required (1 reviewer)
- ℹ️ Golden tests (informational)
- ℹ️ Coverage tracking (>80% target)
If any required check fails, you cannot merge. You must:
- Fix the issue
- Push the fix
- Wait for tests to pass
- Then merge
- Create a pull request
- Wait for all tests to pass
- Get code review approval
- Merge to main
- GitHub Actions then automatically:
  - Builds the Docker image
  - Deploys it to staging
  - Runs smoke tests
- Ensure staging is working
- Go to Actions tab
- Find the workflow run for main
- Click Review deployments
- Approve for Production
- Deployment starts
- Monitor in Actions tab
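A smoke test at this stage can be as small as an HTTP probe against the deployed service. A sketch (the staging URL shown in the comment is hypothetical):

```python
from urllib.request import urlopen
from urllib.error import URLError

def smoke_check(base_url: str, timeout: float = 5.0) -> bool:
    """Return True if the service answers HTTP 200 on its front page."""
    try:
        with urlopen(base_url, timeout=timeout) as resp:
            return resp.status == 200
    except URLError:
        return False

# e.g. smoke_check("http://staging.example.com:2317/")  # hypothetical URL
```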
```bash
# Make sure you're on main and everything is committed
git checkout main
git pull origin main

# Create a tag (follow semantic versioning)
git tag v1.0.0

# Push the tag
git push origin v1.0.0
```

GitHub Actions automatically:
- Detects the tag
- Runs all tests
- Creates a GitHub Release
- Builds the Docker image tagged v1.0.0
- Pushes it to the Docker registry
- Go to Actions tab
- See release workflow running
- Go to Releases tab to see new release
- The release includes:
  - Release notes (from git commits)
  - A Docker image ready for deployment
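Because the release workflow keys off the tag name, it can help to validate the tag format before pushing. A sketch of a semantic-versioning check (not part of the project's tooling):

```python
import re

# Matches tags like v1.0.0 (vMAJOR.MINOR.PATCH), the format used for releases.
SEMVER_TAG = re.compile(r"^v(\d+)\.(\d+)\.(\d+)$")

def is_release_tag(tag: str) -> bool:
    """True for well-formed release tags such as v1.0.0."""
    return SEMVER_TAG.match(tag) is not None

print([is_release_tag(t) for t in ["v1.0.0", "v2.10.3", "1.0.0", "v1.0"]])
# [True, True, False, False]
```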
When golden tests fail, a diff.txt file might be generated showing:

```
--- expected (OCaml output)
+++ actual (Python output)
@@ -15,3 +15,3 @@
 DATE 1 JAN 1950
 NAME John Doe
-EXTRA LINE
+DIFFERENT LINE
```

Legend:
- `-` = line in expected but missing in actual
- `+` = line in actual but not in expected
- No prefix = line matches

Actions:
- If the differences are bugs: fix the code
- If the differences are intentional: update the golden reference
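Applying the legend mechanically amounts to classifying each diff line by its prefix, as in this sketch:

```python
def classify(diff_line: str) -> str:
    """Map a unified-diff line to the legend's cases."""
    if diff_line.startswith(("---", "+++", "@@")):
        return "header"
    if diff_line.startswith("-"):
        return "in expected, missing in actual"
    if diff_line.startswith("+"):
        return "in actual, not in expected"
    return "matches"

print(classify("-EXTRA LINE"))      # in expected, missing in actual
print(classify("+DIFFERENT LINE"))  # in actual, not in expected
print(classify(" NAME John Doe"))   # matches
```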
Total CI time: ~20-25 minutes

```
Lint:               1 min ┐
Unit Tests:         5 min ├─ can run in parallel ≈ 10 min total
Integration Tests: 10 min ┘
Golden Tests:       5 min ┐
                          ├─ can run in parallel with the above
Build Docker:       5 min ┘
Deploy to Staging:  2 min
Smoke Tests:        1 min
```
Problem: Tests pass locally but fail in CI

Possible causes:
- Different Python version (CI uses 3.11)
- Different OCaml version
- Environment variables not set
- Timing issues on the CI runner

Solution: Check the CI logs for the exact error
Problem: Tests are flaky (pass sometimes, fail sometimes)

Possible causes:
- Port conflicts (if gwd was not properly stopped)
- Timing-sensitive tests
- File system permissions

Solution: Run the tests locally to reproduce
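The port-conflict cause is easy to confirm: if a previous gwd is still bound to the port, a fresh bind fails. A sketch (hypothetical helper, not part of the test harness):

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """True if something is already bound to (host, port)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
        except OSError:
            return True
    return False

# e.g. check port_in_use(23179) before starting gwd locally
```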
Problem: CI doesn't run at all

Possible causes:
- GitHub Actions quota exceeded
- Runner offline
- Workflow disabled

Solution: Check the Actions tab for status
- Run tests locally before pushing:

  ```bash
  pytest tests/ -v
  ```

- Read CI logs carefully:
  - Scroll to the error
  - Read the full error message
  - Don't ignore warnings
- One commit per feature:
  - Easier to identify issues
  - Cleaner history
- Write descriptive commit messages:

  ```
  fix: Correct date validation in date_validate()

  The function was not properly handling February dates in leap years.
  Added test case and fixed logic.
  ```

- Keep PRs focused:
  - One feature per PR
  - Easier to review
  - Easier to bisect if issues arise
- CI Issues: check GitHub Issues (label: `ci`)
- Deployment Help: see DEPLOYMENT_GUIDE.md
- Test Help: see the Testing documentation
| What | When | Time | Required |
|---|---|---|---|
| Lint | Every push | 1 min | ✅ Yes |
| Unit Tests | Every push | 5 min | ✅ Yes |
| Integration Tests | Every push | 10 min | ✅ Yes |
| Golden Tests | Every push | 5 min | ℹ️ No |
| Deploy to Staging | Push to main | 2 min | ✅ Automatic |
| Deploy to Prod | Manual | 2 min | ⏳ Manual |
Pro Tips:
- Always check CI results before leaving your desk
- Fix CI failures immediately (don't accumulate)
- Use git tags for production releases
- Monitor staging before deploying to production