Version: v0.5.0+
Platinum Quality = 100% type hints + docstrings on all public methods + 95%+ test coverage + strict typing.
Read only what you need for your task:
- ARCHITECTURE.md - Data model, storage, versioning
- DEVELOPMENT_STANDARDS.md - How to code (constants, logging, types, translations)
- QUALITY_REFERENCE.md - Details on platinum quality requirements
- CODE_REVIEW_GUIDE.md - Phase 0 audit framework for reviewing code
- AGENT_TESTING_USAGE_GUIDE.md - Test validation & debugging
- Kid → User
- Parent → Approver role on User
- Assignee/Approver → Role capabilities, not separate lifecycle records
- Item/Record → Storage JSON object with UUID
- Entity → Home Assistant platform object
STOP using "Entity" for data records! This causes catastrophic confusion.
| ❌ NEVER Say | ✅ ALWAYS Say | Example |
|---|---|---|
| "Chore Entity" | "Chore Item" / "Chore Record" | "Update the Chore Item in storage" |
| "Kid Entity" | "User Item" / "User Record" | "Fetch User Item by UUID" |
| "Parent Entity" | "User role" / "Approver role" | "Check User approver role" |
| "Badge Entity" | "Badge Item" / "Badge Record" | "Create new Badge Item" |
Remember:
- Item/Record = JSON data in `.storage/choreops/choreops_data`
- Entity = Home Assistant platform object (Sensor, Button, Select)
- Entity ID = HA registry string like `sensor.kc_alice_points`
When in doubt: If it has a UUID and lives in storage, it's an Item. If it has an entity_id and lives in HA registry, it's an Entity.
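The rule of thumb above can be sketched in code. This is illustrative only; the field names are hypothetical, not the real storage schema:

```python
import uuid

# A Chore Item: JSON data persisted in .storage/choreops/choreops_data.
# It is keyed by a UUID (internal_id) and never appears in the HA registry.
chore_item = {
    "internal_id": str(uuid.uuid4()),  # UUID → it's an Item
    "name": "Take out trash",
    "state": "pending",
}

# An Entity ID: a Home Assistant registry string for a platform object
# (Sensor, Button, Select). It lives in the HA registry, not in storage.
entity_id = "sensor.kc_alice_points"

# UUID + storage → Item; entity_id + registry → Entity.
is_item = "internal_id" in chore_item
is_entity = entity_id.startswith("sensor.")
```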
Nothing is complete until ALL THREE pass:
```shell
./utils/quick_lint.sh --fix            # Must pass (includes boundary checks)
mypy custom_components/choreops/       # Zero errors required
python -m pytest tests/ -v --tb=line   # All tests pass
```

Integrated Quality Gates (as of January 2026):
- Ruff check/format (code quality + formatting)
- MyPy (type checking)
- Boundary checker (architectural rules) ← NEW
Error Recovery: If mypy fails more than twice on the same error, STOP and ask for clarification. Do NOT suppress with # type: ignore.
Before declaring work ready for merge to main, verify the workflow metadata required by release automation:
- The change is going through a PR to `main`, not a direct push
- The PR body includes `Closes #...` when the work resolves an issue
- The PR has the correct release-note category label from `.github/release.yml`
- Excluded labels such as `needs-triage`, `needs-info`, or blocked/in-progress status labels are removed
- The PR title is release-note friendly
- Validation run is recorded in the PR
Issue lifecycle rule:
- Treat `status:*` labels as open-issue workflow state only
- Treat `release: pending` and `release: shipped` as closed-issue delivery state
- Do not suggest using `release:*` labels as release-note categories; changelog grouping remains PR-label based
Use docs/DEVELOPMENT_STANDARDS.md as the canonical policy. Do not invent alternate merge rules in agent responses.
ALL user-facing text → const.py constants → translations/en.json
- Exceptions: `translation_domain=const.DOMAIN`, `translation_key=const.TRANS_KEY_*`
- Notifications: `TRANS_KEY_NOTIF_TITLE_*` / `TRANS_KEY_NOTIF_MESSAGE_*`
- Config flow errors: `CFOP_ERROR_*` → `TRANS_KEY_CFOF_*`
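A minimal sketch of the indirection (constant and key names here are hypothetical examples of the `TRANS_KEY_*` pattern, not actual entries from `const.py` or `translations/en.json`):

```python
# const.py (sketch): only keys live in code - user-facing text is never hardcoded.
DOMAIN = "choreops"
TRANS_KEY_NOTIF_TITLE_CHORE_DONE = "notif_title_chore_done"

# translations/en.json (sketch): the key resolves to the actual English text.
EN_JSON = {
    "notif_title_chore_done": "Chore completed!",
}


def render_notification_title(trans_key: str) -> str:
    """Resolve a translation key, roughly as HA does from translations/en.json."""
    return EN_JSON[trans_key]


title = render_notification_title(TRANS_KEY_NOTIF_TITLE_CHORE_DONE)
```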
Use internal_id (UUID) for logic. NEVER use entity names for lookups.
Item data → `.storage/choreops/choreops_data` (schema v42+)
Config entry → system settings only (points theme, intervals, retention, display toggles)
Details: See ARCHITECTURE.md § Data Architecture for storage structure, system settings breakdown, and reload performance comparisons.
100% type-hint coverage enforced by MyPy in CI/CD. Modern syntax: `str | None`, not `Optional[str]`.
Type System Strategy (See ARCHITECTURE.md § Type System Architecture):
- TypedDict: Static structures with fixed keys (entity definitions, config objects)
- dict[str, Any]: Dynamic structures accessed with variable keys (runtime-built data)
- Goal: Achieve zero mypy errors without type suppressions by matching types to actual code patterns
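A short sketch of the two cases (class and function names are illustrative, not the real definitions):

```python
from typing import Any, TypedDict


class ChoreEntityDef(TypedDict):
    """Static structure: fixed keys known at type-check time → TypedDict."""

    translation_key: str
    icon: str


# MyPy verifies every key and value type here at check time.
SENSOR_DEF: ChoreEntityDef = {"translation_key": "points", "icon": "mdi:star"}


def build_runtime_stats(user_ids: list[str]) -> dict[str, Any]:
    """Dynamic structure: keys computed at runtime → dict[str, Any]."""
    return {uid: {"points": 0} for uid in user_ids}


stats = build_runtime_stats(["uuid-1", "uuid-2"])
```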
```python
const.LOGGER.debug("Value: %s", var)   # ✅ Correct
const.LOGGER.debug(f"Value: {var}")    # ❌ NEVER f-strings in logs
```

Before writing ANY code, ask these questions:
- Does it need hass?
  - YES → `helpers/` (if it's a tool) or `managers/` (if it's a workflow)
  - NO → `utils/` (if it's a tool) or `engines/` (if it's logic)
- Does it write to `_data`?
  - YES → MUST be in `managers/` (only Managers can write)
  - NO → `engines/` (read-only logic) or `utils/` (formatting)
- Is it pure?
  - YES → `engines/` (schedule math, FSM transitions, point calculations)
  - NO → Check if it needs HA objects (sensors, buttons)
```
Does it write to _data?
        /       \
      YES        NO
       |          |
  MANAGERS/   Does it need hass?
               /       \
             YES        NO
              |          |
          HELPERS/    Is it pure?
            or         /     \
          MANAGERS/  YES      NO
                      |        |
                  ENGINES/   UTILS/
```
| Task | Location | Reason |
|---|---|---|
| Calculate next chore due date | engines/schedule_engine.py | Pure math, no HA, no state |
| Update user points | managers/economy_manager.py | Writes to storage |
| Format points display | utils/math_utils.py | Pure function, no HA |
| Get user by user_id | helpers/entity_helpers.py | Needs HA registry access |
| Parse datetime string | utils/dt_utils.py | Pure datetime parsing |
| Build chore data dict | data_builders.py | Sanitization, no HA |
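For example, the schedule calculation in the first row belongs in `engines/` because it is pure. This is a hypothetical sketch; the real engine API will differ:

```python
from datetime import datetime, timedelta, timezone


def next_due_date(last_done: datetime, interval_days: int) -> datetime:
    """Pure schedule math: no hass, no _data writes → engines/."""
    return last_done + timedelta(days=interval_days)


due = next_due_date(datetime(2026, 1, 1, tzinfo=timezone.utc), 7)
```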
Non-Negotiable: Only Manager methods can call persistence methods.
- `_persist_and_update()` = default for user-visible state changes
- `_persist()` = internal bookkeeping only
Forbidden:
- ❌ `options_flow.py` writing to storage
- ❌ `services.py` calling `_persist()` directly
- ❌ Any file outside `managers/` modifying `_data`
Correct Pattern:
```python
# services.py - Service delegates to manager
async def handle_claim_chore(call: ServiceCall) -> None:
    chore_id = call.data[SERVICE_FIELD_CHORE_ID]
    await coordinator.chore_manager.claim_chore(chore_id)  # ✅ Manager handles write


# managers/chore_manager.py - Manager owns the write
async def claim_chore(self, chore_id: str) -> None:
    self._data[DATA_CHORES][chore_id]["state"] = CHORE_STATE_CLAIMED
    self.coordinator._persist_and_update()  # ✅ Default for user-visible changes
    async_dispatcher_send(self.hass, SIGNAL_SUFFIX_CHORE_UPDATED)
```

Non-Negotiable: Managers NEVER call other Managers' write methods directly. All cross-domain logic uses the Event Bus (Dispatcher).
Why This Matters:
- Prevents circular dependencies between managers
- Enables testability - each manager can be tested in isolation
- Maintains data consistency - listeners only react to confirmed state changes
- Avoids "phantom state" - no signals for operations that failed
Pattern:
```python
# ❌ WRONG: Direct manager coupling
await self.coordinator.economy_manager.deposit(user_id, 50)

# ✅ CORRECT: Signal-based communication
self.emit(const.SIGNAL_SUFFIX_BADGE_EARNED, user_id=user_id, points=50)
# EconomyManager listens for BADGE_EARNED and deposits automatically
```

Transactional Integrity (Critical):
```python
async def create_chore(self, user_input: dict[str, Any]) -> dict[str, Any]:
    # 1. Build the chore Item dict
    chore_dict = db.build_chore(user_input)
    internal_id = chore_dict[const.DATA_CHORE_INTERNAL_ID]
    # 2. Write to in-memory storage (atomic operation)
    self._data[DATA_CHORES][internal_id] = dict(chore_dict)
    # 3. Persist
    self.coordinator._persist_and_update()
    # 4. ONLY emit signal after successful write ⚠️
    self.emit(const.SIGNAL_SUFFIX_CHORE_CREATED, chore_id=internal_id)
    return chore_dict
```

Rule: Emit signals ONLY after persistence succeeds. Never emit in a try block before persistence.
Reference: DEVELOPMENT_STANDARDS.md § 5.3
- Check if helper exists: `helpers/entity_helpers.py` (entity lookups), `helpers/flow_helpers.py` (flow validation)
- Find constant: `grep TRANS_KEY custom_components/choreops/const.py`
- Use test scenario: `scenario_medium` (most common), `scenario_full` (complex)
- Copy patterns from existing files (don't invent new patterns)
- Consolidate duplicates: If code appears 2+ times, extract to helper function
- Verify coordinator methods: When editing `coordinator.py`, search the file FIRST to verify the method exists. Never assume methods exist by name.
- Use `conftest.py` helpers: `get_user_by_name()` (or legacy helper wrappers), `construct_entity_id()`
- Mock notifications through current user-role notifier method names (avoid introducing kid-specific names)
Run quality gates (in this order):
1. `./utils/quick_lint.sh --fix` (catches most issues fast)
2. `mypy custom_components/choreops/` (type errors)
3. `python -m pytest tests/ -v` (validates behavior)
❌ Calling "Chore" an "Entity" → ✅ Use "Chore Item" or "Chore Record"
❌ Hardcoded strings → ✅ Use const.TRANS_KEY_*
❌ Optional[str] → ✅ Use str | None
❌ F-strings in logs → ✅ Use lazy logging %s
❌ Entity names for lookups → ✅ Use internal_id (UUID)
❌ Touching config_entry.data → ✅ Use .storage/choreops/choreops_data
❌ Direct storage writes → ✅ Use Manager method that calls the correct persistence method
❌ Importing homeassistant in utils/ → ✅ Keep utils pure (no HA imports)
❌ Writing to _data outside Managers → ✅ Delegate to Manager methods
Key Files:
- `const.py` - All constants (`TRANS_KEY_*`, `DATA_*`, `CFOF_*`, `SERVICE_*`, etc.)
- `coordinator.py` - Infrastructure hub (routing, persistence, manager lifecycle)
- `managers/` - Stateful workflows (ChoreManager, EconomyManager, UIManager, StatisticsManager, etc.)
- `helpers/` - HA-aware utilities (entity, flow, device, auth, backup, translation helpers)
- `utils/` - Pure Python utilities (datetime, math, validation, formatting)
- `translations/en.json` - Master translation file
Constant Naming Patterns (See DEVELOPMENT_STANDARDS.md § 3. Constant Naming Standards):
- `DATA_*` = Storage keys for Domain Items (singular names: `DATA_USER_*`, `DATA_CHORE_*`)
- `CFOF_*` = Config/Options flow input fields (plural with `_INPUT_`)
- `CONF_*` = System settings in `config_entry.options` (system settings only)
- `TRANS_KEY_*` = Translation identifiers
- `ATTR_*` = Entity state attributes (for HA Entities, not Items)
- `SERVICE_*` / `SERVICE_FIELD_*` = Service actions and parameters
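Hypothetical constants illustrating these naming patterns (check `const.py` for the real ones):

```python
# Illustrative examples only - not actual constants from const.py.
DATA_CHORE_NAME = "name"                      # DATA_*: storage key on a Domain Item (singular)
CFOF_CHORES_INPUT_NAME = "chores_input_name"  # CFOF_*: config/options flow input field
CONF_POINTS_THEME = "points_theme"            # CONF_*: system setting in config_entry.options
TRANS_KEY_NOTIF_TITLE_CHORE_DONE = "notif_title_chore_done"  # TRANS_KEY_*: translation id
ATTR_DUE_DATE = "due_date"                    # ATTR_*: HA Entity state attribute
SERVICE_CLAIM_CHORE = "claim_chore"           # SERVICE_*: service action name
```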
DateTime Functions (See DEVELOPMENT_STANDARDS.md § 6. DateTime & Scheduling Standards):
- ALWAYS use `dt_*` helpers from `utils/dt_utils.py` (never the raw `datetime` module)
- Examples: `dt_now_iso()`, `dt_parse()`, `dt_add_interval()`, `dt_next_schedule()`
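A sketch of what two such pure helpers might look like (the signatures are assumptions; see `utils/dt_utils.py` for the real implementations):

```python
from datetime import datetime, timezone


def dt_now_iso() -> str:
    """Current UTC time as an ISO string - pure Python, no HA imports."""
    return datetime.now(timezone.utc).isoformat()


def dt_parse(value: str) -> datetime:
    """Parse an ISO string into a UTC-aware datetime."""
    parsed = datetime.fromisoformat(value)
    return parsed.astimezone(timezone.utc)


stamp = dt_now_iso()      # e.g. "2026-01-15T12:00:00.000000+00:00"
parsed = dt_parse(stamp)  # round-trips to a UTC-aware datetime
```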
Common Test Scenarios (run after making changes):
```shell
pytest tests/test_workflow_*.py -v    # Entity state validation
pytest tests/test_config_flow.py -v   # UI flow changes
pytest tests/ -x                      # Stop on first failure (debugging)
```

Datetime: Always UTC-aware ISO strings. Use `utils/dt_utils.py` helper functions (`dt_now()`, `dt_parse()`, `dt_to_utc()`).
Agent Tip: When stuck, run ./utils/quick_lint.sh --fix first. It catches 80% of issues instantly.