diff --git a/.markdownlint.yaml b/.markdownlint.yaml new file mode 100644 index 0000000..a28fa58 --- /dev/null +++ b/.markdownlint.yaml @@ -0,0 +1,9 @@ +MD024: # no-duplicate-heading + siblings_only: true +MD033: # no-inline-html + allowed_elements: + - br + - summary + - details + - strong +MD013: false # Line length \ No newline at end of file diff --git a/code_samples/authorization/get_decision.mdx b/code_samples/authorization/get_decision.mdx index 8f644b1..7278a52 100644 --- a/code_samples/authorization/get_decision.mdx +++ b/code_samples/authorization/get_decision.mdx @@ -85,6 +85,7 @@ import ( "log" "github.com/opentdf/platform/protocol/go/authorization" + "github.com/opentdf/platform/protocol/go/entity" "github.com/opentdf/platform/protocol/go/policy" "github.com/opentdf/platform/sdk" ) diff --git a/docs/architecture.mdx b/docs/architecture.mdx index 1565510..9ba0cbe 100644 --- a/docs/architecture.mdx +++ b/docs/architecture.mdx @@ -46,37 +46,37 @@ graph TD class POLICY,AUTHZ,ERS,KAS opentdfService class ATTR_SOURCES,IDP,CLIENT externalSystem - click POLICY "components/policy/" "Go to Policy Service docs" - click AUTHZ "components/authorization" "Go to Authorization Service docs" - click ERS "components/entity_resolution" "Go to Entity Resolution Service docs" - click KAS "components/key_access" "Go to Key Access Server docs" + click POLICY "components/policy/" + click AUTHZ "components/authorization" + click ERS "components/entity_resolution" + click KAS "components/key_access" ``` ### [Policy Service](components/policy/) -The **Policy Service** is where all access control policies are defined and managed. It provides the tools and APIs to create a rich set of policies that govern data access. This includes not only attributes and their values, but also the definitions of **actions, obligations, and key access mappings**. +The Policy Service is where all access control policies are defined and managed. It provides the tools and APIs to create a rich set of policies that govern data access. This includes not only attributes and their values, but also the definitions of **actions, obligations, and key access mappings**. -In the context of the NIST ABAC model, the Policy Service functions as the **Policy Administration Point (PAP)**. +In the context of the NIST ABAC model, the Policy Service functions as the [Policy Administration Point (PAP)](https://csrc.nist.gov/glossary/term/policy_administration_point). ### [Authorization Service](components/authorization) -The **Authorization Service** is the core decision-making engine of the platform. It is responsible for evaluating the rich policies from the Policy Service against a set of attributes to render an authorization decision. +The Authorization Service is the core decision-making engine of the platform. It is responsible for evaluating the rich policies from the Policy Service against a set of attributes to render an authorization decision. -In the context of the NIST ABAC model, it functions as the **Policy Decision Point (PDP)**. +In the context of the NIST ABAC model, it functions as the [Policy Decision Point (PDP)](https://csrc.nist.gov/glossary/term/policy_decision_point). ### [Entity Resolution Service (ERS)](components/entity_resolution) -The **Entity Resolution Service** is responsible for gathering the attributes about a subject needed for a decision. By default, it can derive attributes from claims in an authentication token. 
Optionally, it can be configured to connect to external attribute sources (LDAP, SQL) to "hydrate" the entity with more attributes. +The Entity Resolution Service is responsible for gathering the attributes about a subject needed for a decision. By default, it can derive attributes from claims in an authentication token. Optionally, it can be configured to connect to external attribute sources (LDAP, SQL) to "hydrate" the entity with more attributes. -In the context of the NIST ABAC model, the ERS functions as the **Policy Information Point (PIP)**. +In the context of the NIST ABAC model, the ERS functions as the [Policy Information Point (PIP)](https://csrc.nist.gov/glossary/term/policy_information_point). ### [Key Access Server (KAS)](components/key_access) -The **Key Access Server (KAS)** enforces access control decisions. Its role is more extensive than a typical enforcement point: +The Key Access Server (KAS) enforces access control decisions. Its role is more extensive than a typical enforcement point: - **Cryptographic Enforcement:** It enforces decisions by granting or withholding cryptographic keys for TDF decryption. - **Encryption Enablement:** It manages key exchanges and enables various TDF encryption modes. -In the context of the NIST ABAC model, the KAS functions as the **Policy Enforcement Point (PEP)**. +In the context of the NIST ABAC model, the KAS functions as the [Policy Enforcement Point (PEP)](https://csrc.nist.gov/glossary/term/policy_enforcement_point). -Furthermore, the OpenTDF platform is designed for flexibility. Developers can **build and integrate their own custom PEPs**. These custom enforcement points can leverage the platform's robust Authorization (PDP) and Policy (PAP) services while implementing enforcement logic tailored to specific applications. These custom PEPs can also optionally interface with the KAS to take advantage of its powerful cryptographic capabilities. \ No newline at end of file +Furthermore, the OpenTDF platform is designed for flexibility. Developers can **build and integrate their own custom PEPs**. These custom enforcement points can leverage the platform's robust Authorization ([PDP](https://csrc.nist.grov/glossary/term/policy_decision_point)) and Policy ([PAP](https://csrc.nist.gov/glossary/term/policy_administration_point)) services while implementing enforcement logic tailored to specific applications. These custom PEPs can also optionally interface with the KAS to take advantage of its powerful cryptographic capabilities. \ No newline at end of file diff --git a/docs/components/policy/index.md b/docs/components/policy/index.md index bcb98c2..a1e7c35 100644 --- a/docs/components/policy/index.md +++ b/docs/components/policy/index.md @@ -7,9 +7,26 @@ slug: /components/policy Policy is the all-encompassing name for configuration of cryptographically-bound Attribute Based Access Control (ABAC) within the Platform. ```mermaid -graph LR; - Data<-- Resource Mappings -->Attributes; - Attributes<-- Subject Mappings -->Entities; +graph TD; + subgraph "Policy Framework" + Data["Data/Resources"] + Attributes["Attributes
(Namespaces, Definitions, Values)"] + Entities["Entities/Subjects"] + Actions["Actions
(decrypt, transmit, etc.)"] + end + + Data -.->|Resource Mappings| Attributes + Entities -.->|Subject Mappings| Attributes + Actions -.->|Subject Mappings| Entities + + subgraph "Access Decision" + Decision{"Access Control
Decision"} + end + + Attributes --> Decision + Actions --> Decision + Entities --> Decision + Data --> Decision ``` TDF creation and decryption are driven by the Policy within a Platform instance and the TDF manifest. In other words, on a TDF decryption request, the platform services (KAS, Authorization) compare attributes on the TDF against the requester's entitlements to make a decision to release the key or not. diff --git a/docs/components/policy/keymanagement/base_key.md b/docs/components/policy/keymanagement/base_key.md index 35c3efd..a4f0a17 100644 --- a/docs/components/policy/keymanagement/base_key.md +++ b/docs/components/policy/keymanagement/base_key.md @@ -1,31 +1,238 @@ --- -sidebar_position: 1 +sidebar_position: 4 slug: /components/policy/keymanagement/base_key --- -# Base Key +# Base Keys -A base key, or default key, is a key that will be used when no grants or key mappings are found from the attributes passed in. Base keys are set by an admin for the purpose of ensuring that insecure keys are not used by default when no mappings/grants are found. +Base Keys provide a system-wide default encryption key that serves as a fallback when no specific key mappings are found for the requested attributes. This ensures consistent encryption behavior and prevents the use of potentially insecure default keys. -## How to get/set a base key +## Base Key Management -:::important -You must have admin permission to use any key operations. +Base keys are managed through the OpenTDF platform's key management system. The platform ensures that: + +- **Only one base key exists at any time** - Setting a new base key automatically replaces the previous one +- **No manual cleanup required** - The system handles base key replacement automatically +- **Seamless transitions** - Applications continue working during base key updates +- **Active key enforcement** - Only active keys can be designated as base keys + +### Automatic Replacement Behavior + +When you set a new base key using `otdfctl`, the platform: + +1. **Validates** the new key is active and accessible +2. **Replaces** the existing base key reference automatically +3. **Updates** all internal references immediately +4. **Confirms** the change through the API response + +No additional cleanup or management steps are required. + +## Managing Base Keys + +### Set a Base Key + +**Using Key ID:** + +```bash +# Set base key by key UUID +otdfctl policy kas-registry key base set --id "550e8400-e29b-41d4-a716-446655440000" +``` + +**Using KAS and Key ID:** + +```bash +# Set base key by KAS name and key ID +otdfctl policy kas-registry key base set \ + --kas-name "production-kas" \ + --key-id "prod-rsa-2024" + +# Set base key by KAS URI and key ID +otdfctl policy kas-registry key base set \ + --kas-uri "https://kas.example.com" \ + --key-id "prod-rsa-2024" +``` + +### Get Current Base Key + +```bash +# Get the current base key information +otdfctl policy kas-registry key base get +``` + +**Example Response:** + +```json +{ + "base_key": { + "kas_uri": "https://kas.example.com", + "public_key": { + "algorithm": "ALGORITHM_RSA_2048", + "kid": "prod-rsa-2024", + "pem": "-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----\n" + }, + "kas_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890" + } +} +``` + +## Base Key Requirements + +### Key Status Requirement + +:::important Active Keys Only +Only keys with `KEY_STATUS_ACTIVE` status can be set as base keys. The system will reject attempts to set rotated or inactive keys as the base key. 
::: -When using the [kas-registry proto](https://github.com/opentdf/platform/blob/main/service/policy/kasregistry/key_access_server_registry.proto#L659-L662) and an active connection to the platform you can use the above base key rpcs to set and retrieve the base key. +### Automatic Rotation Updates + +When a base key is rotated: + +1. The old key status changes to `KEY_STATUS_ROTATED` +2. The new key becomes `KEY_STATUS_ACTIVE` +3. **The base key reference is automatically updated** to point to the new active key +4. Applications continue using the rotated key seamlessly + +```bash +# After rotation, the base key automatically points to the new key +# No manual base key update required +otdfctl policy kas-registry key rotate --id "current-base-key-uuid" ... +``` + +## SDK Behavior with Base Keys + +### Key Resolution Priority + +The SDK follows this precedence when resolving keys for TDF operations: + +1. **Attribute Value** key mappings (most specific) +2. **Attribute Definition** key mappings +3. **Namespace** key mappings +4. **Base Key** (system fallback) +5. ~~Passed-in KAS info~~ (deprecated behavior) + +### Information Override + +When a base key is configured, the SDK will: + +✅ **Override client-provided KAS information** with base key's KAS details +✅ **Override client-provided key algorithm** with base key's algorithm +✅ **Use base key for all encryption operations** when no specific mappings exist + +### NanoTDF Compatibility + +:::warning ECC Requirement for NanoTDF +**NanoTDF requires Elliptic Curve keys** for optimal performance and compatibility. + +- ✅ If base key is ECC (`ALGORITHM_EC_P256`, `ALGORITHM_EC_P384`, `ALGORITHM_EC_P521`) → NanoTDF works +- ❌ If base key is RSA (`ALGORITHM_RSA_2048`, `ALGORITHM_RSA_4096`) → SDK falls back to client-provided key or errors + +**Recommendation:** Use ECC keys as base keys to ensure full NanoTDF compatibility. +::: + +### Future Strict Mode + +:::note Upcoming Changes +In future SDK versions (post v0.5.0), the platform will enforce **strict mode**: + +- SDKs will **require** a base key to be set +- No fallback to client-provided KAS information +- Encryption will **fail** if no base key or key mappings are found + +This ensures consistent security policy enforcement across all TDF operations. +::: + +## Operational Examples + +### Complete Base Key Setup + +```bash +# 1. Register KAS +KAS_ID=$(otdfctl policy kas-registry create \ + --uri "https://kas.example.com" \ + --name "production-kas" \ + --output json | jq -r '.key_access_server.id') + +# 2. Create an ECC key for broad compatibility +KEY_ID=$(otdfctl policy kas-registry key create \ + --kas-id "$KAS_ID" \ + --key-id "base-ec-2024" \ + --algorithm "ALGORITHM_EC_P256" \ + --key-mode "KEY_MODE_CONFIG_ROOT_KEY" \ + --public-key-pem "$(cat ec_public_key.pem)" \ + --wrapping-key-id "root-kek" \ + --wrapped-key "$(base64 < wrapped_ec_key.bin)" \ + --output json | jq -r '.kas_key.key.id') + +# 3. Set as base key +otdfctl policy kas-registry key base set --id "$KEY_ID" + +# 4. 
Verify configuration +otdfctl policy kas-registry key base get +``` + +### Base Key Rotation + +```bash +# Get current base key ID +CURRENT_BASE_KEY=$(otdfctl policy kas-registry key base get --output json | jq -r '.base_key.kas_id') + +# Rotate the base key (automatically updates base key reference) +otdfctl policy kas-registry key rotate \ + --id "$CURRENT_BASE_KEY" \ + --new-key-id "base-ec-2025" \ + --algorithm "ALGORITHM_EC_P384" \ + --key-mode "KEY_MODE_CONFIG_ROOT_KEY" \ + --wrapping-key-id "root-kek" \ + --wrapping-key "$(base64 < new_wrapped_key.bin)" + +# Verify base key now points to new key +otdfctl policy kas-registry key base get +``` + +## Monitoring and Troubleshooting + +### Check Base Key Status + +```bash +# Get current base key details +BASE_KEY_INFO=$(otdfctl policy kas-registry key base get --output json) +echo "$BASE_KEY_INFO" | jq '.base_key' + +# Check if base key is active +KEY_STATUS=$(otdfctl policy kas-registry key get \ + --id "$(echo "$BASE_KEY_INFO" | jq -r '.base_key.kas_id')" \ + --output json | jq -r '.kas_key.key.key_status') + +if [ "$KEY_STATUS" != "KEY_STATUS_ACTIVE" ]; then + echo "⚠️ Base key is not active: $KEY_STATUS" +else + echo "✅ Base key is active and ready" +fi +``` + +### Common Issues + +**Base Key Not Found:** + +- Verify a base key has been set: `otdfctl policy kas-registry key base get` +- Ensure the referenced key still exists and is active + +**NanoTDF Failures:** + +- Check if base key uses an ECC algorithm +- Consider setting an ECC base key for NanoTDF compatibility + +**SDK Fallback Behavior:** + +- Review SDK logs for base key resolution attempts +- Verify platform connectivity and authentication +- Check if key mappings exist for specific attributes -[otdfctl base key](https://github.com/opentdf/otdfctl/tree/main/docs/man/policy/kas-registry/key/base) provides documentation on how to set / get base keys with the OpenTDF CLI. +## API Reference -## Effects of using base key +The base key functionality is provided through these RPC endpoints: -1. When a base key is specified the SDK will prefer to use it over the passed in kas info list. The SDK will **overwrite** the following information if a base key is registered with the platform: - 1. The passed in kas information list - 2. The key algorithm -2. If a base key is not present, the SDK will fallback to using the passed in kas information list and key algorithm. -3. If the base key is not of type ECC, it **cannot** be used with NanoTDF. - 1. If the registered base key is not of type ECC, the SDK will fallback to using the passed in kas url and key type. +- **`SetBaseKey`** - Set a new base key (upserts automatically) +- **`GetBaseKey`** - Retrieve current base key information -:::note -In upcoming versions of the SDK, post v0.5.0, the SDK will prefer to error when no base key is set; instead of falling back. -::: \ No newline at end of file +Both endpoints are defined in the [KAS Registry service](https://github.com/opentdf/platform/blob/main/service/policy/kasregistry/key_access_server_registry.proto#L693-L696). diff --git a/docs/components/policy/keymanagement/index.md b/docs/components/policy/keymanagement/index.md index 0176cb2..385aca3 100644 --- a/docs/components/policy/keymanagement/index.md +++ b/docs/components/policy/keymanagement/index.md @@ -5,52 +5,175 @@ slug: /components/policy/keymanagement # Key Management -Key management is the new way for managing key encryption keys for the platform. 
Asymmetric keys or a reference to a key will now be stored either within the platform database directly, or stored externally using a Key Management Provider or a Hardware Security Module (abstracted as a Key Manager). Those keys will then be used for TDF creation and decryption. +The OpenTDF platform's key management system provides flexible encryption key storage and management for TDF creation and decryption. Keys can be stored locally within the platform database or externally using Key Management Providers (KMPs) or Hardware Security Modules (HSMs) through configurable **Key Managers**. -## How does everything work? +## Architecture Overview + +The key management system provides secure, flexible encryption key management through several key concepts: + +### Key Algorithms + +Supported algorithms (defined in `Algorithm` enum): + +- **`ALGORITHM_RSA_2048`** - RSA 2048-bit keys +- **`ALGORITHM_RSA_4096`** - RSA 4096-bit keys +- **`ALGORITHM_EC_P256`** - Elliptic Curve P-256 (secp256r1) +- **`ALGORITHM_EC_P384`** - Elliptic Curve P-384 (secp384r1) +- **`ALGORITHM_EC_P521`** - Elliptic Curve P-521 (secp521r1) + +### Key Modes + +Keys operate in different modes (defined in `KeyMode` enum): + +- **`KEY_MODE_CONFIG_ROOT_KEY`** - Private key wrapped by local KEK from configuration +- **`KEY_MODE_PROVIDER_ROOT_KEY`** - Private key wrapped by external provider KEK +- **`KEY_MODE_REMOTE`** - All cryptographic operations performed by remote KMS/HSM +- **`KEY_MODE_PUBLIC_KEY_ONLY`** - Only public key available (for external KAS) + +### Key Status + +Keys have lifecycle states (defined in `KeyStatus` enum): + +- **`KEY_STATUS_ACTIVE`** - Key is active and can be used for operations +- **`KEY_STATUS_ROTATED`** - Key has been rotated out but may still decrypt older TDFs + +## How Key Management Works + +### Local Key Management (CONFIG_ROOT_KEY) + +```mermaid +sequenceDiagram + participant SDK + participant KAS + participant Platform + participant KeyManager as Basic Key Manager + + SDK->>+KAS: Rewrap request (with key ID) + KAS->>+Platform: Retrieve key by ID + Platform->>+KAS: Returns AsymmetricKey object + KAS->>+KeyManager: Unwrap private key (using local KEK) + KeyManager->>+KAS: Returns unwrapped private key + KAS->>+KAS: Perform rewrap operation locally + KAS->>+SDK: Return rewrapped key +``` + +### External Provider Management (PROVIDER_ROOT_KEY / REMOTE) ```mermaid sequenceDiagram - User->>+SDK: Decrypt TDF - SDK->>+KAS: Rewrap key - KAS->>+Platform: Retrieve key - Platform->>+KAS: Returns key - KAS->>+Key Manager: Decrypt data encryption key - KAS->>+KAS: Key Manager performs rewrap of KAO + participant SDK + participant KAS + participant Platform + participant KeyManager as Provider Key Manager + participant KMS as External KMS/HSM + + SDK->>+KAS: Rewrap request (with key ID) + KAS->>+Platform: Retrieve key by ID + Platform->>+KAS: Returns AsymmetricKey with provider config + KAS->>+KeyManager: Instantiate manager by provider name + KeyManager->>+KMS: Perform unwrap/decrypt operation + KMS->>+KeyManager: Returns result + KeyManager->>+KAS: Returns unwrapped key or rewrapped result KAS->>+SDK: Return rewrapped key ``` :::note -The **key manager** is a part of the key access server, but is called out specifically because it is a new -concept for the platform. Read more about [managers](./key_managers.md). +**Key Managers** are pluggable components within KAS that handle different key storage backends. +The **provider configuration** determines which key manager to use for each key. 
::: -The graphic above is a representation of a basic flow of how KAS now would perform a rewrap (decrypt) operation. KAS now needs to reach out to a third party to get the private key to rewrap the data encryption key that is sent in the rewrap request, which will then be unwrapped on the client and used to decrypt the payload of the TDF. +## Key Features and Capabilities + +### ✅ **Secure Key Management** + +- **Flexible storage options**: Keys can be managed locally in the platform database or through external providers (KMS/HSM) +- **Local storage**: Private keys encrypted with Key Encryption Keys (KEKs) and stored in platform database +- **External provider storage**: Keys managed by enterprise KMS or HSM with remote cryptographic operations +- **Complete lifecycle support**: Create, activate, rotate, and deactivate keys seamlessly +- **Strong cryptographic foundations**: Support for industry-standard algorithms and key types +- **Policy integration**: Keys are directly linked to your access control policies -In the example above, the KAS (key access server) reaches out to the platform to get a key, but it's totally possible with the new architecture that the platform database could only store a reference to the key. What this means is that the private key could be stored externally, say in a KMS or HSM, outside of the platform's control. +### ✅ **Key Management Operations** -The following is an example of how the code would work if you used a KMS, for all cryptographic operations: +- **Create keys** with various algorithms and storage modes +- **Rotate keys** with automatic mapping preservation +- **Set base/default keys** for fallback scenarios +- **Map keys** directly to policy objects (namespaces, attributes, values) +- **List and query** key mappings and associations + +### ✅ **External Provider Integration** + +- **Provider configurations** for KMS/HSM integration +- **Pluggable key managers** via shared library (.so) registration +- **Remote cryptographic operations** for enhanced security +- **Mixed mode support** - local and remote keys in same deployment + +## Legacy vs New Architecture + +:::important Legacy Migration +**Grants are deprecated** as of platform v0.7.0 and replaced by direct key mappings. 
+ +**Before (Grants):** + +- Keys stored in KAS startup configuration +- Indirect association via KeyAccessServer grants +- Limited to cached/remote key modes +- Manual key rotation process + +**Now (Key Mappings):** + +- **Rich metadata support**: Keys maintain comprehensive information about their purpose and relationships +- **Direct policy integration**: Keys map directly to namespaces, attributes, and values +- **Multiple storage options**: Support for local, provider-managed, and remote key storage +- **Seamless rotation**: Automated key rotation preserves all existing policy mappings +- **External provider support**: Integration with enterprise KMS and HSM systems +::: + +## Key Rotation and Lifecycle + +The platform supports automated key rotation with mapping preservation: ```mermaid -sequenceDiagram - User->>+SDK: Decrypt TDF - SDK->>+KAS: Rewrap key - KAS->>+Platform: Retrieve key reference - Platform->>+KAS: Returns key - KAS->>+Key Manager: Get unwrapped key - Key Manager->>+KMS: Reach out to KMS for unwrapped data encryption key - KAS->>+KAS: Perform rewrap - KAS->>+SDK: Return rewrapped key +flowchart TD + A[Active Key] -->|Rotate Request| B[Create New Key] + B --> C[Copy All Mappings] + C --> D[Update Key Status] + D --> E[Old Key: ROTATED] + D --> F[New Key: ACTIVE] + E --> G[Base Key Auto-Updated] + F --> G ``` -The primary difference is that the **Key Manager** needs to reach out to the KMS to perform the rewrap operation. To connect KAS to a specific external key provider you would need to use a [provider configuration](./key_managers.md). To learn more about the following flow, read about [key managers](./key_managers.md) +### Rotation Process + +1. **Submit rotation request** with current active key and new key parameters +2. **New key created** with specified algorithm, mode, and provider configuration +3. **Mappings copied** from old key to new key automatically +4. **Status updated** - old key marked as `ROTATED`, new key marked as `ACTIVE` +5. **Base key updated** if the rotated key was the current base key + +### CLI Example + +```bash +# Rotate an active key +otdfctl policy kas-registry key rotate \ + --id "current-key-uuid" \ + --new-key-id "new-key-2024" \ + --algorithm "rsa:4096" \ + --key-mode "config-root-key" \ + --wrapping-key-id "root-kek" \ + --wrapping-key "base64-encoded-new-private-key" +``` + +The rotation response includes details about all affected mappings and resources. + +## Migration Path -## What is different with the new key architecture? +To migrate from the legacy grant-based system: -1. Previously, when starting the platform users would be expected to populate the [startup yaml file](https://github.com/opentdf/platform/blob/main/opentdf-dev.yaml#L150-L158) with asymmetric keys that would be used by KAS for and decryption. Now keys, or their reference, should be stored/registered with the platform via the [kas-registry](https://github.com/opentdf/platform/blob/main/service/policy/kasregistry/key_access_server_registry.proto#L644-L656). -2. Endpoints for easier management of keys through the kas-registry. - 1. Create/Read/Update - 2. Key Rotation -3. Grants are being deprecated for [key mappings](./key_mappings.md). -4. Admins can now set a [base key](./base_key.md) to be used as the default key when no key mappings/grants are found. -5. Asymmetric keys can now be stored outside of a Key Access Server, by registering a **key manager** at startup and creating a provider configuration that is tied to the key manager. +1. 
**Register your KAS** with the platform +2. **Create keys** using the new CreateKey API +3. **Create provider configurations** if using external systems +4. **Assign keys** to namespaces, attributes, or values +5. **Set a base key** for fallback scenarios +6. **Remove legacy** startup YAML key configurations diff --git a/docs/components/policy/keymanagement/key_managers.md b/docs/components/policy/keymanagement/key_managers.md index a7f12a4..ee8317b 100644 --- a/docs/components/policy/keymanagement/key_managers.md +++ b/docs/components/policy/keymanagement/key_managers.md @@ -1,60 +1,246 @@ --- -sidebar_position: 1 +sidebar_position: 2 slug: /components/policy/keymanagement/key_managers --- -# Key Managers +# Key Managers and Provider Configurations -With the new key architecture OpenTDF has added the flexibility for an organization to store keys outside of a key access server or the platform. For example, if you want to use OpenTDF and your encryption keys are stored within AWS KMS, this is now totally possible through **key managers**. +Key Managers are pluggable components within KAS that handle different key storage backends. They enable OpenTDF to integrate with external Key Management Systems (KMS), Hardware Security Modules (HSMs), and other cryptographic providers while maintaining a consistent interface. -## What is a key manager? +## Key Provider Configurations -A key manager is essentially a client that is used for either performing cryptographic operations or orchestrating the necessary procedures for retrieving the decrypted data encryption key, which can be used for decrypting the TDF. An example of a key manager can be found in the [basic manager](https://github.com/opentdf/platform/blob/main/service/internal/security/basic_manager.go). The basic manager is the default manager, which expects the private key to be stored within the platform and the symmetric key that wraps the private key to be stored within KAS. +**Provider Configurations** are the foundation of the key manager system. They define how KAS should connect to and interact with external key storage systems. -## How do I register a key manager? +### KeyProviderConfig Structure -If you have written your own key manager, you can register it with KAS by providing its factory method with a [server option](https://github.com/opentdf/platform/blob/main/service/pkg/server/options.go#L131) (.so), during server startup. By providing a way to instantiate an object of your key manager type we will then use it when necessary. You will also need to configure KAS to use the new key manager. This is done through provider configuration file (.cfg). +Provider configurations are stored in the `provider_config` database table with these fields: -### Provider configurations +```protobuf +message KeyProviderConfig { + string id = 1; // UUID of the configuration + string name = 2; // Unique provider name (e.g., "aws-kms-prod") + bytes config_json = 3; // JSON configuration specific to the provider + common.Metadata metadata = 100; +} +``` -Provider configurations are a way of entangling a key to a specific manager. The [key management proto](https://github.com/opentdf/platform/blob/main/service/policy/keymanagement/key_management.proto#L76) provides specifics on what RPCs are available as well as what is expected within each call. One of the most important parts of the provider configuration is the **name** field. The **name** field is what KAS will look for when trying to instantiate a specific manager. 
If no manager with that name is found, KAS will attempt to use the default manager. +The `config_json` field contains provider-specific configuration that varies by implementation. -The basic flow is as follows: -:::note -In the below example, r1 is just an example key id. -Key managers are a part of KAS, and not external services -although they can reach out to external services to perform -needed operations. -::: +### Manager Name Matching + +The **`name`** field in the provider configuration must match a registered key manager factory. KAS uses this name to instantiate the correct manager implementation: + +```mermaid +flowchart TD + A[Key with provider_config_id] --> B[Lookup Provider Config] + B --> C[Get provider name] + C --> D{Manager Factory Exists?} + D -->|Yes| E[Instantiate Manager] + D -->|No| F[Use Default Basic Manager] + E --> G[Perform Cryptographic Operation] + F --> G +``` + +## How Key Managers Work + +### Manager Instantiation Flow ```mermaid sequenceDiagram - SDK->>+KAS: Rewrap key (key id=r1) - KAS->>+Platform: Retrieve key (key id=r1) - Platform->>+KAS: Returns key - KAS->>+KAS: Get name of provider config from key - KAS->>+KAS: Instantiate key manager from provider name - KAS->>+Key Manager: Decrypt data encryption key - Key Manager->>+KAS: Return decrypted key - KAS->>+SDK: Return rewrapped data encryption key -``` - -When registering your provider configuration and your key manager the name must match. For example, the [basic manager name](https://github.com/opentdf/platform/blob/main/service/internal/security/basic_manager.go#L23) is **opentdf.io/basic**. For example, if you create a manager with name **opentdf.io/aws**, you would need to create a provider configuration with the same name and tie it to a key when creating the key. In addition, when registering the manager you need to pass in the same name as what the provider configuration expects. - -:::important -If private keys are meant to be stored within the platform database, -there should be no provider configuration added to the key. Since -an empty provider configuration is assumed to be tied to the -default manager. -::: + participant KAS + participant Platform + participant Factory as Manager Factory + participant Manager as Key Manager Instance + participant KMS as External KMS + + KAS->>+Platform: GetKey(key_id) + Platform->>+KAS: AsymmetricKey with provider_config + KAS->>+KAS: Extract provider name from config + KAS->>+Factory: Create manager by name + Factory->>+Manager: New instance with config + KAS->>+Manager: Unwrap/decrypt operation + Manager->>+KMS: Call external provider API + KMS->>+Manager: Return result + Manager->>+KAS: Return processed key material +``` + +### Default vs Custom Managers + +**Default Manager (`opentdf.io/basic`):** + +- Handles `KEY_MODE_CONFIG_ROOT_KEY` keys +- Uses local KEK (Key Encryption Key) from KAS configuration +- Private keys stored encrypted in the database +- No external dependencies + +**Custom Managers:** + +- Handle `KEY_MODE_PROVIDER_ROOT_KEY` and `KEY_MODE_REMOTE` keys +- Connect to external systems (AWS KMS, Azure Key Vault, etc.) +- Can perform remote cryptographic operations +- Loaded as shared libraries (.so files) -:::important -Provider configuration names must be unique. -You cannot delete provider configurations that are tied to keys. 
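+
+### Manager Selection Sketch
+
+To make the name-matching flow above concrete, the sketch below shows one way the lookup-with-fallback behavior could be wired up. It is illustrative only: the `KeyManager` interface and factory shape are copied from the examples later on this page, while the registry map, `managerFor` function, and error handling are assumptions for explanation, not the platform's actual internals.
+
+```go
+package main
+
+import (
+    "context"
+    "fmt"
+)
+
+// KeyManager mirrors the illustrative interface shown later on this page.
+type KeyManager interface {
+    Unwrap(ctx context.Context, keyID string, wrappedKey []byte) ([]byte, error)
+    Wrap(ctx context.Context, keyID string, plainKey []byte) ([]byte, error)
+    GetPublicKey(ctx context.Context, keyID string) ([]byte, error)
+}
+
+// KeyManagerFactory builds a manager instance from a provider's config_json.
+type KeyManagerFactory func(configJSON []byte) (KeyManager, error)
+
+// defaultManagerName is the built-in basic manager.
+const defaultManagerName = "opentdf.io/basic"
+
+// managerFor resolves a factory by the provider configuration name attached
+// to a key, falling back to the basic manager when no match is found.
+func managerFor(factories map[string]KeyManagerFactory, providerName string, configJSON []byte) (KeyManager, error) {
+    name := providerName
+    if name == "" { // keys without a provider config use the default manager
+        name = defaultManagerName
+    }
+    factory, ok := factories[name]
+    if !ok {
+        factory, ok = factories[defaultManagerName]
+        if !ok {
+            return nil, fmt.Errorf("no key manager registered for %q and no default manager available", providerName)
+        }
+    }
+    return factory(configJSON)
+}
+
+func main() {
+    factories := map[string]KeyManagerFactory{}
+    // A custom registration would look like: factories["aws-kms"] = NewAWSKMSManager
+    if _, err := managerFor(factories, "aws-kms", []byte(`{}`)); err != nil {
+        fmt.Println(err) // nothing registered yet, so even the fallback fails
+    }
+}
+```
+
+The important property is the fallback: a key whose provider configuration names an unregistered manager does not fail outright but lands on the basic manager, matching the flowchart above.
+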
+## Creating Provider Configurations + +### Using the CLI + +```bash +# Create a provider configuration for AWS KMS +otdfctl policy keymanagement provider-config create \ + --name "aws-kms-production" \ + --config-json '{ + "region": "us-east-1", + "access_key_id": "AKIA...", + "secret_access_key": "...", + "kms_key_id": "arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012" + }' + +# List all provider configurations +otdfctl policy keymanagement provider-config list + +# Get a specific configuration +otdfctl policy keymanagement provider-config get --name "aws-kms-production" +``` + +### Example Configuration JSON + +**AWS KMS Provider:** + +```json +{ + "region": "us-east-1", + "access_key_id": "AKIA...", + "secret_access_key": "...", + "kms_key_id": "arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012", + "endpoint": "https://kms.us-east-1.amazonaws.com" +} +``` + +**HashiCorp Vault Provider:** + +```json +{ + "vault_addr": "https://vault.example.com:8200", + "vault_token": "hvs.CAESIJlWh...", + "mount_path": "transit", + "key_name": "opentdf-dek-key" +} +``` + +**Azure Key Vault Provider:** + +```json +{ + "vault_url": "https://myvault.vault.azure.net/", + "tenant_id": "12345678-1234-1234-1234-123456789012", + "client_id": "87654321-4321-4321-4321-210987654321", + "client_secret": "...", + "key_name": "opentdf-master-key" +} +``` + +## Registering Custom Key Managers + +### 1. Implement the KeyManager Interface + +```go +type KeyManager interface { + // Unwrap decrypts a wrapped key using the manager's key material + Unwrap(ctx context.Context, keyID string, wrappedKey []byte) ([]byte, error) + + // Wrap encrypts a key using the manager's key material + Wrap(ctx context.Context, keyID string, plainKey []byte) ([]byte, error) + + // GetPublicKey returns the public key for the given key ID + GetPublicKey(ctx context.Context, keyID string) ([]byte, error) +} +``` + +### 2. Create a Factory Function + +```go +func NewAWSKMSManager(config []byte) (KeyManager, error) { + var cfg AWSKMSConfig + if err := json.Unmarshal(config, &cfg); err != nil { + return nil, err + } + + session := session.Must(session.NewSession(&aws.Config{ + Region: aws.String(cfg.Region), + Credentials: credentials.NewStaticCredentials( + cfg.AccessKeyID, cfg.SecretAccessKey, ""), + })) + + return &AWSKMSManager{ + kms: kms.New(session), + keyID: cfg.KMSKeyID, + }, nil +} +``` + +### 3. Register with KAS at Startup + +```go +// In your main() or init() function +server.WithKeyManagerFactory( + "aws-kms", // This name must match provider config names + NewAWSKMSManager, +) +``` + +### 4. Build as Shared Library + +```bash +# Build your manager as a shared library +go build -buildmode=plugin -o aws-kms-manager.so aws_kms_manager.go + +# Load it when starting the platform +./platform --key-manager-plugin=./aws-kms-manager.so +``` + +## Migration from Legacy Configuration + +:::important Migration Required +**Legacy server startup flags** for key manager registration are deprecated. + +**Before:** + +```bash +./kas --key-manager-factories="aws-kms=/path/to/factory.so" +``` + +**Now:** + +1. Load the plugin at startup: `--key-manager-plugin=/path/to/factory.so` +2. Create provider configurations via API +3. 
Reference provider configs when creating keys ::: -## Helpful links +## Configuration Management + +### Best Practices + +- **Unique Names**: Provider configuration names must be globally unique +- **Secure Storage**: Store sensitive configuration values in secure secret management +- **Immutable Configs**: Provider configurations cannot be deleted if keys reference them +- **Validation**: Test configurations before creating keys that depend on them + +### Configuration Lifecycle + +```bash +# Create configuration +otdfctl policy keymanagement provider-config create --name "prod-hsm" --config-json '{...}' + +# Update configuration (if no keys depend on it) +otdfctl policy keymanagement provider-config update --id "config-uuid" --config-json '{...}' + +# Delete is only allowed if no keys reference it +otdfctl policy keymanagement provider-config delete --id "config-uuid" +``` + +## Reference Links -- [key manager interface](https://github.com/opentdf/platform/blob/main/service/trust/key_manager.go#L33). -- [NamedKeyManagerFactory](https://github.com/opentdf/platform/blob/main/service/trust/key_manager.go#L62) -- [KeyManagerFactory](https://github.com/opentdf/platform/blob/main/service/trust/delegating_key_service.go#L21) +- [KeyManager Interface](https://github.com/opentdf/platform/blob/main/service/trust/key_manager.go#L33) +- [Key Management Proto](https://github.com/opentdf/platform/blob/main/service/policy/keymanagement/key_management.proto) +- [Provider Configuration RPCs](https://github.com/opentdf/platform/blob/main/service/policy/keymanagement/key_management.proto#L76) +- [Server Options for Plugin Loading](https://github.com/opentdf/platform/blob/main/service/pkg/server/options.go) diff --git a/docs/components/policy/keymanagement/key_mappings.md b/docs/components/policy/keymanagement/key_mappings.md index d394f7a..4ae02fe 100644 --- a/docs/components/policy/keymanagement/key_mappings.md +++ b/docs/components/policy/keymanagement/key_mappings.md @@ -1,75 +1,322 @@ --- -sidebar_position: 1 +sidebar_position: 3 slug: /components/policy/keymanagement/key_mappings --- -# Key mappings +# Key Mappings -:::important -Before leveraging Key Mappings make sure to [migrate](../key_access_grants.md#migration-to-key-mappings) your existing KAS Grants. -::: +:::important Grants Deprecated +**KAS Grants have been deprecated** as of platform v0.7.0 and replaced by direct key mappings. -:::important -v0.5.0 of the SDK will prefer key mappings over -grants. Meaning if a key mapping shows up for an attribute -the SDK will use the mapping and not the grant. -You **should** migrate all grants over to mappings in one -sitting +Key mappings provide a more flexible and direct way to associate encryption keys with policy objects (namespaces, attributes, and attribute values) without the intermediate KeyAccessServer layer. ::: -:::important -As of v0.7.0 of the OpenTDF platform, -grants can no longer be assigned. -::: +Key mappings establish direct relationships between individual keys and policy objects. This replaces the legacy grant system that required intermediate KeyAccessServer associations. -Key mappings are now the replacement for key access server grants. The idea for mapping a key to an attribute definition/value/namespace is the same as grants, except now we separate the keys from key access servers. Previously an admin was expected to add a key directly to the KeyAccessServer object, as either **remote** or **cached**, and then assign a key access server to an attribute. 
That process is known as a grant. Now users should create a key, and assign that key to an attribute. +## Mapping Structures -## How do I create a key mapping +The system supports three types of key mappings: -:::note -The following example only shows how to create a mapping for -attribute definitions, you can also create mappings for namespaces -and attribute values. -::: +### NamespaceKey + +```protobuf +message NamespaceKey { + string namespace_id = 1; // UUID of the namespace + string key_id = 2; // UUID of the asymmetric key +} +``` + +### AttributeKey + +```protobuf +message AttributeKey { + string attribute_id = 1; // UUID of the attribute definition + string key_id = 2; // UUID of the asymmetric key +} +``` + +### ValueKey + +```protobuf +message ValueKey { + string value_id = 1; // UUID of the attribute value + string key_id = 2; // UUID of the asymmetric key +} +``` + +## Key Mapping Storage + +Key mappings are managed internally by the OpenTDF platform and accessible through the API: + +- **Namespace Mappings** - Keys associated with entire namespaces +- **Attribute Mappings** - Keys associated with specific attribute definitions +- **Value Mappings** - Keys associated with specific attribute values + +The platform automatically manages the relationships and provides efficient lookup mechanisms for key resolution during TDF operations. + +## Managing Key Mappings + +### Assign Keys to Policy Objects + +**Namespace Assignment:** + +```bash +# Assign key to namespace +otdfctl policy namespaces key assign \ + --namespace-id "550e8400-e29b-41d4-a716-446655440001" \ + --key-id "550e8400-e29b-41d4-a716-446655440002" +``` + +**Attribute Assignment:** -1. First you will need to [create a key access server](https://github.com/opentdf/platform/blob/main/service/policy/kasregistry/key_access_server_registry.proto#L630) +```bash +# Assign key to attribute definition +otdfctl policy attributes key assign \ + --attribute-id "550e8400-e29b-41d4-a716-446655440003" \ + --key-id "550e8400-e29b-41d4-a716-446655440002" +``` + +**Value Assignment:** + +```bash +# Assign key to attribute value +otdfctl policy attributes value key assign \ + --value-id "550e8400-e29b-41d4-a716-446655440004" \ + --key-id "550e8400-e29b-41d4-a716-446655440002" +``` + +### Remove Key Mappings + +```bash +# Remove key from namespace +otdfctl policy namespaces key remove \ + --namespace-id "550e8400-e29b-41d4-a716-446655440001" \ + --key-id "550e8400-e29b-41d4-a716-446655440002" + +# Remove key from attribute +otdfctl policy attributes key remove \ + --attribute-id "550e8400-e29b-41d4-a716-446655440003" \ + --key-id "550e8400-e29b-41d4-a716-446655440002" + +# Remove key from value +otdfctl policy attributes value key remove \ + --value-id "550e8400-e29b-41d4-a716-446655440004" \ + --key-id "550e8400-e29b-41d4-a716-446655440002" +``` -OpenTDF CLI +### List Key Mappings ```bash -otdfctl policy kas-registry create --uri http://example.com/kas --name example-kas +# List all key mappings +otdfctl policy kas-registry key mappings list + +# List mappings for a specific key +otdfctl policy kas-registry key mappings list --key-id "550e8400-e29b-41d4-a716-446655440002" + +# List mappings by KAS name +otdfctl policy kas-registry key mappings list --kas-name "production-kas" ``` -2. 
Next, you will need to [create a key](https://github.com/opentdf/platform/blob/main/service/policy/kasregistry/key_access_server_registry.proto#L644) +## Complete Workflow Example -OpenTDF CLI +Here's a complete example of setting up key mappings: + +### 1. Register a Key Access Server ```bash -otdfctl key create --key-id "rsa-key-1" --algorithm "rsa:2048" --mode "local" --kas "891cfe85-b381-4f85-9699-5f7dbfe2a9ab" --wrapping-key-id "virtru-stored-key" --wrapping-key "a8c4824daafcfa38ed0d13002e92b08720e6c4fcee67d52e954c1a6e045907d1" +# Create the KAS registration +otdfctl policy kas-registry create \ + --uri "https://kas.example.com" \ + --name "production-kas" ``` -3. [create a namespace](https://github.com/opentdf/platform/blob/main/service/policy/namespaces/namespaces.proto#L180) +### 2. Create an Asymmetric Key -OpenTDF CLI +**Local Key (CONFIG_ROOT_KEY mode):** ```bash -otdfctl policy attributes namespaces create --name opentdf.io +otdfctl policy kas-registry key create \ + --kas-id "a1b2c3d4-e5f6-7890-abcd-ef1234567890" \ + --key-id "prod-rsa-2024" \ + --algorithm "ALGORITHM_RSA_2048" \ + --key-mode "KEY_MODE_CONFIG_ROOT_KEY" \ + --public-key-pem "$(cat public_key.pem)" \ + --wrapping-key-id "root-kek-001" \ + --wrapped-key "$(base64 < wrapped_private_key.bin)" ``` -4. [create an attribute](https://github.com/opentdf/platform/blob/main/service/policy/attributes/attributes.proto#L415) +**External Provider Key (PROVIDER_ROOT_KEY mode):** + +```bash +# First create a provider configuration +otdfctl policy keymanagement provider-config create \ + --name "aws-kms-prod" \ + --config-json '{"region":"us-east-1","kms_key_id":"arn:aws:kms:..."} + +# Then create key that references the provider +otdfctl policy kas-registry key create \ + --kas-id "a1b2c3d4-e5f6-7890-abcd-ef1234567890" \ + --key-id "prod-aws-kms-2024" \ + --algorithm "ALGORITHM_EC_P256" \ + --key-mode "KEY_MODE_PROVIDER_ROOT_KEY" \ + --public-key-pem "$(cat ec_public_key.pem)" \ + --provider-config-id "b2c3d4e5-f6a7-8901-bcde-f23456789012" \ + --wrapped-key "$(base64 < kms_wrapped_key.bin)" +``` + +### 3. Create Policy Objects + +```bash +# Create namespace +NAMESPACE_ID=$(otdfctl policy namespaces create --name "example.com" --output json | jq -r '.namespace.id') + +# Create attribute definition +ATTRIBUTE_ID=$(otdfctl policy attributes create \ + --namespace "$NAMESPACE_ID" \ + --name "classification" \ + --rule "ATTRIBUTE_RULE_TYPE_ENUM_ANY_OF" \ + --output json | jq -r '.attribute.id') + +# Create attribute value +VALUE_ID=$(otdfctl policy attributes values create \ + --attribute "$ATTRIBUTE_ID" \ + --value "confidential" \ + --output json | jq -r '.value.id') +``` -OpenTDF CLI +### 4. 
Assign Key to Policy Objects ```bash -otdfctl policy attributes create --namespace 3d25d33e-2469-4990-a9ed-fdd13ce74436 --name myattribute --rule ANY_OF +# Get the key ID from creation response or list command +KEY_ID="c3d4e5f6-a7b8-9012-cdef-456789012345" + +# Assign key to namespace (affects all attributes in namespace) +otdfctl policy namespaces key assign \ + --namespace-id "$NAMESPACE_ID" \ + --key-id "$KEY_ID" + +# Assign key to specific attribute (overrides namespace mapping) +otdfctl policy attributes key assign \ + --attribute-id "$ATTRIBUTE_ID" \ + --key-id "$KEY_ID" + +# Assign key to specific value (most specific, overrides attribute mapping) +otdfctl policy attributes value key assign \ + --value-id "$VALUE_ID" \ + --key-id "$KEY_ID" +``` + +## Key Mapping Behavior + +### Precedence Order + +When the SDK looks up keys for attributes, it follows this precedence: + +1. **Attribute Value** mappings (most specific) +2. **Attribute Definition** mappings +3. **Namespace** mappings (least specific) +4. **Base Key** (fallback) + +### Multiple Keys per Policy Object + +- Policy objects can have multiple keys mapped to them +- The SDK will use the first active key found +- Key rotation automatically updates mappings to maintain continuity + +### Automatic Mapping Management + +**Key Status Tracking:** + +- The `was_mapped` flag tracks whether a key has ever been assigned to a policy object +- This flag is set automatically when the first mapping is created +- Used for lifecycle management and cleanup + +**Rotation Mapping Copy:** + +- When keys are rotated, all mappings are automatically copied to the new key +- The old key status changes to `KEY_STATUS_ROTATED` +- The new key becomes `KEY_STATUS_ACTIVE` and inherits all mappings + +## Migration from Grants + +:::warning Legacy Migration Required +**KAS Grants are no longer supported** in platform v0.7.0+. Existing grants must be migrated to key mappings. + +**Legacy Grant Structure (Deprecated):** + +```text +Namespace → KeyAccessServer → [remote|cached] key +Attribute → KeyAccessServer → [remote|cached] key +Value → KeyAccessServer → [remote|cached] key ``` -5. [assign a key to an attribute](https://github.com/opentdf/platform/blob/main/service/policy/attributes/attributes.proto#L457) +**New Mapping Structure:** -OpenTDF CLI +```text +Namespace → AsymmetricKey (with KAS association) +Attribute → AsymmetricKey (with KAS association) +Value → AsymmetricKey (with KAS association) +``` + +::: + +### Migration Steps + +1. **Audit existing grants** using deprecated `ListKeyAccessServerGrants` API +2. **Create keys** for each unique key material in your grants +3. **Create mappings** using the new `AssignPublicKeyTo*` RPCs +4. **Test functionality** with SDKs to ensure proper key resolution +5. **Remove legacy** grant configurations + +### Migration CLI Commands ```bash -otdfctl policy attributes key assign --attribute 3d25d33e-2469-4990-a9ed-fdd13ce74436 --key-id 8f7e6d5c-4b3a-2d1e-9f8d-7c6b5a432f1d +# List existing grants (deprecated API) +otdfctl policy kas-registry grants list --output json + +# For each grant, create corresponding key and mapping +# (Repeat for each grant found) +otdfctl policy kas-registry key create ... +otdfctl policy attributes key assign ... ``` -Now you have successfully created a key mapping. 
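+
+### Verifying Key Resolution After Migration
+
+After migrating, it helps to confirm that key selection follows the precedence order described under Key Mapping Behavior. The sketch below is illustrative only — the struct fields and function are assumptions for explanation, not the SDK's actual implementation — but it shows the order a client-side resolver walks: value, then attribute, then namespace mappings, and finally the base key.
+
+```go
+package main
+
+import "fmt"
+
+// mappedKeys is an illustrative view of the mappings resolved for one
+// attribute value; the most specific non-empty level wins.
+type mappedKeys struct {
+    ValueKeyID     string // key assigned directly to the attribute value
+    AttributeKeyID string // key assigned to the attribute definition
+    NamespaceKeyID string // key assigned to the namespace
+}
+
+// resolveKeyID applies the documented precedence: value, then attribute,
+// then namespace mappings, and finally the platform base key.
+func resolveKeyID(m mappedKeys, baseKeyID string) (string, error) {
+    switch {
+    case m.ValueKeyID != "":
+        return m.ValueKeyID, nil
+    case m.AttributeKeyID != "":
+        return m.AttributeKeyID, nil
+    case m.NamespaceKeyID != "":
+        return m.NamespaceKeyID, nil
+    case baseKeyID != "":
+        return baseKeyID, nil
+    }
+    return "", fmt.Errorf("no key mapping or base key found")
+}
+
+func main() {
+    // Only a namespace-level mapping exists, so it wins over the base key.
+    keys := mappedKeys{NamespaceKeyID: "550e8400-e29b-41d4-a716-446655440002"}
+    keyID, err := resolveKeyID(keys, "base-key-uuid")
+    if err != nil {
+        panic(err)
+    }
+    fmt.Println("selected key:", keyID)
+}
+```
+
+If a migrated attribute resolves to the base key instead of the key you expected, that usually means the corresponding mapping was not created; the troubleshooting commands below help confirm which mappings exist.
+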
+## Troubleshooting + +### Common Issues + +**Key Not Found During TDF Operations:** + +- Verify key is in `KEY_STATUS_ACTIVE` state +- Check that mappings exist for the target policy objects +- Ensure the key's KAS is accessible from the requesting client + +**Multiple Keys Mapped:** + +- Review mapping precedence order +- Use `list key mappings` to see all associations +- Consider consolidating to a single key per policy object + +**Provider Configuration Errors:** + +- Validate provider config JSON syntax and values +- Ensure external KMS/HSM is accessible from KAS +- Check key manager plugin is loaded and registered + +### Debugging Commands + +```bash +# Check key status and details +otdfctl policy kas-registry key get --id "key-uuid" + +# View all mappings for troubleshooting +otdfctl policy kas-registry key mappings list + +# View mappings for a specific namespace +otdfctl policy namespaces get --namespace-id "namespace-uuid" --output json | jq '.namespace.kas_keys' + +# View mappings for a specific attribute +otdfctl policy attributes get --attribute-id "attribute-uuid" --output json | jq '.attribute.kas_keys' + +# View mappings for a specific value +otdfctl policy attributes values get --value-id "value-uuid" --output json | jq '.value.kas_keys' +``` diff --git a/docs/components/policy/keymanagement/migration_from_grants.md b/docs/components/policy/keymanagement/migration_from_grants.md new file mode 100644 index 0000000..57e9289 --- /dev/null +++ b/docs/components/policy/keymanagement/migration_from_grants.md @@ -0,0 +1,243 @@ +--- +sidebar_position: 6 +slug: /components/policy/keymanagement/migration +--- + +# Migration from Grants to Key Mappings + +:::warning Grants Deprecated +**KAS Grants have been deprecated** as of OpenTDF platform v0.7.0 and **removed entirely** in v0.8.0+. + +All existing grant-based configurations must be migrated to the new key mapping system. +::: + +## Migration Timeline + +- **v0.5.0**: Key mappings introduced alongside grants +- **v0.6.0**: SDKs prefer key mappings over grants when both exist +- **v0.7.0**: Grant assignment APIs deprecated (read-only) +- **v0.8.0**: Grant APIs removed entirely + +## Differences Between Systems + +### Legacy Grants (Deprecated) + +```mermaid +flowchart TD + A[Policy Object] --> B[KeyAccessServer Grant] + B --> C[KeyAccessServer] + C --> D[Cached/Remote Key] +``` + +**Characteristics:** + +- Indirect relationship through KeyAccessServer +- Limited to `cached` or `remote` key modes +- Manual key rotation required grant reassignment +- Keys embedded in KAS startup configuration + +### New Key Mappings + +```mermaid +flowchart TD + A[Policy Object] --> B[Direct Key Mapping] + B --> C[AsymmetricKey] + C --> D[KAS Association] +``` + +**Characteristics:** + +- Direct relationship to individual keys +- Support for all key modes (local, provider, remote) +- Automatic key rotation with mapping preservation +- Keys stored in database with rich metadata + +## Migration Process + +### 1. Audit Existing Grants + +```bash +# List all existing grants (deprecated API) +otdfctl policy kas-registry grants list --output json > existing_grants.json + +# Examine the grant structure +jq '.grants[] | {kas: .key_access_server, namespaces: .namespace_grants, attributes: .attribute_grants, values: .value_grants}' existing_grants.json +``` + +### 2. 
Extract Key Information + +For each grant, identify: + +- **KAS Details**: URI, name, public key information +- **Policy Objects**: Which namespaces/attributes/values are granted +- **Key Material**: Public key PEM, algorithm, key ID + +### 3. Create Keys in New System + +```bash +# For each unique key in your grants: + +# 1. Register KAS (if not already registered) +KAS_ID=$(otdfctl policy kas-registry create \ + --uri "https://existing-kas.example.com" \ + --name "migrated-kas" \ + --output json | jq -r '.key_access_server.id') + +# 2. Create the key in the new system +KEY_ID=$(otdfctl policy kas-registry key create \ + --kas-id "$KAS_ID" \ + --key-id "migrated-key-2024" \ + --algorithm "ALGORITHM_RSA_2048" \ + --key-mode "KEY_MODE_CONFIG_ROOT_KEY" \ + --public-key-pem "$(cat existing_public_key.pem)" \ + --wrapping-key-id "legacy-kek" \ + --wrapped-key "$(base64 < existing_wrapped_key.bin)" \ + --output json | jq -r '.kas_key.key.id') +``` + +### 4. Create Key Mappings + +```bash +# For each policy object that was granted in the old system: + +# Namespace mappings +otdfctl policy namespaces key assign \ + --namespace-id "$NAMESPACE_ID" \ + --key-id "$KEY_ID" + +# Attribute mappings +otdfctl policy attributes key assign \ + --attribute-id "$ATTRIBUTE_ID" \ + --key-id "$KEY_ID" + +# Value mappings +otdfctl policy attributes value key assign \ + --value-id "$VALUE_ID" \ + --key-id "$KEY_ID" +``` + +### 5. Verify Migration + +```bash +# Verify key mappings are working +otdfctl policy kas-registry key mappings list + +# Test with SDK (create/decrypt a test TDF) +# Ensure the new mappings resolve correctly +``` + +### 6. Clean Up Legacy Configuration + +```bash +# Remove deprecated grant configurations +# (This varies by deployment method) + +# Update KAS startup YAML to remove embedded keys +# Update platform configuration to remove grant references +``` + +## Field Mapping Reference + +| Legacy Grant Field | New Mapping Equivalent | +|-------------------|----------------------| +| `KeyAccessServer.id` | `AsymmetricKey.kas_id` | +| `KeyAccessServer.uri` | `KeyAccessServer.uri` (unchanged) | +| `PublicKey.cached` | `KeyMode.CONFIG_ROOT_KEY` | +| `PublicKey.remote` | `KeyMode.REMOTE` | +| `KasPublicKey.kid` | `AsymmetricKey.key_id` | +| `KasPublicKey.pem` | `PublicKeyCtx.pem` | +| `KasPublicKey.alg` | `Algorithm` enum | + +## Deprecated API Mappings + +| Deprecated RPC | New RPC | +|----------------|---------| +| ~~`AssignKeyAccessServerToNamespace`~~ | `AssignPublicKeyToNamespace` | +| ~~`AssignKeyAccessServerToAttribute`~~ | `AssignPublicKeyToAttribute` | +| ~~`AssignKeyAccessServerToValue`~~ | `AssignPublicKeyToValue` | +| ~~`RemoveKeyAccessServerFromNamespace`~~ | `RemovePublicKeyFromNamespace` | +| ~~`RemoveKeyAccessServerFromAttribute`~~ | `RemovePublicKeyFromAttribute` | +| ~~`RemoveKeyAccessServerFromValue`~~ | `RemovePublicKeyFromValue` | +| ~~`ListKeyAccessServerGrants`~~ | `ListKeyMappings` | + +## Migration Script Template + +```bash +#!/bin/bash +set -e + +# Configuration +PLATFORM_URL="https://platform.example.com" +OUTPUT_DIR="./migration_data" + +mkdir -p "$OUTPUT_DIR" + +echo "🔍 Step 1: Auditing existing grants..." +otdfctl policy kas-registry grants list --output json > "$OUTPUT_DIR/grants.json" + +echo "📝 Step 2: Analyzing grant structure..." +jq -r '.grants[] | + .namespace_grants[]? as $ns | + .attribute_grants[]? as $attr | + .value_grants[]? 
as $val | + .key_access_server | + "\(.id),\(.uri),\(.name),\($ns.id // ""),\($attr.id // ""),\($val.id // "")"' \ + "$OUTPUT_DIR/grants.json" > "$OUTPUT_DIR/migration_plan.csv" + +echo "🔑 Step 3: Creating keys and mappings..." +# Process each unique KAS and create corresponding keys +# (Implement based on your specific grant structure) + +while IFS=, read -r kas_id kas_uri kas_name ns_id attr_id val_id; do + echo "Processing grant: KAS=$kas_name, NS=$ns_id, ATTR=$attr_id, VAL=$val_id" + + # Create key if not exists + # Create mappings + # Log progress + +done < "$OUTPUT_DIR/migration_plan.csv" + +echo "✅ Step 4: Verification..." +otdfctl policy kas-registry key mappings list + +echo "🎉 Migration completed! Remember to:" +echo " 1. Test TDF operations with new mappings" +echo " 2. Update client applications if needed" +echo " 3. Remove legacy grant configurations" +echo " 4. Archive migration data for rollback if needed" +``` + +## Rollback Considerations + +:::caution Rollback Planning +Plan your rollback strategy before migrating: + +1. **Backup**: Export current grant configurations +2. **Versioning**: Tag your platform version before upgrade +3. **Testing**: Validate mappings in staging environment +4. **Gradual**: Migrate in phases (namespace by namespace) + +**Note**: Once upgraded to v0.8.0+, rollback requires platform version downgrade and configuration restoration. +::: + +## Common Migration Issues + +**Key Algorithm Mismatches:** + +- Ensure algorithm enums match between old and new systems +- Verify ECC curves are correctly mapped + +**Missing Key Material:** + +- Extract private keys from KAS configuration files +- Ensure wrapping keys are available for local keys + +**Policy Object IDs:** + +- UUIDs must match exactly between grants and mappings +- Verify namespace/attribute/value IDs haven't changed + +**Multiple KAS Instances:** + +- Migrate one KAS at a time +- Ensure key IDs are unique across all KAS instances diff --git a/docs/components/policy/keymanagement/quickstart.md b/docs/components/policy/keymanagement/quickstart.md index e7974cd..8e7de7f 100644 --- a/docs/components/policy/keymanagement/quickstart.md +++ b/docs/components/policy/keymanagement/quickstart.md @@ -1,136 +1,292 @@ --- -sidebar_position: 1 +sidebar_position: 5 slug: /components/policy/keymanagement/quickstart --- -# Quickstart for using the new key management architecture +# Key Management Quickstart -:::important -This quickstart is meant to aid users trying to test the new key management -features, by standing up a new platform. This quickstart is not meant as a -migration guide. Currently, while key management is experimental, we recommend to use key management with **ONLY** new deployments. +This quickstart guide demonstrates the complete workflow for setting up OpenTDF's modern key management system with direct key mappings. You'll create a KAS, generate keys, set up policy objects, and establish key mappings. + +:::tip Migration Users +If you're migrating from the legacy grant system, see our [Migration Guide](./migration_from_grants.md) for detailed transition instructions. ::: -Below is a guide to setting up platform to use the new key management features. +## Prerequisites -Checklist: +- OpenTDF platform v0.7.0+ running +- `otdfctl` CLI tool installed and configured +- Admin permissions for key management operations -:::important -You can spin KAS up without activating the key management features. 
-This gives you time to create or migrate keys to the platform -before KAS expects keys to come from the platform, which would -be helpful for migrations. -::: +## Step 1: Register a Key Access Server -1. Key management is activated for KAS. -2. A key has been created. -3. Assign key mappings. (optional, but recommended) -4. Base Key is set (optional, but recommended) +First, register your KAS with the platform: -## Key management is activated for KAS +```bash +# Register KAS with platform +KAS_ID=$(otdfctl policy kas-registry create \ + --uri "https://kas.example.com" \ + --name "quickstart-kas" \ + --output json | jq -r '.key_access_server.id') -- To activate key management for KAS you will need to modify the **key_management** field within the opentdf.yaml file to be true. [opentdf-dev.yaml](https://github.com/opentdf/platform/blob/6203fbaebcdd57b5b3437679465149f8ff395484/opentdf-dev.yaml#L25). -- You will also need to set a root key to be used with the basic manager. The root key should be 32 bytes long. - - In addition, the root key should be hex encoded. +echo "Created KAS with ID: $KAS_ID" +``` -## Creating a key +## Step 2: Create Encryption Keys -:::note -You can also perform all key commands with the [OpenTDF CLI](https://github.com/opentdf/otdfctl) -::: +Create keys for different use cases: -1. You should already have created and registered a **Key Access Server** with the platform via the [Create Key Access Server Endpoint](https://github.com/opentdf/platform/blob/6203fbaebcdd57b5b3437679465149f8ff395484/service/policy/kasregistry/key_access_server_registry.proto#L59). -2. Once you have a **Key Access Server** registered you should then create a key for that registered KAS. +### Local Key (Recommended for Getting Started) -The definition for creating a key can be found in the key_access_registry [proto](https://github.com/opentdf/platform/blob/6203fbaebcdd57b5b3437679465149f8ff395484/service/policy/kasregistry/key_access_server_registry.proto#L379). +```bash +# Create RSA key for general use +RSA_KEY_ID=$(otdfctl policy kas-registry key create \ + --kas-id "$KAS_ID" \ + --key-id "quickstart-rsa-2024" \ + --algorithm "ALGORITHM_RSA_2048" \ + --key-mode "KEY_MODE_CONFIG_ROOT_KEY" \ + --public-key-pem "-----BEGIN PUBLIC KEY----- +MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA1234567890ABCDEF... +-----END PUBLIC KEY-----" \ + --wrapping-key-id "quickstart-kek" \ + --wrapped-key "$(echo 'base64-encoded-wrapped-private-key' | base64)" \ + --output json | jq -r '.kas_key.key.id') -Let's look at a valid request and dissect it: +echo "Created RSA key with ID: $RSA_KEY_ID" +``` -:::note -These keys can be found within the [policy_fixtures.yaml](https://github.com/opentdf/platform/blob/6203fbaebcdd57b5b3437679465149f8ff395484/service/internal/fixtures/policy_fixtures.yaml#L528) file located -in opentdf. -::: +### ECC Key (For NanoTDF Support) + +```bash +# Create ECC key for NanoTDF compatibility +ECC_KEY_ID=$(otdfctl policy kas-registry key create \ + --kas-id "$KAS_ID" \ + --key-id "quickstart-ecc-2024" \ + --algorithm "ALGORITHM_EC_P256" \ + --key-mode "KEY_MODE_CONFIG_ROOT_KEY" \ + --public-key-pem "-----BEGIN PUBLIC KEY----- +MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE1234567890ABCDEF... 
+-----END PUBLIC KEY-----" \ + --wrapping-key-id "quickstart-kek" \ + --wrapped-key "$(echo 'base64-encoded-wrapped-ecc-key' | base64)" \ + --output json | jq -r '.kas_key.key.id') -```json5 -{ - "kas_id": "db740f79-cf85-41b2-a27f-663b0e2b169b", - "key_id": "kas-key-1", - "key_algorithm": 1, - "key_mode": 1, - "public_key_ctx": { - "pem": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvVENDQWVXZ0F3SUJBZ0lVRjA5QWpIallJOENSekVGSmpjVEREY2lkZEgwd0RRWUpLb1pJaHZjTkFRRUwKQlFBd0RqRU1NQW9HQTFVRUF3d0RhMkZ6TUI0WERUSTBNRFV3TmpFeU1UQXhNbG9YRFRJMU1EVXdOakV5TVRBeApNbG93RGpFTU1Bb0dBMVVFQXd3RGEyRnpNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDCkFRRUFsVEI5eks3dzF1MG1mOWI5bmg4U3p0K0t5ZFdvbjUyUDVNY2k4Z1YremFQWjlmM0picllHVWZWWG16RmEKbVErTjBmTjZRaDhVOWlzbzFPZ3VHWDB1eTRwV1k3em1XTXFtRjVpSk9INENBdTVnV2Vuc3I5R2FXM1lmeEtWRgpWUnpKcUx0U3pBT3lvQ0lhNVErSTJUdmdNeEZjSFYwSGN4OXU5ekdYdDdKNUdlV1pTM3I2OUg4MGRGUjdGc0lRCk1hTDZRUHhmUWNWOVJidW9weUFwOE43TktiU3p4OEZUZEJYUWE4QnVxTXNvNlZyK0crZC9oeVp6YlpVc1pEUzQKZ3RtNnJCQlUraE8zMEN6WnBaZHBETVNPdjljNGNZUXlpdElwRjBrbVdQcE02YitKUzRyN2hGUU5kY1BWVXBWeAovVGowRUFNaWsrcHpZQUxyalRLZjlHcmJkd0lEQVFBQm8xTXdVVEFkQmdOVkhRNEVGZ1FVa2RTM0JuWHNnZUtSCnVNL0hCNW9sM3lacVRvMHdId1lEVlIwakJCZ3dGb0FVa2RTM0JuWHNnZUtSdU0vSEI1b2wzeVpxVG8wd0R3WUQKVlIwVEFRSC9CQVV3QXdFQi96QU5CZ2txaGtpRzl3MEJBUXNGQUFPQ0FRRUFTZXoxQVZhNVhHVXBrNUg4THpySQp4U0VzUnRHUitSV2NJdGxMZVVLc3NPTTNzY01mUEZ6SnQyTldwd0NwSTRiY2FGQVgzeFlLR1lxVnZzVmpxbTFVCnRKYmh6dzFhUVBUT2ZvNDZlOXNGK2lxZGJpbEplRUlQZllDb0w2VXR0Rm96TC9LZ2k1eWFlSXBScTFiaHFwVjcKTVRmSm1CbHVIckZhdWNFaEFMTDJoK0tsQ1R6amJsQnBZN1hpVFZHc3JZc0V2MmF3NEh2b1pZVkZVV3IxQ1JXYgppcDB4dFZ1SXE5RFhha0ZJYWVQWlZnMHRCczVBejBzUGlpNUdUVjUzVXdmcjY4VjhBYXFRSE9yVGRQL2ZadkN3CmRXTWdKSnltc21VUis1cTJCTnJvZHlTWDd4RzZxenE0Mm5BV1ZwSlNvb0g5ZWdSYXZuZ0Q5UXRreWU5KzBuRW0KVGc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==" - }, - "private_key_ctx": { - "key_id": "config", - "wrapped_key": 
"lcbq682Gsii+jEkQ62sWIXkWWVJjgWPuvhmBt9KiYFJ/o3XClyRd4c0KDFtuohdBw2jvVGweFfspi9eWGMIzEl6bcERc52q5HviH5kcT9RcnJhUv3iN635RwHzNvjGEVKKUNBv4pfEN30w7opHPigz0KQ1VBMJNO3YD9FVA7UoHx0eJcF7ZQJJGXxwpQV0MtwlqmRpq4Q2U9vpYrj6787QHGf9hegIXNA+r0r3TCLpwc5Y3A9hHkyjw7QgXE8ueJILuMm87CerJpyZOCsrH3vAjiYAVv7/dEJqjX1AgaB84T7hxDFej9JKhIy9eKZNrNwMYmAsHpbA3jg5GEHB7E3fXEqhi1SNYaGVidtoJuS/3kVnILKNY6ifto6YLLBAuiPARc56UnUSKHYCWmNJ7BC8cHa7aHt9eQmeUTbqVf0h7h463qYCazD8XOALQ0nCyOCr9uLdWZ5A8Klqf3+j+728kZaBE7kJgR5PXtt1LLyXq8fHHOEsNYs3OtR8cwjEudjoUTI17ubBNb8XeM5zaZ/WfufCqjvPecDKzprUo7ejpkf6J8BtE+YHVQJ7M1bVoVeCKEMe/+zycfispsnflCVm6BMtYLvkU/23Edb9cM7PQGyELFNx32AtpQzLEE+5eZYaaQNYkxDiatsvd2D24ERMXfXjuah810NIZTe18q+F2+Y0KnMOKdzjWp2Qwo4uHorbvwQFL39vkTrfe23zXUcd67ZBNjIQCFsJk9nlFONF8ksnWW49hwqJ95urmhnsHdRfPkmW6HvtRuSrSw/ef8ft1Ff2VF1CJQp+Jw6bzJZdPK7lNlf7Qw7kkDKjHlgM5gjveeRWf19Cqji7G3QaI/NkqCtJeZYKffFSTmm+Hcz4mzLBJssxMth/RDZC5SiUI82agHtBByT/C+p64A7r6FxSovdMN/fMNYILw4dPWuqt6p6egMIr0SUapiPriN0bF3Z7gyGT1Avr0ugiOjAB7/zrl2Giaej1TWMXKO3wbx/EsQWi03VrlVjpHv9vcheZ7FO9bDBTRLJdr4BE8Ek4bFBk+qFFNAvWp/Fd1b6qb++DuNHj85WdwXMOiog5IRK7ceIMOLPcmxNq7axyZy+Nm2Cehr3yBONC1PFixScLHbYPBeMODhbGYz94EZbLG9bi/PyFBb4DXHiKzlk4KSgpXgQKPpRElziCbaPwZZ0eYcPHhjKU4t7ki7NTv2QXUMg+uvuVVj2mpjR12ynbbKlI42hPG1amEWBe3Rc6fxqnnBuRryLZ7DanpknvWKyQq7KZGHrFziZ0A7CURz1gOdxwlCAq/Fp1gj10jM0ZI+yhwTFrxBP2WtquKWADHF0pFnSVuJJRysPTvisRaS57wviVYBtBrtFbfaLpFs+pCebbO5+GQgopZVZH39sIgcGHcpxEMst0BwrnPfH3DQlbjVgkZQf6BbeJC5aA/kDU5RdSbQ2563BpdLEDDCwoiU95nKpMm8O3U210GQ0x/xWEaC1t2d9S6FUElTUJRXYVMePvgrZ2vFj9ya6o6nKUkxEp2iIdKgurDx0EMHvv5OtLFXNuWYWHSJ7s3DOrV66qQtXhqzUIlF4GXDUaJfgsmQ941M7yAj/tbn0GPad0ClEvVwnnpjg4Jhgf5vZBh/SpnMkhbkTHB+4j8kOWELwVsfCStSkFFtp1cUmER9EhjK8cJbw/aJU59fFfd2WGCzULHfA21DKrWzIl388kk/xqwT7Q9iOvcSR7ZstMtGTQXnxCX/Wp8Toj64H3dlGjzLhFx6uBNwXT114Rxl8efKuoUg7tdw1qWm9JeP/vMQzU9KwfyHLK6gCAIfZJgyY+rur7F+eSOEIDyidFrK3SVLQ7OYGFeWhTMKMGw7ZWenL/qX7APQ+vGh2jP52U7elOKx/TlNFyPrG/aYP3t04SueYRGvEp+zfdUM1nf/7v/DxssURKosAGGkRpo0KCsTSB8S8qu9M+07b0RDEWj/PtukegoEDu+6xk3tZdjrZVM21eDCh8q0OGm/Jx0HMeePqHco3xGS1JX1otAZwPHZiI2iccZK+WmJLLZIrXK4lHo/wYiB+NyIpd2VwgSuvnkxaNQyvlmukRi2BO35dGC5QZcUywiBfThK7sD/B22G46eQSGy7jy9IJbZ1xM+5Gquq97O8+V9R+xG2umY7H+Xot6PZF5OOHDFb6pETI8l42Ct5vlII2dfYqjedzXp/oWkuqnoaPz3gQN0KbFh9lk6oUuI1iLhpC1lHGo7zo1Wq441eQogKwaXSCplP414Qn8tQ4zW/E7cghKnzV185+iGupqNshY0eKUmK0vFLwvG+vw==" - }, -} +echo "Created ECC key with ID: $ECC_KEY_ID" ``` -- The **kas_id** should match the uuid generated by the database when the key access server was registered via the **CreateKeyAccessServer** request performed in step 1. (Required) -- The **key_id** can be any name you wish to associate with the key to be created, which will show up in the manifest of the TDF. (Required) -- The **key_algorithm** is the specific cipher bit/shape of the key and can be one of the following: (Required) - -| Key Algorithm | `alg` Value | -| -------------- | ----------- | -| `rsa:2048` | 1 | -| `rsa:4096` | 2 | -| `ec:secp256r1` | 3 | -| `ec:secp384r1` | 4 | -| `ec:secp521r1` | 5 | - -- The **key_mode** basically tells the KAS during a rewrap where to expect the key so that it can perform a decryption. 
Available modes are: (Required) - -| Key Mode | `mode` Value | Description | -| ---------------------------- | ------------ | ------------------------------------------------------------------------------------------------------- | -| `KEY_MODE_CONFIG_ROOT_KEY` | 1 | The symmetric wrapping (root) key is stored with the platform configuration and the wrapped KAS key is stored in the platform database.| -| `KEY_MODE_PROVIDER_ROOT_KEY` | 2 | The symmetric wrapping (root) key is stored external to the platform using a KMS or HSM interface. The wrapped KAS key is stored in the platform's database.| -| `KEY_MODE_REMOTE` | 3 | The private portion of the asymmetric key is stored external to the platform using a KMS or HSM interface. This is also referred to as `STRICT_MODE`. No symmetric (root) key is required here since the platform is not storing any sensitive key material. | -| `KEY_MODE_PUBLIC_KEY_ONLY` | 4 | No private key information is stored. This is used when importing another org's policy information.| - -- The **public_key_ctx** holds the public key for the asymmetric key pair. (Required) -- The **private_key_ctx** holds the encrypted private key and a **key_id** specific to the symmetric key that is wrapping the private key. For keys of mode **KEY_MODE_REMOTE**, the **key_id** within private_key_ctx is used for identifying the remote private key. Our [key managers](./key_managers.md) use the **key_id** field present within the private_key_ctx when making requests to your external KMS/HSM instead of the **key_id** field at the root of the object. We do this to allow for larger key identifiers, which might be necessary for external providers. (Ex: arns with AWS) - -:::important -Wrapped_Key is only required for KEY_MODE_CONFIG_ROOT_KEY and KEY_MODE_PROVIDER_ROOT_KEY. -Key_Id is required for all key modes except KEY_MODE_PUBLIC_KEY_ONLY -::: +:::note Key Material +In production, replace the example key material with your actual: -:::note -You can also specify metadata for the key via a common metadata structure, but that is not covered here. +- Public key PEM from your key generation process +- Base64-encoded wrapped private key +- Appropriate wrapping key ID from your KAS configuration ::: -The above JSON request covers registering a key where the asymmetric key pair will be stored within the platform's database, and the expected symmetric key that decrypts the private key will be stored within KAS. What if you want to only store a reference to a key and have that reference point to a key elsewhere? Say for a KMS, for example. That's where **KEY_MODE_REMOTE** is handy. 
- -```json5 -{ - "kas_id": "db740f79-cf85-41b2-a27f-663b0e2b169b", - "key_id": "aws-key-1", - "key_algorithm": 1, - "key_mode": 3, - "public_key_ctx": { - "pem": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvVENDQWVXZ0F3SUJBZ0lVRjA5QWpIallJOENSekVGSmpjVEREY2lkZEgwd0RRWUpLb1pJaHZjTkFRRUwKQlFBd0RqRU1NQW9HQTFVRUF3d0RhMkZ6TUI0WERUSTBNRFV3TmpFeU1UQXhNbG9YRFRJMU1EVXdOakV5TVRBeApNbG93RGpFTU1Bb0dBMVVFQXd3RGEyRnpNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDCkFRRUFsVEI5eks3dzF1MG1mOWI5bmg4U3p0K0t5ZFdvbjUyUDVNY2k4Z1YremFQWjlmM0picllHVWZWWG16RmEKbVErTjBmTjZRaDhVOWlzbzFPZ3VHWDB1eTRwV1k3em1XTXFtRjVpSk9INENBdTVnV2Vuc3I5R2FXM1lmeEtWRgpWUnpKcUx0U3pBT3lvQ0lhNVErSTJUdmdNeEZjSFYwSGN4OXU5ekdYdDdKNUdlV1pTM3I2OUg4MGRGUjdGc0lRCk1hTDZRUHhmUWNWOVJidW9weUFwOE43TktiU3p4OEZUZEJYUWE4QnVxTXNvNlZyK0crZC9oeVp6YlpVc1pEUzQKZ3RtNnJCQlUraE8zMEN6WnBaZHBETVNPdjljNGNZUXlpdElwRjBrbVdQcE02YitKUzRyN2hGUU5kY1BWVXBWeAovVGowRUFNaWsrcHpZQUxyalRLZjlHcmJkd0lEQVFBQm8xTXdVVEFkQmdOVkhRNEVGZ1FVa2RTM0JuWHNnZUtSCnVNL0hCNW9sM3lacVRvMHdId1lEVlIwakJCZ3dGb0FVa2RTM0JuWHNnZUtSdU0vSEI1b2wzeVpxVG8wd0R3WUQKVlIwVEFRSC9CQVV3QXdFQi96QU5CZ2txaGtpRzl3MEJBUXNGQUFPQ0FRRUFTZXoxQVZhNVhHVXBrNUg4THpySQp4U0VzUnRHUitSV2NJdGxMZVVLc3NPTTNzY01mUEZ6SnQyTldwd0NwSTRiY2FGQVgzeFlLR1lxVnZzVmpxbTFVCnRKYmh6dzFhUVBUT2ZvNDZlOXNGK2lxZGJpbEplRUlQZllDb0w2VXR0Rm96TC9LZ2k1eWFlSXBScTFiaHFwVjcKTVRmSm1CbHVIckZhdWNFaEFMTDJoK0tsQ1R6amJsQnBZN1hpVFZHc3JZc0V2MmF3NEh2b1pZVkZVV3IxQ1JXYgppcDB4dFZ1SXE5RFhha0ZJYWVQWlZnMHRCczVBejBzUGlpNUdUVjUzVXdmcjY4VjhBYXFRSE9yVGRQL2ZadkN3CmRXTWdKSnltc21VUis1cTJCTnJvZHlTWDd4RzZxenE0Mm5BV1ZwSlNvb0g5ZWdSYXZuZ0Q5UXRreWU5KzBuRW0KVGc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==" - }, - "private_key_ctx": { - "key_id": "arn:aws:kms:REGION:ACCOUNT-ID:key/KEY-ID", - }, - "provider_config_id": "948e8167-6f32-4eee-89b7-f0cd42ce70ea" -} +## Step 3: Set Base Key (System Default) + +```bash +# Set ECC key as base key for broad compatibility +otdfctl policy kas-registry key base set --id "$ECC_KEY_ID" + +echo "Set base key to ECC key for NanoTDF support" ``` -The above is a valid request for registering a key with the platform where the private key is stored externally from the platform. The difference between the two requests is: +## Step 4: Create Policy Structure + +Set up your attribute-based access control hierarchy: + +```bash +# Create namespace +NAMESPACE_ID=$(otdfctl policy namespaces create \ + --name "quickstart.example.com" \ + --output json | jq -r '.namespace.id') + +echo "Created namespace with ID: $NAMESPACE_ID" + +# Create attribute definition +ATTRIBUTE_ID=$(otdfctl policy attributes create \ + --namespace "$NAMESPACE_ID" \ + --name "classification" \ + --rule "ATTRIBUTE_RULE_TYPE_ENUM_ANY_OF" \ + --output json | jq -r '.attribute.id') -- The **wrapped_key** should not be within the **private_key_ctx**, in addition a provider configuration has been registered with the system. See details on [provider configuration](./key_managers.md#provider-configurations). In small detail, adding a provider configuration reference to a key tells KAS what [key manager](./key_managers.md) should be used to complete a rewrap operation. 
+echo "Created attribute with ID: $ATTRIBUTE_ID" + +# Create attribute values +SECRET_VALUE_ID=$(otdfctl policy attributes values create \ + --attribute "$ATTRIBUTE_ID" \ + --value "secret" \ + --output json | jq -r '.value.id') + +CONFIDENTIAL_VALUE_ID=$(otdfctl policy attributes values create \ + --attribute "$ATTRIBUTE_ID" \ + --value "confidential" \ + --output json | jq -r '.value.id') + +echo "Created values: secret=$SECRET_VALUE_ID, confidential=$CONFIDENTIAL_VALUE_ID" +``` + +## Step 5: Create Key Mappings + +Map keys to policy objects at different specificity levels: + +```bash +# Namespace-level mapping (affects all attributes in namespace) +otdfctl policy namespaces key assign \ + --namespace-id "$NAMESPACE_ID" \ + --key-id "$RSA_KEY_ID" + +echo "Assigned RSA key to namespace (default for all attributes)" + +# Attribute-level mapping (overrides namespace mapping) +otdfctl policy attributes key assign \ + --attribute-id "$ATTRIBUTE_ID" \ + --key-id "$ECC_KEY_ID" + +echo "Assigned ECC key to classification attribute (overrides namespace)" + +# Value-level mapping (most specific, overrides attribute mapping) +otdfctl policy attributes value key assign \ + --value-id "$SECRET_VALUE_ID" \ + --key-id "$RSA_KEY_ID" + +echo "Assigned RSA key to 'secret' value (highest priority)" +``` + +## Step 6: Verify Configuration + +```bash +# List all key mappings +echo "=== Key Mappings ===" +otdfctl policy kas-registry key mappings list + +# Check base key +echo "=== Base Key ===" +otdfctl policy kas-registry key base get + +# List all keys +echo "=== All Keys ===" +otdfctl policy kas-registry key list +``` + +## Step 7: Test with SDK + +Now test the key resolution with TDF operations: + +### Python SDK Example + +```python +from opentdf import TDFClient, NanoTDFClient, LogLevel, OIDCCredentials + +# Configure client +creds = OIDCCredentials() +creds.set_client_credentials_client_secret( + client_id="your-client-id", + client_secret="your-client-secret", + organization_name="your-org", + oidc_endpoint="https://auth.example.com" +) + +client = TDFClient( + oidc_credentials=creds, + kas_url="https://kas.example.com", + platform_url="https://platform.example.com" +) + +# Test different attribute scenarios +test_cases = [ + # Uses secret value mapping → RSA key + {"attrs": ["https://quickstart.example.com/attr/classification/value/secret"], + "file": "secret_test.txt", "expected_key": "RSA"}, + + # Uses confidential value → falls back to attribute mapping → ECC key + {"attrs": ["https://quickstart.example.com/attr/classification/value/confidential"], + "file": "confidential_test.txt", "expected_key": "ECC"}, + + # No attributes → uses base key → ECC key + {"attrs": [], "file": "base_key_test.txt", "expected_key": "ECC (base)"} +] + +for test in test_cases: + print(f"Testing {test['expected_key']}: {test['attrs']}") + + # Create TDF + with open(test['file'], 'w') as f: + f.write(f"Test content for {test['expected_key']}") + + tdf_file = f"{test['file']}.tdf" + client.encrypt_file( + file_path=test['file'], + tdf_path=tdf_file, + attributes=test['attrs'] + ) + + # Decrypt to verify + decrypted_file = f"{test['file']}.decrypted" + client.decrypt_file(tdf_file, decrypted_file) + + print(f"✅ {test['expected_key']} encryption/decryption successful") +``` + +## Understanding Key Resolution + +The system resolves keys in this priority order: + +1. **🎯 Value-level mapping** (most specific) + - `quickstart.example.com/attr/classification/value/secret` → RSA Key + +2. 
**📝 Attribute-level mapping** + - `quickstart.example.com/attr/classification/value/confidential` → ECC Key + +3. **📁 Namespace-level mapping** + - Other attributes in `quickstart.example.com` → RSA Key + +4. **🔑 Base Key** (fallback) + - No attribute mappings found → ECC Key + +## Advanced: External Provider Integration + +For production deployments using external KMS: + +```bash +# 1. Create provider configuration +PROVIDER_ID=$(otdfctl policy keymanagement provider-config create \ + --name "aws-kms-prod" \ + --config-json '{ + "region": "us-east-1", + "access_key_id": "AKIA...", + "secret_access_key": "...", + "kms_key_id": "arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012" + }' \ + --output json | jq -r '.provider_config.id') + +# 2. Create key that uses external provider +EXTERNAL_KEY_ID=$(otdfctl policy kas-registry key create \ + --kas-id "$KAS_ID" \ + --key-id "aws-kms-key-2024" \ + --algorithm "ALGORITHM_RSA_2048" \ + --key-mode "KEY_MODE_PROVIDER_ROOT_KEY" \ + --public-key-pem "$(cat kms_public_key.pem)" \ + --provider-config-id "$PROVIDER_ID" \ + --wrapped-key "$(base64 < kms_wrapped_key.bin)" \ + --output json | jq -r '.kas_key.key.id') + +echo "Created external provider key with ID: $EXTERNAL_KEY_ID" +``` -## Assigning Key Mappings (Optional, but recommended) +## Next Steps -Follow the [key mapping](./key_mappings.md) guide for creating mappings. +🎯 **You now have a fully functional key management system!** -## Setting base key (Optional, but recommended) +**Continue with:** -Follow the [base key setup](./base_key.md) guide for setting a base key. +- [Key Rotation](./index.md#key-rotation-and-lifecycle) - Learn automated key rotation +- [Provider Integration](./key_managers.md) - Connect external KMS/HSM systems +- [Advanced Mappings](./key_mappings.md) - Complex attribute hierarchies +- [Base Key Management](./base_key.md) - System-wide defaults -## Important additional comments +**For production:** -1. As of version 0.7.0 of the OpenTDF platform, there is no way to delete a key. If you would like to deactivate a key, use the **RotateKey** rpc. -2. When creating a key of mode **KEY_MODE_CONFIG_ROOT_KEY** the **wrapped_key** is expected to be base64 encoded. +- Set up proper key material generation and storage +- Configure external provider integrations +- Implement key rotation policies +- Monitor key mapping effectiveness diff --git a/docs/components/policy/subject_mappings.md b/docs/components/policy/subject_mappings.md index 5174b0b..67121e0 100644 --- a/docs/components/policy/subject_mappings.md +++ b/docs/components/policy/subject_mappings.md @@ -18,7 +18,7 @@ e-->f(Attribute Value); A Subject Mapping consists of: -1. A mapped [Attribute Value](./attributes#values) +1. A mapped [Attribute Value](./attributes) 2. A mapped Subject Condition Set 3. One or more policy [Actions](./actions) @@ -31,7 +31,7 @@ B(Subject Mapping)<--> C(Subject Condition Set); ## Actions on Attribute Value -Subject Mappings link entities to one or more Actions they are allowed to take on Attribute Values, known as entitlements. +Subject Mappings link entities to one or more Actions they are allowed to take on Attribute Values, known as entitlements. For more information, see the [Attribute Value](./attributes) and the [Actions](./actions.md) documentation. 
@@ -41,11 +41,12 @@ A Subject Condition Set is a logical structure to resolve a representation of th ### Examples -#### Subject Mapping for Executives: +#### Subject Mapping for Executives -Consider a flow where users with the idP role `vice_president` should be allowed to `read` data tagged `https://example.org/attr/role_level/value/vice_president`. +Consider a flow where users with the idP role `vice_president` should be allowed to `read` data tagged `https://example.org/attr/role_level/value/vice_president`. The Subject Mapping would contain: + 1. Action: `read` 2. Attribute Value: `https://example.org/attr/role_level/value/vice_president` 3. A Subject Condition Set with this matching logic: @@ -66,11 +67,12 @@ If the entity representation contains a field `role: vice_president`, the Subjec The inverse also applies, where an entity representation containing `role: `, or lacking the `role` field completely will resolve `false`, and the Subject Mapping would not apply (no entitlement). -#### Subject Mapping for Contributors: +#### Subject Mapping for Contributors Consider a flow where engineers with titles like `staff`, `senior`, `junior`, or `intern` should be able to `create` data tagged as `https://example.org/attr/department_level/value/contributor`. The Subject Mapping would contain: + 1. Action: `create` 2. Attribute Value: `https://example.org/attr/department_level/value/contributor` 3. A Subject Condition Set with this matching logic: @@ -95,4 +97,3 @@ subject_sets: ``` In plain language: If an entity's access token from the IdP or Entity Resolution Service (ERS) includes a `title` field with a value `staff`, `senior`, `junior`, or `intern` AND a `department` field with value `engineering`, then the Subject Mapping will apply to them, granting entitlement for the contained Action `create` on the Attribute Value of `contributor`. 
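To make the condition logic above easier to follow, the stand-alone sketch below replays the contributor Subject Condition Set against a few sample entity representations. It is purely illustrative: it does not call any OpenTDF SDK or service (the Authorization Service performs the real evaluation), and the `title`/`department` claim names are simply the ones used in the example above.

```python
# Illustrative only: the contributor Subject Condition Set logic described
# above, expressed as plain Python. The platform's Authorization Service
# performs this evaluation for real; this is just a mental model.

CONTRIBUTOR_TITLES = {"staff", "senior", "junior", "intern"}


def matches_contributor_condition_set(entity_claims: dict) -> bool:
    """Return True when the entity representation would satisfy the
    contributor Subject Condition Set (title AND department must match)."""
    title_ok = entity_claims.get("title") in CONTRIBUTOR_TITLES
    department_ok = entity_claims.get("department") == "engineering"
    # Both condition groups must resolve true (logical AND).
    return title_ok and department_ok


if __name__ == "__main__":
    print(matches_contributor_condition_set(
        {"title": "senior", "department": "engineering"}))  # True -> entitled to `create`
    print(matches_contributor_condition_set(
        {"title": "senior", "department": "marketing"}))     # False
    print(matches_contributor_condition_set({}))              # False -> no entitlement
```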
- diff --git a/docs/sdks/authorization.mdx b/docs/sdks/authorization.mdx index ea159be..c0cfbae 100644 --- a/docs/sdks/authorization.mdx +++ b/docs/sdks/authorization.mdx @@ -810,6 +810,269 @@ async function getBulkDecisions(sdk) { } ``` +#### V1 API (Legacy) + +```go +func getBulkDecisionsV1(client *sdk.SDK) { + // V1 API uses GetDecisions for bulk processing + decisionRequests := []*authorization.DecisionRequest{{ + Actions: []*policy.Action{{Name: "decrypt"}}, + EntityChains: []*authorization.EntityChain{{ + Id: "ec1", + Entities: []*authorization.Entity{{ + EntityType: &authorization.Entity_EmailAddress{ + EmailAddress: "user@company.com", + }, + Category: authorization.Entity_CATEGORY_SUBJECT, + }}, + }}, + ResourceAttributes: []*authorization.ResourceAttribute{ + { + AttributeValueFqns: []string{"https://company.com/attr/class/value/public"}, + }, + { + AttributeValueFqns: []string{"https://company.com/attr/class/value/confidential"}, + }, + }, + }} + + decisionRequest := &authorization.GetDecisionsRequest{ + DecisionRequests: decisionRequests, + } + + decisionResponse, err := client.Authorization.GetDecisions( + context.Background(), + decisionRequest, + ) + if err != nil { + log.Fatal(err) + } + + for _, dr := range decisionResponse.GetDecisionResponses() { + fmt.Printf("Entity chain %s: %v\n", + dr.GetEntityChainId(), + dr.GetDecision()) + if len(dr.GetObligations()) > 0 { + fmt.Printf("Obligations: %v\n", dr.GetObligations()) + } + } +} +``` + + + + +```java +public void getBulkDecisions(SDK sdk) throws ExecutionException, InterruptedException { + GetDecisionBulkRequest request = GetDecisionBulkRequest.newBuilder() + .addDecisionRequests( + GetDecisionMultiResourceRequest.newBuilder() + .setEntityIdentifier( + EntityIdentifier.newBuilder() + .setEntityChain( + EntityChain.newBuilder() + .addEntities( + Entity.newBuilder() + .setId("user-123") + .setEmailAddress("user@company.com") + ) + ) + ) + .setAction( + Action.newBuilder() + .setName("decrypt") + ) + .addResources( + Resource.newBuilder() + .setEphemeralId("resource-1") + .setAttributeValues( + Resource.AttributeValues.newBuilder() + .addFqns("https://company.com/attr/class/value/public") + ) + ) + .addResources( + Resource.newBuilder() + .setEphemeralId("resource-2") + .setAttributeValues( + Resource.AttributeValues.newBuilder() + .addFqns("https://company.com/attr/class/value/confidential") + ) + ) + ) + .build(); + + GetDecisionBulkResponse resp = sdk.getServices() + .authorization() + .getDecisionBulk(request) + .get(); + + for (GetDecisionMultiResourceResponse response : resp.getDecisionResponsesList()) { + if (response.hasAllPermitted()) { + System.out.println("All resources permitted: " + response.getAllPermitted().getValue()); + } + for (ResourceDecision resourceDecision : response.getResourceDecisionsList()) { + System.out.println("Resource " + resourceDecision.getEphemeralResourceId() + + ": " + resourceDecision.getDecision()); + } + } +} +``` + + + + +```javascript +async function getBulkDecisions(sdk) { + const request = { + decisionRequests: [{ + entityIdentifier: { + entityChain: { + entities: [{ + id: 'user-123', + emailAddress: 'user@company.com' + }] + } + }, + action: { + name: 'decrypt' + }, + resources: [ + { + ephemeralId: 'resource-1', + attributeValues: { + fqns: ['https://company.com/attr/class/value/public'] + } + }, + { + ephemeralId: 'resource-2', + attributeValues: { + fqns: ['https://company.com/attr/class/value/confidential'] + } + } + ] + }] + }; + + const response = await 
sdk.authorization.getDecisionBulk(request);
+
+  response.decisionResponses.forEach(resp => {
+    if (resp.allPermitted !== undefined) {
+      console.log('All resources permitted:', resp.allPermitted.value);
+    }
+    resp.resourceDecisions.forEach(resourceDecision => {
+      console.log(`Resource ${resourceDecision.ephemeralResourceId}: ${resourceDecision.decision}`);
+    });
+  });
+}
+```
+
@@ -976,6 +1239,70 @@ async function getDecisionWithToken(sdk, jwtToken) {
 }
 ```
 
+
+
+```java
+public void getDecisionWithToken(SDK sdk, String jwtToken) throws ExecutionException, InterruptedException {
+    GetDecisionRequest request = GetDecisionRequest.newBuilder()
+        .setEntityIdentifier(
+            EntityIdentifier.newBuilder()
+                .setToken(
+                    Token.newBuilder()
+                        .setId("token-1")
+                        .setJwt(jwtToken)
+                )
+        )
+        .setAction(
+            Action.newBuilder()
+                .setName("decrypt")
+        )
+        .setResource(
+            Resource.newBuilder()
+                .setAttributeValues(
+                    Resource.AttributeValues.newBuilder()
+                        .addFqns("https://company.com/attr/classification/value/public")
+                )
+        )
+        .build();
+
+    GetDecisionResponse resp = sdk.getServices()
+        .authorization()
+        
.getDecision(request) + .get(); + + System.out.println("Token-based decision: " + resp.getDecision().getDecision()); +} +``` + + + + +```javascript +async function getDecisionWithToken(sdk, jwtToken) { + const request = { + entityIdentifier: { + token: { + id: 'token-1', + jwt: jwtToken + } + }, + action: { + name: 'decrypt' + }, + resource: { + attributeValues: { + fqns: ['https://company.com/attr/classification/value/public'] + } + } + }; + + const response = await sdk.authorization.getDecision(request); + + console.log('Token-based decision:', response.decision.decision); +} +``` +
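When a decision request identifies the entity by a JWT, as in the token-based examples above, it can help while debugging to check which claims the platform will actually see inside that token. The sketch below is a minimal, illustrative helper using only the Python standard library; it assumes a standard three-part compact JWT and deliberately skips signature verification, so it is for local inspection only, never for making access decisions.

```python
import base64
import json


def peek_jwt_claims(jwt_token: str) -> dict:
    """Decode the claims (payload) segment of a compact JWT.

    Debugging aid only: the signature is NOT verified, so never use this
    to authorize anything yourself -- that is the platform's job.
    """
    parts = jwt_token.split(".")
    if len(parts) < 2:
        raise ValueError("not a compact JWT (expected header.payload.signature)")
    payload_b64 = parts[1]
    # Restore base64url padding before decoding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))


if __name__ == "__main__":
    # Build an example token whose claims are {"sub": "user-123", "role": "vice_president"}.
    example = (
        "eyJhbGciOiJIUzI1NiJ9."
        + base64.urlsafe_b64encode(
            json.dumps({"sub": "user-123", "role": "vice_president"}).encode()
        ).decode().rstrip("=")
        + ".sig"
    )
    print(peek_jwt_claims(example))
```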