diff --git a/samtranslator/schema/schema.json b/samtranslator/schema/schema.json index 652111636..f325d9626 100644 --- a/samtranslator/schema/schema.json +++ b/samtranslator/schema/schema.json @@ -1743,7 +1743,7 @@ "type": "array" }, "Type": { - "markdownDescription": "The type represents the zone of trust for the analyzer.\n\n*Allowed Values* : ACCOUNT | ORGANIZATION | ACCOUNT_UNUSED_ACCESS | ORGANIZATION_UNUSED_ACCESS", + "markdownDescription": "The type represents the zone of trust for the analyzer.\n\n*Allowed Values* : ACCOUNT | ORGANIZATION | ACCOUNT_UNUSED_ACCESS | ACCOUNT_INTERNAL_ACCESS | ORGANIZATION_INTERNAL_ACCESS | ORGANIZATION_UNUSED_ACCESS", "title": "Type", "type": "string" } @@ -25157,7 +25157,7 @@ "type": "array" }, "Email": { - "markdownDescription": "", + "markdownDescription": "Specifies the email address associated with this trading partner.", "title": "Email", "type": "string" }, @@ -25167,7 +25167,7 @@ "type": "string" }, "Phone": { - "markdownDescription": "", + "markdownDescription": "Specifies the phone number associated with the partnership.", "title": "Phone", "type": "string" }, @@ -27242,7 +27242,7 @@ "type": "string" }, "ImageType": { - "markdownDescription": "The image type to match with the instance type to select an AMI. The supported values are different for `ECS` and `EKS` resources.\n\n- **ECS** - If the `imageIdOverride` parameter isn't specified, then a recent [Amazon ECS-optimized Amazon Linux 2 AMI](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html#al2ami) ( `ECS_AL2` ) is used. If a new image type is specified in an update, but neither an `imageId` nor a `imageIdOverride` parameter is specified, then the latest Amazon ECS optimized AMI for that image type that's supported by AWS Batch is used.\n\n- **ECS_AL2** - [Amazon Linux 2](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html#al2ami) : Default for all non-GPU instance families.\n- **ECS_AL2_NVIDIA** - [Amazon Linux 2 (GPU)](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html#gpuami) : Default for all GPU instance families (for example `P4` and `G4` ) and can be used for all non AWS Graviton-based instance types.\n- **ECS_AL2023** - [Amazon Linux 2023](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html) : AWS Batch supports Amazon Linux 2023.\n\n> Amazon Linux 2023 does not support `A1` instances.\n- **ECS_AL1** - [Amazon Linux](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html#alami) . Amazon Linux has reached the end-of-life of standard support. For more information, see [Amazon Linux AMI](https://docs.aws.amazon.com/amazon-linux-ami/) .\n- **EKS** - If the `imageIdOverride` parameter isn't specified, then a recent [Amazon EKS-optimized Amazon Linux AMI](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html) ( `EKS_AL2` ) is used. 
If a new image type is specified in an update, but neither an `imageId` nor a `imageIdOverride` parameter is specified, then the latest Amazon EKS optimized AMI for that image type that AWS Batch supports is used.\n\n- **EKS_AL2** - [Amazon Linux 2](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html) : Default for all non-GPU instance families.\n- **EKS_AL2_NVIDIA** - [Amazon Linux 2 (accelerated)](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html) : Default for all GPU instance families (for example, `P4` and `G4` ) and can be used for all non AWS Graviton-based instance types.", + "markdownDescription": "The image type to match with the instance type to select an AMI. The supported values are different for `ECS` and `EKS` resources.\n\n- **ECS** - If the `imageIdOverride` parameter isn't specified, then a recent [Amazon ECS-optimized Amazon Linux 2 AMI](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html#al2ami) ( `ECS_AL2` ) is used. If a new image type is specified in an update, but neither an `imageId` nor a `imageIdOverride` parameter is specified, then the latest Amazon ECS optimized AMI for that image type that's supported by AWS Batch is used.\n\n- **ECS_AL2** - [Amazon Linux 2](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html#al2ami) : Default for all non-GPU instance families.\n- **ECS_AL2_NVIDIA** - [Amazon Linux 2 (GPU)](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html#gpuami) : Default for all GPU instance families (for example `P4` and `G4` ) and can be used for all non AWS Graviton-based instance types.\n- **ECS_AL2023** - [Amazon Linux 2023](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html) : AWS Batch supports Amazon Linux 2023.\n\n> Amazon Linux 2023 does not support `A1` instances.\n- **ECS_AL1** - [Amazon Linux](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html#alami) . Amazon Linux has reached the end-of-life of standard support. For more information, see [Amazon Linux AMI](https://docs.aws.amazon.com/amazon-linux-ami/) .\n- **EKS** - If the `imageIdOverride` parameter isn't specified, then a recent [Amazon EKS-optimized Amazon Linux AMI](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html) ( `EKS_AL2` ) is used. 
If a new image type is specified in an update, but neither an `imageId` nor a `imageIdOverride` parameter is specified, then the latest Amazon EKS optimized AMI for that image type that AWS Batch supports is used.\n\n- **EKS_AL2** - [Amazon Linux 2](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html) : Default for all non-GPU instance families.\n- **EKS_AL2_NVIDIA** - [Amazon Linux 2 (accelerated)](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html) : Default for all GPU instance families (for example, `P4` and `G4` ) and can be used for all non AWS Graviton-based instance types.\n- **EKS_AL2023** - [Amazon Linux 2023](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html) : AWS Batch supports Amazon Linux 2023.\n\n> Amazon Linux 2023 does not support `A1` instances.\n- **EKS_AL2023_NVIDIA** - [Amazon Linux 2023 (accelerated)](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html) : GPU instance families and can be used for all non AWS Graviton-based instance types.", "title": "ImageType", "type": "string" } @@ -34533,7 +34533,7 @@ "title": "LoggingConfig" }, "SchemaHandlerPackage": { - "markdownDescription": "A URL to the Amazon S3 bucket containing the Hook project package that contains the necessary files for the Hook you want to register.\n\nFor information on generating a schema handler package, see [Modeling custom CloudFormation Hooks](https://docs.aws.amazon.com/cloudformation-cli/latest/hooks-userguide/hooks-model.html) in the *AWS CloudFormation Hooks User Guide* .\n\n> To register the Hook, you must have `s3:GetObject` permissions to access the S3 objects.", + "markdownDescription": "A URL to the Amazon S3 bucket for the Hook project package that contains the necessary files for the Hook you want to register.\n\nFor information on generating a schema handler package, see [Modeling custom CloudFormation Hooks](https://docs.aws.amazon.com/cloudformation-cli/latest/hooks-userguide/hooks-model.html) in the *AWS CloudFormation Hooks User Guide* .\n\n> To register the Hook, you must have `s3:GetObject` permissions to access the S3 objects.", "title": "SchemaHandlerPackage", "type": "string" }, @@ -34788,7 +34788,7 @@ "type": "string" }, "ModulePackage": { - "markdownDescription": "A URL to the S3 bucket containing the package that contains the template fragment and schema files for the module version to register.\n\nFor more information, see [Module structure and requirements](https://docs.aws.amazon.com/cloudformation-cli/latest/userguide/modules-structure.html) in the *AWS CloudFormation Command Line Interface (CLI) User Guide* .\n\n> To register the module version, you must have `s3:GetObject` permissions to access the S3 objects.", + "markdownDescription": "A URL to the S3 bucket for the package that contains the template fragment and schema files for the module version to register.\n\nFor more information, see [Module structure and requirements](https://docs.aws.amazon.com/cloudformation-cli/latest/userguide/modules-structure.html) in the *AWS CloudFormation Command Line Interface (CLI) User Guide* .\n\n> To register the module version, you must have `s3:GetObject` permissions to access the S3 objects.", "title": "ModulePackage", "type": "string" } @@ -35094,7 +35094,7 @@ "title": "LoggingConfig" }, "SchemaHandlerPackage": { - "markdownDescription": "A URL to the S3 bucket containing the resource project package that contains the necessary files for the resource you want to register.\n\nFor information 
on generating a schema handler package, see [Modeling resource types to use with AWS CloudFormation](https://docs.aws.amazon.com/cloudformation-cli/latest/userguide/resource-type-model.html) in the *AWS CloudFormation Command Line Interface (CLI) User Guide* .\n\n> To register the resource version, you must have `s3:GetObject` permissions to access the S3 objects.", + "markdownDescription": "A URL to the S3 bucket for the resource project package that contains the necessary files for the resource you want to register.\n\nFor information on generating a schema handler package, see [Modeling resource types to use with AWS CloudFormation](https://docs.aws.amazon.com/cloudformation-cli/latest/userguide/resource-type-model.html) in the *AWS CloudFormation Command Line Interface (CLI) User Guide* .\n\n> To register the resource version, you must have `s3:GetObject` permissions to access the S3 objects.", "title": "SchemaHandlerPackage", "type": "string" }, @@ -35210,7 +35210,7 @@ "type": "array" }, "TemplateURL": { - "markdownDescription": "The URL of a file containing the template body. The URL must point to a template (max size: 1 MB) that's located in an Amazon S3 bucket. The location for an Amazon S3 bucket must start with `https://` .\n\nWhether an update causes interruptions depends on the resources that are being updated. An update never causes a nested stack to be replaced.", + "markdownDescription": "The URL of a file that contains the template body. The URL must point to a template (max size: 1 MB) that's located in an Amazon S3 bucket. The location for an Amazon S3 bucket must start with `https://` .\n\nWhether an update causes interruptions depends on the resources that are being updated. An update never causes a nested stack to be replaced.", "title": "TemplateURL", "type": "string" }, @@ -35359,12 +35359,12 @@ "type": "array" }, "TemplateBody": { - "markdownDescription": "The structure that contains the template body, with a minimum length of 1 byte and a maximum length of 51,200 bytes.\n\nYou must include either `TemplateURL` or `TemplateBody` in a StackSet, but you can't use both. Dynamic references in the `TemplateBody` may not work correctly in all cases. It's recommended to pass templates containing dynamic references through `TemplateUrl` instead.", + "markdownDescription": "The structure that contains the template body, with a minimum length of 1 byte and a maximum length of 51,200 bytes.\n\nYou must include either `TemplateURL` or `TemplateBody` in a StackSet, but you can't use both. Dynamic references in the `TemplateBody` may not work correctly in all cases. It's recommended to pass templates that contain dynamic references through `TemplateUrl` instead.", "title": "TemplateBody", "type": "string" }, "TemplateURL": { - "markdownDescription": "The URL of a file containing the template body. The URL must point to a template (max size: 1 MB) that's located in an Amazon S3 bucket or a Systems Manager document. The location for an Amazon S3 bucket must start with `https://` .\n\nConditional: You must specify only one of the following parameters: `TemplateBody` , `TemplateURL` .", + "markdownDescription": "The URL of a file that contains the template body. The URL must point to a template (max size: 1 MB) that's located in an Amazon S3 bucket or a Systems Manager document. 
The location for an Amazon S3 bucket must start with `https://` .\n\nConditional: You must specify only one of the following parameters: `TemplateBody` , `TemplateURL` .", "title": "TemplateURL", "type": "string" } @@ -35459,12 +35459,12 @@ "additionalProperties": false, "properties": { "FailureToleranceCount": { - "markdownDescription": "The number of accounts, per Region, for which this operation can fail before CloudFormation stops the operation in that Region. If the operation is stopped in a Region, CloudFormation doesn't attempt the operation in any subsequent Regions.\n\nConditional: You must specify either `FailureToleranceCount` or `FailureTolerancePercentage` (but not both).", + "markdownDescription": "The number of accounts per Region this operation can fail in before CloudFormation stops the operation in that Region. If the operation is stopped in a Region, CloudFormation doesn't attempt the operation in any subsequent Regions.\n\nConditional: You must specify either `FailureToleranceCount` or `FailureTolerancePercentage` (but not both).", "title": "FailureToleranceCount", "type": "number" }, "FailureTolerancePercentage": { - "markdownDescription": "The percentage of accounts, per Region, for which this stack operation can fail before CloudFormation stops the operation in that Region. If the operation is stopped in a Region, CloudFormation doesn't attempt the operation in any subsequent Regions.\n\nWhen calculating the number of accounts based on the specified percentage, CloudFormation rounds *down* to the next whole number.\n\nConditional: You must specify either `FailureToleranceCount` or `FailureTolerancePercentage` , but not both.", + "markdownDescription": "The percentage of accounts per Region this stack operation can fail in before CloudFormation stops the operation in that Region. If the operation is stopped in a Region, CloudFormation doesn't attempt the operation in any subsequent Regions.\n\nWhen calculating the number of accounts based on the specified percentage, CloudFormation rounds *down* to the next whole number.\n\nConditional: You must specify either `FailureToleranceCount` or `FailureTolerancePercentage` , but not both.", "title": "FailureTolerancePercentage", "type": "number" }, @@ -35713,7 +35713,7 @@ "type": "number" }, "Handle": { - "markdownDescription": "A reference to the wait condition handle used to signal this wait condition. Use the `Ref` intrinsic function to specify an [AWS::CloudFormation::WaitConditionHandle](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-cloudformation-waitconditionhandle.html) resource.\n\nAnytime you add a `WaitCondition` resource during a stack update, you must associate the wait condition with a new WaitConditionHandle resource. Don't reuse an old wait condition handle that has already been defined in the template. If you reuse a wait condition handle, the wait condition might evaluate old signals from a previous create or update stack command.\n\nUpdates aren't supported.", + "markdownDescription": "A reference to the wait condition handle used to signal this wait condition. Use the `Ref` intrinsic function to specify an [AWS::CloudFormation::WaitConditionHandle](https://docs.aws.amazon.com/AWSCloudFormation/latest/TemplateReference/aws-resource-cloudformation-waitconditionhandle.html) resource.\n\nAnytime you add a `WaitCondition` resource during a stack update, you must associate the wait condition with a new WaitConditionHandle resource. 
Don't reuse an old wait condition handle that has already been defined in the template. If you reuse a wait condition handle, the wait condition might evaluate old signals from a previous create or update stack command.\n\nUpdates aren't supported.", "title": "Handle", "type": "string" }, @@ -36597,7 +36597,7 @@ "type": "number" }, "OriginKeepaliveTimeout": { - "markdownDescription": "Specifies how long, in seconds, CloudFront persists its connection to the origin. The minimum timeout is 1 second, the maximum is 60 seconds, and the default (if you don't specify otherwise) is 5 seconds.\n\nFor more information, see [Keep-alive timeout (custom origins only)](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DownloadDistValuesOrigin.html#DownloadDistValuesOriginKeepaliveTimeout) in the *Amazon CloudFront Developer Guide* .", + "markdownDescription": "Specifies how long, in seconds, CloudFront persists its connection to the origin. The minimum timeout is 1 second, the maximum is 120 seconds, and the default (if you don't specify otherwise) is 5 seconds.\n\nFor more information, see [Keep-alive timeout (custom origins only)](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DownloadDistValuesOrigin.html#DownloadDistValuesOriginKeepaliveTimeout) in the *Amazon CloudFront Developer Guide* .", "title": "OriginKeepaliveTimeout", "type": "number" }, @@ -36607,7 +36607,7 @@ "type": "string" }, "OriginReadTimeout": { - "markdownDescription": "Specifies how long, in seconds, CloudFront waits for a response from the origin. This is also known as the *origin response timeout* . The minimum timeout is 1 second, the maximum is 60 seconds, and the default (if you don't specify otherwise) is 30 seconds.\n\nFor more information, see [Response timeout (custom origins only)](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DownloadDistValuesOrigin.html#DownloadDistValuesOriginResponseTimeout) in the *Amazon CloudFront Developer Guide* .", + "markdownDescription": "Specifies how long, in seconds, CloudFront waits for a response from the origin. This is also known as the *origin response timeout* . The minimum timeout is 1 second, the maximum is 120 seconds, and the default (if you don't specify otherwise) is 30 seconds.\n\nFor more information, see [Response timeout (custom origins only)](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DownloadDistValuesOrigin.html#DownloadDistValuesOriginResponseTimeout) in the *Amazon CloudFront Developer Guide* .", "title": "OriginReadTimeout", "type": "number" }, @@ -83822,7 +83822,7 @@ }, "Options": { "additionalProperties": true, - "markdownDescription": "The configuration options to send to the log driver.\n\nThe options you can specify depend on the log driver. Some of the options you can specify when you use the `awslogs` log driver to route logs to Amazon CloudWatch include the following:\n\n- **awslogs-create-group** - Required: No\n\nSpecify whether you want the log group to be created automatically. If this option isn't specified, it defaults to `false` .\n\n> Your IAM policy must include the `logs:CreateLogGroup` permission before you attempt to use `awslogs-create-group` .\n- **awslogs-region** - Required: Yes\n\nSpecify the AWS Region that the `awslogs` log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. 
Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.\n- **awslogs-group** - Required: Yes\n\nMake sure to specify a log group that the `awslogs` log driver sends its log streams to.\n- **awslogs-stream-prefix** - Required: Yes, when using Fargate.Optional when using EC2.\n\nUse the `awslogs-stream-prefix` option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format `prefix-name/container-name/ecs-task-id` .\n\nIf you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.\n\nFor Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.\n\nYou must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console.\n- **awslogs-datetime-format** - Required: No\n\nThis option defines a multiline start pattern in Python `strftime` format. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. The matched line is the delimiter between log messages.\n\nOne example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.\n\nFor more information, see [awslogs-datetime-format](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-datetime-format) .\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.\n- **awslogs-multiline-pattern** - Required: No\n\nThis option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. The matched line is the delimiter between log messages.\n\nFor more information, see [awslogs-multiline-pattern](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-multiline-pattern) .\n\nThis option is ignored if `awslogs-datetime-format` is also configured.\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.\n\nThe following options apply to all supported log drivers.\n\n- **mode** - Required: No\n\nValid values: `non-blocking` | `blocking`\n\nThis option defines the delivery mode of log messages from the container to the log driver specified using `logDriver` . 
The delivery mode you choose affects application availability when the flow of logs from container is interrupted.\n\nIf you use the `blocking` mode and the flow of logs is interrupted, calls from container code to write to the `stdout` and `stderr` streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.\n\nIf you use the `non-blocking` mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the `max-buffer-size` option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see [Preventing log loss with non-blocking mode in the `awslogs` container log driver](https://docs.aws.amazon.com/containers/preventing-log-loss-with-non-blocking-mode-in-the-awslogs-container-log-driver/) .\n\nYou can set a default `mode` for all containers in a specific AWS Region by using the `defaultLogDriverMode` account setting. If you don't specify the `mode` option or configure the account setting, Amazon ECS will default to the `blocking` mode. For more information about the account setting, see [Default log driver mode](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-account-settings.html#default-log-driver-mode) in the *Amazon Elastic Container Service Developer Guide* .\n\n> On June 25, 2025, Amazon ECS is changing the default log driver mode from `blocking` to `non-blocking` to prioritize task availability over logging. To continue using the `blocking` mode after this change, do one of the following:\n> \n> - Set the `mode` option in your container definition's `logConfiguration` as `blocking` .\n> - Set the `defaultLogDriverMode` account setting to `blocking` .\n- **max-buffer-size** - Required: No\n\nDefault value: `1m`\n\nWhen `non-blocking` mode is used, the `max-buffer-size` log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.\n\nTo route logs using the `splunk` log router, you need to specify a `splunk-token` and a `splunk-url` .\n\nWhen you use the `awsfirelens` log router to route logs to an AWS Service or AWS Partner Network destination for log storage and analytics, you can set the `log-driver-buffer-limit` option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help to resolve potential log loss issue because high throughput might result in memory running out for the buffer inside of Docker.\n\nOther options you can specify when using `awsfirelens` to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the AWS Region with `region` and a name for the log stream with `delivery_stream` .\n\nWhen you export logs to Amazon Kinesis Data Streams, you can specify an AWS Region with `region` and a data stream name with `stream` .\n\nWhen you export logs to Amazon OpenSearch Service, you can specify options like `Name` , `Host` (OpenSearch Service endpoint without protocol), `Port` , `Index` , `Type` , `Aws_auth` , `Aws_region` , `Suppress_Type_Name` , and `tls` . 
For more information, see [Under the hood: FireLens for Amazon ECS Tasks](https://docs.aws.amazon.com/containers/under-the-hood-firelens-for-amazon-ecs-tasks/) .\n\nWhen you export logs to Amazon S3, you can specify the bucket using the `bucket` option. You can also specify `region` , `total_file_size` , `upload_timeout` , and `use_put_object` as options.\n\nThis parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version --format '{{.Server.APIVersion}}'`", + "markdownDescription": "The configuration options to send to the log driver.\n\nThe options you can specify depend on the log driver. Some of the options you can specify when you use the `awslogs` log driver to route logs to Amazon CloudWatch include the following:\n\n- **awslogs-create-group** - Required: No\n\nSpecify whether you want the log group to be created automatically. If this option isn't specified, it defaults to `false` .\n\n> Your IAM policy must include the `logs:CreateLogGroup` permission before you attempt to use `awslogs-create-group` .\n- **awslogs-region** - Required: Yes\n\nSpecify the AWS Region that the `awslogs` log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.\n- **awslogs-group** - Required: Yes\n\nMake sure to specify a log group that the `awslogs` log driver sends its log streams to.\n- **awslogs-stream-prefix** - Required: Yes, when using Fargate. Optional when using EC2.\n\nUse the `awslogs-stream-prefix` option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format `prefix-name/container-name/ecs-task-id` .\n\nIf you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.\n\nFor Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.\n\nYou must specify a stream-prefix for your logs to appear in the Log pane when using the Amazon ECS console.\n- **awslogs-datetime-format** - Required: No\n\nThis option defines a multiline start pattern in Python `strftime` format. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. The matched line is the delimiter between log messages.\n\nOne example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. 
The correct pattern allows it to be captured in a single entry.\n\nFor more information, see [awslogs-datetime-format](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-datetime-format) .\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.\n- **awslogs-multiline-pattern** - Required: No\n\nThis option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. The matched line is the delimiter between log messages.\n\nFor more information, see [awslogs-multiline-pattern](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-multiline-pattern) .\n\nThis option is ignored if `awslogs-datetime-format` is also configured.\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.\n\nThe following options apply to all supported log drivers.\n\n- **mode** - Required: No\n\nValid values: `non-blocking` | `blocking`\n\nThis option defines the delivery mode of log messages from the container to the log driver specified using `logDriver` . The delivery mode you choose affects application availability when the flow of logs from the container is interrupted.\n\nIf you use the `blocking` mode and the flow of logs is interrupted, calls from container code to write to the `stdout` and `stderr` streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.\n\nIf you use the `non-blocking` mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the `max-buffer-size` option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see [Preventing log loss with non-blocking mode in the `awslogs` container log driver](https://docs.aws.amazon.com/containers/preventing-log-loss-with-non-blocking-mode-in-the-awslogs-container-log-driver/) .\n\nYou can set a default `mode` for all containers in a specific AWS Region by using the `defaultLogDriverMode` account setting. If you don't specify the `mode` option or configure the account setting, Amazon ECS will default to the `non-blocking` mode. For more information about the account setting, see [Default log driver mode](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-account-settings.html#default-log-driver-mode) in the *Amazon Elastic Container Service Developer Guide* .\n\n> On June 25, 2025, Amazon ECS changed the default log driver mode from `blocking` to `non-blocking` to prioritize task availability over logging. 
To continue using the `blocking` mode after this change, do one of the following:\n> \n> - Set the `mode` option in your container definition's `logConfiguration` as `blocking` .\n> - Set the `defaultLogDriverMode` account setting to `blocking` .\n- **max-buffer-size** - Required: No\n\nDefault value: `1m`\n\nWhen `non-blocking` mode is used, the `max-buffer-size` log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.\n\nTo route logs using the `splunk` log router, you need to specify a `splunk-token` and a `splunk-url` .\n\nWhen you use the `awsfirelens` log router to route logs to an AWS Service or AWS Partner Network destination for log storage and analytics, you can set the `log-driver-buffer-limit` option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help to resolve potential log loss issues because high throughput might result in memory running out for the buffer inside of Docker.\n\nOther options you can specify when using `awsfirelens` to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the AWS Region with `region` and a name for the log stream with `delivery_stream` .\n\nWhen you export logs to Amazon Kinesis Data Streams, you can specify an AWS Region with `region` and a data stream name with `stream` .\n\nWhen you export logs to Amazon OpenSearch Service, you can specify options like `Name` , `Host` (OpenSearch Service endpoint without protocol), `Port` , `Index` , `Type` , `Aws_auth` , `Aws_region` , `Suppress_Type_Name` , and `tls` . For more information, see [Under the hood: FireLens for Amazon ECS Tasks](https://docs.aws.amazon.com/containers/under-the-hood-firelens-for-amazon-ecs-tasks/) .\n\nWhen you export logs to Amazon S3, you can specify the bucket using the `bucket` option. You can also specify `region` , `total_file_size` , `upload_timeout` , and `use_put_object` as options.\n\nThis parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version --format '{{.Server.APIVersion}}'`", "patternProperties": { "^[a-zA-Z0-9]+$": { "type": "string" } }, @@ -84229,8 +84229,6 @@ "items": { "$ref": "#/definitions/AWS::ECS::TaskDefinition.InferenceAccelerator" }, - "markdownDescription": "The Elastic Inference accelerators to use for the containers in the task.", - "title": "InferenceAccelerators", "type": "array" }, "IpcMode": { @@ -84885,13 +84883,9 @@ "additionalProperties": false, "properties": { "DeviceName": { - "markdownDescription": "The Elastic Inference accelerator device name. The `deviceName` must also be referenced in a container definition as a [ResourceRequirement](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ResourceRequirement.html) .", - "title": "DeviceName", "type": "string" }, "DeviceType": { - "markdownDescription": "The Elastic Inference accelerator type to use.", - "title": "DeviceType", "type": "string" } }, @@ -84992,7 +84986,7 @@ }, "Options": { "additionalProperties": true, - "markdownDescription": "The configuration options to send to the log driver.\n\nThe options you can specify depend on the log driver. 
Some of the options you can specify when you use the `awslogs` log driver to route logs to Amazon CloudWatch include the following:\n\n- **awslogs-create-group** - Required: No\n\nSpecify whether you want the log group to be created automatically. If this option isn't specified, it defaults to `false` .\n\n> Your IAM policy must include the `logs:CreateLogGroup` permission before you attempt to use `awslogs-create-group` .\n- **awslogs-region** - Required: Yes\n\nSpecify the AWS Region that the `awslogs` log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.\n- **awslogs-group** - Required: Yes\n\nMake sure to specify a log group that the `awslogs` log driver sends its log streams to.\n- **awslogs-stream-prefix** - Required: Yes, when using Fargate.Optional when using EC2.\n\nUse the `awslogs-stream-prefix` option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format `prefix-name/container-name/ecs-task-id` .\n\nIf you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.\n\nFor Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.\n\nYou must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console.\n- **awslogs-datetime-format** - Required: No\n\nThis option defines a multiline start pattern in Python `strftime` format. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. The matched line is the delimiter between log messages.\n\nOne example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.\n\nFor more information, see [awslogs-datetime-format](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-datetime-format) .\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.\n- **awslogs-multiline-pattern** - Required: No\n\nThis option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. 
The matched line is the delimiter between log messages.\n\nFor more information, see [awslogs-multiline-pattern](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-multiline-pattern) .\n\nThis option is ignored if `awslogs-datetime-format` is also configured.\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.\n\nThe following options apply to all supported log drivers.\n\n- **mode** - Required: No\n\nValid values: `non-blocking` | `blocking`\n\nThis option defines the delivery mode of log messages from the container to the log driver specified using `logDriver` . The delivery mode you choose affects application availability when the flow of logs from container is interrupted.\n\nIf you use the `blocking` mode and the flow of logs is interrupted, calls from container code to write to the `stdout` and `stderr` streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.\n\nIf you use the `non-blocking` mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the `max-buffer-size` option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see [Preventing log loss with non-blocking mode in the `awslogs` container log driver](https://docs.aws.amazon.com/containers/preventing-log-loss-with-non-blocking-mode-in-the-awslogs-container-log-driver/) .\n\nYou can set a default `mode` for all containers in a specific AWS Region by using the `defaultLogDriverMode` account setting. If you don't specify the `mode` option or configure the account setting, Amazon ECS will default to the `blocking` mode. For more information about the account setting, see [Default log driver mode](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-account-settings.html#default-log-driver-mode) in the *Amazon Elastic Container Service Developer Guide* .\n\n> On June 25, 2025, Amazon ECS is changing the default log driver mode from `blocking` to `non-blocking` to prioritize task availability over logging. To continue using the `blocking` mode after this change, do one of the following:\n> \n> - Set the `mode` option in your container definition's `logConfiguration` as `blocking` .\n> - Set the `defaultLogDriverMode` account setting to `blocking` .\n- **max-buffer-size** - Required: No\n\nDefault value: `1m`\n\nWhen `non-blocking` mode is used, the `max-buffer-size` log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.\n\nTo route logs using the `splunk` log router, you need to specify a `splunk-token` and a `splunk-url` .\n\nWhen you use the `awsfirelens` log router to route logs to an AWS Service or AWS Partner Network destination for log storage and analytics, you can set the `log-driver-buffer-limit` option to limit the number of events that are buffered in memory, before being sent to the log router container. 
It can help to resolve potential log loss issue because high throughput might result in memory running out for the buffer inside of Docker.\n\nOther options you can specify when using `awsfirelens` to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the AWS Region with `region` and a name for the log stream with `delivery_stream` .\n\nWhen you export logs to Amazon Kinesis Data Streams, you can specify an AWS Region with `region` and a data stream name with `stream` .\n\nWhen you export logs to Amazon OpenSearch Service, you can specify options like `Name` , `Host` (OpenSearch Service endpoint without protocol), `Port` , `Index` , `Type` , `Aws_auth` , `Aws_region` , `Suppress_Type_Name` , and `tls` . For more information, see [Under the hood: FireLens for Amazon ECS Tasks](https://docs.aws.amazon.com/containers/under-the-hood-firelens-for-amazon-ecs-tasks/) .\n\nWhen you export logs to Amazon S3, you can specify the bucket using the `bucket` option. You can also specify `region` , `total_file_size` , `upload_timeout` , and `use_put_object` as options.\n\nThis parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version --format '{{.Server.APIVersion}}'`", + "markdownDescription": "The configuration options to send to the log driver.\n\nThe options you can specify depend on the log driver. Some of the options you can specify when you use the `awslogs` log driver to route logs to Amazon CloudWatch include the following:\n\n- **awslogs-create-group** - Required: No\n\nSpecify whether you want the log group to be created automatically. If this option isn't specified, it defaults to `false` .\n\n> Your IAM policy must include the `logs:CreateLogGroup` permission before you attempt to use `awslogs-create-group` .\n- **awslogs-region** - Required: Yes\n\nSpecify the AWS Region that the `awslogs` log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.\n- **awslogs-group** - Required: Yes\n\nMake sure to specify a log group that the `awslogs` log driver sends its log streams to.\n- **awslogs-stream-prefix** - Required: Yes, when using Fargate. Optional when using EC2.\n\nUse the `awslogs-stream-prefix` option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format `prefix-name/container-name/ecs-task-id` .\n\nIf you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.\n\nFor Amazon ECS services, you can use the service name as the prefix. 
Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.\n\nYou must specify a stream-prefix for your logs to appear in the Log pane when using the Amazon ECS console.\n- **awslogs-datetime-format** - Required: No\n\nThis option defines a multiline start pattern in Python `strftime` format. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. The matched line is the delimiter between log messages.\n\nOne example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.\n\nFor more information, see [awslogs-datetime-format](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-datetime-format) .\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.\n- **awslogs-multiline-pattern** - Required: No\n\nThis option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. The matched line is the delimiter between log messages.\n\nFor more information, see [awslogs-multiline-pattern](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-multiline-pattern) .\n\nThis option is ignored if `awslogs-datetime-format` is also configured.\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.\n\nThe following options apply to all supported log drivers.\n\n- **mode** - Required: No\n\nValid values: `non-blocking` | `blocking`\n\nThis option defines the delivery mode of log messages from the container to the log driver specified using `logDriver` . The delivery mode you choose affects application availability when the flow of logs from the container is interrupted.\n\nIf you use the `blocking` mode and the flow of logs is interrupted, calls from container code to write to the `stdout` and `stderr` streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.\n\nIf you use the `non-blocking` mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the `max-buffer-size` option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see [Preventing log loss with non-blocking mode in the `awslogs` container log driver](https://docs.aws.amazon.com/containers/preventing-log-loss-with-non-blocking-mode-in-the-awslogs-container-log-driver/) .\n\nYou can set a default `mode` for all containers in a specific AWS Region by using the `defaultLogDriverMode` account setting. 
If you don't specify the `mode` option or configure the account setting, Amazon ECS will default to the `non-blocking` mode. For more information about the account setting, see [Default log driver mode](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-account-settings.html#default-log-driver-mode) in the *Amazon Elastic Container Service Developer Guide* .\n\n> On June 25, 2025, Amazon ECS changed the default log driver mode from `blocking` to `non-blocking` to prioritize task availability over logging. To continue using the `blocking` mode after this change, do one of the following:\n> \n> - Set the `mode` option in your container definition's `logConfiguration` as `blocking` .\n> - Set the `defaultLogDriverMode` account setting to `blocking` .\n- **max-buffer-size** - Required: No\n\nDefault value: `1m`\n\nWhen `non-blocking` mode is used, the `max-buffer-size` log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.\n\nTo route logs using the `splunk` log router, you need to specify a `splunk-token` and a `splunk-url` .\n\nWhen you use the `awsfirelens` log router to route logs to an AWS Service or AWS Partner Network destination for log storage and analytics, you can set the `log-driver-buffer-limit` option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help to resolve potential log loss issues because high throughput might result in memory running out for the buffer inside of Docker.\n\nOther options you can specify when using `awsfirelens` to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the AWS Region with `region` and a name for the log stream with `delivery_stream` .\n\nWhen you export logs to Amazon Kinesis Data Streams, you can specify an AWS Region with `region` and a data stream name with `stream` .\n\nWhen you export logs to Amazon OpenSearch Service, you can specify options like `Name` , `Host` (OpenSearch Service endpoint without protocol), `Port` , `Index` , `Type` , `Aws_auth` , `Aws_region` , `Suppress_Type_Name` , and `tls` . For more information, see [Under the hood: FireLens for Amazon ECS Tasks](https://docs.aws.amazon.com/containers/under-the-hood-firelens-for-amazon-ecs-tasks/) .\n\nWhen you export logs to Amazon S3, you can specify the bucket using the `bucket` option. You can also specify `region` , `total_file_size` , `upload_timeout` , and `use_put_object` as options.\n\nThis parameter requires version 1.19 of the Docker Remote API or greater on your container instance. 
To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version --format '{{.Server.APIVersion}}'`", "patternProperties": { "^[a-zA-Z0-9]+$": { "type": "string" } }, @@ -166217,8 +166211,6 @@ "type": "string" }, "DBSnapshotIdentifier": { - "markdownDescription": "This parameter is not supported.\n\n`AWS::Neptune::DBInstance` does not support restoring from snapshots.\n\n`AWS::Neptune::DBCluster` does support restoring from snapshots.", - "title": "DBSnapshotIdentifier", "type": "string" }, "DBSubnetGroupName": { @@ -234317,7 +234309,7 @@ "additionalProperties": false, "properties": { "Direction": { - "markdownDescription": "Indicates whether the Resolver endpoint allows inbound or outbound DNS queries:\n\n- `INBOUND` : allows DNS queries to your VPC from your network\n- `OUTBOUND` : allows DNS queries from your VPC to your network", + "markdownDescription": "Indicates whether the Resolver endpoint allows inbound or outbound DNS queries:\n\n- `INBOUND` : allows DNS queries to your VPC from your network\n- `OUTBOUND` : allows DNS queries from your VPC to your network\n- `INBOUND_DELEGATION` : Resolver delegates queries to Route 53 private hosted zones from your network.", "title": "Direction", "type": "string" }, @@ -234348,7 +234340,7 @@ "items": { "type": "string" }, - "markdownDescription": "Protocols used for the endpoint. DoH-FIPS is applicable for inbound endpoints only.\n\nFor an inbound endpoint you can apply the protocols as follows:\n\n- Do53 and DoH in combination.\n- Do53 and DoH-FIPS in combination.\n- Do53 alone.\n- DoH alone.\n- DoH-FIPS alone.\n- None, which is treated as Do53.\n\nFor an outbound endpoint you can apply the protocols as follows:\n\n- Do53 and DoH in combination.\n- Do53 alone.\n- DoH alone.\n- None, which is treated as Do53.", + "markdownDescription": "Protocols used for the endpoint. DoH-FIPS is applicable for default inbound endpoints only.\n\nFor an inbound endpoint you can apply the protocols as follows:\n\n- Do53 and DoH in combination.\n- Do53 and DoH-FIPS in combination.\n- Do53 alone.\n- DoH alone.\n- DoH-FIPS alone.\n- None, which is treated as Do53.\n\nFor a delegation inbound endpoint you can use Do53 only.\n\nFor an outbound endpoint you can apply the protocols as follows:\n\n- Do53 and DoH in combination.\n- Do53 alone.\n- DoH alone.\n- None, which is treated as Do53.", "title": "Protocols", "type": "array" }, @@ -234613,7 +234605,7 @@ "type": "string" }, "RuleType": { - "markdownDescription": "When you want to forward DNS queries for specified domain name to resolvers on your network, specify `FORWARD` .\n\nWhen you have a forwarding rule to forward DNS queries for a domain to your network and you want Resolver to process queries for a subdomain of that domain, specify `SYSTEM` .\n\nFor example, to forward DNS queries for example.com to resolvers on your network, you create a rule and specify `FORWARD` for `RuleType` . To then have Resolver process queries for apex.example.com, you create a rule and specify `SYSTEM` for `RuleType` .\n\nCurrently, only Resolver can create rules that have a value of `RECURSIVE` for `RuleType` .", + "markdownDescription": "When you want to forward DNS queries for a specified domain name to resolvers on your network, specify `FORWARD` or `DELEGATE` . 
If a query matches multiple Resolver rules (example.com and www.example.com), outbound DNS queries are routed using the Resolver rule that contains the most specific domain name (www.example.com).\n\nWhen you have a forwarding rule to forward DNS queries for a domain to your network and you want Resolver to process queries for a subdomain of that domain, specify `SYSTEM` .\n\nFor example, to forward DNS queries for example.com to resolvers on your network, you create a rule and specify `FORWARD` for `RuleType` . To then have Resolver process queries for apex.example.com, you create a rule and specify `SYSTEM` for `RuleType` .\n\nCurrently, only Resolver can create rules that have a value of `RECURSIVE` for `RuleType` .", "title": "RuleType", "type": "string" }, @@ -236186,7 +236178,7 @@ "title": "ReplicationTime" }, "StorageClass": { - "markdownDescription": "The storage class to use when replicating objects, such as S3 Standard or reduced redundancy. By default, Amazon S3 uses the storage class of the source object to create the object replica.\n\nFor valid values, see the `StorageClass` element of the [PUT Bucket replication](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTreplication.html) action in the *Amazon S3 API Reference* .", + "markdownDescription": "The storage class to use when replicating objects, such as S3 Standard or reduced redundancy. By default, Amazon S3 uses the storage class of the source object to create the object replica.\n\nFor valid values, see the `StorageClass` element of the [PUT Bucket replication](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTreplication.html) action in the *Amazon S3 API Reference* .\n\n`FSX_OPENZFS` is not an accepted value when replicating objects.", "title": "StorageClass", "type": "string" } @@ -265090,7 +265082,7 @@ }, "DnsEntry": { "$ref": "#/definitions/AWS::VpcLattice::Service.DnsEntry", - "markdownDescription": "The DNS information of the service.", + "markdownDescription": "Describes the DNS information of the service. This field is read-only.", "title": "DnsEntry" }, "Name": { diff --git a/schema_source/cloudformation-docs.json b/schema_source/cloudformation-docs.json index 11b111a20..fe335ce9c 100644 --- a/schema_source/cloudformation-docs.json +++ b/schema_source/cloudformation-docs.json @@ -314,7 +314,7 @@ "AnalyzerName": "The name of the analyzer.", "ArchiveRules": "Specifies the archive rules to add for the analyzer. Archive rules automatically archive findings that meet the criteria you define for the rule.", "Tags": "An array of key-value pairs to apply to the analyzer. You can use the set of Unicode letters, digits, whitespace, `_` , `.` , `/` , `=` , `+` , and `-` .\n\nFor the tag key, you can specify a value that is 1 to 128 characters in length and cannot be prefixed with `aws:` .\n\nFor the tag value, you can specify a value that is 0 to 256 characters in length.", - "Type": "The type represents the zone of trust for the analyzer.\n\n*Allowed Values* : ACCOUNT | ORGANIZATION | ACCOUNT_UNUSED_ACCESS | ORGANIZATION_UNUSED_ACCESS" + "Type": "The type represents the zone of trust for the analyzer.\n\n*Allowed Values* : ACCOUNT | ORGANIZATION | ACCOUNT_UNUSED_ACCESS | ACCOUNT_INTERNAL_ACCESS | ORGANIZATION_INTERNAL_ACCESS | ORGANIZATION_UNUSED_ACCESS" }, "AWS::AccessAnalyzer::Analyzer AnalysisRule": { "Exclusions": "A list of rules for the analyzer containing criteria to exclude from analysis. Entities that meet the rule criteria will not generate findings." 
@@ -324,7 +324,7 @@ "ResourceTags": "An array of key-value pairs to match for your resources. You can use the set of Unicode letters, digits, whitespace, `_` , `.` , `/` , `=` , `+` , and `-` .\n\nFor the tag key, you can specify a value that is 1 to 128 characters in length and cannot be prefixed with `aws:` .\n\nFor the tag value, you can specify a value that is 0 to 256 characters in length. If the specified tag value is 0 characters, the rule is applied to all principals with the specified tag key." }, "AWS::AccessAnalyzer::Analyzer AnalyzerConfiguration": { - "InternalAccessConfiguration": "", + "InternalAccessConfiguration": "Specifies the configuration of an internal access analyzer for an AWS organization or account. This configuration determines how the analyzer evaluates access within your AWS environment.", "UnusedAccessConfiguration": "Specifies the configuration of an unused access analyzer for an AWS organization or account." }, "AWS::AccessAnalyzer::Analyzer ArchiveRule": { @@ -347,11 +347,11 @@ "ResourceTypes": "A list of resource types to apply to the internal access analysis rule criteria. The analyzer will only generate findings for resources of these types. These resource types are currently supported for internal access analyzers:\n\n- `AWS::S3::Bucket`\n- `AWS::RDS::DBSnapshot`\n- `AWS::RDS::DBClusterSnapshot`\n- `AWS::S3Express::DirectoryBucket`\n- `AWS::DynamoDB::Table`\n- `AWS::DynamoDB::Stream`" }, "AWS::AccessAnalyzer::Analyzer InternalAccessConfiguration": { - "InternalAccessAnalysisRule": "" + "InternalAccessAnalysisRule": "Contains information about analysis rules for the internal access analyzer. These rules determine which resources and access patterns will be analyzed." }, "AWS::AccessAnalyzer::Analyzer Tag": { - "Key": "The key name of the tag. You can specify a value that's 1 to 128 Unicode characters in length and can't be prefixed with `aws:` . digits, whitespace, `_` , `.` , `:` , `/` , `=` , `+` , `@` , `-` , and `\"` .\n\nFor more information, see [Tag](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-resource-tags.html) .", - "Value": "The value for the tag. You can specify a value that's 1 to 256 characters in length. You can use any of the following characters: the set of Unicode letters, digits, whitespace, `_` , `.` , `/` , `=` , `+` , and `-` .\n\nFor more information, see [Tag](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-resource-tags.html) ." + "Key": "The key name of the tag. You can specify a value that's 1 to 128 Unicode characters in length and can't be prefixed with `aws:` . You can use any of the following characters: the set of Unicode letters, digits, whitespace, `_` , `.` , `:` , `/` , `=` , `+` , `@` , `-` , and `\"` .\n\nFor more information, see [Tag](https://docs.aws.amazon.com/AWSCloudFormation/latest/TemplateReference/aws-properties-resource-tags.html) .", + "Value": "The value for the tag. You can specify a value that's 1 to 256 characters in length. You can use any of the following characters: the set of Unicode letters, digits, whitespace, `_` , `.` , `/` , `=` , `+` , and `-` .\n\nFor more information, see [Tag](https://docs.aws.amazon.com/AWSCloudFormation/latest/TemplateReference/aws-properties-resource-tags.html) ." }, "AWS::AccessAnalyzer::Analyzer UnusedAccessConfiguration": { "AnalysisRule": "Contains information about analysis rules for the analyzer. 
Analysis rules determine which entities will generate findings based on the criteria you define when you create the rule.", @@ -4566,15 +4566,19 @@ "AWS::B2BI::Partnership": { "Capabilities": "Returns one or more capabilities associated with this partnership.", "CapabilityOptions": "Contains the details for an Outbound EDI capability.", - "Email": "", + "Email": "Specifies the email address associated with this trading partner.", "Name": "Returns the name of the partnership.", - "Phone": "", + "Phone": "Specifies the phone number associated with the partnership.", "ProfileId": "Returns the unique, system-generated identifier for the profile connected to this partnership.", "Tags": "A key-value pair for a specific partnership. Tags are metadata that you can use to search for and group capabilities for various purposes." }, "AWS::B2BI::Partnership CapabilityOptions": { + "InboundEdi": "", "OutboundEdi": "A structure that contains the outbound EDI options." }, + "AWS::B2BI::Partnership InboundEdiOptions": { + "X12": "" + }, "AWS::B2BI::Partnership OutboundEdiOptions": { "X12": "A structure that contains an X12 envelope structure." }, @@ -4582,19 +4586,37 @@ "Key": "Specifies the name assigned to the tag that you create.", "Value": "Contains one or more values that you assigned to the key name that you create." }, + "AWS::B2BI::Partnership WrapOptions": { + "LineLength": "", + "LineTerminator": "", + "WrapBy": "" + }, + "AWS::B2BI::Partnership X12AcknowledgmentOptions": { + "FunctionalAcknowledgment": "", + "TechnicalAcknowledgment": "" + }, + "AWS::B2BI::Partnership X12ControlNumbers": { + "StartingFunctionalGroupControlNumber": "", + "StartingInterchangeControlNumber": "", + "StartingTransactionSetControlNumber": "" + }, "AWS::B2BI::Partnership X12Delimiters": { "ComponentSeparator": "", "DataElementSeparator": "", "SegmentTerminator": "" }, "AWS::B2BI::Partnership X12Envelope": { - "Common": "A container for the X12 outbound EDI headers." + "Common": "A container for the X12 outbound EDI headers.", + "WrapOptions": "" }, "AWS::B2BI::Partnership X12FunctionalGroupHeaders": { "ApplicationReceiverCode": "", "ApplicationSenderCode": "", "ResponsibleAgencyCode": "" }, + "AWS::B2BI::Partnership X12InboundEdiOptions": { + "AcknowledgmentOptions": "" + }, "AWS::B2BI::Partnership X12InterchangeControlHeaders": { "AcknowledgmentRequestedCode": "", "ReceiverId": "", @@ -4605,8 +4627,10 @@ "UsageIndicatorCode": "" }, "AWS::B2BI::Partnership X12OutboundEdiHeaders": { + "ControlNumbers": "", "Delimiters": "The delimiters, for example semicolon ( `;` ), that separates sections of the headers for the X12 object.", "FunctionalGroupHeaders": "The functional group headers for the X12 object.", + "Gs05TimeFormat": "", "InterchangeControlHeaders": "In X12 EDI messages, delimiters are used to mark the end of segments or elements, and are defined in the interchange control header.", "ValidateEdi": "Specifies whether or not to validate the EDI for this X12 object: `TRUE` or `FALSE` ." }, @@ -4631,10 +4655,14 @@ "Status": "Returns the state of the newly created transformer. The transformer can be either `active` or `inactive` . For the transformer to be used in a capability, its status must `active` .", "Tags": "A key-value pair for a specific transformer. Tags are metadata that you can use to search for and group capabilities for various purposes." 
}, + "AWS::B2BI::Transformer AdvancedOptions": { + "X12": "" + }, "AWS::B2BI::Transformer FormatOptions": { "X12": "" }, "AWS::B2BI::Transformer InputConversion": { + "AdvancedOptions": "", "FormatOptions": "", "FromFormat": "" }, @@ -4658,10 +4686,16 @@ "Key": "Specifies the name assigned to the tag that you create.", "Value": "Contains one or more values that you assigned to the key name that you create." }, + "AWS::B2BI::Transformer X12AdvancedOptions": { + "SplitOptions": "" + }, "AWS::B2BI::Transformer X12Details": { "TransactionSet": "Returns an enumerated type where each value identifies an X12 transaction set. Transaction sets are maintained by the X12 Accredited Standards Committee.", "Version": "Returns the version to use for the specified X12 transaction set." }, + "AWS::B2BI::Transformer X12SplitOptions": { + "SplitBy": "" + }, "AWS::BCMDataExports::Export": { "Export": "The details that are available for an export.", "Tags": "" @@ -4927,7 +4961,7 @@ "AWS::Batch::ComputeEnvironment Ec2ConfigurationObject": { "ImageIdOverride": "The AMI ID used for instances launched in the compute environment that match the image type. This setting overrides the `imageId` set in the `computeResource` object.\n\n> The AMI that you choose for a compute environment must match the architecture of the instance types that you intend to use for that compute environment. For example, if your compute environment uses A1 instance types, the compute resource AMI that you choose must support ARM instances. Amazon ECS vends both x86 and ARM versions of the Amazon ECS-optimized Amazon Linux 2 AMI. For more information, see [Amazon ECS-optimized Amazon Linux 2 AMI](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html#ecs-optimized-ami-linux-variants.html) in the *Amazon Elastic Container Service Developer Guide* .", "ImageKubernetesVersion": "The Kubernetes version for the compute environment. If you don't specify a value, the latest version that AWS Batch supports is used.", - "ImageType": "The image type to match with the instance type to select an AMI. The supported values are different for `ECS` and `EKS` resources.\n\n- **ECS** - If the `imageIdOverride` parameter isn't specified, then a recent [Amazon ECS-optimized Amazon Linux 2 AMI](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html#al2ami) ( `ECS_AL2` ) is used. If a new image type is specified in an update, but neither an `imageId` nor a `imageIdOverride` parameter is specified, then the latest Amazon ECS optimized AMI for that image type that's supported by AWS Batch is used.\n\n- **ECS_AL2** - [Amazon Linux 2](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html#al2ami) : Default for all non-GPU instance families.\n- **ECS_AL2_NVIDIA** - [Amazon Linux 2 (GPU)](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html#gpuami) : Default for all GPU instance families (for example `P4` and `G4` ) and can be used for all non AWS Graviton-based instance types.\n- **ECS_AL2023** - [Amazon Linux 2023](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html) : AWS Batch supports Amazon Linux 2023.\n\n> Amazon Linux 2023 does not support `A1` instances.\n- **ECS_AL1** - [Amazon Linux](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html#alami) . Amazon Linux has reached the end-of-life of standard support. 
For more information, see [Amazon Linux AMI](https://docs.aws.amazon.com/amazon-linux-ami/) .\n- **EKS** - If the `imageIdOverride` parameter isn't specified, then a recent [Amazon EKS-optimized Amazon Linux AMI](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html) ( `EKS_AL2` ) is used. If a new image type is specified in an update, but neither an `imageId` nor a `imageIdOverride` parameter is specified, then the latest Amazon EKS optimized AMI for that image type that AWS Batch supports is used.\n\n- **EKS_AL2** - [Amazon Linux 2](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html) : Default for all non-GPU instance families.\n- **EKS_AL2_NVIDIA** - [Amazon Linux 2 (accelerated)](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html) : Default for all GPU instance families (for example, `P4` and `G4` ) and can be used for all non AWS Graviton-based instance types." + "ImageType": "The image type to match with the instance type to select an AMI. The supported values are different for `ECS` and `EKS` resources.\n\n- **ECS** - If the `imageIdOverride` parameter isn't specified, then a recent [Amazon ECS-optimized Amazon Linux 2 AMI](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html#al2ami) ( `ECS_AL2` ) is used. If a new image type is specified in an update, but neither an `imageId` nor a `imageIdOverride` parameter is specified, then the latest Amazon ECS optimized AMI for that image type that's supported by AWS Batch is used.\n\n- **ECS_AL2** - [Amazon Linux 2](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html#al2ami) : Default for all non-GPU instance families.\n- **ECS_AL2_NVIDIA** - [Amazon Linux 2 (GPU)](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html#gpuami) : Default for all GPU instance families (for example `P4` and `G4` ) and can be used for all non AWS Graviton-based instance types.\n- **ECS_AL2023** - [Amazon Linux 2023](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html) : AWS Batch supports Amazon Linux 2023.\n\n> Amazon Linux 2023 does not support `A1` instances.\n- **ECS_AL1** - [Amazon Linux](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html#alami) . Amazon Linux has reached the end-of-life of standard support. For more information, see [Amazon Linux AMI](https://docs.aws.amazon.com/amazon-linux-ami/) .\n- **EKS** - If the `imageIdOverride` parameter isn't specified, then a recent [Amazon EKS-optimized Amazon Linux AMI](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html) ( `EKS_AL2` ) is used. 
If a new image type is specified in an update, but neither an `imageId` nor a `imageIdOverride` parameter is specified, then the latest Amazon EKS optimized AMI for that image type that AWS Batch supports is used.\n\n- **EKS_AL2** - [Amazon Linux 2](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html) : Default for all non-GPU instance families.\n- **EKS_AL2_NVIDIA** - [Amazon Linux 2 (accelerated)](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html) : Default for all GPU instance families (for example, `P4` and `G4` ) and can be used for all non AWS Graviton-based instance types.\n- **EKS_AL2023** - [Amazon Linux 2023](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html) : AWS Batch supports Amazon Linux 2023.\n\n> Amazon Linux 2023 does not support `A1` instances.\n- **EKS_AL2023_NVIDIA** - [Amazon Linux 2023 (accelerated)](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html) : GPU instance families and can be used for all non AWS Graviton-based instance types." }, "AWS::Batch::ComputeEnvironment EksConfiguration": { "EksClusterArn": "The Amazon Resource Name (ARN) of the Amazon EKS cluster. An example is `arn: *aws* :eks: *us-east-1* : *123456789012* :cluster/ *ClusterForBatch*` .", @@ -4937,12 +4971,14 @@ "LaunchTemplateId": "The ID of the launch template.", "LaunchTemplateName": "The name of the launch template.", "Overrides": "A launch template to use in place of the default launch template. You must specify either the launch template ID or launch template name in the request, but not both.\n\nYou can specify up to ten (10) launch template overrides that are associated to unique instance types or families for each compute environment.\n\n> To unset all override templates for a compute environment, you can pass an empty array to the [UpdateComputeEnvironment.overrides](https://docs.aws.amazon.com/batch/latest/APIReference/API_UpdateComputeEnvironment.html) parameter, or not include the `overrides` parameter when submitting the `UpdateComputeEnvironment` API operation.", + "UserdataType": "The EKS node initialization process to use. You only need to specify this value if you are using a custom AMI. The default value is `EKS_BOOTSTRAP_SH` . If *imageType* is a custom AMI based on EKS_AL2023 or EKS_AL2023_NVIDIA then you must choose `EKS_NODEADM` .", "Version": "The version number of the launch template, `$Default` , or `$Latest` .\n\nIf the value is `$Default` , the default version of the launch template is used. If the value is `$Latest` , the latest version of the launch template is used.\n\n> If the AMI ID that's used in a compute environment is from the launch template, the AMI isn't changed when the compute environment is updated. It's only changed if the `updateToLatestImageVersion` parameter for the compute environment is set to `true` . During an infrastructure update, if either `$Default` or `$Latest` is specified, AWS Batch re-evaluates the launch template version, and it might use a different version of the launch template. This is the case even if the launch template isn't specified in the update. When updating a compute environment, changing the launch template requires an infrastructure update of the compute environment. For more information, see [Updating compute environments](https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html) in the *AWS Batch User Guide* . 
\n\nDefault: `$Default`\n\nLatest: `$Latest`" }, "AWS::Batch::ComputeEnvironment LaunchTemplateSpecificationOverride": { "LaunchTemplateId": "The ID of the launch template.\n\n*Note:* If you specify the `launchTemplateId` you can't specify the `launchTemplateName` as well.", "LaunchTemplateName": "The name of the launch template.\n\n*Note:* If you specify the `launchTemplateName` you can't specify the `launchTemplateId` as well.", "TargetInstanceTypes": "The instance type or family that this override launch template should be applied to.\n\nThis parameter is required when defining a launch template override.\n\nInformation included in this parameter must meet the following requirements:\n\n- Must be a valid Amazon EC2 instance type or family.\n- `optimal` isn't allowed.\n- `targetInstanceTypes` can target only instance types and families that are included within the [`ComputeResource.instanceTypes`](https://docs.aws.amazon.com/batch/latest/APIReference/API_ComputeResource.html#Batch-Type-ComputeResource-instanceTypes) set. `targetInstanceTypes` doesn't need to include all of the instances from the `instanceType` set, but at least a subset. For example, if `ComputeResource.instanceTypes` includes `[m5, g5]` , `targetInstanceTypes` can include `[m5.2xlarge]` and `[m5.large]` but not `[c5.large]` .\n- `targetInstanceTypes` included within the same launch template override or across launch template overrides can't overlap for the same compute environment. For example, you can't define one launch template override to target an instance family and another define an instance type within this same family.", + "UserdataType": "The EKS node initialization process to use. You only need to specify this value if you are using a custom AMI. The default value is `EKS_BOOTSTRAP_SH` . If *imageType* is a custom AMI based on EKS_AL2023 or EKS_AL2023_NVIDIA then you must choose `EKS_NODEADM` .", "Version": "The version number of the launch template, `$Default` , or `$Latest` .\n\nIf the value is `$Default` , the default version of the launch template is used. If the value is `$Latest` , the latest version of the launch template is used.\n\n> If the AMI ID that's used in a compute environment is from the launch template, the AMI isn't changed when the compute environment is updated. It's only changed if the `updateToLatestImageVersion` parameter for the compute environment is set to `true` . During an infrastructure update, if either `$Default` or `$Latest` is specified, AWS Batch re-evaluates the launch template version, and it might use a different version of the launch template. This is the case even if the launch template isn't specified in the update. When updating a compute environment, changing the launch template requires an infrastructure update of the compute environment. For more information, see [Updating compute environments](https://docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html) in the *AWS Batch User Guide* . 
\n\nDefault: `$Default`\n\nLatest: `$Latest`" }, "AWS::Batch::ComputeEnvironment UpdatePolicy": { @@ -6191,6 +6227,7 @@ "BlockedOutputsMessaging": "The message to return when the guardrail blocks a model response.", "ContentPolicyConfig": "The content filter policies to configure for the guardrail.", "ContextualGroundingPolicyConfig": "", + "CrossRegionConfig": "", "Description": "A description of the guardrail.", "KmsKeyArn": "The ARN of the AWS KMS key that you use to encrypt the guardrail.", "Name": "The name of the guardrail.", @@ -6222,6 +6259,9 @@ "AWS::Bedrock::Guardrail ContextualGroundingPolicyConfig": { "FiltersConfig": "" }, + "AWS::Bedrock::Guardrail GuardrailCrossRegionConfig": { + "GuardrailProfileArn": "" + }, "AWS::Bedrock::Guardrail ManagedWordsConfig": { "InputAction": "", "InputEnabled": "", @@ -7540,7 +7580,7 @@ "InputParams": "Specifies the S3 location where your input parameters are located." }, "AWS::CloudFormation::GuardHook S3Location": { - "Uri": "Specifies the S3 path to the file containing your Guard rules or input parameters (in the form `s3:///` ).\n\nFor Guard rules, the object stored in S3 must have one of the following file extensions: `.guard` , `.zip` , or `.tar.gz` .\n\nFor input parameters, the object stored in S3 must have one of the following file extensions: `.yaml` , `.json` , `.zip` , or `.tar.gz` .", + "Uri": "Specifies the S3 path to the file that contains your Guard rules or input parameters (in the form `s3:///` ).\n\nFor Guard rules, the object stored in S3 must have one of the following file extensions: `.guard` , `.zip` , or `.tar.gz` .\n\nFor input parameters, the object stored in S3 must have one of the following file extensions: `.yaml` , `.json` , `.zip` , or `.tar.gz` .", "VersionId": "For S3 buckets with versioning enabled, specifies the unique ID of the S3 object version to download your Guard rules or input parameters from.\n\nThe Guard Hook downloads files from S3 every time the Hook is invoked. To prevent accidental changes or deletions, we recommend using a version when configuring your Guard Hook." }, "AWS::CloudFormation::GuardHook StackFilters": { @@ -7578,7 +7618,7 @@ "AWS::CloudFormation::HookVersion": { "ExecutionRoleArn": "The Amazon Resource Name (ARN) of the task execution role that grants the Hook permission.", "LoggingConfig": "Contains logging configuration information for an extension.", - "SchemaHandlerPackage": "A URL to the Amazon S3 bucket containing the Hook project package that contains the necessary files for the Hook you want to register.\n\nFor information on generating a schema handler package, see [Modeling custom CloudFormation Hooks](https://docs.aws.amazon.com/cloudformation-cli/latest/hooks-userguide/hooks-model.html) in the *AWS CloudFormation Hooks User Guide* .\n\n> To register the Hook, you must have `s3:GetObject` permissions to access the S3 objects.", + "SchemaHandlerPackage": "A URL to the Amazon S3 bucket for the Hook project package that contains the necessary files for the Hook you want to register.\n\nFor information on generating a schema handler package, see [Modeling custom CloudFormation Hooks](https://docs.aws.amazon.com/cloudformation-cli/latest/hooks-userguide/hooks-model.html) in the *AWS CloudFormation Hooks User Guide* .\n\n> To register the Hook, you must have `s3:GetObject` permissions to access the S3 objects.", "TypeName": "The unique name for your hook. 
Specifies a three-part namespace for your hook, with a recommended pattern of `Organization::Service::Hook` .\n\n> The following organization namespaces are reserved and can't be used in your hook type names:\n> \n> - `Alexa`\n> - `AMZN`\n> - `Amazon`\n> - `ASK`\n> - `AWS`\n> - `Custom`\n> - `Dev`" }, "AWS::CloudFormation::HookVersion LoggingConfig": { @@ -7630,7 +7670,7 @@ }, "AWS::CloudFormation::ModuleVersion": { "ModuleName": "The name of the module being registered.", - "ModulePackage": "A URL to the S3 bucket containing the package that contains the template fragment and schema files for the module version to register.\n\nFor more information, see [Module structure and requirements](https://docs.aws.amazon.com/cloudformation-cli/latest/userguide/modules-structure.html) in the *AWS CloudFormation Command Line Interface (CLI) User Guide* .\n\n> To register the module version, you must have `s3:GetObject` permissions to access the S3 objects." + "ModulePackage": "A URL to the S3 bucket for the package that contains the template fragment and schema files for the module version to register.\n\nFor more information, see [Module structure and requirements](https://docs.aws.amazon.com/cloudformation-cli/latest/userguide/modules-structure.html) in the *AWS CloudFormation Command Line Interface (CLI) User Guide* .\n\n> To register the module version, you must have `s3:GetObject` permissions to access the S3 objects." }, "AWS::CloudFormation::PublicTypeVersion": { "Arn": "The Amazon Resource Number (ARN) of the extension.\n\nConditional: You must specify `Arn` , or `TypeName` and `Type` .", @@ -7651,7 +7691,7 @@ "AWS::CloudFormation::ResourceVersion": { "ExecutionRoleArn": "The Amazon Resource Name (ARN) of the IAM role for CloudFormation to assume when invoking the resource. If your resource calls AWS APIs in any of its handlers, you must create an IAM execution role that includes the necessary permissions to call those AWS APIs, and provision that execution role in your account. 
When CloudFormation needs to invoke the resource type handler, CloudFormation assumes this execution role to create a temporary session token, which it then passes to the resource type handler, thereby supplying your resource type with the appropriate credentials.", "LoggingConfig": "Logging configuration information for a resource.", - "SchemaHandlerPackage": "A URL to the S3 bucket containing the resource project package that contains the necessary files for the resource you want to register.\n\nFor information on generating a schema handler package, see [Modeling resource types to use with AWS CloudFormation](https://docs.aws.amazon.com/cloudformation-cli/latest/userguide/resource-type-model.html) in the *AWS CloudFormation Command Line Interface (CLI) User Guide* .\n\n> To register the resource version, you must have `s3:GetObject` permissions to access the S3 objects.", + "SchemaHandlerPackage": "A URL to the S3 bucket for the resource project package that contains the necessary files for the resource you want to register.\n\nFor information on generating a schema handler package, see [Modeling resource types to use with AWS CloudFormation](https://docs.aws.amazon.com/cloudformation-cli/latest/userguide/resource-type-model.html) in the *AWS CloudFormation Command Line Interface (CLI) User Guide* .\n\n> To register the resource version, you must have `s3:GetObject` permissions to access the S3 objects.", "TypeName": "The name of the resource being registered.\n\nWe recommend that resource names adhere to the following pattern: *company_or_organization* :: *service* :: *type* .\n\n> The following organization namespaces are reserved and can't be used in your resource names:\n> \n> - `Alexa`\n> - `AMZN`\n> - `Amazon`\n> - `AWS`\n> - `Custom`\n> - `Dev`" }, "AWS::CloudFormation::ResourceVersion LoggingConfig": { @@ -7659,7 +7699,7 @@ "LogRoleArn": "The ARN of the role that CloudFormation should assume when sending log entries to CloudWatch logs." }, "AWS::CloudFormation::Stack": { - "Capabilities": "In some cases, you must explicitly acknowledge that your stack template contains certain capabilities in order for CloudFormation to create the stack.\n\n- `CAPABILITY_IAM` and `CAPABILITY_NAMED_IAM`\n\nSome stack templates might include resources that can affect permissions in your AWS account ; for example, by creating new AWS Identity and Access Management (IAM) users. 
For those stacks, you must explicitly acknowledge this by specifying one of these capabilities.\n\nThe following IAM resources require you to specify either the `CAPABILITY_IAM` or `CAPABILITY_NAMED_IAM` capability.\n\n- If you have IAM resources, you can specify either capability.\n- If you have IAM resources with custom names, you *must* specify `CAPABILITY_NAMED_IAM` .\n- If you don't specify either of these capabilities, CloudFormation returns an `InsufficientCapabilities` error.\n\nIf your stack template contains these resources, we recommend that you review all permissions associated with them and edit their permissions if necessary.\n\n- [AWS::IAM::AccessKey](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-accesskey.html)\n- [AWS::IAM::Group](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-group.html)\n- [AWS::IAM::InstanceProfile](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-instanceprofile.html)\n- [AWS::IAM::Policy](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-policy.html)\n- [AWS::IAM::Role](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-role.html)\n- [AWS::IAM::User](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-user.html)\n- [AWS::IAM::UserToGroupAddition](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-usertogroupaddition.html)\n\nFor more information, see [Acknowledging IAM resources in CloudFormation templates](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/control-access-with-iam.html#using-iam-capabilities) in the *AWS CloudFormation User Guide* .\n- `CAPABILITY_AUTO_EXPAND`\n\nSome template contain macros. Macros perform custom processing on templates; this can include simple actions like find-and-replace operations, all the way to extensive transformations of entire templates. Because of this, users typically create a change set from the processed template, so that they can review the changes resulting from the macros before actually creating the stack. If your stack template contains one or more macros, and you choose to create a stack directly from the processed template, without first reviewing the resulting changes in a change set, you must acknowledge this capability. This includes the [AWS::Include](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/transform-aws-include.html) and [AWS::Serverless](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/transform-aws-serverless.html) transforms, which are macros hosted by CloudFormation .\n\nIf you want to create a stack from a stack template that contains macros *and* nested stacks, you must create the stack directly from the template using this capability.\n\n> You should only create stacks directly from a stack template that contains macros if you know what processing the macro performs.\n> \n> Each macro relies on an underlying Lambda service function for processing stack templates. Be aware that the Lambda function owner can update the function operation without CloudFormation being notified. 
\n\nFor more information, see [Perform custom processing on CloudFormation templates with template macros](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-macros.html) in the *AWS CloudFormation User Guide* .", + "Capabilities": "In some cases, you must explicitly acknowledge that your stack template contains certain capabilities in order for CloudFormation to create the stack.\n\n- `CAPABILITY_IAM` and `CAPABILITY_NAMED_IAM`\n\nSome stack templates might include resources that can affect permissions in your AWS account ; for example, by creating new AWS Identity and Access Management (IAM) users. For those stacks, you must explicitly acknowledge this by specifying one of these capabilities.\n\nThe following IAM resources require you to specify either the `CAPABILITY_IAM` or `CAPABILITY_NAMED_IAM` capability.\n\n- If you have IAM resources, you can specify either capability.\n- If you have IAM resources with custom names, you *must* specify `CAPABILITY_NAMED_IAM` .\n- If you don't specify either of these capabilities, CloudFormation returns an `InsufficientCapabilities` error.\n\nIf your stack template contains these resources, we recommend that you review all permissions associated with them and edit their permissions if necessary.\n\n- [AWS::IAM::AccessKey](https://docs.aws.amazon.com/AWSCloudFormation/latest/TemplateReference/aws-resource-iam-accesskey.html)\n- [AWS::IAM::Group](https://docs.aws.amazon.com/AWSCloudFormation/latest/TemplateReference/aws-resource-iam-group.html)\n- [AWS::IAM::InstanceProfile](https://docs.aws.amazon.com/AWSCloudFormation/latest/TemplateReference/aws-resource-iam-instanceprofile.html)\n- [AWS::IAM::Policy](https://docs.aws.amazon.com/AWSCloudFormation/latest/TemplateReference/aws-resource-iam-policy.html)\n- [AWS::IAM::Role](https://docs.aws.amazon.com/AWSCloudFormation/latest/TemplateReference/aws-resource-iam-role.html)\n- [AWS::IAM::User](https://docs.aws.amazon.com/AWSCloudFormation/latest/TemplateReference/aws-resource-iam-user.html)\n- [AWS::IAM::UserToGroupAddition](https://docs.aws.amazon.com/AWSCloudFormation/latest/TemplateReference/aws-resource-iam-usertogroupaddition.html)\n\nFor more information, see [Acknowledging IAM resources in CloudFormation templates](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/control-access-with-iam.html#using-iam-capabilities) in the *AWS CloudFormation User Guide* .\n- `CAPABILITY_AUTO_EXPAND`\n\nSome templates contain macros. Macros perform custom processing on templates; this can include simple actions like find-and-replace operations, all the way to extensive transformations of entire templates. Because of this, users typically create a change set from the processed template, so that they can review the changes resulting from the macros before actually creating the stack. If your stack template contains one or more macros, and you choose to create a stack directly from the processed template, without first reviewing the resulting changes in a change set, you must acknowledge this capability. 
This includes the [AWS::Include](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/transform-aws-include.html) and [AWS::Serverless](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/transform-aws-serverless.html) transforms, which are macros hosted by CloudFormation .\n\nIf you want to create a stack from a stack template that contains macros *and* nested stacks, you must create the stack directly from the template using this capability.\n\n> You should only create stacks directly from a stack template that contains macros if you know what processing the macro performs.\n> \n> Each macro relies on an underlying Lambda service function for processing stack templates. Be aware that the Lambda function owner can update the function operation without CloudFormation being notified. \n\nFor more information, see [Perform custom processing on CloudFormation templates with template macros](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-macros.html) in the *AWS CloudFormation User Guide* .", "ChangeSetId": "The unique ID of the change set.", "CreationTime": "The time at which the stack was created.", "Description": "A user-defined description associated with the stack.", @@ -7674,13 +7714,13 @@ "RootId": "For nested stacks, the stack ID of the top-level stack to which the nested stack ultimately belongs.\n\nFor more information, see [Nested stacks](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-nested-stacks.html) in the *AWS CloudFormation User Guide* .", "StackId": "Unique identifier of the stack.", "StackName": "The name that's associated with the stack. The name must be unique in the Region in which you are creating the stack.\n\n> A stack name can contain only alphanumeric characters (case sensitive) and hyphens. It must start with an alphabetical character and can't be longer than 128 characters.", - "StackPolicyBody": "Structure containing the stack policy body. For more information, go to [Prevent updates to stack resources](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/protect-stack-resources.html) in the *AWS CloudFormation User Guide* . You can specify either the `StackPolicyBody` or the `StackPolicyURL` parameter, but not both.", - "StackPolicyURL": "Location of a file containing the stack policy. The URL must point to a policy (maximum size: 16 KB) located in an S3 bucket in the same Region as the stack. You can specify either the `StackPolicyBody` or the `StackPolicyURL` parameter, but not both.", + "StackPolicyBody": "Structure that contains the stack policy body. For more information, see [Prevent updates to stack resources](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/protect-stack-resources.html) in the *AWS CloudFormation User Guide* . You can specify either the `StackPolicyBody` or the `StackPolicyURL` parameter, but not both.", + "StackPolicyURL": "Location of a file that contains the stack policy. The URL must point to a policy (maximum size: 16 KB) located in an S3 bucket in the same Region as the stack. You can specify either the `StackPolicyBody` or the `StackPolicyURL` parameter, but not both.", "StackStatus": "Current status of the stack.", "StackStatusReason": "Success/failure message associated with the stack status.", "Tags": "Key-value pairs to associate with this stack. CloudFormation also propagates these tags to the resources created in the stack. 
A maximum number of 50 tags can be specified.", - "TemplateBody": "Structure containing the template body with a minimum length of 1 byte and a maximum length of 51,200 bytes.\n\nConditional: You must specify either the `TemplateBody` or the `TemplateURL` parameter, but not both.", - "TemplateURL": "The URL of a file containing the template body. The URL must point to a template (max size: 1 MB) that's located in an Amazon S3 bucket. The location for an Amazon S3 bucket must start with `https://` .\n\nWhether an update causes interruptions depends on the resources that are being updated. An update never causes a nested stack to be replaced.", + "TemplateBody": "Structure that contains the template body with a minimum length of 1 byte and a maximum length of 51,200 bytes.\n\nConditional: You must specify either the `TemplateBody` or the `TemplateURL` parameter, but not both.", + "TemplateURL": "The URL of a file that contains the template body. The URL must point to a template (max size: 1 MB) that's located in an Amazon S3 bucket. The location for an Amazon S3 bucket must start with `https://` .\n\nWhether an update causes interruptions depends on the resources that are being updated. An update never causes a nested stack to be replaced.", "TimeoutInMinutes": "The length of time, in minutes, that CloudFormation waits for the nested stack to reach the `CREATE_COMPLETE` state. The default is no timeout. When CloudFormation detects that the nested stack has reached the `CREATE_COMPLETE` state, it marks the nested stack resource as `CREATE_COMPLETE` in the parent stack and resumes creating the parent stack. If the timeout period expires before the nested stack reaches `CREATE_COMPLETE` , CloudFormation marks the nested stack as failed and rolls back both the nested stack and parent stack.\n\nUpdates aren't supported." }, "AWS::CloudFormation::Stack Output": { @@ -7691,7 +7731,7 @@ }, "AWS::CloudFormation::Stack Tag": { "Key": "*Required* . A string used to identify this tag. You can specify a maximum of 128 characters for a tag key. Tags owned by AWS have the reserved prefix: `aws:` .", - "Value": "*Required* . A string containing the value for this tag. You can specify a maximum of 256 characters for a tag value." + "Value": "*Required* . A string that contains the value for this tag. You can specify a maximum of 256 characters for a tag value." }, "AWS::CloudFormation::StackSet": { "AdministrationRoleARN": "The Amazon Resource Number (ARN) of the IAM role to use to create this stack set. Specify an IAM role only if you are using customized administrator roles to control which users or groups can manage specific stack sets within the same administrator account.\n\nUse customized administrator roles to control which users or groups can manage specific stack sets within the same administrator account. For more information, see [Grant self-managed permissions](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-prereqs-self-managed.html) in the *AWS CloudFormation User Guide* .\n\nValid only if the permissions model is `SELF_MANAGED` .", @@ -7707,8 +7747,8 @@ "StackInstancesGroup": "A group of stack instances with parameters in some specific accounts and Regions.", "StackSetName": "The name to associate with the stack set. The name must be unique in the Region where you create your stack set.", "Tags": "Key-value pairs to associate with this stack. CloudFormation also propagates these tags to supported resources in the stack. 
You can specify a maximum number of 50 tags.\n\nIf you don't specify this parameter, CloudFormation doesn't modify the stack's tags. If you specify an empty value, CloudFormation removes all associated tags.", - "TemplateBody": "The structure that contains the template body, with a minimum length of 1 byte and a maximum length of 51,200 bytes.\n\nYou must include either `TemplateURL` or `TemplateBody` in a StackSet, but you can't use both. Dynamic references in the `TemplateBody` may not work correctly in all cases. It's recommended to pass templates containing dynamic references through `TemplateUrl` instead.", - "TemplateURL": "The URL of a file containing the template body. The URL must point to a template (max size: 1 MB) that's located in an Amazon S3 bucket or a Systems Manager document. The location for an Amazon S3 bucket must start with `https://` .\n\nConditional: You must specify only one of the following parameters: `TemplateBody` , `TemplateURL` ." + "TemplateBody": "The structure that contains the template body, with a minimum length of 1 byte and a maximum length of 51,200 bytes.\n\nYou must include either `TemplateURL` or `TemplateBody` in a StackSet, but you can't use both. Dynamic references in the `TemplateBody` may not work correctly in all cases. It's recommended to pass templates that contain dynamic references through `TemplateUrl` instead.", + "TemplateURL": "The URL of a file that contains the template body. The URL must point to a template (max size: 1 MB) that's located in an Amazon S3 bucket or a Systems Manager document. The location for an Amazon S3 bucket must start with `https://` .\n\nConditional: You must specify only one of the following parameters: `TemplateBody` , `TemplateURL` ." }, "AWS::CloudFormation::StackSet AutoDeployment": { "Enabled": "If set to `true` , StackSets automatically deploys additional stack instances to AWS Organizations accounts that are added to a target organization or organizational unit (OU) in the specified Regions. If an account is removed from a target organization or OU, StackSets deletes stack instances from the account in the specified Regions.", @@ -7725,8 +7765,8 @@ }, "AWS::CloudFormation::StackSet OperationPreferences": { "ConcurrencyMode": "Specifies how the concurrency level behaves during the operation execution.\n\n- `STRICT_FAILURE_TOLERANCE` : This option dynamically lowers the concurrency level to ensure the number of failed accounts never exceeds the value of `FailureToleranceCount` +1. The initial actual concurrency is set to the lower of either the value of the `MaxConcurrentCount` , or the value of `FailureToleranceCount` +1. The actual concurrency is then reduced proportionally by the number of failures. This is the default behavior.\n\nIf failure tolerance or Maximum concurrent accounts are set to percentages, the behavior is similar.\n- `SOFT_FAILURE_TOLERANCE` : This option decouples `FailureToleranceCount` from the actual concurrency. This allows stack set operations to run at the concurrency level set by the `MaxConcurrentCount` value, or `MaxConcurrentPercentage` , regardless of the number of failures.", - "FailureToleranceCount": "The number of accounts, per Region, for which this operation can fail before CloudFormation stops the operation in that Region. 
If the operation is stopped in a Region, CloudFormation doesn't attempt the operation in any subsequent Regions.\n\nConditional: You must specify either `FailureToleranceCount` or `FailureTolerancePercentage` (but not both).", - "FailureTolerancePercentage": "The percentage of accounts, per Region, for which this stack operation can fail before CloudFormation stops the operation in that Region. If the operation is stopped in a Region, CloudFormation doesn't attempt the operation in any subsequent Regions.\n\nWhen calculating the number of accounts based on the specified percentage, CloudFormation rounds *down* to the next whole number.\n\nConditional: You must specify either `FailureToleranceCount` or `FailureTolerancePercentage` , but not both.", + "FailureToleranceCount": "The number of accounts per Region this operation can fail in before CloudFormation stops the operation in that Region. If the operation is stopped in a Region, CloudFormation doesn't attempt the operation in any subsequent Regions.\n\nConditional: You must specify either `FailureToleranceCount` or `FailureTolerancePercentage` (but not both).", + "FailureTolerancePercentage": "The percentage of accounts per Region this stack operation can fail in before CloudFormation stops the operation in that Region. If the operation is stopped in a Region, CloudFormation doesn't attempt the operation in any subsequent Regions.\n\nWhen calculating the number of accounts based on the specified percentage, CloudFormation rounds *down* to the next whole number.\n\nConditional: You must specify either `FailureToleranceCount` or `FailureTolerancePercentage` , but not both.", "MaxConcurrentCount": "The maximum number of accounts in which to perform this operation at one time. This is dependent on the value of `FailureToleranceCount` . `MaxConcurrentCount` is at most one more than the `FailureToleranceCount` .\n\nNote that this setting lets you specify the *maximum* for operations. For large deployments, under certain circumstances the actual number of accounts acted upon concurrently may be lower due to service throttling.\n\nConditional: You must specify either `MaxConcurrentCount` or `MaxConcurrentPercentage` , but not both.", "MaxConcurrentPercentage": "The maximum percentage of accounts in which to perform this operation at one time.\n\nWhen calculating the number of accounts based on the specified percentage, CloudFormation rounds down to the next whole number. This is true except in cases where rounding down would result is zero. In this case, CloudFormation sets the number as one instead.\n\nNote that this setting lets you specify the *maximum* for operations. For large deployments, under certain circumstances the actual number of accounts acted upon concurrently may be lower due to service throttling.\n\nConditional: You must specify either `MaxConcurrentCount` or `MaxConcurrentPercentage` , but not both.", "RegionConcurrencyType": "The concurrency type of deploying StackSets operations in Regions, could be in parallel or one Region at a time.", @@ -7743,7 +7783,7 @@ }, "AWS::CloudFormation::StackSet Tag": { "Key": "*Required* . A string used to identify this tag. You can specify a maximum of 128 characters for a tag key. Tags owned by AWS have the reserved prefix: `aws:` .", - "Value": "*Required* . A string containing the value for this tag. You can specify a maximum of 256 characters for a tag value." + "Value": "*Required* . A string that contains the value for this tag. 
You can specify a maximum of 256 characters for a tag value." }, "AWS::CloudFormation::TypeActivation": { "AutoUpdate": "Whether to automatically update the extension in this account and Region when a new *minor* version is published by the extension publisher. Major versions released by the publisher must be manually updated.\n\nThe default is `true` .", @@ -7763,7 +7803,7 @@ }, "AWS::CloudFormation::WaitCondition": { "Count": "The number of success signals that CloudFormation must receive before it continues the stack creation process. When the wait condition receives the requisite number of success signals, CloudFormation resumes the creation of the stack. If the wait condition doesn't receive the specified number of success signals before the Timeout period expires, CloudFormation assumes that the wait condition has failed and rolls the stack back.\n\nUpdates aren't supported.", - "Handle": "A reference to the wait condition handle used to signal this wait condition. Use the `Ref` intrinsic function to specify an [AWS::CloudFormation::WaitConditionHandle](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-cloudformation-waitconditionhandle.html) resource.\n\nAnytime you add a `WaitCondition` resource during a stack update, you must associate the wait condition with a new WaitConditionHandle resource. Don't reuse an old wait condition handle that has already been defined in the template. If you reuse a wait condition handle, the wait condition might evaluate old signals from a previous create or update stack command.\n\nUpdates aren't supported.", + "Handle": "A reference to the wait condition handle used to signal this wait condition. Use the `Ref` intrinsic function to specify an [AWS::CloudFormation::WaitConditionHandle](https://docs.aws.amazon.com/AWSCloudFormation/latest/TemplateReference/aws-resource-cloudformation-waitconditionhandle.html) resource.\n\nAnytime you add a `WaitCondition` resource during a stack update, you must associate the wait condition with a new WaitConditionHandle resource. Don't reuse an old wait condition handle that has already been defined in the template. If you reuse a wait condition handle, the wait condition might evaluate old signals from a previous create or update stack command.\n\nUpdates aren't supported.", "Timeout": "The length of time (in seconds) to wait for the number of signals that the `Count` property specifies. `Timeout` is a minimum-bound property, meaning the timeout occurs no sooner than the time you specify, but can occur shortly thereafter. The maximum time that can be specified for this property is 12 hours (43200 seconds).\n\nUpdates aren't supported." }, "AWS::CloudFormation::WaitConditionHandle": {}, @@ -7911,9 +7951,9 @@ "AWS::CloudFront::Distribution CustomOriginConfig": { "HTTPPort": "The HTTP port that CloudFront uses to connect to the origin. Specify the HTTP port that the origin listens on.", "HTTPSPort": "The HTTPS port that CloudFront uses to connect to the origin. Specify the HTTPS port that the origin listens on.", - "OriginKeepaliveTimeout": "Specifies how long, in seconds, CloudFront persists its connection to the origin. 
The minimum timeout is 1 second, the maximum is 60 seconds, and the default (if you don't specify otherwise) is 5 seconds.\n\nFor more information, see [Keep-alive timeout (custom origins only)](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DownloadDistValuesOrigin.html#DownloadDistValuesOriginKeepaliveTimeout) in the *Amazon CloudFront Developer Guide* .", + "OriginKeepaliveTimeout": "Specifies how long, in seconds, CloudFront persists its connection to the origin. The minimum timeout is 1 second, the maximum is 120 seconds, and the default (if you don't specify otherwise) is 5 seconds.\n\nFor more information, see [Keep-alive timeout (custom origins only)](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DownloadDistValuesOrigin.html#DownloadDistValuesOriginKeepaliveTimeout) in the *Amazon CloudFront Developer Guide* .", "OriginProtocolPolicy": "Specifies the protocol (HTTP or HTTPS) that CloudFront uses to connect to the origin. Valid values are:\n\n- `http-only` \u2013 CloudFront always uses HTTP to connect to the origin.\n- `match-viewer` \u2013 CloudFront connects to the origin using the same protocol that the viewer used to connect to CloudFront.\n- `https-only` \u2013 CloudFront always uses HTTPS to connect to the origin.", - "OriginReadTimeout": "Specifies how long, in seconds, CloudFront waits for a response from the origin. This is also known as the *origin response timeout* . The minimum timeout is 1 second, the maximum is 60 seconds, and the default (if you don't specify otherwise) is 30 seconds.\n\nFor more information, see [Response timeout (custom origins only)](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DownloadDistValuesOrigin.html#DownloadDistValuesOriginResponseTimeout) in the *Amazon CloudFront Developer Guide* .", + "OriginReadTimeout": "Specifies how long, in seconds, CloudFront waits for a response from the origin. This is also known as the *origin response timeout* . The minimum timeout is 1 second, the maximum is 120 seconds, and the default (if you don't specify otherwise) is 30 seconds.\n\nFor more information, see [Response timeout (custom origins only)](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DownloadDistValuesOrigin.html#DownloadDistValuesOriginResponseTimeout) in the *Amazon CloudFront Developer Guide* .", "OriginSSLProtocols": "Specifies the minimum SSL/TLS protocol that CloudFront uses when connecting to your origin over HTTPS. Valid values include `SSLv3` , `TLSv1` , `TLSv1.1` , and `TLSv1.2` .\n\nFor more information, see [Minimum Origin SSL Protocol](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DownloadDistValuesOrigin.html#DownloadDistValuesOriginSSLProtocols) in the *Amazon CloudFront Developer Guide* ." 
}, "AWS::CloudFront::Distribution DefaultCacheBehavior": { @@ -8015,6 +8055,7 @@ "OriginCustomHeaders": "A list of HTTP header names and values that CloudFront adds to the requests that it sends to the origin.\n\nFor more information, see [Adding Custom Headers to Origin Requests](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/add-origin-custom-headers.html) in the *Amazon CloudFront Developer Guide* .", "OriginPath": "An optional path that CloudFront appends to the origin domain name when CloudFront requests content from the origin.\n\nFor more information, see [Origin Path](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-web-values-specify.html#DownloadDistValuesOriginPath) in the *Amazon CloudFront Developer Guide* .", "OriginShield": "CloudFront Origin Shield. Using Origin Shield can help reduce the load on your origin.\n\nFor more information, see [Using Origin Shield](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/origin-shield.html) in the *Amazon CloudFront Developer Guide* .", + "ResponseCompletionTimeout": "", "S3OriginConfig": "Use this type to specify an origin that is an Amazon S3 bucket that is not configured with static website hosting. To specify any other type of origin, including an Amazon S3 bucket that is configured with static website hosting, use the `CustomOriginConfig` type instead.", "VpcOriginConfig": "The VPC origin configuration." }, @@ -8054,7 +8095,8 @@ "GeoRestriction": "A complex type that controls the countries in which your content is distributed. CloudFront determines the location of your users using `MaxMind` GeoIP databases. To disable geo restriction, remove the [Restrictions](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-cloudfront-distribution-distributionconfig.html#cfn-cloudfront-distribution-distributionconfig-restrictions) property from your stack template." }, "AWS::CloudFront::Distribution S3OriginConfig": { - "OriginAccessIdentity": "> If you're using origin access control (OAC) instead of origin access identity, specify an empty `OriginAccessIdentity` element. For more information, see [Restricting access to an AWS](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-origin.html) in the *Amazon CloudFront Developer Guide* . \n\nThe CloudFront origin access identity to associate with the origin. Use an origin access identity to configure the origin so that viewers can *only* access objects in an Amazon S3 bucket through CloudFront. The format of the value is:\n\n`origin-access-identity/cloudfront/ID-of-origin-access-identity`\n\nThe `*ID-of-origin-access-identity*` is the value that CloudFront returned in the `ID` element when you created the origin access identity.\n\nIf you want viewers to be able to access objects using either the CloudFront URL or the Amazon S3 URL, specify an empty `OriginAccessIdentity` element.\n\nTo delete the origin access identity from an existing distribution, update the distribution configuration and include an empty `OriginAccessIdentity` element.\n\nTo replace the origin access identity, update the distribution configuration and specify the new origin access identity.\n\nFor more information about the origin access identity, see [Serving Private Content through CloudFront](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html) in the *Amazon CloudFront Developer Guide* ." 
+ "OriginAccessIdentity": "> If you're using origin access control (OAC) instead of origin access identity, specify an empty `OriginAccessIdentity` element. For more information, see [Restricting access to an AWS](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-origin.html) in the *Amazon CloudFront Developer Guide* . \n\nThe CloudFront origin access identity to associate with the origin. Use an origin access identity to configure the origin so that viewers can *only* access objects in an Amazon S3 bucket through CloudFront. The format of the value is:\n\n`origin-access-identity/cloudfront/ID-of-origin-access-identity`\n\nThe `*ID-of-origin-access-identity*` is the value that CloudFront returned in the `ID` element when you created the origin access identity.\n\nIf you want viewers to be able to access objects using either the CloudFront URL or the Amazon S3 URL, specify an empty `OriginAccessIdentity` element.\n\nTo delete the origin access identity from an existing distribution, update the distribution configuration and include an empty `OriginAccessIdentity` element.\n\nTo replace the origin access identity, update the distribution configuration and specify the new origin access identity.\n\nFor more information about the origin access identity, see [Serving Private Content through CloudFront](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html) in the *Amazon CloudFront Developer Guide* .", + "OriginReadTimeout": "" }, "AWS::CloudFront::Distribution StatusCodes": { "Items": "The items (status codes) for an origin group.", @@ -8080,8 +8122,8 @@ "SslSupportMethod": "> In CloudFormation, this field name is `SslSupportMethod` . Note the different capitalization. \n\nIf the distribution uses `Aliases` (alternate domain names or CNAMEs), specify which viewers the distribution accepts HTTPS connections from.\n\n- `sni-only` \u2013 The distribution accepts HTTPS connections from only viewers that support [server name indication (SNI)](https://docs.aws.amazon.com/https://en.wikipedia.org/wiki/Server_Name_Indication) . This is recommended. Most browsers and clients support SNI.\n- `vip` \u2013 The distribution accepts HTTPS connections from all viewers including those that don't support SNI. This is not recommended, and results in additional monthly charges from CloudFront.\n- `static-ip` - Do not specify this value unless your distribution has been enabled for this feature by the CloudFront team. If you have a use case that requires static IP addresses for a distribution, contact CloudFront through the [Support Center](https://docs.aws.amazon.com/support/home) .\n\nIf the distribution uses the CloudFront domain name such as `d111111abcdef8.cloudfront.net` , don't set a value for this field." }, "AWS::CloudFront::Distribution VpcOriginConfig": { - "OriginKeepaliveTimeout": "Specifies how long, in seconds, CloudFront persists its connection to the origin. The minimum timeout is 1 second, the maximum is 60 seconds, and the default (if you don't specify otherwise) is 5 seconds.\n\nFor more information, see [Keep-alive timeout (custom origins only)](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DownloadDistValuesOrigin.html#DownloadDistValuesOriginKeepaliveTimeout) in the *Amazon CloudFront Developer Guide* .", - "OriginReadTimeout": "Specifies how long, in seconds, CloudFront waits for a response from the origin. This is also known as the *origin response timeout* . 
The minimum timeout is 1 second, the maximum is 60 seconds, and the default (if you don't specify otherwise) is 30 seconds.\n\nFor more information, see [Response timeout (custom origins only)](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DownloadDistValuesOrigin.html#DownloadDistValuesOriginResponseTimeout) in the *Amazon CloudFront Developer Guide* .", + "OriginKeepaliveTimeout": "Specifies how long, in seconds, CloudFront persists its connection to the origin. The minimum timeout is 1 second, the maximum is 120 seconds, and the default (if you don't specify otherwise) is 5 seconds.\n\nFor more information, see [Keep-alive timeout (custom origins only)](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DownloadDistValuesOrigin.html#DownloadDistValuesOriginKeepaliveTimeout) in the *Amazon CloudFront Developer Guide* .", + "OriginReadTimeout": "Specifies how long, in seconds, CloudFront waits for a response from the origin. This is also known as the *origin response timeout* . The minimum timeout is 1 second, the maximum is 120 seconds, and the default (if you don't specify otherwise) is 30 seconds.\n\nFor more information, see [Response timeout (custom origins only)](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DownloadDistValuesOrigin.html#DownloadDistValuesOriginResponseTimeout) in the *Amazon CloudFront Developer Guide* .", "VpcOriginId": "The VPC origin ID." }, "AWS::CloudFront::DistributionTenant": { @@ -10785,7 +10827,8 @@ "CommunicationLimitList": "The list of CommunicationLimits." }, "AWS::ConnectCampaignsV2::Campaign CommunicationLimitsConfig": { - "AllChannelsSubtypes": "The CommunicationLimits that apply to all channel subtypes defined in an outbound campaign." + "AllChannelsSubtypes": "The CommunicationLimits that apply to all channel subtypes defined in an outbound campaign.", + "InstanceLimitsHandling": "Opt in to or opt out of instance-level limits." }, "AWS::ConnectCampaignsV2::Campaign CommunicationTimeConfig": { "Email": "The communication time configuration for the email channel subtype.", @@ -12054,9 +12097,15 @@ }, "AWS::DSQL::Cluster": { "DeletionProtectionEnabled": "Whether deletion protection is enabled on this cluster.", + "KmsEncryptionKey": "", "MultiRegionProperties": "Defines the structure for multi-Region cluster configurations, containing the witness Region and peered cluster settings.", "Tags": "A map of key and value pairs this cluster is tagged with." }, + "AWS::DSQL::Cluster EncryptionDetails": { + "EncryptionStatus": "The status of encryption for the cluster.", + "EncryptionType": "The type of encryption that protects the data on your cluster.", + "KmsKeyArn": "The ARN of the AWS KMS key that encrypts data in the cluster." + }, "AWS::DSQL::Cluster MultiRegionProperties": { "Clusters": "The set of peered clusters that form the multi-Region cluster configuration. Each peered cluster represents a database instance in a different Region.", "WitnessRegion": "The Region that serves as the witness Region for a multi-Region cluster. The witness Region helps maintain cluster consistency and quorum."
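The two added `VpcOriginConfig` descriptions above raise the documented keep-alive and response timeout ceilings from 60 to 120 seconds. As a minimal illustrative sketch only (the VPC origin ID is a hypothetical placeholder; the property names are the ones documented above), a template fragment using the new maximum might look like:

```json
{
  "VpcOriginConfig": {
    "VpcOriginId": "vo-0123456789abcdef0",
    "OriginKeepaliveTimeout": 60,
    "OriginReadTimeout": 120
  }
}
```

Per the updated descriptions, values from 1 through 120 seconds are now documented as acceptable for both timeouts; the defaults (5 and 30 seconds) are unchanged.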
@@ -13212,7 +13261,8 @@ }, "AWS::DataZone::ProjectProfile EnvironmentConfigurationParametersDetails": { "ParameterOverrides": "", - "ResolvedParameters": "" + "ResolvedParameters": "", + "SsmPath": "" }, "AWS::DataZone::ProjectProfile Region": { "RegionName": "" @@ -16512,7 +16562,7 @@ }, "AWS::ECS::Service LogConfiguration": { "LogDriver": "The log driver to use for the container.\n\nFor tasks on AWS Fargate , the supported log drivers are `awslogs` , `splunk` , and `awsfirelens` .\n\nFor tasks hosted on Amazon EC2 instances, the supported log drivers are `awslogs` , `fluentd` , `gelf` , `json-file` , `journald` , `syslog` , `splunk` , and `awsfirelens` .\n\nFor more information about using the `awslogs` log driver, see [Send Amazon ECS logs to CloudWatch](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html) in the *Amazon Elastic Container Service Developer Guide* .\n\nFor more information about using the `awsfirelens` log driver, see [Send Amazon ECS logs to an AWS service or AWS Partner](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_firelens.html) .\n\n> If you have a custom driver that isn't listed, you can fork the Amazon ECS container agent project that's [available on GitHub](https://docs.aws.amazon.com/https://github.com/aws/amazon-ecs-agent) and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. However, we don't currently provide support for running modified copies of this software.", - "Options": "The configuration options to send to the log driver.\n\nThe options you can specify depend on the log driver. Some of the options you can specify when you use the `awslogs` log driver to route logs to Amazon CloudWatch include the following:\n\n- **awslogs-create-group** - Required: No\n\nSpecify whether you want the log group to be created automatically. If this option isn't specified, it defaults to `false` .\n\n> Your IAM policy must include the `logs:CreateLogGroup` permission before you attempt to use `awslogs-create-group` .\n- **awslogs-region** - Required: Yes\n\nSpecify the AWS Region that the `awslogs` log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.\n- **awslogs-group** - Required: Yes\n\nMake sure to specify a log group that the `awslogs` log driver sends its log streams to.\n- **awslogs-stream-prefix** - Required: Yes, when using Fargate.Optional when using EC2.\n\nUse the `awslogs-stream-prefix` option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format `prefix-name/container-name/ecs-task-id` .\n\nIf you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.\n\nFor Amazon ECS services, you can use the service name as the prefix. 
Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.\n\nYou must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console.\n- **awslogs-datetime-format** - Required: No\n\nThis option defines a multiline start pattern in Python `strftime` format. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. The matched line is the delimiter between log messages.\n\nOne example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.\n\nFor more information, see [awslogs-datetime-format](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-datetime-format) .\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.\n- **awslogs-multiline-pattern** - Required: No\n\nThis option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. The matched line is the delimiter between log messages.\n\nFor more information, see [awslogs-multiline-pattern](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-multiline-pattern) .\n\nThis option is ignored if `awslogs-datetime-format` is also configured.\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.\n\nThe following options apply to all supported log drivers.\n\n- **mode** - Required: No\n\nValid values: `non-blocking` | `blocking`\n\nThis option defines the delivery mode of log messages from the container to the log driver specified using `logDriver` . The delivery mode you choose affects application availability when the flow of logs from container is interrupted.\n\nIf you use the `blocking` mode and the flow of logs is interrupted, calls from container code to write to the `stdout` and `stderr` streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.\n\nIf you use the `non-blocking` mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the `max-buffer-size` option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see [Preventing log loss with non-blocking mode in the `awslogs` container log driver](https://docs.aws.amazon.com/containers/preventing-log-loss-with-non-blocking-mode-in-the-awslogs-container-log-driver/) .\n\nYou can set a default `mode` for all containers in a specific AWS Region by using the `defaultLogDriverMode` account setting. If you don't specify the `mode` option or configure the account setting, Amazon ECS will default to the `blocking` mode. 
For more information about the account setting, see [Default log driver mode](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-account-settings.html#default-log-driver-mode) in the *Amazon Elastic Container Service Developer Guide* .\n\n> On June 25, 2025, Amazon ECS is changing the default log driver mode from `blocking` to `non-blocking` to prioritize task availability over logging. To continue using the `blocking` mode after this change, do one of the following:\n> \n> - Set the `mode` option in your container definition's `logConfiguration` as `blocking` .\n> - Set the `defaultLogDriverMode` account setting to `blocking` .\n- **max-buffer-size** - Required: No\n\nDefault value: `1m`\n\nWhen `non-blocking` mode is used, the `max-buffer-size` log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.\n\nTo route logs using the `splunk` log router, you need to specify a `splunk-token` and a `splunk-url` .\n\nWhen you use the `awsfirelens` log router to route logs to an AWS Service or AWS Partner Network destination for log storage and analytics, you can set the `log-driver-buffer-limit` option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help to resolve potential log loss issue because high throughput might result in memory running out for the buffer inside of Docker.\n\nOther options you can specify when using `awsfirelens` to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the AWS Region with `region` and a name for the log stream with `delivery_stream` .\n\nWhen you export logs to Amazon Kinesis Data Streams, you can specify an AWS Region with `region` and a data stream name with `stream` .\n\nWhen you export logs to Amazon OpenSearch Service, you can specify options like `Name` , `Host` (OpenSearch Service endpoint without protocol), `Port` , `Index` , `Type` , `Aws_auth` , `Aws_region` , `Suppress_Type_Name` , and `tls` . For more information, see [Under the hood: FireLens for Amazon ECS Tasks](https://docs.aws.amazon.com/containers/under-the-hood-firelens-for-amazon-ecs-tasks/) .\n\nWhen you export logs to Amazon S3, you can specify the bucket using the `bucket` option. You can also specify `region` , `total_file_size` , `upload_timeout` , and `use_put_object` as options.\n\nThis parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version --format '{{.Server.APIVersion}}'`", + "Options": "The configuration options to send to the log driver.\n\nThe options you can specify depend on the log driver. Some of the options you can specify when you use the `awslogs` log driver to route logs to Amazon CloudWatch include the following:\n\n- **awslogs-create-group** - Required: No\n\nSpecify whether you want the log group to be created automatically. If this option isn't specified, it defaults to `false` .\n\n> Your IAM policy must include the `logs:CreateLogGroup` permission before you attempt to use `awslogs-create-group` .\n- **awslogs-region** - Required: Yes\n\nSpecify the AWS Region that the `awslogs` log driver is to send your Docker logs to. 
You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.\n- **awslogs-group** - Required: Yes\n\nMake sure to specify a log group that the `awslogs` log driver sends its log streams to.\n- **awslogs-stream-prefix** - Required: Yes, when using Fargate. Optional when using EC2.\n\nUse the `awslogs-stream-prefix` option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format `prefix-name/container-name/ecs-task-id` .\n\nIf you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.\n\nFor Amazon ECS services, you can use the service name as the prefix. By doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.\n\nYou must specify a stream-prefix for your logs so that your logs appear in the Log pane when using the Amazon ECS console.\n- **awslogs-datetime-format** - Required: No\n\nThis option defines a multiline start pattern in Python `strftime` format. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. The matched line is the delimiter between log messages.\n\nOne example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.\n\nFor more information, see [awslogs-datetime-format](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-datetime-format) .\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.\n- **awslogs-multiline-pattern** - Required: No\n\nThis option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. The matched line is the delimiter between log messages.\n\nFor more information, see [awslogs-multiline-pattern](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-multiline-pattern) .\n\nThis option is ignored if `awslogs-datetime-format` is also configured.\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. 
This might have a negative impact on logging performance.\n\nThe following options apply to all supported log drivers.\n\n- **mode** - Required: No\n\nValid values: `non-blocking` | `blocking`\n\nThis option defines the delivery mode of log messages from the container to the log driver specified using `logDriver` . The delivery mode you choose affects application availability when the flow of logs from the container is interrupted.\n\nIf you use the `blocking` mode and the flow of logs is interrupted, calls from container code to write to the `stdout` and `stderr` streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.\n\nIf you use the `non-blocking` mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the `max-buffer-size` option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see [Preventing log loss with non-blocking mode in the `awslogs` container log driver](https://docs.aws.amazon.com/containers/preventing-log-loss-with-non-blocking-mode-in-the-awslogs-container-log-driver/) .\n\nYou can set a default `mode` for all containers in a specific AWS Region by using the `defaultLogDriverMode` account setting. If you don't specify the `mode` option or configure the account setting, Amazon ECS will default to the `non-blocking` mode. For more information about the account setting, see [Default log driver mode](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-account-settings.html#default-log-driver-mode) in the *Amazon Elastic Container Service Developer Guide* .\n\n> On June 25, 2025, Amazon ECS changed the default log driver mode from `blocking` to `non-blocking` to prioritize task availability over logging. To continue using the `blocking` mode after this change, do one of the following:\n> \n> - Set the `mode` option in your container definition's `logConfiguration` as `blocking` .\n> - Set the `defaultLogDriverMode` account setting to `blocking` .\n- **max-buffer-size** - Required: No\n\nDefault value: `1m`\n\nWhen `non-blocking` mode is used, the `max-buffer-size` log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.\n\nTo route logs using the `splunk` log router, you need to specify a `splunk-token` and a `splunk-url` .\n\nWhen you use the `awsfirelens` log router to route logs to an AWS Service or AWS Partner Network destination for log storage and analytics, you can set the `log-driver-buffer-limit` option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help to resolve potential log loss issues because high throughput might result in memory running out for the buffer inside of Docker.\n\nOther options you can specify when using `awsfirelens` to route logs depend on the destination. 
When you export logs to Amazon Data Firehose, you can specify the AWS Region with `region` and a name for the log stream with `delivery_stream` .\n\nWhen you export logs to Amazon Kinesis Data Streams, you can specify an AWS Region with `region` and a data stream name with `stream` .\n\nWhen you export logs to Amazon OpenSearch Service, you can specify options like `Name` , `Host` (OpenSearch Service endpoint without protocol), `Port` , `Index` , `Type` , `Aws_auth` , `Aws_region` , `Suppress_Type_Name` , and `tls` . For more information, see [Under the hood: FireLens for Amazon ECS Tasks](https://docs.aws.amazon.com/containers/under-the-hood-firelens-for-amazon-ecs-tasks/) .\n\nWhen you export logs to Amazon S3, you can specify the bucket using the `bucket` option. You can also specify `region` , `total_file_size` , `upload_timeout` , and `use_put_object` as options.\n\nThis parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version --format '{{.Server.APIVersion}}'`", "SecretOptions": "The secrets to pass to the log configuration. For more information, see [Specifying sensitive data](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data.html) in the *Amazon Elastic Container Service Developer Guide* ." }, "AWS::ECS::Service NetworkConfiguration": { @@ -16599,7 +16649,6 @@ "EphemeralStorage": "The ephemeral storage settings to use for tasks run with the task definition.", "ExecutionRoleArn": "The Amazon Resource Name (ARN) of the task execution role that grants the Amazon ECS container agent permission to make AWS API calls on your behalf. For information about the required IAM roles for Amazon ECS, see [IAM roles for Amazon ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/security-ecs-iam-role-overview.html) in the *Amazon Elastic Container Service Developer Guide* .", "Family": "The name of a family that this task definition is registered to. Up to 255 letters (uppercase and lowercase), numbers, hyphens, and underscores are allowed.\n\nA family groups multiple versions of a task definition. Amazon ECS gives the first task definition that you registered to a family a revision number of 1. Amazon ECS gives sequential revision numbers to each task definition that you add.\n\n> To use revision numbers when you update a task definition, specify this property. If you don't specify a value, AWS CloudFormation generates a new task definition each time that you update it.", - "InferenceAccelerators": "The Elastic Inference accelerators to use for the containers in the task.", "IpcMode": "The IPC resource namespace to use for the containers in the task. The valid values are `host` , `task` , or `none` . If `host` is specified, then all containers within the tasks that specified the `host` IPC mode on the same container instance share the same IPC resources with the host Amazon EC2 instance. If `task` is specified, all containers within the specified task share the same IPC resources. If `none` is specified, then IPC resources within the containers of a task are private and not shared with other containers in a task or on the container instance. 
If no value is specified, then the IPC resource namespace sharing depends on the Docker daemon setting on the container instance.\n\nIf the `host` IPC mode is used, be aware that there is a heightened risk of undesired IPC namespace exposure.\n\nIf you are setting namespaced kernel parameters using `systemControls` for the containers in the task, the following will apply to your IPC resource namespace. For more information, see [System Controls](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html) in the *Amazon Elastic Container Service Developer Guide* .\n\n- For tasks that use the `host` IPC mode, IPC namespace related `systemControls` are not supported.\n- For tasks that use the `task` IPC mode, IPC namespace related `systemControls` will apply to all containers within a task.\n\n> This parameter is not supported for Windows containers or tasks run on AWS Fargate .", "Memory": "The amount (in MiB) of memory used by the task.\n\nIf your tasks run on Amazon EC2 instances, you must specify either a task-level memory value or a container-level memory value. This field is optional and any value can be used. If a task-level memory value is specified, the container-level memory value is optional. For more information regarding container-level memory and memory reservation, see [ContainerDefinition](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ContainerDefinition.html) .\n\nIf your tasks run on AWS Fargate , this field is required. You must use one of the following values. The value you choose determines your range of valid values for the `cpu` parameter.\n\n- 512 (0.5 GB), 1024 (1 GB), 2048 (2 GB) - Available `cpu` values: 256 (.25 vCPU)\n- 1024 (1 GB), 2048 (2 GB), 3072 (3 GB), 4096 (4 GB) - Available `cpu` values: 512 (.5 vCPU)\n- 2048 (2 GB), 3072 (3 GB), 4096 (4 GB), 5120 (5 GB), 6144 (6 GB), 7168 (7 GB), 8192 (8 GB) - Available `cpu` values: 1024 (1 vCPU)\n- Between 4096 (4 GB) and 16384 (16 GB) in increments of 1024 (1 GB) - Available `cpu` values: 2048 (2 vCPU)\n- Between 8192 (8 GB) and 30720 (30 GB) in increments of 1024 (1 GB) - Available `cpu` values: 4096 (4 vCPU)\n- Between 16 GB and 60 GB in 4 GB increments - Available `cpu` values: 8192 (8 vCPU)\n\nThis option requires Linux platform `1.4.0` or later.\n- Between 32 GB and 120 GB in 8 GB increments - Available `cpu` values: 16384 (16 vCPU)\n\nThis option requires Linux platform `1.4.0` or later.", "NetworkMode": "The Docker networking mode to use for the containers in the task. The valid values are `none` , `bridge` , `awsvpc` , and `host` . If no network mode is specified, the default is `bridge` .\n\nFor Amazon ECS tasks on Fargate, the `awsvpc` network mode is required. For Amazon ECS tasks on Amazon EC2 Linux instances, any network mode can be used. For Amazon ECS tasks on Amazon EC2 Windows instances, `default` or `awsvpc` can be used. If the network mode is set to `none` , you cannot specify port mappings in your container definitions, and the task's containers do not have external connectivity. 
The `host` and `awsvpc` network modes offer the highest networking performance for containers because they use the EC2 network stack instead of the virtualized network stack provided by the `bridge` mode.\n\nWith the `host` and `awsvpc` network modes, exposed container ports are mapped directly to the corresponding host port (for the `host` network mode) or the attached elastic network interface port (for the `awsvpc` network mode), so you cannot take advantage of dynamic host port mappings.\n\n> When using the `host` network mode, you should not run containers using the root user (UID 0). It is considered best practice to use a non-root user. \n\nIf the network mode is `awsvpc` , the task is allocated an elastic network interface, and you must specify a [NetworkConfiguration](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_NetworkConfiguration.html) value when you create a service or run a task with the task definition. For more information, see [Task Networking](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html) in the *Amazon Elastic Container Service Developer Guide* .\n\nIf the network mode is `host` , you cannot run multiple instantiations of the same task on a single container instance when port mappings are used.", @@ -16717,10 +16766,6 @@ "AWS::ECS::TaskDefinition HostVolumeProperties": { "SourcePath": "When the `host` parameter is used, specify a `sourcePath` to declare the path on the host container instance that's presented to the container. If this parameter is empty, then the Docker daemon has assigned a host path for you. If the `host` parameter contains a `sourcePath` file location, then the data volume persists at the specified location on the host container instance until you delete it manually. If the `sourcePath` value doesn't exist on the host container instance, the Docker daemon creates it. If the location does exist, the contents of the source path folder are exported.\n\nIf you're using the Fargate launch type, the `sourcePath` parameter is not supported." }, - "AWS::ECS::TaskDefinition InferenceAccelerator": { - "DeviceName": "The Elastic Inference accelerator device name. The `deviceName` must also be referenced in a container definition as a [ResourceRequirement](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ResourceRequirement.html) .", - "DeviceType": "The Elastic Inference accelerator type to use." - }, "AWS::ECS::TaskDefinition KernelCapabilities": { "Add": "The Linux capabilities for the container that have been added to the default configuration provided by Docker. This parameter maps to `CapAdd` in the docker container create command and the `--cap-add` option to docker run.\n\n> Tasks launched on AWS Fargate only support adding the `SYS_PTRACE` kernel capability. 
\n\nValid values: `\"ALL\" | \"AUDIT_CONTROL\" | \"AUDIT_WRITE\" | \"BLOCK_SUSPEND\" | \"CHOWN\" | \"DAC_OVERRIDE\" | \"DAC_READ_SEARCH\" | \"FOWNER\" | \"FSETID\" | \"IPC_LOCK\" | \"IPC_OWNER\" | \"KILL\" | \"LEASE\" | \"LINUX_IMMUTABLE\" | \"MAC_ADMIN\" | \"MAC_OVERRIDE\" | \"MKNOD\" | \"NET_ADMIN\" | \"NET_BIND_SERVICE\" | \"NET_BROADCAST\" | \"NET_RAW\" | \"SETFCAP\" | \"SETGID\" | \"SETPCAP\" | \"SETUID\" | \"SYS_ADMIN\" | \"SYS_BOOT\" | \"SYS_CHROOT\" | \"SYS_MODULE\" | \"SYS_NICE\" | \"SYS_PACCT\" | \"SYS_PTRACE\" | \"SYS_RAWIO\" | \"SYS_RESOURCE\" | \"SYS_TIME\" | \"SYS_TTY_CONFIG\" | \"SYSLOG\" | \"WAKE_ALARM\"`", "Drop": "The Linux capabilities for the container that have been removed from the default configuration provided by Docker. This parameter maps to `CapDrop` in the docker container create command and the `--cap-drop` option to docker run.\n\nValid values: `\"ALL\" | \"AUDIT_CONTROL\" | \"AUDIT_WRITE\" | \"BLOCK_SUSPEND\" | \"CHOWN\" | \"DAC_OVERRIDE\" | \"DAC_READ_SEARCH\" | \"FOWNER\" | \"FSETID\" | \"IPC_LOCK\" | \"IPC_OWNER\" | \"KILL\" | \"LEASE\" | \"LINUX_IMMUTABLE\" | \"MAC_ADMIN\" | \"MAC_OVERRIDE\" | \"MKNOD\" | \"NET_ADMIN\" | \"NET_BIND_SERVICE\" | \"NET_BROADCAST\" | \"NET_RAW\" | \"SETFCAP\" | \"SETGID\" | \"SETPCAP\" | \"SETUID\" | \"SYS_ADMIN\" | \"SYS_BOOT\" | \"SYS_CHROOT\" | \"SYS_MODULE\" | \"SYS_NICE\" | \"SYS_PACCT\" | \"SYS_PTRACE\" | \"SYS_RAWIO\" | \"SYS_RESOURCE\" | \"SYS_TIME\" | \"SYS_TTY_CONFIG\" | \"SYSLOG\" | \"WAKE_ALARM\"`" @@ -16740,7 +16785,7 @@ }, "AWS::ECS::TaskDefinition LogConfiguration": { "LogDriver": "The log driver to use for the container.\n\nFor tasks on AWS Fargate , the supported log drivers are `awslogs` , `splunk` , and `awsfirelens` .\n\nFor tasks hosted on Amazon EC2 instances, the supported log drivers are `awslogs` , `fluentd` , `gelf` , `json-file` , `journald` , `syslog` , `splunk` , and `awsfirelens` .\n\nFor more information about using the `awslogs` log driver, see [Send Amazon ECS logs to CloudWatch](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html) in the *Amazon Elastic Container Service Developer Guide* .\n\nFor more information about using the `awsfirelens` log driver, see [Send Amazon ECS logs to an AWS service or AWS Partner](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_firelens.html) .\n\n> If you have a custom driver that isn't listed, you can fork the Amazon ECS container agent project that's [available on GitHub](https://docs.aws.amazon.com/https://github.com/aws/amazon-ecs-agent) and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. However, we don't currently provide support for running modified copies of this software.", - "Options": "The configuration options to send to the log driver.\n\nThe options you can specify depend on the log driver. Some of the options you can specify when you use the `awslogs` log driver to route logs to Amazon CloudWatch include the following:\n\n- **awslogs-create-group** - Required: No\n\nSpecify whether you want the log group to be created automatically. If this option isn't specified, it defaults to `false` .\n\n> Your IAM policy must include the `logs:CreateLogGroup` permission before you attempt to use `awslogs-create-group` .\n- **awslogs-region** - Required: Yes\n\nSpecify the AWS Region that the `awslogs` log driver is to send your Docker logs to. 
You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.\n- **awslogs-group** - Required: Yes\n\nMake sure to specify a log group that the `awslogs` log driver sends its log streams to.\n- **awslogs-stream-prefix** - Required: Yes, when using Fargate.Optional when using EC2.\n\nUse the `awslogs-stream-prefix` option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format `prefix-name/container-name/ecs-task-id` .\n\nIf you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.\n\nFor Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.\n\nYou must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console.\n- **awslogs-datetime-format** - Required: No\n\nThis option defines a multiline start pattern in Python `strftime` format. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. The matched line is the delimiter between log messages.\n\nOne example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.\n\nFor more information, see [awslogs-datetime-format](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-datetime-format) .\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.\n- **awslogs-multiline-pattern** - Required: No\n\nThis option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. The matched line is the delimiter between log messages.\n\nFor more information, see [awslogs-multiline-pattern](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-multiline-pattern) .\n\nThis option is ignored if `awslogs-datetime-format` is also configured.\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. 
This might have a negative impact on logging performance.\n\nThe following options apply to all supported log drivers.\n\n- **mode** - Required: No\n\nValid values: `non-blocking` | `blocking`\n\nThis option defines the delivery mode of log messages from the container to the log driver specified using `logDriver` . The delivery mode you choose affects application availability when the flow of logs from container is interrupted.\n\nIf you use the `blocking` mode and the flow of logs is interrupted, calls from container code to write to the `stdout` and `stderr` streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.\n\nIf you use the `non-blocking` mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the `max-buffer-size` option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see [Preventing log loss with non-blocking mode in the `awslogs` container log driver](https://docs.aws.amazon.com/containers/preventing-log-loss-with-non-blocking-mode-in-the-awslogs-container-log-driver/) .\n\nYou can set a default `mode` for all containers in a specific AWS Region by using the `defaultLogDriverMode` account setting. If you don't specify the `mode` option or configure the account setting, Amazon ECS will default to the `blocking` mode. For more information about the account setting, see [Default log driver mode](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-account-settings.html#default-log-driver-mode) in the *Amazon Elastic Container Service Developer Guide* .\n\n> On June 25, 2025, Amazon ECS is changing the default log driver mode from `blocking` to `non-blocking` to prioritize task availability over logging. To continue using the `blocking` mode after this change, do one of the following:\n> \n> - Set the `mode` option in your container definition's `logConfiguration` as `blocking` .\n> - Set the `defaultLogDriverMode` account setting to `blocking` .\n- **max-buffer-size** - Required: No\n\nDefault value: `1m`\n\nWhen `non-blocking` mode is used, the `max-buffer-size` log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.\n\nTo route logs using the `splunk` log router, you need to specify a `splunk-token` and a `splunk-url` .\n\nWhen you use the `awsfirelens` log router to route logs to an AWS Service or AWS Partner Network destination for log storage and analytics, you can set the `log-driver-buffer-limit` option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help to resolve potential log loss issue because high throughput might result in memory running out for the buffer inside of Docker.\n\nOther options you can specify when using `awsfirelens` to route logs depend on the destination. 
When you export logs to Amazon Data Firehose, you can specify the AWS Region with `region` and a name for the log stream with `delivery_stream` .\n\nWhen you export logs to Amazon Kinesis Data Streams, you can specify an AWS Region with `region` and a data stream name with `stream` .\n\nWhen you export logs to Amazon OpenSearch Service, you can specify options like `Name` , `Host` (OpenSearch Service endpoint without protocol), `Port` , `Index` , `Type` , `Aws_auth` , `Aws_region` , `Suppress_Type_Name` , and `tls` . For more information, see [Under the hood: FireLens for Amazon ECS Tasks](https://docs.aws.amazon.com/containers/under-the-hood-firelens-for-amazon-ecs-tasks/) .\n\nWhen you export logs to Amazon S3, you can specify the bucket using the `bucket` option. You can also specify `region` , `total_file_size` , `upload_timeout` , and `use_put_object` as options.\n\nThis parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version --format '{{.Server.APIVersion}}'`", + "Options": "The configuration options to send to the log driver.\n\nThe options you can specify depend on the log driver. Some of the options you can specify when you use the `awslogs` log driver to route logs to Amazon CloudWatch include the following:\n\n- **awslogs-create-group** - Required: No\n\nSpecify whether you want the log group to be created automatically. If this option isn't specified, it defaults to `false` .\n\n> Your IAM policy must include the `logs:CreateLogGroup` permission before you attempt to use `awslogs-create-group` .\n- **awslogs-region** - Required: Yes\n\nSpecify the AWS Region that the `awslogs` log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.\n- **awslogs-group** - Required: Yes\n\nMake sure to specify a log group that the `awslogs` log driver sends its log streams to.\n- **awslogs-stream-prefix** - Required: Yes, when using Fargate. Optional when using EC2.\n\nUse the `awslogs-stream-prefix` option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format `prefix-name/container-name/ecs-task-id` .\n\nIf you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.\n\nFor Amazon ECS services, you can use the service name as the prefix. 
By doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.\n\nYou must specify a stream-prefix for your logs so that your logs appear in the Log pane when using the Amazon ECS console.\n- **awslogs-datetime-format** - Required: No\n\nThis option defines a multiline start pattern in Python `strftime` format. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. The matched line is the delimiter between log messages.\n\nOne example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.\n\nFor more information, see [awslogs-datetime-format](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-datetime-format) .\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.\n- **awslogs-multiline-pattern** - Required: No\n\nThis option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. The matched line is the delimiter between log messages.\n\nFor more information, see [awslogs-multiline-pattern](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-multiline-pattern) .\n\nThis option is ignored if `awslogs-datetime-format` is also configured.\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.\n\nThe following options apply to all supported log drivers.\n\n- **mode** - Required: No\n\nValid values: `non-blocking` | `blocking`\n\nThis option defines the delivery mode of log messages from the container to the log driver specified using `logDriver` . The delivery mode you choose affects application availability when the flow of logs from the container is interrupted.\n\nIf you use the `blocking` mode and the flow of logs is interrupted, calls from container code to write to the `stdout` and `stderr` streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.\n\nIf you use the `non-blocking` mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the `max-buffer-size` option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see [Preventing log loss with non-blocking mode in the `awslogs` container log driver](https://docs.aws.amazon.com/containers/preventing-log-loss-with-non-blocking-mode-in-the-awslogs-container-log-driver/) .\n\nYou can set a default `mode` for all containers in a specific AWS Region by using the `defaultLogDriverMode` account setting. 
If you don't specify the `mode` option or configure the account setting, Amazon ECS will default to the `non-blocking` mode. For more information about the account setting, see [Default log driver mode](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-account-settings.html#default-log-driver-mode) in the *Amazon Elastic Container Service Developer Guide* .\n\n> On June 25, 2025, Amazon ECS changed the default log driver mode from `blocking` to `non-blocking` to prioritize task availability over logging. To continue using the `blocking` mode after this change, do one of the following:\n> \n> - Set the `mode` option in your container definition's `logConfiguration` as `blocking` .\n> - Set the `defaultLogDriverMode` account setting to `blocking` .\n- **max-buffer-size** - Required: No\n\nDefault value: `1m`\n\nWhen `non-blocking` mode is used, the `max-buffer-size` log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.\n\nTo route logs using the `splunk` log router, you need to specify a `splunk-token` and a `splunk-url` .\n\nWhen you use the `awsfirelens` log router to route logs to an AWS Service or AWS Partner Network destination for log storage and analytics, you can set the `log-driver-buffer-limit` option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help to resolve potential log loss issues because high throughput might result in memory running out for the buffer inside of Docker.\n\nOther options you can specify when using `awsfirelens` to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the AWS Region with `region` and a name for the log stream with `delivery_stream` .\n\nWhen you export logs to Amazon Kinesis Data Streams, you can specify an AWS Region with `region` and a data stream name with `stream` .\n\nWhen you export logs to Amazon OpenSearch Service, you can specify options like `Name` , `Host` (OpenSearch Service endpoint without protocol), `Port` , `Index` , `Type` , `Aws_auth` , `Aws_region` , `Suppress_Type_Name` , and `tls` . For more information, see [Under the hood: FireLens for Amazon ECS Tasks](https://docs.aws.amazon.com/containers/under-the-hood-firelens-for-amazon-ecs-tasks/) .\n\nWhen you export logs to Amazon S3, you can specify the bucket using the `bucket` option. You can also specify `region` , `total_file_size` , `upload_timeout` , and `use_put_object` as options.\n\nThis parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version --format '{{.Server.APIVersion}}'`", "SecretOptions": "The secrets to pass to the log configuration. For more information, see [Specifying sensitive data](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data.html) in the *Amazon Elastic Container Service Developer Guide* ." 
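Because the `Options` map described above is a free-form string-to-string map in the schema, a concrete sketch may help. This fragment is illustrative only; the log group name, Region, and prefix are hypothetical placeholders, while the option keys are exactly the ones documented above:

```json
{
  "LogConfiguration": {
    "LogDriver": "awslogs",
    "Options": {
      "awslogs-group": "/ecs/example-service",
      "awslogs-create-group": "true",
      "awslogs-region": "us-east-1",
      "awslogs-stream-prefix": "web",
      "mode": "non-blocking",
      "max-buffer-size": "25m"
    }
  }
}
```

Note that every value in the map is a string (including `"true"`), `awslogs-stream-prefix` is required on Fargate, and `max-buffer-size` only takes effect in `non-blocking` mode, which is now the documented default per the June 25, 2025 change described above.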
}, "AWS::ECS::TaskDefinition MountPoint": { @@ -17634,6 +17679,7 @@ "Architecture": "The CPU architecture of an application.", "AutoStartConfiguration": "The configuration for an application to automatically start on job submission.", "AutoStopConfiguration": "The configuration for an application to automatically stop after a certain amount of time being idle.", + "IdentityCenterConfiguration": "A configuration specification to be used when provisioning an application. A configuration consists of a classification, properties, and optional nested configurations. A classification refers to an application-specific configuration file. Properties are the settings you want to change in that file.", "ImageConfiguration": "The image configuration applied to all worker types.", "InitialCapacity": "The initial capacity of the application.", "InteractiveConfiguration": "The interactive configuration object that enables the interactive use cases for an application.", @@ -17667,6 +17713,9 @@ "Configurations": "A list of additional configurations to apply within a configuration object.", "Properties": "A set of properties specified within a configuration classification." }, + "AWS::EMRServerless::Application IdentityCenterConfiguration": { + "IdentityCenterInstanceArn": "" + }, "AWS::EMRServerless::Application ImageConfigurationInput": { "ImageUri": "The URI of an image in the Amazon ECR registry. This field is required when you create a new application. If you leave this field blank in an update, Amazon EMR will remove the image configuration." }, @@ -17735,7 +17784,7 @@ "Hosts": "Required for environment resource creation.", "InitialVlans": "> Amazon EVS is in public preview release and is subject to change. \n\nThe initial VLAN subnets for the environment. Amazon EVS VLAN subnets have a minimum CIDR block size of /28 and a maximum size of /24. Amazon EVS VLAN subnet CIDR blocks must not overlap with other subnets in the VPC.\n\nRequired for environment resource creation.", "KmsKeyId": "The AWS KMS key ID that AWS Secrets Manager uses to encrypt secrets that are associated with the environment. These secrets contain the VCF credentials that are needed to install vCenter Server, NSX, and SDDC Manager.\n\nBy default, Amazon EVS use the AWS Secrets Manager managed key `aws/secretsmanager` . You can also specify a customer managed key.", - "LicenseInfo": "The license information that Amazon EVS requires to create an environment. Amazon EVS requires two license keys: a VCF solution key and a vSAN license key.", + "LicenseInfo": "The license information that Amazon EVS requires to create an environment. Amazon EVS requires two license keys: a VCF solution key and a vSAN license key. The VCF solution key must cover a minimum of 256 cores. The vSAN license key must provide at least 110 TiB of vSAN capacity.", "ServiceAccessSecurityGroups": "The security groups that allow traffic between the Amazon EVS control plane and your VPC for service access. If a security group is not specified, Amazon EVS uses the default security group in your account for service access.", "ServiceAccessSubnetId": "The subnet that is used to establish connectivity between the Amazon EVS control plane and VPC. Amazon EVS uses this subnet to perform validations and create the environment.", "SiteId": "The Broadcom Site ID that is associated with your Amazon EVS environment. 
Amazon EVS uses the Broadcom Site ID that you provide to meet Broadcom VCF license usage reporting requirements for Amazon EVS.", @@ -17776,8 +17825,8 @@ "VmkManagement": "The host VMkernel management VLAN subnet. This VLAN subnet carries traffic for managing ESXi hosts and communicating with VMware vCenter Server." }, "AWS::EVS::Environment LicenseInfo": { - "SolutionKey": "The VCF solution key. This license unlocks VMware VCF product features, including vSphere, NSX, SDDC Manager, and vCenter Server.", - "VsanKey": "The VSAN license key. This license unlocks vSAN features." + "SolutionKey": "The VCF solution key. This license unlocks VMware VCF product features, including vSphere, NSX, SDDC Manager, and vCenter Server. The VCF solution key must cover a minimum of 256 cores.", + "VsanKey": "The VSAN license key. This license unlocks vSAN features. The vSAN license key must provide at least 110 TiB of vSAN capacity." }, "AWS::EVS::Environment Secret": { "SecretArn": "The Amazon Resource Name (ARN) of the secret." }, @@ -19477,6 +19526,37 @@ "ThroughputCapacity": "Sets the throughput capacity of an Amazon FSx file system, measured in megabytes per second (MB/s), in 2 to the *n* th increments, between 2^3 (8) and 2^11 (2048).\n\n> To increase storage capacity, a file system must have a minimum throughput capacity of 16 MB/s.", "WeeklyMaintenanceStartTime": "The preferred start time to perform weekly maintenance, formatted d:HH:MM in the UTC time zone, where d is the weekday number, from 1 through 7, beginning with Monday and ending with Sunday." }, + "AWS::FSx::S3AccessPointAttachment": { + "Name": "The name of the S3 access point attachment; also used for the name of the S3 access point.", + "OpenZFSConfiguration": "The OpenZFSConfiguration of the S3 access point attachment.", + "S3AccessPoint": "The S3 access point configuration of the S3 access point attachment.", + "Type": "The type of Amazon FSx volume that the S3 access point is attached to." + }, + "AWS::FSx::S3AccessPointAttachment FileSystemGID": { + "Gid": "The GID of the file system user." + }, + "AWS::FSx::S3AccessPointAttachment OpenZFSFileSystemIdentity": { + "PosixUser": "Specifies the UID and GIDs of the file system POSIX user.", + "Type": "Specifies the FSx for OpenZFS user identity type; accepts only `POSIX` ." + }, + "AWS::FSx::S3AccessPointAttachment OpenZFSPosixFileSystemUser": { + "Gid": "The GID of the file system user.", + "SecondaryGids": "The list of secondary GIDs for the file system user.", + "Uid": "The UID of the file system user." + }, + "AWS::FSx::S3AccessPointAttachment S3AccessPoint": { + "Alias": "The S3 access point's alias.", + "Policy": "The S3 access point's policy.", + "ResourceARN": "The S3 access point's ARN.", + "VpcConfiguration": "The S3 access point's virtual private cloud (VPC) configuration." + }, + "AWS::FSx::S3AccessPointAttachment S3AccessPointOpenZFSConfiguration": { + "FileSystemIdentity": "The file system identity used to authorize file access requests made using the S3 access point.", + "VolumeId": "The ID of the FSx for OpenZFS volume that the S3 access point is attached to." + }, + "AWS::FSx::S3AccessPointAttachment S3AccessPointVpcConfiguration": { + "VpcId": "Specifies the virtual private cloud (VPC) for the S3 access point VPC configuration, if one exists." 
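To make the new `AWS::FSx::S3AccessPointAttachment` resource above concrete, here is a minimal sketch assembled purely from the property descriptions in this diff. The volume ID, VPC ID, POSIX IDs, and the `OPENZFS` type value are assumptions, so check the published resource reference before relying on exact shapes:

```json
{
  "Type": "AWS::FSx::S3AccessPointAttachment",
  "Properties": {
    "Name": "example-openzfs-access-point",
    "Type": "OPENZFS",
    "OpenZFSConfiguration": {
      "VolumeId": "fsvol-0123456789abcdef0",
      "FileSystemIdentity": {
        "Type": "POSIX",
        "PosixUser": {
          "Uid": 1000,
          "Gid": 1000,
          "SecondaryGids": [1001, 1002]
        }
      }
    },
    "S3AccessPoint": {
      "VpcConfiguration": {
        "VpcId": "vpc-0123456789abcdef0"
      }
    }
  }
}
```

`Alias` and `ResourceARN` read like values returned by the service rather than inputs, so they are omitted here; requests arriving through the access point would be authorized as the POSIX identity given in `FileSystemIdentity`.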
+ }, "AWS::FSx::Snapshot": { "Name": "The name of the snapshot.", "Tags": "A list of `Tag` values, with a maximum of 50 elements.", @@ -25248,6 +25328,7 @@ "SalesforceConfiguration": "Provides the configuration information to connect to Salesforce as your data source.", "ServiceNowConfiguration": "Provides the configuration information to connect to ServiceNow as your data source.", "SharePointConfiguration": "Provides the configuration information to connect to Microsoft SharePoint as your data source.", + "TemplateConfiguration": "Provides a template for the configuration information to connect to your data source.", "WebCrawlerConfiguration": "Provides the configuration information required for Amazon Kendra Web Crawler.", "WorkDocsConfiguration": "Provides the configuration information to connect to Amazon WorkDocs as your data source." }, @@ -25425,6 +25506,9 @@ "Key": "The key for the tag. Keys are not case sensitive and must be unique for the index, FAQ, data source, or other resource.", "Value": "The value associated with the tag. The value may be an empty string but it can't be null." }, + "AWS::Kendra::DataSource TemplateConfiguration": { + "Template": "The template schema used for the data source, where template schemas are supported.\n\nSee [Data source template schemas](https://docs.aws.amazon.com/kendra/latest/dg/ds-schemas.html) ." + }, "AWS::Kendra::DataSource WebCrawlerAuthenticationConfiguration": { "BasicAuthentication": "The list of configuration information that's required to connect to and crawl a website host using basic authentication credentials.\n\nThe list includes the name and port number of the website host." }, @@ -26923,12 +27007,17 @@ "AWS::Lex::Bot BotLocale": { "CustomVocabulary": "Specifies a custom vocabulary to use with a specific locale.", "Description": "A description of the bot locale. Use this to help identify the bot locale in lists.", + "GenerativeAISettings": "", "Intents": "One or more intents defined for the locale.", "LocaleId": "The identifier of the language and locale that the bot will be used in. The string must match one of the supported locales.", "NluConfidenceThreshold": "Determines the threshold where Amazon Lex will insert the `AMAZON.FallbackIntent` , `AMAZON.KendraSearchIntent` , or both when returning alternative intents. You must configure an `AMAZON.FallbackIntent` . `AMAZON.KendraSearchIntent` is only inserted if it is configured for the bot.", "SlotTypes": "One or more slot types defined for the locale.", "VoiceSettings": "Defines settings for using an Amazon Polly voice to communicate with a user.\n\nValid values include:\n\n- `standard`\n- `neural`\n- `long-form`\n- `generative`" }, + "AWS::Lex::Bot BuildtimeSettings": { + "DescriptiveBotBuilderSpecification": "", + "SampleUtteranceGenerationSpecification": "" + }, "AWS::Lex::Bot Button": { "Text": "The text that appears on the button. Use this to tell the user what value is returned when they choose this button.", "Value": "The value returned to Amazon Lex when the user chooses this button. This must be one of the slot values configured for the slot."
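For the `AWS::Kendra::DataSource TemplateConfiguration` entry above, the `Template` value is itself a connector-specific JSON document. The following heavily abbreviated sketch is an assumption-laden illustration; the `connectionConfiguration`, `type`, `syncMode`, and `version` keys and their values are recalled from the linked template-schema guide, not taken from this diff:

```json
{
  "TemplateConfiguration": {
    "Template": {
      "connectionConfiguration": {
        "repositoryEndpointMetadata": {
          "siteUrls": ["https://example.com"]
        }
      },
      "repositoryConfigurations": {},
      "type": "WEBCRAWLERV2",
      "syncMode": "FORCED_FULL_CRAWL",
      "version": "1.0.0"
    }
  }
}
```

The authoritative shape for each connector is in the linked [Data source template schemas](https://docs.aws.amazon.com/kendra/latest/dg/ds-schemas.html) page; treat the keys above as placeholders.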
}, + "AWS::Lex::Bot DescriptiveBotBuilderSpecification": { + "BedrockModelSpecification": "An object containing information about the Amazon Bedrock model used to interpret the prompt used in descriptive bot building.", + "Enabled": "Specifies whether the descriptive bot building feature is activated or not." + }, "AWS::Lex::Bot DialogAction": { "SlotToElicit": "If the dialog action is `ElicitSlot` , defines the slot to elicit from the user.", "SuppressNextMessage": "When true the next message for the intent is not used.", @@ -27042,6 +27135,10 @@ "TimeoutInSeconds": "The length of time that the fulfillment Lambda function should run before it times out.", "UpdateResponse": "Provides configuration information for messages sent periodically to the user while the fulfillment Lambda function is running." }, + "AWS::Lex::Bot GenerativeAISettings": { + "BuildtimeSettings": "", + "RuntimeSettings": "" + }, "AWS::Lex::Bot GrammarSlotTypeSetting": { "Source": "The source of the grammar used to create the slot type." }, @@ -27131,6 +27228,9 @@ "AWS::Lex::Bot MultipleValuesSetting": { "AllowMultipleValues": "Indicates whether a slot can return multiple values. When `true` , the slot may return more than one value in a response. When `false` , the slot returns only a single value.\n\nMulti-value slots are only available in the en-US locale. If you set this value to `true` in any other locale, Amazon Lex throws a `ValidationException` .\n\nIf the `allowMutlipleValues` is not set, the default value is `false` ." }, + "AWS::Lex::Bot NluImprovementSpecification": { + "Enabled": "Determines whether the Assisted NLU feature is enabled for the bot. When set to `true` , Amazon Lex uses advanced models to improve intent recognition and slot resolution, with the default being `false` ." + }, "AWS::Lex::Bot ObfuscationSetting": { "ObfuscationSettingType": "Value that determines whether Amazon Lex obscures slot values in conversation logs. The default is to obscure the values." }, @@ -27207,6 +27307,10 @@ "AllowInterrupt": "Indicates whether the user can interrupt a speech response from Amazon Lex.", "MessageGroupsList": "A collection of responses that Amazon Lex can send to the user. Amazon Lex chooses the actual response to send at runtime." }, + "AWS::Lex::Bot RuntimeSettings": { + "NluImprovementSpecification": "", + "SlotResolutionImprovementSpecification": "" + }, "AWS::Lex::Bot S3BucketLogDestination": { "KmsKeyArn": "The Amazon Resource Name (ARN) of an AWS Key Management Service (KMS) key for encrypting audio log files stored in an Amazon S3 bucket.", "LogPrefix": "The S3 prefix to assign to audio log files.", @@ -27223,6 +27327,10 @@ "AWS::Lex::Bot SampleUtterance": { "Utterance": "A sample utterance that invokes an intent or respond to a slot elicitation prompt." }, + "AWS::Lex::Bot SampleUtteranceGenerationSpecification": { + "BedrockModelSpecification": "", + "Enabled": "Specifies whether to enable sample utterance generation or not." + }, "AWS::Lex::Bot SampleValue": { "Value": "The value that can be used for a slot type." }, @@ -27262,6 +27370,10 @@ "Priority": "The priority that Amazon Lex should apply to the slot.", "SlotName": "The name of the slot." }, + "AWS::Lex::Bot SlotResolutionImprovementSpecification": { + "BedrockModelSpecification": "An object containing information about the Amazon Bedrock model used to assist slot resolution.", + "Enabled": "Specifies whether assisted slot resolution is turned on or off." 
+ }, "AWS::Lex::Bot SlotType": { "CompositeSlotTypeSetting": "", "Description": "A description of the slot type. Use the description to help identify the slot type in lists.", @@ -28095,6 +28207,11 @@ "AWS::Logs::Transformer ParseRoute53": { "Source": "Omit this parameter and the whole log message will be processed by this processor. No other value than `@message` is allowed for `source` ." }, + "AWS::Logs::Transformer ParseToOCSF": { + "EventSource": "Specify the service or process that produces the log events that will be converted with this processor.", + "OcsfVersion": "Specify which version of the OCSF schema to use for the transformed log events.", + "Source": "The path to the field in the log event that you want to parse. If you omit this value, the whole log message is parsed." + }, "AWS::Logs::Transformer ParseVPC": { "Source": "Omit this parameter and the whole log message will be processed by this processor. No other value than `@message` is allowed for `source` ." }, @@ -28116,6 +28233,7 @@ "ParseKeyValue": "Use this parameter to include the [parseKeyValue](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CloudWatch-Logs-Transformation-Processors.html#CloudWatch-Logs-Transformation-parseKeyValue) processor in your transformer.", "ParsePostgres": "Use this parameter to include the [parsePostGres](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CloudWatch-Logs-Transformation.html#CloudWatch-Logs-Transformation-parsePostGres) processor in your transformer.\n\nIf you use this processor, it must be the first processor in your transformer.", "ParseRoute53": "Use this parameter to include the [parseRoute53](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CloudWatch-Logs-Transformation-Processors.html#CloudWatch-Logs-Transformation-parseRoute53) processor in your transformer.\n\nIf you use this processor, it must be the first processor in your transformer.", + "ParseToOCSF": "Use this processor to convert logs into Open Cybersecurity Schema Framework (OCSF) format", "ParseVPC": "Use this parameter to include the [parseVPC](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CloudWatch-Logs-Transformation-Processors.html#CloudWatch-Logs-Transformation-parseVPC) processor in your transformer.\n\nIf you use this processor, it must be the first processor in your transformer.", "ParseWAF": "Use this parameter to include the [parseWAF](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CloudWatch-Logs-Transformation.html#CloudWatch-Logs-Transformation-parseWAF) processor in your transformer.\n\nIf you use this processor, it must be the first processor in your transformer.", "RenameKeys": "Use this parameter to include the [renameKeys](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CloudWatch-Logs-Transformation.html#CloudWatch-Logs-Transformation-renameKeys) processor in your transformer.", @@ -28363,7 +28481,7 @@ "Approvers": "Contains details for an approver.", "Description": "Description for the team.", "Name": "Name of the team.", - "Policies": "Contains details for a policy. Policies define what operations a team that define the permissions for team resources.\n\nThe protected operation for a service integration might require specific permissions. For more information, see [How other services work with Multi-party approval](https://docs.aws.amazon.com/mpa/latest/userguide/mpa-integrations.html) in the *Multi-party approval User Guide* .", + "Policies": "Contains details for a policy. 
Policies define what operations a team that define the permissions for team resources.", "Tags": "Tags that you have added to the specified resource." }, "AWS::MPA::ApprovalTeam ApprovalStrategy": { @@ -31103,6 +31221,7 @@ "ConfigurationAliases": "The player parameters and aliases used as dynamic variables during session initialization. For more information, see [Domain Variables](https://docs.aws.amazon.com/mediatailor/latest/ug/variables-domain.html) .", "DashConfiguration": "The configuration for a DASH source.", "HlsConfiguration": "The configuration for HLS content.", + "InsertionMode": "The setting that controls whether players can use stitched or guided ad insertion. The default, `STITCHED_ONLY` , forces all player sessions to use stitched (server-side) ad insertion. Choosing `PLAYER_SELECT` allows players to select either stitched or guided ad insertion at session-initialization time. The default for players that do not specify an insertion mode is stitched.", "LivePreRollConfiguration": "The configuration for pre-roll ad insertion.", "LogConfiguration": "Defines where AWS Elemental MediaTailor sends logs for the playback configuration.", "ManifestProcessingRules": "The configuration for manifest processing rules. Manifest processing rules enable customization of the personalized manifests created by MediaTailor.", @@ -31368,7 +31487,6 @@ "DBInstanceClass": "Contains the name of the compute and memory capacity class of the DB instance.\n\nIf you update this property, some interruptions may occur.", "DBInstanceIdentifier": "Contains a user-supplied database identifier. This identifier is the unique key that identifies a DB instance.", "DBParameterGroupName": "The name of an existing DB parameter group or a reference to an AWS::Neptune::DBParameterGroup resource created in the template. If any of the data members of the referenced parameter group are changed during an update, the DB instance might need to be restarted, which causes some interruption. If the parameter group contains static parameters, whether they were changed or not, an update triggers a reboot.", - "DBSnapshotIdentifier": "This parameter is not supported.\n\n`AWS::Neptune::DBInstance` does not support restoring from snapshots.\n\n`AWS::Neptune::DBCluster` does support restoring from snapshots.", "DBSubnetGroupName": "A DB subnet group to associate with the DB instance. If you update this value, the new subnet group must be a subnet group in a new virtual private cloud (VPC).", "PreferredMaintenanceWindow": "Specifies the weekly time range during which system maintenance can occur, in Universal Coordinated Time (UTC).", "Tags": "An arbitrary set of tags (key-value pairs) for this DB instance." @@ -31952,7 +32070,9 @@ }, "AWS::NetworkManager::VpcAttachment VpcOptions": { "ApplianceModeSupport": "Indicates whether appliance mode is supported. If enabled, traffic flow between a source and destination use the same Availability Zone for the VPC attachment for the lifetime of that flow. The default value is `false` .", - "Ipv6Support": "Indicates whether IPv6 is supported." + "DnsSupport": "Indicates whether DNS is supported.", + "Ipv6Support": "Indicates whether IPv6 is supported.", + "SecurityGroupReferencingSupport": "Indicates whether security group referencing is enabled for this VPC attachment. The default is `true` . However, at the core network policy-level the default is set to `false` ." 
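The expanded `VpcOptions` above adds `DnsSupport` and `SecurityGroupReferencingSupport` alongside the existing flags. A minimal sketch of how the options compose on an attachment; the core network ID, VPC ARN, and subnet ARN are placeholders:

```yaml
# Minimal sketch; core network ID, VPC ARN, and subnet ARN are placeholders.
ExampleVpcAttachment:
  Type: AWS::NetworkManager::VpcAttachment
  Properties:
    CoreNetworkId: core-network-0123456789abcdef0
    VpcArn: arn:aws:ec2:us-east-1:111122223333:vpc/vpc-0123456789abcdef0
    SubnetArns:
      - arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0123456789abcdef0
    Options:
      ApplianceModeSupport: false
      DnsSupport: true
      Ipv6Support: false
      SecurityGroupReferencingSupport: true   # default true here; false at the core network policy level
```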
}, "AWS::Notifications::ChannelAssociation": { "Arn": "The Amazon Resource Name (ARN) of the `Channel` .", @@ -34252,7 +34372,7 @@ }, "AWS::QBusiness::DataSource HookConfiguration": { "InvocationCondition": "The condition used for when a Lambda function should be invoked.\n\nFor example, you can specify a condition that if there are empty date-time values, then Amazon Q Business should invoke a function that inserts the current date-time.", - "LambdaArn": "The Amazon Resource Name (ARN) of the Lambda function sduring ingestion. For more information, see [Using Lambda functions for Amazon Q Business document enrichment](https://docs.aws.amazon.com/amazonq/latest/qbusiness-ug/cde-lambda-operations.html) .", + "LambdaArn": "The Amazon Resource Name (ARN) of the Lambda function during ingestion. For more information, see [Using Lambda functions for Amazon Q Business document enrichment](https://docs.aws.amazon.com/amazonq/latest/qbusiness-ug/cde-lambda-operations.html) .", "RoleArn": "The Amazon Resource Name (ARN) of a role with permission to run `PreExtractionHookConfiguration` and `PostExtractionHookConfiguration` for altering document metadata and content during the document ingestion process.", "S3BucketName": "Stores the original, raw documents or the structured, parsed documents before and after altering them. For more information, see [Data contracts for Lambda functions](https://docs.aws.amazon.com/amazonq/latest/business-use-dg/cde-lambda-operations.html#cde-lambda-operations-data-contracts) ." }, @@ -45567,12 +45687,12 @@ "ResourceId": "The ID of the virtual private cloud (VPC) that you're configuring the DNSSEC validation status for." }, "AWS::Route53Resolver::ResolverEndpoint": { - "Direction": "Indicates whether the Resolver endpoint allows inbound or outbound DNS queries:\n\n- `INBOUND` : allows DNS queries to your VPC from your network\n- `OUTBOUND` : allows DNS queries from your VPC to your network", + "Direction": "Indicates whether the Resolver endpoint allows inbound or outbound DNS queries:\n\n- `INBOUND` : allows DNS queries to your VPC from your network\n- `OUTBOUND` : allows DNS queries from your VPC to your network\n- `INBOUND_DELEGATION` : Resolver delegates queries to Route 53 private hosted zones from your network.", "IpAddresses": "The subnets and IP addresses in your VPC that DNS queries originate from (for outbound endpoints) or that you forward DNS queries to (for inbound endpoints). The subnet ID uniquely identifies a VPC.\n\n> Even though the minimum is 1, Route\u00a053 requires that you create at least two.", "Name": "A friendly name that lets you easily find a configuration in the Resolver dashboard in the Route 53 console.", "OutpostArn": "The ARN (Amazon Resource Name) for the Outpost.", "PreferredInstanceType": "The Amazon EC2 instance type.", - "Protocols": "Protocols used for the endpoint. DoH-FIPS is applicable for inbound endpoints only.\n\nFor an inbound endpoint you can apply the protocols as follows:\n\n- Do53 and DoH in combination.\n- Do53 and DoH-FIPS in combination.\n- Do53 alone.\n- DoH alone.\n- DoH-FIPS alone.\n- None, which is treated as Do53.\n\nFor an outbound endpoint you can apply the protocols as follows:\n\n- Do53 and DoH in combination.\n- Do53 alone.\n- DoH alone.\n- None, which is treated as Do53.", + "Protocols": "Protocols used for the endpoint. 
DoH-FIPS is applicable for a default inbound endpoints only.\n\nFor an inbound endpoint you can apply the protocols as follows:\n\n- Do53 and DoH in combination.\n- Do53 and DoH-FIPS in combination.\n- Do53 alone.\n- DoH alone.\n- DoH-FIPS alone.\n- None, which is treated as Do53.\n\nFor a delegation inbound endpoint you can use Do53 only.\n\nFor an outbound endpoint you can apply the protocols as follows:\n\n- Do53 and DoH in combination.\n- Do53 alone.\n- DoH alone.\n- None, which is treated as Do53.", "ResolverEndpointType": "The Resolver endpoint IP address type.", "SecurityGroupIds": "The ID of one or more security groups that control access to this VPC. The security group must include one or more inbound rules (for inbound endpoints) or outbound rules (for outbound endpoints). Inbound and outbound rules must allow TCP and UDP access. For inbound access, open port 53. For outbound access, open the port that you're using for DNS queries on your network.", "Tags": "Route 53 Resolver doesn't support updating tags through CloudFormation." @@ -45600,10 +45720,11 @@ "ResourceId": "The ID of the Amazon VPC that is associated with the query logging configuration." }, "AWS::Route53Resolver::ResolverRule": { + "DelegationRecord": "DNS queries with delegation records that point to this domain name are forwarded to resolvers on your network.", "DomainName": "DNS queries for this domain name are forwarded to the IP addresses that are specified in `TargetIps` . If a query matches multiple Resolver rules (example.com and www.example.com), the query is routed using the Resolver rule that contains the most specific domain name (www.example.com).", "Name": "The name for the Resolver rule, which you specified when you created the Resolver rule.", "ResolverEndpointId": "The ID of the endpoint that the rule is associated with.", - "RuleType": "When you want to forward DNS queries for specified domain name to resolvers on your network, specify `FORWARD` .\n\nWhen you have a forwarding rule to forward DNS queries for a domain to your network and you want Resolver to process queries for a subdomain of that domain, specify `SYSTEM` .\n\nFor example, to forward DNS queries for example.com to resolvers on your network, you create a rule and specify `FORWARD` for `RuleType` . To then have Resolver process queries for apex.example.com, you create a rule and specify `SYSTEM` for `RuleType` .\n\nCurrently, only Resolver can create rules that have a value of `RECURSIVE` for `RuleType` .", + "RuleType": "When you want to forward DNS queries for specified domain name to resolvers on your network, specify `FORWARD` or `DELEGATE` . If a query matches multiple Resolver rules (example.com and www.example.com), outbound DNS queries are routed using the Resolver rule that contains the most specific domain name (www.example.com).\n\nWhen you have a forwarding rule to forward DNS queries for a domain to your network and you want Resolver to process queries for a subdomain of that domain, specify `SYSTEM` .\n\nFor example, to forward DNS queries for example.com to resolvers on your network, you create a rule and specify `FORWARD` for `RuleType` . To then have Resolver process queries for apex.example.com, you create a rule and specify `SYSTEM` for `RuleType` .\n\nCurrently, only Resolver can create rules that have a value of `RECURSIVE` for `RuleType` .", "Tags": "Tags help organize and categorize your Resolver rules. 
Each tag consists of a key and an optional value, both of which you define.", "TargetIps": "An array that contains the IP addresses and ports that an outbound endpoint forwards DNS queries to. Typically, these are the IP addresses of DNS resolvers on your network." }, @@ -45870,7 +45991,7 @@ "EncryptionConfiguration": "Specifies encryption-related information.", "Metrics": "A container specifying replication metrics-related settings enabling replication metrics and events.", "ReplicationTime": "A container specifying S3 Replication Time Control (S3 RTC), including whether S3 RTC is enabled and the time when all objects and operations on objects must be replicated. Must be specified together with a `Metrics` block.", - "StorageClass": "The storage class to use when replicating objects, such as S3 Standard or reduced redundancy. By default, Amazon S3 uses the storage class of the source object to create the object replica.\n\nFor valid values, see the `StorageClass` element of the [PUT Bucket replication](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTreplication.html) action in the *Amazon S3 API Reference* ." + "StorageClass": "The storage class to use when replicating objects, such as S3 Standard or reduced redundancy. By default, Amazon S3 uses the storage class of the source object to create the object replica.\n\nFor valid values, see the `StorageClass` element of the [PUT Bucket replication](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTreplication.html) action in the *Amazon S3 API Reference* .\n\n`FSX_OPENZFS` is not an accepted value when replicating objects." }, "AWS::S3::Bucket ReplicationRule": { "DeleteMarkerReplication": "Specifies whether Amazon S3 replicates delete markers. If you specify a `Filter` in your replication configuration, you must also include a `DeleteMarkerReplication` element. If your `Filter` includes a `Tag` element, the `DeleteMarkerReplication` `Status` must be set to Disabled, because Amazon S3 does not support replicating delete markers for tag-based rules. For an example configuration, see [Basic Rule Configuration](https://docs.aws.amazon.com/AmazonS3/latest/dev/replication-add-config.html#replication-config-min-rule-config) .\n\nFor more information about delete marker replication, see [Basic Rule Configuration](https://docs.aws.amazon.com/AmazonS3/latest/dev/delete-marker-replication.html) .\n\n> If you are using an earlier version of the replication configuration, Amazon S3 handles replication of delete markers differently. For more information, see [Backward Compatibility](https://docs.aws.amazon.com/AmazonS3/latest/dev/replication-add-config.html#replication-backward-compat-considerations) .", @@ -46304,6 +46425,10 @@ "AWS::S3Outposts::Endpoint NetworkInterface": { "NetworkInterfaceId": "The ID for the network interface." }, + "AWS::S3Tables::Namespace": { + "Namespace": "The name of the namespace.", + "TableBucketARN": "" + }, "AWS::S3Tables::TableBucket": { "EncryptionConfiguration": "Configuration specifying how data should be encrypted. This structure defines the encryption algorithm and optional KMS key to be used for server-side encryption.", "TableBucketName": "The name for the table bucket.", @@ -51225,7 +51350,7 @@ "AuthType": "The type of IAM policy.\n\n- `NONE` : The resource does not use an IAM policy. This is the default.\n- `AWS_IAM` : The resource uses an IAM policy. 
When this type is used, auth is enabled and an auth policy is required.", "CertificateArn": "The Amazon Resource Name (ARN) of the certificate.", "CustomDomainName": "The custom domain name of the service.", - "DnsEntry": "The DNS information of the service.", + "DnsEntry": "Describes the DNS information of the service. This field is read-only.", "Name": "The name of the service. The name must be unique within the account. The valid characters are a-z, 0-9, and hyphens (-). You can't use a hyphen as the first or last character, or immediately after another hyphen.\n\nIf you don't specify a name, CloudFormation generates one. However, if you specify a name, and later want to replace the resource, you must specify a new name.", "Tags": "The tags for the service." }, diff --git a/schema_source/cloudformation.schema.json b/schema_source/cloudformation.schema.json index 0107c17d9..14e69a9ea 100644 --- a/schema_source/cloudformation.schema.json +++ b/schema_source/cloudformation.schema.json @@ -1743,7 +1743,7 @@ "type": "array" }, "Type": { - "markdownDescription": "The type represents the zone of trust for the analyzer.\n\n*Allowed Values* : ACCOUNT | ORGANIZATION | ACCOUNT_UNUSED_ACCESS | ORGANIZATION_UNUSED_ACCESS", + "markdownDescription": "The type represents the zone of trust for the analyzer.\n\n*Allowed Values* : ACCOUNT | ORGANIZATION | ACCOUNT_UNUSED_ACCESS | ACCOUNT_INTERNAL_ACCESS | ORGANIZATION_INTERNAL_ACCESS | ORGANIZATION_UNUSED_ACCESS", "title": "Type", "type": "string" } @@ -25129,7 +25129,7 @@ "type": "array" }, "Email": { - "markdownDescription": "", + "markdownDescription": "Specifies the email address associated with this trading partner.", "title": "Email", "type": "string" }, @@ -25139,7 +25139,7 @@ "type": "string" }, "Phone": { - "markdownDescription": "", + "markdownDescription": "Specifies the phone number associated with the partnership.", "title": "Phone", "type": "string" }, @@ -27214,7 +27214,7 @@ "type": "string" }, "ImageType": { - "markdownDescription": "The image type to match with the instance type to select an AMI. The supported values are different for `ECS` and `EKS` resources.\n\n- **ECS** - If the `imageIdOverride` parameter isn't specified, then a recent [Amazon ECS-optimized Amazon Linux 2 AMI](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html#al2ami) ( `ECS_AL2` ) is used. If a new image type is specified in an update, but neither an `imageId` nor a `imageIdOverride` parameter is specified, then the latest Amazon ECS optimized AMI for that image type that's supported by AWS Batch is used.\n\n- **ECS_AL2** - [Amazon Linux 2](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html#al2ami) : Default for all non-GPU instance families.\n- **ECS_AL2_NVIDIA** - [Amazon Linux 2 (GPU)](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html#gpuami) : Default for all GPU instance families (for example `P4` and `G4` ) and can be used for all non AWS Graviton-based instance types.\n- **ECS_AL2023** - [Amazon Linux 2023](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html) : AWS Batch supports Amazon Linux 2023.\n\n> Amazon Linux 2023 does not support `A1` instances.\n- **ECS_AL1** - [Amazon Linux](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html#alami) . Amazon Linux has reached the end-of-life of standard support. 
For more information, see [Amazon Linux AMI](https://docs.aws.amazon.com/amazon-linux-ami/) .\n- **EKS** - If the `imageIdOverride` parameter isn't specified, then a recent [Amazon EKS-optimized Amazon Linux AMI](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html) ( `EKS_AL2` ) is used. If a new image type is specified in an update, but neither an `imageId` nor a `imageIdOverride` parameter is specified, then the latest Amazon EKS optimized AMI for that image type that AWS Batch supports is used.\n\n- **EKS_AL2** - [Amazon Linux 2](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html) : Default for all non-GPU instance families.\n- **EKS_AL2_NVIDIA** - [Amazon Linux 2 (accelerated)](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html) : Default for all GPU instance families (for example, `P4` and `G4` ) and can be used for all non AWS Graviton-based instance types.", + "markdownDescription": "The image type to match with the instance type to select an AMI. The supported values are different for `ECS` and `EKS` resources.\n\n- **ECS** - If the `imageIdOverride` parameter isn't specified, then a recent [Amazon ECS-optimized Amazon Linux 2 AMI](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html#al2ami) ( `ECS_AL2` ) is used. If a new image type is specified in an update, but neither an `imageId` nor a `imageIdOverride` parameter is specified, then the latest Amazon ECS optimized AMI for that image type that's supported by AWS Batch is used.\n\n- **ECS_AL2** - [Amazon Linux 2](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html#al2ami) : Default for all non-GPU instance families.\n- **ECS_AL2_NVIDIA** - [Amazon Linux 2 (GPU)](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html#gpuami) : Default for all GPU instance families (for example `P4` and `G4` ) and can be used for all non AWS Graviton-based instance types.\n- **ECS_AL2023** - [Amazon Linux 2023](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html) : AWS Batch supports Amazon Linux 2023.\n\n> Amazon Linux 2023 does not support `A1` instances.\n- **ECS_AL1** - [Amazon Linux](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html#alami) . Amazon Linux has reached the end-of-life of standard support. For more information, see [Amazon Linux AMI](https://docs.aws.amazon.com/amazon-linux-ami/) .\n- **EKS** - If the `imageIdOverride` parameter isn't specified, then a recent [Amazon EKS-optimized Amazon Linux AMI](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html) ( `EKS_AL2` ) is used. 
If a new image type is specified in an update, but neither an `imageId` nor a `imageIdOverride` parameter is specified, then the latest Amazon EKS optimized AMI for that image type that AWS Batch supports is used.\n\n- **EKS_AL2** - [Amazon Linux 2](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html) : Default for all non-GPU instance families.\n- **EKS_AL2_NVIDIA** - [Amazon Linux 2 (accelerated)](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html) : Default for all GPU instance families (for example, `P4` and `G4` ) and can be used for all non AWS Graviton-based instance types.\n- **EKS_AL2023** - [Amazon Linux 2023](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html) : AWS Batch supports Amazon Linux 2023.\n\n> Amazon Linux 2023 does not support `A1` instances.\n- **EKS_AL2023_NVIDIA** - [Amazon Linux 2023 (accelerated)](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html) : GPU instance families and can be used for all non AWS Graviton-based instance types.", "title": "ImageType", "type": "string" } @@ -34505,7 +34505,7 @@ "title": "LoggingConfig" }, "SchemaHandlerPackage": { - "markdownDescription": "A URL to the Amazon S3 bucket containing the Hook project package that contains the necessary files for the Hook you want to register.\n\nFor information on generating a schema handler package, see [Modeling custom CloudFormation Hooks](https://docs.aws.amazon.com/cloudformation-cli/latest/hooks-userguide/hooks-model.html) in the *AWS CloudFormation Hooks User Guide* .\n\n> To register the Hook, you must have `s3:GetObject` permissions to access the S3 objects.", + "markdownDescription": "A URL to the Amazon S3 bucket for the Hook project package that contains the necessary files for the Hook you want to register.\n\nFor information on generating a schema handler package, see [Modeling custom CloudFormation Hooks](https://docs.aws.amazon.com/cloudformation-cli/latest/hooks-userguide/hooks-model.html) in the *AWS CloudFormation Hooks User Guide* .\n\n> To register the Hook, you must have `s3:GetObject` permissions to access the S3 objects.", "title": "SchemaHandlerPackage", "type": "string" }, @@ -34760,7 +34760,7 @@ "type": "string" }, "ModulePackage": { - "markdownDescription": "A URL to the S3 bucket containing the package that contains the template fragment and schema files for the module version to register.\n\nFor more information, see [Module structure and requirements](https://docs.aws.amazon.com/cloudformation-cli/latest/userguide/modules-structure.html) in the *AWS CloudFormation Command Line Interface (CLI) User Guide* .\n\n> To register the module version, you must have `s3:GetObject` permissions to access the S3 objects.", + "markdownDescription": "A URL to the S3 bucket for the package that contains the template fragment and schema files for the module version to register.\n\nFor more information, see [Module structure and requirements](https://docs.aws.amazon.com/cloudformation-cli/latest/userguide/modules-structure.html) in the *AWS CloudFormation Command Line Interface (CLI) User Guide* .\n\n> To register the module version, you must have `s3:GetObject` permissions to access the S3 objects.", "title": "ModulePackage", "type": "string" } @@ -35066,7 +35066,7 @@ "title": "LoggingConfig" }, "SchemaHandlerPackage": { - "markdownDescription": "A URL to the S3 bucket containing the resource project package that contains the necessary files for the resource you want to register.\n\nFor information 
on generating a schema handler package, see [Modeling resource types to use with AWS CloudFormation](https://docs.aws.amazon.com/cloudformation-cli/latest/userguide/resource-type-model.html) in the *AWS CloudFormation Command Line Interface (CLI) User Guide* .\n\n> To register the resource version, you must have `s3:GetObject` permissions to access the S3 objects.", + "markdownDescription": "A URL to the S3 bucket for the resource project package that contains the necessary files for the resource you want to register.\n\nFor information on generating a schema handler package, see [Modeling resource types to use with AWS CloudFormation](https://docs.aws.amazon.com/cloudformation-cli/latest/userguide/resource-type-model.html) in the *AWS CloudFormation Command Line Interface (CLI) User Guide* .\n\n> To register the resource version, you must have `s3:GetObject` permissions to access the S3 objects.", "title": "SchemaHandlerPackage", "type": "string" }, @@ -35182,7 +35182,7 @@ "type": "array" }, "TemplateURL": { - "markdownDescription": "The URL of a file containing the template body. The URL must point to a template (max size: 1 MB) that's located in an Amazon S3 bucket. The location for an Amazon S3 bucket must start with `https://` .\n\nWhether an update causes interruptions depends on the resources that are being updated. An update never causes a nested stack to be replaced.", + "markdownDescription": "The URL of a file that contains the template body. The URL must point to a template (max size: 1 MB) that's located in an Amazon S3 bucket. The location for an Amazon S3 bucket must start with `https://` .\n\nWhether an update causes interruptions depends on the resources that are being updated. An update never causes a nested stack to be replaced.", "title": "TemplateURL", "type": "string" }, @@ -35331,12 +35331,12 @@ "type": "array" }, "TemplateBody": { - "markdownDescription": "The structure that contains the template body, with a minimum length of 1 byte and a maximum length of 51,200 bytes.\n\nYou must include either `TemplateURL` or `TemplateBody` in a StackSet, but you can't use both. Dynamic references in the `TemplateBody` may not work correctly in all cases. It's recommended to pass templates containing dynamic references through `TemplateUrl` instead.", + "markdownDescription": "The structure that contains the template body, with a minimum length of 1 byte and a maximum length of 51,200 bytes.\n\nYou must include either `TemplateURL` or `TemplateBody` in a StackSet, but you can't use both. Dynamic references in the `TemplateBody` may not work correctly in all cases. It's recommended to pass templates that contain dynamic references through `TemplateUrl` instead.", "title": "TemplateBody", "type": "string" }, "TemplateURL": { - "markdownDescription": "The URL of a file containing the template body. The URL must point to a template (max size: 1 MB) that's located in an Amazon S3 bucket or a Systems Manager document. The location for an Amazon S3 bucket must start with `https://` .\n\nConditional: You must specify only one of the following parameters: `TemplateBody` , `TemplateURL` .", + "markdownDescription": "The URL of a file that contains the template body. The URL must point to a template (max size: 1 MB) that's located in an Amazon S3 bucket or a Systems Manager document. 
The location for an Amazon S3 bucket must start with `https://` .\n\nConditional: You must specify only one of the following parameters: `TemplateBody` , `TemplateURL` .", "title": "TemplateURL", "type": "string" } @@ -35431,12 +35431,12 @@ "additionalProperties": false, "properties": { "FailureToleranceCount": { - "markdownDescription": "The number of accounts, per Region, for which this operation can fail before CloudFormation stops the operation in that Region. If the operation is stopped in a Region, CloudFormation doesn't attempt the operation in any subsequent Regions.\n\nConditional: You must specify either `FailureToleranceCount` or `FailureTolerancePercentage` (but not both).", + "markdownDescription": "The number of accounts per Region this operation can fail in before CloudFormation stops the operation in that Region. If the operation is stopped in a Region, CloudFormation doesn't attempt the operation in any subsequent Regions.\n\nConditional: You must specify either `FailureToleranceCount` or `FailureTolerancePercentage` (but not both).", "title": "FailureToleranceCount", "type": "number" }, "FailureTolerancePercentage": { - "markdownDescription": "The percentage of accounts, per Region, for which this stack operation can fail before CloudFormation stops the operation in that Region. If the operation is stopped in a Region, CloudFormation doesn't attempt the operation in any subsequent Regions.\n\nWhen calculating the number of accounts based on the specified percentage, CloudFormation rounds *down* to the next whole number.\n\nConditional: You must specify either `FailureToleranceCount` or `FailureTolerancePercentage` , but not both.", + "markdownDescription": "The percentage of accounts per Region this stack operation can fail in before CloudFormation stops the operation in that Region. If the operation is stopped in a Region, CloudFormation doesn't attempt the operation in any subsequent Regions.\n\nWhen calculating the number of accounts based on the specified percentage, CloudFormation rounds *down* to the next whole number.\n\nConditional: You must specify either `FailureToleranceCount` or `FailureTolerancePercentage` , but not both.", "title": "FailureTolerancePercentage", "type": "number" }, @@ -35685,7 +35685,7 @@ "type": "number" }, "Handle": { - "markdownDescription": "A reference to the wait condition handle used to signal this wait condition. Use the `Ref` intrinsic function to specify an [AWS::CloudFormation::WaitConditionHandle](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-cloudformation-waitconditionhandle.html) resource.\n\nAnytime you add a `WaitCondition` resource during a stack update, you must associate the wait condition with a new WaitConditionHandle resource. Don't reuse an old wait condition handle that has already been defined in the template. If you reuse a wait condition handle, the wait condition might evaluate old signals from a previous create or update stack command.\n\nUpdates aren't supported.", + "markdownDescription": "A reference to the wait condition handle used to signal this wait condition. Use the `Ref` intrinsic function to specify an [AWS::CloudFormation::WaitConditionHandle](https://docs.aws.amazon.com/AWSCloudFormation/latest/TemplateReference/aws-resource-cloudformation-waitconditionhandle.html) resource.\n\nAnytime you add a `WaitCondition` resource during a stack update, you must associate the wait condition with a new WaitConditionHandle resource. 
Don't reuse an old wait condition handle that has already been defined in the template. If you reuse a wait condition handle, the wait condition might evaluate old signals from a previous create or update stack command.\n\nUpdates aren't supported.", "title": "Handle", "type": "string" }, @@ -36569,7 +36569,7 @@ "type": "number" }, "OriginKeepaliveTimeout": { - "markdownDescription": "Specifies how long, in seconds, CloudFront persists its connection to the origin. The minimum timeout is 1 second, the maximum is 60 seconds, and the default (if you don't specify otherwise) is 5 seconds.\n\nFor more information, see [Keep-alive timeout (custom origins only)](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DownloadDistValuesOrigin.html#DownloadDistValuesOriginKeepaliveTimeout) in the *Amazon CloudFront Developer Guide* .", + "markdownDescription": "Specifies how long, in seconds, CloudFront persists its connection to the origin. The minimum timeout is 1 second, the maximum is 120 seconds, and the default (if you don't specify otherwise) is 5 seconds.\n\nFor more information, see [Keep-alive timeout (custom origins only)](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DownloadDistValuesOrigin.html#DownloadDistValuesOriginKeepaliveTimeout) in the *Amazon CloudFront Developer Guide* .", "title": "OriginKeepaliveTimeout", "type": "number" }, @@ -36579,7 +36579,7 @@ "type": "string" }, "OriginReadTimeout": { - "markdownDescription": "Specifies how long, in seconds, CloudFront waits for a response from the origin. This is also known as the *origin response timeout* . The minimum timeout is 1 second, the maximum is 60 seconds, and the default (if you don't specify otherwise) is 30 seconds.\n\nFor more information, see [Response timeout (custom origins only)](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DownloadDistValuesOrigin.html#DownloadDistValuesOriginResponseTimeout) in the *Amazon CloudFront Developer Guide* .", + "markdownDescription": "Specifies how long, in seconds, CloudFront waits for a response from the origin. This is also known as the *origin response timeout* . The minimum timeout is 1 second, the maximum is 120 seconds, and the default (if you don't specify otherwise) is 30 seconds.\n\nFor more information, see [Response timeout (custom origins only)](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DownloadDistValuesOrigin.html#DownloadDistValuesOriginResponseTimeout) in the *Amazon CloudFront Developer Guide* .", "title": "OriginReadTimeout", "type": "number" }, @@ -83787,7 +83787,7 @@ }, "Options": { "additionalProperties": true, - "markdownDescription": "The configuration options to send to the log driver.\n\nThe options you can specify depend on the log driver. Some of the options you can specify when you use the `awslogs` log driver to route logs to Amazon CloudWatch include the following:\n\n- **awslogs-create-group** - Required: No\n\nSpecify whether you want the log group to be created automatically. If this option isn't specified, it defaults to `false` .\n\n> Your IAM policy must include the `logs:CreateLogGroup` permission before you attempt to use `awslogs-create-group` .\n- **awslogs-region** - Required: Yes\n\nSpecify the AWS Region that the `awslogs` log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. 
Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.\n- **awslogs-group** - Required: Yes\n\nMake sure to specify a log group that the `awslogs` log driver sends its log streams to.\n- **awslogs-stream-prefix** - Required: Yes, when using Fargate.Optional when using EC2.\n\nUse the `awslogs-stream-prefix` option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format `prefix-name/container-name/ecs-task-id` .\n\nIf you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.\n\nFor Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.\n\nYou must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console.\n- **awslogs-datetime-format** - Required: No\n\nThis option defines a multiline start pattern in Python `strftime` format. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. The matched line is the delimiter between log messages.\n\nOne example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.\n\nFor more information, see [awslogs-datetime-format](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-datetime-format) .\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.\n- **awslogs-multiline-pattern** - Required: No\n\nThis option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. The matched line is the delimiter between log messages.\n\nFor more information, see [awslogs-multiline-pattern](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-multiline-pattern) .\n\nThis option is ignored if `awslogs-datetime-format` is also configured.\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.\n\nThe following options apply to all supported log drivers.\n\n- **mode** - Required: No\n\nValid values: `non-blocking` | `blocking`\n\nThis option defines the delivery mode of log messages from the container to the log driver specified using `logDriver` . 
The delivery mode you choose affects application availability when the flow of logs from container is interrupted.\n\nIf you use the `blocking` mode and the flow of logs is interrupted, calls from container code to write to the `stdout` and `stderr` streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.\n\nIf you use the `non-blocking` mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the `max-buffer-size` option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see [Preventing log loss with non-blocking mode in the `awslogs` container log driver](https://docs.aws.amazon.com/containers/preventing-log-loss-with-non-blocking-mode-in-the-awslogs-container-log-driver/) .\n\nYou can set a default `mode` for all containers in a specific AWS Region by using the `defaultLogDriverMode` account setting. If you don't specify the `mode` option or configure the account setting, Amazon ECS will default to the `blocking` mode. For more information about the account setting, see [Default log driver mode](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-account-settings.html#default-log-driver-mode) in the *Amazon Elastic Container Service Developer Guide* .\n\n> On June 25, 2025, Amazon ECS is changing the default log driver mode from `blocking` to `non-blocking` to prioritize task availability over logging. To continue using the `blocking` mode after this change, do one of the following:\n> \n> - Set the `mode` option in your container definition's `logConfiguration` as `blocking` .\n> - Set the `defaultLogDriverMode` account setting to `blocking` .\n- **max-buffer-size** - Required: No\n\nDefault value: `1m`\n\nWhen `non-blocking` mode is used, the `max-buffer-size` log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.\n\nTo route logs using the `splunk` log router, you need to specify a `splunk-token` and a `splunk-url` .\n\nWhen you use the `awsfirelens` log router to route logs to an AWS Service or AWS Partner Network destination for log storage and analytics, you can set the `log-driver-buffer-limit` option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help to resolve potential log loss issue because high throughput might result in memory running out for the buffer inside of Docker.\n\nOther options you can specify when using `awsfirelens` to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the AWS Region with `region` and a name for the log stream with `delivery_stream` .\n\nWhen you export logs to Amazon Kinesis Data Streams, you can specify an AWS Region with `region` and a data stream name with `stream` .\n\nWhen you export logs to Amazon OpenSearch Service, you can specify options like `Name` , `Host` (OpenSearch Service endpoint without protocol), `Port` , `Index` , `Type` , `Aws_auth` , `Aws_region` , `Suppress_Type_Name` , and `tls` . 
For more information, see [Under the hood: FireLens for Amazon ECS Tasks](https://docs.aws.amazon.com/containers/under-the-hood-firelens-for-amazon-ecs-tasks/) .\n\nWhen you export logs to Amazon S3, you can specify the bucket using the `bucket` option. You can also specify `region` , `total_file_size` , `upload_timeout` , and `use_put_object` as options.\n\nThis parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version --format '{{.Server.APIVersion}}'`", + "markdownDescription": "The configuration options to send to the log driver.\n\nThe options you can specify depend on the log driver. Some of the options you can specify when you use the `awslogs` log driver to route logs to Amazon CloudWatch include the following:\n\n- **awslogs-create-group** - Required: No\n\nSpecify whether you want the log group to be created automatically. If this option isn't specified, it defaults to `false` .\n\n> Your IAM policy must include the `logs:CreateLogGroup` permission before you attempt to use `awslogs-create-group` .\n- **awslogs-region** - Required: Yes\n\nSpecify the AWS Region that the `awslogs` log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.\n- **awslogs-group** - Required: Yes\n\nMake sure to specify a log group that the `awslogs` log driver sends its log streams to.\n- **awslogs-stream-prefix** - Required: Yes, when using Fargate. Optional when using EC2.\n\nUse the `awslogs-stream-prefix` option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format `prefix-name/container-name/ecs-task-id` .\n\nIf you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.\n\nFor Amazon ECS services, you can use the service name as the prefix. Doing so lets you trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.\n\nYou must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console.\n- **awslogs-datetime-format** - Required: No\n\nThis option defines a multiline start pattern in Python `strftime` format. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. The matched line is the delimiter between log messages.\n\nOne example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. 
The correct pattern allows it to be captured in a single entry.\n\nFor more information, see [awslogs-datetime-format](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-datetime-format) .\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.\n- **awslogs-multiline-pattern** - Required: No\n\nThis option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. The matched line is the delimiter between log messages.\n\nFor more information, see [awslogs-multiline-pattern](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-multiline-pattern) .\n\nThis option is ignored if `awslogs-datetime-format` is also configured.\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.\n\nThe following options apply to all supported log drivers.\n\n- **mode** - Required: No\n\nValid values: `non-blocking` | `blocking`\n\nThis option defines the delivery mode of log messages from the container to the log driver specified using `logDriver` . The delivery mode you choose affects application availability when the flow of logs from the container is interrupted.\n\nIf you use the `blocking` mode and the flow of logs is interrupted, calls from container code to write to the `stdout` and `stderr` streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.\n\nIf you use the `non-blocking` mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the `max-buffer-size` option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see [Preventing log loss with non-blocking mode in the `awslogs` container log driver](https://docs.aws.amazon.com/containers/preventing-log-loss-with-non-blocking-mode-in-the-awslogs-container-log-driver/) .\n\nYou can set a default `mode` for all containers in a specific AWS Region by using the `defaultLogDriverMode` account setting. If you don't specify the `mode` option or configure the account setting, Amazon ECS will default to the `non-blocking` mode. For more information about the account setting, see [Default log driver mode](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-account-settings.html#default-log-driver-mode) in the *Amazon Elastic Container Service Developer Guide* .\n\n> On June 25, 2025, Amazon ECS changed the default log driver mode from `blocking` to `non-blocking` to prioritize task availability over logging. 
To continue using the `blocking` mode after this change, do one of the following:\n> \n> - Set the `mode` option in your container definition's `logConfiguration` as `blocking` .\n> - Set the `defaultLogDriverMode` account setting to `blocking` .\n- **max-buffer-size** - Required: No\n\nDefault value: `1m`\n\nWhen `non-blocking` mode is used, the `max-buffer-size` log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.\n\nTo route logs using the `splunk` log router, you need to specify a `splunk-token` and a `splunk-url` .\n\nWhen you use the `awsfirelens` log router to route logs to an AWS Service or AWS Partner Network destination for log storage and analytics, you can set the `log-driver-buffer-limit` option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help to resolve potential log loss issues because high throughput might result in memory running out for the buffer inside of Docker.\n\nOther options you can specify when using `awsfirelens` to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the AWS Region with `region` and a name for the log stream with `delivery_stream` .\n\nWhen you export logs to Amazon Kinesis Data Streams, you can specify an AWS Region with `region` and a data stream name with `stream` .\n\nWhen you export logs to Amazon OpenSearch Service, you can specify options like `Name` , `Host` (OpenSearch Service endpoint without protocol), `Port` , `Index` , `Type` , `Aws_auth` , `Aws_region` , `Suppress_Type_Name` , and `tls` . For more information, see [Under the hood: FireLens for Amazon ECS Tasks](https://docs.aws.amazon.com/containers/under-the-hood-firelens-for-amazon-ecs-tasks/) .\n\nWhen you export logs to Amazon S3, you can specify the bucket using the `bucket` option. You can also specify `region` , `total_file_size` , `upload_timeout` , and `use_put_object` as options.\n\nThis parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version --format '{{.Server.APIVersion}}'`", "patternProperties": { "^[a-zA-Z0-9]+$": { "type": "string" @@ -84194,8 +84194,6 @@ "items": { "$ref": "#/definitions/AWS::ECS::TaskDefinition.InferenceAccelerator" }, - "markdownDescription": "The Elastic Inference accelerators to use for the containers in the task.", - "title": "InferenceAccelerators", "type": "array" }, "IpcMode": { @@ -84850,13 +84848,9 @@ "additionalProperties": false, "properties": { "DeviceName": { - "markdownDescription": "The Elastic Inference accelerator device name. The `deviceName` must also be referenced in a container definition as a [ResourceRequirement](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ResourceRequirement.html) .", - "title": "DeviceName", "type": "string" }, "DeviceType": { - "markdownDescription": "The Elastic Inference accelerator type to use.", - "title": "DeviceType", "type": "string" } }, @@ -84957,7 +84951,7 @@ }, "Options": { "additionalProperties": true, - "markdownDescription": "The configuration options to send to the log driver.\n\nThe options you can specify depend on the log driver. 
Some of the options you can specify when you use the `awslogs` log driver to route logs to Amazon CloudWatch include the following:\n\n- **awslogs-create-group** - Required: No\n\nSpecify whether you want the log group to be created automatically. If this option isn't specified, it defaults to `false` .\n\n> Your IAM policy must include the `logs:CreateLogGroup` permission before you attempt to use `awslogs-create-group` .\n- **awslogs-region** - Required: Yes\n\nSpecify the AWS Region that the `awslogs` log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.\n- **awslogs-group** - Required: Yes\n\nMake sure to specify a log group that the `awslogs` log driver sends its log streams to.\n- **awslogs-stream-prefix** - Required: Yes, when using Fargate.Optional when using EC2.\n\nUse the `awslogs-stream-prefix` option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format `prefix-name/container-name/ecs-task-id` .\n\nIf you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.\n\nFor Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.\n\nYou must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console.\n- **awslogs-datetime-format** - Required: No\n\nThis option defines a multiline start pattern in Python `strftime` format. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. The matched line is the delimiter between log messages.\n\nOne example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.\n\nFor more information, see [awslogs-datetime-format](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-datetime-format) .\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.\n- **awslogs-multiline-pattern** - Required: No\n\nThis option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. 
The matched line is the delimiter between log messages.\n\nFor more information, see [awslogs-multiline-pattern](https://docs.aws.amazon.com/https://docs.docker.com/config/containers/logging/awslogs/#awslogs-multiline-pattern) .\n\nThis option is ignored if `awslogs-datetime-format` is also configured.\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.\n\nThe following options apply to all supported log drivers.\n\n- **mode** - Required: No\n\nValid values: `non-blocking` | `blocking`\n\nThis option defines the delivery mode of log messages from the container to the log driver specified using `logDriver` . The delivery mode you choose affects application availability when the flow of logs from container is interrupted.\n\nIf you use the `blocking` mode and the flow of logs is interrupted, calls from container code to write to the `stdout` and `stderr` streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.\n\nIf you use the `non-blocking` mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the `max-buffer-size` option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see [Preventing log loss with non-blocking mode in the `awslogs` container log driver](https://docs.aws.amazon.com/containers/preventing-log-loss-with-non-blocking-mode-in-the-awslogs-container-log-driver/) .\n\nYou can set a default `mode` for all containers in a specific AWS Region by using the `defaultLogDriverMode` account setting. If you don't specify the `mode` option or configure the account setting, Amazon ECS will default to the `blocking` mode. For more information about the account setting, see [Default log driver mode](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-account-settings.html#default-log-driver-mode) in the *Amazon Elastic Container Service Developer Guide* .\n\n> On June 25, 2025, Amazon ECS is changing the default log driver mode from `blocking` to `non-blocking` to prioritize task availability over logging. To continue using the `blocking` mode after this change, do one of the following:\n> \n> - Set the `mode` option in your container definition's `logConfiguration` as `blocking` .\n> - Set the `defaultLogDriverMode` account setting to `blocking` .\n- **max-buffer-size** - Required: No\n\nDefault value: `1m`\n\nWhen `non-blocking` mode is used, the `max-buffer-size` log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.\n\nTo route logs using the `splunk` log router, you need to specify a `splunk-token` and a `splunk-url` .\n\nWhen you use the `awsfirelens` log router to route logs to an AWS Service or AWS Partner Network destination for log storage and analytics, you can set the `log-driver-buffer-limit` option to limit the number of events that are buffered in memory, before being sent to the log router container. 
It can help to resolve potential log loss issue because high throughput might result in memory running out for the buffer inside of Docker.\n\nOther options you can specify when using `awsfirelens` to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the AWS Region with `region` and a name for the log stream with `delivery_stream` .\n\nWhen you export logs to Amazon Kinesis Data Streams, you can specify an AWS Region with `region` and a data stream name with `stream` .\n\nWhen you export logs to Amazon OpenSearch Service, you can specify options like `Name` , `Host` (OpenSearch Service endpoint without protocol), `Port` , `Index` , `Type` , `Aws_auth` , `Aws_region` , `Suppress_Type_Name` , and `tls` . For more information, see [Under the hood: FireLens for Amazon ECS Tasks](https://docs.aws.amazon.com/containers/under-the-hood-firelens-for-amazon-ecs-tasks/) .\n\nWhen you export logs to Amazon S3, you can specify the bucket using the `bucket` option. You can also specify `region` , `total_file_size` , `upload_timeout` , and `use_put_object` as options.\n\nThis parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version --format '{{.Server.APIVersion}}'`", + "markdownDescription": "The configuration options to send to the log driver.\n\nThe options you can specify depend on the log driver. Some of the options you can specify when you use the `awslogs` log driver to route logs to Amazon CloudWatch include the following:\n\n- **awslogs-create-group** - Required: No\n\nSpecify whether you want the log group to be created automatically. If this option isn't specified, it defaults to `false` .\n\n> Your IAM policy must include the `logs:CreateLogGroup` permission before you attempt to use `awslogs-create-group` .\n- **awslogs-region** - Required: Yes\n\nSpecify the AWS Region that the `awslogs` log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single Region in CloudWatch Logs. This is so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.\n- **awslogs-group** - Required: Yes\n\nMake sure to specify a log group that the `awslogs` log driver sends its log streams to.\n- **awslogs-stream-prefix** - Required: Yes, when using Fargate. Optional when using EC2.\n\nUse the `awslogs-stream-prefix` option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format `prefix-name/container-name/ecs-task-id` .\n\nIf you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.\n\nFor Amazon ECS services, you can use the service name as the prefix. 
By doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.\n\nYou must specify a stream prefix for your logs to appear in the Log pane when using the Amazon ECS console.\n- **awslogs-datetime-format** - Required: No\n\nThis option defines a multiline start pattern in Python `strftime` format. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. The matched line is the delimiter between log messages.\n\nOne example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.\n\nFor more information, see [awslogs-datetime-format](https://docs.docker.com/config/containers/logging/awslogs/#awslogs-datetime-format) .\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.\n- **awslogs-multiline-pattern** - Required: No\n\nThis option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don\u2019t match the pattern. The matched line is the delimiter between log messages.\n\nFor more information, see [awslogs-multiline-pattern](https://docs.docker.com/config/containers/logging/awslogs/#awslogs-multiline-pattern) .\n\nThis option is ignored if `awslogs-datetime-format` is also configured.\n\nYou cannot configure both the `awslogs-datetime-format` and `awslogs-multiline-pattern` options.\n\n> Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.\n\nThe following options apply to all supported log drivers.\n\n- **mode** - Required: No\n\nValid values: `non-blocking` | `blocking`\n\nThis option defines the delivery mode of log messages from the container to the log driver specified using `logDriver` . The delivery mode you choose affects application availability when the flow of logs from the container is interrupted.\n\nIf you use the `blocking` mode and the flow of logs is interrupted, calls from container code to write to the `stdout` and `stderr` streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container health check failure.\n\nIf you use the `non-blocking` mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the `max-buffer-size` option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see [Preventing log loss with non-blocking mode in the `awslogs` container log driver](https://docs.aws.amazon.com/containers/preventing-log-loss-with-non-blocking-mode-in-the-awslogs-container-log-driver/) .\n\nYou can set a default `mode` for all containers in a specific AWS Region by using the `defaultLogDriverMode` account setting. 
If you don't specify the `mode` option or configure the account setting, Amazon ECS will default to the `non-blocking` mode. For more information about the account setting, see [Default log driver mode](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-account-settings.html#default-log-driver-mode) in the *Amazon Elastic Container Service Developer Guide* .\n\n> On June 25, 2025, Amazon ECS changed the default log driver mode from `blocking` to `non-blocking` to prioritize task availability over logging. To continue using the `blocking` mode after this change, do one of the following:\n> \n> - Set the `mode` option in your container definition's `logConfiguration` as `blocking` .\n> - Set the `defaultLogDriverMode` account setting to `blocking` .\n- **max-buffer-size** - Required: No\n\nDefault value: `1m`\n\nWhen `non-blocking` mode is used, the `max-buffer-size` log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.\n\nTo route logs using the `splunk` log router, you need to specify a `splunk-token` and a `splunk-url` .\n\nWhen you use the `awsfirelens` log router to route logs to an AWS service or AWS Partner Network destination for log storage and analytics, you can set the `log-driver-buffer-limit` option to limit the number of events that are buffered in memory before being sent to the log router container. This can help resolve potential log loss issues, because high throughput might otherwise cause the buffer inside Docker to run out of memory.\n\nOther options you can specify when using `awsfirelens` to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the AWS Region with `region` and a name for the log stream with `delivery_stream` .\n\nWhen you export logs to Amazon Kinesis Data Streams, you can specify an AWS Region with `region` and a data stream name with `stream` .\n\nWhen you export logs to Amazon OpenSearch Service, you can specify options like `Name` , `Host` (OpenSearch Service endpoint without protocol), `Port` , `Index` , `Type` , `Aws_auth` , `Aws_region` , `Suppress_Type_Name` , and `tls` . For more information, see [Under the hood: FireLens for Amazon ECS Tasks](https://docs.aws.amazon.com/containers/under-the-hood-firelens-for-amazon-ecs-tasks/) .\n\nWhen you export logs to Amazon S3, you can specify the bucket using the `bucket` option. You can also specify `region` , `total_file_size` , `upload_timeout` , and `use_put_object` as options.\n\nThis parameter requires version 1.19 of the Docker Remote API or greater on your container instance. 
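\n\nAs a minimal, hypothetical sketch (the log group name, stream prefix, and buffer size are illustrative values, not defaults defined by this schema), an `options` map for the `awslogs` driver in `non-blocking` mode might look like: `{ \"awslogs-group\": \"/ecs/my-app\", \"awslogs-region\": \"us-east-1\", \"awslogs-stream-prefix\": \"my-app\", \"mode\": \"non-blocking\", \"max-buffer-size\": \"25m\" }` .\n\n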
To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: `sudo docker version --format '{{.Server.APIVersion}}'`", "patternProperties": { "^[a-zA-Z0-9]+$": { "type": "string" @@ -166168,8 +166162,6 @@ "type": "string" }, "DBSnapshotIdentifier": { - "markdownDescription": "This parameter is not supported.\n\n`AWS::Neptune::DBInstance` does not support restoring from snapshots.\n\n`AWS::Neptune::DBCluster` does support restoring from snapshots.", - "title": "DBSnapshotIdentifier", "type": "string" }, "DBSubnetGroupName": { @@ -234268,7 +234260,7 @@ "additionalProperties": false, "properties": { "Direction": { - "markdownDescription": "Indicates whether the Resolver endpoint allows inbound or outbound DNS queries:\n\n- `INBOUND` : allows DNS queries to your VPC from your network\n- `OUTBOUND` : allows DNS queries from your VPC to your network", + "markdownDescription": "Indicates whether the Resolver endpoint allows inbound or outbound DNS queries:\n\n- `INBOUND` : allows DNS queries to your VPC from your network\n- `OUTBOUND` : allows DNS queries from your VPC to your network\n- `INBOUND_DELEGATION` : Resolver delegates queries to Route 53 private hosted zones from your network.", "title": "Direction", "type": "string" }, @@ -234299,7 +234291,7 @@ "items": { "type": "string" }, - "markdownDescription": "Protocols used for the endpoint. DoH-FIPS is applicable for inbound endpoints only.\n\nFor an inbound endpoint you can apply the protocols as follows:\n\n- Do53 and DoH in combination.\n- Do53 and DoH-FIPS in combination.\n- Do53 alone.\n- DoH alone.\n- DoH-FIPS alone.\n- None, which is treated as Do53.\n\nFor an outbound endpoint you can apply the protocols as follows:\n\n- Do53 and DoH in combination.\n- Do53 alone.\n- DoH alone.\n- None, which is treated as Do53.", + "markdownDescription": "Protocols used for the endpoint. DoH-FIPS is applicable for default inbound endpoints only.\n\nFor an inbound endpoint you can apply the protocols as follows:\n\n- Do53 and DoH in combination.\n- Do53 and DoH-FIPS in combination.\n- Do53 alone.\n- DoH alone.\n- DoH-FIPS alone.\n- None, which is treated as Do53.\n\nFor a delegation inbound endpoint you can use Do53 only.\n\nFor an outbound endpoint you can apply the protocols as follows:\n\n- Do53 and DoH in combination.\n- Do53 alone.\n- DoH alone.\n- None, which is treated as Do53.", "title": "Protocols", "type": "array" }, @@ -234564,7 +234556,7 @@ "type": "string" }, "RuleType": { - "markdownDescription": "When you want to forward DNS queries for specified domain name to resolvers on your network, specify `FORWARD` .\n\nWhen you have a forwarding rule to forward DNS queries for a domain to your network and you want Resolver to process queries for a subdomain of that domain, specify `SYSTEM` .\n\nFor example, to forward DNS queries for example.com to resolvers on your network, you create a rule and specify `FORWARD` for `RuleType` . To then have Resolver process queries for apex.example.com, you create a rule and specify `SYSTEM` for `RuleType` .\n\nCurrently, only Resolver can create rules that have a value of `RECURSIVE` for `RuleType` .", + "markdownDescription": "When you want to forward DNS queries for a specified domain name to resolvers on your network, specify `FORWARD` or `DELEGATE` . 
If a query matches multiple Resolver rules (example.com and www.example.com), outbound DNS queries are routed using the Resolver rule that contains the most specific domain name (www.example.com).\n\nWhen you have a forwarding rule to forward DNS queries for a domain to your network and you want Resolver to process queries for a subdomain of that domain, specify `SYSTEM` .\n\nFor example, to forward DNS queries for example.com to resolvers on your network, you create a rule and specify `FORWARD` for `RuleType` . To then have Resolver process queries for apex.example.com, you create a rule and specify `SYSTEM` for `RuleType` .\n\nCurrently, only Resolver can create rules that have a value of `RECURSIVE` for `RuleType` .", "title": "RuleType", "type": "string" }, @@ -236130,7 +236122,7 @@ "title": "ReplicationTime" }, "StorageClass": { - "markdownDescription": "The storage class to use when replicating objects, such as S3 Standard or reduced redundancy. By default, Amazon S3 uses the storage class of the source object to create the object replica.\n\nFor valid values, see the `StorageClass` element of the [PUT Bucket replication](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTreplication.html) action in the *Amazon S3 API Reference* .", + "markdownDescription": "The storage class to use when replicating objects, such as S3 Standard or reduced redundancy. By default, Amazon S3 uses the storage class of the source object to create the object replica.\n\nFor valid values, see the `StorageClass` element of the [PUT Bucket replication](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTreplication.html) action in the *Amazon S3 API Reference* .\n\n`FSX_OPENZFS` is not an accepted value when replicating objects.", "title": "StorageClass", "type": "string" } @@ -265013,7 +265005,7 @@ }, "DnsEntry": { "$ref": "#/definitions/AWS::VpcLattice::Service.DnsEntry", - "markdownDescription": "The DNS information of the service.", + "markdownDescription": "Describes the DNS information of the service. This field is read-only.", "title": "DnsEntry" }, "Name": {