diff --git a/docs/en/stack/ml/nlp/ml-nlp-autoscaling.asciidoc b/docs/en/stack/ml/nlp/ml-nlp-autoscaling.asciidoc
index 1ce0c17b8..0906621e7 100644
--- a/docs/en/stack/ml/nlp/ml-nlp-autoscaling.asciidoc
+++ b/docs/en/stack/ml/nlp/ml-nlp-autoscaling.asciidoc
@@ -25,7 +25,7 @@ This can help you to manage performance and cost more easily.
 When adaptive allocations are enabled, the number of allocations of the model is set automatically based on the current load.
 When the load is high, a new model allocation is automatically created.
 When the load is low, a model allocation is automatically removed.
-You must explicitely set the minimum and maximum number of allocations; autoscaling will occur within these limits.
+You can explicitly set the minimum and maximum number of allocations; autoscaling will occur within these limits.
 You can enable adaptive allocations by using:
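
The wording change above ("must" to "can") reflects that the min/max bounds are optional. As a hedged illustration of what the docs describe, a start-deployment request with adaptive allocations might look like the following Console sketch (the `my_model` ID and the bound values 1 and 4 are placeholders, not from the source):

```
POST _ml/trained_models/my_model/deployment/_start
{
  "adaptive_allocations": {
    "enabled": true,
    "min_number_of_allocations": 1,
    "max_number_of_allocations": 4
  }
}
```

With `enabled: true` alone, allocations scale with load; the optional `min_number_of_allocations` and `max_number_of_allocations` fields, when set, bound that autoscaling.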