From 8aeedb1121e8bb828d9c514c52f339f879d70c2e Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Istv=C3=A1n=20Zolt=C3=A1n=20Szab=C3=B3?=
Date: Thu, 17 Oct 2024 12:58:07 +0200
Subject: [PATCH] Fixes typo in autoscaling docs (#2859)

* Improves trained model autoscaling docs.

* Fixes typo in autoscaling docs.

(cherry picked from commit 6f34f30ee36a92f2dd942e4ddfdd92af0ffbd8fb)
---
 docs/en/stack/ml/nlp/ml-nlp-autoscaling.asciidoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/en/stack/ml/nlp/ml-nlp-autoscaling.asciidoc b/docs/en/stack/ml/nlp/ml-nlp-autoscaling.asciidoc
index 1ce0c17b8..0906621e7 100644
--- a/docs/en/stack/ml/nlp/ml-nlp-autoscaling.asciidoc
+++ b/docs/en/stack/ml/nlp/ml-nlp-autoscaling.asciidoc
@@ -25,7 +25,7 @@ This can help you to manage performance and cost more easily.
 When adaptive allocations are enabled, the number of allocations of the model is set automatically based on the current load.
 When the load is high, a new model allocation is automatically created.
 When the load is low, a model allocation is automatically removed.
-You must explicitely set the minimum and maximum number of allocations; autoscaling will occur within these limits.
+You can explicitly set the minimum and maximum number of allocations; autoscaling will occur within these limits.
 You can enable adaptive allocations by using:
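
For reviewers: the sentence the hunk edits refers to the `adaptive_allocations` object of the trained model deployment APIs. A minimal sketch of the kind of request the surrounding documentation describes (the model ID and allocation counts below are illustrative, not taken from the patched page):

```console
POST _ml/trained_models/my_model/deployment/_start
{
  "adaptive_allocations": {
    "enabled": true,
    "min_number_of_allocations": 1,
    "max_number_of_allocations": 4
  }
}
```

With `enabled: true`, the number of allocations scales with load, staying within the optional `min_number_of_allocations`/`max_number_of_allocations` bounds, which matches the "autoscaling will occur within these limits" wording in the hunk.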