Feat/max error rate #171
base: main
Conversation
This reverts commit 6059af1.
📦 Build Artifacts Available
I've changed the logic as suggested to have a single termination method, which is a `multiprocessing.Event`. After implementing the code like you suggested above, I ran into an issue with … Another super weird thing that happened is the raising of …, specifically … Also, I added the context of the last X requests as suggested, but I feel that for the scenario where we do know the exact amount, i.e. constant rate + fixed duration, it is best to look at the entire run.
@@ -4,40 +4,40 @@ repos:
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
  - repo: https://github.com/astral-sh/ruff-pre-commit
We'll need these re-enabled. If you ever need to push and skip pre-commit, you can pass the `--no-verify` flag with your `git commit` command.
Also, @sjmonson just landed a diff that might clear up some of the issues you were seeing.
@@ -147,6 +147,8 @@ The `guidellm benchmark` command is used to run benchmarks against a generative

- `--max-requests`: Sets the maximum number of requests for each benchmark run. If not provided, the benchmark will run until `--max-seconds` is reached or the dataset is exhausted.

- `--max-error-rate`: The maximum error rate after which a benchmark will stop. Applicable only for finite deterministic scenarios i.e `rate_type` is `constant` and `--max-seconds` exists OR `--max-requests` exists OR the dataset is finite. If `--max-error-rate` is `None` or not applicable, benchmarks will continue regardless of error rate.
Suggested change (replacing the `--max-error-rate` entry above):

- `--max-error`: The maximum error rate after which a benchmark will stop. Can either be a rate, i.e. 0 < rate < 1, or a constant number. If a rate is given and `rate_type` is `constant` and `max_seconds` exists, then the rate will be calculated against the total expected request count, i.e. rate * duration. If a rate is given and the number of requests is not pre-determined, then a context window of the last requests will be looked at. The context window size is configurable under `GUIDELLM__ERROR_CHECK_WINDOW_SIZE`. If a number above 1 is given, then we just count the total number of errors and check whether it's above the threshold.
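To make the proposed semantics concrete, here is a minimal sketch of the stopping rule described above. The names (`should_stop`, `expected_total`, `recent_statuses`) are illustrative, not the actual guidellm implementation:

```python
from collections import deque
from typing import Optional


def should_stop(
    max_error: float,
    errored_total: int,
    expected_total: Optional[int],
    recent_statuses: deque,  # outcomes of recent requests, maxlen = window size
) -> bool:
    """Sketch of the --max-error stopping rule described above."""
    if max_error > 1:
        # Absolute threshold: stop once the total error count reaches it.
        return errored_total >= max_error
    if expected_total is not None:
        # Deterministic run (e.g. constant rate * fixed duration, or a
        # finite dataset): compare against the full expected request count.
        return errored_total / expected_total > max_error
    # Otherwise fall back to a sliding window of the most recent requests
    # (size configurable via GUIDELLM__ERROR_CHECK_WINDOW_SIZE).
    if recent_statuses.maxlen is None or len(recent_statuses) < recent_statuses.maxlen:
        return False  # not enough data in the window yet
    errors = sum(1 for status in recent_statuses if status == "errored")
    return errors / len(recent_statuses) > max_error
```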
@@ -46,12 +50,15 @@ class SchedulerRunInfo(StandardBaseModel):
    end_number: float
    processes: int
    strategy: SchedulingStrategy
    last_requests_statuses: deque[RequestStatus]
I think we can simplify the logic here so we don't need to add this in, especially since this list can become very long and it is not JSON serializable, so it would break any pydantic serialization in the event someone wants to save that state. See my note further down where that logic is located.
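As an illustrative sketch of one such simplification (not the actual change): keep a rolling window and running count outside the serialized model, rather than storing every status on `SchedulerRunInfo`:

```python
from collections import deque


class ErrorWindow:
    """Rolling error tracker held outside the pydantic run-info model."""

    def __init__(self, window_size: int):
        self._errored_flags: deque = deque(maxlen=window_size)
        self.errored_count = 0

    def record(self, errored: bool) -> None:
        if len(self._errored_flags) == self._errored_flags.maxlen:
            # The oldest outcome falls out of the window; adjust the count.
            self.errored_count -= int(self._errored_flags[0])
        self._errored_flags.append(errored)
        self.errored_count += int(errored)

    @property
    def error_rate(self) -> float:
        return (
            self.errored_count / len(self._errored_flags)
            if self._errored_flags
            else 0.0
        )
```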
"RequestLoader", | ||
"RequestLoaderDescription", | ||
] | ||
|
||
|
||
class GetInfiniteDatasetLengthError(Exception): |
I think this exception and its logic are no longer needed: if we can get the length, then we go through a specific logic route, and if we can't, then we fall back on the window.
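In other words, something like this sketch (illustrative helper name) instead of raising `GetInfiniteDatasetLengthError`:

```python
from typing import Optional


def resolve_expected_total(request_loader) -> Optional[int]:
    """Return the dataset length if it is knowable, else None.

    A finite length routes the error check through the whole-run logic;
    None routes it through the sliding-window fallback.
    """
    try:
        return len(request_loader)
    except TypeError:  # loader has no __len__, i.e. infinite/unsized
        return None
```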
if max_duration is not None and max_duration < 0:
    raise ValueError(f"Invalid max_duration: {max_duration}")
self._validate_scheduler_params(
    scheduling_strategy, max_duration, max_error, max_number
NIT: could we keep the arg order the same as in the surrounding function? It makes it a bit easier to track, especially if additions come in later.
)
asyncio.create_task(self.send_result(results_queue, result))
asyncio.create_task(self.send_result(results_queue, request_scheduled_result))
Good catch on the potential threading issues with these. Rather than polluting the local namespace with a bunch of vars that aren't used later, though, could we construct the `WorkerProcessResult` inline and pass it directly to this call?
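A self-contained sketch of that shape, with simplified stand-ins for the real `WorkerProcessResult` and `send_result`:

```python
import asyncio
from dataclasses import dataclass
from typing import Optional


@dataclass
class WorkerProcessResult:  # simplified stand-in for the real model
    type_: str
    payload: Optional[str] = None


async def send_result(queue: asyncio.Queue, result: WorkerProcessResult) -> None:
    await queue.put(result)


async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    # Construct the result inline at the call site instead of binding it
    # to a local variable that is never used again.
    task = asyncio.create_task(
        send_result(queue, WorkerProcessResult(type_="request_scheduled"))
    )
    await task
    print(await queue.get())


asyncio.run(main())
```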
    process_request := await self.get_request(requests_queue)
) is not None:
    dequeued_time = time.time()
    # We are using a separate internal event
See my note on the fixes for `get_request` so that, ideally, we don't hit this race condition and can rely on the shutdown event here.
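One way `get_request` could observe the shutdown event directly (a sketch under assumed names, not the actual fix):

```python
import asyncio
from typing import Any, Optional


async def get_request(
    requests_queue: asyncio.Queue,
    shutdown_event: asyncio.Event,
    poll_interval: float = 0.1,
) -> Optional[Any]:
    """Return the next request, or None once shutdown is requested."""
    while not shutdown_event.is_set():
        try:
            # Wake up periodically so the shutdown event is re-checked
            # even when the queue stays empty.
            return await asyncio.wait_for(requests_queue.get(), timeout=poll_interval)
        except asyncio.TimeoutError:
            continue
    return None
```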
process_id: int,
max_concurrency: Optional[int] = None,
Can we change this out so it isn't optional? I think we had a change either pushed in or queued for that, and that way we don't need to validate it for async. I believe the changes are in so that for sync, it still returns a `max_concurrency` there as well.
internal_shutdown_event.set()
try:  # noqa: SIM105
    await task
except asyncio.CancelledError:
Since we are raising a different error in the shutdown task, there could be a race condition here where that error occurs and shutdown is set before we call `await`, causing an uncaught exception here.
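i.e., catching both outcomes when draining the task would close that race. A sketch, where `ShutdownSignalError` is a hypothetical name for the error raised by the shutdown task:

```python
import asyncio


class ShutdownSignalError(Exception):
    """Hypothetical stand-in for the error raised by the shutdown task."""


async def drain(task: asyncio.Task) -> None:
    # The task may have been cancelled, or it may have observed the
    # shutdown event first and raised its own error; handle both so
    # neither ordering leaves an uncaught exception.
    try:
        await task
    except (asyncio.CancelledError, ShutdownSignalError):
        pass
```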
while not shutdown_event.is_set():  # noqa: ASYNC110
    await asyncio.sleep(shutdown_poll_interval)

# Raising asyncio.CancelledError instead would
I think this is the correct approach, where we raise a specific error signaling this, which we can track and catch. Can we remove the comments here, since this is pretty self-explanatory based on the name of the exception?
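For reference, the comment-free shape might look like this (hypothetical exception name; the event is assumed to be a `multiprocessing.Event`, hence the polling):

```python
import asyncio


class WorkerShutdownError(Exception):
    """Raised when the cross-process shutdown event fires."""


async def watch_shutdown(shutdown_event, shutdown_poll_interval: float) -> None:
    # shutdown_event is assumed to be a multiprocessing.Event, which
    # cannot be awaited, so it is polled instead.
    while not shutdown_event.is_set():  # noqa: ASYNC110
        await asyncio.sleep(shutdown_poll_interval)
    raise WorkerShutdownError
```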
Addressing this issue: #105

- A `--max-error-rate` flag was added. It is only applicable in either of the following scenarios:
  - `max_requests` is set
  - `rate_type` is `constant` and `max_duration` is set

  (it is not applicable for `sweep`, for example)
- A `multiprocessing.Event` is used to trigger a shutdown once we hit `max_error_rate`.
- `max_error_rate` was added to the `report`.
- `error_rate` in the final report is calculated as `total errored / total requests finalized`, i.e. we exclude incomplete requests.
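As a sketch of that final-report calculation (illustrative names; incomplete requests never enter the denominator):

```python
def final_error_rate(total_errored: int, total_completed: int) -> float:
    # Finalized = completed + errored; incomplete requests are excluded.
    finalized = total_completed + total_errored
    return total_errored / finalized if finalized else 0.0
```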