
FEAT: speed up the scoring process #805

Open · wants to merge 1 commit into main

Conversation

ayeganov (Contributor)

Improve Scorer Performance with Concurrent Execution

Changes

  • Refactored scorer execution to run concurrently using asyncio.gather
  • Replaced sequential scorer execution with parallel processing
  • Maintained existing validation and response handling logic

Description

The previous implementation ran scorers sequentially, which becomes a performance bottleneck when multiple scorers are configured. By leveraging asyncio.gather, all scorers now execute concurrently, which can significantly reduce total scoring time, especially in scenarios with multiple scorers or large response batches.
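For illustration, here is a minimal, self-contained sketch of the before/after pattern this describes. The method name and keyword arguments come from the diff shown below; the wrapper functions and scorer list are assumptions made for the sake of a runnable example, not the repository's actual code.

```python
import asyncio

# Illustrative sketch only: the method name and keyword arguments come from
# the PR diff; the wrapper functions and scorer list are assumptions.

async def score_sequentially(scorers, response_pieces, batch_size):
    # Before: each scorer runs only after the previous one has finished.
    for scorer in scorers:
        await scorer.score_responses_inferring_tasks_batch_async(
            request_responses=response_pieces, batch_size=batch_size
        )

async def score_concurrently(scorers, response_pieces, batch_size):
    # After: all scorers are started at once and awaited together.
    scoring_tasks = [
        scorer.score_responses_inferring_tasks_batch_async(
            request_responses=response_pieces, batch_size=batch_size
        )
        for scorer in scorers
    ]
    await asyncio.gather(*scoring_tasks)
```

Note that asyncio.gather only helps when the awaited calls are I/O-bound; CPU-bound scoring would still serialize on the event loop.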

Tests and Documentation

No breaking changes - this is a pure performance optimization that maintains the existing API contract.

The review comment below is anchored on this diff hunk (truncated in the PR view):

```diff
- await scorer.score_responses_inferring_tasks_batch_async(
-     request_responses=response_pieces, batch_size=self._batch_size
+ scoring_tasks = [
+     scorer.score_responses_inferring_tasks_batch_async(
```
Review comment (Contributor):

  1. I love how you're thinking!
  2. We probably don't want this change, though. We are already batching these: score_responses_inferring_tasks_batch_async parallelizes the requests in a similar way to what you do here (a rough sketch of that pattern follows this list).
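For context, this is roughly what batched parallelization inside such a method can look like. This is an assumption about the shape of the logic (including the hypothetical score_async call), not PyRIT's actual implementation:

```python
import asyncio

# Assumption: a rough sketch of how a batched scoring method can
# parallelize work internally; not PyRIT's actual implementation.

async def score_in_batches(scorer, request_responses, batch_size):
    results = []
    for start in range(0, len(request_responses), batch_size):
        chunk = request_responses[start : start + batch_size]
        # Score up to batch_size responses concurrently, then move on
        # to the next chunk.
        chunk_results = await asyncio.gather(
            *(scorer.score_async(piece) for piece in chunk)
        )
        results.extend(chunk_results)
    return results
```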

@romanlutz changed the title from "feat: speed up the scoring process" to "FEAT: speed up the scoring process" on Mar 18, 2025
@rlundeen2 (Contributor) left a comment:

Synced offline! Basically, the change we want is to swap out the init parameters so we have separate batch sizes: scorer_batch_size and objective_target_batch_size.

Conversation from Discord below:

ayeganov:
I think the PR could be changed to a much simpler version. I really want to separate the batch size for sending to the target from the batch size for sending to the scorers. My current targets don't support batching, hence I'd like to speed up the scoring process.

rlundeen (Today at 10:13 AM):
One question:

Today, batch_size is used for both the attacker infra AND the objective target.

But to me it makes sense to separate these. There are a lot of cases where the objective target is slow and you can't parallelize, but as a user you have control over the adversarial infrastructure, so you may want different parallelization.

If we separate these, your orchestrator could be parallelized using scorer_batch_size and objective_target_batch_size, which could be two different values.

Would something like that solve your issue? It would still go scorer by scorer, but you would no longer be constrained to a batch size of 1 for scorer_batch_size.

ayeganov:
I think I just repeated your suggestion 🙂
But yes, I think separating those out for PromptSendingOrchestrator is a great idea.
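A minimal sketch of what that constructor split could look like. The parameter names scorer_batch_size and objective_target_batch_size come from the discussion above; the class body and defaults are hypothetical:

```python
# Hypothetical constructor showing the proposed split; only the parameter
# names scorer_batch_size and objective_target_batch_size come from the
# discussion above.

class PromptSendingOrchestrator:
    def __init__(
        self,
        objective_target,
        scorers=None,
        objective_target_batch_size: int = 10,
        scorer_batch_size: int = 10,
    ):
        # Parallelism for prompts sent to the objective target, which may
        # be slow or unable to batch at all.
        self._objective_target_batch_size = objective_target_batch_size
        # Parallelism for scoring, which runs against infrastructure the
        # user controls and can usually parallelize more aggressively.
        self._scorer_batch_size = scorer_batch_size
        self._objective_target = objective_target
        self._scorers = scorers or []
```

Keeping the two knobs independent lets a user run the objective target at a batch size of 1 while still scoring in parallel.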
