
Implemented token bucket rate limiter #1756

Open · wants to merge 1 commit into trunk

Conversation

@RostislavPorohnya RostislavPorohnya commented May 29, 2024

Implemented a query rate limiter, which uses the token bucket algorithm, in the Session struct. Added RateLimiterConfig to the Cluster struct. Closes #1731.

The rate limiter is similar to the REQUEST_THROTTLER_CLASS feature implemented in the Java driver.

This PR adds functionality to configure a rate limiter, which allows a Cassandra cluster to sustain heavy loads from client code.
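For readers unfamiliar with the algorithm, a minimal sketch of a token bucket is shown below. All names here are illustrative only and do not correspond to the PR's actual API:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// tokenBucket is a minimal illustration of the token bucket algorithm:
// tokens refill at a fixed rate up to a capacity, and each query consumes one.
type tokenBucket struct {
	mu       sync.Mutex
	tokens   float64
	capacity float64
	rate     float64 // tokens added per second
	last     time.Time
}

func newTokenBucket(capacity, ratePerSec float64) *tokenBucket {
	return &tokenBucket{tokens: capacity, capacity: capacity, rate: ratePerSec, last: time.Now()}
}

// allow refills the bucket based on elapsed time and consumes one token if available.
func (b *tokenBucket) allow() bool {
	b.mu.Lock()
	defer b.mu.Unlock()
	now := time.Now()
	b.tokens += now.Sub(b.last).Seconds() * b.rate
	if b.tokens > b.capacity {
		b.tokens = b.capacity
	}
	b.last = now
	if b.tokens >= 1 {
		b.tokens--
		return true
	}
	return false
}

func main() {
	b := newTokenBucket(2, 1) // capacity 2, refill 1 token/second
	fmt.Println(b.allow())    // true
	fmt.Println(b.allow())    // true
	fmt.Println(b.allow())    // false: bucket drained, refill not yet elapsed
}
```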

@RostislavPorohnya force-pushed the token-bucket-rate-limiter branch from 5902810 to 1563630 on May 29, 2024 12:04
@RostislavPorohnya force-pushed the token-bucket-rate-limiter branch from 1563630 to a2abf6d on July 25, 2024 15:06
@RostislavPorohnya force-pushed the token-bucket-rate-limiter branch from a2abf6d to b6238b5 on July 25, 2024 15:08
@martin-sucha (Contributor)

It seems to me that the rate limiter should be an interface instead, so that a client that wants to use a different implementation could supply their own.

Specifically, we could use the golang.org/x/time/rate implementation.

The waiting should be done by the rate limiter implementation itself, i.e. Allow should block, so that the wait does not need to happen in 50-millisecond increments and could also wait on external events.

The rate limiter should take the context into account, so that when the context is canceled or a deadline is reached, the limiter will stop waiting.

The behavior in this pull request could be implemented outside of gocql by wrapping Session.Query. That said, a proper implementation should probably rate-limit the query attempts (queryExecutor.attemptQuery) instead of the high-level Session.Query call. Session.Query also does not handle batches, whereas attemptQuery is used for batches too.

It seems that there could be an interceptor-like interface that would allow wrapping attemptQuery calls. This could be used to supply a rate limiter implementation as well as to replace the Query/Batch observers in version 2 of the gocql package.

Something like the following draft, though the interface would probably need some refinement:

type QueryAttemptInterceptor interface {
    // AttemptQuery executes the query on the given connection.
    // The attempt function does the actual work; the AttemptQuery implementation is free to execute code
    // before/after the call to the attempt function or skip the call altogether.
    // If error is nil, the returned *Iter must be non-nil.
    AttemptQuery(ctx context.Context, query ExecutableQuery, conn *Conn, attempt func(ctx context.Context, query ExecutableQuery, conn *Conn) *Iter) (*Iter, error)
}

@testisnullus @joao-r-reis what do you think, should we open a separate issue for the interceptor-like interface?

@joao-r-reis (Contributor)

I agree that making this an interceptor-like interface would open up more use cases beyond rate limiting so it seems like a good idea to me 👍

@testisnullus

I think it is a great suggestion; we can create an issue for it.

Development

Successfully merging this pull request may close these issues.

too many (100k) concurrent queries produce errors