Conversation
Important — Review skipped: Draft detected. Please check the settings in the CodeRabbit UI or the ⚙️ Run configuration.
Note — Reviews paused: It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
Walkthrough

Per-host HTTP client pooling and connection tracking were added: the client pool's `Get` now accepts a host, transports collect new-vs-reused connection counts, pool eviction windows were shortened, keep-alive disabling by scan strategy was removed, client acquisition moved to request execution, and shutdown logging and a benchmark suite were added.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant R as Request
    participant P as HTTPClientPool
    participant C as retryablehttp.Client
    participant T as Transport
    participant S as TargetServer
    participant A as Analyzer
    R->>P: Get(options, config, host)
    note right of P: select/create per-host client\nwrap transport with connTrackingTransport
    P-->>C: return client
    R->>C: Do(req)
    C->>T: RoundTrip(req)
    T->>S: TCP/TLS connect & send
    S-->>T: response
    T->>T: httptrace.GotConn -> increment New/Reused
    T-->>C: response
    C-->>R: response
    R->>A: analyze(response, client=C)
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~65 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 6
🧹 Nitpick comments (2)

pkg/protocols/common/automaticscan/automaticscan.go (1)

97-97: Disable cookie storage on the shared Wappalyzer client.

This client is reused across every target, and `httpclientpool.Get(..., &Configuration{}, "")` will give it a default jar. `getTagsUsingWappalyzer` only does a stateless fingerprinting GET, so those cookies just retain cross-target state/memory for no benefit. `DisableCookie: true` looks like the safer default here.

Proposed change

```diff
-	httpclient, err := httpclientpool.Get(opts.ExecuterOpts.Options, &httpclientpool.Configuration{}, "")
+	httpclient, err := httpclientpool.Get(opts.ExecuterOpts.Options, &httpclientpool.Configuration{
+		DisableCookie: true,
+	}, "")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/protocols/common/automaticscan/automaticscan.go` at line 97: the shared Wappalyzer HTTP client is created via httpclientpool.Get in getTagsUsingWappalyzer and currently receives a default jar; update the Configuration passed to httpclientpool.Get (the second argument) to set DisableCookie: true so the returned httpclient does not store cookies across targets (use opts.ExecuterOpts.Options and the existing Configuration struct but set DisableCookie to true) to ensure stateless fingerprinting and avoid cross-target cookie retention.

pkg/protocols/http/httpclientpool/clientpool_benchmark_test.go (1)

101-118: Add at least one test that goes through `Get()`.

`tracedClient` builds standalone `http.Client`s, so this suite never exercises `pkg/protocols/http/httpclientpool.Get`, the host-keyed cache path, or the explicit-jar cache bypass added in this PR. As written, these are useful microbenchmarks, but they do not lock down the actual regression surface.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/protocols/http/httpclientpool/clientpool_benchmark_test.go` around lines 101 - 118: add a unit test that actually calls the httpclientpool Get method so the host-keyed cache and explicit-jar bypass are exercised: create a pool (using the same factory/hooks your code exposes), use tracedClient (the connTrackingRoundTripper with newConns/reusedConns counters) as the underlying client factory, call pool.Get(host) twice and assert the second call exercises the cache (no newConns, increased reusedConns or same client pointer), then call pool.Get(host, explicitJar=true) (or the equivalent API) and assert it bypasses the cache (newConns increments or returns a different client). Reference tracedClient, connTrackingRoundTripper, newConns, reusedConns and Get in your test so it validates both the host-keyed cache path and the explicit-jar cache bypass.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/runner/runner.go`:
- Around line 430-434: GetConnectionStats() returns package-global cumulative
counters, so per-scan logs in runner.go show aggregated totals across in-process
runs; update the httpclientpool usage to produce per-execution stats by either
calling a reset or using a scoped API: add/ call
httpclientpool.ResetConnectionStats() at the start (or end) of a run inside the
runner (around where GetConnectionStats() is invoked) or change to an API like
httpclientpool.GetConnectionStatsForExecution(executionId) /
httpclientpool.ScopedStats(executionId) and pass the run's ExecutionId so the
logged totals (from GetConnectionStats / new scoped method) reflect only the
current scan. Ensure you reference and modify the call site in runner.go where
GetConnectionStats() is used and wire in ExecutionId from the current Runner
context.
In `@pkg/protocols/http/httpclientpool/clientpool_benchmark_test.go`:
- Around line 215-218: The test currently enforces a hard performance threshold
by asserting require.Greater(t, speedup, 1.5) on the local speedup variable;
remove that flaky assertion and instead record the observed speedup (e.g.,
t.Logf or similar) so the test no longer fails on noisy CI—locate the speedup
variable and the require.Greater call in clientpool_benchmark_test.go and
replace the failing assertion with a non-fatal log of the speedup, relying on
the existing deterministic connection-count assertions to validate behavior.
In `@pkg/protocols/http/httpclientpool/clientpool.go`:
- Around line 220-244: wrappedGet currently keys the pool by
configuration.Hash() plus host but configuration.Hash() doesn't include
Connection.DisableKeepAlive, so clients with different DisableKeepAlive values
can collide; update the pool key construction in clientpool.go (the code around
wrappedGet / the local hash variable) to append the effective DisableKeepAlive
flag (e.g., strconv.FormatBool(disableKeepAlives) or "keepalive=off"/"on") to
the hash before calling dialers.HTTPClientPool.Get, ensuring the same unique
symbols (configuration.Hash(), host, disableKeepAlives) are used when creating
and retrieving the client so keep-alive behavior is correctly honored.
- Around line 51-67: The connTrackingTransport wrapper only implements
RoundTrip, so http.Client.CloseIdleConnections() doesn't reach the underlying
transport; add a CloseIdleConnections method on connTrackingTransport that
forwards the call to the wrapped transport when it supports it (assert t.base to
an interface with CloseIdleConnections and call it), ensuring no-op if the base
doesn't implement that method; reference connTrackingTransport, RoundTrip,
CloseIdleConnections and the base http.RoundTripper in your change.
In `@pkg/protocols/http/request.go`:
- Around line 1010-1015: The analyzer is being given a new client via
request.getHTTPClientForHost(hostname), which can drop per-request overrides
(cookie jar, WithCustomTimeout) applied when the request was executed; instead
pass the same HTTP client instance that executed the request (the local
httpclient/httpClient variable created earlier when cloning connConfig and
resolving the client) into analyzer.Analyze so follow-up requests use the same
session/timeout; replace the getHTTPClientForHost(hostname) call with the
executing client variable when invoking analyzer.Analyze.
- Around line 847-850: The pool key currently passed to httpclientpool.Get uses
the Host header override (hostname / generatedRequest.request.Host), which
collapses distinct connection targets; change the key to use the actual
connection target generatedRequest.request.URL.Host (or concatenate URL.Host +
generatedRequest.request.Host if you need both host-target and Host-header
isolation) when calling httpclientpool.Get (the call with
request.options.Options, connConfig, hostname) so each distinct target gets its
own pool entry and transports are not incorrectly reused across vhost/IP scans.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 753306ea-3133-400d-8336-97eb7846a7b9
📒 Files selected for processing (11)
- internal/runner/runner.go
- lib/sdk_private.go
- pkg/protocols/common/automaticscan/automaticscan.go
- pkg/protocols/common/protocolstate/state.go
- pkg/protocols/http/build_request.go
- pkg/protocols/http/http.go
- pkg/protocols/http/httpclientpool/clientpool.go
- pkg/protocols/http/httpclientpool/clientpool_benchmark_test.go
- pkg/protocols/http/request.go
- pkg/protocols/utils/http/requtils.go
- pkg/tmplexec/exec.go
```go
if newConns, reusedConns := httpclientpool.GetConnectionStats(); newConns+reusedConns > 0 {
	total := newConns + reusedConns
	ratio := float64(reusedConns) / float64(total) * 100
	gologger.Info().Msgf("HTTP connections: %d total, %d new, %d reused (%.1f%%)", total, newConns, reusedConns, ratio)
}
```
Connection stats will be cumulative across multiple in-process scans.
GetConnectionStats() reads package-global counters, so a second SDK/embedded run in the same process will log totals from earlier executions too. If this is meant to diagnose one scan, reset or scope these stats by ExecutionId when a run starts or ends.
```go
// speedup sanity check (at least 1.5x on localhost)
speedup := float64(old.Duration) / float64(new.Duration)
require.Greater(t, speedup, 1.5,
	"expected at least 1.5x speedup with connection reuse")
```
Avoid hard performance thresholds in unit tests.
require.Greater(t, speedup, 1.5) is hardware- and load-dependent, so it can flap on noisy CI runners even when connection reuse is correct. The exact connection-count assertions already validate the behavior deterministically.
🩹 Proposed fix

```diff
 // speedup sanity check (at least 1.5x on localhost)
 speedup := float64(old.Duration) / float64(new.Duration)
-require.Greater(t, speedup, 1.5,
-	"expected at least 1.5x speedup with connection reuse")
+t.Logf("observed speedup with connection reuse: %.2fx", speedup)
```
```diff
 hash := configuration.Hash()
 if host != "" {
 	hash += ":" + host
 }
 if client, ok := dialers.HTTPClientPool.Get(hash); ok {
 	return client, nil
 }

-// Multiple Host
-retryableHttpOptions := retryablehttp.DefaultOptionsSpraying
-disableKeepAlives := true
-maxIdleConns := 0
-maxConnsPerHost := 0
-maxIdleConnsPerHost := -1
-// do not split given timeout into chunks for retry
-// because this won't work on slow hosts
+// Each client is scoped to a single host, so we optimize for connection
+// reuse: keep-alive always on, small idle pool, and an idle timeout that
+// lets the transport reclaim unused connections automatically.
+retryableHttpOptions := retryablehttp.DefaultOptionsSingle
 retryableHttpOptions.NoAdjustTimeout = true

-if configuration.Threads > 0 || options.ScanStrategy == scanstrategy.HostSpray.String() {
-	// Single host
-	retryableHttpOptions = retryablehttp.DefaultOptionsSingle
-	disableKeepAlives = false
-	maxIdleConnsPerHost = 500
-	maxConnsPerHost = 500
-}
+maxIdleConns := 4
+maxIdleConnsPerHost := 4
+maxConnsPerHost := 25
 if configuration.Threads > 0 {
 	maxIdleConnsPerHost = configuration.Threads
+	maxIdleConns = configuration.Threads
 	maxConnsPerHost = configuration.Threads
 }

+disableKeepAlives := configuration.Connection != nil && configuration.Connection.DisableKeepAlive
```
Include Connection.DisableKeepAlive in the pool key.
wrappedGet caches by configuration.Hash() plus host, but Configuration.Hash() does not encode Connection.DisableKeepAlive. Any two non-nil ConnectionConfigurations that differ only in that flag will collide and can return a client with the wrong keep-alive behavior.
🩹 Proposed fix

```diff
 func (c *Configuration) Hash() string {
 	builder := &strings.Builder{}
 	builder.Grow(16)
@@
 	builder.WriteString("c")
 	builder.WriteString(strconv.FormatBool(c.Connection != nil))
+	if c.Connection != nil {
+		builder.WriteString("d")
+		builder.WriteString(strconv.FormatBool(c.Connection.DisableKeepAlive))
+	}
 	if c.Connection != nil && c.Connection.CustomMaxTimeout > 0 {
 		builder.WriteString("k")
 		builder.WriteString(c.Connection.CustomMaxTimeout.String())
 	}
```
```go
httpclient, clientErr := httpclientpool.Get(request.options.Options, connConfig, hostname)
if clientErr != nil {
	return errors.Wrap(clientErr, "could not get http client")
}
```
🧩 Analysis chain
🌐 Web query:
In Go's net/http client requests, what is the semantic difference between Request.Host and Request.URL.Host? Is Request.Host the actual connection target, or the Host-header override?
💡 Result:
In Go's net/http client requests, Request.URL.Host specifies the actual connection target (the server to connect to, determining the IP address and port for the TCP connection), while Request.Host is the optional override for the value of the Host header sent in the HTTP request. If Request.Host is empty, the Host header sent uses the value from Request.URL.Host (per Request.Write logic). Setting Request.Host explicitly overrides the Host header value without changing the connection target. Note that setting the "Host" header directly via req.Header.Set("Host", ...) has no effect for outgoing client requests, as the net/http client ignores it and uses Request.Host instead. This distinction allows connecting to an IP address (via URL.Host) while sending a different domain in the Host header (via Request.Host), useful for virtual hosting or testing.
Citations:
- 1: https://stackoverflow.com/questions/42921567/what-is-the-difference-between-host-and-url-host-for-golang-http-request
- 2: https://dev.to/himanshu_sahu/understanding-requrlhost-and-reqhost-in-gos-http-requests-20j5
- 3: https://gist.github.com/Integralist/8c0140da58fd6575373f6f1d98367170
- 4: https://go.dev/src/net/http/request.go
- 5: https://pkg.go.dev/net/http@go1.25
- 6: https://go.dev/src/net/http/client.go
- 7: https://go.dev/issue/7682
- 8: https://stackoverflow.com/questions/71253939/make-http-request-with-an-absolute-url-in-the-request-url
🏁 Scripts executed (repository: projectdiscovery/nuclei):

```shell
head -860 pkg/protocols/http/request.go | tail -50 | cat -n
fd httpclientpool -t f | head -5
rg "func.*Get.*httpclientpool" -t go
rg "httpclientpool" -t go | grep -i "package\|import" | head -10
cat -n pkg/protocols/http/httpclientpool/clientpool.go | head -100
rg "^func.*Get\(" pkg/protocols/http/httpclientpool/clientpool.go -A 20
rg "generatedRequest\.request\." pkg/protocols/http/request.go | grep -E "URL|Host" | head -20
```
The pool key should use the dial target, not the Host header override.
This Get(..., hostname) call passes generatedRequest.request.Host as the pool key. For templates that set a Host: header override against multiple IPs or URLs, this collapses different targets into a single pool entry because the key becomes the overridden Host header (which is identical across targets) instead of the actual connection target (request.URL.Host). This defeats the per-host pooling isolation and causes unintended connection/transport reuse across different IPs in vhost scans.
Use generatedRequest.request.URL.Host as the primary pool key instead, or combine both values if Host-header isolation is also required.
```diff
 analysisMatched, analysisDetails, err := analyzer.Analyze(&analyzers.Options{
 	FuzzGenerated:      generatedRequest.fuzzGeneratedRequest,
-	HttpClient:         request.httpClient,
+	HttpClient:         request.getHTTPClientForHost(hostname),
 	ResponseTimeDelay:  duration,
 	AnalyzerParameters: request.Analyzer.Parameters,
 })
```
Pass the same client that executed the request into the analyzer.
The request path on Lines 835-850 may clone connConfig to attach input.CookieJar and WithCustomTimeout, then resolves httpclient from that clone. Re-fetching here via getHTTPClientForHost drops those per-request overrides and can silently fall back to a different client, so analyzer follow-up requests can observe a different session/timeout than the response they are analyzing.
Proposed change

```diff
 var (
 	resp            *http.Response
 	fromCache       bool
 	dumpedRequest   []byte
 	projectCacheKey []byte
+	httpclient      *retryablehttp.Client
 )
@@
-httpclient, clientErr := httpclientpool.Get(request.options.Options, connConfig, hostname)
+httpclient, clientErr = httpclientpool.Get(request.options.Options, connConfig, hostname)
 if clientErr != nil {
 	return errors.Wrap(clientErr, "could not get http client")
 }
@@
+if httpclient == nil {
+	httpclient, err = httpclientpool.Get(request.options.Options, request.connConfiguration, hostname)
+	if err != nil {
+		return errors.Wrap(err, "could not get http client for analyzer")
+	}
+}
 analysisMatched, analysisDetails, err := analyzer.Analyze(&analyzers.Options{
 	FuzzGenerated:      generatedRequest.fuzzGeneratedRequest,
-	HttpClient:         request.getHTTPClientForHost(hostname),
+	HttpClient:         httpclient,
 	ResponseTimeDelay:  duration,
 	AnalyzerParameters: request.Analyzer.Parameters,
 })
```
Neo - PR Security Review: No security issues found.
🧹 Nitpick comments (1)

pkg/protocols/common/protocolstate/state.go (1)

283-289: LGTM! Proper cleanup sequence for idle connections.

The order of operations is correct—closing idle connections and clearing the pool before closing the fastdialer ensures transports don't attempt to use an already-closed dialer.

One minor defensive improvement: consider adding a nil check for `client` to guard against unexpected nil entries in the pool.

🛡️ Optional: Add defensive nil check

```diff
 _ = dialersInstance.HTTPClientPool.Iterate(func(_ string, client *retryablehttp.Client) error {
+	if client != nil && client.HTTPClient != nil {
 		client.HTTPClient.CloseIdleConnections()
+	}
 	return nil
 })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/protocols/common/protocolstate/state.go` around lines 283 - 289: add a defensive nil check inside the Iterate callback to skip any nil entries before calling CloseIdleConnections: in the block using dialersInstance.HTTPClientPool.Iterate(func(_ string, client *retryablehttp.Client) error { ... }), verify client != nil and also client.HTTPClient != nil before calling client.HTTPClient.CloseIdleConnections(), then return nil as before; keep the subsequent dialersInstance.HTTPClientPool.Clear() unchanged.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 39974cdc-8191-45db-988f-7fd5cd2d5e67
📒 Files selected for processing (1)
pkg/protocols/common/protocolstate/state.go
Actionable comments posted: 1
♻️ Duplicate comments (1)

pkg/protocols/http/httpclientpool/clientpool_benchmark_test.go (1)

216-219: ⚠️ Potential issue | 🟡 Minor — Avoid hard performance thresholds in unit tests.

The `require.Greater(t, speedup, 1.5)` assertion is hardware- and load-dependent, making it prone to flakiness on noisy CI runners. The deterministic connection-count assertions (lines 205-214) already validate the behavior correctly.

🩹 Proposed fix

```diff
 // speedup sanity check (at least 1.5x on localhost)
 speedup := float64(old.Duration) / float64(new.Duration)
-require.Greater(t, speedup, 1.5,
-	"expected at least 1.5x speedup with connection reuse")
+t.Logf("observed speedup with connection reuse: %.2fx", speedup)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/protocols/http/httpclientpool/clientpool_benchmark_test.go` around lines 216 - 219: the test currently computes speedup := float64(old.Duration) / float64(new.Duration) and asserts require.Greater(t, speedup, 1.5), which is a flaky, environment-dependent performance threshold; remove the speedup calculation and the require.Greater assertion (the speedup variable, old.Duration/new.Duration usage, and the require.Greater call) and rely on the existing deterministic connection-count assertions in this test to validate behavior instead.
🧹 Nitpick comments (1)

pkg/protocols/http/httpclientpool/clientpool_benchmark_test.go (1)

81-100: Consider adding `CloseIdleConnections()` for consistency with the production wrapper.

The test's `connTrackingRoundTripper` doesn't implement `CloseIdleConnections()`, unlike the production `connTrackingTransport` in clientpool.go. While this doesn't break current tests (since they don't call it), adding the method would maintain consistency and prevent issues if tests are extended.

🔧 Proposed addition

```diff
 func (rt *connTrackingRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
 	trace := &httptrace.ClientTrace{
 		GotConn: func(info httptrace.GotConnInfo) {
 			if info.Reused {
 				rt.reused.Add(1)
 			} else {
 				rt.newConns.Add(1)
 			}
 		},
 	}
 	req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))
 	return rt.base.RoundTrip(req)
 }
+
+func (rt *connTrackingRoundTripper) CloseIdleConnections() {
+	type closeIdler interface{ CloseIdleConnections() }
+	if ci, ok := rt.base.(closeIdler); ok {
+		ci.CloseIdleConnections()
+	}
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/protocols/http/httpclientpool/clientpool_benchmark_test.go` around lines 81 - 100: the test's connTrackingRoundTripper is missing a CloseIdleConnections method (unlike production connTrackingTransport in clientpool.go); add a CloseIdleConnections receiver on connTrackingRoundTripper that delegates to the underlying base transport when available (use a type assertion to an interface with CloseIdleConnections or assert *http.Transport) so idle connections are closed consistently with production behavior.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pkg/protocols/http/httpclientpool/clientpool_benchmark_test.go`:
- Around line 136-186: The scan runners (runTemplateSpray, runHostSpray,
runConcurrentHostSpray) currently ignore the error returned by doRequest which
can hide failures and corrupt connection counts; update each loop to capture the
error (err := doRequest(...)) and fail fast on non-nil errors by aborting (e.g.,
panic or log.Fatalf) with a descriptive message that includes the target URL and
the error; apply this change in the bodies of runTemplateSpray, runHostSpray,
and the goroutine inside runConcurrentHostSpray (use the same failure mechanism
so the benchmark exits immediately when doRequest fails).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 9d561d91-8fd3-475f-82f2-656d84f3accc
📒 Files selected for processing (1)
pkg/protocols/http/httpclientpool/clientpool_benchmark_test.go
```go
// runTemplateSpray: outer loop = templates, inner loop = hosts (like nuclei template-spray).
func runTemplateSpray(servers []*httptest.Server, templates int, factory clientFactory) benchResult {
	client, newC, reusedC := factory()
	total := templates * len(servers)
	start := time.Now()
	for t := 0; t < templates; t++ {
		for _, srv := range servers {
			_ = doRequest(client, srv.URL+fmt.Sprintf("/t%d", t))
		}
	}
	return benchResult{time.Since(start), total, newC.Load(), reusedC.Load()}
}

// runHostSpray: outer loop = hosts, inner loop = templates (like nuclei host-spray).
func runHostSpray(servers []*httptest.Server, templates int, factory clientFactory) benchResult {
	client, newC, reusedC := factory()
	total := templates * len(servers)
	start := time.Now()
	for _, srv := range servers {
		for t := 0; t < templates; t++ {
			_ = doRequest(client, srv.URL+fmt.Sprintf("/t%d", t))
		}
	}
	return benchResult{time.Since(start), total, newC.Load(), reusedC.Load()}
}

// runConcurrentHostSpray: hosts in parallel (bounded by concurrency), templates
// sequential per host. Each host gets its own client (the per-host pool model).
func runConcurrentHostSpray(servers []*httptest.Server, templates, concurrency int, factory perHostClientFactory) benchResult {
	total := templates * len(servers)
	var totalNew, totalReused atomic.Int64
	sem := make(chan struct{}, concurrency)
	var wg sync.WaitGroup
	start := time.Now()
	for _, srv := range servers {
		sem <- struct{}{}
		wg.Add(1)
		go func(s *httptest.Server) {
			defer wg.Done()
			defer func() { <-sem }()
			client, newC, reusedC := factory(s.URL)
			for t := 0; t < templates; t++ {
				_ = doRequest(client, s.URL+fmt.Sprintf("/t%d", t))
			}
			totalNew.Add(newC.Load())
			totalReused.Add(reusedC.Load())
		}(srv)
	}
	wg.Wait()
	return benchResult{time.Since(start), total, totalNew.Load(), totalReused.Load()}
}
```
Silently ignoring request errors may mask test failures.
The scan pattern runners discard errors from doRequest (lines 143, 156, 178). If requests fail due to network issues or server problems, the connection counts will be incorrect, potentially causing misleading test results.
Consider tracking errors or failing fast:
🩹 Proposed fix (fail-fast approach)

```diff
-func runTemplateSpray(servers []*httptest.Server, templates int, factory clientFactory) benchResult {
+func runTemplateSpray(tb testing.TB, servers []*httptest.Server, templates int, factory clientFactory) benchResult {
+	tb.Helper()
 	client, newC, reusedC := factory()
 	total := templates * len(servers)
 	start := time.Now()
 	for t := 0; t < templates; t++ {
 		for _, srv := range servers {
-			_ = doRequest(client, srv.URL+fmt.Sprintf("/t%d", t))
+			if err := doRequest(client, srv.URL+fmt.Sprintf("/t%d", t)); err != nil {
+				tb.Fatalf("request failed: %v", err)
+			}
 		}
 	}
 	return benchResult{time.Since(start), total, newC.Load(), reusedC.Load()}
 }
```

The parameter is named `tb` rather than `t` so the `t` loop variable does not shadow it. Apply similar changes to runHostSpray and runConcurrentHostSpray.
Resolve conflict in clientpool.go: keep ConnectionStats and connTrackingTransport from this branch, adopt dev's removal of Init()/forceMaxRedirects and use ShouldFollowHTTPRedirects().
Proposed changes
Close #5453
Checklist
Summary by CodeRabbit
Performance
Bug Fixes
Chores
Tests