[pull] 4.1 from netty:4.1 #70
Open
pull wants to merge 1,425 commits into hx71105417:4.1 from netty:4.1
Motivation: Fix some typos in comments. Modification: Fix some typos in comments. Result: Typos in comments are fixed. Signed-off-by: welfuture <[email protected]>
Motivation: Bump japicmp to 0.15.7. I checked 0.20.0 too, but it raises a false alarm. Modification: Bump japicmp to 0.15.7. Result: japicmp 0.15.7 is used now
#13992) …ansport Motivation: We used the NioEventLoopGroup in the test, which is not really correct and only works because of some implementation details; better to use the correct EventLoopGroup. Modifications: Change to use DefaultEventLoopGroup. Result: Fix incorrect code in the test
Motivation: Bump japicmp to 0.21.0 Modification: There was an issue in the newer version, but after it was reported it has now been fixed. siom79/japicmp#393 siom79/japicmp#394 Result: Bump japicmp to 0.21.0 Co-authored-by: Norman Maurer <[email protected]>
### Motivation: Our use case requires using the NIO transport for unix domain sockets (UDS) in Netty 4.x. Although Netty 5.x supports UDS, it is not compatible with 4.x. For us, and probably many users, having to migrate to 5.x in order to use UDS is not a viable option. ### Modification: Add new classes to support UDS with NIO. ### Result: UDS is supported when using the NIO transport on Java 16+. Co-authored-by: Norman Maurer <[email protected]>
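For readers who want to try it, a minimal sketch of bootstrapping a UDS server over the NIO transport follows; the channel class name `NioServerDomainSocketChannel` and the socket path are assumptions for illustration (verify them against the actual PR), and Java 16+ is required for `UnixDomainSocketAddress`.

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.Channel;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioServerDomainSocketChannel; // assumed class name
import java.net.UnixDomainSocketAddress;

public final class NioUdsServer {
    public static void main(String[] args) throws Exception {
        EventLoopGroup group = new NioEventLoopGroup();
        try {
            ServerBootstrap bootstrap = new ServerBootstrap()
                    .group(group)
                    .channel(NioServerDomainSocketChannel.class) // NIO transport, no native library needed
                    .childHandler(new ChannelInitializer<Channel>() {
                        @Override
                        protected void initChannel(Channel ch) {
                            // add application handlers here
                        }
                    });
            // java.net.UnixDomainSocketAddress requires Java 16+
            Channel serverChannel = bootstrap.bind(UnixDomainSocketAddress.of("/tmp/app.sock")).sync().channel();
            serverChannel.closeFuture().sync();
        } finally {
            group.shutdownGracefully();
        }
    }
}
```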
Motivation: When using the bouncycastle FIPS dependencies (bcpkix-fips instead of bcpkix, and bc-fips instead of bcprov), BouncyCastleProvider is replaced by BouncyCastleFipsProvider. This made BouncyCastleSelfSignedCertGenerator fail even though all the necessary algorithms are present. While bc-fips is only necessary for fips-compliant production deployments, and self-signed certs are only needed in test deployments that don't have to be fips-compliant, this change is still useful because the fips and non-fips artifacts cannot coexist on the same classpath. So if production has a fips dependency, tests cannot also pull in a non-fips dependency for self-signed certs; it's easiest to just use the fips dependency everywhere. Modification: Use reflection in BouncyCastleSelfSignedCertGenerator to instantiate whichever provider is available. This has the advantage of not needing a new dependency, though it may have some impact on native image deployments. For this reason I've also added the providers to the reflect-config.json. No test is possible because it would require a different classpath. I tested manually that it works by changing to the fips dependencies, running SelfSignedCertificateTest, and checking with a debugger that the correct provider was used. Result: SelfSignedCertificate will work with bc-fips dependencies.
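As a rough illustration of the reflection approach (not Netty's exact BouncyCastleSelfSignedCertGenerator code), the sketch below instantiates whichever BouncyCastle provider class happens to be on the classpath:

```java
import java.security.Provider;

final class BcProviderLookup {
    static Provider newBouncyCastleProvider() throws ReflectiveOperationException {
        String[] candidates = {
                "org.bouncycastle.jce.provider.BouncyCastleProvider",        // plain bcprov/bcpkix artifacts
                "org.bouncycastle.jcajce.provider.BouncyCastleFipsProvider"  // bc-fips artifact
        };
        for (String className : candidates) {
            try {
                Class<?> providerClass = Class.forName(className);
                return (Provider) providerClass.getConstructor().newInstance();
            } catch (ClassNotFoundException ignore) {
                // this artifact is not on the classpath, try the next one
            }
        }
        throw new IllegalStateException("No BouncyCastle provider class found on the classpath");
    }
}
```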
Motivation: This is a fix for issue #13981, which reports a changed behaviour of HttpPostStandardRequestDecoder after this PR: #13908. Because HttpPostStandardRequestDecoder changed its contract, some implementations relying on the previous parsing behaviour are failing. Modification: This PR makes sure that the edge-case handling for a form body happens only when the content is in fact a form body. Result: Fixes #13981
Motivation: We should also add other JDKs to the docker images that are used by devcontainers, so it's easier to debug with different JDKs. Modifications: Add Java 8, 11, 17. Result: Easier to debug with devcontainers --------- Co-authored-by: Chris Vest <[email protected]>
Motivation: AdaptivePoolingAllocator uses shared Magazines, relying on thread id distribution to improve behaviour under contention, but event loop threads can just use their own Magazine(s) and save 2 atomic operations (write lock/unlock) in the hot path, further improving performance. Modification: Implement dedicated event loop Magazines. Result: Better allocation performance for event loop threads
Motivation: Varint decoding falls short when the input length is not predictable. Modification: Implement branchless, batched varint decoding. Result: Better performance when the input length is not predictable
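The commit message is terse, so here is a generic, hedged illustration of what "branchless, batched" varint decoding can look like; it is not Netty's implementation, and it assumes the caller guarantees at least 8 readable bytes:

```java
import io.netty.buffer.ByteBuf;

final class VarintSketch {
    /**
     * Decodes a protobuf-style varint32; the caller must guarantee at least
     * 8 readable bytes (e.g. by padding the buffer or checking readableBytes()).
     */
    static int readRawVarint32(ByteBuf buf) {
        long word = buf.getLongLE(buf.readerIndex());
        // Bits are set where a byte's continuation (most significant) bit is clear;
        // the first such byte terminates the varint.
        long terminators = ~word & 0x8080808080808080L;
        int length = terminators == 0 ? 9 : (Long.numberOfTrailingZeros(terminators) >>> 3) + 1;
        if (length > 5) {
            throw new IllegalArgumentException("malformed varint32");
        }
        int result = 0;
        for (int i = 0; i < length; i++) {
            result |= (int) ((word >>> (8 * i)) & 0x7F) << (7 * i);
        }
        buf.skipBytes(length);
        return result;
    }
}
```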
Motivation: AdaptivePoolingAllocator's buffer allocation, while bumping the pointer in the current chunk, is uncontended, but given that ChunkByteBuf extends AbstractReferenceCountedByteBuf for convenience (leak detection, double-free detection, etc.), it still uses atomic operations. Although Apple M1/M2/... CPUs are pretty fast at uncontended atomic operations, this is not so true for x86 ones. Modification: Make ChunkByteBuf implement its own "weaker" reference-count logic compared to AbstractReferenceCountedByteBuf, which benefits from a double counter that is not atomic on the borrowing part. Result: Better allocation performance
Motivation: We should just remove @UnstableApi from things that we will not break before 5.0. This removes confusion and also ensures we will be able to detect breakage via our maven plugins. Modifications: Remove annotation usage where it makes sense Result: Fixes #14012
…14058) Motivation: Enhance security by supporting a specific secure randomness source in SSLContext initialization. Modification: Support configuring a SecureRandom in `SslContextBuilder`. Allow passing a SecureRandom as a parameter when creating an instance of `JdkSslServerContext` through its constructor. Result: Enhanced security. Fixes #14026 --------- Co-authored-by: Norman Maurer <[email protected]>
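A minimal usage sketch follows; it assumes the new builder method is named `secureRandom(...)` and uses placeholder certificate files, so check the actual `SslContextBuilder` API in your Netty version:

```java
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;
import java.io.File;
import java.security.SecureRandom;

final class SecureRandomSslContext {
    static SslContext build() throws Exception {
        SecureRandom random = SecureRandom.getInstanceStrong(); // explicit randomness source
        return SslContextBuilder
                .forServer(new File("cert.pem"), new File("key.pem")) // placeholder paths
                .secureRandom(random)                                 // assumed method name
                .build();
    }
}
```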
#14072) …icate*, slice* methods should be safe to be called from multiple threads. Motivation: Get* operations should always be safe to use from different threads. Modifications: - Only use internalNioBuffer when one of the read* or write* methods is used. This is necessary to prevent races, as those can happen when a slice or duplicate is shared between different Channels - Add unit tests Result: Fixes #14070. Related to #1865
Motivation: I need to extend Http2FrameCodec to reduce code duplication. Modification: Make Http2FrameCodec's constructor protected. Result: Http2FrameCodec is extendable by users. --------- Co-authored-by: Norman Maurer <[email protected]>
Motivations: These exceptions generally shouldn't happen, but if they do, they break internal invariants and the best a system can do is to somehow restart. To facilitate systems reacting to this, we should tell the termination future about these exceptions. Modification: When an event loop terminates from an unexpected exception, we now shut down the event loop as normal, but mark the termination future as failed. Result: It's now possible for applications to listen for event loops displaying improper lifetime behavior. This helps situations such as the one described in #14632 --------- Co-authored-by: Norman Maurer <[email protected]>
Motivation: When a frame is an unknown frame per the HTTP/2 spec, we want to use Http2FrameCodec but avoid an allocation of the `UnknownHttp2Frame`, meaning we can decode it into our own frame type. Modification: Add a newHttp2UnknownFrame method to Http2FrameCodec Result: I can now create my custom frame instead. --------- Co-authored-by: Norman Maurer <[email protected]>
Motivation: We missed releasing the IovArray when shutting down the KQueueEventLoop. This led to a small native memory leak. Modifications: Correctly release the native memory on shutdown. Result: Fix small memory leak
Motivation: Fix the recently introduced (as part of #14623) `java.security.AccessControlException`; here is the relevant stack trace:
```
java.security.AccessControlException: access denied ("java.lang.RuntimePermission" "getClassLoader")
	at java.base/java.security.AccessControlContext.checkPermission(AccessControlContext.java:488) ~[?:?]
	at java.base/java.security.AccessController.checkPermission(AccessController.java:1085) ~[?:?]
	at java.base/java.lang.SecurityManager.checkPermission(SecurityManager.java:411) ~[?:?]
	at java.base/java.lang.ClassLoader.checkClassLoaderPermission(ClassLoader.java:2071) ~[?:?]
	at java.base/java.lang.Thread.getContextClassLoader(Thread.java:2284) ~[?:?]
	at io.netty.util.concurrent.GlobalEventExecutor.startThread(GlobalEventExecutor.java:239) ~[netty-common-4.1.117.Final.jar:4.1.117.Final]
	at io.netty.util.concurrent.GlobalEventExecutor.execute0(GlobalEventExecutor.java:232) ~[netty-common-4.1.117.Final.jar:4.1.117.Final]
	at io.netty.util.concurrent.GlobalEventExecutor.execute(GlobalEventExecutor.java:226) ~[netty-common-4.1.117.Final.jar:4.1.117.Final]
	at io.netty.util.concurrent.DefaultPromise.safeExecute(DefaultPromise.java:862) [netty-common-4.1.117.Final.jar:4.1.117.Final]
	at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:500) [netty-common-4.1.117.Final.jar:4.1.117.Final]
	at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) [netty-common-4.1.117.Final.jar:4.1.117.Final]
	at io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) [netty-common-4.1.117.Final.jar:4.1.117.Final]
	at io.netty.util.concurrent.DefaultPromise.setSuccess(DefaultPromise.java:97) [netty-common-4.1.117.Final.jar:4.1.117.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:1059) [netty-common-4.1.117.Final.jar:4.1.117.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.117.Final.jar:4.1.117.Final]
	at java.base/java.lang.Thread.run(Thread.java:1575) [?:?]
```
Modification: Apply an `AccessController.doPrivileged` block. Result: With the proper security policy, the exception is gone. Affected version: 4.1.117.Final --------- Signed-off-by: Andriy Redko <[email protected]>
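For context, a hedged sketch of the `doPrivileged` pattern the fix applies (not the exact Netty change):

```java
import java.security.AccessController;
import java.security.PrivilegedAction;

final class PrivilegedClassLoaderAccess {
    static ClassLoader contextClassLoader() {
        if (System.getSecurityManager() == null) {
            return Thread.currentThread().getContextClassLoader();
        }
        // Run inside a privileged block so only this code needs the
        // "getClassLoader" RuntimePermission in the security policy.
        return AccessController.doPrivileged(
                (PrivilegedAction<ClassLoader>) () -> Thread.currentThread().getContextClassLoader());
    }
}
```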
Motivation: We should guard against the situation where we can't map the stream. While this should never happen, it's better to be safe than sorry. Modifications: - Ensure the required Http2FrameStream can be found and only after that try to construct the specific frame. - Release the ByteBuffer when the DefaultHttp2DataFrame constructor throws Result: Fix possible buffer leaks
…isibility to public (#14758) Motivation: Currently, the constants for the default values of the buffer are package-private. Modification: Introduce the ability to reference and reuse the default buffer-size values from other classes/projects. Result: The three constants for the buffer default values now have public visibility.
…s and reduce chunk release frequency (#14768) Motivation: As the AdaptivePoolingAllocator adjusts its preferred chunk size, it may end up causing fragmentation with the system allocator if chunks are allocated and released frequently. Modification: The chunk size is now rounded up to the nearest whole multiple of MIN_CHUNK_SIZE. MIN_CHUNK_SIZE is itself already an even multiple of many popular small page sizes, like 4k, 16k, and 64k, which makes it easier for the system allocator to manage the memory in terms of whole pages. To further reduce chunk release frequency, chunks are now probabilistically released if their size differs from the preferred chunk size. The probability of a chunk being released depends on how many MIN_CHUNK_SIZE units it deviates from the preferred chunk size. The probability is computed as 0.5% per unit of deviation, so chunks that are off by 2.6 MiB or more are guaranteed to be released. Result: Reduced memory fragmentation tendency of the adaptive allocator.
Motivation: Running a build with the paranoid leak-detector level is very expensive and slow. While it is useful for detecting buffer leaks, it is good enough to run just a few of the builds with it enabled rather than all of them. This will still detect leaks while reducing the overall build time a lot. Modifications: Change the workflow to only run 2 builds with paranoid leak detection enabled. Result: Faster build times on the CI
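A simplified, hedged sketch of the two policies described above; names and parameters are placeholders rather than Netty internals:

```java
import java.util.concurrent.ThreadLocalRandom;

final class ChunkPolicySketch {
    /** Round a requested chunk size up to the next whole multiple of minChunkSize. */
    static int roundChunkSize(int requestedSize, int minChunkSize) {
        return (requestedSize + minChunkSize - 1) / minChunkSize * minChunkSize;
    }

    /** Release a chunk with a probability proportional to its deviation from the preferred size. */
    static boolean shouldRelease(int chunkSize, int preferredChunkSize,
                                 int minChunkSize, double probabilityPerUnit) {
        int deviationUnits = Math.abs(chunkSize - preferredChunkSize) / minChunkSize;
        return ThreadLocalRandom.current().nextDouble() < deviationUnits * probabilityPerUnit;
    }
}
```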
Motivation: A new tcnative release is out Modifications: Upgrade to latest release Result: Use latest netty tcnative version
* Fix BoundedInputStream not counting null-bytes Motivation: The `InputStream.read()` method returns the byte value as an `int` in the range 0 to 255. The `BoundedInputStream` has a bug in counting the number of bytes read, where valid 0-bytes were not counted toward the limit. Modification: The `BoundedInputStream` is changed to only consider -1 as EOF, which is exactly as documented on `InputStream`. The `BoundedInputStream` is also changed to truncate and allow reads up to the limit, and then only trip the exception when it is at the limit and reads beyond. The `BoundedInputStreamTest` has also been made repeating, because it was already occasionally finding this problem. The test now passes consistently with the fixes. Result: The `BoundedInputStream` now counts all byte values toward the limit. * Add deterministic test * Refine deterministic test --------- Co-authored-by: Chris Vest <[email protected]>
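A minimal, hedged sketch of the counting rule (not Netty's `BoundedInputStream` itself): only -1 signals EOF, so a 0 byte value still counts toward the limit, and the exception only trips on a read past the limit:

```java
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

final class CountingBoundedInputStream extends FilterInputStream {
    private final int limit;
    private int count;

    CountingBoundedInputStream(InputStream in, int limit) {
        super(in);
        this.limit = limit;
    }

    @Override
    public int read() throws IOException {
        if (count >= limit) {
            // Only trip once we are at the limit and try to read beyond it.
            throw new IOException("input exceeds " + limit + " bytes");
        }
        int b = in.read();
        if (b != -1) {
            count++; // -1 is the only EOF signal; 0 is a valid byte value and must count
        }
        return b;
    }
}
```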
Motivation: We need to handle the situation correctly in all cases when there is not enough data yet to unwrap the packet. Not doing so might cause undefined behaviour. Modifications: - Correctly check for SslUtils.NOT_ENOUGH_DATA - Add an assert Result: Correctly handle incomplete packets while unwrapping
Motivation: This assertion sometimes fails in fuzzing when using the JDK SSL implementation. The reason is that the JDK equivalent of getEncryptedPacketLength (SSLEngineInputRecord#bytesInCompletePacket) remembers when a previous packet was not SSLv2 and will not check for SSLv2 again in that case. getEncryptedPacketLength cannot replicate this behavior and checks for SSLv2 every time. This can lead to a mismatch in the netty and JDK length result, which triggers the assert replaced in this PR. Modification: Replace the assert with a normal check and exception. This sort of input should always be a sign of bad data. Result: Normal decode failure instead of assertion failure. From my testing, this only affects the JDK implementation so there's no potential for a crash like CVE-2025-24970. It looks like when assertions are off, the data is just dropped atm. --------- Co-authored-by: Norman Maurer <[email protected]>
Motivation: When an HTTP message fails to aggregate, for example due to an invalid 'Expect' header, MessageAggregator does not produce a FullHttpRequest. HttpServerUpgradeHandler would then continue with the next request, wrongly believing it to be an upgrade request, even though only the previous one was. This produces an NPE:
```
Caused by: java.lang.NullPointerException: Cannot invoke "java.lang.CharSequence.length()" because "header" is null
	at io.netty.handler.codec.http.HttpServerUpgradeHandler.splitHeader(HttpServerUpgradeHandler.java:429)
	at io.netty.handler.codec.http.HttpServerUpgradeHandler.upgrade(HttpServerUpgradeHandler.java:328)
	at io.netty.handler.codec.http.HttpServerUpgradeHandler.decode(HttpServerUpgradeHandler.java:290)
	at io.netty.handler.codec.http.HttpServerUpgradeHandler.decode(HttpServerUpgradeHandler.java:40)
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:91)
	... 35 common frames omitted
```
Modification: When the upgrade handler sees a LastHttpContent, cancel the current upgrade and cancel aggregation. Result: No NPE for this input. Found by fuzzing.
…4830) Motivation: We need to check if pendingUnencryptedWrites is null before accessing it as the SslHandler might have been removed in the meantime.
```
io.netty.handler.codec.DecoderException: java.lang.NullPointerException: Cannot invoke "io.netty.handler.ssl.SslHandlerCoalescingBufferQueue.isEmpty()" because "this.pendingUnencryptedWrites" is null
	at io.netty.handler.ssl.SslHandler$SslTasksRunner.wrapIfNeeded(SslHandler.java:1798)
	at io.netty.handler.ssl.SslHandler$SslTasksRunner.safeExceptionCaught(SslHandler.java:1785)
	at io.netty.handler.ssl.SslHandler$SslTasksRunner.resumeOnEventExecutor(SslHandler.java:1896)
	at io.netty.handler.ssl.SslHandler$SslTasksRunner.access$2000(SslHandler.java:1751)
	at io.netty.handler.ssl.SslHandler$SslTasksRunner$2.run(SslHandler.java:1912)
	at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:173)
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:166)
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
	at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:405)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:998)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.lang.NullPointerException: Cannot invoke "io.netty.handler.ssl.SslHandlerCoalescingBufferQueue.isEmpty()" because "this.pendingUnencryptedWrites" is null
	at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1503)
	at io.netty.handler.ssl.SslHandler.unwrapNonAppData(SslHandler.java:1469)
	at io.netty.handler.ssl.SslHandler.access$1800(SslHandler.java:170)
	at io.netty.handler.ssl.SslHandler$SslTasksRunner.resumeOnEventExecutor(SslHandler.java:1847)
	... 9 more
```
Modifications: Add null check Result: Fix NPE
Motivation: BlockHound version 1.0.11.RELEASE comes with newer byte-buddy dependency Modification: - Bump BlockHound version Result: BlockHound version 1.0.11.RELEASE comes with newer byte-buddy dependency
Motivation: Some HTTP/2 log messages do not contain channel information, making it hard to debug when it's not possible to relate these events to others. Modifications: - Add channel info at the beginning of all log messages in `netty-codec-http2` module; Result: HTTP/2 logs consistently start with channel info.
Motivation: The URI standard RFC 3986 does not specify that query components have their spaces encoded as `+`; the implied encoding is `%20` instead. However, the WHATWG HTML standard says explicitly that the query must be encoded using `application/x-www-form-urlencoded` rules, which does use `+` for space. This is also what browsers do. QueryStringDecoder should offer a way to parse either format. Modification: - Modify QueryStringDecoder to use a builder to accommodate the increasing number of flags - Add an `htmlQueryDecoding` flag, enabled by default, to control the `+` decoding. The default value of `htmlQueryDecoding` is appropriate for most use cases; I don't think it should be changed even in netty 5. Also fixed the benchmark harness for Java 21. Result: Query strings encoded purely according to the URI spec can be decoded properly. I measured the performance of the new builder, and it didn't look any different. --------- Co-authored-by: Chris Vest <[email protected]>
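A hedged usage sketch follows; the `QueryStringDecoder.Builder` constructor, `htmlQueryDecoding(boolean)` setter, and `build(String)` method are assumptions based on this description, so verify the exact API in your Netty version:

```java
import io.netty.handler.codec.http.QueryStringDecoder;
import java.util.List;
import java.util.Map;

final class QueryDecodingExample {
    static Map<String, List<String>> decodeRfc3986(String uri) {
        QueryStringDecoder decoder = new QueryStringDecoder.Builder() // assumed builder entry point
                .htmlQueryDecoding(false) // treat '+' as a literal plus, per RFC 3986
                .build(uri);              // assumed build method
        return decoder.parameters();
    }
}
```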
Motivation: Before this change, the 'io.netty.handler.ssl.BouncyCastlePemReader' class used by the SSL context always instantiated a new BouncyCastle provider when one of the supported providers was on the classpath. When Netty is used in a native image created with GraalVM, the moment of initialization matters. Providers instantiated when the native image is built will see different classes than providers instantiated at runtime, unless every single class that BouncyCastle loads using the reflection API is registered for reflection. Some frameworks, like Quarkus, make sure that BouncyCastle providers are created for users and registered with the Java Security API. However, in this case the providers prepared by Quarkus are ignored and users are left with the difficult task of finding all the classes loaded by BouncyCastle for their use case and registering them. This commit prefers the providers registered against the Java Security API and falls back to manual class instantiation, so that the user experience improves when using native executables. Modifications: I have modified 'src/main/java/io/netty/handler/ssl/BouncyCastlePemReader.java' to load the 2 supported BouncyCastle providers (BC/BCFIPS) from the Java Security API first and fall back to the previous behavior when the providers are not available. I didn't find any test class explicitly testing 'BouncyCastlePemReader', therefore I added a new test class, 'src/test/java/io/netty/handler/ssl/BouncyCastlePemReaderTest.java', which asserts that providers are loaded from the Java Security API, but that when they are not available, Netty creates a new provider if the respective classes are available on the classpath. Result: It will be easier to use the SSL context with BouncyCastle support in native images. Fixes #14826.
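A hedged sketch of the lookup order described above (not the actual BouncyCastlePemReader code): prefer a provider already registered with the Java Security API and fall back to instantiating one reflectively:

```java
import java.security.Provider;
import java.security.Security;

final class RegisteredBcProviderLookup {
    static Provider bcProvider() throws ReflectiveOperationException {
        Provider registered = Security.getProvider("BC");  // e.g. registered by the framework
        if (registered == null) {
            registered = Security.getProvider("BCFIPS");
        }
        if (registered != null) {
            return registered;
        }
        // Fallback: instantiate the non-FIPS provider if its class is on the classpath.
        Class<?> providerClass = Class.forName("org.bouncycastle.jce.provider.BouncyCastleProvider");
        return (Provider) providerClass.getConstructor().newInstance();
    }
}
```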
…4799) …#encodeNextChunkUrlEncoded(int) for current InterfaceHttpData Motivation: I am using `HttpPostRequestEncoder` to encode a few request parameters, but I read **infinite** chunks when the following conditions are met:
1. The number of POST request parameters is greater than or equal to 2.
2. The length of the first request parameter value > `DefaultHttpDataFactory#minSize`.
3. The length of the first request parameter key + the length of the request parameter value + 2 = N * `HttpPostBodyUtil#chunkSize`
This issue can be reproduced by modifying the [length](https://github.com/netty/netty/blob/34011b5f02c52122eccd36ce7cc69502a514d0f1/codec-http/src/test/java/io/netty/handler/codec/http/multipart/HttpPostRequestEncoderTest.java#L454) variable in the [HttpPostRequestEncoderTest#testEncodeChunkedContent()](https://github.com/netty/netty/blob/34011b5f02c52122eccd36ce7cc69502a514d0f1/codec-http/src/test/java/io/netty/handler/codec/http/multipart/HttpPostRequestEncoderTest.java#L450) method to `HttpPostBodyUtil.chunkSize * 3 - 4 - 2`. Normally, this test method should complete quickly, but when the length variable is set to `HttpPostBodyUtil.chunkSize * 3 - 4 - 2`, it never finishes. Modification: Enhance the logic that determines whether the current `InterfaceHttpData` has been fully read. Result: Fix the flawed termination-condition check. --------- Co-authored-by: LinShunKang <[email protected]>
Motivation: Disable assertions by default in JMH tests, to exclude the noise introduced by assertions. Modification: Disable assertions in the default constructor of `AbstractMicrobenchmark`, and also align the existing tests `FastThreadLocalSlowPathBenchmark` and `AdaptiveByteBufAllocatorConcurrentNoCacheBenchmark` with this. Assertions will always be enabled in the `io.netty.microbench.*` package. Result: Fixes #14876. --------- Co-authored-by: laosijikaichele <laosijikaichele>
Motivation: We had ubuntu-20.04 hardcoded in one place, which now fails the build as these runners are deprecated. Just use ubuntu-latest. Modifications: Use ubuntu-latest. Result: CI build works again
…eMaxRstFramesPerWindow (#14892) Motivation: Expose APIs under `AbstractHttp2ConnectionHandlerBuilder` so they can be used with the `HttpToHttp2ConnectionHandlerBuilder`. Modification: Exposed `decoderEnforceMaxConsecutiveEmptyDataFrames` and `decoderEnforceMaxRstFramesPerWindow` under `HttpToHttp2ConnectionHandlerBuilder`. Result: With this change, users of the `HttpToHttp2ConnectionHandlerBuilder` will be able to use the `decoderEnforceMaxConsecutiveEmptyDataFrames` and `decoderEnforceMaxRstFramesPerWindow` APIs. Fixes #14891
Motivation: ThreadExecutorMap just set the stored EventExecutor to null once done in its wrapping methods. This is not correct, as it might result in losing the current EventExecutor. Besides this, the same problem existed in ManualIoEventLoop. Modifications: - Add FastThreadLocal.getAndSet(...) that returns the previously stored value - Keep track of the old value in ThreadExecutorMap / ManualIoEventLoop and restore it - Add a unit test Result: Correctly restore the old value --------- Co-authored-by: Chris Vest <[email protected]>
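A hedged sketch of the JVM flags involved, shown through a plain JMH launcher rather than Netty's `AbstractMicrobenchmark`; the include pattern is a placeholder:

```java
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

final class BenchmarkLauncher {
    public static void main(String[] args) throws Exception {
        Options options = new OptionsBuilder()
                .include(".*SomeBenchmark.*")           // placeholder include pattern
                .jvmArgsAppend(
                        "-da",                          // disable assertions in the forked JVM
                        "-ea:io.netty.microbench...")   // but keep them on for the harness package
                .build();
        new Runner(options).run();
    }
}
```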
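A hedged sketch of the save-and-restore pattern described above, assuming the new `FastThreadLocal.getAndSet(...)` method; this is not the actual ThreadExecutorMap code:

```java
import io.netty.util.concurrent.EventExecutor;
import io.netty.util.concurrent.FastThreadLocal;

final class ExecutorScope {
    private static final FastThreadLocal<EventExecutor> CURRENT = new FastThreadLocal<EventExecutor>();

    static Runnable wrap(final Runnable task, final EventExecutor executor) {
        return new Runnable() {
            @Override
            public void run() {
                EventExecutor old = CURRENT.getAndSet(executor); // remember whatever was set before
                try {
                    task.run();
                } finally {
                    CURRENT.set(old); // restore the previous value instead of clearing to null
                }
            }
        };
    }
}
```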
See Commits and Changes for more details.
Created by pull[bot]
Can you help keep this open source service alive? 💖 Please sponsor : )