
Openssl-3.0-fips does not support signing raw digest bytes #5047

Open
lrstewart opened this issue Jan 17, 2025 · 0 comments
lrstewart commented Jan 17, 2025

Problem:

Openssl-3.0-fips has the same signing issue that awslc-fips had: #3142

Unfortunately, the same solution (EVP_MD_CTX_set_pkey_ctx) will not work here. We might be able to open an issue with OpenSSL to get it working, but that looks like it would require a non-trivial change on their end. Basically, calling EVP_PKEY_sign_init on our EVP_PKEY_CTX sets its operation to "SIGN" instead of "SIGNCTX"; "SIGN" seems to refer to the old / deprecated way of signing and does not work with FIPS. I also can't just use EVP_DigestSignInit to set up a different context and pass that context's SIGNCTX pkey_ctx to EVP_MD_CTX_set_pkey_ctx, because the new way of handling digests seems to store data on the pkey_ctx, so EVP_MD_CTX_set_pkey_ctx effectively wipes the hash.

Solution:

We were avoiding EVP_DigestSignInit because it requires a private key, and we have the data to sign before we have the private key. We may actually never have the private key, if the pkey offloading feature is being used.

But I think we're going to have to refactor our code to use EVP_DigestSignInit. That means we'll need to pass the full, un-hashed data to the pkey operations. Basically, the pkeys need to take an s2n_blob of raw data rather than an s2n_hash_state of hashed data. When signing, we can then call EVP_DigestSignInit rather than use the existing s2n_hash_state methods. It's a large but fairly straightforward refactor.

I /think/ there's no memory concern around passing raw input data:

  • The max size of the TLS1.3 cert verify data is 98 prefix + 64 max digest = 162 max bytes. Even adding in the blob structure itself (24 bytes), that's only 186 bytes.
  • The max size of the TLS1.2 server key data is the max size of the key shares. Excluding TLS1.2 PQ (removed by #5022, "Remove PQ TLS 1.2 Support"), key shares should be <100 bytes.
  • The max size of the s2n_hash_state is 240 bytes, ignoring the allocations done for openssl objects.

Requirements / Acceptance Criteria:

What must a solution address in order to solve the problem? How do we know the solution is complete?

  • RFC links: Links to relevant RFC(s)
  • Related Issues: Link any relevant issues
  • Will the Usage Guide or other documentation need to be updated?
  • Testing: How will this change be tested? Call out new integration tests, functional tests, or particularly interesting/important unit tests.
    • Will this change trigger SAW changes? Changes to the state machine, the s2n_handshake_io code that controls state transitions, the DRBG, or the corking/uncorking logic could trigger SAW failures.
    • Should this change be fuzz tested? Will it handle untrusted input? Create a separate issue to track the fuzzing work.

Out of scope:

Is there anything the solution will intentionally NOT address?
