
NSFS | S3 | Lifecycle: bunch of expiration header tests failed #8554


Closed
nhaustein opened this issue Nov 21, 2024 · 5 comments · Fixed by #8958

@nhaustein

Environment info

  • NooBaa Version: noobaa-core-5.17.1-20241104.el9.x86_64
  • Platform: RHEL 9.2 x86

Actual behavior

  1. Running the lifecycle-expiration-related Ceph S3 tests from Harald's test suite uncovered several expiration header errors. Summary:
FAILED s3tests_boto3/functional/test_s3.py::test_lifecycle_expiration_header_put - assert False
FAILED s3tests_boto3/functional/test_s3.py::test_lifecycle_expiration_header_head - assert False
FAILED s3tests_boto3/functional/test_s3.py::test_lifecycle_expiration_header_tags_head - assert False

Expected behavior

  1. Not sure where the failure messages come from; see more information below.

Steps to reproduce

  1. See above and below:

More information - Screenshots / Logs / Other output

___________________________________ test_lifecycle_expiration_header_put ___________________________________

@pytest.mark.lifecycle
@pytest.mark.lifecycle_expiration
def test_lifecycle_expiration_header_put():
    bucket_name = get_new_bucket()
    client = get_client()

    now = datetime.datetime.utcnow()
    response = setup_lifecycle_expiration(
        client, bucket_name, 'rule1', 1, 'days1/')
>       assert check_lifecycle_expiration_header(response, now, 'rule1', 1)

E assert False
E + where False = check_lifecycle_expiration_header({'ETag': '"mtime-d5rrz4rux6v4-ino-e14"', 'ResponseMetadata': {'HTTPHeaders': {'access-control-allow-credentials': 'tru...w-origin': '*', ...}, 'HTTPStatusCode': 200, 'HostId': 'm3r5mc3z-67vzpb-bu3', 'RequestId': 'm3r5mc3z-67vzpb-bu3', ...}}, datetime.datetime(2024, 11, 21, 10, 13, 4, 67964), 'rule1', 1)

__________________________________ test_lifecycle_expiration_header_head ___________________________________

@pytest.mark.lifecycle
@pytest.mark.lifecycle_expiration
@pytest.mark.fails_on_dbstore
def test_lifecycle_expiration_header_head():
    bucket_name = get_new_bucket()
    client = get_client()

    now = datetime.datetime.utcnow()
    response = setup_lifecycle_expiration(
        client, bucket_name, 'rule1', 1, 'days1/')

    key = 'days1/' + 'foo'

    # stat the object, check header
    response = client.head_object(Bucket=bucket_name, Key=key)
    assert response['ResponseMetadata']['HTTPStatusCode'] == 200
>       assert check_lifecycle_expiration_header(response, now, 'rule1', 1)

E assert False
E + where False = check_lifecycle_expiration_header({'AcceptRanges': 'bytes', 'ContentLength': 3, 'ContentType': 'application/octet-stream', 'ETag': '"mtime-d5rrz4z50tts-ino-1jsw"', ...}, datetime.datetime(2024, 11, 21, 10, 13, 4, 507854), 'rule1', 1)

________________________________ test_lifecycle_expiration_header_tags_head ________________________________

@pytest.mark.lifecycle
@pytest.mark.lifecycle_expiration
@pytest.mark.fails_on_dbstore
def test_lifecycle_expiration_header_tags_head():
    bucket_name = get_new_bucket()
    client = get_client()
    lifecycle={
        "Rules": [
        {
            "Filter": {
                "Tag": {"Key": "key1", "Value": "tag1"}
            },
            "Status": "Enabled",
            "Expiration": {
                "Days": 1
            },
            "ID": "rule1"
            },
        ]
    }
    response = client.put_bucket_lifecycle_configuration(
        Bucket=bucket_name, LifecycleConfiguration=lifecycle)
    key1 = "obj_key1"
    body1 = "obj_key1_body"
    tags1={'TagSet': [{'Key': 'key1', 'Value': 'tag1'},
          {'Key': 'key5','Value': 'tag5'}]}
    response = client.put_object(Bucket=bucket_name, Key=key1, Body=body1)
    response = client.put_object_tagging(Bucket=bucket_name, Key=key1,Tagging=tags1)

    # stat the object, check header
    response = client.head_object(Bucket=bucket_name, Key=key1)
    assert response['ResponseMetadata']['HTTPStatusCode'] == 200
>       assert check_lifecycle_expiration_header(response, datetime.datetime.now(None), 'rule1', 1)

E assert False
E + where False = check_lifecycle_expiration_header({'AcceptRanges': 'bytes', 'ContentLength': 13, 'ContentType': 'application/octet-stream', 'ETag': '"mtime-d5rrz56rescg-ino-680"', ...}, datetime.datetime(2024, 11, 21, 11, 13, 5, 6866), 'rule1', 1)
E + where datetime.datetime(2024, 11, 21, 11, 13, 5, 6866) = <built-in method now of type object at 0x7f6461da8a00>(None)
E + where <built-in method now of type object at 0x7f6461da8a00> = <class 'datetime.datetime'>.now
E + where <class 'datetime.datetime'> = datetime.datetime

@achouhan09
Member

Hi @nhaustein
I am getting a 404 error while accessing Harald's test suite linked in the description. Can you please update the link or provide more information/steps to run these tests? Thanks.

@nhaustein
Author

Hi @achouhan09, I asked Harald to give you access to the test suite. If you'd like, we can also take a look at my system, where I have the test suite installed.

@achouhan09
Member

Hi @nhaustein We can take a look at it, but it would be better if you could show me how to set it up on the system.
Thanks!


This issue had no activity for too long - it will now be labeled stale. Update it to prevent it from being closed.

@achouhan09
Member

Hi @nhaustein
This issue has been fixed in PR #8958. However, the tests are still failing.

Reason:
The tests were originally failing due to a missing x-amz-expiration header, which has been addressed in the PR above. However, they continue to fail because the x-amz-expiration value should be expressed as midnight UTC (according to AWS), while the Ceph S3 tests calculate the number of days from a start_time that is not midnight UTC. This mismatch leads to incorrect delta_days values.
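The mismatch can be illustrated with a short sketch. The helper names below are hypothetical, not code from either project; the rounding rule follows the AWS documentation, which states that the expiration time is rounded to midnight UTC of the following day:

```python
import datetime

def aws_expiration(creation_time: datetime.datetime, days: int) -> datetime.datetime:
    """Expiration per AWS semantics: creation time plus the rule's days,
    rounded up to the next midnight UTC (hypothetical helper)."""
    base = creation_time + datetime.timedelta(days=days)
    next_day = base.date() + datetime.timedelta(days=1)
    return datetime.datetime(next_day.year, next_day.month, next_day.day,
                             tzinfo=datetime.timezone.utc)

def naive_expiration(creation_time: datetime.datetime, days: int) -> datetime.datetime:
    """What a test computes if it simply adds days to a non-midnight start_time."""
    return creation_time + datetime.timedelta(days=days)

# Using the timestamp from the failing run above:
created = datetime.datetime(2024, 11, 21, 10, 13, 4, tzinfo=datetime.timezone.utc)
print(aws_expiration(created, 1))    # 2024-11-23 00:00:00+00:00
print(naive_expiration(created, 1))  # 2024-11-22 10:13:04+00:00
```

Because the server-side value is rounded to midnight UTC while the test's start_time is not, the delta-day comparison in the test comes out off by one, matching the assertion failures above.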

I've opened a bug in the ceph-s3 test repository to track this issue: ceph-s3 tests issue
