XML report associated to a test session #60

Open
EliottPaillard opened this issue Jun 28, 2024 · 9 comments
Labels
question Further information is requested

Comments

@EliottPaillard

Hello @costas80,

I've started looking at the reports and the associated XML.
I feel a "skip" status is missing alongside the "failure" and "success" ones.

Most of my tests now follow three phases:
  1. Initialization: try to create an entity.
  2. Test: try to do something with this entity.
  3. Cleanup: try to delete this entity.

Since the test part cannot produce a meaningful result if the initialization fails, the second step would be skipped in that case.

However, when an entire group is skipped, ITB displays it as successful.
In my opinion that should be reported as "skipped" rather than "success".

What do you think about this?

@costas80
Contributor

Not sure I follow all your points here.

Since the test part cannot produce a meaningful result if the initialization fails, the second step would be skipped in that case.

Why can't you detect whether the initialisation failed? Also, I assume that if there is a failure here the complete test session should fail, no? You can achieve this, as previously discussed, with the stopOnError attribute.

However, when an entire group is skipped, ITB displays it as successful.

Steps within e.g. an if step whose condition did not evaluate to true are skipped, not successful. The entire if step itself, however, will (correctly) be set as a success. If the goal is to skip the rest of the test steps under certain conditions, you could also use the exit step, which may be clearer.
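As a rough sketch of the exit approach (the condition variable and description here are hypothetical placeholders for whatever flag your own setup steps produce):

<!--
  Minimal sketch: $initialisationOk is a hypothetical variable assumed to be set
  by your own initialisation steps. If it is false, the session ends here.
-->
<if desc="Stop early if initialisation failed">
  <cond>not($initialisationOk)</cond>
  <then>
    <exit/>
  </then>
</if>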

In case the XML report you are seeing does not match what you expect from the UI, do you think you could share a screenshot from the UI and the corresponding XML report to highlight what you are finding inconsistent?

@EliottPaillard
Author

In each of our tests, the only part that truly corresponds to what we aim to test is the second phase.
The initialization phase merely creates the initial conditions necessary for the test to run smoothly.
The cleanup phase returns the system to its pre-test state.
This is essential to properly execute subsequent tests and ensure that each test can be run independently.

We can and do detect when initialization fails.
In such cases, we prefer not to even start the actual test phase to clearly distinguish between 'the system fails the test' and 'conditions are not met to run the test.'

Once the initialization phase has begun, it is no longer possible to use stopOnError or exit because we do not want to end the testing session without going through the cleanup phase.

Currently, in the report, when initialization fails, the validation step only shows the "if" step as having succeeded (step 4 in the screenshot below).

[Screenshots showing the "if" step (step 4) reported as successful]

@costas80
Contributor

Given the current test steps the report is in fact accurate. Also keep in mind that skipped steps don't appear in the PDF/XML reports. What if you expand the if step to add an else block? That way you would have the following:

  • The then part will execute only if the condition is met (i.e. the initialisation is OK). Basically exactly what you have now.
  • The else part will include a step that always fails. This could be a verify step with an ExpressionValidator handler, for which you pass false() as the expression. You could even set the description of this step to something meaningful to better convey the message to the user.

By executing the else block and having a failure within it, the overall if step will also fail and give you the result you're looking for. As the test case is not set to stop, you will also run the cleanup step at the end. What do you think?
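For illustration, a minimal sketch of that layout (the condition variable, step descriptions, and the ExpressionValidator input name are assumptions on my part; check them against your own definitions and the handler's documentation):

<if desc="Main test phase">
  <!-- $initialisationOk is a hypothetical variable assumed to be set during initialisation. -->
  <cond>$initialisationOk</cond>
  <then>
    <!-- Your existing test steps, exactly as you have them now. -->
    ...
  </then>
  <else>
    <!--
      A verify step that always fails, marking the overall if step (and the test case)
      as failed while still letting the cleanup steps that follow execute.
    -->
    <verify desc="Preconditions not met to run the test" handler="ExpressionValidator">
      <input name="expression">"false()"</input>
    </verify>
  </else>
</if>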

@costas80 costas80 added the question label Jun 28, 2024
@costas80
Contributor

Another idea, which would be better (and one that most projects I know of apply), would be to do a "delete/insert" as the first setup step (if possible). If the delete call fails you can simply ignore the failure (ignore the response altogether).

That way you have test cases that can always run from a clean slate regardless of what happened previously (covering also cases where a previous cleanup failed for some reason, or the user terminated the test session before reaching that point). With such a design you can use stopOnError, remove the if step, and make the resulting report much more intuitive.

Better no?

@EliottPaillard
Author

I'm not sure I understand what you're saying.
By "delete/insert" do you mean refreshing the system by cleaning up everything and then only creating the things we need?

@costas80
Contributor

costas80 commented Jul 1, 2024

By "delete/insert" do you mean like refresh the system by cleanup everything and then only create the things we need?

More or less, yes. If I understood correctly, you are currently doing something along these lines:

  1. Add test data to the SUT via REST calls.
  2. Assuming the setup step worked, proceed to do the test calls and verifications.
  3. Remove test data from the SUT via REST calls.

My suggestion would be to adapt this as follows:

  1. Remove test data from the SUT via REST calls (ignoring any issues reported in the responses).
  2. Add test data to the SUT via REST calls.
  3. Proceed to do the test calls and verifications.

If you follow this approach you will never have test cases failing because test data from previous test sessions was not removed. In addition, your core testing steps can now execute and fail immediately (i.e. using the stopOnError attribute), given that you are no longer relying on always executing a "cleanup" at the end.
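A rough sketch of this ordering (the send steps are elided here in the same way as in the other example in this thread; the descriptions are placeholders and the actual messaging handlers and endpoints depend on your setup):

<steps stopOnError="true">
  <!-- 1. Cleanup first: remove any leftover test data. The response is deliberately not verified. -->
  <send desc="Remove test data" .../>
  <!-- 2. Setup: create the data this test case needs. -->
  <send desc="Add test data" .../>
  <!-- 3. Core test calls and verifications; any failure now stops the session immediately. -->
  ...
</steps>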

Would something like this work for you?

@EliottPaillard
Author

Hey!
I think this would not work well for us, because we can't delete everything that could be retrieved with our request.
So even if we delete the data we are about to add again right after, the remaining data would still cause the test to fail.

But anyway, I think we've found another way to achieve what we want.

@costas80
Contributor

Hi @EliottPaillard. Good to see you found a workaround, but in any case I'm keeping this on our backlog to see how we can better support a "cleanup/teardown" phase in tests that will always be executed. That way you could have "stop on error" semantics while being sure that a final step (or set of steps) executes regardless. In such a step you could also inspect the status of earlier steps to see, for example, whether the initialisation completed and which specific step failed.

We can keep this issue open until this is available.

@costas80
Contributor

Hi @EliottPaillard. I have some interesting news related to this issue, concerning features that have just been published on our nightly channel.

Once the initialization phase has begun, it is no longer possible to use stopOnError or exit because we do not want to end the testing session without going through the cleanup phase.

We had previously discussed the possibility of having "setup" and "teardown" phases in test cases, which was problematic if you wanted to stop test execution upon errors (i.e. use stopOnError="true"). This has now become quite simple, using new attributes supported by container steps (e.g. group):

  • Attribute stopOnChildError allowing you to configure a container step to stop executing its steps if one produces an error. Importantly, this does not stop the overall test execution.
  • Attribute hiddenContainer (available for group steps) that hides the group's container but displays its contained steps. This allows you to use groups also as non-visual elements just to manage the behavior of certain steps.

In addition, we extended the level attribute (previously available for verify steps) to also apply to send, receive and process steps. This allows you to set the severity of a step's failure, effectively changing it from ERROR (the default) to WARNING.

Using these new features, consider this test case that shows you how to set up a set of "teardown" steps:

<!--
  Setting the test case's stopOnError to false ensures it executes all steps.
-->
<steps stopOnError="false">
  <!--
    Main test steps.

    Setting stopOnChildError to true will stop the execution of other steps within this group
    in case an error is encountered. In addition, setting hiddenContainer to true will hide the
    boundary of the group step but show all children.
  -->
  <group stopOnChildError="true" hiddenContainer="true">
    <!-- Include here all main test steps. -->
    ...
  </group>
  <!--
    Teardown phase.

    Setting hidden to true will hide the entire group and its children. The steps herein 
    make calls to trigger cleanup operations (you could have also used e.g. process).
  -->
  <group hidden="true">
    <log>'Resetting DB...'</log>
    <!-- 
      Setting the level to WARNING ensures that even if this step fails, the overall test case
      will never fail (you can of course adapt this if you want).
    -->
    <send desc="Reset DB" level="WARNING" .../>
    <log>'Removing temp files...'</log>
    <send desc="Remove temp files" level="WARNING" .../>
  </group>
</steps>

I trust that with this approach you can manage your teardown operations, no? (We will also document this as a "recipe" as part of the next release's documentation updates.)
