XML report associated to a test session #60
Not sure I follow all your points here.
Why can't you detect if the initialisation failed? Also, I assume that if there is a failure here the complete test session should fail, no? You can achieve this, as discussed, with the stopOnError attribute.
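For illustration, a minimal sketch (the actual steps are placeholders to adapt):
<steps stopOnError="true">
    <!-- Any step failing here stops the test session immediately. -->
    ...
</steps>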
Steps within e.g. an if step whose condition did not evaluate to true are skipped, not successful. The entire if step however will (correctly) be set as a success. If the goal is to skip the rest of the test steps under certain conditions you could also use the exit step, which could be clearer (see the sketch below). In case the XML report you are seeing does not match what you expect from the UI, do you think you could share a screenshot from the UI and the corresponding XML report to highlight what you are finding inconsistent?
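As a sketch of the exit approach (the condition and details are placeholders):
<if desc="Stop if initialisation failed">
    <cond>...</cond>
    <then>
        <!-- Ends the test session at this point; the remaining steps are not executed. -->
        <exit/>
    </then>
</if>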
Given the current test steps the report is in fact accurate. Also keep in mind that skipped steps don't figure in the PDF/XML reports. What if you expand the if step so that you add an else block? That way you would have something as follows:
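Along these lines (the condition and step details are placeholders):
<steps stopOnError="false">
    <send desc="Create entity" .../>
    <if desc="Check initialisation">
        <cond>...</cond>
        <then>
            <!-- Main test steps. -->
            <send desc="Test the entity" .../>
        </then>
        <else>
            <!-- A step designed to fail, marking the overall if step as failed. -->
            <verify desc="Report failed initialisation" handler="..." .../>
        </else>
    </if>
    <!-- Cleanup step. -->
    <send desc="Delete entity" .../>
</steps>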
By executing the else block and having a failure within it, the overall if step will also fail and give you the result you're looking for. As the test is not set to stop on errors, you will also run the cleanup step at the end. What do you think?
Another idea, which would be better (and one that most projects I know of apply), would be to do a "delete/insert" as the first setup step (if possible). If the delete call fails you can simply ignore the failure (ignoring the response altogether). That way you have test cases that can always run from a clean slate regardless of what happened previously (covering also cases where a previous cleanup failed for some reason, or where the user terminated the test session before reaching that point). With such a design you can use stopOnError, remove the if step, and make the resulting report much more intuitive. Better, no?
I'm not sure I understand what you're saying.
More or less, yes. If I understood correctly, you are currently doing something along these lines:
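(Sketched with placeholder details:)
<steps stopOnError="false">
    <!-- 1. Initialisation. -->
    <send desc="Create entity" .../>
    <!-- 2. Test, guarded by an if step so that it only runs if initialisation succeeded. -->
    <if desc="Run test if entity was created">
        <cond>...</cond>
        <then>
            <send desc="Test the entity" .../>
        </then>
    </if>
    <!-- 3. Cleanup, always executed as the test is not set to stop on errors. -->
    <send desc="Delete entity" .../>
</steps>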
My suggestion would be to adapt this as follows:
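(Again sketched with placeholder details:)
<steps stopOnError="true">
    <!-- Setup: first delete any leftover entity, ignoring its response (so a failure here is not flagged). -->
    <send desc="Delete entity if present" .../>
    <send desc="Create entity" .../>
    <!-- Core test steps: with stopOnError the test now fails immediately upon any error. -->
    <send desc="Test the entity" .../>
</steps>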
If you follow this approach you will never have test cases failing because test data from previous test sessions was not removed. In addition, your core testing steps can now execute and fail immediately (via the stopOnError attribute), given that you are not relying on always executing a "cleanup" at the end. Would something like this work for you?
Hey! Anyway, I think we've found another way to achieve what we want.
Hi @EliottPaillard. Good to see you found a workaround, but in any case I'm keeping this on our backlog to see how we can better support a "cleanup/teardown" phase in tests that will always be executed. That way you can have "stop on error" semantics but be sure that a final step (or set of steps) will execute regardless. In this step you could also inspect the status of earlier steps to see, for example, whether the initialisation completed and also which specific step failed. We can keep this issue open until this is available.
Hi @EliottPaillard. I have some interesting news related to this issue, and features that have just been published in our nightly channel.
We had previously discussed the possibility of having "setup" and "teardown" phases in test cases, which were problematic if you wanted to stop test execution upon errors (i.e. use stopOnError). In addition, we extended use of the level attribute so that individual step failures can be reported as warnings. Using these new features, consider this test case that shows you how to set up a set of "teardown" steps:

<!--
    Setting the test case's stopOnError to false ensures it executes all steps.
-->
<steps stopOnError="false">
    <!--
        Main test steps.
        Setting stopOnChildError to true will stop the execution of other steps within this group
        in case an error is encountered. In addition, setting hiddenContainer to true will hide the
        boundary of the group step but show all its children.
    -->
    <group stopOnChildError="true" hiddenContainer="true">
        <!-- Include here all main test steps. -->
        ...
    </group>
    <!--
        Teardown phase.
        Setting hidden to true will hide the entire group and its children. The steps herein
        make calls to trigger cleanup operations (you could also have used e.g. process).
    -->
    <group hidden="true">
        <log>'Resetting DB...'</log>
        <!--
            Setting the level to WARNING ensures that even if this step fails, the overall test case
            will never fail (you can of course adapt this if you want).
        -->
        <send desc="Reset DB" level="WARNING" .../>
        <log>'Removing temp files...'</log>
        <send desc="Remove temp files" level="WARNING" .../>
    </group>
</steps>

I trust that with this approach you can manage your teardown operations, no? (We will also document this as a "recipe" as part of the next release's documentation updates.)
Hello @costas80,
I've begun to look at the reports and the associated XML.
I feel a "skip" status is missing besides the "failure" and "success" ones.
Most of my tests now are:
1: (initialization) try to create an entity
2: (test) try to do something with this entity
3: (cleanup) try to delete this entity
As the test part cannot run if the initialization fails, the second step would be skipped in this case.
However, when you skip an entire group, ITB displays it as successful.
In my opinion that deserves to be considered a "skip" instead of a "success".
What do you think about this?