Migrate tests from nose to nose2 (#610)
* update tests to be nose2-compatible

* update CI builds to use nose2

* replace nose with nose2 in requirements

* add unittest.cfg for nose2

* Update contribution.rst to use nose2

---------

Co-authored-by: Zhaoyang Xie <[email protected]>
damien2012eng and Zhaoyang Xie authored Jul 8, 2023
1 parent a1f5a9d commit c580b2e
Showing 32 changed files with 3,379 additions and 3,472 deletions.
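As a quick orientation (not part of the commit itself): the migration replaces nosetests invocations that take test file paths with nose2 invocations that take bare module names, resolved from the tests start directory via -s. For example:

    # before (nose): tests passed as file paths
    nosetests --nologcapture tests/test_comparer.py

    # after (nose2): tests passed as module names, with -s pointing at the start directory
    nose2 --quiet -s tests test_comparer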
14 changes: 7 additions & 7 deletions .gitlab-ci.yml
@@ -21,48 +21,48 @@ variables:
- echo "except ImportError:" >> sitecustomize.py
- echo " pass" >> sitecustomize.py
script:
- "/root/rsmenv/bin/nosetests --nologcapture --with-coverage --cover-package=rsmtool ${TESTFILES}"
- "/root/rsmenv/bin/nose2 -s tests ${TESTFILES}"
after_script:
- /root/rsmenv/bin/codecov

# first set of test files
testset1:
extends: ".runtests"
variables:
TESTFILES: "tests/test_experiment_rsmtool_1.py"
TESTFILES: "test_experiment_rsmtool_1"
stage: "test"

# second set of test files
testset2:
extends: ".runtests"
variables:
TESTFILES: "tests/test_comparer.py tests/test_configuration_parser.py tests/test_experiment_rsmtool_2.py"
TESTFILES: "test_comparer test_configuration_parser test_experiment_rsmtool_2"
stage: "test"

# third set of test files
testset3:
extends: ".runtests"
variables:
TESTFILES: "tests/test_analyzer.py tests/test_experiment_rsmeval.py tests/test_fairness_utils.py tests/test_utils_prmse.py tests/test_container.py tests/test_test_utils.py tests/test_cli.py"
TESTFILES: "test_analyzer test_experiment_rsmeval test_fairness_utils test_utils_prmse test_container test_test_utils test_cli"
stage: "test"

# fourth set of test files
testset4:
extends: ".runtests"
variables:
TESTFILES: "tests/test_experiment_rsmcompare.py tests/test_experiment_rsmsummarize.py tests/test_modeler.py tests/test_preprocessor.py tests/test_writer.py tests/test_experiment_rsmtool_3.py"
TESTFILES: "test_experiment_rsmcompare test_experiment_rsmsummarize test_modeler test_preprocessor test_writer test_experiment_rsmtool_3"
stage: "test"

# fifth set of test files
testset5:
extends: ".runtests"
variables:
TESTFILES: "tests/test_experiment_rsmpredict.py tests/test_reader.py tests/test_reporter.py tests/test_transformer.py tests/test_utils.py tests/test_experiment_rsmtool_4.py"
TESTFILES: "test_experiment_rsmpredict test_reader test_reporter test_transformer test_utils test_experiment_rsmtool_4"
stage: "test"

# sixth set of test files
testset6:
extends: ".runtests"
variables:
TESTFILES: "tests/test_experiment_rsmxval.py tests/test_experiment_rsmexplain.py tests/test_explanation_utils.py"
TESTFILES: "test_experiment_rsmxval test_experiment_rsmexplain test_explanation_utils"
stage: "test"
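For illustration, with the new TESTFILES values the testset2 job above now effectively runs the following (assembled from the script line and variables shown in this diff):

    /root/rsmenv/bin/nose2 -s tests test_comparer test_configuration_parser test_experiment_rsmtool_2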
52 changes: 26 additions & 26 deletions DistributeTests.ps1
@@ -29,42 +29,42 @@ Write-Host "Total tests: $testCount"
$testsToRun= @()

if ($agentNumber -eq 1) {
$testsToRun = $testsToRun + "tests/test_experiment_rsmtool_1.py"
$testsToRun = $testsToRun + "test_experiment_rsmtool_1"
}
elseif ($agentNumber -eq 2) {
$testsToRun = $testsToRun + "tests/test_comparer.py"
$testsToRun = $testsToRun + "tests/test_configuration_parser.py"
$testsToRun = $testsToRun + "tests/test_experiment_rsmtool_2.py"
$testsToRun = $testsToRun + "tests/test_container.py"
$testsToRun = $testsToRun + "test_comparer"
$testsToRun = $testsToRun + "test_configuration_parser"
$testsToRun = $testsToRun + "test_experiment_rsmtool_2"
$testsToRun = $testsToRun + "test_container"
}
elseif ($agentNumber -eq 3) {
$testsToRun = $testsToRun + "tests/test_analyzer.py"
$testsToRun = $testsToRun + "tests/test_experiment_rsmeval.py"
$testsToRun = $testsToRun + "tests/test_fairness_utils.py"
$testsToRun = $testsToRun + "tests/test_utils_prmse.py"
$testsToRun = $testsToRun + "tests/test_test_utils.py"
$testsToRun = $testsToRun + "tests/test_cli.py"
$testsToRun = $testsToRun + "test_analyzer"
$testsToRun = $testsToRun + "test_experiment_rsmeval"
$testsToRun = $testsToRun + "test_fairness_utils"
$testsToRun = $testsToRun + "test_utils_prmse"
$testsToRun = $testsToRun + "test_test_utils"
$testsToRun = $testsToRun + "test_cli"
}
elseif ($agentNumber -eq 4) {
$testsToRun = $testsToRun + "tests/test_experiment_rsmcompare.py"
$testsToRun = $testsToRun + "tests/test_experiment_rsmsummarize.py"
$testsToRun = $testsToRun + "tests/test_modeler.py"
$testsToRun = $testsToRun + "tests/test_preprocessor.py"
$testsToRun = $testsToRun + "tests/test_writer.py"
$testsToRun = $testsToRun + "tests/test_experiment_rsmtool_3.py"
$testsToRun = $testsToRun + "test_experiment_rsmcompare"
$testsToRun = $testsToRun + "test_experiment_rsmsummarize"
$testsToRun = $testsToRun + "test_modeler"
$testsToRun = $testsToRun + "test_preprocessor"
$testsToRun = $testsToRun + "test_writer"
$testsToRun = $testsToRun + "test_experiment_rsmtool_3"
}
elseif ($agentNumber -eq 5) {
$testsToRun = $testsToRun + "tests/test_experiment_rsmpredict.py"
$testsToRun = $testsToRun + "tests/test_reader.py"
$testsToRun = $testsToRun + "tests/test_reporter.py"
$testsToRun = $testsToRun + "tests/test_transformer.py"
$testsToRun = $testsToRun + "tests/test_utils.py"
$testsToRun = $testsToRun + "tests/test_experiment_rsmtool_4.py"
$testsToRun = $testsToRun + "test_experiment_rsmpredict"
$testsToRun = $testsToRun + "test_reader"
$testsToRun = $testsToRun + "test_reporter"
$testsToRun = $testsToRun + "test_transformer"
$testsToRun = $testsToRun + "test_utils"
$testsToRun = $testsToRun + "test_experiment_rsmtool_4"
}
elseif ($agentNumber -eq 6) {
$testsToRun = $testsToRun + "tests/test_experiment_rsmxval.py"
$testsToRun = $testsToRun + "tests/test_experiment_rsmexplain.py"
$testsToRun = $testsToRun + "tests/test_explanation_utils.py"
$testsToRun = $testsToRun + "test_experiment_rsmxval"
$testsToRun = $testsToRun + "test_experiment_rsmexplain"
$testsToRun = $testsToRun + "test_explanation_utils"
}

# join all test files separated by a space. pytest runs multiple test files in the following format: pytest test1.py test2.py test3.py
4 changes: 2 additions & 2 deletions azure-pipelines.yml
@@ -42,12 +42,12 @@ jobs:
- script: |
CALL activate rsmdev
echo $(pytestfiles)
nosetests --nologcapture --with-xunit $(pytestfiles)
nose2 -s tests $(pytestfiles)
displayName: 'Run tests'
- task: PublishTestResults@2
displayName: 'Publish Test Results'
inputs:
testResultsFiles: 'nosetests.xml'
testResultsFiles: 'junit.xml'
testRunTitle: 'RSMTool tests'
condition: succeededOrFailed()
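The publish step now looks for junit.xml instead of nosetests.xml, which suggests that the new unittest.cfg added by this commit (not shown in this view) enables nose2's junit-xml plugin. A purely hypothetical sketch of such a configuration, written as a shell heredoc for illustration only:

    cat > unittest.cfg <<'EOF'
    [unittest]
    plugins = nose2.plugins.junitxml

    [junit-xml]
    path = junit.xml
    EOF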
18 changes: 9 additions & 9 deletions doc/contributing.rst
@@ -26,7 +26,7 @@ To set up a local development environment, follow the steps below:

#. Make your changes and add tests. See the next section for more on writing new tests.

#. Run ``nosetests -v --nologcapture tests`` to run the tests. We use the ``--nologcapture`` switch, since otherwise test failures for some tests tend to produce very long Jupyter notebook traces.
#. Run ``nose2 --quiet -s tests`` to run the tests. We use the ``--quiet`` switch, since otherwise test failures for some tests tend to produce very long Jupyter notebook traces.
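For example, from the repository root (the single-module form is a sketch that uses one of the test modules listed in the CI configuration):

    nose2 --quiet -s tests               # run the full suite
    nose2 -v -s tests test_comparer      # run a single test module, verbosely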

Documentation
-------------
@@ -160,7 +160,7 @@ To write a new experiment test for RSMTool (or any of the other tools):
| configuration file. |
+----------------------------------------------------------------------------+

Once you have added all new functional tests, commit all of your changes. Next, you should run ``nosetests --nologcapture`` to run all the tests. Obviously, the newly added tests will fail since you have not yet generated the expected output for that test.
Once you have added all new functional tests, commit all of your changes. Next, you should run ``nose2`` to run all the tests. Obviously, the newly added tests will fail since you have not yet generated the expected output for that test.

To do this, you should now run the following:

@@ -169,7 +169,7 @@ To do this, you should now run the following:
python tests/update_files.py --tests tests --outputs test_outputs
This will copy over the generated outputs for the newly added tests and show you a report of the files that it added. It will also update the input files form tests for ``rsmcompare`` and ``rsmsummarize``. If run correctly, the report should *only* refer the files affected by the functionality you implemented. If you run ``nosetests`` again, your newly added tests should now pass.
This will copy over the generated outputs for the newly added tests and show you a report of the files that it added. It will also update the input files from tests for ``rsmcompare`` and ``rsmsummarize``. If run correctly, the report should *only* refer to the files affected by the functionality you implemented. If you run ``nose2`` again, your newly added tests should now pass.
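Putting these steps together, a typical loop for a newly added test might look like this (test_experiment_rsmtool_1 is just an example module from this repository):

    nose2 --quiet -s tests test_experiment_rsmtool_1           # fails: no expected output yet
    python tests/update_files.py --tests tests --outputs test_outputs
    nose2 --quiet -s tests test_experiment_rsmtool_1           # should now pass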

At this point, you should inspect all of the new test files added by the above command to make sure that the outputs are as expected. You can find these files under ``tests/data/experiments/<test>/output`` where ``<test>`` refers to the test(s) that you added.

@@ -181,15 +181,15 @@ The two examples below might help make this process easier to understand:

.. topic:: Example 1: You made a code change to better handle an edge case that only affects one test.

#. Run ``nosetests --nologcapture tests/*.py``. The affected test failed.
#. Run ``nose2 --quiet -s tests``. The affected test failed.

#. Run ``python tests/update_files.py --tests tests --outputs test_outputs`` to update test outputs. You will see the total number of deleted, updated and missing files. There should be no deleted files and no missing files. Only the files for your new test should be updated. There are no warnings in the output.

#. If this is the case, you are now ready to commit your change and the updated test outputs.

.. topic:: Example 2: You made a code change that changes the output of many tests. For example, you renamed one of the evaluation metrics.

#. Run ``nosetests --nologcapture tests/*.py``. Many tests will now fail since the output produced by the tool(s) has changed.
#. Run ``nose2 --quiet -s tests``. Many tests will now fail since the output produced by the tool(s) has changed.

#. Run ``python tests/update_files.py --tests tests --outputs test_outputs`` to update test outputs. The files affected by your change are shown as added/deleted. You also see the following warning:

@@ -199,7 +199,7 @@ The two examples below might help make this process easier to understand:
#. This means that the changes you made to the code changed the outputs for one or more ``rsmtool``/``rsmeval`` tests that served as inputs to one or more ``rsmcompare``/``rsmsummarize`` tests. Therefore, it is likely that the current test outputs no longer match the expected output and the tests for those two tools must be re-run.

#. Run ``nosetests --nologcapture tests/*rsmsummarize*.py`` and ``nosetests --nologcapture tests/*rsmcompare*.py``. If you see any failures, make sure they are related to the changes you made since those are expected.
#. Run ``nose2 --quiet -s tests $(find tests -name 'test_*rsmsummarize*.py' | cut -d'/' -f2 | sed 's/.py//')`` and ``nose2 --quiet -s tests $(find tests -name 'test_*rsmcompare*.py' | cut -d'/' -f2 | sed 's/.py//')``. If you see any failures, make sure they are related to the changes you made since those are expected.

#. Next, re-run ``python tests/update_files.py --tests tests --outputs test_outputs`` which should only update the outputs for the ``rsmcompare``/``rsmsummarize`` tests.
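For reference, the ``rsmsummarize``/``rsmcompare`` selection commands above reduce, given the test modules listed in the CI configuration, to roughly:

    nose2 --quiet -s tests test_experiment_rsmsummarize
    nose2 --quiet -s tests test_experiment_rsmcompare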

@@ -211,12 +211,12 @@ Advanced tips and tricks

Here are some advanced tips and tricks when working with RSMTool tests.

#. To run a specific test function in a specific test file, simply use ``nosetests --nologcapture tests/test_X.py:Y`` where ``test_X.py`` is the name of the test file, and ``Y`` is the test functions. Note that this will not work for parameterized tests. If you want to run a specific parameterized test, you can comment out all of the other ``param()`` calls and run the ``test_run_experiment_parameterized()`` function as above.
#. To run a specific test function in a specific test file, simply use ``nose2 --quiet -s tests test_X.Y.Z`` where ``test_X`` is the name of the test module (the test file without its ``.py`` extension), ``Y`` is the enclosing ``unittest.TestCase`` subclass, and ``Z`` is the desired test function. Note that this will not work for parameterized tests. If you want to run a specific parameterized test, you can comment out all of the other entries in the ``params`` list and run the ``test_run_experiment_parameterized()`` function as above.

#. If you make any changes to the code that can change the output that the tests are expected to produce, you *must* re-run all of the tests and then update the *expected* test outputs using the ``update_files.py`` command as shown :ref:`above <update_files>`.

#. In the rare case that you *do* need to create an entirely new ``tests/test_experiment_X.py`` file instead of using one of the existing ones, you can choose whether to exclude the tests contained in this file from updating their expected outputs when ``update_files.py`` is run by setting ``_AUTO_UPDATE=False`` at the top of the file. This should *only* be necessary if you are absolutely sure that your tests will never need updating.

#. The ``--pdb-errors`` and ``--pdb-failures`` options for ``nosetests`` are your friends. If you encounter test errors or test failures where the cause may not be immediately clear, re-run the ``nosetests`` command with the appropriate option. Doing so will drop you into an interactive PDB session as soon as a error (or failure) is encountered and then you inspect the variables at that point (or use "u" and "d" to go up and down the call stack). This may be particularly useful for tests in ``tests/test_cli.py`` that use ``subprocess.run()``. If these tests are erroring out, use ``--pdb-errors`` and inspect the "stderr" variable in the resulting PDB session to see what the error is.
#. The ``--debugger/-D`` option for ``nose2`` is your friend. If you encounter test errors or test failures where the cause is not immediately clear, re-run the ``nose2`` command with this option. Doing so will drop you into an interactive PDB session as soon as an error (or failure) is encountered, and you can then inspect the variables at that point (or use "u" and "d" to go up and down the call stack). This may be particularly useful for tests in ``tests/test_cli.py`` that use ``subprocess.run()``. If these tests are erroring out, use ``-D`` and inspect the "stderr" variable in the resulting PDB session to see what the error is.

#. In RSMTool 8.0.1 and later, the tests will pass even if any of the reports contain warnings. To catch any warnings that may appear in the reports, run the tests in strict mode (``STRICT=1 nosetests --nologcapture tests``).
#. In RSMTool 8.0.1 and later, the tests will pass even if any of the reports contain warnings. To catch any warnings that may appear in the reports, run the tests in strict mode (``STRICT=1 nose2 --quiet -s tests``).
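For example, to run a single test method as described above (the class and method names below are hypothetical placeholders), or to run the whole suite in strict mode:

    nose2 --quiet -s tests test_comparer.TestComparer.test_some_method   # hypothetical class/method names
    STRICT=1 nose2 --quiet -s tests                                       # catch warnings that appear in the reports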
2 changes: 1 addition & 1 deletion requirements.txt
@@ -3,7 +3,7 @@ codecov
doc2dash
ipython
jupyter
nose
nose2
notebook
numpy<1.24
openpyxl
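With this change, installing the development requirements pulls in nose2 rather than nose:

    pip install -r requirements.txt    # now installs nose2 instead of nose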
