At present:

- the dispersion.py and dos.py scripts are tested by grabbing the generated figure using plt.gcf() and inspecting ax.lines (roughly as sketched below)
- there are legacy tests for euphonic.plot which inspect tick labels and ax.lines for plot_1d and plot_dispersion
- parts of euphonic.plot are untested (e.g. plot_2d)
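For reference, a minimal sketch of the ax.lines-inspection style of test described above (fake_dispersion_script and the numbers here are placeholders standing in for the real dispersion.py/dos.py scripts and reference data, not actual Euphonic code):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the test runs without a display
import matplotlib.pyplot as plt


def fake_dispersion_script():
    # Stand-in for running dispersion.py; it just plots some "bands" onto the
    # current figure so the test below has line data to inspect.
    plt.plot([0.0, 1.0, 2.0], [0.0, 1.0, 4.0])


def test_dispersion_line_data():
    # Grab the current figure after the script has run, then compare the
    # plotted y-data against stored reference values.
    fake_dispersion_script()
    ax = plt.gcf().axes[0]
    ydata = np.concatenate([line.get_ydata() for line in ax.lines])
    np.testing.assert_allclose(ydata, [0.0, 1.0, 4.0], rtol=1e-7)
```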
While I have advocated for this kind of test in the past (such tests are robust and verify that numbers are not changing unexpectedly), they are cumbersome to set up and can miss more general issues with plot configuration.
It could be interesting to try https://github.com/matplotlib/pytest-mpl, which provides a pytest fixture for comparing output images with reference results. This seems like a robust sanity check against surprises, and it also produces a visual output for quickly inspecting differences. I've been cautious in the past about image checks because of their potential fragility against minor differences in Matplotlib versions and between machines; presumably the pytest plugin controls for this by being prescriptive about the figure setup. I think we should give it a try - it could significantly lower the barrier to testing new plot features.
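Roughly what a pytest-mpl test would look like (the ax.plot data here is a hypothetical stand-in for a call into euphonic.plot such as plot_dispersion; only the decorator and return-a-figure pattern are pytest-mpl's actual API):

```python
import pytest
import matplotlib.pyplot as plt


@pytest.mark.mpl_image_compare  # compares the rendered figure to a stored baseline image
def test_plot_dispersion_image():
    # Build a figure and return it; pytest-mpl renders it and diffs against
    # the baseline image for this test.
    fig, ax = plt.subplots()
    ax.plot([0, 1, 2], [0, 1, 4])  # stand-in for plotting real dispersion data
    return fig
```

Baselines are generated once with `pytest --mpl-generate-path=<baseline dir>` and the comparison is enabled on subsequent runs with `pytest --mpl`.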