
Use new visualizers structure from 2023-07-draft #438


Closed
wants to merge 2 commits

Conversation

@mpsijm (Collaborator) commented Mar 16, 2025

The input visualizers were quite simple to change: the only change required is that they now read the test case from stdin rather than from testcase.in.
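
For illustration, here is a minimal sketch of what an input visualizer could look like under the new structure. Everything problem-specific here is made up for the example (including the Pillow dependency); the only part this PR prescribes is reading the test case from stdin and writing an image file (currently testcase.<ext>) into the working directory.

```python
#!/usr/bin/env python3
"""Hypothetical input visualizer: reads the test case from stdin and
writes an image into the current working directory."""
import sys

from PIL import Image, ImageDraw  # assumed dependency, just for this sketch


def main():
    # With this PR, the test case arrives on stdin instead of in testcase.in.
    tokens = sys.stdin.read().split()
    values = [int(t) for t in tokens[1:]]  # assume "n" followed by n integers

    # Render something trivial; a real visualizer would be problem-specific.
    img = Image.new("RGB", (256, 256), "white")
    draw = ImageDraw.Draw(img)
    for i, v in enumerate(values):
        draw.point((i % 256, v % 256), fill="black")

    # The PR currently expects the result as testcase.<ext> in the CWD.
    img.save("testcase.png")


if __name__ == "__main__":
    main()
```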

The output visualizers are currently a special type of output validator, because they share a lot (mostly in the way they are executed). This feels very ugly, but I just wanted to get a working implementation up and running to aid the discussion in Kattis/problem-package-format#351 🙂

TODO:

  • Allow disabling output visualizers with a command-line argument --no-output-visualizer (and probably rename --no-visualizer to --no-input-visualizer as well)
  • bt upgrade needs to read generators.yaml and, if it has a visualizer: key, move the target to input_visualizer/ and remove the key from generators.yaml. It should then warn that the visualizer itself needs to be rewritten to read from stdin instead of testcase.in (see the sketch after this list).
  • The resulting judgeimage.<ext> of the output visualizer can be copied back to data/ when bt generate runs it on the canonical jury solution, but only when there is no input visualizer, because there can be at most one image per test case.
  • Related to the above: should we do something when any input validator happens to write image files?
  • Related to the above: should we do something when the output validator happens to write image files?
  • I haven't tested this with interactive and multi-pass problems.
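
For the bt upgrade item above, a rough sketch of what the migration could look like. This is a simplified standalone version, not BAPCtools' actual upgrade code: it assumes PyYAML, a top-level visualizer: key, and a plain /generators/...-style path without extra arguments.

```python
"""Hypothetical sketch of the `bt upgrade` step that migrates the
`visualizer:` key out of generators.yaml (simplified; assumes PyYAML)."""
from pathlib import Path

import yaml


def upgrade_visualizer(problem_dir: Path) -> None:
    generators_yaml = problem_dir / "generators" / "generators.yaml"
    data = yaml.safe_load(generators_yaml.read_text())

    target = data.pop("visualizer", None)
    if target is None:
        return  # nothing to migrate

    # Move the visualizer program to the new input_visualizer/ directory.
    # (Real code would handle arguments in the value, directories, etc.)
    src = problem_dir / target.lstrip("/")
    dst_dir = problem_dir / "input_visualizer"
    dst_dir.mkdir(exist_ok=True)
    src.rename(dst_dir / src.name)

    generators_yaml.write_text(yaml.safe_dump(data))
    print(
        "WARNING: moved the visualizer to input_visualizer/. "
        "Rewrite it to read the test case from stdin instead of testcase.in."
    )
```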

I've currently tested this with a bt upgraded version of NWERC 2024 Jib Job, to which I've added a simple output_visualizer/: jibjob.zip

@mpsijm (Collaborator, Author) commented Mar 31, 2025

I've done some thinking; see also Kattis/problem-package-format#351 (comment):

  • In BAPCtools, the interface of the input visualizer can be the same as that of the input validator, except that it writes a file rather than returning exit code 42/43 (this is the current behaviour of this PR). The spec does not have to specify that it should write the image to testcase.<ext> in the CWD, because the spec does not have a generators framework. In fact, I'm not even sure it needs to mention that it has the same interface as the input validator, because the spec does not prescribe where the resulting images should be stored anyway. I'm also thinking of changing testcase.<ext> to image.<ext> or judgeimage.<ext>.
  • In BAPCtools, do we want to allow input validators to write image files, similar to the output validator in the spec? The spec does not prescribe where the input illustrations should be stored, so we have this freedom in BAPCtools. (Note that the input visualizer is singular and there can be multiple input validators, so perhaps this is not the best idea 😛 But in theory, this is the same check as when both the output validator and the output visualizer write an image file, so perhaps it's not too bad.)
  • In BAPCtools, we want the <feedback_dir>/judgeimage.<ext> files that result from running the output visualizer/validator on the canonical jury solution to end up in data/, provided that there is no input illustration already (because there can be at most one image per test case). Now that I think about it, this is the same conflict as when multiple input validators write test case illustrations.
  • In the spec, we may want to split up {in,out}put_v{alidato,isualize}r_flags, if we think it doesn't make sense to pass the same flags to both. I'd be in favour of this, because the output validator flags are typically about float tolerance and case/whitespace sensitivity, while the output visualizer may need completely different flags (see the sketch after this list).
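
To make the last point concrete, a hypothetical testdata.yaml with the flags split per program. The visualizer key names are suggestions for discussion, not part of the spec:

```yaml
# Hypothetical testdata.yaml after splitting the flags.
input_validator_flags: connected
output_validator_flags: float_tolerance 1e-6 case_sensitive
# Suggested new keys; names are not in the spec:
input_visualizer_flags: --scale 4
output_visualizer_flags: --scale 4 --highlight-errors
```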

@mzuenni (Collaborator) commented Apr 5, 2025

Reading through this: I don't think a visualizer should inherit from validator, even if they use a similar interface.
They have different behaviors and are used for completely different things. Having program.validators return a visualizer is just confusing; the same goes for ANY_VALIDATOR matching a visualizer.
We probably want a new Visualizer.py with these classes and a new method in Problem (see the sketch below).

Also, I think that for bt run, calling a visualizer should be opt-in, not opt-out.
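
For discussion, a sketch of the shape a separate Visualizer.py could take. Class names, signatures, and the exact invocation are guesses, not existing BAPCtools API; the output-visualizer call mirrors the output validator interface (input, answer, and feedback dir as arguments, team output on stdin):

```python
# Visualizer.py -- hypothetical sketch for discussion, not existing BAPCtools API.
import subprocess
from pathlib import Path


class InputVisualizer:
    """Gets the test case on stdin and writes an image into the CWD."""

    source_dir = "input_visualizer"  # where the program lives in the package

    def __init__(self, run_command: list[str]):
        self.run_command = run_command

    def run(self, in_path: Path, cwd: Path) -> subprocess.CompletedProcess:
        with in_path.open("rb") as infile:
            return subprocess.run(self.run_command, stdin=infile, cwd=cwd)


class OutputVisualizer:
    """Invoked like the output validator (input, answer, feedback dir as
    arguments; team output on stdin), but writes judgeimage.<ext> into the
    feedback directory instead of validating."""

    source_dir = "output_visualizer"

    def __init__(self, run_command: list[str]):
        self.run_command = run_command

    def run(self, in_path: Path, ans_path: Path, feedback_dir: Path,
            team_output: Path, cwd: Path) -> subprocess.CompletedProcess:
        args = [str(in_path), str(ans_path), str(feedback_dir)]
        with team_output.open("rb") as outfile:
            return subprocess.run(self.run_command + args, stdin=outfile, cwd=cwd)


# A matching accessor on Problem (so visualizers never show up in
# problem.validators() or match ANY_VALIDATOR) could look like:
#     def visualizers(self, cls): ...
```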

@mzuenni (Collaborator) commented Apr 5, 2025

@mpsijm, if you don't have time to actively work on this, should I start a new branch?

@mpsijm mentioned this pull request Apr 9, 2025

@thorehusfeldt (Collaborator) commented

Very good thoughts upthread. Another vote for adding {in,out}put_v{alidato,isualize}r_flags in testdata.yaml.

@mzuenni (Collaborator) commented Apr 12, 2025

Will close this in favor of #448.

@mzuenni closed this Apr 12, 2025