Core: Add AutoWorld.testable_worlds to be used instead of AutoWorld.world_types in generic unittests #5504
Conversation
BadMagic100 left a comment:
Love to see this (found completely by accident, by the way). For the sake of your sanity I hope it can be merged quickly :)
This is probably a contentious suggestion, but thinking back to the original soft-disablement PR, one of the goals there was to allow opting worlds out of certain tests, to prevent worlds that fail new tests from blocking those tests. Of course, the downside is that broken worlds are not fixed immediately, but it keeps the tests from sitting in PR hell and gives new worlds the tests that prevent them from making the same error. Should this be included here? I imagine a field like unsafe_opt_out_tests: list[str] which simply names the tests to be opted out of. Playing along with the decorator suggestion, this could also be implemented there by using the wrapped function's name as a filter when iterating the testable worlds.
```diff
-for game_name, world_type in AutoWorldRegister.world_types.items():
+for game_name, testable_world in AutoWorldRegister.testable_worlds.items():
     with self.subTest("Game", game_name=game_name):
         multiworld = setup_solo_multiworld(world_type, gen_steps)
```
Given how often this pattern is used, it seems desirable to make it easier to reuse, as it's getting very bulky and inconvenient; that would also make future refactoring easier.
One option that comes to mind, though it may be neither optimal nor trivial, is a decorator that automatically implements this pattern, allowing each test to be written as a function of world/options. The iteration and subtests would then be handled opaquely under the hood for the majority of tests.
Of course I have more thoughts just after reviewing.
I don't know about this. What this PR wants seems exactly like what WorldTestBase already provides. Why not move more of the core tests to WorldTestBase? Merging this PR would end up with two world testing setups that are almost the same, which I think is redundant and would be confusing.
We could, but then we would really, really need to encourage the correct use of WorldTestBase. Right now, every single TestCase subclassing WorldTestBase that runs Fill is seen as a problem by most of the important™ members of the community, and this is completely opaque to world devs, such that they do it "by accident". In this scenario, I would insist that we find a disambiguation between WorldTestBase as the "convenient helper class for custom unit tests" and as the "class for registering non-default options to run default tests on".
Also, we would need a mechanism where core can run these WorldTestBase tests on default options, because that part definitely needs to be retained, so I guess that would be some sort of dynamic class creation.
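One possible shape for that dynamic class creation, sketched with `type()`. The registry contents and class names below are hypothetical; only the WorldTestBase idea comes from the discussion.

```python
import unittest

# Hypothetical registry: core would build one WorldTestBase subclass per
# registered game so default-option tests still run for every world.
REGISTERED_GAMES = ["Game A", "Game B"]

class WorldTestBase(unittest.TestCase):
    game: str = ""

    def test_default_options(self):
        if not self.game:
            self.skipTest("base class is not bound to a game")
        # A real test would generate and fill with default options here.
        type(self).ran = True

generated = {}
for game in REGISTERED_GAMES:
    cls_name = f"Test{game.replace(' ', '')}Default"
    generated[cls_name] = type(cls_name, (WorldTestBase,), {"game": game, "ran": False})

# Exposing the classes at module level lets unittest discovery pick them up.
globals().update(generated)
```

The base class skips itself, so only the generated per-game subclasses actually exercise the default-option test.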
And finally, this would mean that WorldTestBase would become really big, instead of the current situation where the unit tests are neatly split across multiple files in multiple folders. Unless there is a good way to split one class across multiple files? If someone wants to start work on that, doing the "disambiguating" of WorldTestBase would be the first step. I would see that as a generically good change anyway. (Highlighting this because I don't want to be seen as just unproductively shooting down this idea.)
Oh, also also also, one thing I like about this solution is that we have an easy mechanism for "rewarding" worlds that register more presets than just the default one, e.g. by just using len(world_class.presets). I guess we could have some sort of metaclass for WorldTestBase that counts subclasses by WorldTestBase.game instead? A metaclass like this could also prevent running the default tests on the same options twice, which I think is important.
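A sketch of such a metaclass, purely illustrative and not actual Archipelago code: it counts WorldTestBase subclasses per game and flags any subclass that re-registers an options set already seen, so default tests would not run twice on the same options.

```python
# Assumed shape: each WorldTestBase subclass declares `game` and an
# `options` dict; both attribute names are inventions for this sketch.
class WorldTestMeta(type):
    subclasses_by_game: dict = {}
    seen_options: set = set()

    def __new__(mcs, name, bases, namespace):
        cls = super().__new__(mcs, name, bases, namespace)
        game = namespace.get("game")
        options = namespace.get("options", {})
        if game:
            mcs.subclasses_by_game.setdefault(game, []).append(cls)
            key = (game, tuple(sorted(options.items())))
            if key in mcs.seen_options:
                # Same game with the same options already registered:
                # mark as duplicate so default tests are skipped for it.
                cls.duplicate = True
            else:
                mcs.seen_options.add(key)
                cls.duplicate = False
        return cls

class WorldTestBase(metaclass=WorldTestMeta):
    game = None
    options: dict = {}

class TestDefaults(WorldTestBase):
    game = "Game A"

class TestHardMode(WorldTestBase):
    game = "Game A"
    options = {"difficulty": "hard"}

class TestDefaultsAgain(WorldTestBase):   # same game, same default options
    game = "Game A"
```

`len(WorldTestMeta.subclasses_by_game[game])` then gives the per-game count mentioned above, and `duplicate` marks the redundant registration.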
pytest-xdist multiprocessing does appear to work with multiple test functions on the same unittest.TestCase subclass, so this is not a concern.
The current
There is no requirement to move the test implementations into
I don't think there will be any issues with
Black Sliver and I discussed all of these points in Discord: https://discord.com/channels/731205301247803413/731214280439103580/1424353137598402630 I have come to the conclusion that "I don't hate it" (which is high praise in German culture). We came to a lot of the same conclusions as BadMagic.
Here's the final result of that discussion from my side: https://discord.com/channels/731205301247803413/731214280439103580/1424359725151293450
This is mostly to allow us to have greater control over which unit tests get run, but I also wanted to include a way for worlds to run the generic unit tests on non-default options.
Here's what this PR does:
Exceptions:
There is currently no incentive to register extra options, but my vision is that our index / the new install process that runs unit tests on worlds you're installing will "reward" worlds for defining multiple test option presets.
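To illustrate that vision, a hypothetical sketch of how a world might register extra test option presets and how an index could score the coverage. All names here (`test_option_presets`, `coverage_score`, the example world) are invented for illustration.

```python
# Hypothetical registration side: a world opts extra option presets into
# the generic tests, beyond the always-tested defaults.
class ExampleWorld:
    game = "Example Game"
    test_option_presets = {
        "default": {},
        "hard_logic": {"logic": "hard"},
        "open_world": {"entrance_rando": True},
    }

def coverage_score(world_cls) -> int:
    """More registered presets -> higher score shown by an index/installer.
    Worlds that register nothing still count their implicit default preset."""
    return len(getattr(world_cls, "test_option_presets", {"default": {}}))

print(coverage_score(ExampleWorld))  # 3 presets registered
```

A scoring function this simple is exactly the `len(world_class.presets)` mechanism floated earlier in the thread.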