So I started going through the list this morning and there are some problems with the output from this tool:

1. The Package.swift file can be anywhere in the repo, whereas for a valid package it must be in the root.
2. It finds packages with no products, which we reject as part of validation.
3. It includes a lot of test/example/experimentation packages.
I started manually removing packages that failed the validation script, but that's no good as they'll all come back the next time we run this.
If we want to use this regularly, we should either add more validation to this tool or adapt the validation script so it doesn't stop after the first package fails.
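A minimal sketch of what that extra validation could look like on the tool's side, assuming we already have the repo's root file listing and the parsed manifest JSON (e.g. from `swift package dump-package`) — the function name and inputs here are hypothetical, not part of the current tool:

```python
# Hypothetical pre-filter for discovered repos. Rejects candidates whose
# Package.swift is not at the repo root, or whose manifest declares no
# products — both of which the validation script rejects anyway.

def looks_like_valid_package(root_files, manifest):
    # The manifest must live at the repo root, not in a subdirectory.
    if "Package.swift" not in root_files:
        return False
    # A package with no products fails validation, so filter it out up front.
    if not manifest.get("products"):
        return False
    return True
```

Running this over the search results before they reach the validation script would also sidestep the stop-after-one-failure problem, since known-bad candidates never get that far.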
—
Then, the tricky one is number 3 on that list above. I think it's best to draw a parallel with some other package managers; let's take npm and RubyGems as examples. Both require an explicit user action to publish packages. With npm it's `npm publish` and with RubyGems it's `gem push`.
This is my biggest issue with this method of adding packages. It'll collect all sorts of things that people either didn't intend to be packages that other people would use, or that just aren't useful as packages because someone was playing around.
If we align on this point, then we should add more validation or rules to try and weed out this kind of package. I don't think this additional validation should be in the main validation script, because if someone made an intentional decision to submit a package, no matter how bare-bones it was, we should let it in.
Things I can think of for this kind of validation would be:

- No packages without a significant README should be allowed. Take this one, for example: it has a < 50 character README. Maybe we don't auto-add anything with a README of less than 500 characters.
- That wouldn't catch this package, though. For that, we might try filtering on common words in the name like "Examples", "Demo", "Experiment", etc… The obvious one that's going to be problematic here is "Test".
- Finally, we could have a denylist so that once we've flagged a package to not be auto-discovered, it won't get found again.

There may be more…
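The heuristics above could be sketched roughly like this. The 500-character threshold, the keyword set, and the function name are all illustrative placeholders, not decisions:

```python
# Illustrative noise filter for auto-discovered packages.
SUSPECT_WORDS = {"example", "examples", "demo", "experiment", "playground"}
MIN_README_LENGTH = 500  # placeholder threshold

def is_probably_noise(name, readme, denylist):
    # Denylisted packages never come back, even if they match the search.
    if name.lower() in denylist:
        return True
    # A very short README suggests a throwaway repo.
    if len(readme) < MIN_README_LENGTH:
        return True
    # Name-based keyword filter; "test" is deliberately left out of the
    # keyword set because it would produce too many false positives.
    words = set(name.lower().replace("-", " ").replace("_", " ").split())
    return bool(words & SUSPECT_WORDS)
```

A package would need to clear all three checks to be put forward automatically; a manual submission would skip this filter entirely.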
- see if we can run the search as `/Package.swift` (I believe I tried but I'm not sure)
- add a `deny-packages.json` to this repo, or even just to the general package list repo
- run this on an interval to get recently changed packages
- filter them against the deny list and the existing package list
- run validation on them up to a certain limit (say 10) that we then manually validate and add
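The filtering steps in that list could be sketched end to end like this; the function name, the set-based inputs, and the shape of `deny-packages.json` (assumed here to be a JSON array of package URLs) are all assumptions:

```python
import json

def candidates_to_review(found, existing, denied, limit=10):
    """Filter freshly discovered package URLs against the deny list and the
    existing package list, capping the result at a manually reviewable batch."""
    fresh = [url for url in found if url not in existing and url not in denied]
    return fresh[:limit]

# deny-packages.json could then be loaded with something like:
#   denied = set(json.load(open("deny-packages.json")))
```

Capping at `limit` keeps each interval's output small enough to review by hand before anything goes up for PR.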
Something that occurred to me after discussing this: my two main goals with this were

1. automatically pull in new packages (automatically as in putting them up for PR review, not merging them straight away)
2. make sure we are not missing any well-known packages that simply have never been submitted
The second point could easily be addressed by running that query with, say, a >= 500 stars filter. That should surface a decent list of packages to cross-check, and any remaining packages are likely candidates to add.
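If the search itself can't apply a stars qualifier directly, the cut-off could run client-side over the discovered repos, assuming each candidate carries its repository's stargazer count (the `stargazers_count` field the GitHub REST API returns for a repo). The function name and threshold are illustrative:

```python
# Client-side popularity filter over discovered repos. Each repo dict is
# assumed to carry the GitHub API's stargazers_count field.

def popular_candidates(repos, min_stars=500):
    return [r for r in repos if r["stargazers_count"] >= min_stars]
```

Dropping `min_stars` to 50 is just a matter of changing the argument, so the threshold can be tuned after seeing how many candidates each value surfaces.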
This is a great idea to weed out those test/example packages. I'm not sure it'd even need to be 500. 50 would probably be more than enough.