
Add missing test for set #622


Open · AbeZbm wants to merge 1 commit into master
Conversation

@AbeZbm commented May 8, 2025

Hi,

Thanks for taking the time to review this PR.

We are researchers focused on using LLMs to generate Rust tests that satisfy specific execution paths in functions. While examining the existing code, we found that one test can be added to improve the repo's overall test coverage.
The code region we covered is:

hashbrown/src/set.rs

Lines 1630 to 1633 in 6263458

```rust
if rhs.len() < self.len() {
    for item in rhs {
        self.remove(item);
    }
```

The current doc test only covers the rhs.len() >= self.len() case. To keep the documentation example concise, we add a unit test for the rhs.len() < self.len() case instead (a sketch of such a test is shown below).
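For context, here is a minimal sketch of a unit test exercising that branch, assuming hashbrown's HashSet with its default hasher and the SubAssign<&HashSet> impl quoted above; the test name and values are illustrative and may differ from the one in this PR's commit:

```rust
use hashbrown::HashSet;

#[test]
fn sub_assign_rhs_smaller() {
    // `b` has fewer elements than `a`, so `a -= &b` takes the
    // `rhs.len() < self.len()` branch and removes each item of `b` from `a`.
    let mut a: HashSet<i32> = [1, 2, 3, 4, 5].into_iter().collect();
    let b: HashSet<i32> = [2, 4].into_iter().collect();
    a -= &b;
    let expected: HashSet<i32> = [1, 3, 5].into_iter().collect();
    assert_eq!(a, expected);
}
```

Any input where the right-hand set is strictly smaller than the left-hand set reaches this branch.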

Thanks again for reviewing.

@clarfonthey (Contributor) commented

While I'll admit the bar for quality is substantially lower for tests, by default I'm not happy about the idea of using LLMs at all for code.

The code itself works and is okay. It is true that SubAssign is technically untested and that should probably be rectified, but I would rather focus on the proper development of a test suite than haphazard generation of tests using lying machines.

Of course, I'm not a maintainer, so, this only counts as guidance here. But I would much rather not see time wasted reviewing tests by machines so prone to failure. Tests are only good if the person writing them knows what they're doing, and simply passing isn't enough; the goal is to codify correctness, not simply the current behaviour.

(This test in particular checks all the boxes as "okay." I just would not like to see more like it which potentially don't.)

@AbeZbm (Author) commented May 10, 2025

@clarfonthey Thank you for taking the time to review and share your valuable feedback!

We fully understand your concerns about the quality of LLM-generated tests. As a research team, our core work focuses precisely on generating effective tests with LLMs; the submitted test case is a manually vetted case that survived our review process, not raw generated output. We wholeheartedly agree with the principle that "tests should codify correctness rather than simply pass".

Thank you again for highlighting these important considerations.
