
feat(RubricEvaluation): implement rubric auto-grading #7941


Merged
merged 3 commits into master from ncduy0303/rubric-auto-grade on Jun 3, 2025

Conversation

Contributor

@ncduy0303 ncduy0303 commented May 21, 2025

Description

Implement an auto-grading service for rubric-based response questions. An OpenAI LLM evaluates each student's answer and automatically assigns a suitable grade for each rubric category. A draft comment with overall feedback is also generated automatically; teachers can edit and publish it later.

Changes made

  • Implement rubric_auto_grading_service and rubric_llm_service (a rough sketch of the LLM call is shown below)
  • Modify the front end to display AI-generated draft comments that teachers can edit and publish, and to refresh views after the question is autograded
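For illustration, here is a minimal sketch of the kind of call rubric_llm_service could make. The class shape, prompt, JSON response schema, and the ruby-openai client are assumptions made for this sketch, not the PR's actual implementation:

```ruby
# Hypothetical sketch only: names, prompt, and response schema are assumed.
require 'openai'
require 'json'

class RubricLlmService
  def initialize
    @client = OpenAI::Client.new(access_token: ENV['OPENAI_API_KEY'])
  end

  # Returns e.g. { "category_grades" => { "Content" => 3 }, "draft_comment" => "..." }
  def evaluate(rubric_categories, answer_text)
    response = @client.chat(
      parameters: {
        model: 'gpt-4o',
        response_format: { type: 'json_object' }, # force parseable JSON output
        messages: [
          { role: 'system',
            content: 'You are a grader. For each rubric category, choose the ' \
                     'best-fitting grade and write one overall feedback comment. ' \
                     'Reply in JSON with keys "category_grades" and "draft_comment".' },
          { role: 'user',
            content: { rubric: rubric_categories, answer: answer_text }.to_json }
        ]
      }
    )
    JSON.parse(response.dig('choices', 0, 'message', 'content'))
  end
end
```

Keeping the output as structured JSON makes parsing deterministic: the per-category grades map straight onto the rubric panel, and draft_comment becomes the editable draft shown to graders.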

Views

Teachers can see suggested grades and comments when opening a student's submission. Pressing Re-evaluate Answer reruns the autograding job. While the answer is being autograded, this button shows a loading state and the rubric panel is disabled.


Comments can be modified and published in both the Submission Page and the Comment Center Page.


Notes

  • A few UI enhancements for the rubric panel were noted: the rubric_grade dropdown should be disabled, the Re-evaluate Answer button should be moved down (and possibly renamed), the 'moderation' category grade needs validation (values like -1 or 10000 are currently accepted), and the rubric explanation text overflows when it is too long.
  • Some outdated JavaScript files might need to be refactored into TypeScript.

@ncduy0303 ncduy0303 self-assigned this May 21, 2025
Contributor

cysjonathan commented May 21, 2025

  • I am currently reusing the front-end component ReevaluateButton, which was used for autograding programming questions and calls the back end to auto-grade the given answer.

OK

  • Since the question type is autogradable, grading is triggered the moment the student finalises their submission, so the grader will already see the rubric-based questions graded when they open the student's submission for the first time. (A toy sketch of this flow follows below.)
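A self-contained toy model of that finalise-triggers-autograding flow; every name here (auto_gradable, auto_grade!, finalise!) is an illustrative assumption, not Coursemology's actual API:

```ruby
# Toy model: finalising a submission queues autograding for every
# autogradable answer, which now includes rubric-based questions.
Answer = Struct.new(:question, :status) do
  def auto_grade!
    # The real app would enqueue a background grading job here.
    self.status = 'queued for autograding'
  end
end

def finalise!(answers)
  answers.select { |a| a.question[:auto_gradable] }.each(&:auto_grade!)
end

answers = [
  Answer.new({ type: 'rubric', auto_gradable: true }, 'submitted'),
  Answer.new({ type: 'essay', auto_gradable: false }, 'submitted')
]
finalise!(answers)
answers.each { |a| puts "#{a.question[:type]}: #{a.status}" }
# rubric: queued for autograding
# essay: submitted
```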

What is the behaviour if grading is still in progress, i.e. when the instructor opens the submission before grading completes?

  • In manual mode, if the user clicks 'Evaluate Answers', all autogradable questions are autograded again (which now includes rubric-based questions).

That is fine. I suspect we can also remove the 'Evaluate Answers' button, since the individual 'Re-evaluate Answer' button is sufficient for manual triggering, and autogradable answers will be autograded on finalise anyway.

  • What should the behaviour be when the assessment setting is autograded? The attached screenshot shows manual mode.
    When the assessment is autograded, then as per the earlier point (autogradable question types trigger autograding on finalise), the rubric-based question should autograde and assign marks based on the allocated explanations. The submission then gets a final score, similar to autograding for programming questions (the question-answer score will be calculated); a sketch of that calculation follows below.
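A toy sketch of that score calculation, assuming the answer grade is the sum of the grades attached to the explanations selected for each category (field names are illustrative, not the actual schema):

```ruby
# Each rubric category contributes the grade of its selected explanation.
categories = [
  { name: 'Content',  selected_grade: 3, max_grade: 4 },
  { name: 'Clarity',  selected_grade: 2, max_grade: 3 },
  { name: 'Evidence', selected_grade: 4, max_grade: 5 }
]

answer_grade = categories.sum { |c| c[:selected_grade] }
maximum      = categories.sum { |c| c[:max_grade] }

puts "Answer grade: #{answer_grade} / #{maximum}"
# => Answer grade: 9 / 12
```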

Autograding logic should go under app/services/course/assessment/answer/; this feature should not touch rag_wise, which is a separate service for responding to forum posts (and possibly comments in the future). Create a folder for rubric there if needed; a skeleton of the expected namespace follows below.
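Under Rails' Zeitwerk file-to-constant convention, a service at that path would be namespaced like this (skeleton only; the body is a placeholder, not the PR's code):

```ruby
# app/services/course/assessment/answer/rubric_auto_grading_service.rb
class Course::Assessment::Answer::RubricAutoGradingService
  # Outline of what the service is described to do:
  # 1. Ask the LLM service for per-category grades and a draft comment.
  # 2. Persist the selected grade for each rubric category on the answer.
  # 3. Save the overall feedback as a draft comment for the grader to edit.
  def grade(answer)
    raise NotImplementedError
  end
end
```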

Also add test cases once ready.

@ncduy0303 ncduy0303 force-pushed the ncduy0303/rubric-auto-grade branch 7 times, most recently from 682ad82 to 9120c4e Compare May 29, 2025 06:55
@ncduy0303 ncduy0303 marked this pull request as ready for review May 29, 2025 07:23
@ncduy0303 ncduy0303 force-pushed the ncduy0303/rubric-auto-grade branch 3 times, most recently from 6a48ee7 to 6da72ec Compare May 29, 2025 08:41
Contributor

@adi-herwana-nus adi-herwana-nus left a comment


Looks mostly good, but there are some comments that need to be addressed. When making changes, remember to make them in the appropriate commits.

Also, there are endpoints called auto_feedback_count and publish_auto_feedback that allow an instructor to publish all feedback for a given assessment at once, but currently these only affect programming file feedback from Codaveri.
A user would consider the generated draft comments "automated feedback" too, so we should probably rename that button to something like "Publish Automated Programming Feedback" to disambiguate.

@ncduy0303 ncduy0303 requested a review from adi-herwana-nus June 2, 2025 08:00
@ncduy0303 ncduy0303 force-pushed the ncduy0303/rubric-auto-grade branch 3 times, most recently from 38038a7 to 07f5a9e Compare June 3, 2025 05:52
ncduy0303 added 3 commits June 3, 2025 01:52
style(RubricPanelRow): make category explanation text wrap
feat(RubricEvaluation): implement rubric auto-grading and draft comment generation

- Add backend service for rubric auto-grading and AI-generated draft comments
- Modify frontend to enable graders to edit and publish AI-generated draft comments
- Add tests and translations for new features
@ncduy0303 ncduy0303 force-pushed the ncduy0303/rubric-auto-grade branch from 07f5a9e to ac24c46 Compare June 3, 2025 05:52
@adi-herwana-nus adi-herwana-nus merged commit e2fc62d into master Jun 3, 2025
11 of 14 checks passed
@adi-herwana-nus adi-herwana-nus deleted the ncduy0303/rubric-auto-grade branch June 3, 2025 06:32