Add task list to results page #785

Open
7 tasks done
CharliePatterson opened this issue Jan 15, 2025 · 5 comments · May be fixed by #842
Labels
Spill over: Identifies tickets not completed in a sprint that have been carried over into the next sprint

Comments


CharliePatterson commented Jan 15, 2025

Context:

  • To help data providers understand which data quality issues they must fix prior to publishing and submitting their data, we are going to incorporate the Data Management team's data quality framework.
  • This will introduce the concept of "blocking" and "non-blocking" issues.
  • This will help data providers prioritise which issues to resolve first, and hopefully encourage them to take an iterative approach to providing data.
  • The results page is split into a task list (copying the pattern from the task list page on the submit service), showing three groupings:
    1. Checks that have passed
    2. Issues that must be fixed prior to publishing and submitting (aka blocking issues)
    3. Issues that can be fixed after publishing and submitting (aka non-blocking issues)
  • Each task in the list links out to an issues details page, where the user can get more information on the specific issue that they need to fix.
  • If the user has blocking issues, the CTA will be to upload a new version (linking back to the start of the check tool flow). If the user has no blocking issues, the CTA will take them to the confirmation page.

Designs/Prototype:

Implementation detail:

  • Use the `quality_level` field from this table.
  • Issues labelled with `quality_level` 2 are considered "blocking".
  • Issues labelled with `quality_level` 3 are considered "non-blocking".
  • Issues without a `quality_level` are excluded from the results, as these either have responsibility set to internal or a severity of warning (see the sketch after this list).
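
A minimal sketch of this mapping, assuming a row shape with `issue_type`, `field` and `quality_level` fields (only `quality_level` is confirmed by the issue; the other field names and the use of TypeScript are assumptions):

```typescript
// Sketch only: field names other than quality_level are assumptions.
interface IssueRow {
  issue_type: string;
  field: string;
  quality_level?: number; // absent when responsibility is internal or severity is warning
}

type IssueGrouping = 'blocking' | 'non-blocking';

// Map quality_level to the two groupings; rows without a quality_level
// are excluded from the results entirely.
function classifyIssue(row: IssueRow): IssueGrouping | undefined {
  if (row.quality_level === 2) return 'blocking';
  if (row.quality_level === 3) return 'non-blocking';
  return undefined; // excluded from the results page
}
```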

Acceptance criteria:

1. All results:

  • Display the count of the number of rows that have been found in the dataset (e.g. "Found 50 rows").
  • Show a list of checks that have passed (e.g. "All rows have unique references").
  • This will need to be inferred from the checks that are returned, since this doesn't exist as a concept in the pipeline currently.
  • We should group checks by quality_category (ignoring the unknown entities issues), and if no issues are flagged for a group/category, display the corresponding success message (see the sketch after this list):
    1. All rows have unique references
    2. All rows have valid geometry
    3. All rows have valid data
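
A rough sketch of how the passed checks could be inferred by grouping on `quality_category`. The category keys (`reference`, `geometry`, `value`) are assumptions chosen to line up with the three success messages and would need to match whatever the pipeline actually returns:

```typescript
// Assumed mapping from quality_category to success message; the keys are
// placeholders and must match the categories returned by the pipeline.
const successMessages: Record<string, string> = {
  reference: 'All rows have unique references',
  geometry: 'All rows have valid geometry',
  value: 'All rows have valid data',
};

// A category counts as "passed" when no issues were flagged for it
// (unknown entity issues are assumed to be filtered out before this point).
function passedChecks(issues: { quality_category: string }[]): string[] {
  const flagged = new Set(issues.map((issue) => issue.quality_category));
  return Object.entries(successMessages)
    .filter(([category]) => !flagged.has(category))
    .map(([, message]) => message);
}
```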

2. Blocking issues results:

  • List all issues with a quality_level of 2 under the "Issues you must fix before submitting" section.
    • Group issues by type and field.
    • For each group, show the number of rows affected, or highlight if the entire column is missing (see the grouping sketch after this section).
    • Each issue in the list links to the issues details page.
  • If there are blocking issues:
    • The page should display this text: "Fix the issues, then upload a new version of your data to check if it is ready to submit."
    • Display a button that links to the file upload step of the check tool, labelled: "Upload a new version".
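
A hedged sketch of the grouping by type and field used in both the blocking and non-blocking sections. The row shape and the way a whole missing column is detected are assumptions:

```typescript
// Sketch of grouping issues by (issue_type, field); field names and the
// "missing column" check are assumptions.
interface IssueRow {
  issue_type: string;
  field: string;
}

interface IssueGroup {
  issueType: string;
  field: string;
  rowCount: number;       // number of rows affected
  columnMissing: boolean; // true when the whole column is absent
}

function groupIssues(rows: IssueRow[]): IssueGroup[] {
  const groups = new Map<string, IssueGroup>();
  for (const row of rows) {
    const key = `${row.issue_type}|${row.field}`;
    const group = groups.get(key) ?? {
      issueType: row.issue_type,
      field: row.field,
      rowCount: 0,
      columnMissing: row.issue_type === 'missing column', // assumed issue type name
    };
    group.rowCount += 1;
    groups.set(key, group);
  }
  return [...groups.values()];
}
```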

3. Non-blocking issues results:

  • List all issues with a quality_level of 3 under the "Issues you can fix after submitting" section.
    • Group issues by type and field.
    • For each group show the number of rows affected, or highlight if the entire column is missing.
    • Each issue in the list should link out to the issues details page.
  • If there are no blocking issues:
    • The page should display this text: "You can submit your data with these issues but should fix them over time to improve the quality and usability of the data."
    • Display a button that links to the confirmation page, labelled: "Continue". (A sketch of this call-to-action logic follows the list.)
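
A small sketch of the call-to-action logic across sections 2 and 3. The copy and button labels come from the acceptance criteria above; the route paths are assumptions:

```typescript
// Pick the bottom-of-page call to action based on whether any blocking
// issues remain; route paths are placeholders, not the service's real routes.
function chooseCta(hasBlockingIssues: boolean): { text: string; buttonLabel: string; href: string } {
  if (hasBlockingIssues) {
    return {
      text: 'Fix the issues, then upload a new version of your data to check if it is ready to submit.',
      buttonLabel: 'Upload a new version',
      href: '/check/upload', // assumed path to the file upload step of the check tool
    };
  }
  return {
    text: 'You can submit your data with these issues but should fix them over time to improve the quality and usability of the data.',
    buttonLabel: 'Continue',
    href: '/check/confirmation', // assumed path to the confirmation page
  };
}
```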

Remaining ToDos:

  • Update task messages
  • Update the bottom-of-form content to show "Upload a new version" or "Continue" based on tasks
  • Add LPA name to the page heading
  • Delete old views and controllers
  • Merge with Tabs on check tool results page #822
  • Testing
  • Resolve CodeRabbit comments
@GeorgeGoodall-GovUk (Contributor)

@CharliePatterson are there any other passed checks you want me to add? From the description it looks like we just want:

  • x rows found
  • all rows have unique references

@CharliePatterson (Contributor, Author)

Good question @GeorgeGoodall-GovUk - I've updated the description to add more detail:

  • Display the count of the number of rows that have been found in the dataset (e.g. "Found 50 rows").
  • Show a list of checks that have passed (e.g. "All rows have unique references").
  • This will need to be inferred from the checks that are returned, since this doesn't exist as a concept in the pipeline currently.
  • We should group checks by quality_category (ignoring the unknown entities issues), and if no issues are flagged for the group/category, then we can display the following success messages:
    1. All rows have unique references
    2. All rows have valid geometry
    3. All rows have valid data

@GeorgeGoodall-GovUk linked a pull request Jan 31, 2025 that will close this issue
@neilfwar added the "Spill over" label Feb 3, 2025
@coderabbitai bot linked a pull request Feb 3, 2025 that will close this issue
@GeorgeGoodall-GovUk added the "blocked" label (this ticket cannot be progressed until another issue is resolved) Feb 3, 2025
@GeorgeGoodall-GovUk moved this from In Development to Code review & QA in Submit and update planning data service Feb 3, 2025
@GeorgeGoodall-GovUk moved this from Code review & QA to In Development in Submit and update planning data service Feb 4, 2025
@GeorgeGoodall-GovUk removed the "blocked" label Feb 4, 2025
@GeorgeGoodall-GovUk (Contributor)

Integration tests need fixing; after that we should be good to go.

@GeorgeGoodall-GovUk moved this from In Development to Code review & QA in Submit and update planning data service Feb 6, 2025
@GeorgeGoodall-GovUk (Contributor)

Tests are failing on CI/CD but not locally. This might take a lot of trial and error to fix.

@GeorgeGoodall-GovUk (Contributor)

Fixed those tests, but now the acceptance tests are failing -_-
