[QA] VizroAI UI tests #882

Open · wants to merge 50 commits into base: main
Changes from 30 commits · 50 commits
ea31d69
component library tests
l0uden Nov 13, 2024
c656536
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Nov 13, 2024
40a293e
failed artifacts and slack notifications
l0uden Nov 14, 2024
3283d74
branch in notification
l0uden Nov 14, 2024
3cff909
delete screenshot
l0uden Nov 14, 2024
8fe3812
add screenshot
l0uden Nov 14, 2024
ab843c8
fix screenshot url
l0uden Nov 14, 2024
f9de666
Merge branch 'main' of https://github.com/mckinsey/vizro into qa/comp…
l0uden Nov 14, 2024
d74645f
changelog
l0uden Nov 14, 2024
2af4dce
vizroAI UI tests
l0uden Nov 18, 2024
aeb063f
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Nov 18, 2024
8ad3e4e
added PR branch for tests
l0uden Nov 18, 2024
9ff0bbb
fix for runner name
l0uden Nov 18, 2024
4d826f0
requirements for tests env
l0uden Nov 18, 2024
73ba50a
run app under hatch
l0uden Nov 18, 2024
1bc78c8
return headless mode
l0uden Nov 18, 2024
f5a67bf
test failure
l0uden Nov 18, 2024
fbbb2a0
test failure
l0uden Nov 18, 2024
5b6c718
test success
l0uden Nov 18, 2024
bb0b680
Merge branch 'main' of https://github.com/mckinsey/vizro into qa/vizr…
l0uden Dec 12, 2024
ea7ef47
fix merging main
l0uden Dec 12, 2024
2ffdbe6
tests refactor
l0uden Dec 13, 2024
41b6a97
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Dec 13, 2024
aaa805d
delete data csv
l0uden Dec 13, 2024
5f0f3ce
Merge branch 'qa/vizro_ai_ui_tests' of https://github.com/mckinsey/vi…
l0uden Dec 13, 2024
255b847
Merge branch 'main' of https://github.com/mckinsey/vizro into qa/vizr…
l0uden Dec 13, 2024
efc1acc
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Dec 13, 2024
d68d17d
test failed
l0uden Dec 13, 2024
4905fe7
Merge branch 'qa/vizro_ai_ui_tests' of https://github.com/mckinsey/vi…
l0uden Dec 13, 2024
f85aa78
test success
l0uden Dec 13, 2024
ad807f9
small refactoring
l0uden Dec 24, 2024
f2792be
Merge branch 'main' of https://github.com/mckinsey/vizro into qa/vizr…
l0uden Dec 24, 2024
84be043
changelog
l0uden Dec 24, 2024
ebc2465
temp uv fix
l0uden Dec 24, 2024
530d3fe
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Dec 24, 2024
01dcdc6
temp uv fix
l0uden Dec 24, 2024
7c78959
Merge branch 'qa/vizro_ai_ui_tests' of https://github.com/mckinsey/vi…
l0uden Dec 24, 2024
4f93588
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Dec 24, 2024
986872b
get conftest back to integration
l0uden Dec 24, 2024
725b7b2
Merge branch 'qa/vizro_ai_ui_tests' of https://github.com/mckinsey/vi…
l0uden Dec 24, 2024
c2f13f4
Propose rename for future
antonymilne Jan 16, 2025
dde3430
Merge branch 'main' of https://github.com/mckinsey/vizro into qa/vizr…
l0uden Jan 17, 2025
91e88ab
refactor tests structure and naming
l0uden Jan 17, 2025
30acedc
changes wait port
l0uden Jan 17, 2025
d6282c1
make dashboard_ui run with dash_duo
l0uden Jan 21, 2025
d313f0e
Merge branch 'main' of https://github.com/mckinsey/vizro into qa/vizr…
l0uden Jan 21, 2025
4e8d43e
move conftest with reset managers to vizro_ai_ui only
l0uden Jan 21, 2025
6bc8b30
fix test-vizro-ai-ui.yml
l0uden Jan 21, 2025
b28f5f9
linting
l0uden Jan 21, 2025
56e6812
webhook secret
l0uden Jan 21, 2025
Original file line number Diff line number Diff line change
@@ -7,16 +7,16 @@ runs:
     - name: Copy failed screenshots
       shell: bash
       run: |
-        mkdir /home/runner/work/vizro/vizro/vizro-core/failed_screenshots/
-        cd /home/runner/work/vizro/vizro/vizro-core/
+        mkdir ${{ env.PROJECT_PATH }}failed_screenshots/
+        cd ${{ env.PROJECT_PATH }}
         cp *.png failed_screenshots

     - name: Archive production artifacts
       uses: actions/upload-artifact@v4
       with:
         name: Failed screenshots
         path: |
-          /home/runner/work/vizro/vizro/vizro-core/failed_screenshots/*.png
+          ${{ env.PROJECT_PATH }}failed_screenshots/*.png

     - name: Send custom JSON data to Slack
       id: slack
@@ -43,3 +43,4 @@ jobs:
         env:
           TESTS_NAME: Vizro e2e component library tests
           SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
+          PROJECT_PATH: /home/runner/work/vizro/vizro/vizro-core/
66 changes: 66 additions & 0 deletions .github/workflows/test-e2e-vizro-ai-ui.yml
@@ -0,0 +1,66 @@
name: e2e tests for VizroAI UI

defaults:
  run:
    working-directory: vizro-ai

on:
  push:
    branches: [main]
  pull_request:
    branches:
      - main
    paths:
      - "vizro-ai/**"
      - "!vizro-ai/docs/**"

env:
  PYTHONUNBUFFERED: 1
  FORCE_COLOR: 1
  PYTHON_VERSION: "3.12"

jobs:
  test-e2e-vizro-ai-ui-fork:
    if: ${{ github.event.pull_request.head.repo.fork }}
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4

      - name: Passed fork step
        run: echo "Success!"

  test-e2e-vizro-ai-ui:
    if: ${{ ! github.event.pull_request.head.repo.fork }}
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4

      - name: Set up Python ${{ env.PYTHON_VERSION }}
        uses: actions/setup-python@v5
        with:
          python-version: ${{ env.PYTHON_VERSION }}

      - name: Install Hatch
        run: pip install hatch

      - name: Show dependency tree
        run: hatch run pip tree

      - name: Run e2e VizroAI UI tests
        run: |
          hatch run vizro-ai-ui
          tests/tests_utils/wait-for-it.sh 127.0.0.1:8050 -t 30
          hatch run test-e2e-vizro-ai-ui
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          OPENAI_API_BASE: ${{ secrets.OPENAI_API_BASE }}

      - name: Create artifacts and slack notifications
        if: failure()
        uses: ./.github/actions/failed-artifacts-and-slack-notifications
        env:
          TESTS_NAME: e2e VizroAI UI tests
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
          PROJECT_PATH: /home/runner/work/vizro/vizro/vizro-ai/
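The test step above starts the dashboard app and uses `tests/tests_utils/wait-for-it.sh` to block until the Dash server accepts connections on `127.0.0.1:8050` before running the Selenium tests. As a rough sketch of what that wait does (illustrative only, not the script the repo actually ships), the same port poll can be written in a few lines of Python:

```python
import socket
import time


def wait_for_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Poll until a TCP connection to host:port succeeds, or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # A successful connect means the app is listening and ready.
            with socket.create_connection((host, port), timeout=1):
                return True
        except OSError:
            time.sleep(0.5)  # not up yet; retry shortly
    return False
```

Like `wait-for-it.sh 127.0.0.1:8050 -t 30`, this returns success as soon as the port opens and gives up after the timeout, so the test run fails fast if the app never starts.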
4 changes: 0 additions & 4 deletions .github/workflows/vizro-qa-tests-trigger.yml
@@ -20,7 +20,6 @@ jobs:
       matrix:
         include:
           - label: integration tests
-          - label: vizro-ai ui tests
     steps:
       - name: Passed fork step
         run: echo "Success!"
@@ -34,7 +33,6 @@ jobs:
       matrix:
         include:
           - label: integration tests
-          - label: vizro-ai ui tests
     steps:
       - uses: actions/checkout@v4
       - name: Tests trigger
@@ -44,8 +42,6 @@ jobs:

           if [ "${{ matrix.label }}" == "integration tests" ]; then
             export INPUT_WORKFLOW_FILE_NAME=${{ secrets.VIZRO_QA_INTEGRATION_TESTS_WORKFLOW }}
-          elif [ "${{ matrix.label }}" == "vizro-ai ui tests" ]; then
-            export INPUT_WORKFLOW_FILE_NAME=${{ secrets.VIZRO_QA_VIZRO_AI_UI_TESTS_WORKFLOW }}
           fi
           export INPUT_GITHUB_TOKEN=${{ secrets.VIZRO_SVC_PAT }}
           export INPUT_REF=main # because we should send existent branch to dispatch workflow
@@ -0,0 +1,48 @@
<!--
A new scriv changelog fragment.

Uncomment the section that is right (remove the HTML comment wrapper).
-->

<!--
### Highlights ✨

- A bullet item for the Highlights ✨ category with a link to the relevant PR at the end of your entry, e.g. Enable feature XXX ([#1](https://github.com/mckinsey/vizro/pull/1))

-->
<!--
### Removed

- A bullet item for the Removed category with a link to the relevant PR at the end of your entry, e.g. Enable feature XXX ([#1](https://github.com/mckinsey/vizro/pull/1))

-->
<!--
### Added

- A bullet item for the Added category with a link to the relevant PR at the end of your entry, e.g. Enable feature XXX ([#1](https://github.com/mckinsey/vizro/pull/1))

-->
<!--
### Changed

- A bullet item for the Changed category with a link to the relevant PR at the end of your entry, e.g. Enable feature XXX ([#1](https://github.com/mckinsey/vizro/pull/1))

-->
<!--
### Deprecated

- A bullet item for the Deprecated category with a link to the relevant PR at the end of your entry, e.g. Enable feature XXX ([#1](https://github.com/mckinsey/vizro/pull/1))

-->
<!--
### Fixed

- A bullet item for the Fixed category with a link to the relevant PR at the end of your entry, e.g. Enable feature XXX ([#1](https://github.com/mckinsey/vizro/pull/1))

-->
<!--
### Security

- A bullet item for the Security category with a link to the relevant PR at the end of your entry, e.g. Enable feature XXX ([#1](https://github.com/mckinsey/vizro/pull/1))

-->
2 changes: 2 additions & 0 deletions vizro-ai/hatch.toml
@@ -48,6 +48,7 @@ prep-release = [
]
pypath = "hatch run python -c 'import sys; print(sys.executable)'"
test = "pytest tests {args}"
test-e2e-vizro-ai-ui = "pytest -vs --reruns 1 tests/e2e/test_vizro_ai_ui.py --headless {args}"
test-integration = "pytest -vs --reruns 1 tests/integration --headless {args}"
test-score = "pytest -vs --reruns 1 tests/score --headless {args}"
test-unit = "pytest tests/unit {args}"
@@ -56,6 +57,7 @@ test-unit-coverage = [
"- coverage combine",
"coverage report"
]
vizro-ai-ui = "python examples/dashboard_ui/app.py &"

[envs.docs]
dependencies = [
6 changes: 5 additions & 1 deletion vizro-ai/pyproject.toml
@@ -66,8 +66,12 @@ filterwarnings = [
   # Ignore LLMChain deprecation warning:
   "ignore:.*The class `LLMChain` was deprecated in LangChain 0.1.17",
   # Ignore warning for Pydantic v1 API and Python 3.13:
-  "ignore:Failing to pass a value to the 'type_params' parameter of 'typing.ForwardRef._evaluate' is deprecated:DeprecationWarning"
+  "ignore:Failing to pass a value to the 'type_params' parameter of 'typing.ForwardRef._evaluate' is deprecated:DeprecationWarning",
+  # Ignore deprecation warning until this is solved: https://github.com/plotly/dash/issues/2590:
+  "ignore:HTTPResponse.getheader():DeprecationWarning"
]
norecursedirs = ["tests/tests_utils"]
pythonpath = ["tests/tests_utils"]

[tool.ruff]
extend = "../pyproject.toml"
52 changes: 52 additions & 0 deletions vizro-ai/tests/e2e/conftest.py
@@ -0,0 +1,52 @@
from datetime import datetime

import pytest
from e2e_asserts import browser_console_warnings_checker
from selenium import webdriver
from selenium.webdriver.chrome.options import Options


@pytest.fixture()
def chromedriver(request):
    """Fixture for starting chromedriver."""
    options = Options()
    options.add_argument("--headless")
    options.add_argument("--window-size=1920,1080")
    options.add_argument("--disable-search-engine-choice-screen")
    driver = webdriver.Chrome(options=options)
    driver.get(f"http://127.0.0.1:{request.param.get('port')}/")
    return driver


@pytest.fixture(autouse=True)
def teardown_method(chromedriver):
    """Fixture checks log errors and quits the driver after each test."""
    yield
    # Keep only SEVERE and WARNING browser log entries.
    log_levels = [entry for entry in chromedriver.get_log("browser") if entry["level"] in ("SEVERE", "WARNING")]
    if log_levels:
        for log_level in log_levels:
            browser_console_warnings_checker(log_level, log_levels)
    chromedriver.quit()


@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    rep = outcome.get_result()
    setattr(item, "rep_" + rep.when, rep)


@pytest.fixture(scope="function", autouse=True)
def test_failed_check(request):
    yield
    if request.node.rep_setup.failed:
        return "setting up a test failed!", request.node.nodeid
    elif request.node.rep_setup.passed and request.node.rep_call.failed:
        driver = request.node.funcargs["chromedriver"]
        take_screenshot(driver, request.node.nodeid)
        return "executing test failed", request.node.nodeid


def take_screenshot(driver, nodeid):
    file_name = f'{nodeid}_{datetime.today().strftime("%Y-%m-%d_%H-%M")}.png'.replace("/", "_").replace("::", "__")
    driver.save_screenshot(file_name)
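`take_screenshot` flattens the pytest node ID into a filesystem-safe artifact name so the upload step can collect all PNGs from one flat directory. A small standalone sketch of that sanitization (the `screenshot_name` helper name is hypothetical, introduced here only for illustration):

```python
def screenshot_name(nodeid: str, timestamp: str) -> str:
    # Same sanitization as take_screenshot: path separators and the
    # pytest "::" separator are not valid in a flat artifact file name.
    return f"{nodeid}_{timestamp}.png".replace("/", "_").replace("::", "__")


name = screenshot_name("tests/e2e/test_vizro_ai_ui.py::test_chart_ui", "2025-01-21_12-00")
print(name)  # tests_e2e_test_vizro_ai_ui.py__test_chart_ui_2025-01-21_12-00.png
```

The timestamp suffix keeps screenshots from reruns of the same test from overwriting each other.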
62 changes: 62 additions & 0 deletions vizro-ai/tests/e2e/test_vizro_ai_ui.py
@@ -0,0 +1,62 @@
import os

import pytest
from e2e_fake_data_generator import create_genre_popularity_by_country
from e2e_waiters import (
    wait_for,
    webdriver_click_waiter,
    webdriver_waiter,
    webdriver_waiter_css,
)
from selenium.common import InvalidSelectorException, TimeoutException


@pytest.mark.parametrize(
    "chromedriver",
    [{"port": 8050}],
    indirect=["chromedriver"],
)
def test_chart_ui(chromedriver):
    # Create test dataset
    popularity_dataset = create_genre_popularity_by_country(start_year=1980, end_year=2023, records_per_year=10)
    # Save to a CSV file
    popularity_dataset.to_csv("tests/tests_utils/genre_popularity_by_country.csv", index=False)

    # fill in values
    api_key = webdriver_waiter(chromedriver, '//*[@id="settings-api-key"]')
    api_base = webdriver_waiter(chromedriver, '//*[@id="settings-api-base"]')
    api_key.send_keys(os.environ["OPENAI_API_KEY"])
    api_base.send_keys(os.environ["OPENAI_API_BASE"])

    # close panel
    webdriver_click_waiter(chromedriver, '//*[@class="btn-close"]')

    # upload file
    file_input = webdriver_waiter_css(chromedriver, 'input[type="file"]')
    file_input.send_keys(os.path.abspath("tests/tests_utils/genre_popularity_by_country.csv"))
    webdriver_click_waiter(chromedriver, '//*[@id="data-upload"]')

    # enter prompt
    prompt = webdriver_waiter(chromedriver, '//*[@id="text-area"]')
    prompt.send_keys("Create bar graph by genre")

    # choose gpt version
    webdriver_click_waiter(chromedriver, '//*[@class="Select-arrow"]')
    webdriver_waiter(chromedriver, '//*[@class="Select-menu-outer"]')
    webdriver_click_waiter(chromedriver, '//*/div[text()="gpt-4o-mini"]')

    # click run VizroAI
    webdriver_click_waiter(chromedriver, '//*[@id="trigger-button"]')

    # check result
    def _text_waiter():
        try:
            webdriver_waiter(
                chromedriver,
                '//*[starts-with(@class, "language-python")]',
            )
            return True
        except (TimeoutException, InvalidSelectorException):
            return False

    wait_for(_text_waiter)
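The `wait_for`, `webdriver_waiter`, and related helpers are imported from `e2e_waiters`, which is not part of this diff. A plausible shape for the generic `wait_for` poller used at the end of the test, written here as an assumption rather than the repo's actual implementation:

```python
import time


def wait_for(condition, timeout=30, period=0.5):
    """Poll `condition` (a zero-argument callable) until it returns a truthy
    value or `timeout` seconds elapse; raise TimeoutError on expiry."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(period)
    raise TimeoutError(f"condition {condition!r} not met within {timeout}s")
```

This pattern fits `_text_waiter` above: the callable swallows `TimeoutException`/`InvalidSelectorException` and returns a boolean, so `wait_for` keeps retrying until the generated code block appears or the overall timeout is reached.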
Empty file.
30 changes: 30 additions & 0 deletions vizro-ai/tests/tests_utils/e2e_asserts.py
@@ -0,0 +1,30 @@
from e2e_constants import (
    INVALID_PROP_ERROR,
    REACT_NOT_RECOGNIZE_ERROR,
    REACT_RENDERING_ERROR,
    READPIXELS_WARNING,
    SCROLL_ZOOM_ERROR,
    UNMOUNT_COMPONENTS_ERROR,
    WEBGL_WARNING,
    WILLMOUNT_RENAMED_WARNING,
    WILLRECEIVEPROPS_RENAMED_WARNING,
)
from hamcrest import any_of, assert_that, contains_string


def browser_console_warnings_checker(log_level, log_levels):
    assert_that(
        log_level["message"],
        any_of(
            contains_string(INVALID_PROP_ERROR),
            contains_string(REACT_NOT_RECOGNIZE_ERROR),
            contains_string(SCROLL_ZOOM_ERROR),
            contains_string(REACT_RENDERING_ERROR),
            contains_string(UNMOUNT_COMPONENTS_ERROR),
            contains_string(WILLMOUNT_RENAMED_WARNING),
            contains_string(WILLRECEIVEPROPS_RENAMED_WARNING),
            contains_string(READPIXELS_WARNING),
            contains_string(WEBGL_WARNING),
        ),
        reason=f"Error output: {log_levels}",
    )
11 changes: 11 additions & 0 deletions vizro-ai/tests/tests_utils/e2e_constants.py
@@ -0,0 +1,11 @@
INVALID_PROP_ERROR = "Invalid prop `persisted_props[0]` of value `on` supplied to `t`"
REACT_NOT_RECOGNIZE_ERROR = "React does not recognize the `%s` prop on a DOM element"
SCROLL_ZOOM_ERROR = "_scrollZoom"
REACT_RENDERING_ERROR = "unstable_flushDiscreteUpdates: Cannot flush updates when React is already rendering"
UNMOUNT_COMPONENTS_ERROR = "React state update on an unmounted component"
WILLMOUNT_RENAMED_WARNING = "componentWillMount has been renamed"
WILLRECEIVEPROPS_RENAMED_WARNING = "componentWillReceiveProps has been renamed"
READPIXELS_WARNING = "GPU stall due to ReadPixels"
WEBGL_WARNING = "WebGL" # https://issues.chromium.org/issues/40277080

TIMEOUT = 30