A Robot Framework listener for library-agnostic self-healing and smart recovery of tests
📙 Documentation can be found here
Install via pip:

```shell
pip install robotframework-heal
```

Add `Library    SelfHealing` to the `*** Settings ***` section of your Robot Framework test suite:
```robotframework
*** Settings ***
Library    SelfHealing
```

Set up the following environment variables to enable the self-healing feature:

- `LLM_API_KEY`
- `LLM_API_BASE`
- `LLM_TEXT_MODEL` (model used for picking the final locator from the proposal list)
- `LLM_LOCATOR_MODEL` (model for generating locator proposals from the DOM tree)
- `LLM_VISION_MODEL` (not working yet)
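Since the listener reads these variables from the process environment, a quick shell check (illustrative, not part of the library) shows which ones are currently set before a run:

```shell
# Print each variable the self-healing listener reads; an empty value means it is unset
for var in LLM_API_KEY LLM_API_BASE LLM_TEXT_MODEL LLM_LOCATOR_MODEL LLM_VISION_MODEL; do
  echo "$var=$(printenv "$var")"
done
```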
The interface with LLMs uses the LiteLLM API.
Check the list of available Providers and how to connect to them.
```robotframework
*** Settings ***
Library           Browser    timeout=5s
Library           SelfHealing    use_llm_for_locator_proposals=True
Suite Setup       New Browser    browser=${BROWSER}    headless=${HEADLESS}
Test Setup        New Context    viewport={'width': 1280, 'height': 720}
Test Teardown     Close Context
Suite Teardown    Close Browser    ALL

*** Variables ***
${BROWSER}        chromium
${HEADLESS}       True

*** Test Cases ***
Login with valid credentials
    New Page     https://the-internet.herokuapp.com/login
    Fill Text    id=user    tomsmith
    Fill Text    id=pass    SuperSecretPassword!
    Click        id=loginbutton
    Get Text     id=flash    *=    You logged into a secure area!
```

The listener accepts the following options:

- `fix`: Specifies the mode of operation; set to `realtime` for real-time healing. Default is `realtime`.
- `collect_locator_info`: Boolean flag to enable or disable the collection of locator information. Default is `False`.
- `use_locator_db`: Boolean flag to enable or disable the use of a locator database. Default is `False`.
- `use_llm_for_locator_proposals`: Boolean flag to enable or disable the use of a language model for generating locator proposals. Default is `False`.
- `heal_assertions`: Boolean flag to enable or disable the healing of assertions. Default is `False`. (not implemented yet)
- `locator_db_file`: Specifies the filename for the locator database. Default is `locator_db.json`.
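These options are passed as arguments on the library import. A minimal sketch (the option values here are illustrative, not recommendations):

```robotframework
*** Settings ***
Library    SelfHealing    fix=realtime    use_locator_db=True    locator_db_file=my_locators.json
```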
Example when running with an Ollama LLM:

```shell
LLM_API_BASE=http://localhost:11434
LLM_TEXT_MODEL=ollama_chat/llama3.1
LLM_LOCATOR_MODEL=ollama_chat/llama3.1
LLM_VISION_MODEL=ollama_chat/llama3.2-vision
```

Example when using OpenAI:

```shell
LLM_API_KEY=YOUR_OPENAI_API_KEY
LLM_TEXT_MODEL=gpt-3.5-turbo
LLM_LOCATOR_MODEL=gpt-3.5-turbo
```
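With the variables exported, the suite runs through the standard `robot` command; a sketch assuming the example test case above is saved as `login.robot` (the file name is an assumption):

```shell
# Export provider settings (placeholder key), then run the suite as usual
export LLM_API_KEY="YOUR_OPENAI_API_KEY"
export LLM_TEXT_MODEL="gpt-3.5-turbo"
export LLM_LOCATOR_MODEL="gpt-3.5-turbo"
robot login.robot
```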
Try it out in Gitpod
