
Commit 20f8cbe

Merge pull request #85 from MarketSquare/feature/temperature_option
feature: added temperature option for models
2 parents: 1f94304 + b3aa9f7

File tree

7 files changed: +44 additions, -23 deletions

README.md

Lines changed: 20 additions & 16 deletions

@@ -173,31 +173,35 @@ REQUEST_LIMIT=5
 TOTAL_TOKENS_LIMIT=6000
 ORCHESTRATOR_AGENT_PROVIDER="openai"
 ORCHESTRATOR_AGENT_MODEL="gpt-4o-mini"
+ORCHESTRATOR_AGENT_TEMPERATURE=0.1
 LOCATOR_AGENT_PROVIDER="openai"
 LOCATOR_AGENT_MODEL="gpt-4o-mini"
+LOCATOR_AGENT_TEMPERATURE=0.1
 LOCATOR_TYPE="css"
 ```
 
 ### 📝 Configuration Parameters
 
-| Name                               | Default         | Required?                | Description                                                                  |
-|------------------------------------|-----------------|--------------------------|----------------------------------------------------------------------------|
-| **OPENAI_API_KEY**                 | `None`          | If using OpenAI          | Your OpenAI API key                                                          |
-| **LITELLM_API_KEY**                | `None`          | If using LiteLLM         | Your LiteLLM API key                                                         |
-| **AZURE_API_KEY**                  | `None`          | If using Azure           | Your Azure OpenAI API key                                                    |
-| **AZURE_API_VERSION**              | `None`          | If using Azure           | Azure OpenAI API version                                                     |
-| **AZURE_ENDPOINT**                 | `None`          | If using Azure           | Azure OpenAI endpoint                                                        |
-| **BASE_URL**                       | `None`          | No                       | Endpoint to connect to (if required)                                         |
-| **ENABLE_SELF_HEALING**            | `True`          | No                       | Enable or disable SelfhealingAgents                                          |
-| **USE_LLM_FOR_LOCATOR_GENERATION** | `True`          | No                       | If `True`, LLM generates locator suggestions directly (see note below)       |
-| **MAX_RETRIES**                    | `3`             | No                       | Number of self-healing attempts per locator                                  |
-| **REQUEST_LIMIT**                  | `5`             | No                       | Internal agent-level limit for valid LLM response attempts                   |
-| **TOTAL_TOKENS_LIMIT**             | `6000`          | No                       | Maximum input tokens per LLM request                                         |
-| **ORCHESTRATOR_AGENT_PROVIDER**    | `"openai"`      | No                       | Provider for the orchestrator agent (`"openai"`, `"azure"` or `"litellm"`)   |
+| Name                               | Default        | Required?                | Description                                                                  |
+|------------------------------------|----------------|--------------------------|----------------------------------------------------------------------------|
+| **OPENAI_API_KEY**                 | `None`         | If using OpenAI          | Your OpenAI API key                                                          |
+| **LITELLM_API_KEY**                | `None`         | If using LiteLLM         | Your LiteLLM API key                                                         |
+| **AZURE_API_KEY**                  | `None`         | If using Azure           | Your Azure OpenAI API key                                                    |
+| **AZURE_API_VERSION**              | `None`         | If using Azure           | Azure OpenAI API version                                                     |
+| **AZURE_ENDPOINT**                 | `None`         | If using Azure           | Azure OpenAI endpoint                                                        |
+| **BASE_URL**                       | `None`         | No                       | Endpoint to connect to (if required)                                         |
+| **ENABLE_SELF_HEALING**            | `True`         | No                       | Enable or disable SelfhealingAgents                                          |
+| **USE_LLM_FOR_LOCATOR_GENERATION** | `True`         | No                       | If `True`, LLM generates locator suggestions directly (see note below)       |
+| **MAX_RETRIES**                    | `3`            | No                       | Number of self-healing attempts per locator                                  |
+| **REQUEST_LIMIT**                  | `5`            | No                       | Internal agent-level limit for valid LLM response attempts                   |
+| **TOTAL_TOKENS_LIMIT**             | `6000`         | No                       | Maximum input tokens per LLM request                                         |
+| **ORCHESTRATOR_AGENT_PROVIDER**    | `"openai"`     | No                       | Provider for the orchestrator agent (`"openai"`, `"azure"` or `"litellm"`)   |
 | **ORCHESTRATOR_AGENT_MODEL**       | `"gpt-4o-mini"` | No                      | Model for the orchestrator agent                                             |
-| **LOCATOR_AGENT_PROVIDER**         | `"openai"`      | No                       | Provider for the locator agent (`"openai"`, `"azure"` or `"litellm"`)        |
+| **ORCHESTRATOR_AGENT_TEMPERATURE** | `0.1`          | No                       | Orchestrator model temperature                                               |
+| **LOCATOR_AGENT_PROVIDER**         | `"openai"`     | No                       | Provider for the locator agent (`"openai"`, `"azure"` or `"litellm"`)        |
 | **LOCATOR_AGENT_MODEL**            | `"gpt-4o-mini"` | No                      | Model for the locator agent                                                  |
-| **LOCATOR_TYPE**                   | `"css"`         | No                       | Restricts the locator suggestions of the agent to the given type             |
+| **LOCATOR_AGENT_TEMPERATURE**      | `0.1`          | No                       | Locator model temperature                                                    |
+| **LOCATOR_TYPE**                   | `"css"`        | No                       | Restricts the locator suggestions of the agent to the given type             |
 
 > **Note:**
 > Locator suggestions can be generated either by assembling strings from the DOM tree (with an LLM selecting the best option), or by having the LLM generate suggestions directly itself with the context given (DOM included). Set `USE_LLM_FOR_LOCATOR_GENERATION` to `True` to enable direct LLM generation (default is True).
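The `.env` keys above can be read with any settings loader; as a minimal stdlib illustration of the positive-temperature convention (not the package's actual loader — `read_temperature` is a hypothetical helper):

```python
import os


def read_temperature(name: str, default: float = 0.1) -> float:
    """Read a positive float from the environment, falling back to a default."""
    raw = os.environ.get(name)
    if raw is None:
        return default
    value = float(raw)
    if value <= 0:
        # Mirrors the package's Cfg convention: temperatures must be > 0.
        raise ValueError(f"{name} must be greater than 0, got {value}")
    return value


os.environ["ORCHESTRATOR_AGENT_TEMPERATURE"] = "0.1"
os.environ.pop("LOCATOR_AGENT_TEMPERATURE", None)
print(read_temperature("ORCHESTRATOR_AGENT_TEMPERATURE"))  # 0.1
print(read_temperature("LOCATOR_AGENT_TEMPERATURE"))       # 0.1 (falls back to default)
```

Unset variables fall back to `0.1`, matching the defaults in the table above.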

SelfhealingAgents/self_healing_system/agents/locator_agent/base_locator_agent.py

Lines changed: 4 additions & 2 deletions

@@ -30,6 +30,7 @@ class BaseLocatorAgent(ABC):
     Defines the common interface and shared functionality for all locator agent flavors.
 
     Attributes:
+        _cfg (Cfg): An instance of the Cfg config class containing user-defined application configuration.
         _usage_limits (UsageLimits): Usage token and request limits.
         _dom_utility (BaseDomUtils): DOM utility instance for the specific library.
         _use_llm_for_locator_generation (bool): Whether to use LLM for locator generation.
@@ -45,6 +46,7 @@ def __init__(self, cfg: Cfg, dom_utility: BaseDomUtils) -> None:
             cfg (Cfg): Instance of Cfg config class containing user-defined app configuration.
             dom_utility (BaseDomUtils): DOM utility instance for validation.
         """
+        self._cfg = cfg
         self._usage_limits: UsageLimits = UsageLimits(
             request_limit=cfg.request_limit, total_tokens_limit=cfg.total_tokens_limit
         )
@@ -217,7 +219,7 @@ async def _heal_with_llm(
             PromptsLocatorGenerationAgent.get_user_msg(ctx),
             deps=ctx.deps,
             usage_limits=self._usage_limits,
-            model_settings={"temperature": 0.1},
+            model_settings={"temperature": self._cfg.locator_agent_temperature},
         )
         if not isinstance(response.output, LocatorHealingResponse):
             raise ModelRetry(
@@ -319,7 +321,7 @@ async def _heal_with_dom_utils(
             ),
             deps=ctx.deps,
             usage_limits=self._usage_limits,
-            model_settings={"temperature": 0.1},
+            model_settings={"temperature": self._cfg.locator_agent_temperature},
         )
 
         # Parse the selected locator from the response

SelfhealingAgents/self_healing_system/agents/orchestrator_agent/orchestrator_agent.py

Lines changed: 1 addition & 1 deletion

@@ -78,7 +78,7 @@ async def run_async(
             PromptsOrchestrator.get_user_msg(robot_ctx_payload),
             deps=robot_ctx_payload,
             usage_limits=self._usage_limits,
-            model_settings={"temperature": 0.1, "parallel_tool_calls": False},
+            model_settings={"temperature": self._cfg.orchestrator_agent_temperature, "parallel_tool_calls": False},
         )
         self._catch_token_limit_exceedance(response.output)
         return response.output

SelfhealingAgents/utils/cfg.py

Lines changed: 6 additions & 0 deletions

@@ -32,6 +32,9 @@ class Cfg(BaseSettings):
         "gpt-4o-mini", env="ORCHESTRATOR_AGENT_MODEL",
         description="Model selection for orchestrator agent."
     )
+    orchestrator_agent_temperature: float = Field(
+        0.1, gt=0, env="ORCHESTRATOR_AGENT_TEMPERATURE"
+    )
     locator_agent_provider: str = Field(
         "openai", env="LOCATOR_AGENT_PROVIDER",
         description="LLM Provider for Locator agent - Options: 'openai', 'azure'."
@@ -40,6 +43,9 @@ class Cfg(BaseSettings):
         "gpt-4o-mini", env="LOCATOR_AGENT_MODEL",
         description="Model selection for locator agent."
     )
+    locator_agent_temperature: float = Field(
+        0.1, gt=0, env="LOCATOR_AGENT_TEMPERATURE"
+    )
     request_limit: int = Field(
         5, env="REQUEST_LIMIT",
         description="Request limit for each agent."
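`Field(0.1, gt=0, env=...)` gives each new setting a default of `0.1` and rejects non-positive values when the config is loaded. A stdlib approximation of that constraint (pydantic not required; `CfgSketch` is illustrative only):

```python
from dataclasses import dataclass


@dataclass
class CfgSketch:
    """Stdlib approximation of the two pydantic fields added in the diff;
    pydantic's Field(0.1, gt=0) rejects non-positive values at init time."""
    orchestrator_agent_temperature: float = 0.1
    locator_agent_temperature: float = 0.1

    def __post_init__(self) -> None:
        for name in ("orchestrator_agent_temperature", "locator_agent_temperature"):
            value = getattr(self, name)
            if not value > 0:  # mirrors pydantic's gt=0 constraint
                raise ValueError(f"{name} must be > 0, got {value}")


CfgSketch()  # defaults pass validation
try:
    CfgSketch(locator_agent_temperature=0.0)
except ValueError as exc:
    print(exc)  # locator_agent_temperature must be > 0, got 0.0
```

One design note: `gt=0` also rejects a temperature of exactly `0`, which some providers accept as fully deterministic sampling; `ge=0` would permit it.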

pyproject.toml

Lines changed: 2 additions & 2 deletions

@@ -1,14 +1,14 @@
 [project]
 name = "robotframework-selfhealing-agents"
-version = "0.1.2"
+version = "0.1.3"
 description = ""
 authors = [
     { name = "viadee IT-Unternehmensberatung AG <[email protected]>" },
     { name = "Many Kasiriha <[email protected]>" },
 ]
 license = "Apache-2.0"
 readme = "README.md"
-requires-python = ">=3.12,<4.0"
+requires-python = ">=3.10,<4.0"
 dependencies = [
     "pydantic-ai-slim[logfire, openai, azure, litellm]",
     "robotframework >=7.1",

tests/utest/test_base_locator_agent.py

Lines changed: 5 additions & 1 deletion

@@ -131,6 +131,8 @@ def __init__(self) -> None:
         self.use_llm_for_locator_generation: bool = True
         self.locator_agent_provider: str = "prov"
         self.locator_agent_model: str = "mod"
+        self.locator_agent_temperature: float = 0.1
+        self.orchestrator_agent_temperature: float = 0.1
 
     cfg_mod.Cfg = Cfg
     _force_module("SelfhealingAgents.utils", aid_utils)
@@ -332,6 +334,8 @@ class Cfg:
         use_llm_for_locator_generation: bool = use_llm
         locator_agent_provider: str = "prov"
         locator_agent_model: str = "mod"
+        locator_agent_temperature: float = 0.1
+        orchestrator_agent_temperature: float = 0.1
 
     return Impl(Cfg(), dom)
@@ -452,4 +456,4 @@ def test_sort_and_filter_helpers(mod_and_cls: Tuple[Any, Any, Any, Any]) -> None
     sorted_list = inst._sort_locators(["b", "a", "c"])
     assert sorted_list[:2] == ["a", "c"]
     filtered = inst._filter_clickable_locators(["a", "b", "c"])
-    assert filtered == ["a", "c"]
\ No newline at end of file
+    assert filtered == ["a", "c"]

tests/utest/test_orchestrator_agent.py

Lines changed: 6 additions & 1 deletion

@@ -247,6 +247,7 @@ class FakeCfg:
     total_tokens_limit: int = 1000
     orchestrator_agent_provider: str = "prov"
     orchestrator_agent_model: str = "mod"
+    orchestrator_agent_temperature: float = 0.1
 
     orch = OrchestratorAgent(FakeCfg(), _FakeLocatorAgent(is_failed=False))
     payload = PromptPayload(
@@ -277,6 +278,7 @@ class FakeCfg:
     total_tokens_limit: int = 1000
     orchestrator_agent_provider: str = "prov"
     orchestrator_agent_model: str = "mod"
+    orchestrator_agent_temperature: float = 0.1
 
     orch = OrchestratorAgent(FakeCfg(), _FakeLocatorAgent(is_failed=True))
     payload = PromptPayload(
@@ -316,6 +318,7 @@ class FakeCfg:
     total_tokens_limit: int = 1000
     orchestrator_agent_provider: str = "prov"
     orchestrator_agent_model: str = "mod"
+    orchestrator_agent_temperature: float = 0.1
 
     orch = OrchestratorAgent(FakeCfg(), _FakeLocatorAgent(is_failed=True))
     payload = PromptPayload(
@@ -353,6 +356,7 @@ class FakeCfg:
     total_tokens_limit: int = 1000
     orchestrator_agent_provider: str = "prov"
     orchestrator_agent_model: str = "mod"
+    orchestrator_agent_temperature: float = 0.1
 
     orch = OrchestratorAgent(
         FakeCfg(),
@@ -373,6 +377,7 @@ class FakeCfg:
     total_tokens_limit: int = 1000
     orchestrator_agent_provider: str = "prov"
     orchestrator_agent_model: str = "mod"
+    orchestrator_agent_temperature: float = 0.1
 
     orch = OrchestratorAgent(
         FakeCfg(), _FakeLocatorAgent(is_failed=True, raise_on_heal=True)
@@ -390,4 +395,4 @@ def test_catch_token_limit_exceedance_logs(
     assert logger.infos == ["error: out of tokens"]
     logger.infos.clear()
     OrchestratorAgent._catch_token_limit_exceedance("ok")
-    assert logger.infos == []
\ No newline at end of file
+    assert logger.infos == []
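The repeated one-line additions to each `FakeCfg` above illustrate a cost of duck-typed fakes: once production code reads a new config attribute, every fake in the test suite must grow it or the tests fail with `AttributeError`. A minimal sketch of that failure mode (class and function names here are illustrative, not the suite's actual helpers):

```python
class OldFakeCfg:
    """Fake config from before the change: no temperature attribute."""
    request_limit = 5
    total_tokens_limit = 1000
    orchestrator_agent_provider = "prov"
    orchestrator_agent_model = "mod"


class NewFakeCfg(OldFakeCfg):
    """Updated fake mirroring the diff: the agent now reads this attribute."""
    orchestrator_agent_temperature = 0.1


def settings_for(cfg) -> dict:
    # Hypothetical stand-in for the agent code that builds model settings.
    return {
        "temperature": cfg.orchestrator_agent_temperature,
        "parallel_tool_calls": False,
    }


try:
    settings_for(OldFakeCfg())  # the stale fake raises AttributeError
except AttributeError as exc:
    print("stale fake:", exc)

print(settings_for(NewFakeCfg()))  # {'temperature': 0.1, 'parallel_tool_calls': False}
```

Sharing one fake config fixture across test files would reduce this duplication to a single edit per new field.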

0 commit comments