From c442768fb56fe95ba1dcf74d8262588457bbe78c Mon Sep 17 00:00:00 2001
From: alexsin368
Date: Wed, 30 Apr 2025 17:57:38 -0700
Subject: [PATCH 1/5] add support for remote server

Signed-off-by: alexsin368
---
 AgentQnA/README.md                          |  52 ++++++--
 .../intel/cpu/xeon/compose_remote.yaml      | 119 ++++++++++++++++++
 2 files changed, 159 insertions(+), 12 deletions(-)
 create mode 100644 AgentQnA/docker_compose/intel/cpu/xeon/compose_remote.yaml

diff --git a/AgentQnA/README.md b/AgentQnA/README.md
index 6844f716e7..ff8f6cc287 100644
--- a/AgentQnA/README.md
+++ b/AgentQnA/README.md
@@ -99,7 +99,7 @@ flowchart LR

 #### First, clone the `GenAIExamples` repo.

-```
+```bash
 export WORKDIR=
 cd $WORKDIR
 git clone https://github.com/opea-project/GenAIExamples.git
@@ -109,7 +109,7 @@ git clone https://github.com/opea-project/GenAIExamples.git

 ##### For proxy environments only

-```
+```bash
 export http_proxy="Your_HTTP_Proxy"
 export https_proxy="Your_HTTPs_Proxy"
 # Example: no_proxy="localhost, 127.0.0.1, 192.168.1.1"
 export no_proxy="Your_No_Proxy"
 ```

 ##### For using open-source llms
+Set up a [HuggingFace](https://huggingface.co/) account and generate a [user access token](https://huggingface.co/docs/transformers.js/en/guides/private#step-1-generating-a-user-access-token).

-```
+Then set an environment variable with the token and another for a directory to download the models:
+```bash
 export HUGGINGFACEHUB_API_TOKEN=
-export HF_CACHE_DIR= #so that no need to redownload every time
+export HF_CACHE_DIR= # to avoid redownloading models
 ```

-##### [Optional] OPANAI_API_KEY to use OpenAI models
+##### [Optional] OPENAI_API_KEY to use OpenAI models or Enterprise Inference
+To use OpenAI models, generate a key following these [instructions](https://platform.openai.com/api-keys).

-```
+To use a remote server running Enterprise Inference, contact the cloud service provider or owner of the on-prem machine for a key to access the desired model on the server.
+
+Then set the environment variable `OPENAI_API_KEY` with the key contents:
+```bash
 export OPENAI_API_KEY=
 ```

@@ -133,16 +139,18 @@ export OPENAI_API_KEY=

 ##### Gaudi

-```
+```bash
 source $WORKDIR/GenAIExamples/AgentQnA/docker_compose/intel/hpu/gaudi/set_env.sh
 ```

 ##### Xeon

-```
+```bash
 source $WORKDIR/GenAIExamples/AgentQnA/docker_compose/intel/cpu/xeon/set_env.sh
 ```

 ### 2. Launch the multi-agent system.
 We make it convenient to launch the whole system with docker compose, which includes microservices for LLM, agents, UI, retrieval tool, vector database, dataprep, and telemetry. There are 3 docker compose files, which make it easy for users to pick and choose. Users can choose a retrieval tool other than the `DocIndexRetriever` example provided in our GenAIExamples repo. Users can choose not to launch the telemetry containers.

@@ -184,14 +192,29 @@ docker compose -f $WORKDIR/GenAIExamples/DocIndexRetriever/docker_compose/intel/

 #### Launch on Xeon

-On Xeon, only OpenAI models are supported. The command below will launch the multi-agent system with the `DocIndexRetriever` as the retrieval tool for the Worker RAG agent.
+On Xeon, OpenAI models and models deployed on a remote server are supported. Both methods require an API key.

 ```bash
 export OPENAI_API_KEY=
 cd $WORKDIR/GenAIExamples/AgentQnA/docker_compose/intel/cpu/xeon
+```
+
+##### OpenAI Models
+The command below will launch the multi-agent system with the `DocIndexRetriever` as the retrieval tool for the Worker RAG agent.
+
+```bash
 docker compose -f $WORKDIR/GenAIExamples/DocIndexRetriever/docker_compose/intel/cpu/xeon/compose.yaml -f compose_openai.yaml up -d
 ```

+##### Models on Remote Server
+To run on Xeon with models deployed on a remote server, run with the `compose_remote.yaml` instead. Additional environment variables also need to be set.
+
+```bash
+export model=
+export LLM_ENDPOINT_URL=
+docker compose -f $WORKDIR/GenAIExamples/DocIndexRetriever/docker_compose/intel/cpu/xeon/compose.yaml -f compose_remote.yaml up -d
+```
+
 ### 3. Ingest Data into the vector database

 The `run_ingest_data.sh` script will use an example jsonl file to ingest example documents into a vector database. Other ways to ingest data and other types of documents supported can be found in the OPEA dataprep microservice located in the opea-project/GenAIComps repo.
@@ -208,12 +231,17 @@ bash run_ingest_data.sh

 The UI microservice is launched in the previous step with the other microservices. To see the UI, open a web browser to `http://${ip_address}:5173` to access the UI. Note the `ip_address` here is the host IP of the UI microservice.

-1. `create Admin Account` with a random value
-2. add opea agent endpoint `http://$ip_address:9090/v1` which is a openai compatible api
+1. Click on the arrow above `Get started`. Create an admin account with a name, email, and password.
+2. Add an OpenAI-compatible API endpoint. In the upper right, click on the circle button with the user's initial, go to `Admin Settings`->`Connections`. Under `Manage OpenAI API Connections`, click on the `+` to add a connection. Fill in these fields:
+   - **URL**: `http://${ip_address}:9090/v1`, do not forget the `/v1`
+   - **Key**: any value
+   - **Model IDs**: any name, e.g. `opea-agent`, then press `+` to add it
+
+   Click "Save".

 ![opea-agent-setting](assets/img/opea-agent-setting.png)

-3. test opea agent with ui
+3. Test OPEA agent with UI. Return to `New Chat` and ensure the model (e.g. `opea-agent`) is selected near the upper left. Enter any prompt to interact with the agent.
 ![opea-agent-test](assets/img/opea-agent-test.png)

diff --git a/AgentQnA/docker_compose/intel/cpu/xeon/compose_remote.yaml b/AgentQnA/docker_compose/intel/cpu/xeon/compose_remote.yaml
new file mode 100644
index 0000000000..be4f35f6d0
--- /dev/null
+++ b/AgentQnA/docker_compose/intel/cpu/xeon/compose_remote.yaml
@@ -0,0 +1,119 @@
+# Copyright (C) 2025 Intel Corporation
+# SPDX-License-Identifier: Apache-2.0
+
+services:
+  worker-rag-agent:
+    image: opea/agent:latest
+    container_name: rag-agent-endpoint
+    volumes:
+      - ${TOOLSET_PATH}:/home/user/tools/
+    ports:
+      - "9095:9095"
+    ipc: host
+    environment:
+      ip_address: ${ip_address}
+      strategy: rag_agent
+      with_memory: false
+      recursion_limit: ${recursion_limit_worker}
+      llm_engine: openai
+      llm_endpoint_url: ${LLM_ENDPOINT_URL}
+      api_key: ${OPENAI_API_KEY}
+      use_remote_service: true
+      model: ${model}
+      temperature: ${temperature}
+      max_new_tokens: ${max_new_tokens}
+      stream: false
+      tools: /home/user/tools/worker_agent_tools.yaml
+      require_human_feedback: false
+      RETRIEVAL_TOOL_URL: ${RETRIEVAL_TOOL_URL}
+      no_proxy: ${no_proxy}
+      http_proxy: ${http_proxy}
+      https_proxy: ${https_proxy}
+      LANGCHAIN_API_KEY: ${LANGCHAIN_API_KEY}
+      LANGCHAIN_TRACING_V2: ${LANGCHAIN_TRACING_V2}
+      LANGCHAIN_PROJECT: "opea-worker-agent-service"
+      port: 9095
+
+  worker-sql-agent:
+    image: opea/agent:latest
+    container_name: sql-agent-endpoint
+    volumes:
+      - ${WORKDIR}/GenAIExamples/AgentQnA/tests:/home/user/chinook-db # SQL database
+    ports:
+      - "9096:9096"
+    ipc: host
+    environment:
+      ip_address: ${ip_address}
+      strategy: sql_agent
+      with_memory: false
+      db_name: ${db_name}
+      db_path: ${db_path}
+      use_hints: false
+      recursion_limit: ${recursion_limit_worker}
+      llm_engine: openai
+      llm_endpoint_url: ${LLM_ENDPOINT_URL}
+      api_key: ${OPENAI_API_KEY}
+      use_remote_service: true
+      model: ${model}
+      temperature: 0
+      max_new_tokens: ${max_new_tokens}
+      stream: false
+      require_human_feedback: false
+      no_proxy: ${no_proxy}
+      http_proxy: ${http_proxy}
+      https_proxy: ${https_proxy}
+      port: 9096
+
+  supervisor-react-agent:
+    image: opea/agent:latest
+    container_name: react-agent-endpoint
+    depends_on:
+      - worker-rag-agent
+      - worker-sql-agent
+    volumes:
+      - ${TOOLSET_PATH}:/home/user/tools/
+    ports:
+      - "9090:9090"
+    ipc: host
+    environment:
+      ip_address: ${ip_address}
+      strategy: react_llama
+      with_memory: true
+      recursion_limit: ${recursion_limit_supervisor}
+      llm_engine: openai
+      llm_endpoint_url: ${LLM_ENDPOINT_URL}
+      api_key: ${OPENAI_API_KEY}
+      use_remote_service: true
+      model: ${model}
+      temperature: ${temperature}
+      max_new_tokens: ${max_new_tokens}
+      stream: true
+      tools: /home/user/tools/supervisor_agent_tools.yaml
+      require_human_feedback: false
+      no_proxy: ${no_proxy}
+      http_proxy: ${http_proxy}
+      https_proxy: ${https_proxy}
+      LANGCHAIN_API_KEY: ${LANGCHAIN_API_KEY}
+      LANGCHAIN_TRACING_V2: ${LANGCHAIN_TRACING_V2}
+      LANGCHAIN_PROJECT: "opea-supervisor-agent-service"
+      CRAG_SERVER: $CRAG_SERVER
+      WORKER_AGENT_URL: $WORKER_AGENT_URL
+      SQL_AGENT_URL: $SQL_AGENT_URL
+      port: 9090
+  mock-api:
+    image: docker.io/aicrowd/kdd-cup-24-crag-mock-api:v0
+    container_name: mock-api
+    ports:
+      - "8080:8000"
+    ipc: host
+  agent-ui:
+    image: opea/agent-ui
+    container_name: agent-ui
+    ports:
+      - "5173:8080"
+    ipc: host
+
+networks:
+  default:
+    driver: bridge
+

From a9cbcbe3ff61aaa67390e58e933117c3d7632e83 Mon Sep 17 00:00:00 2001
From: "pre-commit-ci[bot]" <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Date: Fri, 2 May 2025 00:47:58 +0000
Subject: [PATCH 2/5] [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---
 AgentQnA/README.md                          | 19 +++++++++++++------
 .../intel/cpu/xeon/compose_remote.yaml      |  1 -
 2 files changed, 13 insertions(+), 7 deletions(-)

diff --git a/AgentQnA/README.md b/AgentQnA/README.md
index ff8f6cc287..3f7825f9e3 100644
--- a/AgentQnA/README.md
+++ b/AgentQnA/README.md
@@ -117,20 +117,24 @@ export no_proxy="Your_No_Proxy"
 ```

 ##### For using open-source llms
+
 Set up a [HuggingFace](https://huggingface.co/) account and generate a [user access token](https://huggingface.co/docs/transformers.js/en/guides/private#step-1-generating-a-user-access-token).

 Then set an environment variable with the token and another for a directory to download the models:
+
 ```bash
 export HUGGINGFACEHUB_API_TOKEN=
 export HF_CACHE_DIR= # to avoid redownloading models
 ```

 ##### [Optional] OPENAI_API_KEY to use OpenAI models or Enterprise Inference
+
 To use OpenAI models, generate a key following these [instructions](https://platform.openai.com/api-keys).

 To use a remote server running Enterprise Inference, contact the cloud service provider or owner of the on-prem machine for a key to access the desired model on the server.

 Then set the environment variable `OPENAI_API_KEY` with the key contents:
+
 ```bash
 export OPENAI_API_KEY=
 ```
@@ -200,6 +204,7 @@ cd $WORKDIR/GenAIExamples/AgentQnA/docker_compose/intel/cpu/xeon
 ```

 ##### OpenAI Models
+
 The command below will launch the multi-agent system with the `DocIndexRetriever` as the retrieval tool for the Worker RAG agent.

 ```bash
@@ -207,7 +212,8 @@ docker compose -f $WORKDIR/GenAIExamples/DocIndexRetriever/docker_compose/intel/
 ```

 ##### Models on Remote Server
+
-To run on Xeon with models deployed on a remote server, run with the `compose_remote.yaml` instead. Additional environment variables also need to be set. 
+To run on Xeon with models deployed on a remote server, run with the `compose_remote.yaml` instead. Additional environment variables also need to be set.

 ```bash
 export model=
@@ -233,11 +239,12 @@ To see the UI, open a web browser to `http://${ip_address}:5173` to access the U

 1. Click on the arrow above `Get started`. Create an admin account with a name, email, and password.
 2. Add an OpenAI-compatible API endpoint. In the upper right, click on the circle button with the user's initial, go to `Admin Settings`->`Connections`. Under `Manage OpenAI API Connections`, click on the `+` to add a connection. Fill in these fields:
-   - **URL**: `http://${ip_address}:9090/v1`, do not forget the `/v1`
-   - **Key**: any value
-   - **Model IDs**: any name, e.g. `opea-agent`, then press `+` to add it
-   Click "Save".
+- **URL**: `http://${ip_address}:9090/v1`, do not forget the `/v1`
+- **Key**: any value
+- **Model IDs**: any name, e.g. `opea-agent`, then press `+` to add it
+
+Click "Save".

 ![opea-agent-setting](assets/img/opea-agent-setting.png)

diff --git a/AgentQnA/docker_compose/intel/cpu/xeon/compose_remote.yaml b/AgentQnA/docker_compose/intel/cpu/xeon/compose_remote.yaml
index be4f35f6d0..fe3ea504e2 100644
--- a/AgentQnA/docker_compose/intel/cpu/xeon/compose_remote.yaml
+++ b/AgentQnA/docker_compose/intel/cpu/xeon/compose_remote.yaml
@@ -116,4 +116,3 @@ services:
 networks:
   default:
     driver: bridge
-

From 1ccdda04bf2c387c5bca5dd536e2430031951a31 Mon Sep 17 00:00:00 2001
From: alexsin368
Date: Fri, 2 May 2025 17:34:11 -0700
Subject: [PATCH 3/5] address comments, simplify compose_remote.yaml

Signed-off-by: alexsin368
---
 AgentQnA/README.md                          |  13 ++-
 .../intel/cpu/xeon/compose_remote.yaml      | 101 ------------------
 2 files changed, 9 insertions(+), 105 deletions(-)

diff --git a/AgentQnA/README.md b/AgentQnA/README.md
index ff8f6cc287..2fd54dd9cb 100644
--- a/AgentQnA/README.md
+++ b/AgentQnA/README.md
@@ -125,10 +125,10 @@ export HUGGINGFACEHUB_API_TOKEN=
 export HF_CACHE_DIR= # to avoid redownloading models
 ```

-##### [Optional] OPENAI_API_KEY to use OpenAI models or Enterprise Inference
+##### [Optional] OPENAI_API_KEY to use OpenAI models or Intel® AI for Enterprise Inference
 To use OpenAI models, generate a key following these [instructions](https://platform.openai.com/api-keys).

-To use a remote server running Enterprise Inference, contact the cloud service provider or owner of the on-prem machine for a key to access the desired model on the server.
+To use a remote server running Intel® AI for Enterprise Inference, contact the cloud service provider or owner of the on-prem machine for a key to access the desired model on the server.

 Then set the environment variable `OPENAI_API_KEY` with the key contents:
 ```bash
 export OPENAI_API_KEY=
 ```
@@ -207,12 +207,17 @@ docker compose -f $WORKDIR/GenAIExamples/DocIndexRetriever/docker_compose/intel/
 ```

 ##### Models on Remote Server
-To run on Xeon with models deployed on a remote server, run with the `compose_remote.yaml` instead. Additional environment variables also need to be set.
+When models are deployed on a remote server with Intel® AI for Enterprise Inference, a base URL and an API key are required to access them. To run the Agent microservice on Xeon while using models deployed on a remote server, add `compose_remote.yaml` to the `docker compose` command and set additional environment variables.
+
+###### Notes
+- `OPENAI_API_KEY` is already set in a previous step.
+- `model` overrides the value set for this environment variable in `set_env.sh`.
+- `LLM_ENDPOINT_URL` is the base URL provided by the owner of the on-prem machine or cloud service provider. It will follow this format: "https://<domain-name>". Here is an example: "https://api.inference.example.com".

 ```bash
 export model=
 export LLM_ENDPOINT_URL=
-docker compose -f $WORKDIR/GenAIExamples/DocIndexRetriever/docker_compose/intel/cpu/xeon/compose.yaml -f compose_remote.yaml up -d
+docker compose -f $WORKDIR/GenAIExamples/DocIndexRetriever/docker_compose/intel/cpu/xeon/compose.yaml -f compose_openai.yaml -f compose_remote.yaml up -d
 ```

 ### 3. Ingest Data into the vector database

diff --git a/AgentQnA/docker_compose/intel/cpu/xeon/compose_remote.yaml b/AgentQnA/docker_compose/intel/cpu/xeon/compose_remote.yaml
index be4f35f6d0..24536435a3 100644
--- a/AgentQnA/docker_compose/intel/cpu/xeon/compose_remote.yaml
+++ b/AgentQnA/docker_compose/intel/cpu/xeon/compose_remote.yaml
@@ -3,117 +3,16 @@

 services:
   worker-rag-agent:
-    image: opea/agent:latest
-    container_name: rag-agent-endpoint
-    volumes:
-      - ${TOOLSET_PATH}:/home/user/tools/
-    ports:
-      - "9095:9095"
-    ipc: host
     environment:
-      ip_address: ${ip_address}
-      strategy: rag_agent
-      with_memory: false
-      recursion_limit: ${recursion_limit_worker}
-      llm_engine: openai
       llm_endpoint_url: ${LLM_ENDPOINT_URL}
       api_key: ${OPENAI_API_KEY}
-      use_remote_service: true
-      model: ${model}
-      temperature: ${temperature}
-      max_new_tokens: ${max_new_tokens}
-      stream: false
-      tools: /home/user/tools/worker_agent_tools.yaml
-      require_human_feedback: false
-      RETRIEVAL_TOOL_URL: ${RETRIEVAL_TOOL_URL}
-      no_proxy: ${no_proxy}
-      http_proxy: ${http_proxy}
-      https_proxy: ${https_proxy}
-      LANGCHAIN_API_KEY: ${LANGCHAIN_API_KEY}
-      LANGCHAIN_TRACING_V2: ${LANGCHAIN_TRACING_V2}
-      LANGCHAIN_PROJECT: "opea-worker-agent-service"
-      port: 9095

   worker-sql-agent:
-    image: opea/agent:latest
-    container_name: sql-agent-endpoint
-    volumes:
-      - ${WORKDIR}/GenAIExamples/AgentQnA/tests:/home/user/chinook-db # SQL database
-    ports:
-      - "9096:9096"
-    ipc: host
     environment:
-      ip_address: ${ip_address}
-      strategy: sql_agent
-      with_memory: false
-      db_name: ${db_name}
-      db_path: ${db_path}
-      use_hints: false
-      recursion_limit: ${recursion_limit_worker}
-      llm_engine: openai
       llm_endpoint_url: ${LLM_ENDPOINT_URL}
       api_key: ${OPENAI_API_KEY}
-      use_remote_service: true
-      model: ${model}
-      temperature: 0
-      max_new_tokens: ${max_new_tokens}
-      stream: false
-      require_human_feedback: false
-      no_proxy: ${no_proxy}
-      http_proxy: ${http_proxy}
-      https_proxy: ${https_proxy}
-      port: 9096

   supervisor-react-agent:
-    image: opea/agent:latest
-    container_name: react-agent-endpoint
-    depends_on:
-      - worker-rag-agent
-      - worker-sql-agent
-    volumes:
-      - ${TOOLSET_PATH}:/home/user/tools/
-    ports:
-      - "9090:9090"
-    ipc: host
     environment:
-      ip_address: ${ip_address}
-      strategy: react_llama
-      with_memory: true
-      recursion_limit: ${recursion_limit_supervisor}
-      llm_engine: openai
       llm_endpoint_url: ${LLM_ENDPOINT_URL}
       api_key: ${OPENAI_API_KEY}
-      use_remote_service: true
-      model: ${model}
-      temperature: ${temperature}
-      max_new_tokens: ${max_new_tokens}
-      stream: true
-      tools: /home/user/tools/supervisor_agent_tools.yaml
-      require_human_feedback: false
-      no_proxy: ${no_proxy}
-      http_proxy: ${http_proxy}
-      https_proxy: ${https_proxy}
-      LANGCHAIN_API_KEY: ${LANGCHAIN_API_KEY}
-      LANGCHAIN_TRACING_V2: ${LANGCHAIN_TRACING_V2}
-      LANGCHAIN_PROJECT: "opea-supervisor-agent-service"
-      CRAG_SERVER: $CRAG_SERVER
-      WORKER_AGENT_URL: $WORKER_AGENT_URL
-      SQL_AGENT_URL: $SQL_AGENT_URL
-      port: 9090
-  mock-api:
-    image: docker.io/aicrowd/kdd-cup-24-crag-mock-api:v0
-    container_name: mock-api
-    ports:
-      - "8080:8000"
-    ipc: host
-  agent-ui:
-    image: opea/agent-ui
-    container_name: agent-ui
-    ports:
-      - "5173:8080"
-    ipc: host
-
-networks:
-  default:
-    driver: bridge
-

From 2137e6f274a0877770a1d0007d11ae528e76465c Mon Sep 17 00:00:00 2001
From: "pre-commit-ci[bot]" <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Date: Sat, 3 May 2025 00:36:38 +0000
Subject: [PATCH 4/5] [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci
---
 AgentQnA/README.md | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/AgentQnA/README.md b/AgentQnA/README.md
index 32fbdd5b55..c78703d6fb 100644
--- a/AgentQnA/README.md
+++ b/AgentQnA/README.md
@@ -128,6 +128,7 @@ export HF_CACHE_DIR= # to avoid redownload
 ```

 ##### [Optional] OPENAI_API_KEY to use OpenAI models or Intel® AI for Enterprise Inference
+
 To use OpenAI models, generate a key following these [instructions](https://platform.openai.com/api-keys).

 To use a remote server running Intel® AI for Enterprise Inference, contact the cloud service provider or owner of the on-prem machine for a key to access the desired model on the server.
@@ -211,11 +212,13 @@ docker compose -f $WORKDIR/GenAIExamples/DocIndexRetriever/docker_compose/intel/
 ```

 ##### Models on Remote Server
+
 When models are deployed on a remote server with Intel® AI for Enterprise Inference, a base URL and an API key are required to access them. To run the Agent microservice on Xeon while using models deployed on a remote server, add `compose_remote.yaml` to the `docker compose` command and set additional environment variables.

 ###### Notes
+
 - `OPENAI_API_KEY` is already set in a previous step.
-- `model` overrides the value set for this environment variable in `set_env.sh`. 
+- `model` overrides the value set for this environment variable in `set_env.sh`.
 - `LLM_ENDPOINT_URL` is the base URL provided by the owner of the on-prem machine or cloud service provider. It will follow this format: "https://<domain-name>". Here is an example: "https://api.inference.example.com".
 ```bash

From c3738c4d9aaaa29a48d606b3be7f97b5a0995d77 Mon Sep 17 00:00:00 2001
From: alexsin368
Date: Fri, 2 May 2025 17:37:08 -0700
Subject: [PATCH 5/5] simplify compose_remote.yaml

Signed-off-by: alexsin368
---
 .../intel/cpu/xeon/compose_remote.yaml | 36 ------------------
 1 file changed, 36 deletions(-)

diff --git a/AgentQnA/docker_compose/intel/cpu/xeon/compose_remote.yaml b/AgentQnA/docker_compose/intel/cpu/xeon/compose_remote.yaml
index 2069edea8a..24536435a3 100644
--- a/AgentQnA/docker_compose/intel/cpu/xeon/compose_remote.yaml
+++ b/AgentQnA/docker_compose/intel/cpu/xeon/compose_remote.yaml
@@ -16,39 +16,3 @@ services:
     environment:
       llm_endpoint_url: ${LLM_ENDPOINT_URL}
       api_key: ${OPENAI_API_KEY}
-<<<<<<< HEAD
-=======
-      use_remote_service: true
-      model: ${model}
-      temperature: ${temperature}
-      max_new_tokens: ${max_new_tokens}
-      stream: true
-      tools: /home/user/tools/supervisor_agent_tools.yaml
-      require_human_feedback: false
-      no_proxy: ${no_proxy}
-      http_proxy: ${http_proxy}
-      https_proxy: ${https_proxy}
-      LANGCHAIN_API_KEY: ${LANGCHAIN_API_KEY}
-      LANGCHAIN_TRACING_V2: ${LANGCHAIN_TRACING_V2}
-      LANGCHAIN_PROJECT: "opea-supervisor-agent-service"
-      CRAG_SERVER: $CRAG_SERVER
-      WORKER_AGENT_URL: $WORKER_AGENT_URL
-      SQL_AGENT_URL: $SQL_AGENT_URL
-      port: 9090
-  mock-api:
-    image: docker.io/aicrowd/kdd-cup-24-crag-mock-api:v0
-    container_name: mock-api
-    ports:
-      - "8080:8000"
-    ipc: host
-  agent-ui:
-    image: opea/agent-ui
-    container_name: agent-ui
-    ports:
-      - "5173:8080"
-    ipc: host
-
-networks:
-  default:
-    driver: bridge
->>>>>>> a9cbcbe3ff61aaa67390e58e933117c3d7632e83
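A note on how the layered launch command in patch 3 behaves: Docker Compose merges `-f` files left to right, so `compose_remote.yaml` only overrides the `environment` keys it names (`llm_endpoint_url`, `api_key`) on the three agent services, while every other setting still comes from the earlier files. One way to preview the merged result without starting any containers is shown below; this is a sketch that assumes the Xeon working directory and file names used throughout these patches.

```bash
# Render the fully merged service definitions to stdout; nothing is started.
# Assumes $WORKDIR is set and the repos are cloned as described in the README.
cd $WORKDIR/GenAIExamples/AgentQnA/docker_compose/intel/cpu/xeon
docker compose \
  -f $WORKDIR/GenAIExamples/DocIndexRetriever/docker_compose/intel/cpu/xeon/compose.yaml \
  -f compose_openai.yaml \
  -f compose_remote.yaml \
  config
```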
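It can also help to validate the remote-server values before launching by querying the endpoint directly. The sketch below is illustrative only: it assumes the remote server exposes the standard OpenAI-compatible `/v1/models` and `/v1/chat/completions` routes, and the URL, key, and model ID shown are hypothetical placeholders to be replaced with the deployment's real values.

```bash
# Hypothetical smoke test for the remote OpenAI-compatible endpoint.
# All three values below are placeholders, not defaults from the patches.
export LLM_ENDPOINT_URL="https://api.inference.example.com"
export OPENAI_API_KEY="your-api-key"
export model="meta-llama/Llama-3.3-70B-Instruct"  # hypothetical model ID

# List the models the server serves; confirms the base URL and key are valid.
curl -sS "${LLM_ENDPOINT_URL}/v1/models" \
  -H "Authorization: Bearer ${OPENAI_API_KEY}"

# Send a one-shot chat completion; confirms the model ID is routable.
curl -sS "${LLM_ENDPOINT_URL}/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${OPENAI_API_KEY}" \
  -d '{"model": "'"${model}"'", "messages": [{"role": "user", "content": "Say ready."}], "max_tokens": 8}'
```

If both calls succeed, the same three variables can be exported unchanged before running the layered `docker compose ... up -d` command from patch 3.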