# Workflow executor example workflow API #1102


Status: Open. Wants to merge 26 commits into `main`.

## Commits (26)

- `727e9f3` Add workflow executor example (JoshuaL3000, Sep 30, 2024)
- `87ed7db` [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Sep 30, 2024)
- `bc5cc12` Update workflow executor example (JoshuaL3000, Oct 8, 2024)
- `03f0d01` [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Oct 8, 2024)
- `11344a6` Update workflow executor example (JoshuaL3000, Oct 16, 2024)
- `00de5a8` [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Oct 16, 2024)
- `7f8f957` Rename CI script to 'test_compose_on_xeon.sh' (JoshuaL3000, Oct 22, 2024)
- `14944f9` Update test files and add custom prompt (JoshuaL3000, Oct 23, 2024)
- `891324f` [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Oct 24, 2024)
- `f8f3afd` Rename test script (JoshuaL3000, Oct 25, 2024)
- `62fd863` Fix start_vllm_service.sh and handle convert dict to str in tools.py (JoshuaL3000, Oct 25, 2024)
- `d251aa5` Update workflow id and retest pydantic version (JoshuaL3000, Oct 28, 2024)
- `bfb0522` Add docstring for multiple files (JoshuaL3000, Nov 5, 2024)
- `b784a68` Update workflow executor example for example workflow API (JoshuaL3000, Nov 8, 2024)
- `77675ce` Update readme for example workflow (JoshuaL3000, Nov 8, 2024)
- `3745142` Update test scripts and readme (JoshuaL3000, Nov 8, 2024)
- `9b14045` Update workflow executor example for example workflow API (JoshuaL3000, Nov 8, 2024)
- `07d4ed4` [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Nov 8, 2024)
- `683654b` Update workflow id (JoshuaL3000, Nov 8, 2024)
- `0098bf1` Merge branch 'main' into workflow-executor-example (xiguiw, Dec 23, 2024)
- `256dfbd` Merge branch 'main' into workflow-executor-example (chensuyue, Jan 15, 2025)
- `e4e8a70` Merge branch 'main' into workflow-executor-example (ZePan110, Jan 15, 2025)
- `0523b68` Update docker compose path for agent comps change (JoshuaL3000, Jan 16, 2025)
- `cbc87e7` Merge branch 'main' into workflow-executor-example (JoshuaL3000, Apr 3, 2025)
- `8bb3f30` Enable test for workflow example API after package upgrades (JoshuaL3000, Apr 11, 2025)
- `219cab2` Update wf_api_port (JoshuaL3000, Apr 11, 2025)

## Files changed

71 changes: 58 additions & 13 deletions WorkflowExecAgent/README.md
@@ -4,13 +4,25 @@

GenAI Workflow Executor Example showcases the capability to handle data/AI workflow operations via LangChain agents that execute custom-defined, workflow-based tools. These workflow tools can interface with any 3rd-party workflow tool on the market (no-code/low-code/IDE), such as Alteryx, RapidMiner, Power BI, or Intel Data Insight Automation, which allow users to create complex data/AI workflow operations for different use cases.

### Definitions

Before we begin, here are definitions of some terms, for clarity:

- servable/serving workflow - A workflow made ready to be executed through an API. It should be able to accept parameter injection for workflow scheduling and provide a way to retrieve the final output data. It should also have a unique workflow ID for referencing. For a guide for platform providers on creating their own servable workflows compatible with this example, refer to [Workflow Building Platform](#workflow-building-platform)

- SDK Class - Performs requests to interface with a 3rd-party API to perform workflow operations on the servable workflow. Found in `tools/sdk.py`.

- workflow ID - A unique ID for the servable workflow.

- workflow instance - An instance created from the servable workflow. It is represented as a `Workflow` class created using `DataInsightAutomationSDK.create_workflow()` under `tools/sdk.py`. It contains methods to `start` the workflow and to `get_status` and `get_results` from it, as illustrated in the sketch below.
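
To make these definitions concrete, below is a minimal sketch of how such an SDK class and workflow instance could fit together. The endpoint paths, response fields, and `requests` usage here are illustrative assumptions, not the platform's actual API; the real interface is defined in `tools/sdk.py` and by the serving platform.

```python
import requests


class Workflow:
    """Illustrative handle to one workflow instance (assumed endpoints)."""

    def __init__(self, base_url: str, workflow_id: int, token: str = ""):
        self.base_url = base_url.rstrip("/")
        self.workflow_id = workflow_id
        self.headers = {"Authorization": f"Bearer {token}"} if token else {}

    def start(self, params: dict) -> dict:
        # Inject the serving parameters extracted from the user query.
        resp = requests.post(
            f"{self.base_url}/serving/workflows/{self.workflow_id}/start",
            json={"params": params},
            headers=self.headers,
        )
        resp.raise_for_status()
        return resp.json()

    def get_status(self) -> dict:
        resp = requests.get(
            f"{self.base_url}/serving/workflows/{self.workflow_id}/status",
            headers=self.headers,
        )
        resp.raise_for_status()
        return resp.json()

    def get_results(self) -> dict:
        resp = requests.get(
            f"{self.base_url}/serving/workflows/{self.workflow_id}/results",
            headers=self.headers,
        )
        resp.raise_for_status()
        return resp.json()


class DataInsightAutomationSDK:
    """Illustrative SDK entry point that creates workflow instances."""

    def __init__(self, base_url: str, token: str = ""):
        self.base_url = base_url
        self.token = token

    def create_workflow(self, workflow_id: int) -> Workflow:
        return Workflow(self.base_url, workflow_id, self.token)
```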

### Workflow Executor

This example demonstrates a single React-LangGraph with a `Workflow Executor` tool to ingest a user prompt to execute workflows and return an agent reasoning response based on the workflow output data.
Strategy - This example demonstrates a single ReAct-LangGraph agent with a `Workflow Executor` tool, which ingests a user prompt, executes workflows, and returns an agent reasoning response based on the workflow output data.

First, the LLM extracts the relevant information from the user query based on the schema of the tool in `tools/tools.yaml`. Then the agent sends this `AgentState` to the `Workflow Executor` tool.

`Workflow Executor` tool uses `EasyDataSDK` class as seen under `tools/sdk.py` to interface with several high-level API's. There are 3 steps to this tool implementation:
`Workflow Executor` tool requires an SDK class to call the servable workflow API. In the code, `DataInsightAutomationSDK` is the example class, found under `tools/sdk.py`, used to interface with several high-level APIs. There are 3 steps to this tool implementation:

1. Starts the workflow with workflow parameters and workflow id extracted from the user query.

@@ -26,37 +38,50 @@ Below is an illustration of this flow:

### Workflow Serving for Agent

#### Workflow Building Platform

The first step is to prepare a servable workflow using a platform with the capabilities to do so.

As an example, here we have a Churn Prediction use-case workflow as the serving workflow for the agent execution. It is created through the Intel Data Insight Automation platform. The image below shows a snapshot of the Churn Prediction workflow.

![image](https://github.com/user-attachments/assets/c067f8b3-86cf-4abc-a8bd-51a98de8172d)

The workflow contains 2 paths which can be seen in the workflow illustrated, the top path and bottom path.
The workflow contains 2 paths which can be seen in the workflow illustrated, the top and bottom paths.

1. Top path - The training path which ends at the random forest classifier node is the training path. The data is cleaned through a series of nodes and used to train a random forest model for prediction.
1. Top path (Training path) - Ends at the random forest classifier node. The data is cleaned through a series of nodes and used to train a random forest model for prediction.

2. Bottom path - The inference path where trained random forest model is used for inferencing based on input parameter.
2. Bottom path (Inference path) - The trained random forest model is used for inference based on the input parameters.

For this agent workflow execution, the inferencing path is executed to yield the final output result of the `Model Predictor` node. The same output is returned to the `Workflow Executor` tool through the `Langchain API Serving` node.

There are `Serving Parameters` in the workflow, which are the tool input variables used to start a workflow instance obtained from `params` the LLM extracts from the user query. Below shows the parameter configuration option for the Intel Data Insight Automation workflow UI.
There are `Serving Parameters` in the workflow, which are the tool input variables used to start a workflow instance at runtime; they are obtained from the `params` the LLM extracts from the user query. Below is the parameter configuration option in the Intel Data Insight Automation workflow UI.

![image](https://github.com/user-attachments/assets/ce8ef01a-56ff-4278-b84d-b6e4592b28c6)
<img src="https://github.com/user-attachments/assets/ce8ef01a-56ff-4278-b84d-b6e4592b28c6" alt="image" width="500"/>

Manually running the workflow yields the tabular data output as shown below:

![image](https://github.com/user-attachments/assets/241c1aba-2a24-48da-8005-ec7bfe657179)

In the workflow serving for agent, this output will be returned to the `Workflow Executor` tool. The LLM can then answer the user's original question based on this output.
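
Combining the start, status, and results steps described earlier, the core of the `Workflow Executor` tool could look like the sketch below. The polling interval, status value, and function name are assumptions for illustration; the actual logic lives under `tools/`.

```python
import time


def execute_workflow(sdk, workflow_id: int, params: dict) -> dict:
    """Illustrative executor loop: start the workflow, poll, fetch results."""
    workflow = sdk.create_workflow(workflow_id)
    workflow.start(params)

    # Poll the workflow instance until it reports completion.
    while workflow.get_status().get("workflow_status") != "finished":
        time.sleep(5)

    # The output data is handed back to the agent LLM for reasoning.
    return workflow.get_results()
```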

To start prompting the agent microservice, we will use the following command for this use case:
When the workflow is configured as desired, transform it into a servable workflow format so that it can be called through an API. Data Insight Automation has tools to do this for its own workflows.

> [!NOTE]
> Remember to create a unique workflow ID along with the servable workflow.

#### Using Servable Workflow

Once we have our servable workflow ready, the serving workflow API can be prepared to accept requests from the SDK class. Refer to [Start Agent Microservice](#start-agent-microservice) on how to do this.

To start prompting the agent microservice, we will use the following command for this churn prediction use-case:

```sh
curl http://${ip_address}:9090/v1/chat/completions -X POST -H "Content-Type: application/json" -d '{
"query": "I have a data with gender Female, tenure 55, MonthlyAvgCharges 103.7. Predict if this entry will churn. My workflow id is '${workflow_id}'."
}'
```

The user has to provide a `workflow_id` and workflow `params` in the query. `workflow_id` a unique id used for serving the workflow to the microservice. Notice that the `query` string includes all the workflow `params` which the user has defined in the workflow. The LLM will extract these parameters into a dictionary format for the workflow `Serving Parameters` as shown below:
The user has to provide a `workflow_id` and workflow `params` in the query. Notice that the `query` string includes all the workflow `params` which the user has defined in the workflow. The LLM will extract these parameters into a dictionary format for the workflow `Serving Parameters` as shown below:

```python
params = {"gender": "Female", "tenure": 55, "MonthlyAvgCharges": 103.7}
```

@@ -72,6 +97,16 @@ And finally, here are the results from the microservice logs:

### Start Agent Microservice

For an out-of-the-box experience, there is an example workflow serving API service prepared for users under `tests/example_workflow` to interface with the SDK. This section will also guide users through setting up this service. Users may modify the logic, add their own database, etc. for their own use case.

There are 3 services needed for the setup:

1. Agent microservice

2. LLM inference service - specified as `llm_endpoint_url`.

3. Workflow serving API service - specified as `SDK_BASE_URL`.

Workflow Executor has a single Docker image. First, build the agent Docker image.

```sh
@@ -83,8 +118,9 @@ docker compose -f build.yaml build --no-cache
Configure the `GenAIExamples/WorkflowExecAgent/docker_compose/.env` file with the following. Replace the variables according to your use case.

```sh
export SDK_BASE_URL=${SDK_BASE_URL}
export SERVING_TOKEN=${SERVING_TOKEN}
export wf_api_port=5000 # workflow serving API port to use
export SDK_BASE_URL=http://$(hostname -I | awk '{print $1}'):${wf_api_port}/ # The example workflow API URL the agent will call
export SERVING_TOKEN=${SERVING_TOKEN} # For example_workflow, can be empty
export HUGGINGFACEHUB_API_TOKEN=${HF_TOKEN}
export llm_engine=${llm_engine}
export llm_endpoint_url=${llm_endpoint_url}
@@ -106,9 +142,18 @@ cd $WORKDIR/GenAIExamples/WorkflowExecAgent/docker_compose
docker compose -f compose.yaml up -d
```

To launch the example workflow API server, open a new terminal and run the following:

```sh
cd $WORKDIR/GenAIExamples/WorkflowExecAgent/tests/example_workflow
. launch_workflow_service.sh
```

`launch_workflow_service.sh` will set up all the packages locally and launch the uvicorn server to host the API on port 5000. For a Dockerfile method, refer to the `Dockerfile.example_workflow_api` file.
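
For orientation, the example service could be shaped like the minimal FastAPI sketch below. The routes, response fields, and canned prediction are assumptions made for illustration only; the actual implementation is under `tests/example_workflow`.

```python
# Minimal sketch of a workflow serving API (assumed routes and fields).
import uvicorn
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
RESULTS: dict = {}  # in-memory result store keyed by workflow id


class StartRequest(BaseModel):
    params: dict


@app.post("/serving/workflows/{workflow_id}/start")
def start_workflow(workflow_id: int, req: StartRequest):
    # A real service would schedule the workflow; here we store a canned result.
    RESULTS[workflow_id] = {"params": req.params, "prediction": "not likely to churn"}
    return {"msg": "workflow started", "workflow_id": workflow_id}


@app.get("/serving/workflows/{workflow_id}/status")
def get_status(workflow_id: int):
    return {"workflow_status": "finished" if workflow_id in RESULTS else "unknown"}


@app.get("/serving/workflows/{workflow_id}/results")
def get_results(workflow_id: int):
    return RESULTS.get(workflow_id, {"msg": "no results yet"})


if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=5000)
```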

### Validate service

The microservice logs can be viewed using:
The agent microservice logs can be viewed using:

```sh
docker logs workflowexec-agent-endpoint
@@ -120,7 +165,7 @@ You can validate the service using the following command:

```sh
curl http://${ip_address}:9090/v1/chat/completions -X POST -H "Content-Type: application/json" -d '{
"query": "I have a data with gender Female, tenure 55, MonthlyAvgCharges 103.7. Predict if this entry will churn. My workflow id is '${workflow_id}'."
"query": "I have a data with gender Female, tenure 55, MonthlyCharges 103.7, TotalCharges 1840.75. Predict if this entry will churn. My workflow id is '${workflow_id}'."
}'
```

@@ -9,7 +9,7 @@ services:
- ${WORKDIR}/GenAIComps/comps/agent/src/:/home/user/comps/agent/src/
- ${TOOLSET_PATH}:/home/user/tools/
ports:
- "9090:9090"
- "9091:9090"
ipc: host
environment:
ip_address: ${ip_address}
39 changes: 39 additions & 0 deletions WorkflowExecAgent/tests/3_launch_agent_service.sh
@@ -0,0 +1,39 @@
#!/bin/bash
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

set -e

WORKPATH=$(dirname "$PWD")
vllm_port=${vllm_port}
[[ -z "$vllm_port" ]] && vllm_port=8084
export WORKDIR=$WORKPATH/../../
echo "WORKDIR=${WORKDIR}"
export SDK_BASE_URL=$1
echo "SDK_BASE_URL=$1"
export SERVING_TOKEN=${SERVING_TOKEN}
export HF_TOKEN=${HUGGINGFACEHUB_API_TOKEN}
export llm_engine=vllm
export ip_address=$(hostname -I | awk '{print $1}')
export llm_endpoint_url=http://${ip_address}:${vllm_port}
export model=mistralai/Mistral-7B-Instruct-v0.3
export recursion_limit=25
export temperature=0
export max_new_tokens=1000
export TOOLSET_PATH=$WORKDIR/GenAIExamples/WorkflowExecAgent/tools/

function start_agent() {
echo "Starting Agent services"
cd $WORKDIR/GenAIExamples/WorkflowExecAgent/docker_compose/intel/cpu/xeon
WORKDIR=$WORKPATH/docker_image_build/ docker compose -f compose_vllm.yaml up -d
echo "Waiting agent service ready"
sleep 5s
}

function main() {
echo "==================== Start agent service ===================="
start_agent
echo "==================== Agent service started ===================="
}

main
66 changes: 0 additions & 66 deletions WorkflowExecAgent/tests/3_launch_and_validate_agent.sh

This file was deleted.

40 changes: 40 additions & 0 deletions WorkflowExecAgent/tests/3_launch_example_wf_api.sh
@@ -0,0 +1,40 @@
#!/bin/bash
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

set -e

wf_api_port=${wf_api_port}
[[ -z "$wf_api_port" ]] && wf_api_port=5005
WORKPATH=$(dirname "$PWD")
LOG_PATH="$WORKPATH/tests/example_workflow"
export WORKDIR=$WORKPATH/../../
echo "WORKDIR=${WORKDIR}"

function start_example_workflow_api() {
echo "Starting example workflow API"
cd $WORKDIR/GenAIExamples/WorkflowExecAgent/tests/example_workflow
docker build -f Dockerfile.example_workflow_api -t example-workflow-service .
docker run -d -p ${wf_api_port}:${wf_api_port} --rm --network=host --name example-workflow-service -it example-workflow-service
echo "Waiting example workflow API ready"
until [[ "$n" -ge 100 ]] || [[ $ready == true ]]; do
docker logs example-workflow-service &> ${LOG_PATH}/example-workflow-service.log
n=$((n+1))
if grep -q "Uvicorn running on" ${LOG_PATH}/example-workflow-service.log; then
break
fi
if grep -q "No such container" ${LOG_PATH}/example-workflow-service.log; then
echo "container example-workflow-service not found"
exit 1
fi
sleep 5s
done
}

function main() {
echo "==================== Start example workflow API ===================="
start_example_workflow_api
echo "==================== Example workflow API started ===================="
}

main
43 changes: 43 additions & 0 deletions WorkflowExecAgent/tests/4_validate_agent.sh
@@ -0,0 +1,43 @@
#!/bin/bash
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

set -e

WORKPATH=$(dirname "$PWD")
export WORKDIR=$WORKPATH/../../
echo "WORKDIR=${WORKDIR}"
export ip_address=$(hostname -I | awk '{print $1}')
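# Positional args: $1 is the user query sent to the agent; $2 is the substring expected in the response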
query=$1
validate_result=$2

function validate() {
local CONTENT="$1"
local EXPECTED_RESULT="$2"
local SERVICE_NAME="$3"

if echo "$CONTENT" | grep -q "$EXPECTED_RESULT"; then
echo "[ $SERVICE_NAME ] Content is as expected: $CONTENT"
echo "[TEST INFO]: Workflow Executor agent service PASSED"
else
echo "[ $SERVICE_NAME ] Content does not match the expected result: $CONTENT"
echo "[TEST INFO]: Workflow Executor agent service FAILED"
fi
}

function validate_agent_service() {
echo "----------------Test agent ----------------"
local CONTENT=$(curl http://${ip_address}:9091/v1/chat/completions -X POST -H "Content-Type: application/json" -d '{
"messages": "'"${query}"'"
}')
validate "$CONTENT" "$validate_result" "workflowexec-agent-endpoint"
docker logs workflowexec-agent-endpoint
}

function main() {
echo "==================== Validate agent service ===================="
validate_agent_service
echo "==================== Agent service validated ===================="
}

main
2 changes: 1 addition & 1 deletion WorkflowExecAgent/tests/README.md
@@ -24,7 +24,7 @@ Launch validation by running the following command.

```sh
cd GenAIExamples/WorkflowExecAgent/tests
. /test_compose_on_xeon.sh
. ./test_compose_vllm_on_xeon.sh
```

`test_compose_vllm_on_xeon.sh` will run the other `.sh` files under `tests/`. The validation script launches 1 docker container for the agent microservice, and another for the vllm model serving on CPU. When validation is completed, all containers will be stopped.
@@ -1,7 +1,12 @@
#!/bin/bash
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

function stop_agent_and_api_server() {
workflow_id=10277
query="I have a data with gender Female, tenure 55, MonthlyAvgCharges 103.7. Predict if this entry will churn. My workflow id is '${workflow_id}'."
validate_result="The entry is not likely to churn"

function stop_agent_server() {
echo "Stopping Agent services"
docker rm --force $(docker ps -a -q --filter="name=workflowexec-agent-endpoint")
}
@@ -21,13 +26,17 @@ echo "=================== #2 Start vllm service ===================="
bash 2_start_vllm_service.sh
echo "=================== #2 Start vllm service completed ===================="

echo "=================== #3 Start agent and API server ===================="
bash 3_launch_and_validate_agent.sh
echo "=================== #3 Agent test completed ===================="
echo "=================== #3 Start agent service ===================="
bash 3_launch_agent_service.sh $SDK_BASE_URL
echo "=================== #3 Agent service started ===================="

echo "=================== #4 Start validate agent ===================="
bash 4_validate_agent.sh "$query" "$validate_result"
echo "=================== #4 Validate agent completed ===================="

echo "=================== #4 Stop agent and API server ===================="
stop_agent_and_api_server
echo "=================== #4 Stop agent and vllm server ===================="
stop_agent_server
stop_vllm_docker
echo "=================== #4 Agent and API server stopped ===================="
echo "=================== #4 Agent and vllm server stopped ===================="

echo "ALL DONE!"