Workflow executor example workflow API #1102


Merged: 40 commits, Jul 2, 2025

Commits
727e9f3
Add workflow executor example
JoshuaL3000 Sep 30, 2024
87ed7db
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Sep 30, 2024
bc5cc12
Update workflow executor example
JoshuaL3000 Oct 8, 2024
03f0d01
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Oct 8, 2024
11344a6
Update workflow executor example
JoshuaL3000 Oct 16, 2024
00de5a8
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Oct 16, 2024
7f8f957
Rename CI script to 'test_compose_on_xeon.sh'
JoshuaL3000 Oct 22, 2024
14944f9
Update test files and add custom prompt
JoshuaL3000 Oct 23, 2024
891324f
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Oct 24, 2024
f8f3afd
Rename test script
JoshuaL3000 Oct 25, 2024
62fd863
Fix start_vllm_service.sh and handle convert dict to str in tools.py
JoshuaL3000 Oct 25, 2024
d251aa5
Update workflow id and retest pydantic version
JoshuaL3000 Oct 28, 2024
bfb0522
Add docstring for multiple files
JoshuaL3000 Nov 5, 2024
b784a68
Update workflow executor example for example workflow API
JoshuaL3000 Nov 8, 2024
77675ce
Update readme for example workflow
JoshuaL3000 Nov 8, 2024
3745142
Update test scripts and readme
JoshuaL3000 Nov 8, 2024
9b14045
Update workflow executor example for example workflow API
JoshuaL3000 Nov 8, 2024
07d4ed4
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Nov 8, 2024
683654b
Update workflow id
JoshuaL3000 Nov 8, 2024
0098bf1
Merge branch 'main' into workflow-executor-example
xiguiw Dec 23, 2024
256dfbd
Merge branch 'main' into workflow-executor-example
chensuyue Jan 15, 2025
e4e8a70
Merge branch 'main' into workflow-executor-example
ZePan110 Jan 15, 2025
0523b68
Update docker compose path for agent comps change
JoshuaL3000 Jan 16, 2025
cbc87e7
Merge branch 'main' into workflow-executor-example
JoshuaL3000 Apr 3, 2025
8bb3f30
Enable test for workflow example API after package upgrades
JoshuaL3000 Apr 11, 2025
219cab2
Update wf_api_port
JoshuaL3000 Apr 11, 2025
b32ffe6
Minor fixes
JoshuaL3000 Jun 4, 2025
1502a83
Minor updates
JoshuaL3000 Jun 4, 2025
5398db5
Add debug vllm for test
JoshuaL3000 Jun 4, 2025
59d2046
Add list tags for debug
JoshuaL3000 Jun 4, 2025
1fbbc12
Merge branch 'main' into workflow-executor-example
JoshuaL3000 Jun 4, 2025
5318038
Merge branch 'main' into workflow-executor-example
JoshuaL3000 Jun 13, 2025
a4c1ec1
Merge branch 'main' into workflow-executor-example
JoshuaL3000 Jul 1, 2025
6bf69c0
Fix for Issue#1978: Update README for missing instructions
JoshuaL3000 Jul 1, 2025
57f880d
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Jul 1, 2025
3979b23
Update README syntax
JoshuaL3000 Jul 1, 2025
77edb21
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Jul 1, 2025
f35eff8
Add Quick start section in README.md for better clarity on using vari…
JoshuaL3000 Jul 2, 2025
c85f1d2
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Jul 2, 2025
ea160ca
Merge branch 'main' into workflow-executor-example
JoshuaL3000 Jul 2, 2025
106 changes: 88 additions & 18 deletions WorkflowExecAgent/README.md
@@ -1,16 +1,51 @@
# Workflow Executor Agent

## Quick Start: Key Configuration Variables

Before proceeding, here are some key configuration variables needed for the workflow executor agent.

- **SDK_BASE_URL**: The URL to your platform workflow serving API.

Example: `http://<your-server-ip>:5000/`

This is where the agent will send workflow execution requests.

- **SERVING_TOKEN**: The authentication bearer token, which the `RequestHandler` class uses as its `api_key` to authenticate API requests. Third-party platforms can design their serving workflow API this way for user authentication (see the sketch below).

More details can be found in the code [handle_requests.py](tools/utils/handle_requests.py#L23)
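
Below is a minimal sketch of the bearer-token pattern described above, assuming a `requests`-based handler; the class shape is illustrative only, and the authoritative logic lives in `handle_requests.py`:

```python
# Hypothetical sketch of the bearer-token pattern; the authoritative logic
# lives in tools/utils/handle_requests.py and may differ in detail.
import requests


class RequestHandler:
    def __init__(self, base_url: str, api_key: str):
        self.base_url = base_url.rstrip("/")  # SDK_BASE_URL
        self.api_key = api_key                # SERVING_TOKEN

    def post(self, endpoint: str, payload: dict) -> requests.Response:
        # Every request carries the serving token as a bearer credential.
        headers = {"Authorization": f"Bearer {self.api_key}"}
        return requests.post(f"{self.base_url}/{endpoint}", json=payload, headers=headers)
```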

> **How to get these values:**
>
> - If you are using the provided example workflow API, refer to the test [README.md](tests/README.md)
> - For your own platform, consult your API documentation or administrator for the correct values. If you are a platform provider, refer to the [Workflow Building Platform](#workflow-building-platform) section for prerequisites on setting up a serving workflow.

For more info on using these variables, refer to the [Microservice Setup](#microservice-setup) section below, which uses the [Example Workflow API](tests/example_workflow/) as a working example.

Set these variables in your environment before starting the service.

## Overview

The GenAI Workflow Executor example showcases the capability to handle data/AI workflow operations via LangChain agents that execute custom-defined, workflow-based tools. These workflow tools can interface with any third-party workflow platform on the market (no-code/low-code/IDE), such as Alteryx, RapidMiner, Power BI, or Intel Data Insight Automation, which let users create complex data/AI workflow operations for different use cases.

### Definitions

Before we begin, here are the definitions to some terms for clarity:

- servable/serving workflow - A workflow made ready to be executed through an API. It should accept parameter injection for workflow scheduling, provide a way to retrieve the final output data, and have a unique workflow ID for referencing. For a guide on creating servable workflows compatible with this example, platform providers can refer to [Workflow Building Platform](#workflow-building-platform).

- SDK Class - Issues requests to a 3rd-party API to perform workflow operations on the servable workflow. Found in `tools/sdk.py`.

- workflow ID - A unique ID for the servable workflow.

- workflow instance - An instance created from the servable workflow. It is represented as a `Workflow` class created using `DataInsightAutomationSDK.create_workflow()` under `tools/sdk.py`, and contains methods to `start` the workflow and to `get_status` and `get_results` from it.
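
To tie these terms together, here is a hedged sketch of the intended call flow. The class and method names come from the definitions above; the import path, constructor arguments, and status strings are assumptions:

```python
# Illustrative only: ties together the terms defined above. The import path,
# constructor arguments, and status strings are assumptions; the class and
# method names follow the definitions of tools/sdk.py.
import time

from tools.sdk import DataInsightAutomationSDK

sdk = DataInsightAutomationSDK()                      # SDK class
workflow = sdk.create_workflow("<your-workflow-id>")  # workflow instance

workflow.start(params={"gender": "Female", "tenure": 55, "MonthlyAvgCharges": 103.7})
while workflow.get_status() not in ("finished", "failed"):  # assumed status values
    time.sleep(5)  # poll until the run completes
print(workflow.get_results())
```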

### Workflow Executor

Strategy - This example demonstrates a single React-LangGraph with a `Workflow Executor` tool that ingests a user prompt, executes workflows, and returns an agent reasoning response based on the workflow output data.

First the LLM extracts the relevant information from the user query based on the schema of the tool in `tools/tools.yaml`. Then the agent sends this `AgentState` to the `Workflow Executor` tool.

The `Workflow Executor` tool requires an SDK class to call the servable workflow API. In the code, `DataInsightAutomationSDK`, found under `tools/sdk.py`, is the example class used to interface with several high-level APIs. There are 3 steps to this tool implementation:

1. Starts the workflow with workflow parameters and workflow id extracted from the user query.

@@ -26,37 +61,50 @@ Below is an illustration of this flow:

### Workflow Serving for Agent

#### Workflow Building Platform

The first step is to prepare a servable workflow using a platform with the capabilities to do so.

As an example, here we have a Churn Prediction use-case workflow as the serving workflow for the agent execution. It is created through the Intel Data Insight Automation platform. The image below shows a snapshot of the Churn Prediction workflow.

![image](https://github.com/user-attachments/assets/c067f8b3-86cf-4abc-a8bd-51a98de8172d)

The workflow contains two paths, both visible in the illustration above: the top path and the bottom path.

1. Top path (Training path) - This path ends at the random forest classifier node. The data is cleaned through a series of nodes and used to train a random forest model for prediction.

2. Bottom path (Inference path) - The trained random forest model is used for inference based on the input parameters.

For this agent workflow execution, the inference path is executed to yield the final output result of the `Model Predictor` node. The same output is returned to the `Workflow Executor` tool through the `Langchain API Serving` node.

The workflow includes `Serving Parameters`, which are the tool input variables used to start a workflow instance at runtime, obtained from the `params` that the LLM extracts from the user query. Below is the parameter configuration option in the Intel Data Insight Automation workflow UI.

<img src="https://github.com/user-attachments/assets/ce8ef01a-56ff-4278-b84d-b6e4592b28c6" alt="image" width="500"/>

Manually running the workflow yields the tabular data output as shown below:

![image](https://github.com/user-attachments/assets/241c1aba-2a24-48da-8005-ec7bfe657179)

In the workflow serving for agent, this output will be returned to the `Workflow Executor` tool. The LLM can then answer the user's original question based on this output.

When the workflow is configured as desired, transform it into a servable workflow format so that it can be called through an API. Data Insight Automation has tools to do this for its own workflows.

> [!NOTE]
> Remember to create a unique workflow ID along with the servable workflow.

#### Using Servable Workflow

Once we have our servable workflow ready, the serving workflow API can be prepared to accept requests from the SDK class. Refer to [Start Agent Microservice](#start-agent-microservice) on how to do this.
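
For platform providers, the sketch below illustrates one possible shape for such a serving workflow API using FastAPI. The endpoint paths, payload shapes, and canned result are assumptions for illustration only; the actual reference implementation is the [Example Workflow API](tests/example_workflow/):

```python
# Purely illustrative sketch of a serving workflow API; endpoint paths and
# payload shapes are assumptions, not the contract of the example API under
# tests/example_workflow/.
from fastapi import FastAPI

app = FastAPI()
runs: dict = {}  # in-memory store of workflow runs, keyed by workflow id


@app.post("/serving/servable_workflows/{wf_id}/start")
def start_workflow(wf_id: str, params: dict):
    # A real platform would schedule the workflow with the injected params.
    runs[wf_id] = {"status": "finished", "results": {"prediction": "churn"}}
    return {"wf_id": wf_id, "status": "started"}


@app.get("/serving/servable_workflows/{wf_id}/status")
def get_status(wf_id: str):
    return {"status": runs.get(wf_id, {}).get("status", "unknown")}


@app.get("/serving/servable_workflows/{wf_id}/results")
def get_results(wf_id: str):
    return runs.get(wf_id, {}).get("results", {})
```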

To start prompting the agent microservice, we will use the following command for this churn prediction use-case:

```sh
curl http://${ip_address}:9090/v1/chat/completions -X POST -H "Content-Type: application/json" -d '{
"query": "I have a data with gender Female, tenure 55, MonthlyAvgCharges 103.7. Predict if this entry will churn. My workflow id is '${workflow_id}'."
}'
```

The user has to provide a `workflow_id` and workflow `params` in the query. Notice that the `query` string includes all the workflow `params` which the user has defined in the workflow. The LLM will extract these parameters into a dictionary format for the workflow `Serving Parameters`, as shown below:

```python
params = {"gender": "Female", "tenure": 55, "MonthlyAvgCharges": 103.7}
```

@@ -72,6 +120,16 @@ And finally here are the results from the microservice logs:

### Start Agent Microservice

For an out-of-the-box experience there is an example workflow serving API service prepared for users under [Example Workflow API](tests/example_workflow/) to interface with the SDK. This section will guide users on setting up this service as well. Users may modify the logic, add their own database, etc. for their own use case.

There are 3 services needed for the setup:

1. Agent microservice

2. LLM inference service - specified as `llm_endpoint_url`.

3. Workflow serving API service - specified as `SDK_BASE_URL`.

The Workflow Executor uses a single Docker image. First, build the agent Docker image.

@@ -83,20 +141,23 @@

```sh
docker compose -f build.yaml build --no-cache
```
Configure the `GenAIExamples/WorkflowExecAgent/docker_compose/.env` file with the following, replacing the variables according to your use case.

```sh
export wf_api_port=5000 # workflow serving API port to use
export SDK_BASE_URL=http://$(hostname -I | awk '{print $1}'):${wf_api_port}/ # The workflow server will use this example workflow API url
export SERVING_TOKEN=${SERVING_TOKEN} # Authentication token. For the example_workflow test this can be empty, as no authentication is required.
export ip_address=$(hostname -I | awk '{print $1}')
export HF_TOKEN=${HF_TOKEN}
export llm_engine=${llm_engine}
export llm_endpoint_url=${llm_endpoint_url}
export WORKDIR=${WORKDIR}
export TOOLSET_PATH=$WORKDIR/GenAIExamples/WorkflowExecAgent/tools/
export http_proxy=${http_proxy}
export https_proxy=${https_proxy}

# LLM variables
export model="mistralai/Mistral-7B-Instruct-v0.3"
export recursion_limit=${recursion_limit}
export temperature=0
export max_new_tokens=1000
```

Launch the service by running the docker compose command:

@@ -106,9 +167,18 @@

```sh
cd $WORKDIR/GenAIExamples/WorkflowExecAgent/docker_compose
docker compose -f compose.yaml up -d
```

To launch the example workflow API server, open a new terminal and run the following:

```sh
cd $WORKDIR/GenAIExamples/WorkflowExecAgent/tests/example_workflow
. launch_workflow_service.sh
```

`launch_workflow_service.sh` will set up all the packages locally and launch the uvicorn server to host the API on port 5000. For a Dockerfile method, please refer to the `Dockerfile.example_workflow_api` file.
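
To sanity-check that the workflow API server came up, you can probe the base URL. This probe only assumes `SDK_BASE_URL` is set in the environment; it does not depend on any specific endpoint:

```python
# Minimal reachability probe for the workflow serving API.
# Assumes SDK_BASE_URL is exported in the environment (e.g. http://<ip>:5000/).
import os

import requests

base_url = os.environ["SDK_BASE_URL"]
resp = requests.get(base_url, timeout=5)
print(f"{base_url} responded with HTTP {resp.status_code}")
```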

### Validate service

The agent microservice logs can be viewed using:

```sh
docker logs workflowexec-agent-endpoint
```

@@ -120,7 +190,7 @@ You can validate the service using the following command:

```sh
curl http://${ip_address}:9090/v1/chat/completions -X POST -H "Content-Type: application/json" -d '{
"query": "I have a data with gender Female, tenure 55, MonthlyAvgCharges 103.7. Predict if this entry will churn. My workflow id is '${workflow_id}'."
"query": "I have a data with gender Female, tenure 55, MonthlyCharges 103.7, TotalCharges 1840.75. Predict if this entry will churn. My workflow id is '${workflow_id}'."
}'
```

@@ -9,7 +9,7 @@ services:
- ${WORKDIR}/GenAIComps/comps/agent/src/:/home/user/comps/agent/src/
- ${TOOLSET_PATH}:/home/user/tools/
ports:
- "9090:9090"
- "9091:9090"
ipc: host
environment:
ip_address: ${ip_address}
11 changes: 8 additions & 3 deletions WorkflowExecAgent/tests/2_start_vllm_service.sh
@@ -26,7 +26,7 @@ function build_vllm_docker_image() {
    else
        cd ./vllm
    fi
    docker build -f docker/Dockerfile.cpu -t vllm-cpu-env --shm-size=100g .
    if [ $? -ne 0 ]; then
        echo "opea/vllm:cpu failed"
        exit 1
@@ -37,15 +37,20 @@

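# start_vllm_service(): launch the vLLM CPU container with the custom Mistral
# tool-chat template mounted, then poll the container logs until startup
# completes or the retry limit is hit.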
function start_vllm_service() {
echo "start vllm service"
export VLLM_SKIP_WARMUP=true
docker run -d -p ${vllm_port}:${vllm_port} --rm --network=host --name test-comps-vllm-service -v ~/.cache/huggingface:/root/.cache/huggingface -v ${WORKPATH}/tests/tool_chat_template_mistral_custom.jinja:/root/tool_chat_template_mistral_custom.jinja -e HF_TOKEN=$HF_TOKEN -e http_proxy=$http_proxy -e https_proxy=$https_proxy -it vllm-cpu-env --model ${model} --port ${vllm_port} --chat-template /root/tool_chat_template_mistral_custom.jinja --enable-auto-tool-choice --tool-call-parser mistral
echo ${LOG_PATH}/vllm-service.log
sleep 5s
sleep 10s
echo "Waiting vllm ready"
n=0
until [[ "$n" -ge 100 ]] || [[ $ready == true ]]; do
docker logs test-comps-vllm-service
if docker logs test-comps-vllm-service| grep "Error response from daemon: No such container:"; then
exit 1
fi
docker logs test-comps-vllm-service &> ${LOG_PATH}/vllm-service.log
n=$((n+1))
if grep -q "Uvicorn running on" ${LOG_PATH}/vllm-service.log; then
if grep -q "Application startup complete." ${LOG_PATH}/vllm-service.log; then
break
fi
if grep -q "No such container" ${LOG_PATH}/vllm-service.log; then
39 changes: 39 additions & 0 deletions WorkflowExecAgent/tests/3_launch_agent_service.sh
@@ -0,0 +1,39 @@
#!/bin/bash
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

set -e

WORKPATH=$(dirname "$PWD")
vllm_port=${vllm_port}
[[ -z "$vllm_port" ]] && vllm_port=8084
export WORKDIR=$WORKPATH/../../
echo "WORKDIR=${WORKDIR}"
export SDK_BASE_URL=$1
echo "SDK_BASE_URL=$1"
export SERVING_TOKEN=${SERVING_TOKEN}
export HF_TOKEN=${HUGGINGFACEHUB_API_TOKEN}
export llm_engine=vllm
export ip_address=$(hostname -I | awk '{print $1}')
export llm_endpoint_url=http://${ip_address}:${vllm_port}
export model=mistralai/Mistral-7B-Instruct-v0.3
export recursion_limit=25
export temperature=0
export max_new_tokens=1000
export TOOLSET_PATH=$WORKDIR/GenAIExamples/WorkflowExecAgent/tools/

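# start_agent(): bring up the Workflow Executor agent with docker compose,
# pointing it at the vLLM endpoint and SDK_BASE_URL exported above.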
function start_agent() {
    echo "Starting Agent services"
    cd $WORKDIR/GenAIExamples/WorkflowExecAgent/docker_compose/intel/cpu/xeon
    WORKDIR=$WORKPATH/docker_image_build/ docker compose -f compose_vllm.yaml up -d
    echo "Waiting agent service ready"
    sleep 10s
}

function main() {
    echo "==================== Start agent service ===================="
    start_agent
    echo "==================== Agent service started ===================="
}

main
40 changes: 40 additions & 0 deletions WorkflowExecAgent/tests/3_launch_example_wf_api.sh
@@ -0,0 +1,40 @@
#!/bin/bash
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

set -e

wf_api_port=${wf_api_port}
[[ -z "$wf_api_port" ]] && wf_api_port=5005
WORKPATH=$(dirname "$PWD")
LOG_PATH="$WORKPATH/tests/example_workflow"
export WORKDIR=$WORKPATH/../../
echo "WORKDIR=${WORKDIR}"

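# start_example_workflow_api(): build and run the example workflow API image,
# then poll the container logs until uvicorn reports the server is running.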
function start_example_workflow_api() {
    echo "Starting example workflow API"
    cd $WORKDIR/GenAIExamples/WorkflowExecAgent/tests/example_workflow
    docker build -f Dockerfile.example_workflow_api -t example-workflow-service .
    docker run -d -p ${wf_api_port}:${wf_api_port} --rm --network=host --name example-workflow-service -it example-workflow-service
    echo "Waiting example workflow API ready"
    n=0 # initialize the retry counter before the loop
    until [[ "$n" -ge 100 ]] || [[ $ready == true ]]; do
        docker logs example-workflow-service &> ${LOG_PATH}/example-workflow-service.log
        n=$((n+1))
        if grep -q "Uvicorn running on" ${LOG_PATH}/example-workflow-service.log; then
            break
        fi
        if grep -q "No such container" ${LOG_PATH}/example-workflow-service.log; then
            echo "container example-workflow-service not found"
            exit 1
        fi
        sleep 5s
    done
}

function main() {
    echo "==================== Start example workflow API ===================="
    start_example_workflow_api
    echo "==================== Example workflow API started ===================="
}

main
43 changes: 43 additions & 0 deletions WorkflowExecAgent/tests/4_validate_agent.sh
@@ -0,0 +1,43 @@
#!/bin/bash
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

set -e

WORKPATH=$(dirname "$PWD")
export WORKDIR=$WORKPATH/../../
echo "WORKDIR=${WORKDIR}"
export ip_address=$(hostname -I | awk '{print $1}')
query=$1
validate_result=$2

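# validate(): substring-match the agent's response against the expected
# result and log PASS/FAIL; validate_agent_service() sends the test query
# to the agent endpoint and runs this check.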
function validate() {
    local CONTENT="$1"
    local EXPECTED_RESULT="$2"
    local SERVICE_NAME="$3"

    if echo "$CONTENT" | grep -q "$EXPECTED_RESULT"; then
        echo "[ $SERVICE_NAME ] Content is as expected: $CONTENT"
        echo "[TEST INFO]: Workflow Executor agent service PASSED"
    else
        echo "[ $SERVICE_NAME ] Content does not match the expected result: $CONTENT"
        echo "[TEST INFO]: Workflow Executor agent service FAILED"
    fi
}

function validate_agent_service() {
    echo "----------------Test agent ----------------"
    local CONTENT=$(curl http://${ip_address}:9091/v1/chat/completions -X POST -H "Content-Type: application/json" -d '{
        "messages": "'"${query}"'"
    }')
    validate "$CONTENT" "$validate_result" "workflowexec-agent-endpoint"
    docker logs workflowexec-agent-endpoint
}

function main() {
    echo "==================== Validate agent service ===================="
    validate_agent_service
    echo "==================== Agent service validated ===================="
}

main