diff --git a/README.md b/README.md
index a732bc9b..28c7a12c 100644
--- a/README.md
+++ b/README.md
@@ -260,7 +260,7 @@ The table evaluates the frameworks in the following aspects:
We provide a comparison to validate our implementation of TextGrad in Trace:
-
+
To produce this table, we ran the TextGrad pip-installed repo on 2024-10-30, and we also include the numbers reported in the TextGrad paper.
@@ -269,6 +269,31 @@ The LLM APIs are called around the same time to ensure a fair comparison. TextGr
You can also easily implement your own optimizer that works directly with `TraceGraph` (more tutorials on how to work
with TraceGraph coming soon).
+## LLM API Setup
+
+Currently we rely on AutoGen for LLM caching and API-key management.
+AutoGen reads `OAI_CONFIG_LIST`, a file placed in your working directory with the following format:
+
+```json
+[
+ {
+ "model": "gpt-4",
+ "api_key": ""
+ },
+ {
+    "model": "claude-3-5-sonnet-latest",
+ "api_key": ""
+ }
+]
+```
+You can switch between LLM models by changing the `model` field in this configuration file.
+
+You can also set the environment variable `OAI_CONFIG_LIST` to point to the location of this file, or set its value directly to a JSON string with the same content.
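+
+For example, assuming a POSIX shell (the file path and API key below are placeholders):
+
+```shell
+# Point AutoGen at a config file outside the working directory:
+export OAI_CONFIG_LIST=/path/to/OAI_CONFIG_LIST
+
+# ...or supply the configuration inline as a JSON string:
+export OAI_CONFIG_LIST='[{"model": "gpt-4o", "api_key": "sk-..."}]'
+```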
+
+For convenience, we also provide a method that reads the API key directly from the environment variable `OPENAI_API_KEY` or `ANTHROPIC_API_KEY`.
+However, in that case the model version is fixed: `gpt-4o` for OpenAI and `claude-3-5-sonnet-latest` for Anthropic.
+
## Citation
If you use this code in your research please cite the following [publication](https://arxiv.org/abs/2406.16218):