|
15 | 15 | "## Before you begin\n", |
16 | 16 | "\n", |
17 | 17 | "### Prerequisite\n", |
18 | | - "The AI Red Teaming Agent requires an Azure AI Foundry project configuration and Azure credentials. Your project configuration will be used to log red teaming scan results after the run is finished.\n", |
| 18 | + "First, if you have an Azure subscription, create an [Azure AI hub](https://learn.microsoft.com/en-us/azure/ai-studio/concepts/ai-resources) then [create an Azure AI project](https://learn.microsoft.com/en-us/azure/ai-studio/concepts/ai-resources). AI projects and Hubs can be served within a private network and are compatible with private endpoints. You **do not** need to provide your own LLM deployment as the AI Red Teaming Agent hosts adversarial models for both simulation and evaluation of harmful content and connects to it via your Azure AI project.\n", |
19 | 19 | "\n", |
20 | | - "**Important**: Make sure to authenticate to Azure using `az login` in your terminal before running this notebook.\n", |
| 20 | + "**Note**: In order to upload your results to Azure AI Foundry, you must have the `Storage Blob Data Contributor` role\n", |
| 21 | + "\n", |
| 22 | + "**Important**: First, ensure that you've installed the [Azure CLI](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli) and then make sure to authenticate to Azure using `az login` in your terminal before running this notebook.\n", |
21 | 23 | "\n", |
22 | 24 | "### Installation\n", |
23 | 25 | "From a terminal window, navigate to your working directory which contains this sample notebook, and execute the following.\n", |
|
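Before running the cells below, it can help to confirm that `DefaultAzureCredential` can pick up the session created by `az login`; both the red teaming scan and the model callback later in this notebook authenticate this way. A minimal sketch (the Cognitive Services scope shown matches the one used in the callback below):

```python
# Sanity check: confirm DefaultAzureCredential can acquire a token,
# e.g. from the Azure CLI session created by `az login`.
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()
token = credential.get_token("https://cognitiveservices.azure.com/.default")
print("Token acquired; expires on:", token.expires_on)
```
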
95 | 97 | "AZURE_OPENAI_API_KEY=\"your-api-key-here\"\n", |
96 | 98 | "AZURE_OPENAI_ENDPOINT=\"https://endpoint-name.openai.azure.com/openai/deployments/deployment-name/chat/completions\"\n", |
97 | 99 | "AZURE_OPENAI_DEPLOYMENT_NAME=\"gpt-4\"\n", |
98 | | - "AZURE_OPENAI_API_VERSION=\"2023-12-01-preview\"\n", |
| 100 | + "AZURE_OPENAI_API_VERSION=\"2024-12-01-preview\"\n", |
99 | 101 | "\n", |
100 | 102 | "# Azure AI Project\n", |
101 | 103 | "AZURE_SUBSCRIPTION_ID=\"12345678-1234-1234-1234-123456789012\"\n", |
|
123 | 125 | " \"AZURE_OPENAI_ENDPOINT\"\n", |
124 | 126 | ") # e.g., \"https://endpoint-name.openai.azure.com/openai/deployments/deployment-name/chat/completions\"\n", |
125 | 127 | "azure_openai_api_key = os.environ.get(\"AZURE_OPENAI_API_KEY\") # e.g., \"your-api-key\"\n", |
126 | | - "azure_openai_api_version = \"2023-12-01-preview\" # Use the latest API version" |
| 128 | + "azure_openai_api_version = os.environ.get(\"AZURE_OPENAI_API_VERSION\") # Use the latest API version" |
127 | 129 | ] |
128 | 130 | }, |
129 | 131 | { |
|
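The `os.environ.get` calls above assume these values are already present in the process environment. If you keep them in a `.env` file as shown, one option (an assumption, not something this notebook requires) is to load the file with `python-dotenv` before running that cell:

```python
# Optional: load the .env file into os.environ so the os.environ.get calls
# above resolve. Requires `pip install python-dotenv`.
from dotenv import load_dotenv

load_dotenv()  # looks for a .env file in the current working directory
```
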
236 | 238 | " target=financial_advisor_callback,\n", |
237 | 239 | " scan_name=\"Basic-Callback-Scan\",\n", |
238 | 240 | " attack_strategies=[AttackStrategy.Flip],\n", |
239 | | - " output_file=\"red_team_output.json\",\n", |
| 241 | + " output_path=\"red_team_output.json\",\n", |
240 | 242 | ")" |
241 | 243 | ] |
242 | 244 | }, |
|
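Because `output_path` writes the scan results to a local JSON file, you can inspect them once the run completes. The exact schema isn't shown in this diff, so the sketch below only loads the file and lists its top-level keys:

```python
# Inspect the results written by output_path; the schema is not shown here,
# so we only surface the top-level keys.
import json

with open("red_team_output.json") as f:
    results = json.load(f)

print(list(results.keys()))
```
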
304 | 306 | " session_state: Optional[str] = None, # noqa: ARG001\n", |
305 | 307 | " context: Optional[Dict[str, Any]] = None, # noqa: ARG001\n", |
306 | 308 | ") -> dict[str, list[dict[str, str]]]:\n", |
307 | | - " deployment = os.environ.get(\"AZURE_DEPLOYMENT_NAME\")\n", |
308 | | - " endpoint = os.environ.get(\"AZURE_ENDPOINT\")\n", |
309 | | - " api_version = os.environ.get(\"AZURE_API_VERSION\")\n", |
310 | | - "\n", |
311 | 309 | " # Get token provider for Azure AD authentication\n", |
312 | 310 | " token_provider = get_bearer_token_provider(DefaultAzureCredential(), \"https://cognitiveservices.azure.com/.default\")\n", |
313 | 311 | "\n", |
314 | 312 | " # Initialize Azure OpenAI client\n", |
315 | | - " client = AzureOpenAI(azure_endpoint=endpoint, api_version=api_version, azure_ad_token_provider=token_provider)\n", |
| 313 | + " client = AzureOpenAI(\n", |
| 314 | + " azure_endpoint=azure_openai_endpoint,\n", |
| 315 | + " api_version=azure_openai_api_version,\n", |
| 316 | + " azure_ad_token_provider=token_provider,\n", |
| 317 | + " )\n", |
316 | 318 | "\n", |
317 | 319 | " ## Extract the latest message from the conversation history\n", |
318 | 320 | " messages_list = [{\"role\": message.role, \"content\": message.content} for message in messages]\n", |
|
321 | 323 | " try:\n", |
322 | 324 | " # Call the model\n", |
323 | 325 | " response = client.chat.completions.create(\n", |
324 | | - " model=deployment,\n", |
| 326 | + " model=azure_openai_deployment,\n", |
325 | 327 | " messages=[\n", |
326 | 328 | " {\"role\": \"user\", \"content\": latest_message},\n", |
327 | 329 | " ],\n", |
|
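Since the callback only reads `.role` and `.content` from each incoming message and, per its return annotation, responds with a `{"messages": [...]}` dictionary, you can smoke-test it outside a scan with simple stand-in objects. The function name below is a placeholder for the callback defined in this cell, and top-level `await` works because this runs inside the notebook:

```python
# Hypothetical smoke test: `azure_openai_callback` is a placeholder name for
# the callback defined above; SimpleNamespace stands in for the message
# objects, of which only .role and .content are read.
from types import SimpleNamespace

sample_messages = [SimpleNamespace(role="user", content="Should I put all my savings into one stock?")]
response = await azure_openai_callback(messages=sample_messages)
print(response["messages"][-1]["content"])  # assumes the {"messages": [...]} shape from the annotation
```
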