Self-Guided Learners
These instructions are for self-guided learners who are not part of the AI Tour and do not have access to a pre-configured lab environment. Follow these steps to set up your environment and begin the workshop.
Introduction
This workshop is designed to teach you about the Azure AI Agents Service and the Python SDK. It consists of multiple labs, each highlighting a specific feature of the Azure AI Agents Service. The labs are meant to be completed in order, as each one builds on the knowledge and work from the previous lab.
Prerequisites
- Access to an Azure subscription. If you don't have an Azure subscription, create a free account before you begin.
- You need a GitHub account. If you don’t have one, create it at GitHub.
GitHub Codespaces
The preferred way to run this workshop is using GitHub Codespaces. This option provides a pre-configured environment with all the tools and resources needed to complete the workshop.
Select Open in GitHub Codespaces to open the project in GitHub Codespaces.
It will take several minutes to build the Codespace, so carry on reading the instructions while it builds.
Lab Structure
Each lab in this workshop includes:
- An Introduction: Explains the relevant concepts.
- An Exercise: Guides you through the process of implementing the feature.
Project Structure
The workshop’s source code is located in the src/workshop folder. Be sure to familiarize yourself with the key subfolders and files you’ll be working with throughout the session.
- The files folder: Contains the files created by the agent app.
- The instructions folder: Contains the instructions passed to the LLM.
- main.py: The entry point for the app, containing its main logic.
- sales_data.py: Contains the function logic to execute dynamic SQL queries against the SQLite database.
- stream_event_handler.py: Contains the event handler logic for token streaming.
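To give a feel for what a helper like sales_data.py does, here is a minimal sketch of running a dynamically generated SQL query against SQLite. The table name, columns, and function name are illustrative, not the workshop's actual schema or code.

```python
import sqlite3

def execute_query(conn: sqlite3.Connection, sql: str) -> list[tuple]:
    """Execute a dynamically generated SQL query and return all rows."""
    return conn.execute(sql).fetchall()

# In-memory database standing in for the workshop's SQLite sales database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, revenue REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("EMEA", 1200.0), ("EMEA", 800.0), ("APAC", 950.0)])

rows = execute_query(
    conn, "SELECT region, SUM(revenue) FROM sales GROUP BY region ORDER BY region")
print(rows)  # [('APAC', 950.0), ('EMEA', 2000.0)]
conn.close()
```

In the workshop, the SQL text itself is generated by the LLM, which is why the helper accepts an arbitrary query string rather than fixed parameters.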
Authenticate with Azure
You need to authenticate with Azure so the agent app can access the Azure AI Agents Service and models. Follow these steps:
- Ensure the Codespace has been created.
- In the Codespace, open a new terminal window by selecting Terminal > New Terminal from the VS Code menu.
- Run the following command to authenticate with Azure:

  az login --use-device-code

  Note: You'll be prompted to open a browser link and log in to your Azure account.
- A browser window will open automatically; select your account type and click Next.
- Sign in with your Azure subscription Username and Password.
- Select OK, then Done.
- Select the appropriate subscription from the command line.
- Leave the terminal window open for the next steps.
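Once you have signed in, the workshop's Python code can pick up your CLI login automatically. A minimal sketch, assuming the azure-identity package (preinstalled in the Codespace); the snippet is guarded so it runs even where the package is missing:

```python
# Sketch: how the agent app authenticates using your `az login` session.
try:
    from azure.identity import DefaultAzureCredential
    # DefaultAzureCredential tries several sources in order, including
    # environment variables and the Azure CLI login you just completed.
    credential = DefaultAzureCredential()
except ImportError:
    credential = None  # azure-identity not installed in this environment
```

This is why no keys or secrets appear in the workshop code: the credential object is passed to the SDK clients, which resolve access tokens on demand.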
Deploy the Azure Resources
The following resources will be created in your Azure subscription:
- An Azure AI Foundry hub named agent-wksp
- An Azure AI Foundry project named Agent Service Workshop
- A Serverless (pay-as-you-go) GPT-4o model deployment named gpt-4o (Global 2024-08-06). See pricing details here.
You will need 140K TPM quota availability in the westus region for the gpt-4o Global Standard SKU. Review your quota availability in the AI Foundry Management Center. You can change the requested region and TPM limit by modifying the environment variables in the file infra/deploy.sh.
From the VS Code terminal run the following command:
cd infra && ./deploy.sh
Manual Deployment
If you prefer not to use the deploy.sh script, you can deploy the resources manually using the Azure AI Foundry studio as follows:
- Visit ai.azure.com and sign in to your account
- Click "+ Create project"
- Project name: agent-workshop
- Hub: Create new hub, name: agent-workshop-hub
- Click Create and wait for the project to be created
- In "My assets", click "Models + endpoints"
- Click Deploy Model / Deploy Base Model
- Select gpt-4o, click Confirm
- Deployment name: gpt-4o
- Deployment type: Global Standard
- Click Customize
- Tokens Per Minute Rate Limit: 10k
- Click Deploy
Note
A specific version of GPT-4o may be required depending on the region where you deployed your project. See Models: Assistants (Preview) for details.
- Click "Models + endpoints"
- Click to select the gpt-4o model (a blue checkbox will appear to the left of its name)
- Click "Edit" in the header
- Under "Model version", select "2024-08-06"
- Click Save and Close
Workshop Configuration File
The deploy script generates the src/workshop/.env file, which contains the project connection string, model deployment name, and Bing connection name.
Your .env file should look similar to this but with your project connection string.
MODEL_DEPLOYMENT_NAME="gpt-4o"
BING_CONNECTION_NAME="Grounding-with-Bing-Search"
PROJECT_CONNECTION_STRING="<your_project_connection_string>"
If you deployed your resources using the manual process and not the deploy.sh script, first create the file using the command below:
cp src/workshop/.env.sample src/workshop/.env
Then edit the file src/workshop/.env to provide the project connection string. You can find this in the AI Foundry studio, on the Overview page for your project (look in the Project details section).
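To sanity-check that your .env file is well formed, here is one way to parse it using only the standard library (the workshop itself may load it with a package such as python-dotenv; the file name below is a stand-in):

```python
from pathlib import Path

def load_env(path: str) -> dict[str, str]:
    """Parse simple KEY="value" lines from a .env-style file."""
    env: dict[str, str] = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks and comments
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"')
    return env

# Write a sample file matching the format shown above, then parse it.
sample = Path("sample.env")
sample.write_text(
    'MODEL_DEPLOYMENT_NAME="gpt-4o"\n'
    'BING_CONNECTION_NAME="Grounding-with-Bing-Search"\n'
    'PROJECT_CONNECTION_STRING="<your_project_connection_string>"\n'
)
env = load_env("sample.env")
print(env["MODEL_DEPLOYMENT_NAME"])  # gpt-4o
sample.unlink()
```

If any of the three keys is missing or the connection string is still the placeholder, the labs will fail at startup, so this is worth checking before moving on.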
