This system allows AI agents to communicate with each other over the Internet, facilitating a turn-based conversation between users through AI intermediaries.
- Server: HTTP server that broadcasts messages between agents using Server-Sent Events (SSE)
- Local Agent: Runs locally on a user's machine and interacts with both the user and remote agents through Azure OpenAI
- Install dependencies: `pip install -r requirements.txt`
- Configure the application:
  - Copy `.env.example` to `.env` and modify as needed
  - Set your Azure OpenAI API key and endpoint in the `.env` file
  - Set the server address in the config file
- Start the server: `python run_server.py`
- Start the local agent on each machine: `python run_agent.py`
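A plausible `.env` layout, using the variable names listed in the configuration section of this README (all values are placeholders; the port in `SERVER_ADDRESS` is an assumption — use whatever your server actually listens on):

```env
# Azure OpenAI resource settings (placeholders)
AZURE_OPENAI_API_KEY=your-api-key
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/
AZURE_OPENAI_DEPLOYMENT_NAME=your-deployment

# Agent identity and server location
AGENT_NAME=alice
SERVER_ADDRESS=http://localhost:8000
```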
- A user instructs their local agent
- The agent processes the input using Azure OpenAI
- The agent broadcasts the message to all other connected agents
- Other agents receive the message and display it to their local users
- Users can respond, continuing the conversation
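The "receive" step depends on decoding the server's SSE stream. The sketch below shows the minimal parsing an agent might perform — it is illustrative, not the repository's actual client code, and a real client should also handle the `event:` and `id:` fields of the SSE format:

```python
def parse_sse_events(raw: str) -> list[str]:
    """Extract the data payloads from a raw SSE stream.

    Events are separated by a blank line; each data line begins with
    'data:'. Multiple data lines within one event are joined with '\n'.
    """
    events = []
    for block in raw.split("\n\n"):
        data_lines = [line[5:].lstrip() for line in block.split("\n")
                      if line.startswith("data:")]
        if data_lines:
            events.append("\n".join(data_lines))
    return events
```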
- The system uses Autogen with Azure OpenAI for AI agent functionality
- Communication between agents is handled via Server-Sent Events (SSE)
- Messages are broadcast to all connected agents except the sender
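The broadcast rule above (deliver to everyone except the sender) can be sketched with one pending-message queue per connected agent. The `Broadcaster` class and its method names here are hypothetical, not the repository's actual API:

```python
import queue


class Broadcaster:
    """Fan-out sketch: each connected agent gets its own queue, and a
    message is pushed to every queue except the sender's."""

    def __init__(self) -> None:
        self.clients: dict[str, queue.Queue] = {}  # agent name -> pending events

    def connect(self, agent_name: str) -> queue.Queue:
        """Register an agent and return the queue its SSE stream drains."""
        q = queue.Queue()
        self.clients[agent_name] = q
        return q

    def broadcast(self, sender: str, message: str) -> None:
        """Deliver a message to every connected agent except the sender."""
        for name, q in self.clients.items():
            if name != sender:  # skip the sender, per the rule above
                q.put(message)
```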
The server can be deployed to Azure App Service:
- Create an Azure App Service with Python runtime
- Set the following environment variables in the App Service configuration:
- SERVER_HOST
- SERVER_PORT
- Deploy the code to the App Service
- Update the `SERVER_ADDRESS` in the local `.env` files to point to your deployed App Service URL
For the local agents, configure:
- Azure OpenAI resource settings:
  - `AZURE_OPENAI_API_KEY`: Your Azure OpenAI API key
  - `AZURE_OPENAI_ENDPOINT`: Your Azure OpenAI endpoint URL
  - `AZURE_OPENAI_DEPLOYMENT_NAME`: The name of your deployed model in Azure OpenAI
- `AGENT_NAME`: Unique identifier for each agent
- `SERVER_ADDRESS`: The URL of the deployed server
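At startup, an agent could check that all of these settings are present before connecting. `load_settings` below is a hypothetical helper illustrating that check, not part of the repository:

```python
import os

# Settings the local agent needs, as listed in this README.
REQUIRED = [
    "AZURE_OPENAI_API_KEY",
    "AZURE_OPENAI_ENDPOINT",
    "AZURE_OPENAI_DEPLOYMENT_NAME",
    "AGENT_NAME",
    "SERVER_ADDRESS",
]


def load_settings(env=None) -> dict:
    """Return the required settings, raising if any are missing or empty."""
    env = os.environ if env is None else env
    missing = [key for key in REQUIRED if not env.get(key)]
    if missing:
        raise RuntimeError(f"Missing settings: {', '.join(missing)}")
    return {key: env[key] for key in REQUIRED}
```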