108 changes: 106 additions & 2 deletions docs/configuration.md
@@ -41,13 +41,14 @@ Taskmaster uses two primary methods for configuration:
"defaultTag": "master",
"projectName": "Your Project Name",
"ollamaBaseURL": "http://localhost:11434/api",
"azureBaseURL": "https://your-endpoint.azure.com/",
"azureBaseURL": "https://your-endpoint.azure.com/openai/deployments",
"vertexProjectId": "your-gcp-project-id",
"vertexLocation": "us-central1"
}
}
```


2. **Legacy `.taskmasterconfig` File (Backward Compatibility)**

- For projects that haven't migrated to the new structure yet.
@@ -129,13 +130,15 @@ ANTHROPIC_API_KEY=sk-ant-api03-your-key-here
PERPLEXITY_API_KEY=pplx-your-key-here
# OPENAI_API_KEY=sk-your-key-here
# GOOGLE_API_KEY=AIzaSy...
# AZURE_OPENAI_API_KEY=your-azure-openai-api-key-here
# etc.

# Optional Endpoint Overrides
# Use a specific provider's base URL, e.g., for an OpenAI-compatible API
# OPENAI_BASE_URL=https://api.third-party.com/v1
#
# AZURE_OPENAI_ENDPOINT=https://your-azure-endpoint.openai.azure.com/
# Azure OpenAI Configuration
# AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com/
# AZURE_OPENAI_ENDPOINT=https://your-endpoint-name.cognitiveservices.azure.com/openai/deployments
# OLLAMA_BASE_URL=http://custom-ollama-host:11434/api

# Google Vertex AI Configuration (Required if using 'vertex' provider)
@@ -207,3 +210,104 @@ Google Vertex AI is Google Cloud's enterprise AI platform and requires specific
"vertexLocation": "us-central1"
}
```

### Azure OpenAI Configuration

Azure OpenAI provides enterprise-grade OpenAI models through Microsoft's Azure cloud platform and requires specific configuration:

1. **Prerequisites**:
- An Azure account with an active subscription
- Azure OpenAI service resource created in the Azure portal
- Azure OpenAI API key and endpoint URL
- Deployed models (e.g., gpt-4o, gpt-4o-mini, gpt-4.1) in your Azure OpenAI resource

2. **Authentication**:
- Set the `AZURE_OPENAI_API_KEY` environment variable with your Azure OpenAI API key
- Configure the endpoint URL using one of the methods below

3. **Configuration Options**:

**Option 1: Using Global Azure Base URL (affects all Azure models)**
```json
// In .taskmaster/config.json
{
"models": {
"main": {
"provider": "azure",
"modelId": "gpt-4o",
"maxTokens": 16000,
"temperature": 0.7
},
"fallback": {
"provider": "azure",
"modelId": "gpt-4o-mini",
"maxTokens": 10000,
"temperature": 0.7
}
},
"global": {
"azureBaseURL": "https://your-resource-name.azure.com/openai/deployments"
}
}
```

**Option 2: Using Per-Model Base URLs (recommended for flexibility)**
```json
// In .taskmaster/config.json
{
"models": {
"main": {
"provider": "azure",
"modelId": "gpt-4o",
"maxTokens": 16000,
"temperature": 0.7,
"baseURL": "https://your-resource-name.azure.com/openai/deployments"
},
"research": {
"provider": "perplexity",
"modelId": "sonar-pro",
"maxTokens": 8700,
"temperature": 0.1
},
"fallback": {
"provider": "azure",
"modelId": "gpt-4o-mini",
"maxTokens": 10000,
"temperature": 0.7,
"baseURL": "https://your-resource-name.azure.com/openai/deployments"
}
}
}
```

4. **Environment Variables**:
```bash
# In .env file
AZURE_OPENAI_API_KEY=your-azure-openai-api-key-here

# Optional: Override endpoint for all Azure models
AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com/openai/deployments
```
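
As a quick sanity check of the key and endpoint, you can call the deployment directly with `curl`. This is only a sketch: the deployment name `gpt-4o` and the `api-version` value are placeholders that may need adjusting for your resource, and it assumes the endpoint already includes the `/openai/deployments` path as in the examples above.

```bash
# Minimal connectivity check for an Azure OpenAI deployment.
# Placeholders: "gpt-4o" (your deployment name) and the api-version value.
curl "${AZURE_OPENAI_ENDPOINT%/}/gpt-4o/chat/completions?api-version=2024-02-01" \
  -H "Content-Type: application/json" \
  -H "api-key: ${AZURE_OPENAI_API_KEY}" \
  -d '{"messages":[{"role":"user","content":"ping"}],"max_tokens":5}'
```

A 200 response means the key, endpoint, and deployment name line up; a 404 usually points at a wrong deployment name or a missing `/openai/deployments` segment.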

5. **Important Notes**:
- **Model Deployment Names**: The `modelId` in your configuration should match the **deployment name** you created in Azure OpenAI Studio, not the underlying model name
- **Base URL Priority**: Per-model `baseURL` settings override the global `azureBaseURL` setting
- **Endpoint Format**: When using per-model `baseURL`, use the full path including `/openai/deployments`

6. **Troubleshooting**:

**"Resource not found" errors:**
- Ensure your `baseURL` includes the full path: `https://your-resource-name.openai.azure.com/openai/deployments`
- Verify that your deployment name in `modelId` exactly matches what's configured in Azure OpenAI Studio
- Check that your Azure OpenAI resource is in the correct region and properly deployed

**Authentication errors:**
- Verify your `AZURE_OPENAI_API_KEY` is correct and has not expired
- Ensure your Azure OpenAI resource has the necessary permissions
- Check that your subscription has not been suspended or reached quota limits

**Model availability errors:**
- Confirm the model is deployed in your Azure OpenAI resource
- Verify the deployment name matches your configuration exactly (case-sensitive)
- Ensure the model deployment is in a "Succeeded" state in Azure OpenAI Studio
- Ensure you're not being rate limited: keep `maxTokens` within the Tokens per Minute (TPM) rate limit configured for your deployment
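
If the Azure CLI is installed, listing the deployments on your resource is a quick way to confirm the exact deployment names and their provisioning state. This is a sketch; the resource name and resource group below are placeholders for your own values.

```bash
# List deployments on an Azure OpenAI resource to confirm names and state.
# Placeholders: MyOpenAIResource (resource name), MyResourceGroup (resource group).
az cognitiveservices account deployment list \
  --name MyOpenAIResource \
  --resource-group MyResourceGroup \
  --output table
```

The deployment names shown in the output are what `modelId` must match, character for character.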
32 changes: 32 additions & 0 deletions scripts/modules/supported-models.json
@@ -57,6 +57,38 @@
"max_tokens": 64000
}
],
"azure": [
{
"id": "gpt-4o",
"swe_score": 0.332,
"cost_per_1m_tokens": {
"input": 2.5,
"output": 10.0
},
"allowed_roles": ["main", "fallback"],
"max_tokens": 16384
},
{
"id": "gpt-4o-mini",
"swe_score": 0.3,
"cost_per_1m_tokens": {
"input": 0.15,
"output": 0.6
},
"allowed_roles": ["main", "fallback"],
"max_tokens": 16384
},
{
"id": "gpt-4-1",
"swe_score": 0,
"cost_per_1m_tokens": {
"input": 2.0,
"output": 10.0
},
"allowed_roles": ["main", "fallback"],
"max_tokens": 16384
}
],
"openai": [
{
"id": "gpt-4o",