# Skill: AI Provider - synthetic.new

## Description

Primary AI provider for the Mylder platform. OpenAI-compatible API with access to DeepSeek, Kimi, and other high-performance models.

## Status

**PRIMARY** - Use for all AI tasks unless a fallback is needed.

## Configuration

```yaml
provider: synthetic.new
base_url: https://api.synthetic.new/openai/v1
api_key_env: SYNTHETIC_AI_API_KEY
compatibility: openai
rate_limit: 100 requests/minute
```
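
Because the provider is OpenAI-compatible, any OpenAI-style client can be pointed at the base URL above. A minimal sketch using plain `fetch` (the `chatCompletion` helper is illustrative, not part of any SDK):

```javascript
// Minimal chat-completion call against the OpenAI-compatible endpoint.
// Assumes SYNTHETIC_AI_API_KEY is set in the environment (Node 18+ for global fetch).
async function chatCompletion(messages, model = 'hf:deepseek-ai/DeepSeek-V3') {
  const res = await fetch('https://api.synthetic.new/openai/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.SYNTHETIC_AI_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ model, messages, max_tokens: 1000 })
  });

  if (!res.ok) throw new Error(`synthetic.new request failed: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}

// Example: const reply = await chatCompletion([{ role: 'user', content: 'Hello' }]);
```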

## Available Models

### DeepSeek-V3 (Coding & Implementation)

```json
{
  "model_id": "hf:deepseek-ai/DeepSeek-V3",
  "best_for": ["code_generation", "implementation", "debugging", "refactoring"],
  "context_window": 128000,
  "max_output": 8192,
  "temperature_range": [0.0, 2.0],
  "recommended_temp": 0.3
}
```

**Use when:**

- Writing production code
- Debugging issues
- Code refactoring
- API implementation
- Database queries

### Kimi-K2-Thinking (Planning & Reasoning)

```json
{
  "model_id": "hf:moonshotai/Kimi-K2-Thinking",
  "best_for": ["planning", "reasoning", "analysis", "architecture"],
  "context_window": 200000,
  "max_output": 4096,
  "temperature_range": [0.0, 1.0],
  "recommended_temp": 0.5
}
```

**Use when:**

- Task planning
- Architecture decisions
- Complex reasoning
- Research synthesis
- Trade-off analysis
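
The spec fields above map directly onto OpenAI-style request parameters. A small illustrative helper (not part of the provider API) showing that mapping:

```javascript
// Build a request body from one of the model specs above.
// `spec` uses the same field names as the JSON blocks (model_id, max_output, recommended_temp).
function buildRequestBody(spec, messages) {
  return {
    model: spec.model_id,
    messages,
    max_tokens: spec.max_output,
    temperature: spec.recommended_temp
  };
}
```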

## Model Selection Logic

```javascript
// Map a task type to the most suitable model.
// `complexity` is accepted for future routing rules but is not used yet.
function selectModel(taskType, complexity) {
  const modelMap = {
    // Planning & Design
    'research': 'hf:moonshotai/Kimi-K2-Thinking',
    'planning': 'hf:moonshotai/Kimi-K2-Thinking',
    'architecture': 'hf:moonshotai/Kimi-K2-Thinking',
    'analysis': 'hf:moonshotai/Kimi-K2-Thinking',

    // Implementation
    'code': 'hf:deepseek-ai/DeepSeek-V3',
    'implementation': 'hf:deepseek-ai/DeepSeek-V3',
    'debugging': 'hf:deepseek-ai/DeepSeek-V3',
    'testing': 'hf:deepseek-ai/DeepSeek-V3',
    'review': 'hf:deepseek-ai/DeepSeek-V3',

    // Default
    'default': 'hf:deepseek-ai/DeepSeek-V3'
  };

  return modelMap[taskType] || modelMap.default;
}
```
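
Illustrative calls:

```javascript
selectModel('planning');   // -> 'hf:moonshotai/Kimi-K2-Thinking'
selectModel('debugging');  // -> 'hf:deepseek-ai/DeepSeek-V3'
selectModel('unknown');    // -> 'hf:deepseek-ai/DeepSeek-V3' (default)
```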

## n8n Integration

### HTTP Request Node Configuration

```json
{
  "method": "POST",
  "url": "https://api.synthetic.new/openai/v1/chat/completions",
  "headers": {
    "Authorization": "Bearer {{ $env.SYNTHETIC_AI_API_KEY }}",
    "Content-Type": "application/json"
  },
  "body": {
    "model": "hf:deepseek-ai/DeepSeek-V3",
    "messages": [
      { "role": "system", "content": "{{ systemPrompt }}" },
      { "role": "user", "content": "{{ userPrompt }}" }
    ],
    "max_tokens": 4000,
    "temperature": 0.7
  },
  "timeout": 120000
}
```

### Code Node Helper

```javascript
// AI Request Helper for n8n Code Node
async function callSyntheticAI(systemPrompt, userPrompt, options = {}) {
  const {
    model = 'hf:deepseek-ai/DeepSeek-V3',
    maxTokens = 4000,
    temperature = 0.7
  } = options;

  const response = await $http.request({
    method: 'POST',
    url: 'https://api.synthetic.new/openai/v1/chat/completions',
    headers: {
      'Authorization': `Bearer ${$env.SYNTHETIC_AI_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: {
      model,
      messages: [
        { role: 'system', content: systemPrompt },
        { role: 'user', content: userPrompt }
      ],
      max_tokens: maxTokens,
      temperature
    }
  });

  return response.choices[0].message.content;
}
```
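
A possible call site inside a Code node, assuming incoming items carry a `text` field (the field name and prompts are illustrative):

```javascript
// Summarise each incoming item with the helper above and attach the result.
const results = [];
for (const item of $input.all()) {
  const summary = await callSyntheticAI(
    'You are a concise technical summariser.',
    item.json.text,
    { temperature: 0.3, maxTokens: 500 }
  );
  results.push({ json: { ...item.json, summary } });
}
return results;
```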

## Error Handling

### Retry Strategy

```javascript
const retryConfig = {
  maxRetries: 3,
  retryDelay: 1000, // ms
  retryOn: [429, 500, 502, 503, 504],
  fallbackProvider: 'z.ai' // Switch to fallback after max retries
};
```
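
The config above only declares the policy; one way to apply it is a wrapper along these lines (a sketch only — `callZAI` stands in for the z.ai fallback helper and is not defined here):

```javascript
// Apply retryConfig to any request function, with exponential backoff and a final fallback.
async function withRetry(requestFn, fallbackFn, config = retryConfig) {
  for (let attempt = 0; attempt <= config.maxRetries; attempt++) {
    try {
      return await requestFn();
    } catch (err) {
      const status = err.status ?? err.response?.status;
      if (!config.retryOn.includes(status)) throw err; // e.g. 401: not retryable
      if (attempt === config.maxRetries) break;        // retries exhausted
      // Exponential backoff: 1s, 2s, 4s, ...
      await new Promise(r => setTimeout(r, config.retryDelay * 2 ** attempt));
    }
  }
  // Max retries reached on a retryable error: switch to the fallback provider.
  return fallbackFn();
}

// Usage sketch:
// const text = await withRetry(
//   () => callSyntheticAI(systemPrompt, userPrompt),
//   () => callZAI(systemPrompt, userPrompt) // hypothetical z.ai helper
// );
```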

### Common Errors

| Error Code | Cause | Action |
|------------|-------|--------|
| 401 | Invalid API key | Check SYNTHETIC_AI_API_KEY |
| 429 | Rate limit | Retry with backoff or fallback to z.ai |
| 500 | Server error | Retry or fallback |
| Timeout | Long response | Increase timeout or reduce max_tokens |

## Cost Optimization

### Token Estimation

```javascript
// Rough estimate: 1 token ≈ 4 characters
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

// Budget check before request
function checkBudget(input, maxOutput, budgetTokens) {
  const inputTokens = estimateTokens(input);
  const totalEstimate = inputTokens + maxOutput;
  return totalEstimate <= budgetTokens;
}
```

### Best Practices

1. **Use appropriate model** - Kimi for reasoning, DeepSeek for coding
2. **Set max_tokens wisely** - Don't over-allocate
3. **Cache common responses** - Use a KV store for repeated queries (see the sketch after this list)
4. **Batch similar requests** - Group related tasks
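
A minimal caching sketch for practice 3, using an in-memory `Map` as a stand-in for the KV store (swap in the platform's real KV store; the key scheme is illustrative):

```javascript
// In-memory stand-in for a KV store; replace with the platform's KV store in production.
const responseCache = new Map();

async function cachedSyntheticAI(systemPrompt, userPrompt, options = {}) {
  // Key on the full request; for a real KV store, hash this string first.
  const key = JSON.stringify([systemPrompt, userPrompt, options]);

  if (responseCache.has(key)) return responseCache.get(key);

  const answer = await callSyntheticAI(systemPrompt, userPrompt, options);
  responseCache.set(key, answer);
  return answer;
}
```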

## Testing

```bash
# Test API connection
curl -X POST https://api.synthetic.new/openai/v1/chat/completions \
  -H "Authorization: Bearer $SYNTHETIC_AI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "hf:deepseek-ai/DeepSeek-V3",
    "messages": [{"role": "user", "content": "Hello"}],
    "max_tokens": 50
  }'
```

## Related Skills

- `ai-providers/z-ai.md` - Fallback provider
- `code/implement.md` - Code generation with AI
- `design-thinking/ideate.md` - Solution brainstorming

## Token Budget

- Max input: 500 tokens
- Max output: 800 tokens

## Model

- Recommended: haiku (configuration lookup)