Everything you need to get started and master AITeamCollab.
The orchestrator that plans and delegates tasks to other roles.
18 specialized roles for different tasks, each with its own model.
Automated multi-step workflows that chain roles together.
Human checkpoints for quality control at critical stages.
Model readiness indicators showing load status.
JWT-based authentication for secure API access.
Body: { "username": "...", "password": "..." }
Response: { "token": "..." }
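A minimal curl sketch of the login flow. The base URL (http://localhost:8000) and the /api/auth/login path are illustrative assumptions; check your instance's API reference for the exact route.

    # Obtain a JWT (hypothetical path)
    curl -s -X POST http://localhost:8000/api/auth/login \
      -H "Content-Type: application/json" \
      -d '{"username": "alice", "password": "s3cret"}'

    # Reuse the returned token as a Bearer header on later requests, e.g.:
    TOKEN="<paste the token from the response>"
    curl -s http://localhost:8000/api/<protected-endpoint> \
      -H "Authorization: Bearer $TOKEN"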
List all available models
Get model warm state
Trigger model warmup
Unload model from memory
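Assuming hypothetical /api/models routes (the real paths may differ) and the $TOKEN variable from the login example, the model-management calls could look like:

    # List available models
    curl -s http://localhost:8000/api/models \
      -H "Authorization: Bearer $TOKEN"

    # Trigger warmup for one model, then poll its warm state
    curl -s -X POST http://localhost:8000/api/models/deepseek-coder:6.7b/warmup \
      -H "Authorization: Bearer $TOKEN"
    curl -s http://localhost:8000/api/models/deepseek-coder:6.7b/state \
      -H "Authorization: Bearer $TOKEN"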
Send task to Commander
Send task to specific role
Get recent activity
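The task endpoints might be called like this; the routes and payload fields below are illustrative guesses, not the documented schema:

    # Send a task to the Commander for planning and delegation
    curl -s -X POST http://localhost:8000/api/tasks/commander \
      -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
      -d '{"task": "Add input validation to the signup form"}'

    # Send a task directly to one role, bypassing the Commander
    curl -s -X POST http://localhost:8000/api/tasks/reviewer \
      -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
      -d '{"task": "Review src/auth.py for error handling gaps"}'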
List pins/codex/checkpoints
Save snippet/file ref + tags
Search memory (BM25)
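Memory operations could look like the sketch below; the /api/memory routes and field names are assumptions:

    # Save a snippet with tags
    curl -s -X POST http://localhost:8000/api/memory/pins \
      -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
      -d '{"content": "def retry(fn, attempts=3): ...", "tags": ["python", "retry"]}'

    # BM25 keyword search over saved memory
    curl -s "http://localhost:8000/api/memory/search?q=retry+helper" \
      -H "Authorization: Bearer $TOKEN"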
Real-time streaming and updates via WebSocket connection.
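For a quick look at the event stream you can attach any generic WebSocket client, such as websocat; the /ws path and token query parameter are assumptions about how this instance exposes the socket:

    # Tail live updates (path and auth style are illustrative)
    websocat "ws://localhost:8000/ws?token=$TOKEN"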
Warm up frequently used models before starting work. Set appropriate keepalive times and use the warmup allowlist to control resource usage.
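For local models this presumably maps onto Ollama's keep_alive mechanism; independent of AITeamCollab, you can pre-load a model and pin it in memory by sending an empty generate request straight to Ollama:

    # Load deepseek-coder and keep it resident for 30 minutes
    curl -s http://localhost:11434/api/generate \
      -d '{"model": "deepseek-coder:6.7b", "keep_alive": "30m"}'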
Commander plans → Architect designs → Coder implements → Reviewer validates → Tester creates tests → Docs Writer generates OpenAPI spec
Set up the Trio Review preset, configure approval gates, and run a review pipeline
Mix cloud and local models to reduce API costs while maintaining quality
Create your own team preset with specific models for each role
Run entirely on local Ollama models, with no internet connection required
Install: curl -fsSL https://ollama.ai/install.sh | sh
Pull models: ollama pull deepseek-coder:6.7b
Verify: ollama list
AITeamCollab auto-detects a local Ollama installation.
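Auto-detection assumes Ollama is listening on its default port, 11434. You can confirm the API is reachable, and see which models are pulled, with Ollama's built-in listing endpoint:

    curl -s http://localhost:11434/api/tags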
Get API key from console.anthropic.com
Enter key in Settings → Cloud Models → Anthropic
Select Claude models in role dropdowns
Get API key from platform.openai.com
Enter key in Settings → Cloud Models → OpenAI
Select GPT models in role dropdowns
Get API key from makersuite.google.com
Enter key in Settings → Cloud Models → Google
Select Gemini models in role dropdowns
Commander Web Tools uses privacy-respecting search backends.
Deploy a SearXNG or LibreX instance (see the Docker example after these steps)
Configure URL in Settings → Technical → Web Tools
Enable web search in Commander panel
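As a quick way to stand up a local SearXNG backend for testing, the official Docker image can be run like this (container name and port are just an example; production setups usually mount a settings volume):

    # Start a local SearXNG instance on port 8080
    docker run -d --name searxng -p 8080:8080 searxng/searxng
    # Then enter http://localhost:8080 in Settings → Technical → Web Tools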