
Documentation

Everything you need to get started and master AITeamCollab.

Quick Start (10 minutes)

  1. Download AITeamCollab: get the HTML app or the VS Code extension
  2. Open the Command Center: launch the application
  3. Select a Team Preset: start with "Solo Commander"
  4. Configure your first model: point it at local Ollama or enter a cloud API key
  5. Send your first task: message the Commander

First Project Tutorial

  • Creating a new workspace
  • Setting your Skill Level and Work Role
  • Choosing models for each role
  • Running your first pipeline
  • Understanding the output panels

Core Concepts

Commander

The orchestrator that plans and delegates tasks to other roles.

Crew Roles

18 specialized roles for different tasks, each with its own model.

Pipelines

Automated multi-step workflows that chain roles together.

Approval Gates

Human checkpoints for quality control at critical stages.

Warm State

Model readiness indicators showing load status.

Authentication

JWT-based authentication for secure API access.

POST /auth/login

Body: { "username": "...", "password": "..." }
Response: { "token": "..." }
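A login call can be sketched as follows. The endpoint path and body fields come from the reference above; the base URL is an assumption for your own deployment, and the helper only builds the request (actual sending is shown as a comment):

```python
import json

# Hypothetical base URL; substitute your own deployment.
BASE_URL = "http://localhost:8080"

def build_login_request(username: str, password: str) -> tuple[str, str]:
    """Return the URL and JSON body for POST /auth/login."""
    body = json.dumps({"username": username, "password": password})
    return f"{BASE_URL}/auth/login", body

# With a running server, send it with any HTTP client, e.g.:
#   url, body = build_login_request("alice", "s3cret")
#   resp = requests.post(url, data=body,
#                        headers={"Content-Type": "application/json"})
#   token = resp.json()["token"]
```

The returned token is then passed as a bearer credential on subsequent API calls.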

Model Endpoints

GET /api/models

List all available models

GET /api/models/:id/status

Get model warm state

POST /api/models/:id/warmup

Trigger model warmup

POST /api/models/:id/cooldown

Unload model from memory

Command Endpoints

POST /api/commander/send

Send task to commander

POST /api/team/send

Send task to specific role

GET /api/recent

Get recent activity

Memory Endpoints

GET /memory/items

List pins, codex entries, and checkpoints

POST /memory/pin

Save a snippet or file reference with tags

GET /memory/search?q=...

Search memory (BM25)
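Because the search query travels in the q query parameter, it should be percent-encoded. A small sketch (base URL assumed):

```python
from urllib.parse import urlencode

BASE_URL = "http://localhost:8080"  # assumed deployment URL

def memory_search_url(query: str) -> str:
    """Build the BM25 search URL, percent-encoding the query string."""
    return f"{BASE_URL}/memory/search?{urlencode({'q': query})}"
```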

WebSocket

Real-time streaming and updates via WebSocket connection.

Connection: wss://your-server/ws/
Authentication: JWT passed in the subprotocol header
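One way to open such a connection, sketched with the third-party websockets package (an assumption; any client that can set subprotocols works). The helper below just assembles the connect arguments:

```python
# Hedged sketch: with the `websockets` package installed you could run
#
#   import asyncio, websockets
#
#   async def listen(token: str):
#       async with websockets.connect(
#           "wss://your-server/ws/", subprotocols=[token]
#       ) as ws:
#           async for message in ws:
#               print(message)
#
#   asyncio.run(listen(my_jwt))

def ws_connect_args(server: str, token: str) -> tuple[str, list[str]]:
    """Return the URL and subprotocol list carrying the JWT."""
    return f"wss://{server}/ws/", [token]
```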

File Structure

```yaml
project:
  name: "My Project"
  version: "1.0.0"
  phases:
    - name: "Planning"
      tasks:
        - name: "Requirements Analysis"
          role: commander
          prompt: "Analyze the following requirements..."
    - name: "Implementation"
      tasks:
        - name: "Core Features"
          role: coder
          depends_on: "Requirements Analysis"
```
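The depends_on field implies an execution order across phases. A minimal sketch of resolving that order, with the project written out as plain Python data so no YAML parser is needed (how AITeamCollab schedules internally is not documented here; this is one straightforward reading):

```python
# The project file above, mirrored as plain Python data.
project = {
    "name": "My Project",
    "phases": [
        {"name": "Planning", "tasks": [
            {"name": "Requirements Analysis", "role": "commander"},
        ]},
        {"name": "Implementation", "tasks": [
            {"name": "Core Features", "role": "coder",
             "depends_on": "Requirements Analysis"},
        ]},
    ],
}

def execution_order(project: dict) -> list[str]:
    """Order tasks so every task runs after the task it depends_on."""
    pending = [t for phase in project["phases"] for t in phase["tasks"]]
    done: list[str] = []
    while pending:
        for task in pending:
            dep = task.get("depends_on")
            if dep is None or dep in done:
                done.append(task["name"])
                pending.remove(task)
                break
        else:
            raise ValueError("circular or unknown dependency")
    return done
```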

Task Parameters

  • name — Task identifier
  • role — Which crew role handles this
  • prompt — Instructions for the AI
  • depends_on — Task dependencies
  • approval_required — Enable approval gate
  • timeout — Max execution time
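The parameter list above can be turned into a lightweight sanity check for task definitions. Which parameters are actually required is not stated on this page, so the name/role requirement below is an assumption:

```python
# Parameters documented above; treating name and role as required is
# an assumption, not documented behavior.
KNOWN_PARAMS = {"name", "role", "prompt", "depends_on",
                "approval_required", "timeout"}

def validate_task(task: dict) -> list[str]:
    """Return a list of problems found in a task definition."""
    problems = [f"unknown parameter: {k}" for k in task if k not in KNOWN_PARAMS]
    for required in ("name", "role"):
        if required not in task:
            problems.append(f"missing required parameter: {required}")
    return problems
```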

Actions Reference

  • generate — Create new code/content
  • review — Validate existing work
  • test — Run test cases
  • document — Generate documentation

Model Selection

  • Use cloud models (Claude, GPT) for strategic/complex reasoning
  • Use local Ollama models for high-volume, repetitive tasks
  • Match model size to task complexity
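One way to encode the guidance above is a simple routing table. The task categories and model names here are examples, not a fixed AITeamCollab API:

```python
# Illustrative routing: cloud models for strategic reasoning,
# a local Ollama model for high-volume, repetitive work.
ROUTING = {
    "planning":     "claude",               # strategic -> cloud
    "architecture": "claude",
    "boilerplate":  "deepseek-coder:6.7b",  # repetitive -> local
    "formatting":   "deepseek-coder:6.7b",
}

def pick_model(task_kind: str, default: str = "deepseek-coder:6.7b") -> str:
    """Match the model to task complexity; fall back to a local model."""
    return ROUTING.get(task_kind, default)
```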

Workflow Optimization

  • Start with Team Presets, customize later
  • Use approval gates for critical paths (security, production deploys)
  • Enable parallel mode for independent tasks

Memory Management

  • Pin important context to keep it accessible
  • Use Project Codex for decisions and patterns
  • Export conversations before major changes

Performance Tips

💡 Pro Tip

Warm up frequently-used models before starting work. Set appropriate keepalive times and use the warmup allowlist to control resource usage.
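Pre-warming can be scripted against the warmup endpoint documented earlier. The allowlist contents and base URL below are assumptions; the helper only builds the URLs to POST:

```python
BASE_URL = "http://localhost:8080"  # assumed deployment URL

# Example allowlist; substitute the models you actually use.
WARMUP_ALLOWLIST = ["deepseek-coder:6.7b"]

def warmup_urls(models: list[str]) -> list[str]:
    """URLs to POST (empty body) before starting a work session."""
    return [f"{BASE_URL}/api/models/{m}/warmup" for m in models]

# e.g.  for url in warmup_urls(WARMUP_ALLOWLIST): requests.post(url)
```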

Step-by-Step Tutorials

Tutorial 1: Building a REST API

Commander plans → Architect designs → Coder implements → Reviewer validates → Tester creates tests → Docs Writer generates OpenAPI spec

Tutorial 2: Code Review Workflow

Set up Trio Review preset, configure approval gates, run a review pipeline

Tutorial 3: Multi-Model Cost Optimization

Mix cloud and local models to reduce API costs while maintaining quality

Tutorial 4: Custom Team Configuration

Create your own team preset with specific models for each role

Tutorial 5: Offline Development

Run entirely on Ollama local models with no internet required

Model Integrations

Ollama Setup

Install: curl -fsSL https://ollama.ai/install.sh | sh

Pull models: ollama pull deepseek-coder:6.7b

Verify: ollama list

AITeamCollab auto-detects local Ollama.

Claude (Anthropic)

Get API key from console.anthropic.com

Enter key in Settings → Cloud Models → Anthropic

Select Claude models in role dropdowns

GPT (OpenAI)

Get API key from platform.openai.com

Enter key in Settings → Cloud Models → OpenAI

Select GPT models in role dropdowns

Gemini (Google)

Get API key from makersuite.google.com

Enter key in Settings → Cloud Models → Google

Select Gemini models in role dropdowns

Web Search Integration

SearXNG / LibreX

Commander Web Tools uses privacy-respecting search backends.

  1. Deploy a SearXNG or LibreX instance
  2. Configure its URL in Settings → Technical → Web Tools
  3. Enable web search in the Commander panel