CLI Commands Reference

Complete documentation for all Syaala CLI commands. All commands use production API endpoints.

Authentication Required: Most commands require authentication. Run syaala auth login first.

Global Options

Available for all commands:

| Flag | Description |
|------|-------------|
| --help, -h | Show command help |
| --version, -v | Show CLI version |
| --format <type> | Output format: table, json, yaml |
| --debug | Enable debug logging |
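
For example, the output format and debug flags can be combined with any command (syaala deployments list is documented below):

# JSON output with debug logging enabled
syaala deployments list --format json --debug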

syaala auth

Manage authentication and user sessions.

syaala auth login

Authenticate with the Syaala platform.

syaala auth login [options]

Options:

| Flag | Description | Type | Required |
|------|-------------|------|----------|
| --api-key <key> | Authenticate using API key | string | * |
| --email <email> | Email for credential-based login | string | |
| --password <password> | Password for credential-based login | string | |
| --org <slug> | Organization slug | string | |
| --profile <name> | Authentication profile name | string | |
| --interactive | Use interactive mode | boolean | |

Examples:

# Login with API key
syaala auth login --api-key sk_live_...
 
# Interactive login
syaala auth login --interactive
 
# Login with credentials
syaala auth login --email user@example.com --password ***

syaala auth logout

Sign out and remove the stored authentication profile.

syaala auth logout [profile]

Options:

| Flag | Description |
|------|-------------|
| --all | Logout from all profiles |

Examples:

# Logout from current profile
syaala auth logout
 
# Logout from specific profile
syaala auth logout my-profile
 
# Logout from all profiles
syaala auth logout --all

syaala auth status

Show current authentication status.

syaala auth status

Example output:

✓ Authenticated as user@example.com
  Organization: Acme Inc (org_abc123)
  Profile: default
  API URL: https://api.syaala.com

syaala deployments

Manage model deployments on GPU infrastructure.

syaala deployments list

List all deployments in your organization.

syaala deployments list [options]

Options:

| Flag | Description | Default |
|------|-------------|---------|
| --state <state> | Filter by state: HEALTHY, UNHEALTHY, PROVISIONING, SCALING, STOPPED, FAILED | All |
| --limit <number> | Maximum results to return | 20 |

Examples:

# List all deployments
syaala deployments list
 
# List only healthy deployments
syaala deployments list --state HEALTHY
 
# Get JSON output
syaala deployments list --format json

syaala deployments get

Get detailed information about a specific deployment.

syaala deployments get <deployment-id>

Examples:

# Get deployment details
syaala deployments get dep_abc123
 
# Get JSON output
syaala deployments get dep_abc123 --format json

syaala deployments create

Create a new model deployment.

syaala deployments create [options]

Options:

| Flag | Description | Required | Default |
|------|-------------|----------|---------|
| --name <name> | Deployment name | | |
| --model <model-id> | Model ID to deploy | | |
| --runtime <runtime> | Runtime: VLLM, TRITON, FASTAPI, CUSTOM | | VLLM |
| --gpu-type <type> | GPU type: A100, A6000, RTX4090, RTX3090, V100, T4 | | A100 |
| --min-replicas <number> | Minimum replicas | | 0 |
| --max-replicas <number> | Maximum replicas | | 10 |
| --env <key=value> | Environment variables (multiple allowed) | | |
| --secret <secret-id> | Secret IDs to attach (multiple allowed) | | |
| --interactive | Use interactive mode | | |

Examples:

# Create basic deployment
syaala deployments create \
  --name my-llm \
  --model meta-llama/Llama-2-7b-hf
 
# Create with custom configuration
syaala deployments create \
  --name prod-api \
  --model gpt2 \
  --runtime VLLM \
  --gpu-type A100 \
  --min-replicas 2 \
  --max-replicas 10 \
  --env MAX_TOKENS=2048 \
  --env TEMPERATURE=0.7 \
  --secret sec_abc123
 
# Interactive mode
syaala deployments create --interactive

syaala deployments update

Update an existing deployment configuration.

syaala deployments update <deployment-id> [options]

Options:

| Flag | Description |
|------|-------------|
| --min-replicas <number> | Update minimum replicas |
| --max-replicas <number> | Update maximum replicas |
| --env <key=value> | Add or update environment variables |
| --remove-env <key> | Remove environment variables |

Examples:

# Scale deployment
syaala deployments update dep_abc123 \
  --min-replicas 5 \
  --max-replicas 20
 
# Update environment variables
syaala deployments update dep_abc123 \
  --env MAX_TOKENS=4096 \
  --remove-env OLD_VAR

syaala deployments delete

Delete a deployment.

syaala deployments delete <deployment-id>

Examples:

# Delete deployment
syaala deployments delete dep_abc123
 
# Force delete without confirmation
syaala deployments delete dep_abc123 --force

syaala deployments logs

View logs from a deployment, with optional real-time streaming.

syaala deployments logs <deployment-id> [options]

Options:

| Flag | Description |
|------|-------------|
| --follow, -f | Follow log output (stream) |
| --tail <number> | Number of recent lines to show |
| --since <time> | Show logs since timestamp |

Examples:

# Stream logs
syaala deployments logs dep_abc123 --follow
 
# Show last 100 lines
syaala deployments logs dep_abc123 --tail 100
 
# Logs since 1 hour ago
syaala deployments logs dep_abc123 --since 1h

syaala deployments metrics

View deployment metrics and performance data.

syaala deployments metrics <deployment-id> [options]

Options:

| Flag | Description |
|------|-------------|
| --period <duration> | Time period: 1h, 24h, 7d, 30d |
| --metric <type> | Specific metric: requests, latency, gpu_util, memory |

Examples:

# View all metrics for last 24 hours
syaala deployments metrics dep_abc123 --period 24h
 
# View GPU utilization
syaala deployments metrics dep_abc123 --metric gpu_util

syaala models

Browse and manage AI models.

syaala models search

Search available models from HuggingFace.

syaala models search [query] [options]

Options:

| Flag | Description |
|------|-------------|
| --task <task> | Filter by task: text-generation, image-classification, etc. |
| --library <library> | Filter by library: transformers, diffusers, etc. |
| --limit <number> | Maximum results |

Examples:

# Search for LLaMA models
syaala models search "llama"
 
# Search text generation models
syaala models search --task text-generation --limit 50

syaala models get

Get detailed information about a model.

syaala models get <model-id>

Examples:

syaala models get meta-llama/Llama-2-7b-hf
syaala models get gpt2 --format json

syaala models validate

Validate a model for deployment compatibility.

syaala models validate <model-id>

Examples:

# Check if model can be deployed
syaala models validate meta-llama/Llama-2-7b-hf

syaala models discover

Discover models from HuggingFace Hub with auto-configured deployment settings.

syaala models discover <query> [options]

Options:

| Flag | Description |
|------|-------------|
| --task <task> | Filter by task type (text-generation, image-classification, etc.) |
| --sort <field> | Sort by: downloads, likes, created |
| --limit <number> | Maximum results (default: 10) |

Examples:

# Discover LLaMA models
syaala models discover "llama"
 
# Discover image classification models
syaala models discover "resnet" --task image-classification
 
# Sort by popularity
syaala models discover "stable diffusion" --sort downloads --limit 20

Output includes:

  • Model ID and description
  • Downloads and likes count
  • Suggested GPU type with cost estimate
  • Auto-configured runtime and Docker image
  • Ready-to-use deployment command

syaala models recommend

Get personalized model recommendations based on your use case and budget.

syaala models recommend [options]

Options:

| Flag | Description |
|------|-------------|
| --use-case <case> | Use case: text-generation, image-generation, embeddings, etc. |
| --budget <level> | Budget level: low ($100/mo), medium ($500/mo), high ($5000+/mo) |
| --interactive | Interactive mode with prompts |

Examples:

# Get recommendations for text generation
syaala models recommend --use-case text-generation --budget medium
 
# Interactive mode
syaala models recommend --interactive
 
# Specific use case
syaala models recommend --use-case embeddings --budget low

Output includes:

  • Personalized template recommendations
  • Auto-configured GPU and runtime
  • Cost estimates within budget
  • One-click deployment commands

syaala templates

Manage deployment templates.

syaala templates list

List all deployment templates.

syaala templates list [options]

Options:

| Flag | Description |
|------|-------------|
| --runtime <runtime> | Filter by runtime |
| --public | Show only public templates |

Examples:

# List all templates
syaala templates list
 
# List VLLM templates
syaala templates list --runtime VLLM

syaala templates get

Get template details.

syaala templates get <template-id>
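
Example (the template ID shown is a placeholder):

# Show template details
syaala templates get tpl_abc123 --format json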

syaala templates create

Create a new deployment template.

syaala templates create [options]

Options:

| Flag | Description | Required |
|------|-------------|----------|
| --name <name> | Template name | |
| --description <text> | Template description | |
| --runtime <runtime> | Runtime configuration | |
| --config <json> | Template configuration (JSON) | |
| --public | Make template public | |

Examples:

# Create from config file
syaala templates create \
  --name llm-template \
  --runtime VLLM \
  --config @template.json
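
The schema accepted by --config is not described in this reference; as a rough sketch only (the field names are assumptions, not a documented format), a file passed as @template.json might look like:

# Hypothetical template.json -- field names are illustrative, not a documented schema
cat > template.json <<'EOF'
{
  "model": "meta-llama/Llama-2-7b-hf",
  "gpuType": "A100",
  "minReplicas": 1,
  "maxReplicas": 5
}
EOF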

syaala templates deploy

Deploy from a template.

syaala templates deploy <template-id> [options]

Options:

| Flag | Description |
|------|-------------|
| --name <name> | Deployment name |
| --model <model-id> | Override model ID |

Examples:

# Deploy from template
syaala templates deploy tpl_abc123 --name my-deployment

syaala templates create-from-hf

Create a deployment template from a HuggingFace model with auto-configuration.

syaala templates create-from-hf <model-id> [options]

Options:

| Flag | Description | Required |
|------|-------------|----------|
| --name <name> | Display name for the template | |
| --category <category> | Template category: llm, vision, multimodal, audio, embedding | |
| --description <text> | Custom description (defaults to HuggingFace description) | |
| --tags <tags> | Comma-separated tags (max 10) | |
| --visibility <vis> | public or private (default: public) | |

Examples:

# Create template from Llama model
syaala templates create-from-hf \
  meta-llama/Llama-3.3-70B-Instruct \
  --name "Llama 3.3 70B Instruct" \
  --category llm \
  --tags "chat,instruction-following,large-context"
 
# Create private template
syaala templates create-from-hf \
  mistralai/Mistral-7B-Instruct-v0.3 \
  --name "Mistral 7B Custom" \
  --category llm \
  --visibility private
 
# With custom description
syaala templates create-from-hf \
  stabilityai/stable-diffusion-xl-base-1.0 \
  --name "SDXL Base" \
  --category vision \
  --description "High-resolution image generation" \
  --tags "diffusion,image-generation"

Auto-Configuration:

  • Runtime selection (vLLM, Triton, FastAPI)
  • Optimal GPU type based on model size
  • Docker image with dependencies
  • Cost estimation

Output includes:

  • Created template ID
  • Auto-configured settings (runtime, GPU, Docker)
  • Estimated monthly cost
  • Ready-to-use deployment command

syaala batch

Run and manage batch inference jobs.

syaala batch create

Create a batch inference job.

syaala batch create [options]

Options:

| Flag | Description | Required |
|------|-------------|----------|
| --deployment <id> | Deployment ID | |
| --input <file> | Input file (JSONL) | |
| --output <file> | Output file path | |

Examples:

# Run batch job
syaala batch create \
  --deployment dep_abc123 \
  --input requests.jsonl \
  --output results.jsonl
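
The JSONL request schema is not specified here and likely depends on the deployment's runtime; as an illustration only (the field names are assumptions), requests.jsonl holds one JSON object per line:

# Hypothetical requests.jsonl -- field names are illustrative, not a documented schema
cat > requests.jsonl <<'EOF'
{"prompt": "Summarize the plot of Hamlet in one sentence.", "max_tokens": 64}
{"prompt": "Translate 'good morning' into French.", "max_tokens": 32}
EOF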

syaala batch status

Check batch job status.

syaala batch status <job-id>
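
Example (the job ID shown is a placeholder):

# Check job progress
syaala batch status job_abc123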

syaala batch cancel

Cancel a running batch job.

syaala batch cancel <job-id>
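
Example (the job ID shown is a placeholder):

# Cancel a job that is still running
syaala batch cancel job_abc123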

syaala notifications

Manage alert notifications.

syaala notifications list

List notification channels.

syaala notifications list
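
Example:

# Show configured channels as JSON
syaala notifications list --format json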

syaala notifications create

Create a notification channel.

syaala notifications create [options]

Options:

| Flag | Description | Required |
|------|-------------|----------|
| --type <type> | Channel type: email, slack, webhook, pagerduty | |
| --config <json> | Channel configuration | |

Examples:

# Create Slack notification
syaala notifications create \
  --type slack \
  --config '{"webhookUrl": "https://hooks.slack.com/..."}'
 
# Create email notification
syaala notifications create \
  --type email \
  --config '{"email": "alerts@example.com"}'

syaala notifications test

Test a notification channel.

syaala notifications test <channel-id>
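
Example (the channel ID shown is a placeholder):

# Send a test alert through the channel
syaala notifications test chan_abc123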

Environment Variables

Configure CLI behavior via environment variables:

export SYAALA_API_KEY=sk_live_...          # API authentication key
export SYAALA_API_URL=https://api.syaala.com  # API base URL
export SYAALA_ORG_ID=org_...               # Default organization
export SYAALA_DEBUG=true                   # Enable debug logging
export SYAALA_TIMEOUT=30000                # Request timeout (ms)
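
These variables are handy in CI or one-off shells where no profile has been saved, for example:

# One-off invocation authenticated via environment variables
SYAALA_API_KEY=sk_live_... SYAALA_ORG_ID=org_... syaala deployments list --format json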

Configuration File

The CLI stores settings in ~/.config/syaala/config.json:

{
  "profiles": {
    "default": {
      "apiKey": "sk_live_...",
      "orgId": "org_...",
      "apiUrl": "https://api.syaala.com"
    }
  },
  "activeProfile": "default",
  "outputFormat": "table"
}
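
Additional profiles can be created with the --profile flag on syaala auth login; presumably they are stored alongside default in this file (the profile name and org slug below are placeholders):

# Add a second profile, e.g. for a staging organization
syaala auth login --api-key sk_live_... --org staging-org --profile staging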

Next Steps