# Aria Deployment Guide

## What This Covers

This guide describes how the current codebase boots and runs. It replaces the older, more speculative deployment notes that referenced an install script and a default hosting flow, neither of which is implemented in this repository.

Aria can run in three practical modes today:

1. Local only: backend, Mission Control, and optional Ollama all on one machine.
2. VPS only: backend on a server, with remote channels and APIs always on.
3. Hybrid: backend on a VPS, tunnel client on your local machine for local-shell, local-file, browser, and desktop-adjacent tasks.

## Runtime Pieces

- Backend: Bun app in `src/index.ts`
- Gateway API: default `http://localhost:3000`
- Mission Control: Next.js app in `mission-control/`
- Database: SQLite at `data/aria.db`
- Agent configs: `agents/*.yaml`
- Skills: `data/skills/{agent}/`
- Optional local model host: Ollama
- Optional local-machine bridge: tunnel client
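
Mapped onto the repository, those pieces sit roughly here (layout sketch based on the paths above):

```text
.
├── src/index.ts        # backend entrypoint (Bun)
├── mission-control/    # Next.js frontend
├── agents/             # agent definitions (*.yaml)
└── data/
    ├── aria.db         # SQLite database
    └── skills/         # per-agent skill files (skills/{agent}/)
```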

## Prerequisites

- Bun installed
- Node-compatible build environment for Mission Control
- Optional: Ollama if you want local models
- Optional: API keys for cloud providers and integrations

## First Run

On a fresh repo, the backend starts a setup wizard if the required initial state is missing.

```bash
bun install
bun run start
```

The setup flow:

- creates or verifies agent configs
- writes starter env values
- stores initial user context in `data/user-context.md`
- bootstraps the normal runtime after setup completes
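
Once the wizard finishes and the runtime comes up, a quick probe of the gateway confirms it is listening. This uses the `/health` endpoint mentioned elsewhere in this guide; the response body is not assumed here:

```shell
# Probe the gateway health endpoint; prints a status either way.
if curl -sf http://localhost:3000/health >/dev/null; then
  echo "gateway is up"
else
  echo "gateway is not reachable on :3000"
fi
```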

## Local Development

Run backend and frontend together:

```bash
bun run dev
```

The `dev` script currently:

- runs the backend with hot reload on port `3000`
- runs the Mission Control dev server on port `4000`
- sets `MC_DEV_PORT=4000` so the backend redirects `/` to the frontend dev server

If you want to run them separately:

```bash
bun run dev:backend
bun run dev:frontend
```

## Production-ish Local Run

If you only want the backend and CLI/API:

```bash
bun run start
```

The backend will:

- initialize SQLite
- load agents from `agents/`
- register tools and integrations
- start cron, triggers, dream runner scheduling, and the gateway
- start the CLI REPL

## Mission Control Notes

Mission Control talks to the backend through:

- `NEXT_PUBLIC_API_URL` defaulting to `http://localhost:3000`
- `NEXT_PUBLIC_API_SECRET` when auth-protected API access is needed

In local development, the repo already assumes:

- backend on `3000`
- frontend on `4000`
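
A minimal `mission-control/.env.local` for local development might look like this (a sketch: `.env.local` is the standard Next.js convention for local env files; the secret line is only needed when the backend enforces API auth):

```text
NEXT_PUBLIC_API_URL=http://localhost:3000
NEXT_PUBLIC_API_SECRET=replace-with-your-api-secret
```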

## Production Frontend Serving

The gateway can serve a static Mission Control build when `MC_DEV_PORT` is not set. The current gateway looks for files in:

```text
mission-control/out
```

That means your production frontend flow must write a static export there, or the gateway will return:

```text
Mission Control not built. Run: bun run build:frontend
```

The current repo script is:

```bash
bun run build:frontend
```

Before relying on gateway-served frontend assets in production, verify that your Next.js build/export pipeline actually writes the expected files into `mission-control/out`.
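
A quick way to run that verification (a sketch; using `index.html` as the sentinel file is an assumption about the export layout):

```shell
# Check that the static export landed where the gateway looks for it.
if [ -f mission-control/out/index.html ]; then
  echo "static build present"
else
  echo "static build missing: run 'bun run build:frontend' and re-check"
fi
```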

## Environment-Driven Integrations

Many tools are only registered when their credentials exist.

### Model providers

- `OPENAI_API_KEY`
- `ANTHROPIC_API_KEY`
- `GOOGLE_API_KEY` or `GEMINI_API_KEY`
- `OPENROUTER_API_KEY`
- `DEEPSEEK_API_KEY`
- `MINIMAX_API_KEY`
- `KIMI_API_KEY` or `MOONSHOT_API_KEY`
- OpenAI Codex provider can also be enabled through imported ChatGPT/Codex auth state

### Channels

- `TELEGRAM_BOT_TOKEN` for the fallback shared Telegram bot
- `DISCORD_BOT_TOKEN` for the fallback shared Discord bot
- Per-agent bot tokens can also live directly in agent YAML under `channels`
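
As a sketch of the per-agent variant (the top-level `channels` key comes from this repo; the nested field names below are hypothetical and should be checked against an existing agent YAML):

```yaml
# agents/example.yaml (fragment; nested field names are illustrative)
channels:
  telegram:
    bot_token: "123456:replace-me"
  discord:
    bot_token: "replace-me"
```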

### Productivity and external tools

- Google Workspace tools need `GOOGLE_CLIENT_ID` and `GOOGLE_CLIENT_SECRET`, then OAuth completion
- `SLACK_BOT_TOKEN` enables Slack tools
- IMAP/SMTP env vars enable generic email tools
- `APIFY_TOKEN` enables Apify marketplace tools

### Tunnel

- `TUNNEL_SECRET` secures the VPS-to-local tunnel
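
Pulling the pieces above together, a minimal `.env` for a VPS running one cloud model provider, the shared Telegram bot, and the tunnel might look like this (a sketch; set only the keys you actually use — Bun loads `.env` automatically):

```text
ANTHROPIC_API_KEY=sk-ant-replace-me
TELEGRAM_BOT_TOKEN=123456:replace-me
TUNNEL_SECRET=replace-with-a-long-random-string
```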

## Tunnel / Hybrid Setup

Run the server normally on the VPS, then start the tunnel client on your local machine:

```bash
bun run tunnel --server ws://your-host:3000/tunnel --secret your-secret
```

When connected, agents can use:

- `local_shell`
- `local_read_file`
- `local_write_file`
- `local_list_dir`
- `local_proxy`
- `coding_task`

This is the practical way to keep always-on bots and automations on a VPS while still giving agents access to your laptop.

## Storage and Persistence

Important runtime state lives in:

- `data/aria.db` for conversations, memories, tasks, logs, contacts, cron jobs, and learning state
- `data/skills/` for YAML skill files
- `agents/` for agent definitions
- `data/files/` for editor documents
- `data/audio/` for generated voice assets

Back up `data/` and `agents/` together.
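
A minimal backup sketch following that advice (stop the backend first so the SQLite file is not copied mid-write; the archive name is arbitrary):

```shell
# Bundle runtime state and agent definitions into one dated archive.
backup="aria-backup-$(date +%Y%m%d).tar.gz"
if [ -d data ] && [ -d agents ]; then
  tar -czf "$backup" data/ agents/
  echo "wrote $backup"
else
  echo "data/ and agents/ not found: run this from the repo root"
fi
```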

## Operational Advice

### Small local machine

Good for:

- direct personal use
- fast iteration on prompts, tools, and UI
- local desktop integrations

Less ideal for:

- always-on Telegram/Discord bots
- unattended cron/trigger workloads when the machine sleeps

### VPS

Good for:

- always-on channels
- API access
- automations, cron jobs, and trigger delivery

Less ideal for:

- local desktop control without a tunnel
- heavy local-model workloads on weak CPUs

### Hybrid

Best default if you want both:

- stable, always-on network presence
- access to your local machine when needed

## Current Gaps

The deployment surface is usable, but a few areas are still rough:

- static frontend build/serve expectations need tightening
- packaging/install story is not fully productized
- daemon/service install paths are not yet fully documented in this repo
- test and ops documentation is lighter than the code warrants

## Recommended Start

If you are setting this up from scratch, use this sequence:

1. `bun install`
2. `bun run dev`
3. complete the setup wizard
4. verify `http://localhost:3000/health`
5. add one provider or channel integration at a time
6. add per-agent bots or a shared Aria bot after the core runtime is stable
