Environment Setup

Set up your registries, environments, and providers

This guide walks you through setting up the core building blocks of Elacity: a prompt registry, an environment, and optionally a fleet and LLM provider. By the end you’ll have a working foundation for versioning and deploying prompts.

Set up a Prompt Registry

A registry is where your versioned prompt artifacts live — think of it as a package repository scoped to your organization.

Create a new registry

Click Create Registry and fill in:

  • Name — a human-readable label (e.g. “Production Prompts”)
  • Visibility — choose Public or Private


Public vs Private Registries

|            | Public                                                    | Private                                                       |
| ---------- | --------------------------------------------------------- | ------------------------------------------------------------- |
| Access     | Anyone can read without authentication                    | Restricted to org members and API keys                        |
| Use case   | Open-source prompts, community templates, shared examples | Proprietary prompts, internal agent logic, production systems |
| API access | No X-API-Key header required                              | Requires a valid X-API-Key header                             |

Accessing a public registry — no authentication needed:

curl "https://elacity.ai/api/registries/artifact-versions/content?\
registryRef=your-org/public-prompts&\
promptName=greeting&\
version=1.0.0&\
model=generic"

Accessing a private registry — requires your API key:

curl -H "X-API-Key: YOUR_API_KEY" \
  "https://elacity.ai/api/registries/artifact-versions/content?\
registryRef=your-org/internal-prompts&\
promptName=greeting&\
version=1.0.0&\
model=generic"

Public registries are great for sharing prompt templates with the community or across teams. Use private registries for anything you don’t want exposed outside your organization.
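The two curl calls above translate directly to code. Here is a minimal Python sketch: the endpoint and query parameters come from the examples above, but the helper name is ours.

```python
from urllib.parse import urlencode
from urllib.request import Request

BASE_URL = "https://elacity.ai/api/registries/artifact-versions/content"

def build_content_request(registry_ref, prompt_name, version,
                          model="generic", api_key=None):
    """Build a GET request for a prompt artifact's content.

    Pass api_key for private registries; omit it for public ones.
    """
    query = urlencode({
        "registryRef": registry_ref,
        "promptName": prompt_name,
        "version": version,
        "model": model,
    })
    headers = {"X-API-Key": api_key} if api_key else {}
    return Request(f"{BASE_URL}?{query}", headers=headers)

# Public registry: no header needed.
public_req = build_content_request("your-org/public-prompts", "greeting", "1.0.0")

# Private registry: X-API-Key required.
private_req = build_content_request("your-org/internal-prompts", "greeting",
                                    "1.0.0", api_key="YOUR_API_KEY")
```

Send either request with `urllib.request.urlopen(req)`; only the private one carries the X-API-Key header.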

Add an Environment

Environments let you isolate provider credentials, variables, and configuration per deployment stage. A typical setup includes dev, staging, and prod environments.

Create an environment

Click Create Environment and give it a name (e.g. dev).


Configure a provider

Within your new environment, set up a deployment provider (e.g. VAPI, Ultravox, or Telnyx). Add the provider’s API credentials so Elacity can deploy agents on your behalf.


Add variables

Define key-value pairs that get injected into your prompts at deployment time. For example:

| Variable         | Value                    |
| ---------------- | ------------------------ |
| COMPANY_NAME     | Acme Corp                |
| SUPPORT_URL      | https://support.acme.com |
| ESCALATION_PHONE | +1-555-0100              |

Variables are resolved during prompt compilation, so your prompt templates stay generic while each environment fills in the right values.
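This guide doesn't specify Elacity's template syntax, so purely as an illustration, here is what compile-time resolution looks like assuming {{VARIABLE}} placeholders:

```python
import re

def compile_prompt(template: str, variables: dict) -> str:
    """Replace {{NAME}} placeholders with environment variable values.

    Raises KeyError on an undefined variable, so a missing value
    fails at compile time rather than surfacing in a live prompt.
    """
    def resolve(match):
        return variables[match.group(1)]
    return re.sub(r"\{\{(\w+)\}\}", resolve, template)

template = "Welcome to {{COMPANY_NAME}}! For help, visit {{SUPPORT_URL}}."
variables = {
    "COMPANY_NAME": "Acme Corp",
    "SUPPORT_URL": "https://support.acme.com",
}
print(compile_prompt(template, variables))
# → Welcome to Acme Corp! For help, visit https://support.acme.com.
```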

Add environment variables

Why use environments?

  • Safe testing — validate changes in dev before they reach prod
  • Separate credentials — each environment can use its own provider API keys
  • Variable substitution — the same prompt template produces different output per environment (e.g. different support URLs, company names, or escalation contacts)
  • Approval gates — optionally require team approval before deploying to sensitive environments
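To make the first three points concrete, here is a sketch of environments as isolated bundles of credentials and variables; the shapes and field names are ours, not Elacity's data model. The same template renders differently per stage:

```python
# Each environment carries its own provider credentials and variables.
ENVIRONMENTS = {
    "dev": {
        "provider_api_key": "dev-key",   # dev-only credentials
        "variables": {"SUPPORT_URL": "https://support.dev.acme.com"},
    },
    "prod": {
        "provider_api_key": "prod-key",  # never shared with dev
        "variables": {"SUPPORT_URL": "https://support.acme.com"},
    },
}

def render(template: str, env_name: str) -> str:
    """Render one template against one environment's variables."""
    out = template
    for name, value in ENVIRONMENTS[env_name]["variables"].items():
        out = out.replace("{{" + name + "}}", value)
    return out

template = "Need help? Visit {{SUPPORT_URL}}."
print(render(template, "dev"))   # dev support URL
print(render(template, "prod"))  # prod support URL
```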

Create a Fleet (Optional)

Fleets are logical groupings of agents that share default configuration. If you only have a few agents, you can skip this step and come back later.

Configure the fleet

Give your fleet a name and description. Optionally set default configuration (provider, model, temperature) that all agents in the fleet inherit.


Why use fleets?

  • Inherited defaults — set a default model and provider once; every agent in the fleet picks it up
  • Bulk operations — deploy or update all agents in a fleet at once
  • Organization — group agents by function (e.g. “Customer Support”, “Sales Outbound”, “Internal Tools”)
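Inherited defaults behave like a shallow merge: agent-level settings win over fleet-level ones. A sketch, with illustrative field names:

```python
# Fleet-wide defaults every agent starts from.
FLEET_DEFAULTS = {
    "provider": "vapi",
    "model": "gpt-4o",
    "temperature": 0.3,
}

def effective_config(agent_overrides: dict) -> dict:
    """Fleet defaults, overridden by any agent-specific settings."""
    return {**FLEET_DEFAULTS, **agent_overrides}

# An agent overriding only temperature still inherits provider and model.
print(effective_config({"temperature": 0.7}))
```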

Set up an LLM Provider (Optional)

LLM providers give Elacity access to language models for the playground and prompt compilation previews. This is separate from the deployment provider credentials you set in an environment.

Add a provider

Click Add Provider and select your provider (OpenAI, Anthropic, Google, Groq, etc.). Enter your API key and optionally set it as the default.


Test connectivity

Click Test to verify the API key works. You should see a list of available models.
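What the Test button does internally isn't documented here, but you can run an equivalent check yourself. As an illustration against OpenAI's GET /v1/models endpoint (the helper names are ours):

```python
from urllib.request import Request

def build_models_request(api_key: str) -> Request:
    """Build a request listing the models visible to this key
    (OpenAI's GET /v1/models, authenticated with a Bearer token)."""
    return Request(
        "https://api.openai.com/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )

def model_ids(payload: dict) -> list:
    """Extract model IDs from the endpoint's JSON response."""
    return [m["id"] for m in payload.get("data", [])]
```

Send the request with `urllib.request.urlopen` and parse the body with `json.load`; a 200 response with a non-empty model list means the key works.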


Why set up an LLM provider?

  • Playground — test prompts interactively against real models before deploying
  • Model discovery — browse available models from each provider
  • Encrypted storage — API keys are encrypted at rest, not stored in plaintext
  • Shared access — team members can use the playground without needing their own API keys

Next steps