
# Using the Prompt Registry

The Elacity Prompt Registry works like a package manager for AI prompts. Instead of embedding prompt strings in your application code, you author them in the registry with semantic versioning, then pull the compiled result at runtime via API.

## Prompts as Versioned Artifacts

Every prompt in a registry is a versioned artifact with:

* **Semantic versioning** (e.g. `1.0.0`, `1.1.0`, `2.0.0`) for safe rollbacks and predictable upgrades
* **Model-specific variants** — the same prompt name can have different content optimized for `gpt-4o`, `claude-3`, or a `generic` fallback
* **Metadata frontmatter** — description, tags, and configuration stored alongside the prompt content
* **Content hashing** — every version is hashed for integrity verification and drift detection
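The content hash makes drift detection cheap to implement on the consumer side. As a minimal sketch (assuming a SHA-256 digest over the raw content; Elacity's actual hashing scheme may differ), you can hash a locally cached prompt and compare it against the hash published for that version:

```python
import hashlib


def content_hash(prompt_source: str) -> str:
    """Hash prompt content for integrity checks.

    Illustrative only -- the registry's own hash algorithm may differ.
    """
    return hashlib.sha256(prompt_source.encode("utf-8")).hexdigest()


cached = "You are a support agent for {{company_name}}."
published_hash = content_hash(cached)  # would come from the registry metadata

# If the hashes diverge, the local copy has drifted from the published version
assert content_hash(cached) == published_hash
```

Because published versions are immutable, a hash mismatch always means your local copy is stale or was modified, never that the registry changed underneath you.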

### Promptlets

Promptlets are reusable prompt components that can be composed into larger prompts. Think of them like shared utility functions — define a tone-of-voice section, a compliance disclaimer, or a tool-usage preamble once, then reference it across multiple prompts. When a promptlet is updated, every prompt that includes it picks up the change on the next compilation.
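For illustration (the promptlet name and content here are hypothetical), a shared promptlet and a prompt that composes it might look like:

```text
# Promptlet: "tone-of-voice", published as 1.0.0 in a shared registry
Keep responses friendly and concise. Avoid internal jargon.
```

```text
# A prompt that imports it
{{ import "tone-of-voice@^1.0" }}

You are a support agent for {{company_name}}.
```

At compile time, the import line is replaced with the promptlet's content, so the consuming prompt always reflects the latest compatible version.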

## Pulling Prompts at Runtime

Fetch prompt content via the API by calling the artifact version content endpoint:

```bash
curl -H "X-API-Key: YOUR_API_KEY" \
  "https://elacity.ai/api/registries/artifact-versions/content?\
registryRef=your-org/my-registry&\
promptName=customer-support&\
version=1.0.0&\
model=gpt-4o"
```

**Response includes:**

| Field      | Description                                                                                                |
| ---------- | ---------------------------------------------------------------------------------------------------------- |
| `source`   | The raw prompt content exactly as authored                                                                 |
| `compiled` | Promptlets expanded and model/environment preprocessors applied — but **template variables are preserved** |
| `metadata` | Frontmatter metadata (description, tags, etc.)                                                             |

### What compilation resolves vs what it leaves for you

Compilation is a two-stage process. Elacity handles the first stage (structural resolution) at publish time, and your application handles the second stage (context substitution) at runtime.

**Stage 1 — Elacity resolves at compile time:**

* **Promptlet imports** — `{{ import "tone-of-voice@^1.0" }}` is replaced with the promptlet's content
* **Model preprocessors** — `#ECP-IF-MODEL gpt-4o` / `#ECP-NOT-MODEL` blocks are included or excluded based on the target model
* **Environment preprocessors** — `#ECP-IF-ENV prod` / `#ECP-NOT-ENV` blocks are included or excluded based on the target environment

**Stage 2 — Your application substitutes at runtime:**

* **Template variables** — `{{variable_name}}` placeholders remain in the compiled output for you to fill in with your own runtime context

This separation is intentional. Elacity manages the structural composition of your prompt (which components, which model variant, which environment branch), while your application injects the dynamic context that changes per request (user names, session data, query parameters).

### Template Variable Syntax

Template variables use Mustache-style double braces and support dot notation for nested access:

```text
Hello {{user.name}}, welcome to {{company_name}}.

Your account ID is {{session.account_id}}.

{{#if premium}}
As a premium member, you have access to priority support.
{{/if}}
```

### Example: What the compiled output looks like

Suppose you author this prompt in the registry:

```text
{{ import "compliance-disclaimer@^1.0" }}

You are a support agent for {{company_name}}.
The caller's name is {{caller.name}} and their account is {{caller.account_id}}.

#ECP-IF-MODEL gpt-4o
Use markdown formatting in your responses.
#ECP-NOT-MODEL

#ECP-IF-MODEL claude-3
Use XML tags to structure your responses.
#ECP-NOT-MODEL

Always follow the guidelines above.
```

When you fetch it with `model=gpt-4o`, the `compiled` field returns:

```text
[Compliance disclaimer content inlined here by Elacity]

You are a support agent for {{company_name}}.
The caller's name is {{caller.name}} and their account is {{caller.account_id}}.

Use markdown formatting in your responses.

Always follow the guidelines above.
```

Notice: the promptlet import and model preprocessor are resolved, but `{{company_name}}`, `{{caller.name}}`, and `{{caller.account_id}}` remain as placeholders for your application to fill in.

### Substituting Variables in Your Application

After fetching the compiled prompt, substitute the template variables with your runtime context before sending it to an LLM:

```python
import re
import requests

ELACITY_API_KEY = "your-api-key"
BASE_URL = "https://elacity.ai/api"


def get_compiled_prompt(registry_ref, prompt_name, version="latest", model="generic"):
    """Fetch a compiled prompt from the Elacity registry."""
    response = requests.get(
        f"{BASE_URL}/registries/artifact-versions/content",
        params={
            "registryRef": registry_ref,
            "promptName": prompt_name,
            "version": version,
            "model": model,
        },
        headers={"X-API-Key": ELACITY_API_KEY},
    )
    response.raise_for_status()
    return response.json()["compiled"]


def substitute_variables(template: str, context: dict) -> str:
    """Replace {{variable}} placeholders with values from a context dict.

    Supports dot notation: {{user.name}} resolves to context["user"]["name"].
    Unmatched variables are left as-is. Conditional blocks such as
    {{#if premium}}...{{/if}} are not processed by this minimal helper;
    use a full Mustache/Handlebars library if you need them.
    """
    def resolve(match):
        key = match.group(1).strip()
        value = context
        for part in key.split("."):
            if isinstance(value, dict) and part in value:
                value = value[part]
            else:
                return match.group(0)  # leave unmatched variables intact
        return str(value)

    return re.sub(r"\{\{\s*([a-zA-Z0-9_.]+)\s*\}\}", resolve, template)


# 1. Fetch the compiled prompt (promptlets and preprocessors resolved)
compiled = get_compiled_prompt(
    registry_ref="acme/production",
    prompt_name="customer-support",
    version="2.1.0",
    model="gpt-4o",
)

# 2. Substitute runtime context (template variables resolved)
system_prompt = substitute_variables(compiled, {
    "company_name": "Acme Corp",
    "caller": {
        "name": "Jane Doe",
        "account_id": "ACC-12345",
    },
})

# 3. Use the final prompt with your LLM
print(system_prompt)
```

<Note>
  When deploying agents through Elacity (e.g. to VAPI), template variable substitution is handled automatically using the variables defined in your environment. The manual substitution shown above is only needed when you pull prompts directly via the API for use in your own application code.
</Note>

### When to use `source` vs `compiled`

|                        | Raw (`source`)                                           | Compiled (`compiled`)                                            |
| ---------------------- | -------------------------------------------------------- | ---------------------------------------------------------------- |
| **Promptlets**         | Import references like `{{ import "..." }}` appear as-is | Promptlet content inlined                                        |
| **Model blocks**       | `#ECP-IF-MODEL` directives appear as-is                  | Correct model branch selected                                    |
| **Template variables** | `{{variable}}` placeholders appear as-is                 | `{{variable}}` placeholders appear as-is (for you to substitute) |
| **Use case**           | Debugging, auditing, viewing template syntax             | Production runtime — substitute variables and send to LLM        |

This pattern keeps your application code free of prompt strings. When you update a prompt in the registry, your application picks up the change without a code deploy.
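Since the prompt is fetched over the network, you will usually want to cache the compiled result rather than hit the API on every request. A minimal TTL-cache sketch (the `fetch_fn` parameter stands in for a function like `get_compiled_prompt` above; the names and TTL are illustrative, not part of any Elacity SDK):

```python
import time


class PromptCache:
    """Tiny TTL cache so a process refetches a prompt at most once
    per `ttl_seconds`. Illustrative only -- swap in your own caching layer."""

    def __init__(self, fetch_fn, ttl_seconds: float = 300.0):
        self.fetch_fn = fetch_fn
        self.ttl = ttl_seconds
        self._entries = {}  # (registry, prompt, version, model) -> (expires_at, text)

    def get(self, registry_ref: str, prompt_name: str, version: str, model: str) -> str:
        key = (registry_ref, prompt_name, version, model)
        now = time.monotonic()
        entry = self._entries.get(key)
        if entry and entry[0] > now:
            return entry[1]  # still fresh -- serve from cache
        compiled = self.fetch_fn(registry_ref, prompt_name, version, model)
        self._entries[key] = (now + self.ttl, compiled)
        return compiled
```

Pinned versions are immutable, so they can be cached indefinitely; a short TTL mainly matters if you resolve a floating reference like `latest`.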

## Versioning & Compilation Workflow

The typical lifecycle for a prompt is:

<Steps>
  <Step title="Draft">
    Author or edit a prompt in the Elacity editor. Drafts are saved automatically and are not visible to consumers of the registry.
  </Step>

  <Step title="Compile & Preview">
    Preview the compiled output to verify that promptlets are resolved correctly and that model-specific preprocessors produce the expected result. Template variables remain as placeholders in the compiled output, so check that they appear where you expect them.
  </Step>

  <Step title="Publish">
    Publish the draft as a new version (e.g. `1.0.0 → 1.1.0`). Published versions are immutable — once published, a version's content never changes.
  </Step>

  <Step title="Deploy or Pull">
    Reference the published version in an agent deployment, or pull it at runtime via the API.
  </Step>
</Steps>

### Model-Specific Variants

The same prompt name can have variants optimized for different models. When you request a prompt with `model=gpt-4o`, Elacity returns the GPT-4o-specific variant if one exists; otherwise it falls back to the `generic` version.

This is useful when different models need different formatting, token budgets, or instruction styles. For example, Claude models may benefit from XML-structured prompts while GPT models may work better with markdown.

## Structuring Your Registries

Registries are the primary organizational boundary in Elacity. How you partition them determines who can collaborate on which prompts, which artifacts version independently, and where access control boundaries fall. Below are three proven patterns — pick the one that matches your situation, or combine them.

### Per-Client Registries

If you are a service provider or agency managing AI solutions for multiple clients, create a **private registry per client** alongside a **shared registry** for common promptlets.

| Registry               | Visibility | Contents                                                                           |
| ---------------------- | ---------- | ---------------------------------------------------------------------------------- |
| `agency/shared`        | Private    | Reusable promptlets — compliance disclaimers, tone-of-voice, escalation procedures |
| `agency/client-acme`   | Private    | Acme Corp's prompt artifacts (support agent, booking agent, etc.)                  |
| `agency/client-globex` | Private    | Globex's prompt artifacts                                                          |

Client-specific prompts import shared promptlets so you author common logic once:

```text
{{ import "compliance-disclaimer@^1.0" }}
{{ import "escalation-procedure@^2.0" }}

You are a support agent for {{company_name}}.
Handle all inquiries following the compliance and escalation guidelines above.
```

When you fetch from each client's registry, the shared promptlets are resolved automatically:

```python
# Pull the same prompt structure, customized per client
acme_prompt = get_compiled_prompt(
    registry_ref="agency/client-acme",
    prompt_name="support-agent",
    version="1.2.0",
    model="gpt-4o",
)

globex_prompt = get_compiled_prompt(
    registry_ref="agency/client-globex",
    prompt_name="support-agent",
    version="1.0.0",
    model="gpt-4o",
)
```

Pair each client registry with a dedicated [environment](/environment-setup) for that client's provider credentials and variables (company name, support URL, escalation contacts). Use [fleets](/environment-setup#create-a-fleet-optional) to group each client's agents for bulk operations.

<Note>
  The Service Provider plan includes unlimited private registries, designed for exactly this scaling pattern. See [Plans](/creating-an-account#plans) for details.
</Note>

### Per-Domain Registries

Larger organizations with multiple teams building AI products benefit from a **registry per domain or business unit**. Each team owns its registry and release cadence while sharing common promptlets through a cross-team registry.

| Registry                      | Owner         | Contents                                                  |
| ----------------------------- | ------------- | --------------------------------------------------------- |
| `enterprise/shared`           | Platform team | Brand voice, compliance language, formatting standards    |
| `enterprise/customer-support` | CX team       | Support agent prompts, triage logic, CSAT follow-ups      |
| `enterprise/sales-enablement` | Sales ops     | Lead qualification, objection handling, demo prep prompts |
| `enterprise/internal-tools`   | Platform team | Code review agents, incident responders, onboarding bots  |

The CX team can iterate on support prompts at their own pace without affecting sales prompts, and vice versa. Shared promptlets keep brand voice consistent across every domain:

```text
{{ import "brand-voice@^1.0" }}
{{ import "data-handling-policy@^3.0" }}

You are a customer support agent for {{company_name}}.
Always follow the brand voice and data handling policy above.

{{#if premium}}
This caller is a premium customer. Prioritize their request.
{{/if}}
```

This pattern works well when different teams need different approval workflows — the CX team can ship daily while the compliance-sensitive internal-tools registry requires review before every publish.

### Shared Component Registries

Regardless of whether you structure by client or by domain, dedicate a registry to **reusable promptlets only**. Think of it as a shared library that other registries depend on.

Common promptlets to centralize:

* **Compliance disclaimers** — legal language that must appear in every customer-facing prompt
* **Tone-of-voice guides** — brand personality instructions
* **Tool-usage preambles** — instructions for how the model should invoke tools
* **Error-handling patterns** — how to respond when something goes wrong
* **Output format standards** — JSON schemas, markdown formatting rules

When you update a promptlet in the shared registry and publish a new version, every prompt that imports it with a compatible semver range (e.g. `@^1.0`) picks up the change on the next compilation — no need to touch each consuming registry individually.
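To make the range semantics concrete, here is a rough sketch of how a caret range like `^1.0` selects versions, following standard semver caret rules (any version at or above the base that keeps the leftmost non-zero component fixed). This is illustrative; Elacity's resolver may differ in edge cases:

```python
def satisfies_caret(version: str, caret_range: str) -> bool:
    """Check a version string against a caret range like "^1.0" or "^0.2.0".

    Standard semver caret semantics, shown for illustration only.
    """
    def parse(v: str) -> list[int]:
        parts = [int(p) for p in v.split(".")]
        return parts + [0] * (3 - len(parts))  # pad "1.0" -> [1, 0, 0]

    base = parse(caret_range.lstrip("^"))
    candidate = parse(version)
    if base[0] > 0:
        # ^1.x allows 1.y.z >= base, but never 2.0.0
        return candidate[0] == base[0] and candidate >= base
    if base[1] > 0:
        # ^0.2.x is stricter: only 0.2.z >= base
        return candidate[0] == 0 and candidate[1] == base[1] and candidate >= base
    return candidate == base
```

So a promptlet published as `1.1.0` is picked up by every `@^1.0` import automatically, while a breaking `2.0.0` release is not, which lets you ship incompatible changes without silently rewriting downstream prompts.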

```text
# In org/shared — promptlet: "output-format" v1.0.0
Always respond in valid JSON with the following structure:
- "answer": your response text
- "confidence": a number between 0 and 1
- "sources": an array of source references
```

```text
# In org/customer-support — prompt: "support-agent"
{{ import "compliance-disclaimer@^1.0" }}
{{ import "output-format@^1.0" }}

You are a support agent for {{company_name}}.
Route the caller to the correct department based on their issue.
```

```text
# In org/sales — prompt: "lead-qualifier"
{{ import "output-format@^1.0" }}

You are a lead qualification assistant.
Evaluate the prospect based on: {{qualification_criteria}}.
```

Both prompts share the same output format. When you update `output-format` to `1.1.0`, both pick up the change automatically.

### Registries vs Environments

<Warning>
  Do not create separate registries for `dev`, `staging`, and `prod`. Use [environments](/environment-setup) and `#ECP-IF-ENV` preprocessors instead. Registries define organizational boundaries (who owns which prompts); environments define deployment boundaries (which credentials and variables apply at runtime). Mixing the two leads to duplicated prompts and version drift.
</Warning>
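For example, a single prompt can branch on the target environment at compile time using the `#ECP-IF-ENV` blocks introduced earlier (the instructions here are illustrative):

```text
#ECP-IF-ENV prod
Never mention that you are running in a test environment.
#ECP-NOT-ENV

#ECP-IF-ENV dev
[DEV] Prefix every response with a short explanation of your reasoning.
#ECP-NOT-ENV
```

One prompt, published once in one registry, then serves every environment; only the compiled output differs.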

## Storing OpenClaw Souls

The registry is well suited for storing [OpenClaw](https://github.com/opensouls/community) soul definitions. A soul's personality, drives, and behavioral instructions are essentially structured prompts — they benefit from the same versioning, model-specific variants, and runtime resolution that Elacity provides.

### Why store souls in Elacity?

* **Version control** — roll back to a previous soul version if a change causes regressions
* **Model-specific tuning** — maintain separate soul definitions optimized for different LLM backends
* **Runtime resolution** — pull the latest soul at startup without redeploying your application
* **Collaboration** — your team can iterate on soul definitions in the Elacity editor with compilation previews

### Example: OpenClaw wrapper with Elacity

```python
import requests

ELACITY_API_KEY = "your-api-key"
ELACITY_BASE_URL = "https://elacity.ai/api"


def fetch_soul(registry_ref: str, soul_name: str, version: str = "latest", model: str = "generic") -> str:
    """Fetch a compiled soul definition from the Elacity registry."""
    response = requests.get(
        f"{ELACITY_BASE_URL}/registries/artifact-versions/content",
        params={
            "registryRef": registry_ref,
            "promptName": soul_name,
            "version": version,
            "model": model,
        },
        headers={"X-API-Key": ELACITY_API_KEY},
    )
    response.raise_for_status()
    return response.json()["compiled"]


class ElacitySoul:
    """A simple OpenClaw soul wrapper that sources its definition from Elacity."""

    def __init__(self, registry_ref: str, soul_name: str, version: str = "latest", model: str = "generic"):
        self.registry_ref = registry_ref
        self.soul_name = soul_name
        self.version = version
        self.model = model
        self.definition = None

    def load(self):
        """Load the soul definition from the registry."""
        self.definition = fetch_soul(
            self.registry_ref,
            self.soul_name,
            self.version,
            self.model,
        )
        return self

    def get_system_prompt(self) -> str:
        """Return the compiled soul definition as a system prompt."""
        if not self.definition:
            self.load()
        return self.definition


# Usage: load a soul optimized for GPT-4o
soul = ElacitySoul(
    registry_ref="acme/souls",
    soul_name="helpful-assistant",
    version="2.0.0",
    model="gpt-4o",
).load()

# Use with your OpenClaw agent
system_prompt = soul.get_system_prompt()
print(f"Loaded soul ({len(system_prompt)} chars)")

# Switch to a Claude-optimized variant — same soul, different model tuning
claude_soul = ElacitySoul(
    registry_ref="acme/souls",
    soul_name="helpful-assistant",
    version="2.0.0",
    model="claude-3",
).load()
```

<Note>
  Store your soul definitions under a dedicated registry (e.g. `your-org/souls`) to keep them organized separately from your agent prompts and promptlets.
</Note>

## Next steps

<CardGroup cols={2}>
  <Card title="Deploying Agents" icon="duotone rocket" href="/deploying-agents">
    Deploy a voice agent with tools to VAPI
  </Card>

  <Card title="API Reference" icon="duotone code" href="/api-reference">
    Explore the full API
  </Card>
</CardGroup>