Using the Prompt Registry

Version, compile, and pull prompts at runtime

The Elacity Prompt Registry works like a package manager for AI prompts. Instead of embedding prompt strings in your application code, you author them in the registry with semantic versioning, then pull the compiled result at runtime via API.

Prompts as Versioned Artifacts

Every prompt in a registry is a versioned artifact with:

  • Semantic versioning (e.g. 1.0.0, 1.1.0, 2.0.0) for safe rollbacks and predictable upgrades
  • Model-specific variants — the same prompt name can have different content optimized for gpt-4o, claude-3, or a generic fallback
  • Metadata frontmatter — description, tags, and configuration stored alongside the prompt content
  • Content hashing — every version is hashed for integrity verification and drift detection
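
The content hash also gives you a client-side integrity check. A minimal sketch, assuming sha256 over the raw content (the registry's actual hash scheme isn't specified in this guide, so treat this as a stand-in):

```python
import hashlib

def content_hash(prompt: str) -> str:
    """Hash prompt content for client-side pinning and drift detection.

    Illustrative only: the registry's exact hash scheme isn't specified
    here, so sha256 over the raw bytes stands in for it.
    """
    return hashlib.sha256(prompt.encode("utf-8")).hexdigest()

pinned = content_hash("You are a support agent.")
assert content_hash("You are a support agent.") == pinned  # unchanged content
assert content_hash("You are a sales agent.") != pinned    # drift detected
```

Pinning a hash alongside a pinned version lets you fail fast if fetched content ever differs from what you audited.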

Promptlets

Promptlets are reusable prompt components that can be composed into larger prompts. Think of them like shared utility functions — define a tone-of-voice section, a compliance disclaimer, or a tool-usage preamble once, then reference it across multiple prompts. When a promptlet is updated, every prompt that includes it picks up the change on the next compilation.
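
As an illustration (names and content hypothetical), a shared promptlet and a prompt that consumes it might look like:

```
# Promptlet: "tone-of-voice" v1.0.0
Speak in a warm, concise tone. Avoid jargon and never overpromise.

# Prompt: "customer-support" — imports the promptlet
{{ import "tone-of-voice@^1.0" }}
You are a support agent for {{company_name}}.
```

At compilation, the import line is replaced with the promptlet's content.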

Pulling Prompts at Runtime

Fetch prompt content via the API by calling the artifact version content endpoint:

curl -H "X-API-Key: YOUR_API_KEY" \
  "https://elacity.ai/api/registries/artifact-versions/content?registryRef=your-org/my-registry&promptName=customer-support&version=1.0.0&model=gpt-4o"

Response includes:

| Field | Description |
| --- | --- |
| source | The raw prompt content exactly as authored |
| compiled | Promptlets expanded and model/environment preprocessors applied — but template variables are preserved |
| metadata | Frontmatter metadata (description, tags, etc.) |
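
The response body looks roughly like this (values illustrative, field names as described above):

```json
{
  "source": "{{ import \"tone-of-voice@^1.0\" }}\nHello {{user.name}}.",
  "compiled": "Speak warmly and concisely.\nHello {{user.name}}.",
  "metadata": {
    "description": "Customer support greeting",
    "tags": ["support"]
  }
}
```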

What compilation resolves vs what it leaves for you

Compilation is a two-stage process. Elacity handles the first stage (structural resolution) at publish time, and your application handles the second stage (context substitution) at runtime.

Stage 1 — Elacity resolves at compile time:

  • Promptlet imports — {{ import "tone-of-voice@^1.0" }} is replaced with the promptlet's content
  • Model preprocessors — #ECP-IF-MODEL gpt-4o / #ECP-NOT-MODEL blocks are included or excluded based on the target model
  • Environment preprocessors — #ECP-IF-ENV prod / #ECP-NOT-ENV blocks are included or excluded based on the target environment

Stage 2 — Your application substitutes at runtime:

  • Template variables — {{variable_name}} placeholders remain in the compiled output for you to fill in with your own runtime context

This separation is intentional. Elacity manages the structural composition of your prompt (which components, which model variant, which environment branch), while your application injects the dynamic context that changes per request (user names, session data, query parameters).

Template Variable Syntax

Template variables use Mustache-style double braces and support dot notation for nested access:

Hello {{user.name}}, welcome to {{company_name}}.
Your account ID is {{session.account_id}}.
{{#if premium}}
As a premium member, you have access to priority support.
{{/if}}
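
The substitution helper shown later in this guide handles plain variables only; if your prompts use {{#if}} sections, you also need section handling. A minimal stdlib sketch (not a full Mustache implementation — no nesting, loops, or inverted sections):

```python
import re

def render(template: str, context: dict) -> str:
    """Render the Mustache-style subset shown above.

    Handles {{#if flag}}...{{/if}} sections and dotted {{user.name}}
    placeholders. A sketch, not a full Mustache implementation.
    """
    def lookup(key: str):
        # Resolve a dotted key ("user.name") against nested dicts.
        value = context
        for part in key.split("."):
            if isinstance(value, dict) and part in value:
                value = value[part]
            else:
                return None
        return value

    def if_section(match: re.Match) -> str:
        # Keep the section body only when the flag is truthy.
        return match.group(2) if lookup(match.group(1)) else ""

    template = re.sub(
        r"\{\{#if\s+([a-zA-Z0-9_.]+)\s*\}\}(.*?)\{\{/if\}\}",
        if_section,
        template,
        flags=re.DOTALL,
    )

    def variable(match: re.Match) -> str:
        # Substitute a placeholder; leave it intact if the key is missing.
        value = lookup(match.group(1))
        return str(value) if value is not None else match.group(0)

    return re.sub(r"\{\{\s*([a-zA-Z0-9_.]+)\s*\}\}", variable, template)


print(render(
    "Hello {{user.name}}.{{#if premium}} You have priority support.{{/if}}",
    {"user": {"name": "Jane"}, "premium": True},
))
# → Hello Jane. You have priority support.
```

For full Mustache semantics, a dedicated template library is a safer choice than hand-rolled regexes.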

Example: What the compiled output looks like

Suppose you author this prompt in the registry:

{{ import "compliance-disclaimer@^1.0" }}
You are a support agent for {{company_name}}.
The caller's name is {{caller.name}} and their account is {{caller.account_id}}.
#ECP-IF-MODEL gpt-4o
Use markdown formatting in your responses.
#ECP-NOT-MODEL
#ECP-IF-MODEL claude-3
Use XML tags to structure your responses.
#ECP-NOT-MODEL
Always follow the guidelines above.

When you fetch it with model=gpt-4o, the compiled field returns:

[Compliance disclaimer content inlined here by Elacity]
You are a support agent for {{company_name}}.
The caller's name is {{caller.name}} and their account is {{caller.account_id}}.
Use markdown formatting in your responses.
Always follow the guidelines above.

Notice: the promptlet import and model preprocessor are resolved, but {{company_name}}, {{caller.name}}, and {{caller.account_id}} remain as placeholders for your application to fill in.

Substituting Variables in Your Application

After fetching the compiled prompt, substitute the template variables with your runtime context before sending it to an LLM:

import re
import requests

ELACITY_API_KEY = "your-api-key"
BASE_URL = "https://elacity.ai/api"


def get_compiled_prompt(registry_ref, prompt_name, version="latest", model="generic"):
    """Fetch a compiled prompt from the Elacity registry."""
    response = requests.get(
        f"{BASE_URL}/registries/artifact-versions/content",
        params={
            "registryRef": registry_ref,
            "promptName": prompt_name,
            "version": version,
            "model": model,
        },
        headers={"X-API-Key": ELACITY_API_KEY},
    )
    response.raise_for_status()
    return response.json()["compiled"]


def substitute_variables(template: str, context: dict) -> str:
    """Replace {{variable}} placeholders with values from a context dict.

    Supports dot notation: {{user.name}} resolves to context["user"]["name"].
    Unmatched variables are left as-is.
    """
    def resolve(match):
        key = match.group(1).strip()
        value = context
        for part in key.split("."):
            if isinstance(value, dict) and part in value:
                value = value[part]
            else:
                return match.group(0)  # leave unmatched variables intact
        return str(value)

    return re.sub(r"\{\{\s*([a-zA-Z0-9_.]+)\s*\}\}", resolve, template)


# 1. Fetch the compiled prompt (promptlets and preprocessors resolved)
compiled = get_compiled_prompt(
    registry_ref="acme/production",
    prompt_name="customer-support",
    version="2.1.0",
    model="gpt-4o",
)

# 2. Substitute runtime context (template variables resolved)
system_prompt = substitute_variables(compiled, {
    "company_name": "Acme Corp",
    "caller": {
        "name": "Jane Doe",
        "account_id": "ACC-12345",
    },
})

# 3. Use the final prompt with your LLM
print(system_prompt)

When deploying agents through Elacity (e.g. to VAPI), template variable substitution is handled automatically using the variables defined in your environment. The manual substitution shown above is only needed when you pull prompts directly via the API for use in your own application code.

When to use source vs compiled

| | Raw (source) | Compiled (compiled) |
| --- | --- | --- |
| Promptlets | Import references like {{ import "..." }} appear as-is | Promptlet content inlined |
| Model blocks | #ECP-IF-MODEL directives appear as-is | Correct model branch selected |
| Template variables | {{variable}} placeholders appear as-is | {{variable}} placeholders appear as-is (for you to substitute) |
| Use case | Debugging, auditing, viewing template syntax | Production runtime — substitute variables and send to LLM |

This pattern keeps your application code free of prompt strings. When you update a prompt in the registry, your application picks up the change without a code deploy.
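
Because the prompt is fetched over the network, most applications should not call the registry on every request. One pragmatic client-side pattern (a sketch, not an Elacity feature) is a small TTL cache around the fetch, so prompt updates are picked up within one TTL window:

```python
import time

class PromptCache:
    """Client-side TTL cache so each request doesn't hit the registry.

    `fetch` is any callable that returns a compiled prompt (e.g. a
    wrapper around the registry API); `ttl_seconds` bounds staleness.
    """

    def __init__(self, fetch, ttl_seconds: float = 300.0):
        self.fetch = fetch
        self.ttl = ttl_seconds
        self._entries = {}  # cache key -> (expires_at, prompt)

    def get(self, *key):
        now = time.monotonic()
        hit = self._entries.get(key)
        if hit is not None and hit[0] > now:
            return hit[1]  # fresh cache hit
        prompt = self.fetch(*key)
        self._entries[key] = (now + self.ttl, prompt)
        return prompt


# Demo with a stub fetcher (a real app would pass its registry client)
calls = []
def stub_fetch(prompt_name):
    calls.append(prompt_name)
    return f"compiled prompt for {prompt_name}"

cache = PromptCache(stub_fetch, ttl_seconds=60)
cache.get("customer-support")
cache.get("customer-support")  # served from cache; no second fetch
print(f"registry calls: {len(calls)}")
# → registry calls: 1
```

Pinned versions are immutable, so a long TTL is safe when you pin; use a shorter TTL if you resolve "latest".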

Versioning & Compilation Workflow

The typical lifecycle for a prompt is:

1. Draft — Author or edit a prompt in the Elacity editor. Drafts are saved automatically and are not visible to consumers of the registry.

2. Compile & Preview — Preview the compiled output to verify that promptlets resolve correctly and that model-specific preprocessors produce the expected result. Template variables stay as placeholders in the preview, just as they do in the compiled API response.

3. Publish — Publish the draft as a new version (e.g. 1.0.0 → 1.1.0). Published versions are immutable — once published, a version's content never changes.

4. Deploy or Pull — Reference the published version in an agent deployment, or pull it at runtime via the API.

Model-Specific Variants

The same prompt name can have variants optimized for different models. When you request a prompt with model=gpt-4o, Elacity returns the GPT-4o-specific variant if one exists, otherwise falls back to the generic version.

This is useful when different models need different formatting, token budgets, or instruction styles. For example, Claude models may benefit from XML-structured prompts while GPT models may work better with markdown.

Structuring Your Registries

Registries are the primary organizational boundary in Elacity. How you partition them determines who can collaborate on which prompts, which artifacts version independently, and where access control boundaries fall. Below are three proven patterns — pick the one that matches your situation, or combine them.

Per-Client Registries

If you are a service provider or agency managing AI solutions for multiple clients, create a private registry per client alongside a shared registry for common promptlets.

| Registry | Visibility | Contents |
| --- | --- | --- |
| agency/shared | Private | Reusable promptlets — compliance disclaimers, tone-of-voice, escalation procedures |
| agency/client-acme | Private | Acme Corp's prompt artifacts (support agent, booking agent, etc.) |
| agency/client-globex | Private | Globex's prompt artifacts |

Client-specific prompts import shared promptlets so you author common logic once:

{{ import "compliance-disclaimer@^1.0" }}
{{ import "escalation-procedure@^2.0" }}
You are a support agent for {{company_name}}.
Handle all inquiries following the compliance and escalation guidelines above.

When you fetch from each client’s registry, the shared promptlets are resolved automatically:

# Pull the same prompt structure, customized per client
acme_prompt = get_compiled_prompt(
    registry_ref="agency/client-acme",
    prompt_name="support-agent",
    version="1.2.0",
    model="gpt-4o",
)

globex_prompt = get_compiled_prompt(
    registry_ref="agency/client-globex",
    prompt_name="support-agent",
    version="1.0.0",
    model="gpt-4o",
)

Pair each client registry with a dedicated environment for that client’s provider credentials and variables (company name, support URL, escalation contacts). Use fleets to group each client’s agents for bulk operations.

The Service Provider plan includes unlimited private registries, designed for exactly this scaling pattern. See Plans for details.

Per-Domain Registries

Larger organizations with multiple teams building AI products benefit from a registry per domain or business unit. Each team owns its registry and release cadence while sharing common promptlets through a cross-team registry.

| Registry | Owner | Contents |
| --- | --- | --- |
| enterprise/shared | Platform team | Brand voice, compliance language, formatting standards |
| enterprise/customer-support | CX team | Support agent prompts, triage logic, CSAT follow-ups |
| enterprise/sales-enablement | Sales ops | Lead qualification, objection handling, demo prep prompts |
| enterprise/internal-tools | Platform team | Code review agents, incident responders, onboarding bots |

The CX team can iterate on support prompts at their own pace without affecting sales prompts, and vice versa. Shared promptlets keep brand voice consistent across every domain:

{{ import "brand-voice@^1.0" }}
{{ import "data-handling-policy@^3.0" }}
You are a customer support agent for {{company_name}}.
Always follow the brand voice and data handling policy above.
{{#if premium}}
This caller is a premium customer. Prioritize their request.
{{/if}}

This pattern works well when different teams need different approval workflows — the CX team can ship daily while the compliance-sensitive internal-tools registry requires review before every publish.

Shared Component Registries

Regardless of whether you structure by client or by domain, dedicate a registry to reusable promptlets only. Think of it as a shared library that other registries depend on.

Common promptlets to centralize:

  • Compliance disclaimers — legal language that must appear in every customer-facing prompt
  • Tone-of-voice guides — brand personality instructions
  • Tool-usage preambles — instructions for how the model should invoke tools
  • Error-handling patterns — how to respond when something goes wrong
  • Output format standards — JSON schemas, markdown formatting rules

When you update a promptlet in the shared registry and publish a new version, every prompt that imports it with a compatible semver range (e.g. @^1.0) picks up the change on the next compilation — no need to touch each consuming registry individually.
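
To make the range semantics concrete, here is a sketch of caret matching (illustrative; the registry's full resolver may handle more, such as prerelease tags or wildcards):

```python
def satisfies_caret(version: str, caret_range: str) -> bool:
    """Return True if `version` satisfies a caret range like "^1.0".

    Caret semantics sketch: same major version, and at least the
    stated minimum. The registry's full resolution rules may differ.
    """
    want = [int(p) for p in caret_range.lstrip("^").split(".")]
    have = [int(p) for p in version.split(".")]
    return have[0] == want[0] and have[: len(want)] >= want


print(satisfies_caret("1.1.0", "^1.0"))  # a 1.1.0 publish is picked up
print(satisfies_caret("2.0.0", "^1.0"))  # a 2.0.0 publish is not
```

In other words, a breaking major bump (2.0.0) never flows into consumers pinned to @^1.0; they must opt in explicitly.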

# In org/shared — promptlet: "output-format" v1.0.0
Always respond in valid JSON with the following structure:
- "answer": your response text
- "confidence": a number between 0 and 1
- "sources": an array of source references

# In org/customer-support — prompt: "support-agent"
{{ import "compliance-disclaimer@^1.0" }}
{{ import "output-format@^1.0" }}
You are a support agent for {{company_name}}.
Route the caller to the correct department based on their issue.

# In org/sales — prompt: "lead-qualifier"
{{ import "output-format@^1.0" }}
You are a lead qualification assistant.
Evaluate the prospect based on: {{qualification_criteria}}.
Both prompts share the same output format. When you update output-format to 1.1.0, both pick up the change automatically.

Registries vs Environments

Do not create separate registries for dev, staging, and prod. Use environments and #ECP-IF-ENV preprocessors instead. Registries define organizational boundaries (who owns which prompts); environments define deployment boundaries (which credentials and variables apply at runtime). Mixing the two leads to duplicated prompts and version drift.
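
For example, a single prompt in a single registry can branch per environment using the #ECP-IF-ENV / #ECP-NOT-ENV directives described earlier (content illustrative):

```
You are a support agent for {{company_name}}.
#ECP-IF-ENV prod
Never reveal internal tooling or test details to the caller.
#ECP-NOT-ENV
#ECP-IF-ENV dev
[DEV BUILD] Think out loud so engineers can debug your reasoning.
#ECP-NOT-ENV
```

The prompt stays in one place with one version history; the environment you compile for selects the branch.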

Storing OpenClaw Souls

The registry is well suited for storing OpenClaw soul definitions. A soul’s personality, drives, and behavioral instructions are essentially structured prompts — they benefit from the same versioning, model-specific variants, and runtime resolution that Elacity provides.

Why store souls in Elacity?

  • Version control — roll back to a previous soul version if a change causes regressions
  • Model-specific tuning — maintain separate soul definitions optimized for different LLM backends
  • Runtime resolution — pull the latest soul at startup without redeploying your application
  • Collaboration — your team can iterate on soul definitions in the Elacity editor with compilation previews

Example: OpenClaw wrapper with Elacity

import requests

ELACITY_API_KEY = "your-api-key"
ELACITY_BASE_URL = "https://elacity.ai/api"


def fetch_soul(registry_ref: str, soul_name: str, version: str = "latest", model: str = "generic") -> str:
    """Fetch a compiled soul definition from the Elacity registry."""
    response = requests.get(
        f"{ELACITY_BASE_URL}/registries/artifact-versions/content",
        params={
            "registryRef": registry_ref,
            "promptName": soul_name,
            "version": version,
            "model": model,
        },
        headers={"X-API-Key": ELACITY_API_KEY},
    )
    response.raise_for_status()
    return response.json()["compiled"]


class ElacitySoul:
    """A simple OpenClaw soul wrapper that sources its definition from Elacity."""

    def __init__(self, registry_ref: str, soul_name: str, version: str = "latest", model: str = "generic"):
        self.registry_ref = registry_ref
        self.soul_name = soul_name
        self.version = version
        self.model = model
        self.definition = None

    def load(self):
        """Load the soul definition from the registry."""
        self.definition = fetch_soul(
            self.registry_ref,
            self.soul_name,
            self.version,
            self.model,
        )
        return self

    def get_system_prompt(self) -> str:
        """Return the compiled soul definition as a system prompt."""
        if not self.definition:
            self.load()
        return self.definition


# Usage: load a soul optimized for GPT-4o
soul = ElacitySoul(
    registry_ref="acme/souls",
    soul_name="helpful-assistant",
    version="2.0.0",
    model="gpt-4o",
).load()

# Use with your OpenClaw agent
system_prompt = soul.get_system_prompt()
print(f"Loaded soul ({len(system_prompt)} chars)")

# Switch to a Claude-optimized variant — same soul, different model tuning
claude_soul = ElacitySoul(
    registry_ref="acme/souls",
    soul_name="helpful-assistant",
    version="2.0.0",
    model="claude-3",
).load()

Store your soul definitions under a dedicated registry (e.g. your-org/souls) to keep them organized separately from your agent prompts and promptlets.

Next steps