Using the Prompt Registry
The Elacity Prompt Registry works like a package manager for AI prompts. Instead of embedding prompt strings in your application code, you author them in the registry with semantic versioning, then pull the compiled result at runtime via API.
Prompts as Versioned Artifacts
Every prompt in a registry is a versioned artifact with:
- Semantic versioning (e.g. `1.0.0`, `1.1.0`, `2.0.0`) for safe rollbacks and predictable upgrades
- Model-specific variants — the same prompt name can have different content optimized for `gpt-4o`, `claude-3`, or a generic fallback
- Metadata frontmatter — description, tags, and configuration stored alongside the prompt content
- Content hashing — every version is hashed for integrity verification and drift detection
Promptlets
Promptlets are reusable prompt components that can be composed into larger prompts. Think of them like shared utility functions — define a tone-of-voice section, a compliance disclaimer, or a tool-usage preamble once, then reference it across multiple prompts. When a promptlet is updated, every prompt that includes it picks up the change on the next compilation.
Pulling Prompts at Runtime
Fetch prompt content via the API by calling the artifact version content endpoint:
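The exact endpoint path is not documented here; as a sketch, assuming a REST layout of `/registries/{registry}/prompts/{name}/versions/{version}/content` with optional `model` and `env` query parameters (only `model` appears in the examples in this guide), a request URL could be built like this:

```python
from urllib.parse import urlencode

def artifact_content_url(base, registry, prompt, version, model=None, env=None):
    """Build a content-endpoint URL. The path layout and the `env`
    parameter name are assumptions for illustration."""
    path = f"/registries/{registry}/prompts/{prompt}/versions/{version}/content"
    params = {k: v for k, v in (("model", model), ("env", env)) if v}
    query = f"?{urlencode(params)}" if params else ""
    return base.rstrip("/") + path + query

url = artifact_content_url(
    "https://api.elacity.example", "your-org/agents",
    "support-triage", "1.2.0", model="gpt-4o",
)
print(url)
```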
The response includes both the raw `source` content and the `compiled` result, along with version metadata such as the content hash.
What compilation resolves vs what it leaves for you
Compilation is a two-stage process. Elacity handles the first stage (structural resolution) at publish time, and your application handles the second stage (context substitution) at runtime.
Stage 1 — Elacity resolves at compile time:
- Promptlet imports — `{{ import "tone-of-voice@^1.0" }}` is replaced with the promptlet’s content
- Model preprocessors — `#ECP-IF-MODEL gpt-4o` / `#ECP-NOT-MODEL` blocks are included or excluded based on the target model
- Environment preprocessors — `#ECP-IF-ENV prod` / `#ECP-NOT-ENV` blocks are included or excluded based on the target environment
Stage 2 — Your application substitutes at runtime:
- Template variables — `{{variable_name}}` placeholders remain in the compiled output for you to fill in with your own runtime context
This separation is intentional. Elacity manages the structural composition of your prompt (which components, which model variant, which environment branch), while your application injects the dynamic context that changes per request (user names, session data, query parameters).
Template Variable Syntax
Template variables use Mustache-style double braces and support dot notation for nested access:
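For example, using variable names that appear in the compiled-output example in this section:

```
Hello {{caller.name}}, thanks for contacting {{company_name}} support.
Your account ID is {{caller.account_id}}.
```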
Example: What the compiled output looks like
Suppose you author this prompt in the registry:
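The original example is not reproduced here; a hypothetical authored prompt, built from the directives described above, might look like this (treating `#ECP-NOT-MODEL` as the block’s closing delimiter is an assumption):

```
{{ import "tone-of-voice@^1.0" }}

You are a support agent for {{company_name}}.
The caller is {{caller.name}} (account {{caller.account_id}}).

#ECP-IF-MODEL gpt-4o
Format responses as concise markdown.
#ECP-NOT-MODEL
```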
When you fetch it with model=gpt-4o, the compiled field returns:
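For a prompt that imports a tone-of-voice promptlet and gates a markdown instruction behind `#ECP-IF-MODEL gpt-4o`, the compiled result would look roughly like this (the first line stands in for the promptlet’s content, which is illustrative):

```
Speak in a warm, professional tone. Avoid jargon.

You are a support agent for {{company_name}}.
The caller is {{caller.name}} (account {{caller.account_id}}).

Format responses as concise markdown.
```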
Notice: the promptlet import and model preprocessor are resolved, but {{company_name}}, {{caller.name}}, and {{caller.account_id}} remain as placeholders for your application to fill in.
Substituting Variables in Your Application
After fetching the compiled prompt, substitute the template variables with your runtime context before sending it to an LLM:
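A minimal sketch of that substitution in Python — this is illustrative application-side code, not an Elacity SDK call:

```python
import re

def render(template: str, context: dict) -> str:
    """Resolve Mustache-style {{var}} and {{a.b}} placeholders
    against a nested context dict."""
    def resolve(match):
        value = context
        for key in match.group(1).split("."):
            value = value[key]  # a KeyError surfaces missing context early
        return str(value)
    return re.sub(r"\{\{\s*([\w.]+)\s*\}\}", resolve, template)

compiled = "Hello {{caller.name}}, welcome to {{company_name}} support."
context = {"company_name": "Acme", "caller": {"name": "Ada", "account_id": "A-42"}}
print(render(compiled, context))
# → Hello Ada, welcome to Acme support.
```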
When deploying agents through Elacity (e.g. to VAPI), template variable substitution is handled automatically using the variables defined in your environment. The manual substitution shown above is only needed when you pull prompts directly via the API for use in your own application code.
When to use source vs compiled
Use `compiled` for runtime consumption: promptlet imports and preprocessors are already resolved, and only template variables remain. Use `source` when you need the raw authored content, for example to display, diff, or audit a prompt exactly as it was written.
This pattern keeps your application code free of prompt strings. When you update a prompt in the registry, your application picks up the change without a code deploy.
Versioning & Compilation Workflow
The typical lifecycle for a prompt is:
Draft
Author or edit a prompt in the Elacity editor. Drafts are saved automatically and are not visible to consumers of the registry.
Compile & Preview
Preview the compiled output to verify that promptlets are resolved correctly, template variables remain as placeholders, and model-specific preprocessors produce the expected result.
Publish
Publish the version to make it visible to consumers of the registry, bumping the semantic version to match the scope of the change.
Model-Specific Variants
The same prompt name can have variants optimized for different models. When you request a prompt with model=gpt-4o, Elacity returns the GPT-4o-specific variant if one exists, otherwise falls back to the generic version.
This is useful when different models need different formatting, token budgets, or instruction styles. For example, Claude models may benefit from XML-structured prompts while GPT models may work better with markdown.
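The fallback behavior can be pictured as follows — an illustrative sketch, not registry internals:

```python
def select_variant(variants: dict, model: str) -> str:
    """Return the model-specific variant if one exists,
    otherwise the generic fallback."""
    return variants.get(model, variants["generic"])

variants = {
    "generic": "You are a helpful assistant.",
    "gpt-4o": "You are a helpful assistant. Format answers as markdown.",
}
print(select_variant(variants, "gpt-4o"))    # model-specific variant
print(select_variant(variants, "claude-3"))  # no claude-3 variant: generic fallback
```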
Structuring Your Registries
Registries are the primary organizational boundary in Elacity. How you partition them determines who can collaborate on which prompts, which artifacts version independently, and where access control boundaries fall. Below are three proven patterns — pick the one that matches your situation, or combine them.
Per-Client Registries
If you are a service provider or agency managing AI solutions for multiple clients, create a private registry per client alongside a shared registry for common promptlets.
Client-specific prompts import shared promptlets so you author common logic once:
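A client prompt might look like this (promptlet names are illustrative; whether cross-registry imports need a registry-qualified name is not specified here, so unqualified imports are shown):

```
{{ import "tone-of-voice@^1.0" }}
{{ import "compliance-disclaimer@^1.0" }}

You are the support agent for {{company_name}}.
```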
When you fetch from each client’s registry, the shared promptlets are resolved automatically.
Pair each client registry with a dedicated environment for that client’s provider credentials and variables (company name, support URL, escalation contacts). Use fleets to group each client’s agents for bulk operations.
The Service Provider plan includes unlimited private registries, designed for exactly this scaling pattern. See Plans for details.
Per-Domain Registries
Larger organizations with multiple teams building AI products benefit from a registry per domain or business unit. Each team owns its registry and release cadence while sharing common promptlets through a cross-team registry.
The CX team can iterate on support prompts at their own pace without affecting sales prompts, and vice versa. Shared promptlets keep brand voice consistent across every domain.
This pattern works well when different teams need different approval workflows — the CX team can ship daily while the compliance-sensitive internal-tools registry requires review before every publish.
Shared Component Registries
Regardless of whether you structure by client or by domain, dedicate a registry to reusable promptlets only. Think of it as a shared library that other registries depend on.
Common promptlets to centralize:
- Compliance disclaimers — legal language that must appear in every customer-facing prompt
- Tone-of-voice guides — brand personality instructions
- Tool-usage preambles — instructions for how the model should invoke tools
- Error-handling patterns — how to respond when something goes wrong
- Output format standards — JSON schemas, markdown formatting rules
When you update a promptlet in the shared registry and publish a new version, every prompt that imports it with a compatible semver range (e.g. @^1.0) picks up the change on the next compilation — no need to touch each consuming registry individually.
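For instance, two prompts owned by different teams might both import the same shared promptlet (prompt and registry names are illustrative):

```
support-triage (cx registry):
  {{ import "output-format@^1.0" }}

lead-qualifier (sales registry):
  {{ import "output-format@^1.0" }}
```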
Both prompts share the same output format. When you update output-format to 1.1.0, both pick up the change automatically.
Registries vs Environments
Do not create separate registries for dev, staging, and prod. Use environments and #ECP-IF-ENV preprocessors instead. Registries define organizational boundaries (who owns which prompts); environments define deployment boundaries (which credentials and variables apply at runtime). Mixing the two leads to duplicated prompts and version drift.
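For example, a single prompt can branch per environment rather than being duplicated across registries (directive placement is illustrative):

```
#ECP-IF-ENV prod
Never reveal internal ticket identifiers.
#ECP-NOT-ENV
```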
Storing OpenClaw Souls
The registry is well suited for storing OpenClaw soul definitions. A soul’s personality, drives, and behavioral instructions are essentially structured prompts — they benefit from the same versioning, model-specific variants, and runtime resolution that Elacity provides.
Why store souls in Elacity?
- Version control — roll back to a previous soul version if a change causes regressions
- Model-specific tuning — maintain separate soul definitions optimized for different LLM backends
- Runtime resolution — pull the latest soul at startup without redeploying your application
- Collaboration — your team can iterate on soul definitions in the Elacity editor with compilation previews
Example: OpenClaw wrapper with Elacity
Store your soul definitions under a dedicated registry (e.g. your-org/souls) to keep them organized separately from your agent prompts and promptlets.
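A sketch of a wrapper that pulls a soul at startup. The endpoint URL, response shape, and how the soul is handed to OpenClaw are all assumptions for illustration; only the idea of fetching the `compiled` content at runtime comes from the sections above.

```python
import json
from urllib import request

# Hypothetical content-endpoint URL for a soul stored in your-org/souls.
SOUL_URL = ("https://api.elacity.example/registries/your-org/souls"
            "/prompts/core-soul/versions/1.0.0/content?model=claude-3")

def extract_soul(response_json: dict) -> str:
    """Pull the compiled soul text out of a registry response.
    Assumes the response carries a 'compiled' field."""
    return response_json["compiled"]

def load_soul(url: str = SOUL_URL) -> str:
    """Fetch the soul definition at startup so soul updates
    take effect without redeploying the application."""
    with request.urlopen(url) as resp:
        return extract_soul(json.load(resp))
```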
