# @elizaos/prompts

Shared prompt templates for elizaOS across TypeScript, Python, and Rust runtimes.

This package provides a single source of truth for all prompt templates used by elizaOS agents. Prompts are stored as `.txt` files and generated into native formats for each language.
```
packages/prompts/
├── prompts/              # Source prompt templates (.txt files)
│   ├── reply.txt
│   ├── choose_option.txt
│   ├── image_generation.txt
│   └── ...
├── scripts/              # Build scripts
│   └── generate.js       # Generates native code from prompts
├── dist/                 # Generated output
│   ├── typescript/       # TypeScript exports
│   ├── python/           # Python module
│   └── rust/             # Rust source
└── package.json
```
All prompts use Handlebars-style template variables:

- `{{variableName}}` - Simple variable substitution
- `{{#each items}}...{{/each}}` - Iteration over arrays
- `{{#if condition}}...{{/if}}` - Conditional blocks
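As an illustration, simple variable substitution can be sketched as below. This is a minimal stand-in, not the runtimes' actual template engine; `{{#each}}` and `{{#if}}` blocks need a full Handlebars-compatible implementation.

```typescript
// Sketch of simple {{variableName}} substitution only.
// The real runtimes also handle {{#each}} and {{#if}} blocks.
function substitute(template: string, state: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match: string, name: string) =>
    name in state ? state[name] : match // leave unknown variables untouched
  );
}

const rendered = substitute("Hello, I am {{agentName}}.", { agentName: "Alice" });
// rendered === "Hello, I am Alice."
```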
Use camelCase for all template variables to ensure consistency across languages:

- `{{agentName}}` - The agent's name
- `{{providers}}` - Provider context
- `{{recentMessages}}` - Recent conversation messages
```bash
# Build all targets
npm run build

# Build specific target
npm run build:typescript
npm run build:python
npm run build:rust
```

TypeScript:

```typescript
import { REPLY_TEMPLATE, CHOOSE_OPTION_TEMPLATE } from "@elizaos/prompts";

const prompt = composePrompt({
  state: { agentName: "Alice" },
  template: REPLY_TEMPLATE,
});
```

Python:

```python
from elizaos.prompts import REPLY_TEMPLATE, CHOOSE_OPTION_TEMPLATE

prompt = compose_prompt(state={'agentName': 'Alice'}, template=REPLY_TEMPLATE)
```

Rust:

```rust
use elizaos_prompts::{REPLY_TEMPLATE, CHOOSE_OPTION_TEMPLATE};

let prompt = compose_prompt(&state, REPLY_TEMPLATE);
```

To add a new prompt:

- Create a new `.txt` file in the `prompts/` directory
- Name the file using snake_case (e.g., `my_new_action.txt`)
- Run `npm run build` to generate native code
- The prompt will be exported as `MY_NEW_ACTION_TEMPLATE` in all languages
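The snake_case-to-export-name convention above can be sketched as follows (an illustrative helper, not the actual logic in `scripts/generate.js`):

```typescript
// Derives the exported constant name from a prompt file name,
// e.g. "my_new_action.txt" -> "MY_NEW_ACTION_TEMPLATE".
function exportNameFor(fileName: string): string {
  const base = fileName.replace(/\.txt$/, "");
  return `${base.toUpperCase()}_TEMPLATE`;
}
```

For example, `reply.txt` maps to `REPLY_TEMPLATE` and `choose_option.txt` maps to `CHOOSE_OPTION_TEMPLATE`.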
Plugins can use the same prompt system! See README-PLUGIN-PROMPTS.md for details on how to set up prompts in your plugin.

The `scripts/generate-plugin-prompts.js` utility can be used by any plugin to generate TypeScript, Python, and Rust exports from `.txt` prompt templates.
- **Start with a task description** - Begin prompts with `# Task:` to clearly state the objective
- **Include providers placeholder** - Use `{{providers}}` where provider context should be injected
- **Use TOON output format** - Standardize on the TOON response format for consistent parsing
- **Add clear instructions** - Include explicit instructions for the LLM
- **End with output format** - Always specify the expected output format
Example:

```
# Task: Generate dialog for the character {{agentName}}.

{{providers}}

# Instructions: Write the next message for {{agentName}}.
Respond using TOON format like this:

thought: Your thought here
text: Your message here

IMPORTANT: Your response must ONLY contain the TOON document above. No XML, no JSON, no markdown fences.
```

- Do not embed real secrets in prompt templates. Prompts are source-controlled and often distributed.
- Avoid including PII (emails, phone numbers, addresses, IDs) in templates or examples.
- Prefer placeholders (e.g., `{{apiKey}}`, `{{userEmail}}`) and ensure the runtime injects only the minimum needed.
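A minimal reader for the flat `key: value` TOON response shown in the example above might look like this (a sketch only; a full TOON parser also handles nesting, arrays, and quoting):

```typescript
// Extracts flat "key: value" lines from a TOON-style response.
// Sketch only: ignores the nesting and array syntax that full TOON supports.
function parseFlatToon(response: string): Record<string, string> {
  const result: Record<string, string> = {};
  for (const line of response.split("\n")) {
    const m = line.match(/^(\w+):\s*(.*)$/);
    if (m) result[m[1]] = m[2];
  }
  return result;
}

const parsed = parseFlatToon("thought: Considering a reply\ntext: Hello!");
// parsed.thought === "Considering a reply", parsed.text === "Hello!"
```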
This package includes a conservative scanner that flags prompt templates containing strings that strongly resemble real credentials (or private key material).

Run:

```bash
npm run check:secrets
```

It scans:

- `packages/prompts/prompts/**/*.txt`
- `plugins/**/prompts/**/*.txt`
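The kind of conservative matching such a scanner performs can be sketched as below. These patterns are illustrative assumptions, not the actual rules used by `check:secrets`.

```typescript
// Flags strings that strongly resemble credential material.
// Illustrative patterns only; the real scanner's rules may differ.
const SUSPICIOUS_PATTERNS: RegExp[] = [
  /-----BEGIN [A-Z ]*PRIVATE KEY-----/, // PEM private key material
  /\bAKIA[0-9A-Z]{16}\b/,               // AWS access key ID shape
  /\bgh[pousr]_[A-Za-z0-9]{36,}\b/,     // GitHub token shape
];

function looksLikeSecret(text: string): boolean {
  return SUSPICIOUS_PATTERNS.some((p) => p.test(text));
}
```

Matching only high-confidence shapes like these keeps the scanner conservative: template placeholders such as `{{apiKey}}` are not flagged.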