Collections are keyed maps, not arrays
prompts, fragments, and tools are all objects (keyed maps), not arrays. Each key serves as the identifier for the entry. For example, prompts maps task type strings like "support" or "billing" to their Prompt definitions.
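For example, a `prompts` collection keyed by task type might look like this (the entries are illustrative, with required fields beyond those shown omitted for brevity):

```json
"prompts": {
  "support": { "id": "support", "name": "Customer Support", "version": "1.0.0" },
  "billing": { "id": "billing", "name": "Billing Assistant", "version": "1.0.0" }
}
```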
A single prompt configuration within a pack. Each prompt represents a specific task type (e.g., "support", "sales") with its own template, variables, tools, and validation rules. Prompts can evolve independently with their own version numbers.
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| id | string | Yes | Unique identifier, typically matching the map key. Pattern: `^[a-z][a-z0-9_-]*$`. |
| name | string | Yes | Human-readable name. |
| version | string | Yes | Prompt version following Semantic Versioning, independent from the pack version. |
| system_template | string | Yes | The system prompt template. Use template syntax (e.g., `{{variable}}`) for variable substitution. |
| description | string | No | Detailed description of the prompt's purpose and behavior. |
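Putting these fields together, a prompt entry might look like the following sketch (the template text and the lookup_order tool reference are illustrative, not part of the spec):

```json
"prompts": {
  "support": {
    "id": "support",
    "name": "Customer Support",
    "version": "2.1.0",
    "description": "Answers general customer-service questions.",
    "system_template": "You are a support agent for {{company_name}}.\n{{fragments.customer_context}}",
    "tools": ["lookup_order"]
  }
}
```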
A tool definition following the function calling convention. Tools enable the LLM to call external functions. Tools are defined at the pack level and referenced by name in each prompt's tools array.
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| name | string | Yes | Tool name. Pattern: `^[a-zA-Z_][a-zA-Z0-9_]*$`. |
| description | string | Yes | What the tool does. The LLM uses this to decide when to call it. |
| parameters | object | No | JSON Schema object defining the tool's input parameters (see below). |
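A complete tool entry following this shape might look like the sketch below; the lookup_order tool and its parameter schema are hypothetical examples:

```json
"tools": {
  "lookup_order": {
    "name": "lookup_order",
    "description": "Look up an order by its ID and return status and shipping details.",
    "parameters": {
      "type": "object",
      "properties": {
        "order_id": { "type": "string", "description": "The customer's order ID." }
      },
      "required": ["order_id"]
    }
  }
}
```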
A validation rule (guardrail) applied to LLM responses.
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| type | string | Yes | The validator type that determines how validation is performed. Not an enum — runtimes define and register their own types. Examples: "banned_words", "max_length", "length", "max_sentences", "regex_match", "sentiment", "custom". |
| enabled | boolean | Yes | Whether this validator is active. |
| fail_on_violation | boolean | No | If true, violations cause an error. Default: false. |
| params | object | No | Validator-specific parameters (e.g., word lists, character limits). |
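A validator list using these fields might look like the sketch below. Since runtimes define their own validator types, the params shapes shown (a word list, a character limit) are assumptions for illustration:

```json
"validators": [
  {
    "type": "banned_words",
    "enabled": true,
    "fail_on_violation": true,
    "params": { "words": ["guarantee", "lifetime warranty"] }
  },
  {
    "type": "max_length",
    "enabled": true,
    "params": { "max_chars": 1200 }
  }
]
```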
Evals are automated quality checks on LLM outputs. Unlike validators (which run inline and can block responses), evals run asynchronously and produce scores or metrics. Evals can be defined at both pack level (cross-cutting) and prompt level (prompt-specific). Prompt-level evals with the same id override pack-level evals.
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| name | string | Yes | Metric name following Prometheus conventions (snake_case). Pattern: `^[a-zA-Z_:][a-zA-Z0-9_:]*$`. |
| type | string | Yes | Metric type. One of: "gauge", "counter", "histogram", "boolean". |
| range | object | No | Optional value bounds with min and/or max fields. |
The metric object uses additionalProperties: true, so runtimes can attach extra fields (e.g., labels, help, buckets).
"evals":[ { "id":"json_format", "type":"json_valid", "trigger":"every_turn", "description":"Verify the assistant always returns valid JSON", "metric":{ "name":"promptpack_json_valid", "type":"boolean" } }, { "id":"tone-check", "type":"llm_judge", "trigger":"sample_turns", "sample_percentage":10, "params":{ "judge_prompt":"Rate the response tone on a 1-5 scale for professionalism.", "model":"gpt-4o", "passing_score":4 }, "metric":{ "name":"promptpack_tone_score", "type":"gauge", "range":{"min":1,"max":5} } } ]
Validators vs Evals
Both enforce output quality, but at different points: validators run inline on every response and can block output (fail_on_violation), while evals run asynchronously and produce scores and metrics without blocking. Use validators for hard guardrails, and evals for quality measurement and monitoring.
Fragments are shared, reusable template text blocks defined at the pack level. They are simple string values keyed by name.
"fragments":{ "customer_context":"Customer: {{customer_name}}\nAccount Type: {{account_type}}", "greeting":"Hello! How can I help you today?", "escalation_notice":"I'm going to connect you with a specialist." }
Prompts reference fragments in their system_template using template syntax: {{fragments.customer_context}}.
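PromptPack does not prescribe a template engine, so the resolution step below is only a minimal sketch of how a runtime might inline {{fragments.*}} references before normal variable substitution, using a simple regex:

```python
import re

def resolve_fragments(template: str, fragments: dict[str, str]) -> str:
    """Replace {{fragments.<key>}} references with the fragment text."""
    pattern = re.compile(r"\{\{fragments\.([a-zA-Z0-9_-]+)\}\}")

    def substitute(match: re.Match) -> str:
        key = match.group(1)
        if key not in fragments:
            raise KeyError(f"Unknown fragment: {key}")
        return fragments[key]

    return pattern.sub(substitute, template)

fragments = {"greeting": "Hello! How can I help you today?"}
template = "You are a support agent.\n{{fragments.greeting}}"
# Prints the template with the greeting fragment inlined; any
# {{variable}} placeholders inside fragments survive for later substitution.
print(resolve_fragments(template, fragments))
```

Substituting via a callback (rather than a replacement string) keeps any braces inside the fragment text from being reinterpreted by the regex engine.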
PromptPack v1.3 adds a state-machine workflow over the pack's prompts. Each state references a prompt key and declares event-driven transitions to other states.
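This section does not reproduce the workflow schema, so the field names in the sketch below (initial, states, prompt, transitions keyed by event name) are assumptions chosen only to illustrate the state-machine shape described above:

```json
"workflow": {
  "initial": "triage",
  "states": {
    "triage": {
      "prompt": "triage",
      "transitions": { "billing_issue": "billing", "tech_issue": "technical" }
    },
    "billing": {
      "prompt": "billing",
      "transitions": { "resolved": "triage" }
    },
    "technical": {
      "prompt": "technical",
      "transitions": { "resolved": "triage" }
    }
  }
}
```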
PromptPack v1.3 adds agent definitions that map prompts to A2A (Agent-to-Agent) compatible agent cards. This enables multi-agent orchestration via the A2A protocol.
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| description | string | No | Agent description published in the A2A Agent Card. Overrides the prompt's description if set. |
| tags | string[] | No | Discovery tags for the agent, used by A2A registries and routers. |
| input_modes | string[] | No | MIME types the agent accepts as input. Defaults to ["text/plain"]. |
| output_modes | string[] | No | MIME types the agent can produce as output. Defaults to ["text/plain"]. |
"agents":{ "entry":"triage", "members":{ "triage":{ "description":"Routes customer requests to the right specialist", "tags":["router","customer-service"], "input_modes":["text/plain"], "output_modes":["text/plain"] }, "billing":{ "description":"Handles billing inquiries and payment issues", "tags":["billing","payments"], "input_modes":["text/plain"], "output_modes":["text/plain","application/json"] }, "technical":{ "description":"Provides technical troubleshooting assistance", "tags":["support","technical"] } } }
Workflow + Agents
workflow and agents are independent features — you can use either or both. When used together, the workflow drives state transitions while agent definitions provide A2A discoverability metadata for each prompt.
PromptPack v1.3.1 adds skills for progressive-disclosure knowledge loading. Skills are modular knowledge sources that agents load on demand, keeping system templates lean while providing access to deep domain expertise.
The instructions field holds the skill's instructions or knowledge content; it is loaded into the agent's context when the skill is activated.
"skills":[ "./skills/billing", {"path":"./skills/compliance","preload":true}, { "name":"escalation-protocol", "description":"Steps for escalating unresolved customer issues", "instructions":"When a customer issue cannot be resolved within 3 exchanges:\n1. Acknowledge the complexity\n2. Collect case details\n3. Create an escalation ticket\n4. Provide the ticket reference to the customer" } ]
Skills + Workflow
When a WorkflowState declares a skills field, it scopes which skills are available in that state. Use "none" to disable skills for a state. Without a skills field, all pack-level skills are available.
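State-level scoping might look like the sketch below. The surrounding WorkflowState fields are not specified in this section, and whether states reference skills by path or by name is also not shown, so both are assumptions here:

```json
"states": {
  "triage": { "prompt": "triage" },
  "billing": { "prompt": "billing", "skills": ["./skills/billing"] },
  "smalltalk": { "prompt": "support", "skills": "none" }
}
```

Here "triage" inherits all pack-level skills, "billing" is restricted to one skill, and "smalltalk" runs with skills disabled.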