Anatomy of a System Prompt
Every effective system prompt has three layers: identity, capability, and behavior. You don't need all three in every prompt, but understanding each layer helps you write prompts that produce consistent, useful agents.
1. Identity — Who is this agent?
The opening sentence sets the frame. It tells the model what role to inhabit and what expertise to assume.
Example identity lines:

"You are a senior software engineer and coding mentor."

"You are a creative writing assistant with a vivid imagination."

"You are the Perspective Intelligence guide — an expert on this app and all its features."

Why it matters: Identity anchors the model's responses. Without it, the model defaults to a generic assistant voice. With it, the model draws on patterns associated with that role — a “senior engineer” gives different advice than a “beginner-friendly tutor.”
Tips
- Be specific. “Senior software engineer” is better than “helpful coder”
- Include the domain. “Creative writing assistant” narrows the space more than “writing helper”
- Use natural titles. The model responds to role framing the way an actor responds to a character description
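In most chat APIs, the identity line is simply the first system message the model sees. A minimal sketch in Python, assuming a generic OpenAI-style messages array (the structure only, not any specific SDK):

```python
def build_messages(identity: str, user_text: str) -> list[dict]:
    """Place the identity line in the system slot, ahead of user turns."""
    return [
        {"role": "system", "content": identity},
        {"role": "user", "content": user_text},
    ]

msgs = build_messages(
    "You are a senior software engineer and coding mentor.",
    "Why is my loop slow?",
)
```

Whatever runtime you target, the pattern is the same: identity goes in the system slot, before any conversation history.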
2. Capability — What does this agent know or do?
After identity, list the specific capabilities or knowledge domains. This is where you ground the agent in concrete territory.
The Guide agent's capability list:

You know about:
- Chat with Apple Intelligence and on-device Gemma models
- Voice mode for hands-free conversations
- Vision mode with live camera for scene descriptions and text reading
- Tools: weather, calendar, contacts, reminders, maps, web search...

And the Writer agent's capability statement, in flowing prose:

"Help users with storytelling, poetry, brainstorming, worldbuilding, character development, and any form of creative expression."

Why it matters: Without capability grounding, the model will try to be helpful about everything. Listing specific capabilities steers it toward confident, accurate responses within the defined scope.
Tips
- Use lists for domains with many facets (like the Guide agent's feature list)
- Use flowing prose for creative or open-ended capabilities (like the Writer agent)
- Be exhaustive within scope, but don't list things outside the agent's purpose
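For agents with enumerable domains, the capability block is just an intro line plus bullets. A small sketch (helper name and sample items are illustrative):

```python
def capability_block(intro: str, capabilities: list[str]) -> str:
    """Render enumerable capabilities as an intro line plus a bulleted list."""
    bullets = "\n".join(f"- {c}" for c in capabilities)
    return f"{intro}\n{bullets}"

block = capability_block(
    "You know about:",
    [
        "Voice mode for hands-free conversations",
        "Vision mode with live camera",
    ],
)
```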
3. Behavior — How should this agent act?
The final layer defines communication style, methodology, and boundaries.
Example behavior directives:

"Be friendly, concise, and proactive about suggesting features the user might not know about."

"When debugging, think step by step. When explaining, use analogies when helpful. Be precise with technical details and honest about trade-offs."

"When brainstorming, offer multiple directions. When writing, match the user's preferred tone and style. Be encouraging and collaborative."

Why it matters: Behavior directives shape the experience of interacting with the agent. Two agents with identical identity and capabilities but different behavior directives will feel completely different to use.
Tips
- Use conditional behavior: “When X, do Y” patterns let you handle different interaction modes
- Be specific about tone: “friendly” and “precise” are more useful than “nice” or “good”
- Include proactive behaviors: “suggest features” or “offer multiple directions” make agents feel alive
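The “When X, do Y” shape is regular enough to generate. A sketch, assuming you keep conditionals as situation/action pairs (all names here are invented for illustration):

```python
def behavior_block(tone: str, conditionals: dict[str, str]) -> str:
    """Render 'When <situation>, <action>.' sentences, then a tone directive."""
    whens = " ".join(f"When {cond}, {action}." for cond, action in conditionals.items())
    return f"{whens} {tone}"

writer = behavior_block(
    "Be encouraging and collaborative.",
    {
        "brainstorming": "offer multiple directions",
        "writing": "match the user's preferred tone and style",
    },
)
```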
Prompt Length
Short prompts work. The three built-in agents range from 50 to 120 words. On-device models have limited context windows, so every word in your system prompt competes with conversation history.
Long system prompts don't just waste context — they can confuse the model by giving it too many competing directives.
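A rough word-count check keeps drafts honest about that budget. A sketch, using the 120-word upper bound of the built-in agents as an illustrative default:

```python
def check_prompt_budget(prompt: str, max_words: int = 120) -> tuple[int, bool]:
    """Return (word count, within budget?) for a draft system prompt."""
    n = len(prompt.split())
    return n, n <= max_words
```

Word count is only a proxy for tokens, but it is close enough to catch a prompt that has quietly tripled in size.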
Structure Patterns
The List Pattern (Guide Agent)
You are [identity]. You know about:
- Domain 1
- Domain 2
- Domain 3
[Behavioral directive.]

Best for: knowledge-grounded agents with enumerable domains.
The Flow Pattern (Writer Agent)
You are [identity]. Help users with [capability 1], [capability 2],
[capability 3], and [capability 4]. [Style guidance.]
[Conditional behaviors.] [Tone directive.]

Best for: creative or open-ended agents where rigid structure would feel limiting.
The Methodology Pattern (Coder Agent)
You are [identity]. Help users [core task]. [Quality standard.]
When [situation 1], [approach]. When [situation 2], [approach].
[Values statement.]

Best for: technical agents where how they work matters as much as what they do.
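All three patterns reduce to plain format strings. A sketch, with every placeholder name invented for illustration:

```python
LIST_PATTERN = "You are {identity}. You know about:\n{domains}\n{behavior}"
FLOW_PATTERN = "You are {identity}. Help users with {capabilities}. {style} {tone}"
METHODOLOGY_PATTERN = "You are {identity}. Help users {task}. {standard} {conditionals} {values}"

guide = LIST_PATTERN.format(
    identity="the Perspective Intelligence guide",
    domains="- Voice mode\n- Vision mode",
    behavior="Be friendly, concise, and proactive.",
)
```

Keeping patterns as templates makes it easy to audit all your agents for the same three-layer shape.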
Common Mistakes
Being too vague
"Be a helpful assistant" gives the model nothing to work with.
Being too restrictive
"Only respond about X and never mention Y" creates brittle agents that fail on edge cases.
Contradicting yourself
"Be concise" and "provide thorough explanations" in the same prompt will cause inconsistent behavior.
Stuffing in rules
Long lists of "do not" directives signal distrust and make prompts fragile. Focus on what the agent should do.
Ignoring temperature
A creative prompt at temperature 0.2 will feel flat. A precision prompt at 1.0 will hallucinate. Prompt and temperature must align.
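One way to keep prompt and temperature aligned is to store them together per agent. A sketch with illustrative values only (the exact temperatures are assumptions, not the app's real settings):

```python
# Hypothetical presets; the point is that prompt style and temperature travel together.
AGENT_PRESETS = {
    "coder":  {"prompt_style": "methodology", "temperature": 0.2},
    "guide":  {"prompt_style": "list",        "temperature": 0.5},
    "writer": {"prompt_style": "flow",        "temperature": 0.9},
}

def temperature_for(agent: str) -> float:
    """Look up the temperature paired with an agent's prompt style."""
    return AGENT_PRESETS[agent]["temperature"]
```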