$ man content-wiki/voice-system
Voice and Anti-Slop · advanced
Building a Voice System
3-tier architecture for encoding your voice into a repo
Why Voice Systems Matter
Every AI can write. Very few AI outputs sound like a specific person. The difference is a voice system — a structured set of rules, patterns, and examples that constrain AI generation to match your actual voice. Without one, every post sounds like it was written by the same generic AI. With one, the AI becomes an extension of how you actually communicate.
The problem is not that AI writes badly. It writes competently but generically. Same sentence rhythms, same transition phrases, same structural patterns. A voice system breaks that homogeneity by giving the AI specific constraints: these words yes, these words never, this sentence length, this paragraph structure, this tone.
PATTERN
The 3-Tier Architecture
Tier 1 — Voice DNA: the foundational layer. Core voice rules that apply to ALL content regardless of platform: sentence style, word choices, anti-patterns, identity markers, formatting rules. Every tier above inherits from this one. Files: core-voice.md, anti-slop.md, viral-hooks.md.
Tier 2 — Context Playbooks: platform-specific adaptations of the voice DNA. How the voice changes for LinkedIn vs X vs TikTok vs Substack. Each playbook inherits from Tier 1 and adds platform-specific constraints. The voice stays consistent but the format, length, and delivery adapt.
Tier 3 — Content Ops: production-level rules for creating content. Pre-publish checklist, substance requirements, improvement protocol, content pillars, pitfall avoidance. This tier operationalizes the voice — turning principles into checklists and workflows.
Each tier builds on the one below it. A LinkedIn post loads Tier 1 (voice DNA) + Tier 2 (LinkedIn playbook) + Tier 3 (pre-publish checklist). A TikTok script loads Tier 1 + Tier 2 (TikTok playbook) + Tier 3 (substance requirements). The voice is modular.
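The tier combinations above can be sketched as a simple lookup. This is a hypothetical illustration, not the repo's actual loader; the file paths follow the skills/ layout described later, and the content-type keys are invented for the example.

```python
# Hypothetical sketch: which voice files each content type loads.
# Paths follow the skills/ layout described in this article; the
# content-type keys ("linkedin-post", "tiktok-script") are illustrative.
TIER_STACKS = {
    "linkedin-post": [
        "skills/tier-1-voice-dna/core-voice.md",
        "skills/tier-1-voice-dna/anti-slop.md",
        "skills/tier-2-context-playbooks/linkedin.md",
        "skills/tier-3-content-ops/pre-publish-checklist.md",
    ],
    "tiktok-script": [
        "skills/tier-1-voice-dna/core-voice.md",
        "skills/tier-2-context-playbooks/tiktok.md",
        "skills/tier-3-content-ops/substance-requirements.md",
    ],
}

def voice_stack(content_type: str) -> list[str]:
    """Return the ordered list of voice files to load for a content type."""
    return TIER_STACKS[content_type]
```

The point of the lookup shape: every stack starts at Tier 1, so the voice core is shared while the playbook and ops layers vary per platform.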
PATTERN
Encoding Voice Into a Repo
The voice system lives as markdown files in a git repository. This is the key architectural decision. Voice rules are not prompts you paste into ChatGPT. They are versioned documents that evolve over time, are loaded by agent skills, and can be diffed to see how your voice has changed.
Directory structure: skills/tier-1-voice-dna/ contains the foundation. skills/tier-2-context-playbooks/ contains per-platform adaptations. skills/tier-3-content-ops/ contains production rules, checklists, and pillar definitions.
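As a rough sketch, the skeleton above could be scaffolded like this. Only the directory names and the three Tier 1 files come from the article; the other file names are placeholders.

```python
from pathlib import Path

# Hypothetical scaffold for the three-tier layout. The tier directory names
# and the Tier 1 file names come from the article; the rest are placeholders.
TIERS = {
    "tier-1-voice-dna": ["core-voice.md", "anti-slop.md", "viral-hooks.md"],
    "tier-2-context-playbooks": ["linkedin.md", "x.md", "tiktok.md", "substack.md"],
    "tier-3-content-ops": ["pre-publish-checklist.md", "content-pillars.md"],
}

def scaffold(root: Path = Path("skills")) -> None:
    """Create the tier directories and empty voice files under root."""
    for tier, files in TIERS.items():
        tier_dir = root / tier
        tier_dir.mkdir(parents=True, exist_ok=True)
        for name in files:
            (tier_dir / name).touch()
```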
When an agent skill generates content, it reads the relevant voice files first, then generates with those constraints loaded into context. The skill does not need the voice rules hardcoded — it loads them dynamically from the repo. Change a voice rule in the markdown file, and every future content generation reflects the change immediately.
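The dynamic loading step can be sketched in a few lines: read the files at generation time and concatenate them into a context block. This is a minimal illustration, assuming plain markdown files on disk; the function name and the comment-header format are invented for the example.

```python
from pathlib import Path

def load_voice_context(files: list[str], root: str = ".") -> str:
    """Read voice files fresh at generation time and join them into one
    context block. Because nothing is hardcoded, editing a markdown file
    changes every future generation immediately."""
    parts = []
    for rel in files:
        text = (Path(root) / rel).read_text()
        parts.append(f"<!-- {rel} -->\n{text}")  # label each file's rules
    return "\n\n".join(parts)
```

A skill would prepend this block to its generation prompt, so the constraints in the current state of the repo, not a stale copy, shape the output.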
The Journey from Generic to Calibrated
Building a voice system is iterative. You do not sit down and write the perfect voice guide on day one. You start with basic rules: lowercase first word, no em-dashes, short paragraphs. You generate content. You read the output. You catch patterns that sound wrong. You add rules to catch those patterns. You generate again. The voice guide grows from 10 lines to 100 to 500.
The anti-slop guide started as 3 rules. It now catches 14+ patterns because each piece of generated content revealed new patterns that needed catching. The LinkedIn playbook started as tone notes. It now covers emoji systems, CTA patterns, sign-off styles, and five content pillars. Each rule was earned by catching a specific failure in real content.
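A rule "earned by catching a specific failure" usually ends up as a named pattern check. A minimal sketch of that shape, with generic example rules (the guide's actual 14+ patterns are not reproduced here):

```python
import re

# Illustrative anti-slop rules; each named pattern stands in for a failure
# caught in real generated content. These examples are generic, not the
# guide's actual rule set.
ANTI_SLOP = {
    "em-dash": re.compile("\u2014"),
    "generic transition": re.compile(r"\b(moreover|furthermore|in conclusion)\b", re.I),
    "hype opener": re.compile(r"^in today's fast-paced world", re.I),
}

def slop_violations(text: str) -> list[str]:
    """Return the names of every anti-slop rule the text violates."""
    return [name for name, pattern in ANTI_SLOP.items() if pattern.search(text)]
```

Growing the guide then means appending one entry per caught failure, which is why it can go from 3 rules to 14+ without restructuring anything.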
CODE
Modular Voice Loading
The modular loading pattern: each agent skill specifies which voice files it needs. A LinkedIn post skill loads: tier-1-voice-dna/core-voice.md + tier-1-voice-dna/anti-slop.md + tier-2-context-playbooks/linkedin.md + tier-3-content-ops/pre-publish-checklist.md. A TikTok script skill loads: tier-1-voice-dna/core-voice.md + tier-2-context-playbooks/tiktok.md. Each combination produces platform-appropriate output while maintaining voice consistency.
The loading is explicit in each SKILL.md file. The skill tells the agent: before generating, read these files. This means you can see exactly which voice rules influenced any piece of content by checking which skill generated it. Full traceability from output back to voice configuration.
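What "explicit in each SKILL.md file" might look like, as a hypothetical excerpt; the exact format depends on your agent framework, and only the file paths come from this article:

```markdown
<!-- Hypothetical SKILL.md excerpt; exact format depends on the agent framework -->
# LinkedIn Post Skill

Before generating, read these files in order:

1. skills/tier-1-voice-dna/core-voice.md
2. skills/tier-1-voice-dna/anti-slop.md
3. skills/tier-2-context-playbooks/linkedin.md
4. skills/tier-3-content-ops/pre-publish-checklist.md

Generate the post only after all four files are in context.
```

Because the list lives in the skill file itself, tracing any output back to its voice configuration is just a matter of reading the skill that produced it.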