Define a Company Handbook for Agents

Intent · Steffen Glomb

Paperclip.ing manages AI agents as employees of a company — org charts, goals, projects, tasks. It abstracts away the technical layer and gives you a workforce metaphor instead. I tested it in March 2026.

After a few hours, the pattern was familiar: agents acting freely in a shared workspace with no coordination. Document chaos, no processes, no collaboration. The same failure mode you get from unguided humans, only faster.

This intent gives agents non-technical, behavioral guidance for how to operate — written for both agent and human readers.

The simple (and naive) prompt

"Write a handbook for AI agents in our company"

What could possibly go wrong?

  • The handbook sounds inspiring but gives no concrete behaviors, so agents still improvise in critical handoffs.
  • Every run produces a different structure, so you end up editing the output directly instead of improving the definition.
  • The result repeats obvious writing advice instead of defining operational constraints and quality expectations specific to your organization.

Sharp Definition

This intent defines a company handbook for agents. Rather than micromanaging individual tasks (like in a chat), it declares rationales and behavioral rules the agents operate under. The format is deliberately concise: context windows are expensive, and the model already knows how organizations work. It just needs to be told which conventions to follow.

Context

DirectiveDeclaration with the keywords:
  • purpose
  • background

Task

DirectiveDeclaration with the keywords:
  • general
  • outcome
  • step 1
  • step 2

Input

DirectiveDeclaration with the keywords:
  • Priming: a list of url entries and unkeyed inline notes
  • Standards

Output

DirectiveDeclaration with the keywords:
  • artifact
  • format
  • structure
  • template
  • constraint
  • validation
  • five Chapter entries, each with its sections
  • guidance
  • constraints
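To make the declaration's shape concrete, here is a minimal sketch of the same structure as plain Python data. The field names mirror the Context, Task, Input, and Output blocks above; every value is an illustrative placeholder, not the published definition.

```python
# Minimal sketch of the declaration's shape as Python data. Field names mirror
# the blocks above; all values are illustrative placeholders, not the
# published definition.
from dataclasses import dataclass, field

@dataclass
class Chapter:
    title: str
    sections: list[str]

@dataclass
class HandbookDeclaration:
    # Context
    purpose: str = "Give agents behavioral rules for operating in a shared workspace"
    background: str = "Agents collaborate like employees; chat-level micromanagement does not scale"
    # Task
    general: str = "Write a concise company handbook for AI agents"
    outcome: str = "A handbook agents can follow without per-task instruction"
    steps: list[str] = field(default_factory=lambda: [
        "Derive behavioral directives from the priming sources and standards",
        "Render them into the fixed chapter structure",
    ])
    # Input
    priming: list[str] = field(default_factory=list)    # URLs and inline concept notes
    standards: list[str] = field(default_factory=list)  # org-specific rules to bake in
    # Output
    artifact: str = "Company handbook"
    format: str = "markdown"
    chapters: list[Chapter] = field(default_factory=list)  # five chapters in the definition
    guidance: str = ""
    constraints: list[str] = field(default_factory=list)
```

Framed as data, the point of the concise format stands out: the declaration carries only the organization-specific choices, since the model already knows how organizations work.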


Optional sample result

Preview a sample markdown outcome and copy it as a baseline output style.

Failure Modes

  • Aspirational handbook output
    Effect: The generated handbook sounds good but stays generic, with no actionable behavioral directives.
    Cause: Output constraints are too weak on section template and behavior-level specificity.
    Status: Addressed
  • Input standards dropped
    Effect: One or more standards from input never appear in the resulting handbook.
    Cause: Missing or ignored validation rule that each input standard must map into a section.
    Status: Addressed
  • Uneven section structure
    Effect: Some sections use different formats, making the handbook hard to scan and apply consistently.
    Cause: Structure template is underspecified or not enforced across all sections.
    Status: Addressed
  • Source leakage from priming
    Effect: The output references priming concepts directly instead of paraphrasing them into company language.
    Cause: Paraphrasing constraint is missing, weak, or violated during generation.
    Status: Addressed
  • False completeness
    Effect: The handbook appears complete, but still misses edge cases or ambiguous handoff scenarios.
    Cause: Some quality gaps require human review beyond what declaration constraints can fully guarantee.
    Status: Watch for
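The "Input standards dropped" row corresponds to a checkable rule: every standard supplied as input has to surface somewhere in the generated handbook. Below is a minimal sketch of that check, assuming the handbook is plain markdown text and standards are short strings; the real definition expresses this as a declarative validation constraint, not code.

```python
# Sketch of the "no input standard may be dropped" check, assuming the
# handbook is plain markdown and standards are short strings. Naive substring
# matching is an assumption; paraphrased standards would need a smarter check.
def dropped_standards(handbook_md: str, standards: list[str]) -> list[str]:
    """Return the standards that never appear in the generated handbook."""
    text = handbook_md.lower()
    return [s for s in standards if s.lower() not in text]

handbook_md = "## Collaboration\nEvery document names an owner and a definition of done.\n"
missing = dropped_standards(handbook_md, ["definition of done", "escalation path"])
print(missing)  # ['escalation path'] -> the handbook needs another pass
```

A substring check like this only catches outright omissions; paraphrased or partially covered standards still need the declared validation rule or a human pass, which is why "False completeness" stays a watch item.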

Customization Guide

  • Priming different concepts
    Why: You may believe in different concepts for collaboration; choose which flavor your agents should follow.
    Where: Input -> Priming
  • Set more or different standards
    Why: Add specific guidance that should be baked into the handbook.
    Where: Input -> Standards
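Continuing the sketch from the Output section, customization then amounts to editing the Priming and Standards lists; the URL and standard below are hypothetical placeholders.

```python
# Hypothetical customization of the earlier HandbookDeclaration sketch.
decl = HandbookDeclaration()
decl.priming.append("https://example.com/your-collaboration-concept")  # placeholder URL
decl.standards.append("Every project document names an owner and a definition of done")
```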
Role: Operations Lead · AI Program Manager · Knowledge Manager
Artifact: Company handbook