Setting up AI Agent Escalation to Human
Escalation is not an admission of system weakness but a sign of properly designed autonomy. An agent should know the limits of its authority and be able to hand a task over to a human in time, before damage is done.
Escalation Triggers
Configuration limits:
- Transaction amount > $X → escalation to financial controller
- Email goes to >N recipients → confirmation
- Deletion of files older than N days → verification
- Action affects production system → approval
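The configuration limits above can be expressed as a simple rule check run before each action. This is a minimal sketch: the `Action` fields, threshold values, and reason strings are illustrative placeholders, not a prescribed schema.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative thresholds standing in for the $X / N values above.
MAX_AMOUNT = 10_000        # transaction amount limit, $
MAX_RECIPIENTS = 50        # email recipient limit
MAX_FILE_AGE_DAYS = 30     # file-age limit for deletions

@dataclass
class Action:
    kind: str                       # e.g. "transaction", "email", "file_delete"
    amount: float = 0.0
    recipients: int = 0
    file_age_days: int = 0
    targets_production: bool = False

def escalation_reason(action: Action) -> Optional[str]:
    """Return a reason string if the action must be escalated, else None."""
    if action.kind == "transaction" and action.amount > MAX_AMOUNT:
        return "amount exceeds limit -> financial controller"
    if action.kind == "email" and action.recipients > MAX_RECIPIENTS:
        return "bulk email -> confirmation required"
    if action.kind == "file_delete" and action.file_age_days > MAX_FILE_AGE_DAYS:
        return "deleting old files -> verification required"
    if action.targets_production:
        return "production system affected -> approval required"
    return None
```

Returning a reason string rather than a bare boolean keeps the trigger and the routing decision in one place.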
Semantic triggers: the LLM recognizes escalation signals in the task text: legal threats ("sue", "consult a lawyer"), VIP customer complaints, financial claims, mentions of regulatory bodies.
Technical failures: a tool returned an error three times in a row, the task execution timed out, or the task contains conflicting instructions.
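The error-count and timeout triggers can be tracked with a small helper. The limits below are illustrative, and the consecutive-error counter resets on any successful tool call.

```python
import time

MAX_CONSECUTIVE_ERRORS = 3    # "three times in a row"
TASK_TIMEOUT_SEC = 600        # illustrative task deadline

class FailureTracker:
    """Tracks consecutive tool errors and elapsed task time."""

    def __init__(self) -> None:
        self.consecutive_errors = 0
        self.started_at = time.monotonic()

    def record(self, ok: bool) -> None:
        # A success resets the streak; a failure extends it.
        self.consecutive_errors = 0 if ok else self.consecutive_errors + 1

    def should_escalate(self) -> bool:
        timed_out = time.monotonic() - self.started_at > TASK_TIMEOUT_SEC
        return self.consecutive_errors >= MAX_CONSECUTIVE_ERRORS or timed_out
```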
Explicit uncertainty: the agent itself expresses doubt ("I'm not sure whether I should...") → automatic escalation.
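A crude way to catch this is scanning the agent's output for hedge phrases. The patterns below are hypothetical; a more robust design would have the model emit a structured confidence field instead of prose to be scanned.

```python
import re

# Hypothetical hedge-phrase patterns; extend to taste.
UNCERTAINTY_PATTERNS = [
    r"\bi'?m not sure\b",
    r"\bi am not sure\b",
    r"\bunclear whether\b",
    r"\bshould i\b",
]

def expresses_uncertainty(agent_output: str) -> bool:
    text = agent_output.lower()
    return any(re.search(p, text) for p in UNCERTAINTY_PATTERNS)
```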
Escalation Mechanism
- The agent stops executing the current task (saving its state)
- Forms an escalation message: task context, reason for escalation, suggested options
- Sends it to the responsible party via a priority channel (Telegram / SMS / email)
- Waits for a decision (with a configurable timeout)
- After the human decides, continues or closes the task
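The five steps above can be sketched end to end. The `notify` callback and the `decisions` queue are stand-ins for the priority channel and the human's reply path; the task dict, field names, and timeout value are all assumptions for illustration.

```python
import json
import queue

DECISION_TIMEOUT_SEC = 15 * 60   # illustrative wait for a human decision

def escalate(task: dict, reason: str, suggestions: list,
             notify, decisions: "queue.Queue"):
    # 1. Stop the current task and persist its state.
    task["status"] = "paused"
    saved_state = json.dumps(task)
    # 2. Form the escalation message: context, reason, suggested options.
    message = {
        "context": task["description"],
        "reason": reason,
        "options": suggestions,
    }
    # 3. Send via the priority channel (Telegram / SMS / email in practice).
    notify(message)
    # 4. Wait for the human decision, bounded by the timeout.
    try:
        decision = decisions.get(timeout=DECISION_TIMEOUT_SEC)
    except queue.Empty:
        decision = {"action": "close", "note": "no response within timeout"}
    # 5. Continue or close the task based on the decision.
    task["status"] = "running" if decision["action"] == "continue" else "closed"
    return task, saved_state, decision
```

Persisting state before notifying matters: if the process dies while waiting, the task can be restored rather than lost.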
Escalation Routing
Different escalation types route to different responsible parties, following an on-call schedule. If the primary responder does not answer, fall back to the next level after N minutes.
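Routing with a fallback chain might look like the sketch below. The route table, responder names, and the N-minute fallback interval are hypothetical.

```python
from typing import List

# Illustrative routing table: escalation type -> ordered responder chain.
ROUTES = {
    "financial": ["controller", "finance-lead", "cto"],
    "legal":     ["legal-oncall", "general-counsel"],
    "technical": ["sre-oncall", "eng-manager"],
}
FALLBACK_AFTER_MIN = 10   # move to the next level after N minutes unanswered

def responder_chain(escalation_type: str) -> List[str]:
    # Unknown types fall back to a default duty responder.
    return ROUTES.get(escalation_type, ["duty-manager"])

def current_responder(chain: List[str], minutes_waited: int) -> str:
    # Step down the chain one level per unanswered interval,
    # capped at the last responder.
    level = min(minutes_waited // FALLBACK_AFTER_MIN, len(chain) - 1)
    return chain[level]
```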