Governance Framework
Every AI action passes through 5 governance layers before execution. Built on the immune system model: validate, authenticate, classify, enforce, and evidence. Fail-closed by default. Compliant by design.
The 5-Layer Governance Model
Inspired by the human immune system, ARQERA's governance model uses five defence layers. Each layer assumes the layer above it has been compromised. Every request must pass through all five before execution.
Input Validation
Skin
The first barrier. Every request is validated for format, size, rate limits, and structural integrity before it enters the system. Malformed or excessive requests are rejected at the edge.
- Schema validation on every API call
- Rate limiting per tenant, user, and endpoint
- Payload size and depth checks
- Input sanitisation against injection attacks
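The edge checks above can be sketched in a few lines. The limits, helper names, and return shape below are illustrative assumptions, not ARQERA's real configuration:

```python
import json

# Hypothetical limits -- real values are configured per tenant and endpoint.
MAX_PAYLOAD_BYTES = 64 * 1024
MAX_NESTING_DEPTH = 8

def depth(value, level=1):
    """Measure the nesting depth of a decoded JSON value."""
    if isinstance(value, dict):
        return max([depth(v, level + 1) for v in value.values()], default=level)
    if isinstance(value, list):
        return max([depth(v, level + 1) for v in value], default=level)
    return level

def validate_payload(raw):
    """Edge validation: size, parseability, and structural depth.
    Returns (ok, reason) so the caller can reject at the boundary."""
    if len(raw) > MAX_PAYLOAD_BYTES:
        return False, "payload too large"
    try:
        body = json.loads(raw)
    except ValueError:
        return False, "malformed JSON"
    if depth(body) > MAX_NESTING_DEPTH:
        return False, "nesting too deep"
    return True, "ok"
```

Schema validation and rate limiting would layer on top of this same reject-at-the-edge pattern.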
Authentication & Identity
Innate Immunity
Verifies who is making the request and isolates tenant boundaries. Every action is scoped to a verified identity within a verified tenant. Cross-tenant access is impossible by design.
- SSO via SAML 2.0 and OpenID Connect
- Multi-factor authentication enforcement
- Role-based access control (RBAC)
- Complete tenant isolation at the data layer
AI Intent Classification
Adaptive Immunity
A dual-brain AI system (Claude + OpenAI) classifies the intent behind every medium- and high-risk action. If either brain flags an action as suspicious, it is escalated to human approval. Disagreement between the brains automatically triggers the highest approval tier.
- Dual-brain consensus for high-risk actions
- Intent classified as safe, suspicious, or malicious
- Confidence scoring with configurable thresholds
- Anomaly detection against historical behaviour patterns
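The escalation rules described above (either brain suspicious escalates to a human; disagreement triggers the highest tier) reduce to a small consensus function. The function and verdict names are assumptions for illustration:

```python
def consensus_verdict(brain_a, brain_b):
    """Combine two independent intent classifications, each one of
    "safe", "suspicious", or "malicious".
    - Disagreement between the brains -> highest approval tier.
    - Agreement on a non-safe label -> human approval.
    - Agreement on safe -> proceed to the next governance layer."""
    if brain_a != brain_b:
        return "escalate_highest_tier"
    if brain_a in ("suspicious", "malicious"):
        return "escalate_human_approval"
    return "proceed"
```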
Policy Enforcement
Governance Engine
The core of ARQERA governance. Every action is evaluated against seven immutable laws and your custom policy set. The most restrictive outcome always applies. Policy violations produce infinite cost in the routing function, making the action impossible to execute.
- Seven immutable laws checked on every action
- Custom policy rules per tenant
- Most-restrictive-wins conflict resolution
- Fail-closed: uncertain evaluations block the action
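Most-restrictive-wins plus fail-closed behaviour can be sketched as follows. The Verdict ordering and evaluate helper are illustrative, not the actual engine:

```python
from enum import IntEnum

class Verdict(IntEnum):
    # Ordered least to most restrictive, so max() picks the strictest.
    ALLOW = 0
    REQUIRE_APPROVAL = 1
    BLOCK = 2

def evaluate(policy_verdicts):
    """Combine per-policy verdicts for one action.
    Most-restrictive-wins: the strictest verdict applies.
    Fail-closed: a missing or uncertain evaluation blocks the action."""
    if not policy_verdicts or any(v is None for v in policy_verdicts):
        return Verdict.BLOCK
    return max(policy_verdicts)
```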
Audit & Evidence
Memory
Every action, evaluation, approval, and outcome is recorded in a SHA-256 hash-chained evidence ledger. Trust scores update based on outcomes. The system learns from every interaction, strengthening proven patterns and flagging anomalies.
- SHA-256 hash-chained evidence artifacts
- Merkle tree verification for bulk integrity checks
- Bayesian trust score updates after every action
- Pattern learning for anomaly detection
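A minimal sketch of appending to a hash-chained ledger, assuming JSON-serialisable records and a "sha256:genesis" sentinel for the first entry (both assumptions, not the real schema):

```python
import hashlib
import json

def append_evidence(ledger, record):
    """Append a record whose SHA-256 hash covers the previous record's
    hash, so any later edit to an earlier record breaks the chain."""
    previous_hash = ledger[-1]["evidence_hash"] if ledger else "sha256:genesis"
    body = dict(record, previous_hash=previous_hash)
    # sort_keys makes the serialisation, and therefore the hash, deterministic.
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    entry = dict(body, evidence_hash=f"sha256:{digest}")
    ledger.append(entry)
    return entry
```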
The 7 Laws of AI Governance
These laws are absolute. They cannot be overridden, relaxed, or bypassed by any actor, at any trust level, under any circumstance. A violation produces infinite cost, making the action impossible to execute.
Audit Conservation
“No meaningful action without an auditable trace.”
Every action must have sufficient context (tenant, actor, action name) to produce a complete audit trail. Actions without audit metadata are blocked.
Budget Conservation
“Every action consumes budget and must be attributed.”
Cost-bearing actions are checked against remaining budget. When budget is exhausted, all non-essential operations halt. The system never overspends.
Evidence Gravity
“Confidence rises with authoritative evidence.”
Actions requiring confidence thresholds must have sufficient evidence. Low-confidence actions are escalated for human review rather than executed blindly.
Least Action Path
“Prefer the simplest route that satisfies constraints.”
The governance engine evaluates whether a simpler, lower-risk path exists. Unnecessary complexity is flagged before execution.
Safety Dominance
“Safety concerns override all other considerations.”
High-risk and irreversible actions trigger mandatory escalation. Risk level, irreversibility, and financial impact are evaluated. HARD-tier actions require explicit human approval.
Monotonic Truth
“Trust changes must be explainable and monotonic.”
Trust scores are computed from evidence artifacts. Changes must be gradual and traceable. Sudden unexplained trust jumps are blocked and investigated.
Bounded Autonomy
“Every actor operates within defined boundaries.”
Agents and users have trust-based autonomy limits. Actions exceeding an actor's trust level require approval from a higher-trust actor. No actor has unlimited autonomy.
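The infinite-cost mechanism mentioned earlier (a law violation makes an action impossible to route) can be sketched like this; route_cost and choose_action are hypothetical names, not the real routing function:

```python
import math

def route_cost(base_cost, violations):
    """Any law violation makes the routing cost infinite, so no
    optimiser can ever select the violating path."""
    return math.inf if violations > 0 else base_cost

def choose_action(candidates):
    """Pick the cheapest executable candidate by cost;
    returns None when every candidate path is blocked."""
    viable = {name: cost for name, cost in candidates.items() if math.isfinite(cost)}
    return min(viable, key=viable.get) if viable else None
```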
Action Tiers
Every action in ARQERA is classified into one of three tiers based on its impact, reversibility, and risk. The tier determines whether the action executes immediately, provides an undo window, or requires explicit human approval.
AUTO
Automatic Execution
Read operations, searches, report generation, and analytics queries execute instantly. These actions are non-destructive, fully reversible, and pose no risk to data integrity.
Behaviour
Immediate execution, no approval needed
Examples
- View dashboards and reports
- Search records and knowledge base
- Generate analytics insights
- Answer questions from the knowledge graph
- List users, agents, and integrations
SOFT
30-Second Undo Window
Non-destructive writes and notifications execute immediately but provide a 30-second window to undo. If the action is undone, it is as if it never happened. Evidence records both the action and the undo.
Behaviour
Execute with 30-second undo window
Examples
- Send internal notifications
- Create draft documents
- Update non-critical settings
- Create support tickets
- Modify workflow configurations
HARD
Explicit Human Approval
Irreversible actions, financial transactions, and data deletions require explicit human approval before execution. The action is queued, the approver is notified via their preferred channel, and the action only executes after approval is recorded.
Behaviour
Requires explicit human approval before execution
Examples
- Send external emails on behalf of the organisation
- Delete data permanently
- Deploy code to production
- Execute financial transactions
- Modify governance policies
- Deactivate user accounts
Action tier classification in API responses
// Every action response includes the governance evaluation
{
"action": "email.send",
"tier": "HARD",
"status": "pending_approval",
"governance": {
"layers_passed": 5,
"laws_checked": 7,
"violations": 0,
"verdict": "escalate",
"reason": "Action classified as HARD tier — requires human approval"
},
"approval": {
"id": "appr_8f3k2m",
"required_approvers": 1,
"notified_via": ["slack", "email"],
"expires_at": "2026-02-20T12:00:00Z"
},
"evidence_hash": "sha256:a1b2c3d4e5..."
}

Approval Flows
HARD-tier actions follow a structured approval workflow. Approvers receive full context including the governance evaluation, policy checks, and evidence chain. Approvals are time-bound and support multi-approver consensus for high-risk actions.
Step 1
Request
An AI agent or user initiates a HARD-tier action.
Step 2
Governance Evaluation
The 5-layer governance model evaluates the request. Dual-brain AI classifies intent.
Step 3
Pending Approval
The action is queued. Approvers are notified via Slack, Teams, email, or in-app.
Step 4
Approve or Reject
Authorised approvers review the action with full context and evidence. Multi-approver support for high-risk actions.
Step 5
Execute or Cancel
Approved actions execute with full evidence emission. Rejected actions are cancelled with reason recorded.
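The five steps above can be sketched as a small state machine; the state and event names are illustrative, not the real API:

```python
# Hypothetical lifecycle of a HARD-tier action, mirroring steps 1-5.
TRANSITIONS = {
    ("requested", "evaluate"): "pending_approval",   # governance evaluation
    ("pending_approval", "approve"): "executing",    # approval recorded
    ("pending_approval", "reject"): "cancelled",     # rejection with reason
    ("pending_approval", "expire"): "cancelled",     # time-bound approval lapsed
    ("executing", "complete"): "done",               # evidence emitted
}

def advance(state, event):
    """Apply one event; anything not explicitly allowed is rejected."""
    key = (state, event)
    if key not in TRANSITIONS:
        raise ValueError(f"illegal transition: {key}")
    return TRANSITIONS[key]
```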
Multi-Approver
High-risk actions can require consensus from multiple approvers. Configure per action type or per policy.
Time-Bound
Approvals expire after a configurable window. Expired approvals are automatically cancelled with evidence.
Multi-Channel Notifications
Approvers are notified via Slack, Microsoft Teams, email, or in-app. Configure per user preference.
Full Context
Every approval request includes the action details, governance evaluation, policy check results, and actor trust score.
Human-in-the-Loop Triggers
Beyond tier classification, these conditions always require human approval
| Condition | Requirement |
|---|---|
| Action is HARD tier | Explicit approval required before execution |
| Action affects PII or PHI | Approval required regardless of tier |
| Actor trust score below 50 | All non-AUTO actions require approval |
| AI confidence below threshold | Escalated to human for decision |
| Action crosses tenant boundary | Approval from both tenant admins |
| Financial impact above threshold | Finance role approval required |
| Policy change of any kind | CEO or Compliance role approval |
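A sketch of how the triggers in the table might be evaluated in code. Every field name here is an assumption for illustration, not the actual action schema:

```python
def requires_human(action):
    """True if any always-escalate condition applies, regardless of
    how the action would otherwise be tiered."""
    return (
        action.get("tier") == "HARD"
        or action.get("touches_pii", False)
        or (action.get("actor_trust", 100) < 50 and action.get("tier") != "AUTO")
        or action.get("ai_confidence", 1.0) < action.get("confidence_threshold", 0.8)
        or action.get("cross_tenant", False)
        or action.get("financial_impact_usd", 0) > action.get("financial_threshold_usd", 10_000)
        or action.get("is_policy_change", False)
    )
```

Note the conditions combine with "or": a single match is enough to force escalation, consistent with fail-closed design.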
Evidence Chain
Every action in ARQERA emits a signed evidence artifact. Artifacts are SHA-256 hash-chained: each record includes the hash of the previous record, creating a tamper-evident, immutable audit trail. If any record is altered, the chain breaks and the tampering is immediately detectable.
Hash-Chained Evidence Ledger
Tamper-evident, immutable, and cryptographically verifiable
Evidence #N-1: user.login (hash sha256:a1b2c3..., previous sha256:9f8e7d...)
Evidence #N: report.generated (hash sha256:d4e5f6..., previous sha256:a1b2c3...)
Evidence #N+1: policy.evaluated (hash sha256:g7h8i9..., previous sha256:d4e5f6...)
Each evidence artifact's hash includes the previous artifact's hash, forming a tamper-evident chain. Merkle tree verification is available for bulk integrity checks.
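The tamper-evidence property is straightforward to sketch: re-derive each record's hash and check the link to its predecessor, and any alteration surfaces immediately. Field names and the genesis sentinel below are assumptions:

```python
import hashlib
import json

def verify_chain(ledger):
    """Walk the ledger in order, confirming both the link to the
    previous record and each record's own recomputed hash."""
    previous = "sha256:genesis"  # assumed sentinel before the first record
    for entry in ledger:
        if entry["previous_hash"] != previous:
            return False  # chain link broken
        body = {k: v for k, v in entry.items() if k != "evidence_hash"}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["evidence_hash"] != f"sha256:{digest}":
            return False  # record contents were altered after the fact
        previous = entry["evidence_hash"]
    return True
```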
Evidence Artifact Fields
Every audit trail entry contains these fields for full reconstructability
| Field | Description |
|---|---|
| timestamp | ISO 8601 timestamp of the action |
| actor_id | The user or agent that performed the action |
| actor_type | user, agent, or system |
| tenant_id | The tenant context |
| action | The action type (e.g., policy.evaluated, user.created) |
| resource_type | The type of resource affected |
| resource_id | The specific resource affected |
| policy_evaluation | Which policies were evaluated and the result |
| approval_id | Reference to the approval record (if applicable) |
| evidence_hash | SHA-256 hash of this entry |
| previous_hash | SHA-256 hash of the preceding entry (chain link) |
Immutable Records
Evidence artifacts cannot be modified or deleted before their retention period expires. Write-once, read-many.
Export Formats
Export evidence in JSON and CSV formats. Export requests are themselves evidenced for a complete chain of custody.
Auditor Portal
External auditors get scoped read-only access to evidence for specific frameworks and time ranges. Business tier and above.
Retention Tiers
Evidence retention scales with your plan: 30 days (Free), 1 year (Team), 3 years (Business), or custom (Enterprise).
Policy Configuration
ARQERA's governance is not a black box. Every policy is visible, configurable, and auditable. Start with battle-tested defaults, then customise to match your organisation's risk profile and regulatory obligations.
Default Policies
Every tenant starts with ARQERA's battle-tested policy templates. The 7 Laws are enforced from day one. Industry-specific policies are composed automatically based on your sector and jurisdiction.
Custom Policies
Define your own governance rules with configurable triggers, enforcement actions, confidence thresholds, and failure behaviours. Custom policies layer on top of defaults and can tighten controls, but never weaken regulatory requirements.
Framework Mapping
Map every policy to compliance framework controls. See which SOC 2, GDPR, HIPAA, or EU AI Act controls are covered by which policies, and identify gaps before auditors do.
Policy Visibility
Policies are visible in Settings, in action feedback when an action is blocked, in the compliance dashboard, through Ore AI chat, in API responses, and in the audit trail. Nothing is hidden.
Violation Handling
Policy violations are handled in proportion to their severity, from least to most severe:
- Action proceeds. Warning logged. Compliance role notified.
- Action proceeds with a visible warning. Evidence emitted. Compliance notified.
- Action prevented. Actor receives an explanation. Approval workflow initiated if applicable.
- Action prevented. Session flagged. Security role alerted immediately.
What You Can Customise
- Approval thresholds (above regulatory minimums)
- Action tier classification (upgrade only, e.g. AUTO to SOFT)
- Budget limits per department, agent, and period
- Data retention periods (extend beyond defaults)
- DLP rules (add to base set)
- Custom policy rules and triggers
- Trust score thresholds (within platform range)
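The upgrade-only rule for tier overrides (AUTO to SOFT is allowed, SOFT to AUTO is not) reduces to keeping the stricter of the default and requested tiers. A minimal sketch, with assumed names:

```python
# Tiers ordered least to most restrictive.
TIER_ORDER = {"AUTO": 0, "SOFT": 1, "HARD": 2}

def apply_tier_override(default_tier, requested_tier):
    """Custom configuration may only tighten a tier, never relax it:
    the stricter of the platform default and the request wins."""
    return max(default_tier, requested_tier, key=TIER_ORDER.get)
```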
What Cannot Change
- The 7 Laws of AI Governance are platform invariants
- Evidence emission cannot be disabled
- Hash-chain integrity cannot be bypassed
- Fail-closed behaviour is always enforced
- Tenant isolation cannot be self-enabled
Programmatic policy management via API
// Create a custom policy rule
POST /api/v1/policies/rules
{
"name": "Financial approval threshold",
"trigger": {
"action_pattern": "payment.*",
"conditions": { "amount_usd": { "gt": 10000 } }
},
"enforcement": "require_approval",
"approvers": ["finance_lead", "ceo"],
"failure_action": "block",
"evidence_required": true
}
// Response
{
"id": "rule_7k9m2p",
"status": "active",
"created_at": "2026-02-19T10:30:00Z",
"evidence_hash": "sha256:f1a2b3..."
}

Compliance Frameworks
ARQERA maps governance controls to major compliance frameworks automatically. Evidence is collected continuously, not cobbled together before an audit. Coverage gaps are identified in real time so you can close them before auditors find them.
SOC 2 Type II
Global
Automated control mapping across trust service criteria. Continuous evidence collection for access control, change management, monitoring, and risk assessment.
Controls: CC6.1 (access security), CC6.3 (access removal), CC7.2 (monitoring), CC8.1 (change management)
GDPR
EU / EEA / UK
Full data subject rights automation. Consent management, purpose limitation, data minimisation, and automated breach notification within regulatory timeframes.
Controls: Art. 15 (access), Art. 16 (rectification), Art. 17 (erasure), Art. 20 (portability)
HIPAA
United States
PHI handling with access controls, audit trails, and encryption at rest and in transit. Automated evidence for administrative, physical, and technical safeguards.
Controls: Administrative safeguards, Technical safeguards, Audit controls, Access controls
EU AI Act
EU / EEA
AI risk classification with mandatory human oversight for high-risk systems. Transparency requirements, bias monitoring, and conformity assessment support.
Controls: Risk classification, Human oversight, Transparency, Bias monitoring
ISO 27001
Global
Information security management system controls with automated evidence. Continuous monitoring against Annex A controls.
Controls: Annex A controls, Risk assessment, Continuous monitoring, Incident management
Custom Frameworks
Any
Define your own compliance framework with custom controls, evidence sources, and review cadences. Map to any industry-specific standard or internal policy.
Controls: Custom controls, Custom evidence mapping, Custom review cadence
Continuous Gap Analysis
Real-time compliance coverage monitoring across all active frameworks. Each control falls into one of three coverage states:
- The control has automated evidence collection and recent evidence exists.
- The control has evidence collection, but the evidence is stale and needs a refresh.
- The control has no evidence source mapped, or no evidence has been collected.
Gap analysis is available in the Governance Space compliance dashboard and can be exported as a report for auditor review.
Trust Scoring
ARQERA uses a Bayesian trust model where every actor (human or AI) earns trust through verifiable evidence. Trust is never manually assigned. High trust unlocks more autonomous operations. Low trust increases governance scrutiny. The system learns from every interaction.
Trust Tiers
Trust scores determine the level of autonomy and scrutiny applied to each actor. From highest to lowest trust tier:
- Maximum autonomy. Most actions execute as AUTO.
- Standard operation. Normal tier classification applies.
- Elevated scrutiny. Some AUTO actions require SOFT approval.
- All non-read actions require human approval.
What Affects Trust
Successful Actions
Clean action completions with no policy violations increase trust.
Clean Audits
Passing compliance checks and framework evaluations strengthen trust.
Consistent Behaviour
Actions that match historical patterns for the actor build confidence.
Policy Violations
Actions blocked by policy enforcement reduce trust proportionally to severity.
Anomalous Behaviour
Actions that deviate significantly from established patterns trigger trust decay.
Rejected Approvals
Repeatedly submitting actions that get rejected by approvers reduces trust.
Trust-Based Automation
The more an actor proves itself, the more autonomy it earns. From highest to lowest trust tier:
- Most SOFT actions promoted to AUTO. Reduced approval requirements.
- Standard operation. Normal tier classification applies.
- Elevated scrutiny. Some AUTO actions demoted to SOFT.
- Maximum scrutiny. All non-read actions require human approval.
Key principle: Trust changes only through evidence (the Monotonic Truth law). No manual override, no administrative adjustment. An AI agent that performs 500 clean actions earns more autonomy. An agent that triggers policy violations loses it. The system self-calibrates.
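One common way to realise evidence-only, gradual trust updates is a Beta-Bernoulli posterior mean. This sketch illustrates the idea and is not necessarily ARQERA's exact model:

```python
def trust_score(successes, failures):
    """Posterior mean of a Beta(1, 1) prior after observing clean
    completions (successes) and violations (failures), scaled to 0-100.
    Trust moves only when new evidence arrives, and each single
    observation moves it gradually -- there is no manual override."""
    return 100.0 * (successes + 1) / (successes + failures + 2)
```

With no evidence the score sits at a neutral 50; a long run of clean actions pushes it towards 100, and every recorded violation pulls it back down.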
Ready to govern your AI operations?
Request early access and deploy enterprise-grade governance in minutes. Every action evidenced. Every decision auditable.