CLARIXO Behavior Evidence Export API Integration Manual
The implementation document for turning user-facing AI behavior into stable, exportable evidence records.
Turn the protocol into a working implementation path
This manual explains how to choose an evidence-event boundary, map required fields, normalize a payload, generate evidence_hash, submit records, retrieve them later, verify that the exported result is operationally useful, and roll the integration out in a stable way. It is the implementation document for evidence export only, not a runtime control or enforcement manual.
Define one evidence event first
Start with one completed user-facing AI behavior and one stable event boundary rather than trying to export every internal intermediate state.
Close the first working loop
The minimum integration is complete when you can submit one evidence record, query it back, and confirm that a human reviewer can understand what happened.
Preserve one meaningful behavior in a stable evidence form
A good first integration is not the one that exports the most data. It is the one that exports one stable, reviewable evidence record with the least ambiguity and enough context for later support, reconstruction, and audit work.
One completed user-facing output
In most systems, the best starting point is one assistant answer, one generated explanation, one recommendation batch, one moderated output, or one operator-assisted final response.
Do not confuse internal steps with evidence events
Avoid making every helper call an evidence event unless that call is itself the meaningful user-facing behavior being reviewed later.
Export after the behavior happens
Complete the user-facing AI behavior
Let the enterprise system finish the user-visible response or result before exporting an evidence record, so the delivered path is already known.
Normalize into an evidence payload
Map the event into stable fields such as project scope, application scope, event identity, summaries, provider, route path, fallback usage, outcome, and metadata.
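As a sketch of this normalization step, the function below maps one completed chat turn into the stable fields named above. The raw input keys on the right-hand side are assumptions about a hypothetical source system, and only a representative subset of the payload fields is shown.

```python
def normalize_event(raw: dict) -> dict:
    """Map one completed upstream turn (hypothetical shape) into the
    stable evidence fields. The evidence field names follow this manual;
    the raw keys are assumptions about the source system."""
    return {
        "schema_version": "v1",
        "project_id": raw["project"],
        "application_id": raw["app"],
        "event_id": raw["turn_id"],
        "session_id": raw["session"],
        "recorded_at": raw["finished_at"],
        "input_summary": raw["prompt_summary"],
        "output_summary": raw["reply_summary"],
        "provider": raw.get("provider", "enterprise_app"),
        "model": raw.get("model", ""),
        "route_path": raw.get("route", "direct"),
        "fallback_used": bool(raw.get("fallback")),
        "final_status": "delivered" if raw.get("delivered") else "failed",
        "metadata": {"locale": raw.get("locale", "en")},
    }
```

Keeping this mapping in one function gives the export path a single place to evolve when the source system changes.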
Attach context admissibility metadata
When the source system can identify input binding, runtime source, runtime scope, and context pollution signals, attach a review-strengthening metadata.context_admissibility summary before export.
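One way to assemble that summary, assuming the source system exposes a per-slice admissibility state, is a small builder like the one below. The slice shape and the non-admissible overall_state value are illustrative assumptions; the field names follow the payload example in this manual.

```python
def context_admissibility_summary(slices: list) -> dict:
    """Build a compact shadow-mode context_admissibility summary from
    per-slice states. Slice shape ({"state": ..., "pollution_flags": [...]})
    is an assumption about the source system."""
    counts = {}
    flags = []
    for s in slices:
        state = s.get("state", "needs_review")
        counts[state] = counts.get(state, 0) + 1
        flags.extend(s.get("pollution_flags", []))
    return {
        "schema_version": "context-admissibility-v1",
        "mode": "shadow",
        # "needs_review_shadow" is an invented non-admissible marker;
        # substitute whatever overall states your deployment defines.
        "overall_state": "admissible_shadow" if not flags else "needs_review_shadow",
        "admitted_count": counts.get("admitted", 0),
        "excluded_count": counts.get("excluded", 0),
        "downgraded_count": counts.get("downgraded", 0),
        "needs_review_count": counts.get("needs_review", 0),
        "pollution_flags": sorted(set(flags)),
        # A real builder would also emit clean_context_hash over the
        # admitted slices before export.
        "recompose_strategy": "none_shadow_only",
    }
```

Emitting counts and flags rather than raw slices keeps the summary review-strengthening without turning metadata into a context dump.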
Generate evidence_hash
Generate a SHA-256 evidence hash from canonicalized core fields for practical record-integrity verification.
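A minimal sketch of that step, assuming canonical JSON (sorted keys, no whitespace, UTF-8 bytes) over the core fields. The protocol defines the actual core-field set; the selection below is an assumption for illustration.

```python
import hashlib
import json

# Assumed core-field selection; replace with the protocol's actual list.
CORE_FIELDS = [
    "schema_version", "project_id", "application_id", "event_id",
    "session_id", "recorded_at", "input_summary", "output_summary",
    "provider", "model", "route_path", "fallback_used", "final_status",
]

def evidence_hash(event: dict) -> str:
    """Canonicalize the core fields and return a sha256:<hex> digest."""
    core = {k: event.get(k) for k in CORE_FIELDS}
    # Canonical form: sorted keys, no whitespace, so the same event
    # always yields the same bytes regardless of dict ordering.
    canonical = json.dumps(core, sort_keys=True,
                           separators=(",", ":"), ensure_ascii=False)
    return "sha256:" + hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

Because the canonical form sorts keys, two exports of the same event hash identically even if upstream field order differs.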
Submit and retrieve
Write the evidence event to CLARIXO, then query it back by project, application, session, route, event, or time window to validate real review usefulness.
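A hedged sketch of that write-and-query round trip using only the standard library. The base URL is a placeholder, and the retrieval parameter names are assumptions mirroring the filters listed above; take the real endpoint and trial key from your CLARIXO onboarding material.

```python
import json
from urllib.parse import urlencode
from urllib.request import Request, urlopen

# Placeholder base URL; not a real CLARIXO endpoint.
BASE_URL = "https://clarixo.example.com"

def export_headers(trial_key: str) -> dict:
    # Dropping the bearer header is a common first-write failure.
    return {"Authorization": f"Bearer {trial_key}",
            "Content-Type": "application/json"}

def events_query(project_id: str, **filters: str) -> str:
    # Assumed filter names: application_id, session_id, route,
    # event_id, recorded_from.
    params = {"project_id": project_id,
              **{k: v for k, v in filters.items() if v}}
    return "/v1/evidence/events?" + urlencode(params)

def submit_event(event: dict, trial_key: str) -> dict:
    """POST one evidence record and return the parsed server response."""
    req = Request(f"{BASE_URL}/v1/evidence/events",
                  data=json.dumps(event).encode("utf-8"),
                  headers=export_headers(trial_key), method="POST")
    with urlopen(req) as resp:
        return json.load(resp)
```

After a successful submit, issuing a GET on the events_query path for the same project and event id closes the first working loop.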
Map each required field to a real upstream origin
Before writing the export path, map every required evidence field to a real data source in the application. This keeps exported records stable even while the source system continues to evolve.
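One lightweight way to keep that mapping honest is to record it as data and check coverage before writing the export path. The upstream origins named below are illustrative assumptions, not prescribed sources.

```python
# Hypothetical map from evidence field to its real upstream origin;
# the origins are examples, not CLARIXO requirements.
FIELD_ORIGINS = {
    "project_id": "deployment config",
    "application_id": "service registry entry",
    "event_id": "chat turn id",
    "session_id": "conversation id",
    "recorded_at": "response completion timestamp",
    "input_summary": "prompt summarizer",
    "output_summary": "reply summarizer",
    "provider": "model gateway",
    "route_path": "router decision log",
    "fallback_used": "router decision log",
    "final_status": "delivery tracker",
}

def unmapped_fields(required: list) -> list:
    """Required fields still lacking a real upstream origin; resolve
    these before writing the export path."""
    return [f for f in required if f not in FIELD_ORIGINS]
```

Running the coverage check in CI keeps the exported record stable while the source system continues to evolve.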
Keep the first payload stable and reviewable
{
  "schema_version": "v1",
  "project_id": "trial_proj_cec59281f5cfc093",
  "application_id": "tgtracing:assistant:customer_chat",
  "event_id": "evt_20260408_0001",
  "session_id": "sess_7c1f3b9d",
  "recorded_at": "2026-04-08T17:58:11Z",
  "user_ref": "user_anon_1024",
  "input_summary": "User asked whether a shared tracking link could reveal location without consent.",
  "output_summary": "Assistant explained that consent-based interaction is required and did not claim passive tracking.",
  "provider": "enterprise_app",
  "model": "gpt-4.1",
  "route_path": "direct",
  "fallback_used": false,
  "guard_flags": ["policy_boundary_explained"],
  "confidence_signals": {
    "answer_mode": "policy-aligned",
    "uncertainty_level": "low"
  },
  "operator_action": "none",
  "final_status": "delivered",
  "metadata": {
    "locale": "en",
    "record_origin": "manual_post",
    "runtime_scope": "openai_free",
    "runtime_source": "openai",
    "context_admissibility": {
      "schema_version": "context-admissibility-v1",
      "mode": "shadow",
      "overall_state": "admissible_shadow",
      "admitted_count": 8,
      "excluded_count": 0,
      "downgraded_count": 0,
      "needs_review_count": 0,
      "pollution_flags": [],
      "clean_context_hash": "sha256:examplecontexthexvalue",
      "recompose_strategy": "none_shadow_only"
    }
  },
  "evidence_hash": "sha256:examplehexvalue"
}
Normalize before export, not after confusion appears
Generate evidence_hash from the protocol core fields
When generating the hash, canonicalize the protocol core fields deterministically, use none for an empty guard_flags list, and emit the digest in the sha256:<hex> form.
Do not stop at write success
A first integration is only truly usable when the team can write a known record, retrieve it through practical filters, and confirm that another human can understand what happened without relying on private internal context. If a write fails with a project mismatch, verify both the submitted project_id and the project bound to the supplied trial key.
A retrieved record can carry server-side review fields such as closure_state, responsibility_declared, threshold_status, trace_reference, authority_consistency_status, handoff_eligible, evidentiary_strength, degradation_reason, trace_continuity_status, trace_reconstruction_status, attribution_validity, and attribution_strength.
{
  "ok": true,
  "boundary_status": "live",
  "storage_status": "attached",
  "credential_gate": "trial_bearer_required",
  "authorized_project_id": "trial_proj_cec59281f5cfc093",
  "event_count": 1,
  "events": [
    {
      "project_id": "trial_proj_cec59281f5cfc093",
      "application_id": "tgtracing:assistant:customer_chat",
      "event_id": "evt_first_write_001",
      "session_id": "sess_first_write",
      "evidence_id": "evi_example_001",
      "closure_state": "behavior_complete",
      "responsibility_declared": false,
      "threshold_status": "not_met",
      "trace_reference": "tgtrace:sess_first_write:evt_first_write_001",
      "verification_url": "/?action=evidence-events&verify=1&evidence_id=evi_example_001"
    }
  ]
}
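Given a retrieval response shaped like the example above, a reviewer-side sketch can flag events whose server-side closure or threshold fields suggest follow-up. The field values checked are taken from the example; the flagging rule itself is an assumption, not a documented review policy.

```python
def needs_followup(response: dict) -> list:
    """Return event ids whose server-side review fields indicate the
    record is written but review is not yet satisfied. Field names
    follow the retrieval example; the rule is an assumption."""
    flagged = []
    for ev in response.get("events", []):
        if (ev.get("closure_state") != "behavior_complete"
                or ev.get("threshold_status") == "not_met"):
            flagged.append(ev.get("event_id"))
    return flagged
```

Running this over each retrieval window keeps the team honest about whether the exported records are actually reviewable, not just stored.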
Separate exact evidence review from internal observation and triage
After completing Evidence API integration, a connected product should separate its internal evidence-facing surfaces into two distinct pages: Evidence Records and Internal Evidence Observation Window. These two surfaces should not be merged into a single page.
GET /v1/evidence/events?project_id=<project_id>&application_id=<application_id>&recorded_from=<iso8601>
Use metadata as a stable extension field, not a raw dump
Use metadata.context_admissibility to carry a compact review-strengthening summary of input binding, runtime source, runtime scope, pollution flags, clean context hash, and recomposition posture. Keep it as a summary layer, not a raw slice dump.
Preserve the difference between behavior success and export success
Export what strengthens evidence, not everything available
Start small, stabilize, then widen
Choose one surface first
Start with one application surface or one route family rather than widening across every product path at once.
Lock one event definition
Define what one evidence event means, then keep that meaning stable through the first rollout and verification cycle.
Normalize and verify
Build one payload-normalization layer, generate evidence_hash, export records, and validate retrieval before widening scope.
Expand deliberately
Only after the first surface is stable should the team expand to more routes, more applications, or more operator-assisted workflows.
Check the minimum failure signals first
When the first write does not close cleanly, do not guess. Check the authentication mode, project binding, and write result first. The fastest path is to identify which layer rejected the call before widening the investigation.
Confirm the request carries Authorization: Bearer <trial_key> and that the write path is not dropping headers before submission. A project-mismatch rejection means the submitted project_id does not match the project bound to the supplied trial key. Check both values directly. In mismatch handling, the server may surface the authorized project bound to the key, so do not assume the logged project_id is always the raw submitted event project.
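The layer-first triage above can be sketched as a small classifier. The status codes and the authorized_project_id mismatch signal are conventional assumptions rather than documented CLARIXO responses.

```python
def classify_write_failure(status: int, submitted_project: str,
                           body: dict) -> str:
    """Rough first-pass triage of a failed write. Status-code meanings
    and body keys are conventional assumptions, not documented
    CLARIXO behavior."""
    if status == 401:
        return "auth: bearer header missing or trial key invalid"
    authorized = body.get("authorized_project_id")
    if authorized and authorized != submitted_project:
        # The server surfaced the project bound to the key; compare it
        # against what was actually submitted, not against logs.
        return "project_mismatch: key is bound to " + authorized
    if status >= 500:
        return "server: retry with backoff, then escalate"
    return "payload: re-check required fields and evidence_hash"
```

Identifying the rejecting layer first keeps the investigation narrow before anyone widens it to the payload or the pipeline.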