Integration Manual

This manual teaches implementation teams how to integrate CLARIXO correctly as a runtime control and visibility layer, verify the full chain, and avoid common integration mistakes.

The Contract Reference defines exact fields. The Skeleton Pack provides implementation-ready examples. This manual turns both into a practical integration flow: boundary setup, evidence-first verification, chain preservation, readiness checks, and operator-facing runtime visibility across runtime, queue, and AgentOps-style review surfaces.

Before You Integrate

Map your host system into the required runtime roles first

Most integration confusion starts before the first request is sent. Teams know their file names, but not which module is responsible for which CLARIXO runtime role. Start by mapping ownership.

Role 1

Assistant Surface Module

The user-facing entry point that collects the message and initiates the CLARIXO request.

Role 2

Chat Execution Endpoint

The backend execution path that validates shape, preserves metadata, dispatches to CLARIXO, and returns a normalized response.

Role 3

Operations Aggregation Endpoint

The internal endpoint that feeds overview metrics, queues, and breakdowns for operator-facing observation pages, including the per-item event identifiers needed for precise runtime-detail deep links.

Role 4

Runtime Diagnostic Page

The internal detail page that exposes exact event-level runtime telemetry, degraded causes, and diagnostic context for operators after they open Runtime Detail from a specific queue item.

Rule: do not start by copying field names into random files. First decide which host module owns each role, then align the contract to those real ownership boundaries. The goal is not to bolt on a model endpoint, but to integrate CLARIXO as a runtime control and visibility layer between the host application and external AI providers.
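The ownership mapping can be written down before any code exists. A minimal sketch, assuming hypothetical module paths for an example host (the role names follow this manual; the paths are placeholders for your real modules):

```python
# Hypothetical ownership map. Role names follow the CLARIXO runtime roles above;
# the module paths are placeholders for your actual host application.
CLARIXO_ROLE_OWNERS = {
    "assistant_surface": "web/src/components/AssistantPanel",
    "chat_execution": "server/routes/chat",
    "ops_aggregation": "server/routes/ops_overview",
    "runtime_diagnostic": "admin/pages/runtime_detail",
}

def unowned_roles(owners: dict) -> list:
    """Return the roles that have no concrete host module assigned yet."""
    return [role for role, module in owners.items() if not module]
```

Running `unowned_roles` against your own map makes implicit ownership gaps explicit before integration work starts.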
Minimum Setup Checklist

What must exist before a production integration is considered real

Integration identity
Define a stable source, module, scene, version, and environment. These values should not drift across requests.
Request shape
Ensure the host sends stable top-level fields such as action, message, integration, and context.
Metadata continuity
Preserve login state, channel, surface, session, page, and approved context fields from the assistant surface into the execution endpoint and downstream telemetry.
Evaluation / paid continuity
If the integration uses the self-serve evaluation path, make sure the API-facing access flow is clearly understood and that continuing the project into paid usage does not require rebuilding the integration from scratch.
Runtime visibility outputs
Plan where operator-facing fields such as integration_source, provider, ops_final_status, ops_needs_review, and a stable per-item event identifier will be surfaced. The observation layer should support overview, queue-detail, breakdown, and exact runtime-detail reading so operators can open Runtime Detail from a queue item and inspect the corresponding event-level payload. These surfaces form the basis of AgentOps-style monitoring and operator review workflows after integration is live.
Degraded handling
Decide how the host will render degraded but valid outcomes without pretending they are healthy.
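The identity item in the checklist above can be spot-checked mechanically. A minimal sketch, assuming captured requests are available as dicts shaped like the contract in the next section:

```python
# The five identity fields that must not drift across requests.
REQUIRED_IDENTITY = ("source", "module", "scene", "version", "environment")

def identity_is_stable(requests: list) -> bool:
    """True if every request carries the same, complete integration identity."""
    identities = set()
    for req in requests:
        integration = req.get("integration", {})
        # A missing or empty identity field fails the check immediately.
        if any(not integration.get(key) for key in REQUIRED_IDENTITY):
            return False
        identities.add(tuple(integration[key] for key in REQUIRED_IDENTITY))
    # Stable means exactly one distinct identity across the sample.
    return len(identities) == 1
```

Run it over a sample of real traffic rather than a single hand-built request, since drift usually appears between surfaces, not within one.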
Request Shape

Use the current top-level request contract

Put host attribution inside integration, and put request context in the top-level context object. Do not place lang or general context fields inside integration.

{
  "action": "message",
  "message": "Help the operator review this case.",
  "integration": {
    "source": "example_app",
    "module": "assistant",
    "scene": "customer_chat",
    "version": "v1",
    "environment": "production"
  },
  "context": {
    "lang": "en",
    "session_id": "host-session-id",
    "page": "assistant_page",
    "project_id": "trial_proj_example"
  }
}
Where context goes
Use top-level context for approved context keys such as lang, session_id, page, and project_id.
Where integration identity goes
Use integration.source, integration.module, integration.scene, integration.version, and integration.environment to describe the host surface and execution ownership.
integration_mode
integration.mode may be omitted from the request. CLARIXO normalizes integration.source = clarixo to first_party and any other non-empty integration.source to third_party.
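The normalization rule can be mirrored locally for testing. This is a sketch of the documented default as stated above, not CLARIXO's actual server-side implementation:

```python
def normalize_mode(integration: dict) -> dict:
    """Mirror the documented default: clarixo -> first_party,
    any other non-empty source -> third_party. An explicit mode is kept."""
    out = dict(integration)
    if "mode" not in out:
        source = out.get("source", "")
        if source == "clarixo":
            out["mode"] = "first_party"
        elif source:
            out["mode"] = "third_party"
    return out
```

A local mirror like this is useful in host-side tests that assert what attribution CLARIXO will record for a given request.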
Common Failure Patterns

Where production integrations usually go wrong

Failure 1

Context gets overwritten mid-path

The frontend sends the right context, but a later merge replaces it with generic defaults. The request still looks valid, but attribution is wrong.
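In dict-merging hosts this failure is often a one-line merge-order bug: whichever side is merged last wins. A minimal sketch of the wrong and right orderings (function names are illustrative):

```python
def merge_context_buggy(request_context: dict, defaults: dict) -> dict:
    # BUG: defaults are merged last, so they silently overwrite real request values.
    return {**request_context, **defaults}

def merge_context_correct(request_context: dict, defaults: dict) -> dict:
    # Defaults only fill gaps; request-supplied context always wins.
    return {**defaults, **request_context}
```

Both versions produce a request that "looks valid", which is exactly why this failure survives review: only the attribution values are wrong.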

Failure 2

source / module / scene stop matching reality

Values are hard-coded once, then the product evolves and the metadata no longer represents the actual host role or user surface.

Failure 3

Operator pages have no trustworthy telemetry

The execution path works, but the overview page and runtime detail page cannot show stable final status, review state, or provider attribution.

Failure 4

Degraded success is mislabeled as healthy

A fallback reply still reaches the user, but the host UI hides that it was fallback, low-confidence, or watch-state output.
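A render path that keeps degraded outcomes visible can branch on the final status before choosing a label. A sketch, assuming ops_final_status carries values like fallback or low_confidence (the exact status vocabulary is an assumption; check the Contract Reference for the real values):

```python
def render_state(response: dict) -> str:
    """Choose an operator-visible label instead of flattening everything to 'ok'."""
    status = response.get("ops_final_status")
    if status == "success":
        return "healthy"
    if status in ("fallback", "low_confidence", "watch"):
        # A valid reply, but visibly degraded -- never rendered as healthy.
        return f"degraded:{status}"
    return "failed"
```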

Failure 5

Protocol failure is collapsed into empty success

The upstream response shape is malformed, but the host swallows the failure and displays a blank or fake-normal reply.
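Keeping protocol failure explicit means validating the shape before use and raising, not defaulting to an empty string. A minimal sketch, assuming a reply field named message (the field name is an assumption; see the Contract Reference for exact semantics):

```python
class ProtocolShapeError(Exception):
    """Raised when the upstream response violates the expected contract shape."""

def extract_reply(upstream) -> str:
    # Do not swallow malformed shapes into an empty or fake-normal reply.
    if not isinstance(upstream, dict) or not isinstance(upstream.get("message"), str):
        raise ProtocolShapeError(f"malformed upstream response: {upstream!r}")
    return upstream["message"]
```

The host can catch ProtocolShapeError at the surface layer and render an explicit error state, which is the opposite of collapsing it into blank success.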

Failure 6

Role ownership is implicit instead of explicit

Teams patch symptoms across files because nobody has declared which module owns surface, execution, aggregation, and runtime detail.

Important: many “CLARIXO is not working” reports are really host-path mistakes: context loss, bad routing, incorrect role ownership, or observability fields that never survive end to end.

Verification Flow

Verify from request entry to operator-facing outputs

Do not verify only the chat reply. Verify the full chain, including attribution, operator visibility, and readiness for runtime observation and AgentOps workflows.

Verification order:
1. Confirm which host module owns each required role
2. Confirm source / module / scene / version / environment
3. Confirm request top-level shape
4. Confirm context survives the full host path
5. Confirm response shape in healthy success
6. Confirm response shape in degraded success
7. Confirm protocol failures remain explicit
8. Confirm integration_source / integration_module / integration_scene remain attributable
9. Confirm provider / ops_final_status / ops_needs_review remain visible
10. Confirm overview page and runtime diagnostic page both show trustworthy operator data
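Steps 8 and 9 can be backed by a simple check over recorded events. A sketch, assuming events are available as dicts keyed by the operator-facing field names listed above:

```python
# Operator-facing fields that must survive end to end (names from this manual).
REQUIRED_VISIBILITY = (
    "integration_source",
    "provider",
    "ops_final_status",
    "ops_needs_review",
)

def missing_visibility_fields(event: dict) -> list:
    """Return the operator-facing fields absent from a recorded event."""
    return [field for field in REQUIRED_VISIBILITY if field not in event]
```

An empty result per event is the precondition for step 10: overview and runtime-detail pages can only show trustworthy data if these fields actually arrive.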
Troubleshooting Matrix

When you see a symptom, start from the right layer

Recommended next step
Start with the Product page and Architecture page if the runtime model is not yet clear. Then use the API page to understand the runtime contract. Use Integration Preview to create or inspect a live project, and Pricing when moving into ongoing paid usage. For technical setup, follow the Integration Manual, then the Calibration Manual, then the Observation Manual. Use this manual stack according to the capability layer you are entering next: Builder and Pro stay focused on runtime control and visibility foundation work; Business adds AgentOps reading, queue interpretation, and operator review surfaces; Control adds Approval Gateway and execution governance rollout; Enterprise extends into organization-wide governance, policy layering, ownership design, and deployment structure.

After integration is live, use observation and AgentOps-style surfaces to read runtime, queue, and operator review state.

Keep Troubleshooting and Optimization for later-stage operations. Supporting references include Contract Reference and Integration Skeletons.

Symptom
source / module / scene look generic or wrong.
Start at the Chat Execution Endpoint and inspect whether a later merge is overwriting request metadata.
Symptom
Healthy and degraded outputs look identical.
Start at the rendering logic and verify that degraded reasons and final status are not being flattened away.
Symptom
The request reaches the endpoint but behavior is inconsistent across surfaces.
Start by remapping role ownership. Different host surfaces may be reusing the same endpoint with conflicting assumptions.
Symptom
Operators cannot tell whether a turn needs review.
Verify whether ops_needs_review, final status fields, and the per-item event identifier are being preserved and surfaced into overview + queue detail + runtime detail pages. Operators should be able to move from a queue row into the exact Runtime Detail page for that event.
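The queue-to-detail jump depends on the per-item event identifier surviving into the queue row. A sketch of the deep-link construction, with a hypothetical route path (substitute your host's actual Runtime Detail route):

```python
def runtime_detail_link(base_url: str, event_id: str) -> str:
    """Build the deep link from a queue row into the exact Runtime Detail page.

    The /runtime-detail/ path segment is an assumption, not a CLARIXO route.
    """
    if not event_id:
        # A queue row without a stable event identifier cannot deep-link at all.
        raise ValueError("queue item has no stable event identifier")
    return f"{base_url.rstrip('/')}/runtime-detail/{event_id}"
```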
High-Impact Runtime Control

Approval Gateway and execution governance can affect live business flow

Approval-bound operator state can enter the live execution path and affect final runtime behavior. This is execution-relevant, not observation-only, and belongs to the Control capability layer rather than the basic integration layer.

Warning: This capability can affect live business flow.
When enabled, operator-governed state may alter final response text, review or escalation behavior, or blocking outcomes returned to real users. Do not enable or expand this capability in production without staged rollout, explicit ownership, audit visibility, and a clear Approval Gateway operating boundary.
What it is
Approval Gateway and execution governance allow operator-applied state to enter the live execution path before the final response exits.
Why it is different
Observer, runtime, and AgentOps review surfaces show what happened. Approval Gateway and execution governance can change what happens.
Capability layer
This is a Control-layer capability. Business-level AgentOps helps teams read runtime and review state; Control adds the governed execution boundary that can approve, block, escalate, or otherwise influence the live production path.
What it can affect
It can affect final response text, review or escalation behavior, operator-controlled blocking behavior, and runtime metadata that records whether an override was applied and why.
Activation conditions
This capability becomes relevant only when operator state exists for the active execution context, the runtime matches the relevant session or scope, the execution path reaches the governance hook, and the resulting intervention is permitted by the active runtime path.
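The four activation conditions above compose as a single gate: if any one fails, no intervention applies. A sketch with illustrative parameter names and a simple scope-equality match standing in for the real matching logic:

```python
def governance_applies(operator_state, runtime: dict,
                       hook_reached: bool, permitted: bool) -> bool:
    """All four activation conditions must hold before an intervention can apply."""
    return (
        operator_state is not None                                # state exists for this context
        and operator_state.get("scope") == runtime.get("scope")   # session/scope match
        and hook_reached                                          # execution path reached the hook
        and permitted                                             # allowed by the active runtime path
    )
```

Modeling the gate this way keeps the default path untouched: absent operator state, the function is false and execution proceeds ungoverned.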
Audit and reversal
Any operator-governed intervention should remain auditable and reversible. Operators should be able to determine whether an override was applied, why it occurred, and how the system can return to default behavior.
Rollout Stage

Stage 1: Observe Only

Start with standard observation and runtime visibility. Use AgentOps and Observer surfaces to identify review candidates without allowing operator state to influence the live execution path. This stage corresponds to the runtime control and Business-level reading surface, not the execution-governance layer.

Rollout Stage

Stage 2: Limited Execution Governance

Enable governed execution only for limited flows, lower-risk scenarios, or selected business paths. Confirm matching logic, metadata visibility, ownership, and Approval Gateway operating boundaries before broader use.

Rollout Stage

Stage 3: Production Execution Governance

Only move to broader production usage after controlled validation. Production usage should include clear approval boundaries, defined escalation rules, explicit accountability for scope expansion, and organization-ready ownership for wider deployment.

Use This Manual Together With

Integration is the first manual in the CLARIXO operating stack

Next Manual

Calibration Manual

Use it after the chain works to stabilize scene meaning, degraded interpretation, and operator review logic before broader rollout.

Reference

Contract Reference

Use it for exact request / response semantics, structured errors, degraded examples, and compatibility-facing field expectations.

Examples

Integration Skeleton Pack

Use it for implementation-ready role skeletons and minimal working examples for success, degraded, observability, and structured contract failure.