Three-Side AI Infrastructure

Start from the LLM-side, the User-side, or the Development Platform.

CLARIXO is structured across three product paths. Use the LLM-side when you need runtime control and governed execution, the User-side when you need behavior evidence and formal review, and the Development Platform when you want a bounded environment for building AI products under CLARIXO architecture.

LLM-side control before silent runtime drift
User-side evidence before disputes escalate
Development Platform for bounded AI product building
Self-binding governance
CLARIXO is also bound by the same rules it introduces, rather than standing above them.
runtime.boundary_logic=self_applied
evidence.traceability=enabled
approval.exception_layer=none
reviewable=true · traceable=true · constraint_bound=true
Choose the path that matches your current work, then expand across the other sides when your system needs deeper control, stronger evidence, or a dedicated build environment.
LLM-side

Control model-side execution in production.

Core Runtime API · Routing, fallback visibility, guard behavior, confidence signals, and runtime observability.
AgentOps · Runs, operator review, triage, queue handling, and operational workflow surfaces.
Approval / Governed Execution · Approval, block, escalation, and commit-boundary execution control before live handoff continues.
Start here when you have already chosen the LLM-side path. Use LLM-side APIs when you want the broader side selection and path comparison first.
Development Platform

Build bounded AI products under CLARIXO architecture.

CLARIXO OC Platform is the Development Platform entry point for teams that want a dedicated build environment inside the broader CLARIXO product system.
Use this path when your next step is not only runtime control or evidence export but structured product building, development continuity, and bounded implementation work, rather than starting from the LLM-side or User-side product surfaces.
User-side

Export behavior evidence and prepare formal review.

Evidence API · Export user-facing AI behavior as traceable, provable, and auditable records without changing the runtime path first.
Audit Workspace · Move retained evidence into grouped review, reviewer context, and formal handoff when responsibility becomes real.
Use this side when user-facing behavior must remain traceable, reviewable, and ready for formal case handling.
Runtime snapshot
OUTPUT PATH
intent → router → model selection → guard review → final output path
GUARD INFLUENCE
signal=policy-watch → evaluation=bounded → action=reviewable-pass → control=visible
CONTROL SIGNALS
fallback=standby · drift=low · confidence=90% · review_state=clear
Local Integration Quickstart

Connect from your own local environment and view a real Native API response.

This is the fastest way to verify that your local bridge is working, your request is reaching the API, and the returned result can be viewed locally. The first successful result is expected to be a dry-run response, not a live automatic file write.
1. Create your local files · Create a local folder and add local_bridge.py and preview.html.
2. Run the bridge · Run python local_bridge.py to create a local last_response.json from a real API call.
3. Start local preview · Run python -m http.server 8000 and open http://127.0.0.1:8000/preview.html.
Minimum request example
{
  "task": "Generate a local implementation suggestion",
  "target_file": "app/example.py",
  "goal": "Show a real API response in local preview",
  "source_code": "print('hello world')"
}
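The quickstart leaves the contents of local_bridge.py to you. The sketch below is a minimal, illustrative version, not the official bridge: the endpoint URL and response shape are placeholder assumptions, and only the request fields come from the minimum request example above. It validates the payload locally before sending, so a missing source_code field is caught in the same spirit as the API's missing_source_code error, then writes the raw response to last_response.json for the preview page.

```python
import json
import urllib.request

# Hypothetical endpoint -- replace with the real Native API URL from your setup.
API_URL = "http://127.0.0.1:9000/native/dry-run"

REQUIRED_FIELDS = ("task", "target_file", "goal", "source_code")


def build_payload(task, target_file, goal, source_code):
    """Assemble the minimum request and fail fast on empty fields."""
    payload = {
        "task": task,
        "target_file": target_file,
        "goal": goal,
        "source_code": source_code,
    }
    missing = [k for k in REQUIRED_FIELDS if not payload.get(k)]
    if missing:
        # Catches the common first error (e.g. missing source_code) locally,
        # before the request ever leaves your machine.
        raise ValueError("missing fields: " + ", ".join(missing))
    return payload


def run_bridge(payload, url=API_URL):
    """POST the payload and save the raw response as last_response.json."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    with open("last_response.json", "w") as f:
        json.dump(result, f, indent=2)
    print("DONE")
    return result


# Example call against your local endpoint:
# run_bridge(build_payload(
#     task="Generate a local implementation suggestion",
#     target_file="app/example.py",
#     goal="Show a real API response in local preview",
#     source_code="print('hello world')",
# ))
```

Keeping payload validation separate from the network call means a malformed request fails immediately with a readable message, instead of after a round trip.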

What success looks like

Terminal output shows DONE
A local last_response.json is created
The browser preview shows real JSON
The first accepted response is a structured dry-run package
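The success checks above can also be scripted. The sketch below is a minimal local verification, assuming your bridge wrote last_response.json to the current folder; the function name and messages are illustrative, not part of the product.

```python
import json
import pathlib


def check_success(path="last_response.json"):
    """Confirm the bridge produced a readable, structured JSON package."""
    p = pathlib.Path(path)
    if not p.exists():
        return "no response file yet; run the bridge first"
    data = json.loads(p.read_text())
    if not isinstance(data, dict):
        return "response is not a structured JSON object"
    # A structured dry-run package should be a JSON object with named keys.
    return "ok: structured package with keys " + ", ".join(sorted(data))
```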

Common first error

If the API returns missing_source_code, add source_code to the payload and run the bridge again.

What this first result means

Your local bridge is connected, the request is reaching the Native API, and the system can already return a structured package. The first correct result is visible and reviewable before any live apply behavior is introduced.

Why It Matters

Most AI systems become risky in the runtime path before teams can explain what changed

The problem is usually not the model alone. It is the hidden runtime path around the model: routing drift, silent fallback, weakened confidence, unexpected guard intervention, and execution behavior that stays invisible until it becomes product, trust, cost, or operational risk.
ROUTING

See how execution was actually routed

Trace which path, model, or provider handled the request so runtime behavior can be inspected instead of assumed from the final answer alone.

FALLBACK

Catch silent path changes earlier

See when execution shifted to a different model, provider, or fallback route before hidden behavior changes become quality, cost, or reliability problems.

CONFIDENCE

Understand when decision quality weakened

Track where signals became weaker, less stable, or harder to trust so teams can investigate fragile outputs before they become production incidents.

GUARD + CONTROL

Review intervention before it becomes surprise behavior

Inspect when guard logic constrained, rerouted, paused, or escalated execution so teams can review how control behavior influenced the final output path.

Capability Direction

CLARIXO gives teams a standard path from visibility to review to governed execution

CLARIXO is not just another observability dashboard. It begins with runtime visibility, moves into operator-facing review when live cases need handling, and reaches governed execution when approval, block, escalation, or boundary re-checks become part of production reality.
Runtime Visibility · Inspect routing, fallback, confidence, guard behavior, and execution signals before hidden runtime behavior becomes production risk.
Operator Review · Move uncertain runtime cases into operational handling, next-action review, and operator workflows.
Governed Execution · Apply explicit approval, block, escalation, and commit-boundary control before execution handoff continues.
Builder · establish runtime visibility first
Pro · deepen integration and live runtime usage
Business · add operator review when runtime cases need handling
Responsibility · move retained evidence into formal review when responsibility becomes real
Enterprise · apply formal control when execution boundaries become operational
Next Step

Choose the next path deliberately

Start with the path that matches your current work. Choose the LLM-side when you need runtime control, the User-side when you need evidence and review, or the Development Platform when you want a bounded environment for building and operating AI products inside CLARIXO.

User-side is the fastest evidence entry, LLM-side can now start directly from this page, and the Development Platform is the dedicated OC build environment inside CLARIXO.