InALign Guide

v0.9.1 — Everything you need to know about using InALign, from installation to advanced AI security analysis.

Quick Start

Get InALign running in 30 seconds. No signup, no account, no server required.

1. Install

Terminal
pip install inalign-mcp && inalign-install --local

This installs the package, creates ~/.inalign/ directory, and auto-configures your Claude Code settings.

2. Use Claude Code Normally

That's it. Every tool call, file access, and decision is now automatically recorded with SHA-256 cryptographic hashing. No code changes needed.

3. View Your Audit Trail

Terminal
# Generate and open an HTML report in your browser
inalign-report
Pro/Enterprise? Add your license key anytime: inalign-install --license YOUR_KEY — features unlock instantly, still 100% local.

How It Works

InALign runs as an MCP (Model Context Protocol) server alongside your AI agent. Every action creates a cryptographic record linked to the previous one, forming a tamper-proof chain.

1. Agent makes a tool call
2. InALign records action + SHA-256 hash
3. Hash links to previous record
4. Merkle root computed for verification

If anyone modifies a single record, every subsequent hash changes — making tampering mathematically detectable.
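The chaining idea can be sketched in a few lines of Python. This is an illustration of the scheme, not InALign's actual record format — the field names and JSON serialization here are assumptions:

```python
import hashlib
import json

def record_hash(prev_hash: str, action: dict) -> str:
    """Hash an action together with the previous record's hash,
    chaining records so any edit invalidates all later hashes."""
    payload = json.dumps({"prev": prev_hash, "action": action}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def merkle_root(hashes: list[str]) -> str:
    """Pairwise-hash leaves upward until a single root remains."""
    level = hashes[:]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last leaf on odd levels
        level = [hashlib.sha256((a + b).encode()).hexdigest()
                 for a, b in zip(level[::2], level[1::2])]
    return level[0]

# Build a 3-record chain, then tamper with the middle record.
actions = [{"type": "tool_call", "name": f"step{i}"} for i in range(3)]
chain, prev = [], "0" * 64
for a in actions:
    prev = record_hash(prev, a)
    chain.append(prev)
root = merkle_root(chain)

actions[1]["name"] = "evil"  # modify one record
tampered, prev = [], "0" * 64
for a in actions:
    prev = record_hash(prev, a)
    tampered.append(prev)

assert tampered[0] == chain[0]   # records before the edit still match
assert tampered[1] != chain[1]   # every hash from the edit onward differs
assert merkle_root(tampered) != root
```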

Data Storage

All data stays on your machine. Here's where InALign stores things:

~/.inalign/
  provenance.db      # Cryptographic hash chain (SQLite WAL)
  ontology.db        # W3C PROV knowledge graph (SQLite)
  license.json       # License info + cached validation
  usage.json         # Monthly action counter
  signing_key        # Ed25519 private key (local only)
  signing_key.pub    # Ed25519 public key
  sessions/
    *.json.gz        # Full session transcripts (compressed)
  analysis/
    *.html           # AI analysis reports (Pro)
  exports/
    *.json           # OpenTelemetry OTLP exports
~/.claude/
  settings.json      # MCP server configuration
Fully Decentralized. No servers, no accounts, no data collection. Everything runs on your machine. Even AI analysis uses your API key directly.

inalign-install

Install and manage your InALign setup.

# Install free version with local SQLite storage
inalign-install --local

# Install with a Pro or Enterprise license key
inalign-install --license YOUR_LICENSE_KEY

# Activate or update license without reinstalling
inalign-install --activate YOUR_LICENSE_KEY

# Check current license status
inalign-install --status

# Remove InALign configuration
inalign-install --uninstall

The installer automatically:

- Creates the ~/.inalign/ data directory
- Registers the MCP server in your Claude Code settings (~/.claude/settings.json)
- Generates your Ed25519 signing key pair

inalign-report

Generate and view your audit dashboard in the browser.

# Generate report and open in browser (default port 8275)
inalign-report

# Custom port
inalign-report --port 9000

# Generate without opening browser
inalign-report --no-open

The report launches a React SPA dashboard at localhost:8275 with a full REST API backend. The dashboard has 4 main pages:

Overview: Session summary, risk distribution, verified chains, storage stats
Sessions: Session list with drill-down to provenance chain, session log, data flows, and security tabs
Security: GraphRAG risk analysis with MITRE ATT&CK mapped findings across all sessions
AI Analysis (Pro): Zero-Trust (Ollama) or Advanced (Claude/OpenAI) LLM-powered security analysis

Data export is available in JSON and CSV formats from the dashboard. Session detail loads in 1.7s via lazy ontology loading.

inalign-analyze

Run deep AI security analysis on your sessions using your own LLM API key. Pro

# Analyze latest session with Anthropic Claude
inalign-analyze --api-key sk-ant-xxxxx --latest --save

# Analyze with OpenAI GPT-4o
inalign-analyze --provider openai --api-key sk-xxxxx --latest --save

# Analyze a specific session file
inalign-analyze session-20260215.json.gz --api-key sk-ant-xxxxx

# Output raw JSON (for programmatic use)
inalign-analyze --api-key sk-ant-xxxxx --latest --json

The AI analyzer:

- Masks PII (14 patterns) before anything is sent to the provider
- Scores session risk from 0-100 and returns severity-ranked findings with recommendations
- Saves an HTML report to ~/.inalign/analysis/ when --save is passed

Supported providers: local (Ollama), anthropic (Claude Sonnet), and openai (GPT-4o).

Your API key, your cost. InALign never stores or transmits your API key. Analysis runs directly from your machine to the LLM provider.

inalign-ingest

Parse session logs and convert them into provenance chains and reports.

# Parse latest session log and generate a report
inalign-ingest --latest --save

# Parse a specific session file and save the report
inalign-ingest path/to/file.jsonl --save --output report.html

# Export as JSON (for programmatic use)
inalign-ingest --latest --json

The ingest command automatically:

- Parses the JSONL session log
- Builds the cryptographic provenance chain from the recorded actions
- Generates an HTML report (or raw JSON with --json)

Provenance Tools

These tools form the core audit trail. They're called automatically by your AI agent via the MCP protocol.

record_user_command Free
Record the user's original prompt/command that triggered agent actions. Called at the start of every task for audit logging.
Params: command (required), command_hash_only (optional, for privacy), user_id (optional)
record_action Free
Record any agent action in the provenance chain with cryptographic verification. Supports tool calls, decisions, file operations, and LLM requests.
Params: action_type (tool_call | decision | file_read | file_write | llm_request), action_name, inputs, outputs
get_provenance Free
Get the provenance chain summary for the current session. Shows recent records and Merkle root.
Params: format (summary | full | prov-jsonld)
verify_provenance Free
Verify the integrity of the entire provenance chain. Checks that no records have been tampered with using hash chain verification.
Params: none
verify_third_party Free
Generate third-party verifiable proof. Returns everything needed to independently verify the provenance chain without trusting InALign.
Params: session_id (optional)
list_sessions Free
List past audit sessions stored in local SQLite. Shows session history with record counts and timestamps.
Params: limit (default: 20)
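These tools are invoked over MCP's JSON-RPC transport. As an illustration, a tools/call request for record_action might look like the payload below — the argument names come from the params listed above, while the specific values are hypothetical:

```python
import json

# Hypothetical MCP "tools/call" request for record_action; the envelope
# follows the MCP JSON-RPC wire format, the arguments follow the
# documented params (action_type, action_name, inputs, outputs).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "record_action",
        "arguments": {
            "action_type": "file_read",
            "action_name": "Read",
            "inputs": {"path": "src/main.py"},
            "outputs": {"bytes": 1024},
        },
    },
}
print(json.dumps(request, indent=2))
```

In normal use you never construct this by hand — the agent's MCP client sends it for you.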

Risk Analysis Tools

Detect threats and behavioral anomalies in agent sessions.

analyze_risk Free
Run GraphRAG pattern detection with 11 MITRE ATT&CK mapped patterns including data exfiltration, privilege escalation, prompt injection, reconnaissance, persistence, defense evasion, and chain integrity checks.
Params: session_id (optional)
get_behavior_profile Free
Get behavioral profile for a session. Shows tool usage patterns, timing analysis, and anomalies.
Params: session_id (optional)
get_agent_risk Pro
Get long-term risk profile for an agent across all sessions. Shows risk trends, common patterns, and tools used over time.
Params: agent_id (required)
get_user_risk Pro
Get risk profile for a user or team across all agents. Aggregates risk data for organization-level security overview.
Params: user_id (required)
list_agents_risk Pro
Get risk summary for all known agents. Useful for org-wide security dashboards and monitoring.
Params: limit (default: 20)

Security Policy Tools

Configure and simulate security policies for your agent environment.

get_policy Free
Get current security policy settings. Shows the active preset and all rule configurations.
Params: none
set_policy Free
Change the security policy preset. Three presets available for different security needs.
Params: preset (STRICT_ENTERPRISE | BALANCED | DEV_SANDBOX)
# Policy presets:
STRICT_ENTERPRISE  # Maximum security - strict rules for production
BALANCED           # Default - good balance of security and usability
DEV_SANDBOX        # Permissive - for development and testing
list_policies Free
List all available policy presets with their descriptions and rule configurations.
Params: none
simulate_policy Free
Simulate a policy against historical events. Shows how many actions would have been blocked, masked, or warned.
Params: preset (STRICT_ENTERPRISE | BALANCED | DEV_SANDBOX)

Report & Export Tools

Generate audit reports and certificates.

generate_audit_report Free
Generate a comprehensive audit report with provenance chain, integrity verification, and statistics.
Params: format (json | summary | prov-jsonld)
export_report Free
Export a standalone HTML audit dashboard viewable in any browser. Includes provenance visualization, session log, and data export buttons.
Params: output_path (optional, defaults to temp file)

Compliance & Frameworks

Map your agent activity against industry standards and regulatory frameworks.

generate_compliance_report Free
Generate EU AI Act compliance report. Maps provenance data to Articles 9, 12, 14, 15 requirements with PASS/PARTIAL/FAIL checklist.
Params: session_id (optional), format (json | html)
check_owasp_compliance Free
Run OWASP LLM Top 10 compliance check. Returns per-item PASS/WARN/FAIL scores for prompt injection, output handling, DoS, supply chain, sensitive info, plugin security, agency, and overreliance.
Params: session_id (optional)

Agent Permissions

Control which tools each agent is allowed to use with fine-grained allow/deny/audit rules.

get_permission_matrix Free
Get agent tool permission matrix. Shows allow/deny/audit settings per agent per tool.
Params: agent_id (optional, omit for all agents)
set_agent_permissions Free
Set tool permissions for an agent. Each tool can be set to allow, deny, or audit mode.
Params: agent_id (required), permissions (dict of tool → allow/deny/audit), default_permission (optional)
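To make the allow/deny/audit model concrete, here is a sketch of what set_agent_permissions arguments and a resolver could look like. The tool names and resolution logic are illustrative assumptions, not InALign's implementation:

```python
# Hypothetical arguments for set_agent_permissions, following the
# documented params: per-tool modes plus a default_permission.
permissions_call = {
    "agent_id": "claude-code",
    "permissions": {
        "Bash": "audit",     # record every shell command
        "WebFetch": "deny",  # block outbound fetches
        "Read": "allow",
    },
    "default_permission": "allow",
}

def is_allowed(tool: str, call: dict) -> bool:
    """Resolve a tool's effective permission: 'deny' blocks,
    'allow' and 'audit' both let the call proceed."""
    mode = call["permissions"].get(tool, call["default_permission"])
    return mode != "deny"

assert is_allowed("Read", permissions_call)
assert not is_allowed("WebFetch", permissions_call)
```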

Drift Detection

Detect behavioral anomalies by comparing agent activity against historical baselines.

detect_drift Free
Detect behavioral drift in a session compared to historical baseline. Flags new tools, frequency spikes, and timing anomalies using z-score analysis.
Params: session_id (optional), agent_id (optional)
get_behavior_baseline Free
Get or build behavior baseline for an agent. Shows average tool usage, timing patterns, and known tools from historical sessions.
Params: agent_id (required)
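The z-score check that detect_drift describes can be sketched as follows; the threshold and baseline shape are illustrative assumptions:

```python
from statistics import mean, stdev

def drift_flags(baseline_counts: list[int], current: int,
                threshold: float = 3.0) -> tuple[float, bool]:
    """Flag a tool-usage count whose z-score against the historical
    baseline exceeds the threshold (a common drift heuristic)."""
    mu, sigma = mean(baseline_counts), stdev(baseline_counts)
    z = (current - mu) / sigma if sigma else float("inf")
    return z, abs(z) > threshold

# Baseline: the agent usually calls a tool 4-6 times per session.
z, flagged = drift_flags([4, 5, 6, 5, 4], current=25)
assert flagged  # a spike to 25 calls is a clear anomaly
```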

OpenTelemetry Export

Export provenance data in OpenTelemetry format for integration with observability platforms.

export_otel Free
Export provenance data as OpenTelemetry (OTLP) JSON. Supports file export and optional push to an OTLP collector endpoint.
Params: session_id (optional), output_path (optional, default: ~/.inalign/exports/), endpoint (optional, OTLP collector URL)
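For reference, an OTLP/JSON trace payload has roughly the shape below. The envelope field names follow the OpenTelemetry protocol; the span name and attributes shown are illustrative, not InALign's actual mapping:

```python
import json

# Minimal OTLP/JSON trace skeleton (resourceSpans -> scopeSpans -> spans).
otlp = {
    "resourceSpans": [{
        "resource": {"attributes": [
            {"key": "service.name", "value": {"stringValue": "inalign"}},
        ]},
        "scopeSpans": [{
            "scope": {"name": "inalign.provenance"},
            "spans": [{
                "traceId": "0" * 32,
                "spanId": "0" * 16,
                "name": "tool_call:Read",  # hypothetical span name
                "startTimeUnixNano": "1739600000000000000",
                "endTimeUnixNano": "1739600001000000000",
                "attributes": [
                    {"key": "action_type",
                     "value": {"stringValue": "file_read"}},
                ],
            }],
        }],
    }],
}
print(json.dumps(otlp, indent=2)[:120])
```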

Multi-Agent Topology

Track interactions between multiple agents and monitor token usage and costs.

track_agent_interaction Free
Record an interaction between two agents. Builds a multi-agent topology graph showing delegation, query, and response patterns.
Params: source_agent (required), target_agent (required), interaction_type (delegate | query | respond)
get_agent_topology Free
Get multi-agent interaction topology. Shows nodes (agents) and edges (interactions) as a graph.
Params: session_id (optional)
track_cost Free
Track token usage and API cost for an agent or session. Auto-computes cost from model pricing.
Params: model (required), input_tokens (required), output_tokens (required), provider (optional), agent_id (optional), session_id (optional)
get_cost_report Free
Get cost attribution report. Shows total cost, breakdown by agent and model.
Params: session_id (optional), agent_id (optional)
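The "auto-computes cost from model pricing" step reduces to a small lookup. The per-million-token prices below are hypothetical placeholders, not InALign's actual price table:

```python
# Hypothetical (input, output) USD prices per million tokens.
PRICES = {"claude-sonnet": (3.00, 15.00), "gpt-4o": (2.50, 10.00)}

def compute_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost = tokens x per-million price, summed over input and output."""
    p_in, p_out = PRICES[model]
    return (input_tokens * p_in + output_tokens * p_out) / 1_000_000

cost = compute_cost("claude-sonnet", input_tokens=12_000, output_tokens=3_000)
assert round(cost, 4) == 0.081  # 12k * $3 + 3k * $15, per million tokens
```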

Knowledge Graph (Ontology)

Build and query W3C PROV-compliant knowledge graphs from your agent sessions for deep causal analysis. The v0.9.1 ontology defines 8 classes (Agent, Session, ToolCall, AIModelInvocation, Entity, Decision, Risk, Policy) and 13 relations covering W3C PROV core plus PROV-AGENT extensions.

ontology_populate Free
Build a W3C PROV knowledge graph from a session. Creates entities, activities, and derivation relationships for full traceability.
Params: session_id (optional), include_risks (optional)
ontology_query Free
Query the knowledge graph with multiple query types: neighbors, causal chains, and five competency questions covering access patterns, exfiltration paths, policy violations, impact analysis, and hash chain breaks.
Params: query_type (neighbors | causal_chain | cq1_access | cq2_exfiltration | cq3_violations | cq4_impact | cq5_hash_break), node_id (optional), depth (optional)
ontology_stats Free
Get statistics about the knowledge graph — node counts, edge counts, and type distributions.
Params: session_id (optional)
ontology_security_scan Free
Run a graph-powered security analysis with MITRE ATT&CK mapping. Leverages the knowledge graph structure for deeper threat detection than pattern matching alone.
Params: session_id (optional)

PROV-AGENT: AIModelInvocation

v0.9.1 introduces the PROV-AGENT ontology extension, adding first-class support for tracking LLM reasoning steps as knowledge graph nodes.

The core idea: every time the AI agent "thinks" (reasoning, planning, or generating output), InALign records it as an AIModelInvocation activity node — a W3C PROV Activity that represents a single LLM API call or reasoning step.

How It Works

1. User sends a prompt
2. Agent "thinks" (reasoning step)
3. AIModelInvocation node created
4. Linked to prompt & response via edges

New Relations

# PROV-AGENT relations added in v0.9.1
invokedModel  # ToolCall → AIModelInvocation
               # Links the tool call that triggered an LLM reasoning step

usedPrompt    # AIModelInvocation → Prompt Entity
               # Links the LLM call to the user prompt that triggered it

generated     # AIModelInvocation → Response Entity
               # Links the LLM call to the agent's output

This creates a complete Prompt → Reasoning → Action → Result causal chain in the knowledge graph, enabling competency queries such as: which prompt ultimately caused a given file write, which reasoning step touched sensitive data, and which downstream actions a single prompt impacted.

Automatic population. AIModelInvocation nodes are created automatically during ontology_populate — no manual recording needed. Every "thinking" block in the session log becomes a traceable graph node.
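The causal chain these relations enable can be sketched as a toy graph traversal. The ledTo edge and the in-memory edge list are illustrative assumptions, not InALign's storage format:

```python
# Toy causal graph mirroring Prompt -> Reasoning -> Action -> Result;
# edge labels echo the PROV-AGENT relations above.
edges = [
    ("prompt:1", "usedPrompt", "invocation:1"),
    ("toolcall:1", "invokedModel", "invocation:1"),
    ("invocation:1", "generated", "response:1"),
    ("response:1", "ledTo", "file_write:1"),  # hypothetical downstream edge
]

def causal_chain(start: str) -> list[str]:
    """Follow outgoing edges from a node, depth-first."""
    chain, stack = [], [start]
    while stack:
        node = stack.pop()
        chain.append(node)
        stack.extend(dst for src, _, dst in edges if src == node)
    return chain

# "Which file write did this prompt ultimately cause?"
assert "file_write:1" in causal_chain("prompt:1")
```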

AI Analysis Modes

v0.9.1 offers two AI analysis modes for deep security review of your agent sessions. Both run entirely from your machine — InALign never stores or routes your data through any intermediary.

Zero-Trust Mode (Ollama)

--provider local Free
Run AI analysis using a local Ollama instance. No data ever leaves your machine — true zero-trust. Requires Ollama running locally.
Terminal
# 1. Install and start Ollama
ollama serve

# 2. Run zero-trust analysis (default model: llama3.2)
inalign-analyze --provider local --latest --save

# 3. Use a different local model
inalign-analyze --provider local --model mistral --latest --save
No API key needed. Zero-Trust mode connects directly to Ollama at localhost:11434. Your session data stays entirely on your machine — nothing is sent to any external service.

Advanced Mode (Claude / OpenAI)

--provider anthropic | openai Pro
Deep analysis using cloud LLMs (Claude Sonnet or GPT-4o). Uses your own API key — InALign never stores it. PII is automatically masked (14 patterns) before sending.
Terminal
# Analyze with Anthropic Claude
inalign-analyze --provider anthropic --api-key sk-ant-xxxxx --latest --save

# Analyze with OpenAI GPT-4o
inalign-analyze --provider openai --api-key sk-xxxxx --latest --save
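The PII-masking step can be illustrated with regex substitution. The two patterns below are examples of the technique only; InALign's 14 patterns aren't listed in this guide:

```python
import re

# Illustrative masking rules, not InALign's actual pattern set.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "API_KEY": re.compile(r"sk-[A-Za-z0-9-]{8,}"),
}

def mask_pii(text: str) -> str:
    """Replace each match with a typed placeholder before upload."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

masked = mask_pii("key sk-ant-abc12345 sent to dev@example.com")
assert masked == "key <API_KEY> sent to <EMAIL>"
```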

Both modes are also available from the React SPA dashboard at localhost:8275 via the AI Analysis tab. The dashboard proxies requests through the local API server — your API key is sent directly to the provider from your machine.

Feature             | Zero-Trust (Ollama)  | Advanced (Claude/OpenAI)
Data leaves machine | Never                | PII-masked summary only
API key required    | No                   | Yes (yours)
Analysis depth      | Good                 | Deep
PII masking         | N/A (local)          | 14 patterns
Plan required       | Free                 | Pro
Speed               | Depends on hardware  | Fast

Plan Comparison

All plans run 100% locally. License keys unlock features — not servers.

Feature | Free | Pro ($29/mo) | Enterprise ($99/mo)
Provenance recording & verification
Hash chain integrity check
React SPA dashboard (4 pages)
Security policy management
GraphRAG risk analysis (11 patterns)
Session log capture
Third-party verifiable proofs
EU AI Act compliance
OWASP LLM Top 10
Agent permission matrix
Behavioral drift detection
OpenTelemetry export
W3C PROV knowledge graph
Multi-agent topology
MITRE ATT&CK mapping
Causal chain analysis
AI security analysis (your API key)
Multi-session agent risk tracking
Org-level risk aggregation
PROV-JSON-LD export (W3C)
Blockchain anchoring (Polygon)
Custom policy engine
Team license management
Actions per month | 1,000  | 50,000  | Unlimited
Data retention    | 7 days | 90 days | 365 days
Agents            | 1      | 10      | Unlimited

Example Workflows

Free: Basic Audit Trail

# 1. Install
pip install inalign-mcp && inalign-install --local

# 2. Use Claude Code normally - actions are auto-recorded
# 3. When done, view your audit trail:
inalign-report

# Opens a React SPA dashboard in your browser with:
#   - Overview (session stats, risk distribution, verified chains)
#   - Sessions (drill-down to provenance, session log, data flows)
#   - Security (GraphRAG risk analysis, MITRE ATT&CK findings)
#   - AI Analysis (Zero-Trust Ollama / Advanced Claude/OpenAI)

Pro: AI-Powered Security Analysis

# 1. Install with license
inalign-install --license ial_pro_xxxxx

# 2. Use Claude Code normally
# 3. Analyze your session with AI
inalign-analyze --api-key sk-ant-xxxxx --latest --save

# Output:
#   Risk Score: 25/100 (LOW)
#   Findings: 2 items
#   - Sensitive file access detected (severity: medium)
#   - Unusual tool chain pattern (severity: low)
#   Recommendations: ...

# 4. View the full report with AI analysis tab
inalign-report

Enterprise: Team Monitoring

# Admin: Install on each developer's machine
inalign-install --license ial_enterprise_xxxxx

# Monitor agent risk across the team
# (via MCP tools in Claude Code):
#   list_agents_risk   → see all agents' risk scores
#   get_user_risk      → team-level security posture
#   get_agent_risk     → individual agent trends

© 2026 InALign — AI Agent Governance Platform
