InALign Guide
v0.9.1 — Everything you need to know about using InALign, from installation to advanced AI security analysis.
Quick Start
Get InALign running in 30 seconds. No signup, no account, no server required.
1. Install
pip install inalign-mcp && inalign-install --local
This installs the package, creates ~/.inalign/ directory, and auto-configures your Claude Code settings.
2. Use Claude Code Normally
That's it. Every tool call, file access, and decision is now automatically recorded with SHA-256 cryptographic hashing. No code changes needed.
3. View Your Audit Trail
# Generate and open an HTML report in your browser
inalign-report
# Already have a license key?
inalign-install --license YOUR_KEY
Features unlock instantly, still 100% local.
How It Works
InALign runs as an MCP (Model Context Protocol) server alongside your AI agent. Every action creates a cryptographic record linked to the previous one, forming a tamper-proof chain.
If anyone modifies a single record, every subsequent hash changes — making tampering mathematically detectable.
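The chain mechanics can be sketched in a few lines of Python. This is an illustration only: the field names (record, hash, prev_hash) and the JSON serialization are assumptions, not InALign's actual schema.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first record

def record_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous record's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records: list) -> list:
    """Link each record to its predecessor via SHA-256."""
    chain, prev = [], GENESIS
    for rec in records:
        h = record_hash(rec, prev)
        chain.append({"record": rec, "hash": h, "prev_hash": prev})
        prev = h
    return chain

def verify_chain(chain: list) -> bool:
    """Recompute every hash; any edit to any record breaks verification."""
    prev = GENESIS
    for link in chain:
        if link["prev_hash"] != prev or record_hash(link["record"], prev) != link["hash"]:
            return False
        prev = link["hash"]
    return True
```

Because each hash folds in the previous one, altering record N invalidates the stored hashes of records N, N+1, and so on, which is exactly why tampering is detectable.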
Data Storage
All data stays on your machine, inside the ~/.inalign/ directory (session logs and the provenance.db SQLite database).
inalign-install
Install and manage your InALign setup.
# Install free version with local SQLite storage
inalign-install --local

# Install with a Pro or Enterprise license key
inalign-install --license YOUR_LICENSE_KEY

# Activate or update license without reinstalling
inalign-install --activate YOUR_LICENSE_KEY

# Check current license status
inalign-install --status

# Remove InALign configuration
inalign-install --uninstall
The installer automatically:
- Creates the ~/.inalign/ directory structure
- Updates ~/.claude/settings.json with the MCP server config
- Creates CLAUDE.md with audit logging instructions
- Validates and caches the license key (if provided)
inalign-report
Generate and view your audit dashboard in the browser.
# Generate report and open in browser (default port 8275)
inalign-report

# Custom port
inalign-report --port 9000

# Generate without opening browser
inalign-report --no-open
The report launches a React SPA dashboard at localhost:8275 with a full REST API backend. The dashboard has 4 main pages: Overview, Sessions, Security, and AI Analysis.
Data export is available in JSON and CSV formats from the dashboard. Session detail loads in 1.7s via lazy ontology loading.
inalign-analyze
Run deep AI security analysis on your sessions using your own LLM API key. Requires a Pro plan or higher.
# Analyze latest session with Anthropic Claude
inalign-analyze --api-key sk-ant-xxxxx --latest --save

# Analyze with OpenAI GPT-4o
inalign-analyze --provider openai --api-key sk-xxxxx --latest --save

# Analyze a specific session file
inalign-analyze session-20260215.json.gz --api-key sk-ant-xxxxx

# Output raw JSON (for programmatic use)
inalign-analyze --api-key sk-ant-xxxxx --latest --json
The AI analyzer:
- Masks PII before sending data — API keys, passwords, emails, SSH keys, JWT tokens, and 10 more patterns
- Traces causal chains — user intent → agent reasoning → action → result
- Detects threat patterns — data exfiltration, privilege escalation, command injection, supply chain attacks
- Scores risk from 0-100 with severity levels (LOW / MEDIUM / HIGH / CRITICAL)
- Generates actionable recommendations prioritized by impact
Supported LLMs: anthropic (Claude Sonnet) and openai (GPT-4o).
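As an illustration of the PII-masking step, here is a simplified regex-based sketch covering four of the patterns. The pattern names and regexes are assumptions for demonstration; the shipped masker covers 14 patterns and is not reproduced here.

```python
import re

# Illustrative subset of masking patterns (hypothetical, not InALign's real set).
# Order matters: the more specific anthropic_key pattern runs before openai_key.
PATTERNS = {
    "anthropic_key": re.compile(r"sk-ant-[A-Za-z0-9_\-]+"),
    "openai_key": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "jwt": re.compile(r"eyJ[\w-]+\.[\w-]+\.[\w-]+"),
}

def mask_pii(text: str) -> str:
    """Replace each sensitive match with a labeled placeholder before
    the text is sent to any external LLM provider."""
    for name, pattern in PATTERNS.items():
        text = pattern.sub(f"[MASKED:{name}]", text)
    return text
```

The placeholder keeps the label so the analysis can still reason about "an API key was present here" without ever seeing the value.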
inalign-ingest
Parse session logs and convert them into provenance chains and reports.
# Parse latest session log and generate a report
inalign-ingest --latest --save

# Parse a specific session file and save the report
inalign-ingest path/to/file.jsonl --save --output report.html

# Export as JSON (for programmatic use)
inalign-ingest --latest --json
The ingest command automatically:
- Converts session log events into SHA-256 hash-chained provenance records
- Maps events to provenance types (USER_INPUT, LLM_RESPONSE, TOOL_CALL, TOOL_RESULT, DECISION)
- Stores records in batch to provenance.db via a single transaction
- Idempotent — re-running on the same session is safe (completes in ~0.05s)
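The batch write can be sketched like this. The table schema is an assumption; INSERT OR IGNORE keyed on the record hash is one way to get the single-transaction idempotency described above.

```python
import sqlite3

def ingest_records(db_path: str, records: list) -> None:
    """Batch-insert provenance records in one transaction.
    Re-running is safe: INSERT OR IGNORE skips rows whose hash already exists.
    (Sketch only; the real provenance.db schema may differ.)"""
    conn = sqlite3.connect(db_path)
    try:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS provenance ("
            "hash TEXT PRIMARY KEY, prev_hash TEXT, type TEXT, payload TEXT)"
        )
        with conn:  # one transaction for the whole batch
            conn.executemany(
                "INSERT OR IGNORE INTO provenance VALUES (?, ?, ?, ?)",
                [(r["hash"], r["prev_hash"], r["type"], r["payload"])
                 for r in records],
            )
    finally:
        conn.close()
```

Keying the primary key on the record hash means a re-run inserts zero new rows, which matches the "idempotent" behavior above.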
Provenance Tools
These tools form the core audit trail. They're called automatically by your AI agent via the MCP protocol.
Risk Analysis Tools
Detect threats and behavioral anomalies in agent sessions.
Security Policy Tools
Configure and simulate security policies for your agent environment.
# Policy presets:
STRICT_ENTERPRISE   # Maximum security - strict rules for production
BALANCED            # Default - good balance of security and usability
DEV_SANDBOX         # Permissive - for development and testing
Report & Export Tools
Generate audit reports and certificates.
Compliance & Frameworks
Map your agent activity against industry standards and regulatory frameworks.
Agent Permissions
Control which tools each agent is allowed to use with fine-grained allow/deny/audit rules.
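The allow/deny/audit model can be pictured as a lookup table. Everything below is hypothetical (agent names, tool names, and the audit default are illustrative, not InALign's actual configuration format):

```python
# Hypothetical permission matrix: agent -> tool -> rule
PERMISSIONS = {
    "researcher-agent": {"Read": "allow", "Bash": "audit", "Write": "deny"},
}
DEFAULT_RULE = "audit"  # unlisted agent/tool pairs fall back to audit

def check_permission(agent: str, tool: str) -> str:
    """Return 'allow', 'deny', or 'audit' for an agent/tool pair."""
    return PERMISSIONS.get(agent, {}).get(tool, DEFAULT_RULE)
```

Defaulting to audit rather than deny keeps unknown tools usable while still leaving a record, which suits the audit-first design described in this guide.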
Drift Detection
Detect behavioral anomalies by comparing agent activity against historical baselines.
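A minimal sketch of baseline comparison, assuming a simple per-session tool-call count and a 3-sigma threshold. This is an assumption for illustration; InALign's actual drift detector is not specified here.

```python
import statistics

def drift_score(baseline_counts: list, current_count: float):
    """Flag drift when the current session deviates more than 3 standard
    deviations from the historical baseline (sketch only)."""
    mean = statistics.mean(baseline_counts)
    stdev = statistics.pstdev(baseline_counts) or 1.0  # avoid division by zero
    z = (current_count - mean) / stdev
    return z, abs(z) > 3.0
```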
OpenTelemetry Export
Export provenance data in OpenTelemetry format for integration with observability platforms.
Multi-Agent Topology
Track interactions between multiple agents and monitor token usage and costs.
Knowledge Graph (Ontology)
Build and query W3C PROV-compliant knowledge graphs from your agent sessions for deep causal analysis. The v0.9.1 ontology defines 8 classes (Agent, Session, ToolCall, AIModelInvocation, Entity, Decision, Risk, Policy) and 13 relations covering W3C PROV core plus PROV-AGENT extensions.
PROV-AGENT: AIModelInvocation
v0.9.1 introduces the PROV-AGENT ontology extension, adding first-class support for tracking LLM reasoning steps as knowledge graph nodes.
The core idea: every time the AI agent "thinks" (reasoning, planning, or generating output), InALign records it as an AIModelInvocation activity node — a W3C PROV Activity that represents a single LLM API call or reasoning step.
How It Works
New Relations
# PROV-AGENT relations added in v0.9.1

invokedModel   # ToolCall → AIModelInvocation
               # Links the tool call that triggered an LLM reasoning step

usedPrompt     # AIModelInvocation → Prompt Entity
               # Links the LLM call to the user prompt that triggered it

generated      # AIModelInvocation → Response Entity
               # Links the LLM call to the agent's output
This creates a complete Prompt → Reasoning → Action → Result causal chain in the knowledge graph, enabling competency queries like:
- Q6: "Which prompts led to sensitive file access?" — trace from Prompt entity through AIModelInvocation to ToolCall to Entity(.env)
- Q7: "Was this file accessed across multiple sessions?" — cross-session identity linking via the sameAs relation
These nodes and relations are created automatically by ontology_populate — no manual recording needed. Every "thinking" block in the session log becomes a traceable graph node.
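A toy version of the Q6 traversal over (subject, relation, object) triples. The "used" relation (W3C PROV's ToolCall → Entity link) and all identifiers below are illustrative, not InALign's stored IDs.

```python
# Toy knowledge graph as (subject, relation, object) triples
triples = [
    ("toolcall:1", "invokedModel", "invocation:1"),
    ("invocation:1", "usedPrompt", "prompt:1"),
    ("invocation:1", "generated", "response:1"),
    ("toolcall:1", "used", "entity:.env"),
]

def prompts_for_entity(triples: list, entity: str) -> set:
    """Q6: which prompts led to access of a given entity?
    Walk Entity <- ToolCall <- AIModelInvocation <- Prompt."""
    toolcalls = {s for s, r, o in triples if r == "used" and o == entity}
    invocations = {o for s, r, o in triples
                   if r == "invokedModel" and s in toolcalls}
    return {o for s, r, o in triples
            if r == "usedPrompt" and s in invocations}
```

Each hop follows one of the PROV-AGENT relations above, which is what makes the Prompt → Reasoning → Action → Result chain queryable.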
AI Analysis Modes
v0.9.1 offers two AI analysis modes for deep security review of your agent sessions. Both run entirely from your machine — InALign never stores or routes your data through any intermediary.
Zero-Trust Mode (Ollama)
# 1. Install and start Ollama
ollama serve

# 2. Run zero-trust analysis (default model: llama3.2)
inalign-analyze --provider local --latest --save

# 3. Use a different local model
inalign-analyze --provider local --model mistral --latest --save
Zero-Trust mode talks to a local Ollama server at localhost:11434. Your session data stays entirely on your machine — nothing is sent to any external service.
Advanced Mode (Claude / OpenAI)
# Analyze with Anthropic Claude
inalign-analyze --provider anthropic --api-key sk-ant-xxxxx --latest --save

# Analyze with OpenAI GPT-4o
inalign-analyze --provider openai --api-key sk-xxxxx --latest --save
Both modes are also available from the React SPA dashboard at localhost:8275 via the AI Analysis tab. The dashboard proxies requests through the local API server — your API key is sent directly to the provider from your machine.
| Feature | Zero-Trust (Ollama) | Advanced (Claude/OpenAI) |
|---|---|---|
| Data leaves machine | Never | PII-masked summary only |
| API key required | No | Yes (yours) |
| Analysis depth | Good | Deep |
| PII masking | N/A (local) | 14 patterns |
| Plan required | Free | Pro |
| Speed | Depends on hardware | Fast |
Plan Comparison
All plans run 100% locally. License keys unlock features — not servers.
| Feature | Free | Pro ($29/mo) | Enterprise ($99/mo) |
|---|---|---|---|
| Provenance recording & verification | ✓ | ✓ | ✓ |
| Hash chain integrity check | ✓ | ✓ | ✓ |
| React SPA dashboard (4 pages) | ✓ | ✓ | ✓ |
| Security policy management | ✓ | ✓ | ✓ |
| GraphRAG risk analysis (11 patterns) | ✓ | ✓ | ✓ |
| Session log capture | ✓ | ✓ | ✓ |
| Third-party verifiable proofs | ✓ | ✓ | ✓ |
| EU AI Act compliance | ✓ | ✓ | ✓ |
| OWASP LLM Top 10 | ✓ | ✓ | ✓ |
| Agent permission matrix | ✓ | ✓ | ✓ |
| Behavioral drift detection | ✓ | ✓ | ✓ |
| OpenTelemetry export | ✓ | ✓ | ✓ |
| W3C PROV knowledge graph | ✓ | ✓ | ✓ |
| Multi-agent topology | ✓ | ✓ | ✓ |
| MITRE ATT&CK mapping | — | ✓ | ✓ |
| Causal chain analysis | — | ✓ | ✓ |
| AI security analysis (your API key) | — | ✓ | ✓ |
| Multi-session agent risk tracking | — | ✓ | ✓ |
| Org-level risk aggregation | — | ✓ | ✓ |
| PROV-JSON-LD export (W3C) | — | ✓ | ✓ |
| Blockchain anchoring (Polygon) | — | — | ✓ |
| Custom policy engine | — | — | ✓ |
| Team license management | — | — | ✓ |
| Actions per month | 1,000 | 50,000 | Unlimited |
| Data retention | 7 days | 90 days | 365 days |
| Agents | 1 | 10 | Unlimited |
Example Workflows
Free: Basic Audit Trail
# 1. Install
pip install inalign-mcp && inalign-install --local

# 2. Use Claude Code normally - actions are auto-recorded

# 3. When done, view your audit trail:
inalign-report

# Opens a React SPA dashboard in your browser with:
# - Overview (session stats, risk distribution, verified chains)
# - Sessions (drill-down to provenance, session log, data flows)
# - Security (GraphRAG risk analysis, MITRE ATT&CK findings)
# - AI Analysis (Zero-Trust Ollama / Advanced Claude/OpenAI)
Pro: AI-Powered Security Analysis
# 1. Install with license
inalign-install --license ial_pro_xxxxx

# 2. Use Claude Code normally

# 3. Analyze your session with AI
inalign-analyze --api-key sk-ant-xxxxx --latest --save

# Output:
#   Risk Score: 25/100 (LOW)
#   Findings: 2 items
#   - Sensitive file access detected (severity: medium)
#   - Unusual tool chain pattern (severity: low)
#   Recommendations: ...

# 4. View the full report with AI analysis tab
inalign-report
Enterprise: Team Monitoring
# Admin: Install on each developer's machine
inalign-install --license ial_enterprise_xxxxx

# Monitor agent risk across the team
# (via MCP tools in Claude Code):
#   list_agents_risk → see all agents' risk scores
#   get_user_risk    → team-level security posture
#   get_agent_risk   → individual agent trends