Claude Code /insights: Your Personalized AI Usage Report
Table of Contents
- Why /insights exists
- What /insights does
- The five-phase analysis pipeline
  - Phase 1: Lightweight scan
  - Phase 2: Cache + parse
  - Phase 3: Session facet extraction (SessionFacets)
  - Phase 4: Cross-session aggregation (AggregatedData)
  - Phase 5: Parallel insight generation
- The seven report sections
  - 1. Project Areas
  - 2. Interaction Style
  - 3. What Works
  - 4. Friction Analysis
  - 5. Suggestions
  - 6. On the Horizon
  - 7. Fun Ending
- The summary header
- Where the report is saved
- Why Opus
- Where data comes from and lives
- Technical detail: lazy loading
- When to use it
- Closing thoughts
Why /insights exists
You use Claude Code every day — writing code, fixing bugs, refactoring. But have you ever stopped to ask: how exactly are you using it? Which workflows feel effortless? Where do you keep getting stuck?
Most people never stop to reflect on this. You just keep using it the same way: friction points persist, and good habits never get locked in.
/insights answers those questions for you.
What /insights does
/insights is Claude Code’s session analysis command. It scans all your locally stored Claude Code sessions, analyzes your usage patterns with Claude Opus, and generates an HTML report.
/insights
After running it, you’ll see a progress message while your sessions are analyzed, followed by the analysis output and the path to the saved report.
The five-phase analysis pipeline
Behind /insights is a complete data processing pipeline — not just a simple number aggregator.
Phase 1: Lightweight scan
Claude Code stores all sessions under ~/.claude/projects/ as .jsonl files, organized by project and session ID. Phase 1 does a filesystem scan reading only metadata, without loading full session content — it’s fast.
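A minimal Python sketch of what such a metadata-only scan could look like, assuming the directory layout described above. The function name and returned fields are illustrative, not the actual implementation:

```python
from pathlib import Path

def scan_sessions(root: Path = Path.home() / ".claude" / "projects"):
    """Scan for session files, reading only filesystem metadata (fast)."""
    sessions = []
    for jsonl in root.glob("*/*.jsonl"):  # <project>/<session-id>.jsonl
        st = jsonl.stat()  # metadata only; file content is never read
        sessions.append({
            "project": jsonl.parent.name,
            "session_id": jsonl.stem,
            "size_bytes": st.st_size,
            "modified": st.st_mtime,
        })
    # Most recent first, which matches the prioritization in Phase 2
    sessions.sort(key=lambda s: s["modified"], reverse=True)
    return sessions
```

Because only `stat()` is called, the scan stays fast even with hundreds of large session files.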
Phase 2: Cache + parse
Analysis results are cached under ~/.claude/usage-data/:
- session-meta/ — statistical summary for each session
- facets/ — AI-extracted dimensions for each session
Only new sessions are re-parsed. A maximum of 200 sessions are analyzed, with the most recent ones prioritized when the limit is exceeded.
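The cache check and the 200-session cap can be sketched as follows; this is an assumption about the logic, not the real code, and the helper name is hypothetical:

```python
from pathlib import Path

MAX_SESSIONS = 200  # analysis cap described above

def sessions_to_parse(session_ids, cache_dir: Path):
    """Return session IDs that still need parsing.

    session_ids must be sorted most-recent-first; anything already
    cached (one JSON file per session under session-meta/) is skipped,
    and at most MAX_SESSIONS of the newest sessions are considered.
    """
    selected = session_ids[:MAX_SESSIONS]
    cached = {p.stem for p in cache_dir.glob("*.json")}
    return [sid for sid in selected if sid not in cached]
```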
Phase 3: Session facet extraction (SessionFacets)
For each uncached session, Claude Opus is called to extract structured dimensions:
| Field | Meaning |
|---|---|
| underlying_goal | What you actually wanted to accomplish |
| outcome | Result (fully/mostly/partially achieved, or not achieved) |
| brief_summary | Session summary |
| goal_categories | Task classification — see table below |
| user_satisfaction_counts | Your satisfaction signals (happy/satisfied/dissatisfied/frustrated) |
| claude_helpfulness | How helpful Claude was (unhelpful → essential) |
| friction_counts | Count of each friction type |
| friction_detail | Specific description of friction |
| primary_success | Key success factor |
| user_instructions_to_claude | Instructions you gave Claude during the session |
Goal categories:
| Category | Description |
|---|---|
| debug_investigate | Debugging / investigation |
| implement_feature | Implementing a new feature |
| fix_bug | Fixing a bug |
| write_script_tool | Writing a script or tool |
| refactor_code | Refactoring code |
| configure_system | System configuration |
| create_pr_commit | Creating a PR or commit |
| analyze_data | Data analysis |
| understand_codebase | Understanding a codebase |
| write_tests | Writing tests |
| write_docs | Writing documentation |
| deploy_infra | Deployment / infrastructure |
One key extraction rule: only count actions the user explicitly initiated — not work Claude decided to do autonomously. “Help me implement the login feature” counts; Claude independently browsing a few extra files does not.
Friction categories:
| Category | Meaning |
|---|---|
| misunderstood_request | Claude interpreted your intent incorrectly |
| wrong_approach | Right goal, wrong solution method |
| buggy_code | Generated code that didn’t work |
| user_rejected_action | You stopped Claude mid-action |
| excessive_changes | Over-engineered or changed too much |
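Putting the tables together, the SessionFacets structure might look like the following dataclass. The field names come from the tables above; the types and defaults are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class SessionFacets:
    underlying_goal: str
    outcome: str                     # "fully" | "mostly" | "partially" | "not_achieved"
    brief_summary: str
    goal_categories: list[str]       # e.g. ["fix_bug", "write_tests"]
    user_satisfaction_counts: dict[str, int]  # happy/satisfied/dissatisfied/frustrated
    claude_helpfulness: str          # "unhelpful" ... "essential"
    friction_counts: dict[str, int] = field(default_factory=dict)
    friction_detail: str = ""
    primary_success: str = ""
    user_instructions_to_claude: list[str] = field(default_factory=list)
```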
Phase 4: Cross-session aggregation (AggregatedData)
Global statistics are aggregated across all sessions:
Basic stats:
- Total sessions, messages, and usage duration (hours)
- Total input/output token counts
- Days active, average messages per day
Code changes:
- Total lines added, removed, and files modified
Tool usage:
- Call count distribution for each tool
- Sessions that used Task Agent
- Sessions that used MCP
- Sessions that used Web Search / Web Fetch
Collaboration patterns:
multi_clauding: detects whether you ran multiple Claude Code sessions simultaneously, determined by timestamp overlap. It records the overlap event count, the sessions involved, and the messages sent during overlaps.
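Overlap detection by timestamp is a standard interval-intersection test. A sketch of how it might work, assuming each session reduces to a (start, end) pair; the function is illustrative:

```python
def detect_overlaps(sessions):
    """Count pairwise timestamp overlaps between sessions.

    sessions is a list of (start, end) timestamp pairs; two sessions
    overlap when their intervals intersect. Returns the overlap event
    count and the indices of the sessions involved.
    """
    events = 0
    involved = set()
    for i, (s1, e1) in enumerate(sessions):
        for j, (s2, e2) in enumerate(sessions[i + 1:], start=i + 1):
            if s1 < e2 and s2 < e1:  # interval intersection test
                events += 1
                involved.update({i, j})
    return events, involved
```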
Response times:
- Median and average time between Claude’s response and your next message
- Used to characterize whether you prefer rapid iteration or deliberate, thoughtful follow-ups
Time-of-day distribution:
- Records the hour of each user message, used to identify your peak usage hours
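The response-time and time-of-day statistics above are straightforward to compute. A minimal sketch under the assumption that timestamps are Unix epoch seconds (the real implementation presumably uses local time for the hour histogram):

```python
from statistics import median, mean
from datetime import datetime, timezone
from collections import Counter

def response_gaps(pairs):
    """Median and mean delay (seconds) between Claude's response and
    your next message, given (claude_ts, user_ts) timestamp pairs."""
    gaps = [user_ts - claude_ts for claude_ts, user_ts in pairs]
    return median(gaps), mean(gaps)

def hour_histogram(timestamps):
    """Count user messages per hour of day (UTC used here for determinism)."""
    return Counter(datetime.fromtimestamp(t, tz=timezone.utc).hour
                   for t in timestamps)
```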
Phase 5: Parallel insight generation
The aggregated data and session summaries are fed to Claude Opus, which generates all report sections in parallel — each section is an independent API call with up to 8,192 output tokens.
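Fan-out of independent API calls is a natural fit for async concurrency. A sketch of the pattern, with a stubbed-out `generate_section` standing in for the real Opus request (the section names and function are assumptions):

```python
import asyncio

SECTIONS = ["project_areas", "interaction_style", "what_works",
            "friction", "suggestions", "horizon", "fun_ending"]

async def generate_section(name: str, aggregated: dict) -> str:
    # Stand-in for a real Opus API call with max_tokens=8192;
    # each section is an independent request.
    await asyncio.sleep(0)  # simulate network I/O
    return f"<section:{name}>"

async def generate_report(aggregated: dict) -> dict:
    # All seven sections are generated concurrently, not sequentially.
    bodies = await asyncio.gather(
        *(generate_section(s, aggregated) for s in SECTIONS))
    return dict(zip(SECTIONS, bodies))
```

With real network calls, `asyncio.gather` means the report takes roughly as long as the slowest single section rather than the sum of all seven.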
The seven report sections
1. Project Areas
Identifies 4–5 categories of projects you worked on, each with a session count and 2–3 sentences describing what you worked on and how you used Claude Code for it.
2. Interaction Style
The most interesting section. Uses 2–3 paragraphs to analyze how you actually interact with Claude:
- Do you write detailed specs upfront, or iterate as you go?
- Do you interrupt Claude often, or let it finish before reviewing?
- What recurring interaction patterns show up?
Ends with a single sentence capturing your most distinctive interaction style.
3. What Works
Lists 3 workflows where you’re performing impressively well — with titles and descriptions written in second person (“you”), as if someone who knows your work is summarizing your strengths.
4. Friction Analysis
Lists 3 friction categories, each with:
- A sentence explaining what the friction is and what could be done differently
- 2 specific examples drawn from real sessions
This is one of the most valuable sections in the entire report — many habitual inefficiencies are invisible to you, but the session data captures them all.
5. Suggestions
Concrete suggestions across three dimensions:
CLAUDE.md additions: Based on instructions you’ve repeated across multiple sessions, these are rules worth hardcoding into your CLAUDE.md so you never have to say them again. For example, if you’ve told Claude “run the tests after making changes” in multiple sessions, that’s a prime candidate to add to CLAUDE.md.
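As a purely hypothetical illustration, the repeated instruction in that example might become a CLAUDE.md entry like:

```markdown
# Workflow rules
- Always run the test suite after making code changes.
```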
Features to try: Selected from MCP, custom skills, Hooks, Headless mode, and Task Agents — the ones best suited to your current workflow, each with a copyable command or config snippet.
Usage pattern suggestions: Each suggestion comes with a ready-to-use prompt, making it easy to act on immediately.
6. On the Horizon
Based on your usage patterns, suggests 3 advanced directions you haven’t explored yet — autonomous workflows, parallel agents, test-driven development, and more — each with a “try this now” prompt.
7. Fun Ending
Finds one memorable or amusing moment from all your sessions, presented as a headline with brief context. Not a statistic — just a human touch to close the report.
The summary header
The report opens with a quantitative overview:
Sessions analyzed: 87 (of 142 scanned)
Total messages: 1,203
Total duration: 47.3 hours
Git commits: 234 | Git pushes: 89
Date range: 2026-01-15 → 2026-04-07
Where the report is saved
The HTML report is saved to:
~/.claude/data/report.html
After running /insights, the terminal outputs the file path. Open it in a browser to view the full formatted report.
Why Opus
/insights uses Claude Opus for all analysis — both Phase 3 facet extraction and Phase 5 insight generation.
The reason is straightforward: this task requires deep comprehension of large amounts of unstructured session data, pattern recognition, and causal inference across hundreds of conversations. That’s exactly what Opus is built for. Speed isn’t the priority here — report quality is.
Where data comes from and lives
/insights reads only local data. Nothing is uploaded to the cloud:
- Raw session data: ~/.claude/projects/<project-hash>/<session-id>.jsonl
- Stats cache: ~/.claude/usage-data/session-meta/
- AI analysis cache: ~/.claude/usage-data/facets/
- Generated report: ~/.claude/data/report.html
The caching system means the second run is much faster — only new sessions need re-analysis; everything else is read from cache.
Technical detail: lazy loading
The /insights implementation file is 113 KB and includes heavy HTML rendering dependencies. To avoid slowing down Claude Code’s startup time, this module uses lazy loading: it’s only imported when you actually run /insights, and carries zero startup overhead the rest of the time.
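The lazy-loading pattern itself is simple: defer the import until the function that needs it actually runs. A sketch in Python (the module name is a stand-in; the real implementation is inside Claude Code):

```python
import importlib

# Stand-in for the heavy HTML-rendering module; nothing is imported
# at startup, only when the command runs.
_HEAVY_MODULE = "html"

def run_insights():
    """Import the heavy module only when /insights is actually invoked."""
    heavy = importlib.import_module(_HEAVY_MODULE)
    return heavy
```

Python caches imports in `sys.modules`, so repeated invocations pay the import cost only once.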
When to use it
A few practical scenarios:
Monthly retrospective: See what types of work dominated your month, how effective you were, and where friction concentrated.
Optimize your CLAUDE.md: Use the suggestions section to identify instructions you keep repeating to Claude and hardcode them — stop explaining the same things in every session.
Discover blind spots: The features_to_try section might surface features you’ve never used but that fit your workflow well — MCP for database access, Hooks for auto-formatting, Headless mode for CI integration.
Track output: Git commits, line count changes, and session duration provide a useful record of productivity over time.
Closing thoughts
/insights does something genuinely interesting: it uses AI to analyze how you collaborate with AI.
It’s not just counting numbers. It’s trying to understand your actual working patterns — what flows, what blocks, where to go next. Across seven sections, it tells you what you’re doing well, surfaces friction you didn’t realize was there, and hands you prompts you can copy and try immediately.
Understanding how you use your tools is the prerequisite for using them well.