Documentation

Everything you need to understand your AI collaboration — from quick start to deep dive

Getting Started

What is BetterPrompt?

BetterPrompt analyzes your AI sessions to reveal how you collaborate with AI. It examines your prompts, patterns, and working style to generate a personalized assessment — so you can see clearly what's working and where to grow.

Whether you're a PM, designer, founder, or builder of any kind: if you use Claude to get things done, BetterPrompt can help you use it better.

Three Steps to Your Assessment

  1. Connect your AI tool — Install the Claude Code plugin once. It reads your local sessions from your machine and keeps the analysis inside Claude Code.
  2. Get your assessment — The tool analyzes your AI sessions in about a minute and generates a personalized report with your collaboration style and key insights.
  3. Start improving — Review your strengths, identify patterns that slow you down, and get curated resources to level up your AI collaboration.

Run the Command

In Claude Code, install the plugin once:

```bash
$ /plugin install betterprompt@betterprompt
```

Then ask Claude Code to analyze your coding sessions and generate a report. The plugin runs locally and can optionally sync the finished run to the dashboard later.

What Data is Analyzed

BetterPrompt reads your Claude AI sessions — the conversations you've had with Claude across your projects. It looks at how you ask questions, how you respond to Claude's output, and how you guide AI toward your goals.

Your session data stays on your machine. Only the analysis results (scores, insights, your report) are stored, and they remain local unless you explicitly sync them to a team server you control.

How It Works

Once you run the command, BetterPrompt works in two main stages.

Stage 1: Finding Your Best Sessions

The tool scans your AI sessions and picks the ones that best represent how you work — focusing on recent, substantive sessions where real collaboration happened. No AI is involved yet; this is purely about finding good signal.

Stage 2: Deep Analysis

Five specialized AI analysts examine your sessions in parallel, each focused on a different dimension of your collaboration:

  • Thinking Quality — How you plan and reason before acting
  • Communication Patterns — How clearly you express your intent to AI
  • Learning Behavior — How you adapt when things don't go as planned
  • Context Efficiency — How well you manage information flow with AI
  • Session Outcome — Whether your sessions actually achieve what you set out to do

The results are combined into your profile type, a personality-style narrative, and actionable focus areas — then delivered as a shareable report.

Advanced: Full pipeline breakdown (for the technically curious)

The pipeline runs across deterministic extraction, domain analysis, type classification, evidence verification, optional translation, and final report assembly inside Claude Code.

Phase 1: Data Extraction (CLI)

A memory-efficient deterministic algorithm identifies your highest-quality sessions. No LLM calls are made in this phase.

  1. File Discovery — Scans ~/.claude/projects/ for session files
  2. Pre-filter — Filters by size and recency to reduce candidates
  3. Quality Scoring — Scores top 100 candidates on message count, duration, tool diversity, recency
  4. Parse Selection — Parses the top 15 highest-quality sessions
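As a rough illustration, the quality-scoring step (step 3) might combine these signals as in the sketch below. The field names, weights, and caps here are invented for illustration and are not BetterPrompt's actual values.

```typescript
// Hypothetical sketch of Phase 1 quality scoring. Weights and caps are
// illustrative only.
interface SessionCandidate {
  path: string;
  messageCount: number;
  durationMinutes: number;
  toolNames: string[]; // tools invoked during the session
  ageDays: number;     // days since last activity
}

function qualityScore(s: SessionCandidate): number {
  const toolDiversity = new Set(s.toolNames).size;
  const recencyBoost = Math.max(0, 30 - s.ageDays) / 30; // favors the last month
  return (
    Math.min(s.messageCount, 200) * 1.0 +    // substantive back-and-forth
    Math.min(s.durationMinutes, 120) * 0.5 + // sustained sessions
    toolDiversity * 5 +                      // varied tool use
    recencyBoost * 20                        // recent work
  );
}

// Rank candidates and keep the top N for full parsing (step 4).
function selectTop(candidates: SessionCandidate[], n: number): SessionCandidate[] {
  return [...candidates]
    .sort((a, b) => qualityScore(b) - qualityScore(a))
    .slice(0, n);
}
```

Because the scoring is deterministic, the same session logs always select the same sessions, which keeps this phase cheap and reproducible.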

Phase 1.5: Session Summarization

Generates concise one-line summaries for each session using a single batched LLM call. These summaries provide context for downstream workers and appear in your activity timeline.

Phase 2: Insight Workers

Five specialized workers run in parallel, each analyzing a distinct domain. Alongside them, ProjectSummarizer generates project-level summaries and WeeklyInsightGenerator produces weekly narrative highlights. Total: 7 parallel LLM calls.
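The fan-out in this phase can be pictured with `Promise.all`. In the sketch below, `analyze` is a placeholder standing in for a real LLM call; only the worker names come from the description above.

```typescript
// Illustrative sketch of the Phase 2 fan-out: all seven workers launch at once.
type WorkerResult = { domain: string; insights: string[] };

// Placeholder for a real per-worker LLM call.
async function analyze(domain: string, sessions: string[]): Promise<WorkerResult> {
  return { domain, insights: [`${domain}: ${sessions.length} sessions examined`] };
}

async function runInsightWorkers(sessions: string[]): Promise<WorkerResult[]> {
  const domains = [
    "ThinkingQuality",
    "CommunicationPatterns",
    "LearningBehavior",
    "ContextEfficiency",
    "SessionOutcome",
    "ProjectSummarizer",
    "WeeklyInsightGenerator",
  ];
  // Promise.all issues all seven calls concurrently and preserves order.
  return Promise.all(domains.map((d) => analyze(d, sessions)));
}
```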

Phase 2.5: Type Classification

A single LLM call classifies your style into a 5×3 matrix (5 types × 3 control levels = 15 combinations) and generates an MBTI-style personality narrative describing your behavioral patterns.
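The matrix itself is easy to enumerate. Using the style and mode names from the "Your Report" section of this document, a minimal sketch:

```typescript
// Enumerating the 5×3 profile matrix: 5 builder styles × 3 navigation modes.
// The names come from the report section; the combination logic is illustrative.
const STYLES = ["Architect", "Analyst", "Conductor", "Speedrunner", "Trendsetter"];
const MODES = ["Explorer", "Navigator", "Cartographer"];

function allProfiles(): string[] {
  // 5 × 3 = 15 possible profile labels
  return STYLES.flatMap((style) => MODES.map((mode) => `${style} ${mode}`));
}
```

The classification call picks one of these 15 combinations and writes the narrative around it.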

Phase 2.75: Knowledge Matching

A deterministic resource matcher (no LLM) maps your identified strengths and growth areas to curated professional resources from a knowledge database.
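One way such a deterministic matcher could work is tag overlap between your growth areas and resource metadata. Everything in this sketch (the resource shape, tags, and ranking rule) is hypothetical.

```typescript
// Hypothetical deterministic resource matcher: rank curated resources by how
// many of their tags overlap with the user's growth areas. No LLM involved.
interface Resource {
  title: string;
  tags: string[];
}

function matchResources(growthAreas: string[], db: Resource[]): Resource[] {
  const wanted = new Set(growthAreas.map((a) => a.toLowerCase()));
  return db
    .map((r) => ({
      r,
      hits: r.tags.filter((t) => wanted.has(t.toLowerCase())).length,
    }))
    .filter((x) => x.hits > 0)          // keep only resources with some overlap
    .sort((a, b) => b.hits - a.hits)    // strongest overlap first
    .map((x) => x.r);
}
```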

Phase 2.8: Evidence Verification

An LLM-based verifier cross-checks evidence quotes against original session utterances, ensuring all referenced quotes are accurate and traceable.
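The property being enforced is simple to state: every evidence quote must appear verbatim in some original utterance. BetterPrompt's actual verifier is LLM-based; the deterministic check below is only an illustration of the property itself.

```typescript
// Minimal deterministic sketch of the traceability property the verifier
// enforces: each quote must be found verbatim in at least one utterance.
function quotesAreTraceable(quotes: string[], utterances: string[]): boolean {
  return quotes.every((q) => utterances.some((u) => u.includes(q)));
}
```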

Phase 3: Content Writer

Generates the top focus areas narrative from all prior insights. A single LLM call synthesizes worker outputs into actionable guidance.

Phase 4: Translator

Conditional translation for non-English users. It is skipped entirely for English reports (0 calls) and uses 1 LLM call for other languages.

Pipeline Overview

Local session logs --> Phase 1 extraction --> Session summaries
        |
  +-----------------------------------------------+
  |  5 domain skills + stage outputs              |
  |  ThinkingQuality ----+                        |
  |  CommunicationPatt --+                        |
  |  LearningBehavior ---+--> Canonical run       |
  |  ContextEfficiency --+                        |
  |  SessionOutcome -----+                        |
  +-----------------------------------------------+
        |
  Type classification --> Evidence verification
        |
  Content writer --> Optional translator
        |
   Final assembly --> Local report / Optional sync

Your Report

Your AI Collaboration Style

Your report starts with a personality-style profile: a combination of your primary working style and how you navigate ambiguity. Think of it like an MBTI for your AI collaboration habits.

The 5 Builder Styles

| Style | Description |
| --- | --- |
| Architect | Plans extensively before acting — knows the destination before starting |
| Analyst | Verifies and investigates thoroughly — never accepts output without checking |
| Conductor | Orchestrates AI tools and workflows — focuses on directing, not doing |
| Speedrunner | Optimizes for velocity — ships fast, iterates faster |
| Trendsetter | Explores cutting-edge approaches — always pushing what's possible |

The 3 Navigation Modes

| Mode | Description |
| --- | --- |
| Explorer | Open exploration — discovering solutions through experimentation |
| Navigator | Balanced navigation — exploration with a loose route in mind |
| Cartographer | Strategic mapping — charting the territory before advancing |

The 6 Dimensions

Beyond your style profile, your report scores six dimensions that capture the nuanced aspects of how you work with AI:

| Dimension | What it measures |
| --- | --- |
| AI Collaboration Mastery | Planning quality, verification habits, instruction clarity |
| Context Engineering | How well you manage what AI knows at any given moment |
| Tool Mastery | Breadth and intentionality of AI tool usage |
| Burnout Risk | Session length, frequency, and cognitive load patterns |
| AI Control Index | Strategic direction vs. reactive acceptance of AI output |
| Skill Resilience | Your independent capability alongside AI reliance |

Our Mission

The Question

Are you getting better with AI—or just more dependent?

Our Philosophy

Technology moves fast. AI is making it exponential.

Here's the uncomfortable truth: when thinking becomes optional, we stop doing it. And when we stop thinking, we stop growing.

AI isn't the problem. Unconscious dependency is.

The builders who thrive won't be those who delegate everything—they'll be those who know when to think for themselves and when to let AI multiply their work. AI should be a force multiplier, not a replacement for judgment.

Our Vision

We're building a future where humans use tools—not the other way around.

Our goal is simple: help you see yourself clearly. How are you actually using AI? What patterns keep repeating? Are you thinking critically—or just accepting?

Self-awareness is the first step to growth. We start with builders at the frontier of human-AI collaboration — and we're just getting started.

Our Values

Think First

AI amplifies judgment. It doesn't replace the need for it.

Open Source

Our methodology is transparent. See exactly how we evaluate.

Privacy First

Your data stays yours. Analysis happens locally.

Technical Details

For builders who want to understand exactly what's happening under the hood — or who need to troubleshoot, audit, or extend BetterPrompt.

Session data format (JSONL structure)

BetterPrompt reads Claude Code session logs from ~/.claude/projects/. These are JSONL files containing your conversation history with Claude.

JSONL Message Types

Each session file contains message blocks of these types:

  • user — Your prompts and messages
  • assistant — Claude's responses
  • tool_use — Tool invocations (Read, Edit, Bash, etc.)
  • tool_result — Results from tool executions
  • queue-operation — Internal queue metadata (not analyzed)
  • file-history-snapshot — File state snapshots (not analyzed)
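A minimal sketch of reading one of these files, assuming each line is a JSON object carrying a `type` field as listed above:

```typescript
// Sketch of parsing a JSONL session file, keeping only the message types that
// are analyzed and dropping internal metadata records.
const ANALYZED = new Set(["user", "assistant", "tool_use", "tool_result"]);

function parseSession(jsonl: string): Array<{ type: string }> {
  return jsonl
    .split("\n")
    .filter((line) => line.trim().length > 0)          // skip blank lines
    .map((line) => JSON.parse(line) as { type: string })
    .filter((msg) => ANALYZED.has(msg.type));          // drop queue/snapshot records
}
```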

Path encoding

Claude Code encodes project paths by replacing / with -. For example, /Users/you/projects/myapp becomes -Users-you-projects-myapp.

BetterPrompt handles this automatically via encodeProjectPath and decodeProjectPath utilities.
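Assuming the simple substitution described above, the two utilities could be sketched as follows. Note that this naive decode is lossy for path segments that themselves contain `-`, so the real utilities may need to do more.

```typescript
// Sketch of the path encoding described above: "/" becomes "-".
function encodeProjectPath(p: string): string {
  return p.replace(/\//g, "-");
}

// Naive inverse: every "-" becomes "/". Lossy if a segment contains "-".
function decodeProjectPath(encoded: string): string {
  return encoded.replace(/-/g, "/");
}
```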

Plugin commands reference

Install

```bash
$ /plugin install betterprompt@betterprompt
```

Installs the BetterPrompt Claude Code plugin, including its MCP tools, hooks, and analysis skills.

Run Analysis

```text
Analyze my coding sessions and generate a report
```

Tells Claude Code to run the full local BetterPrompt pipeline and open your report.

Context Engineering dimension — scoring detail

The Context Engineering score evaluates four strategies:

  • WRITE — Proactively providing relevant context upfront
  • SELECT — Choosing which information to include or exclude
  • COMPRESS — Summarizing or condensing context to stay within limits
  • ISOLATE — Keeping concerns separate to avoid noise
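As a hypothetical illustration, a score over these four strategies could count capped evidence per strategy and normalize the average to a 0–100 scale. The cap and weighting below are invented, not BetterPrompt's actual formula.

```typescript
// Hypothetical Context Engineering score: cap the evidence counted per
// strategy, average across strategies, and scale to 0–100.
type Strategy = "WRITE" | "SELECT" | "COMPRESS" | "ISOLATE";

const STRATEGIES: Strategy[] = ["WRITE", "SELECT", "COMPRESS", "ISOLATE"];

function contextScore(
  evidence: Record<Strategy, number>,
  maxPerStrategy = 5
): number {
  const per = STRATEGIES.map(
    (s) => Math.min(evidence[s], maxPerStrategy) / maxPerStrategy
  );
  return (per.reduce((sum, v) => sum + v, 0) / STRATEGIES.length) * 100;
}
```

Capping per strategy means a session can't max the score by leaning on one strategy alone; all four have to show up.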