feat(settings-audit): Add Claude Code settings optimizer skill

- Add new 00-claude-code-setting skill for token usage optimization
- Includes audit scripts for MCP servers, CLAUDE.md, and extensions
- Auto-fix capability with backup for serverInstructions and frontmatter
- Add YAML frontmatter to 17 command files
- Target: keep baseline under 30% of 200K context limit

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 18:00:24 +07:00
parent 12ae63e0a7
commit 81afe587ce
28 changed files with 1547 additions and 1 deletion

View File

@@ -0,0 +1,79 @@
# Claude Code Settings Optimizer
Self-audit and optimize Claude Code configuration for maximum token efficiency.
## Purpose
- Analyze token usage across MCP servers, CLAUDE.md, and extensions
- Identify optimization opportunities
- Auto-fix common issues with backup safety
- Keep working context at 70%+ of 200K limit
## Quick Start
```bash
# Install
cd custom-skills/00-claude-code-setting/code
chmod +x install.sh
./install.sh
# Run audit
python3 scripts/run_audit.py
# Apply fixes
python3 scripts/auto_fix.py --apply
```
## What Gets Analyzed
| Component | Checks |
|-----------|--------|
| **MCP Servers** | serverInstructions presence, token estimates, load strategy |
| **CLAUDE.md** | Line count, token estimate, structure quality |
| **Commands** | Frontmatter, description, size limits |
| **Skills** | SKILL.md presence, size limits |
| **Agents** | Tool restrictions |
## Target Metrics
| Metric | Target | Max |
|--------|--------|-----|
| CLAUDE.md tokens | 2,000 | 3,000 |
| MCP tokens (with Tool Search) | 5,000 | 10,000 |
| Baseline total | <30% | <40% |
| Available for work | >70% | — |
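The budget arithmetic behind these targets can be sanity-checked with a short sketch. The per-component numbers below are the heuristic estimates from the table, not measured token counts:

```python
# Rough baseline budget check using the target figures above.
# All per-component numbers are heuristic estimates, not exact token counts.
CONTEXT_LIMIT = 200_000

def baseline_pct(claude_md=2_000, mcp=5_000, skills=500, system=5_000):
    """Baseline usage as a percentage of the 200K context limit."""
    return (claude_md + mcp + skills + system) * 100 / CONTEXT_LIMIT

print(f"Baseline: {baseline_pct():.2f}% of context")  # → Baseline: 6.25% of context
```

At the target figures the baseline sits well under the 30% ceiling, leaving the buffer the audit aims for.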
## Files
```
00-claude-code-setting/
├── README.md
└── code/
    ├── CLAUDE.md                 # Skill directive
    ├── install.sh                # Installation script
    ├── commands/
    │   └── settings-audit.md     # /settings-audit command
    ├── scripts/
    │   ├── run_audit.py          # Main orchestrator
    │   ├── analyze_tokens.py     # Token analysis
    │   ├── analyze_extensions.py
    │   └── auto_fix.py           # Auto-fix with backup
    └── references/
        └── token-optimization.md
```
## Auto-Fix Capabilities
**Safe (automatic with backup):**
- Add serverInstructions to MCP servers
- Add frontmatter to commands
**Manual review required:**
- Disabling MCP servers
- Restructuring CLAUDE.md
- Removing extensions
## Requirements
- Python 3.8+
- PyYAML (optional, for better frontmatter parsing)

View File

@@ -0,0 +1,119 @@
# Claude Code Settings Optimizer
Self-audit and optimize Claude Code configuration for maximum token efficiency and performance.
## Core Focus
1. **Token Budget Management** - Keep baseline under 30% of 200K context
2. **MCP Tool Loading Strategy** - Essential tools always, others lazy-loaded
3. **CLAUDE.md Optimization** - Concise, structured, under 200 lines
4. **Context Preservation** - Maintain original prompt context throughout session
## Quick Commands
```bash
# Run full audit
python3 scripts/run_audit.py
# Check token usage only
python3 scripts/analyze_tokens.py
# Auto-fix with preview
python3 scripts/auto_fix.py
# Apply fixes
python3 scripts/auto_fix.py --apply
```
## Audit Scope
### 1. Token Usage Analysis
| Component | Target | Max | Action if Exceeded |
|-----------|--------|-----|-------------------|
| CLAUDE.md | 2,000 | 3,000 | Compress/split |
| MCP Servers | 5,000 | 10,000 | Enable Tool Search, add serverInstructions |
| Skills metadata | 500 | 1,000 | Reduce descriptions |
| **Working space** | **>140,000** | — | Goal: 70%+ available |
### 2. MCP Server Strategy
**Load Strategy Classification:**
| Strategy | Criteria | Action |
|----------|----------|--------|
| `always` | Essential for daily work (Playwright, Notion) | Keep loaded |
| `lazy` | Occasionally needed (GitHub, Slack) | Load on demand |
| `disable` | Rarely used or token-heavy (Zapier 50+ tools) | Turn off |
**Critical Check:** Every MCP server MUST have `serverInstructions` for Tool Search to work efficiently.
### 3. CLAUDE.md Health
Check for:
- Line count (warn >200)
- Token estimate (warn >3,000)
- Structure quality (headers, lists vs wall of text)
- Redundant information
- Information Claude already knows (don't repeat)
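The line and token checks above reduce to a word-count heuristic; a minimal sketch using the same ~1.3 tokens-per-word factor the bundled analyzer scripts assume:

```python
TOKENS_PER_WORD = 1.3  # rough heuristic, matching the bundled analyzer scripts

def claude_md_stats(content: str):
    """Return (line_count, approximate_tokens) for a CLAUDE.md body."""
    lines = len(content.split('\n'))
    tokens = int(len(content.split()) * TOKENS_PER_WORD)
    return lines, tokens

lines, tokens = claude_md_stats("# Role\nSEO assistant.\n\n## Preferences\n- Output: markdown\n")
```

Word-count estimates are crude; treat the thresholds as warning lines, not hard limits.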
### 4. Extension Efficiency
- Commands: Has frontmatter? Under 100 lines?
- Skills: SKILL.md present? Under 500 lines?
- Agents: Tools restricted appropriately?
## Output Report
```markdown
# Claude Code Settings Audit
## Token Budget
- Baseline usage: X / 200,000 (Y%)
- Available for work: Z tokens
## Health Status
- Overall: [Good/Needs Attention/Critical]
- MCP Servers: X configured, Y missing instructions
- CLAUDE.md: X lines, ~Y tokens
## Findings
### Critical (must fix)
### Warnings (should fix)
### Passing (good)
## Recommendations
[Prioritized action items]
```
## Auto-Fix Capabilities
Safe fixes applied automatically (with backup):
- Add `serverInstructions` to MCP servers
- Add frontmatter to commands missing it
- Suggest CLAUDE.md compression
Manual review required:
- Disabling MCP servers
- Restructuring CLAUDE.md content
- Removing unused extensions
## Best Practices
### Token Optimization Principles
1. **CLAUDE.md**: Only include what Claude doesn't already know
2. **MCP Servers**: Use `serverInstructions` for Tool Search discovery
3. **Skills**: Keep SKILL.md brief, details in `references/`
4. **Context**: Run `/compact` at 70% usage, `/clear` between unrelated tasks
### Ideal Configuration
```
Context Budget: 200,000 tokens
├── System prompt: ~5,000 (fixed)
├── CLAUDE.md: ~2,000 (your control)
├── MCP tools: ~5,000 (with Tool Search)
├── Skills: ~500 (metadata only)
└── Available: ~187,500 (93.75%)
```

View File

@@ -0,0 +1,119 @@
---
description: Audit Claude Code settings for token efficiency. Analyzes MCP servers, CLAUDE.md, extensions and provides optimization recommendations.
argument-hint: [--fix] [--tokens-only]
allowed-tools: Read, Glob, Grep, Bash, Write
---
# Claude Code Settings Audit
Audit your Claude Code configuration to optimize token usage and performance.
## Arguments
- `--fix`: Apply safe fixes automatically (with backup)
- `--tokens-only`: Quick token budget check only
## Audit Process
### 1. Token Budget Check
Calculate current baseline token usage:
```bash
# Quick estimation
echo "=== Token Budget Estimation ==="
```
**Targets:**
- CLAUDE.md: <3,000 tokens (~200 lines)
- MCP with Tool Search: <10,000 tokens
- Working space: >70% of 200K context
### 2. MCP Server Analysis
Check locations:
- `~/.claude/settings.json`
- `./.claude/settings.json`
For each server, evaluate:
- [ ] Has `serverInstructions` (CRITICAL for Tool Search)
- [ ] Appropriate load strategy (always/lazy/disable)
- [ ] Token impact estimation
**Server Token Estimates:**
| Server | Tokens | Strategy |
|--------|--------|----------|
| Playwright | ~13,500 | always |
| Notion | ~5,000 | always |
| GitHub | ~18,000 | lazy |
| PostgreSQL | ~8,000 | lazy |
| Zapier | ~25,000+ | disable |
### 3. CLAUDE.md Analysis
Check:
- `~/.claude/CLAUDE.md` (global)
- `./CLAUDE.md` (project)
Evaluate:
- [ ] Line count (critical if >200)
- [ ] Token estimate (critical if >3,000)
- [ ] Good structure (headers, lists, not wall of text)
- [ ] No redundant information
- [ ] No information Claude already knows
### 4. Extensions Analysis
**Commands** (`~/.claude/commands/`):
- Has YAML frontmatter with description?
- Under 100 lines?
**Skills** (`~/.claude/skills/`):
- Has SKILL.md?
- Under 500 lines?
**Agents** (`~/.claude/agents/`):
- Tools appropriately restricted?
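The frontmatter check above is simple to express standalone; a minimal sketch (the bundled analyzer does something similar, plus actual YAML parsing):

```python
def has_frontmatter(text: str) -> bool:
    """True if a markdown file opens with a closed YAML frontmatter block."""
    return text.startswith('---') and text.find('---', 3) != -1

print(has_frontmatter("---\ndescription: Audit settings\n---\n# Body"))  # → True
print(has_frontmatter("# Body only"))  # → False
```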
## Output Format
```markdown
# Settings Audit Report
Generated: [timestamp]
## Token Budget Summary
| Component | Tokens | % of 200K | Status |
|-----------|--------|-----------|--------|
| CLAUDE.md | X | Y% | OK/WARN/CRITICAL |
| MCP Servers | X | Y% | OK/WARN/CRITICAL |
| Extensions | X | Y% | OK/WARN/CRITICAL |
| **Baseline** | **X** | **Y%** | — |
| **Available** | **X** | **Y%** | — |
## Critical Issues
[Must fix immediately]
## Warnings
[Should address soon]
## Recommendations
1. [Prioritized action]
2. [...]
## Auto-Fix Available
[List of safe fixes that can be applied with --fix]
```
## Quick Fixes
If `--fix` is provided:
1. Add `serverInstructions` to MCP servers missing them
2. Add frontmatter to commands missing it
3. Create backup before any changes
## Execution
1. Run token budget analysis first
2. Analyze each component
3. Generate prioritized report
4. Offer auto-fix if issues found
5. Save report to `./settings-audit-report.md`

View File

@@ -0,0 +1,68 @@
#!/bin/bash
# Claude Code Settings Optimizer - Installation Script
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
CLAUDE_DIR="${HOME}/.claude"
echo "Claude Code Settings Optimizer - Installer"
echo "==========================================="
echo ""
# Create directories
echo "Creating directories..."
mkdir -p "${CLAUDE_DIR}/commands"
mkdir -p "${CLAUDE_DIR}/scripts/settings-audit"
mkdir -p "${CLAUDE_DIR}/references"
# Install command
echo "Installing /settings-audit command..."
cp "${SCRIPT_DIR}/commands/settings-audit.md" "${CLAUDE_DIR}/commands/"
# Install scripts
echo "Installing scripts..."
cp "${SCRIPT_DIR}/scripts/"*.py "${CLAUDE_DIR}/scripts/settings-audit/"
chmod +x "${CLAUDE_DIR}/scripts/settings-audit/"*.py
# Install references
echo "Installing references..."
cp "${SCRIPT_DIR}/references/"*.md "${CLAUDE_DIR}/references/"
# Check Python
echo ""
echo "Checking Python..."
if command -v python3 &> /dev/null; then
    echo " Python 3: $(python3 --version)"
else
    echo " Warning: Python 3 not found"
fi
# Check PyYAML (optional)
if python3 -c "import yaml" 2>/dev/null; then
    echo " PyYAML: installed"
else
    echo " PyYAML: not installed (optional)"
    echo " Install with: pip3 install pyyaml"
fi
echo ""
echo "Installation complete!"
echo ""
echo "Installed to:"
echo " Command: ${CLAUDE_DIR}/commands/settings-audit.md"
echo " Scripts: ${CLAUDE_DIR}/scripts/settings-audit/"
echo " References: ${CLAUDE_DIR}/references/"
echo ""
echo "Usage:"
echo " In Claude Code: /settings-audit"
echo ""
echo " Or run directly:"
echo " python3 ${CLAUDE_DIR}/scripts/settings-audit/run_audit.py"
echo ""
echo " Auto-fix (preview):"
echo " python3 ${CLAUDE_DIR}/scripts/settings-audit/auto_fix.py"
echo ""
echo " Auto-fix (apply):"
echo " python3 ${CLAUDE_DIR}/scripts/settings-audit/auto_fix.py --apply"
echo ""

View File

@@ -0,0 +1,159 @@
# Token Optimization Best Practices
## Context Window Budget
**Total Available:** 200,000 tokens
### Budget Allocation
| Component | Target | Max | Notes |
|-----------|--------|-----|-------|
| System prompt | ~5,000 | Fixed | Claude Code internal |
| CLAUDE.md | 2,000 | 3,000 | Under your control |
| MCP Servers | 5,000 | 10,000 | With Tool Search enabled |
| Skills metadata | 500 | 1,000 | Description + frontmatter only |
| **Working space** | **>140,000** | — | **Goal: 70%+ available** |
### Why 70%+ Available?
- Complex tasks need context for code, files, conversation
- Long sessions accumulate context
- Safety buffer prevents truncation
## MCP Server Optimization
### serverInstructions (CRITICAL)
Every MCP server MUST have `serverInstructions` for Tool Search to work:
```json
{
"mcpServers": {
"playwright": {
"command": "npx",
"args": ["@anthropic-ai/mcp-playwright"],
"serverInstructions": "Browser automation. Use for: SEO audits, screenshots, page analysis. Keywords: browser, page, DOM, screenshot"
}
}
}
```
**Pattern:** `[What it does]. Use for: [use case 1], [use case 2]. Keywords: [keyword1], [keyword2]`
### Load Strategies
| Strategy | When to Use | Example Servers |
|----------|-------------|-----------------|
| **always** | Essential for daily work | Playwright, Notion, Filesystem |
| **lazy** | Occasionally needed | GitHub, PostgreSQL, Slack |
| **disable** | Rarely used or token-heavy | Zapier (25K+ tokens) |
### Token Estimates by Server
| Server | Approx. Tokens | Strategy |
|--------|----------------|----------|
| Playwright | 13,500 | always |
| Notion | 5,000 | always |
| GitHub | 18,000 | lazy |
| PostgreSQL | 8,000 | lazy |
| Zapier | 25,000+ | disable |
| Memory | 3,000 | lazy |
| Filesystem | 4,000 | always |
## CLAUDE.md Optimization
### Size Limits
- **Lines:** Under 200
- **Tokens:** Under 3,000 (~2,300 words)
### What to Include
1. **Role context** (1-2 sentences)
2. **Output preferences** (format, language)
3. **Domain standards** (brief, essential only)
4. **Available commands** (reference list)
5. **Quality checklist** (3-5 items)
### What to Exclude
- Self-descriptions ("You are Claude...")
- Information Claude already knows
- Verbose explanations
- Long code examples (use skills instead)
- Duplicate information
### Optimal Structure
```markdown
# Project Context
## Role
[1-2 sentences]
## Preferences
- Output: [format]
- Language: [preference]
## Standards
### Domain 1
- [Key point 1]
- [Key point 2]
## Commands
- /command1: [brief description]
- /command2: [brief description]
## Checklist
- [ ] Item 1
- [ ] Item 2
```
## Session Management
### During Work
1. **Check context:** Run `/context` before complex tasks
2. **Compact early:** Run `/compact` at 70% usage
3. **Clear between tasks:** Run `/clear` for unrelated work
### Weekly Maintenance
1. Run `/doctor` for MCP health
2. Review command usage patterns
3. Update CLAUDE.md with new patterns
4. Check for unused extensions
### Monthly Audit
1. Run full settings audit
2. Review and prune unused MCP servers
3. Update serverInstructions
4. Check for new optimization opportunities
## Quick Diagnostics
### High Token Usage Signs
- Slow response times
- Context getting truncated
- Claude "forgetting" earlier context
### Common Fixes
| Issue | Fix |
|-------|-----|
| CLAUDE.md too long | Move details to skills/references |
| MCP missing instructions | Add serverInstructions |
| Too many always-loaded MCPs | Switch some to lazy |
| Unused extensions | Remove or disable |
## Commands Reference
| Command | Purpose |
|---------|---------|
| `/context` | Check current context usage |
| `/compact` | Compress context |
| `/clear` | Clear conversation |
| `/doctor` | Check MCP server health |
| `/settings-audit` | Run this skill's audit |

View File

@@ -0,0 +1,232 @@
#!/usr/bin/env python3
"""
Extensions Analyzer
Analyzes Claude Code commands, skills, and agents.
"""
import json
import re
import sys
from pathlib import Path

try:
    import yaml
    HAS_YAML = True
except ImportError:
    HAS_YAML = False

MAX_COMMAND_LINES = 100
MAX_SKILL_LINES = 500


class ExtensionsAnalyzer:
    def __init__(self):
        self.findings = {
            "critical": [],
            "warnings": [],
            "passing": [],
            "recommendations": []
        }
        self.commands = {}
        self.skills = {}
        self.agents = {}

    def find_extension_dirs(self) -> dict:
        """Find extension directories."""
        base_paths = [
            Path.home() / ".claude",
            Path.cwd() / ".claude",
        ]
        dirs = {"commands": [], "skills": [], "agents": []}
        for base in base_paths:
            for ext_type in dirs.keys():
                path = base / ext_type
                if path.exists() and path.is_dir():
                    dirs[ext_type].append(path)
        return dirs

    def parse_frontmatter(self, content: str) -> "dict | None":
        # String annotation keeps Python 3.8 compatibility (bare `dict | None`
        # requires 3.10+).
        """Parse YAML frontmatter."""
        if not content.startswith('---'):
            return None
        try:
            end = content.find('---', 3)
            if end == -1:
                return None
            yaml_content = content[3:end].strip()
            if HAS_YAML:
                return yaml.safe_load(yaml_content)
            # Basic key: value parsing without PyYAML
            result = {}
            for line in yaml_content.split('\n'):
                if ':' in line:
                    key, value = line.split(':', 1)
                    result[key.strip()] = value.strip()
            return result
        except Exception:
            return None

    def analyze_command(self, path: Path) -> dict:
        """Analyze a command file."""
        try:
            content = path.read_text()
        except IOError:
            return {"name": path.stem, "error": "Could not read", "issues": []}
        lines = len(content.split('\n'))
        frontmatter = self.parse_frontmatter(content)
        analysis = {
            "name": path.stem,
            "lines": lines,
            "has_frontmatter": frontmatter is not None,
            "has_description": bool(frontmatter and "description" in frontmatter),
            "issues": []
        }
        if not analysis["has_frontmatter"]:
            analysis["issues"].append("Missing YAML frontmatter")
        elif not analysis["has_description"]:
            analysis["issues"].append("Missing description")
        if lines > MAX_COMMAND_LINES:
            analysis["issues"].append(f"Too long: {lines} lines (max {MAX_COMMAND_LINES})")
        if not re.match(r'^[a-z][a-z0-9-]*$', analysis["name"]):
            analysis["issues"].append("Name should be kebab-case")
        return analysis

    def analyze_skill(self, path: Path) -> dict:
        """Analyze a skill directory."""
        skill_md = path / "SKILL.md"
        if not skill_md.exists():
            return {
                "name": path.name,
                "error": "Missing SKILL.md",
                "issues": ["Missing SKILL.md"]
            }
        try:
            content = skill_md.read_text()
        except IOError:
            return {"name": path.name, "error": "Could not read SKILL.md", "issues": []}
        lines = len(content.split('\n'))
        frontmatter = self.parse_frontmatter(content)
        analysis = {
            "name": path.name,
            "lines": lines,
            "has_frontmatter": frontmatter is not None,
            "has_description": bool(frontmatter and "description" in frontmatter),
            "issues": []
        }
        if not analysis["has_frontmatter"]:
            analysis["issues"].append("Missing frontmatter in SKILL.md")
        if lines > MAX_SKILL_LINES:
            analysis["issues"].append(f"Too long: {lines} lines (max {MAX_SKILL_LINES})")
        return analysis

    def analyze_agent(self, path: Path) -> dict:
        """Analyze an agent file."""
        try:
            content = path.read_text()
        except IOError:
            return {"name": path.stem, "error": "Could not read", "issues": []}
        frontmatter = self.parse_frontmatter(content)
        analysis = {
            "name": path.stem,
            "has_frontmatter": frontmatter is not None,
            "tools_restricted": False,
            "issues": []
        }
        if frontmatter:
            tools = frontmatter.get("tools", "*")
            analysis["tools_restricted"] = bool(tools and tools != "*")
        if not analysis["has_frontmatter"]:
            analysis["issues"].append("Missing frontmatter")
        if not analysis["tools_restricted"]:
            analysis["issues"].append("Tools not restricted (consider limiting)")
        return analysis

    def analyze(self) -> dict:
        """Run full analysis."""
        dirs = self.find_extension_dirs()
        # Analyze commands
        for cmd_dir in dirs["commands"]:
            for cmd_file in cmd_dir.glob("*.md"):
                analysis = self.analyze_command(cmd_file)
                self.commands[analysis["name"]] = analysis
                if analysis.get("issues"):
                    for issue in analysis["issues"]:
                        self.findings["warnings"].append(f"Command '{analysis['name']}': {issue}")
                else:
                    self.findings["passing"].append(f"Command '{analysis['name']}': OK")
        # Analyze skills
        for skill_dir in dirs["skills"]:
            for skill_path in skill_dir.iterdir():
                if skill_path.is_dir():
                    analysis = self.analyze_skill(skill_path)
                    self.skills[analysis["name"]] = analysis
                    if analysis.get("issues"):
                        for issue in analysis["issues"]:
                            if "Missing SKILL.md" in issue:
                                self.findings["critical"].append(f"Skill '{analysis['name']}': {issue}")
                            else:
                                self.findings["warnings"].append(f"Skill '{analysis['name']}': {issue}")
                    else:
                        self.findings["passing"].append(f"Skill '{analysis['name']}': OK")
        # Analyze agents
        for agent_dir in dirs["agents"]:
            for agent_file in agent_dir.glob("*.md"):
                analysis = self.analyze_agent(agent_file)
                self.agents[analysis["name"]] = analysis
                if analysis.get("issues"):
                    for issue in analysis["issues"]:
                        self.findings["warnings"].append(f"Agent '{analysis['name']}': {issue}")
                else:
                    self.findings["passing"].append(f"Agent '{analysis['name']}': OK")
        return {
            "commands_count": len(self.commands),
            "skills_count": len(self.skills),
            "agents_count": len(self.agents),
            "commands": self.commands,
            "skills": self.skills,
            "agents": self.agents,
            "findings": self.findings
        }


def main():
    analyzer = ExtensionsAnalyzer()
    report = analyzer.analyze()
    print(json.dumps(report, indent=2, default=str))
    return 0


if __name__ == "__main__":
    sys.exit(main())

View File

@@ -0,0 +1,252 @@
#!/usr/bin/env python3
"""
Token Usage Analyzer
Analyzes MCP servers and CLAUDE.md for token efficiency.
"""
import json
import sys
from pathlib import Path

# Token estimates for known MCP servers
MCP_TOKEN_ESTIMATES = {
    "playwright": 13500,
    "puppeteer": 13500,
    "notion": 5000,
    "github": 18000,
    "postgres": 8000,
    "postgresql": 8000,
    "bigquery": 10000,
    "firecrawl": 6000,
    "zapier": 25000,
    "slack": 8000,
    "linear": 6000,
    "memory": 3000,
    "filesystem": 4000,
    "brave-search": 3000,
    "fetch": 2000,
    "sequential-thinking": 2000,
    "chrome-devtools": 8000,
    "dtm-agent": 5000,
}

# Load strategy recommendations
LOAD_STRATEGIES = {
    "playwright": "always",
    "puppeteer": "always",
    "notion": "always",
    "github": "lazy",
    "postgres": "lazy",
    "postgresql": "lazy",
    "bigquery": "lazy",
    "firecrawl": "lazy",
    "zapier": "disable",
    "slack": "lazy",
    "linear": "lazy",
    "memory": "lazy",
    "filesystem": "always",
    "chrome-devtools": "always",
}

TOKENS_PER_WORD = 1.3
MAX_CLAUDE_MD_LINES = 200
MAX_CLAUDE_MD_TOKENS = 3000


class TokenAnalyzer:
    def __init__(self):
        self.findings = {
            "critical": [],
            "warnings": [],
            "passing": [],
            "recommendations": []
        }
        self.mcp_servers = {}
        self.claude_md_files = []
        self.mcp_tokens = 0
        self.claude_md_tokens = 0

    def find_settings_files(self) -> list:
        """Find MCP settings files."""
        locations = [
            Path.home() / ".claude" / "settings.json",
            Path.cwd() / ".claude" / "settings.json",
            Path.cwd() / ".mcp.json",
        ]
        return [p for p in locations if p.exists()]

    def find_claude_md_files(self) -> list:
        """Find CLAUDE.md files."""
        locations = [
            Path.home() / ".claude" / "CLAUDE.md",
            Path.cwd() / "CLAUDE.md",
            Path.cwd() / ".claude" / "CLAUDE.md",
        ]
        return [p for p in locations if p.exists()]

    def estimate_server_tokens(self, name: str) -> int:
        """Estimate tokens for a server."""
        name_lower = name.lower()
        for key, tokens in MCP_TOKEN_ESTIMATES.items():
            if key in name_lower:
                return tokens
        return 5000  # Default estimate

    def get_load_strategy(self, name: str) -> str:
        """Get recommended load strategy."""
        name_lower = name.lower()
        for key, strategy in LOAD_STRATEGIES.items():
            if key in name_lower:
                return strategy
        return "lazy"  # Default to lazy for unknown

    def analyze_mcp_servers(self):
        """Analyze MCP server configurations."""
        settings_files = self.find_settings_files()
        if not settings_files:
            self.findings["warnings"].append("No MCP settings files found")
            return
        for settings_path in settings_files:
            try:
                with open(settings_path) as f:
                    settings = json.load(f)
            except (json.JSONDecodeError, IOError) as e:
                self.findings["warnings"].append(f"Could not parse {settings_path}: {e}")
                continue
            servers = settings.get("mcpServers", {})
            for name, config in servers.items():
                if not isinstance(config, dict):
                    continue
                tokens = self.estimate_server_tokens(name)
                has_instructions = "serverInstructions" in config
                strategy = self.get_load_strategy(name)
                self.mcp_servers[name] = {
                    "tokens": tokens,
                    "has_instructions": has_instructions,
                    "strategy": strategy,
                    "source": str(settings_path)
                }
                self.mcp_tokens += tokens
                # Generate findings
                if not has_instructions:
                    self.findings["critical"].append(
                        f"MCP '{name}': Missing serverInstructions (breaks Tool Search)"
                    )
                else:
                    self.findings["passing"].append(f"MCP '{name}': Has serverInstructions")
                if tokens > 15000 and strategy == "always":
                    self.findings["warnings"].append(
                        f"MCP '{name}': Heavy server (~{tokens:,} tokens), consider lazy loading"
                    )

    def analyze_claude_md(self):
        """Analyze CLAUDE.md files."""
        files = self.find_claude_md_files()
        if not files:
            self.findings["warnings"].append("No CLAUDE.md files found")
            return
        for path in files:
            try:
                content = path.read_text()
            except IOError as e:
                self.findings["warnings"].append(f"Could not read {path}: {e}")
                continue
            lines = len(content.split('\n'))
            words = len(content.split())
            tokens = int(words * TOKENS_PER_WORD)
            self.claude_md_files.append({
                "path": str(path),
                "lines": lines,
                "words": words,
                "tokens": tokens
            })
            self.claude_md_tokens += tokens
            # Generate findings
            if tokens > MAX_CLAUDE_MD_TOKENS:
                self.findings["critical"].append(
                    f"CLAUDE.md ({path.name}): ~{tokens:,} tokens exceeds {MAX_CLAUDE_MD_TOKENS:,} limit"
                )
            elif lines > MAX_CLAUDE_MD_LINES:
                self.findings["warnings"].append(
                    f"CLAUDE.md ({path.name}): {lines} lines exceeds {MAX_CLAUDE_MD_LINES} recommended"
                )
            else:
                self.findings["passing"].append(
                    f"CLAUDE.md ({path.name}): {lines} lines, ~{tokens:,} tokens - Good"
                )
            # Check structure
            if '\n\n\n' in content:
                self.findings["warnings"].append(
                    f"CLAUDE.md ({path.name}): Contains excessive whitespace"
                )
            # Check for common redundancy
            content_lower = content.lower()
            if "you are claude" in content_lower or "you are an ai" in content_lower:
                self.findings["recommendations"].append(
                    f"CLAUDE.md ({path.name}): Remove self-descriptions Claude already knows"
                )

    def analyze(self) -> dict:
        """Run full analysis."""
        self.analyze_mcp_servers()
        self.analyze_claude_md()
        total_tokens = self.mcp_tokens + self.claude_md_tokens
        usage_pct = (total_tokens / 200000) * 100
        # Overall recommendations
        if usage_pct > 30:
            self.findings["critical"].append(
                f"Baseline uses {usage_pct:.1f}% of context - target is under 30%"
            )
        elif usage_pct > 20:
            self.findings["warnings"].append(
                f"Baseline uses {usage_pct:.1f}% of context - consider optimization"
            )
        missing_instructions = sum(
            1 for s in self.mcp_servers.values() if not s.get("has_instructions")
        )
        if missing_instructions > 0:
            self.findings["recommendations"].append(
                f"Add serverInstructions to {missing_instructions} MCP server(s) for Tool Search"
            )
        return {
            "total_tokens": total_tokens,
            "mcp_tokens": self.mcp_tokens,
            "claude_md_tokens": self.claude_md_tokens,
            "mcp_count": len(self.mcp_servers),
            "mcp_servers": self.mcp_servers,
            "claude_md_files": self.claude_md_files,
            "usage_percentage": round(usage_pct, 1),
            "findings": self.findings
        }


def main():
    analyzer = TokenAnalyzer()
    report = analyzer.analyze()
    print(json.dumps(report, indent=2))
    return 0


if __name__ == "__main__":
    sys.exit(main())

View File

@@ -0,0 +1,173 @@
#!/usr/bin/env python3
"""
Auto-Fix Script
Applies safe fixes to Claude Code configuration with backup.
"""
import argparse
import json
import shutil
import sys
from datetime import datetime
from pathlib import Path

# serverInstructions templates for common MCP servers
SERVER_INSTRUCTIONS = {
    "playwright": "Browser automation for web interaction. Use for: SEO audits, page analysis, screenshots, form testing, Core Web Vitals. Keywords: browser, page, screenshot, click, navigate, DOM, selector",
    "puppeteer": "Chrome automation for web testing. Use for: SEO audits, page rendering, JavaScript site testing. Keywords: browser, chrome, headless, screenshot, page",
    "notion": "Notion workspace integration. Use for: saving research, documentation, project notes, knowledge base. Keywords: notion, page, database, wiki, notes, save",
    "github": "GitHub repository management. Use for: commits, PRs, issues, code review. Keywords: git, github, commit, pull request, issue, repository",
    "postgres": "PostgreSQL database queries. Use for: data analysis, SQL queries, analytics. Keywords: sql, query, database, table, select, analytics",
    "postgresql": "PostgreSQL database queries. Use for: data analysis, SQL queries, analytics. Keywords: sql, query, database, table, select, analytics",
    "bigquery": "Google BigQuery for large-scale analysis. Use for: analytics queries, data warehouse. Keywords: bigquery, sql, analytics, data warehouse",
    "firecrawl": "Web scraping and crawling. Use for: site crawling, content extraction, competitor analysis. Keywords: crawl, scrape, extract, spider, sitemap",
    "slack": "Slack workspace integration. Use for: messages, notifications, team communication. Keywords: slack, message, channel, notification",
    "linear": "Linear issue tracking. Use for: issue management, project tracking. Keywords: linear, issue, task, project, sprint",
    "memory": "Persistent memory across sessions. Use for: storing preferences, context recall. Keywords: remember, memory, store, recall",
    "filesystem": "Local file operations. Use for: file reading/writing, directory management. Keywords: file, directory, read, write, path",
    "chrome-devtools": "Chrome DevTools for debugging. Use for: GTM debugging, network analysis, console logs. Keywords: devtools, chrome, debug, network, console",
}


class AutoFixer:
    def __init__(self, dry_run: bool = True):
        self.dry_run = dry_run
        self.fixes = []
        self.backup_dir = Path.home() / ".claude" / "backups" / datetime.now().strftime("%Y%m%d_%H%M%S")

    def backup_file(self, path: Path) -> bool:
        """Create backup before modifying."""
        if not path.exists():
            return True
        try:
            try:
                rel = path.relative_to(Path.home())
            except ValueError:
                # Project-local files live outside $HOME; fall back to the bare name
                rel = Path(path.name)
            backup_path = self.backup_dir / rel
            backup_path.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(path, backup_path)
            return True
        except Exception as e:
            print(f"Warning: Could not backup {path}: {e}", file=sys.stderr)
            return False

    def fix_mcp_instructions(self, settings_path: Path) -> list:
        """Add serverInstructions to MCP servers."""
        fixes = []
        try:
            with open(settings_path) as f:
                settings = json.load(f)
        except (json.JSONDecodeError, IOError) as e:
            return [f"Error reading {settings_path}: {e}"]
        servers = settings.get("mcpServers", {})
        modified = False
        for name, config in servers.items():
            if not isinstance(config, dict):
                continue
            if "serverInstructions" in config:
                continue
            # Find matching template
            instructions = None
            name_lower = name.lower()
            for key, template in SERVER_INSTRUCTIONS.items():
                if key in name_lower:
                    instructions = template
                    break
            if not instructions:
                instructions = f"External tool: {name}. Use when this functionality is needed."
            if self.dry_run:
                fixes.append(f"[DRY RUN] Would add serverInstructions to '{name}'")
            else:
                config["serverInstructions"] = instructions
                modified = True
                fixes.append(f"Added serverInstructions to '{name}'")
        if modified and not self.dry_run:
            # Never write without a successful backup
            if not self.backup_file(settings_path):
                fixes.append(f"Skipped writing {settings_path}: backup failed")
                return fixes
            with open(settings_path, 'w') as f:
                json.dump(settings, f, indent=2)
        return fixes

    def fix_command_frontmatter(self, cmd_path: Path) -> "str | None":
        # String annotation keeps Python 3.8 compatibility.
        """Add frontmatter to a command missing it."""
        try:
            content = cmd_path.read_text()
        except IOError:
            return None
        if content.startswith('---'):
            return None
        new_content = f'''---
description: {cmd_path.stem.replace('-', ' ').title()} command
---
{content}'''
        if self.dry_run:
            return f"[DRY RUN] Would add frontmatter to {cmd_path.name}"
        if not self.backup_file(cmd_path):
            return f"Skipped {cmd_path.name}: backup failed"
        cmd_path.write_text(new_content)
        return f"Added frontmatter to {cmd_path.name}"

    def run(self) -> dict:
        """Apply all fixes."""
        results = {"applied": [], "skipped": [], "errors": []}
        # Fix MCP settings
        settings_paths = [
            Path.home() / ".claude" / "settings.json",
            Path.cwd() / ".claude" / "settings.json"
        ]
        for path in settings_paths:
            if path.exists():
                fixes = self.fix_mcp_instructions(path)
                results["applied"].extend(fixes)
        # Fix commands without frontmatter
        cmd_dirs = [
            Path.home() / ".claude" / "commands",
            Path.cwd() / ".claude" / "commands"
        ]
        for cmd_dir in cmd_dirs:
            if cmd_dir.exists():
                for cmd_file in cmd_dir.glob("*.md"):
                    fix = self.fix_command_frontmatter(cmd_file)
                    if fix:
                        results["applied"].append(fix)
        return results


def main():
    parser = argparse.ArgumentParser(description="Auto-fix Claude Code settings")
    parser.add_argument("--apply", action="store_true", help="Apply fixes (default is dry-run)")
    args = parser.parse_args()
    fixer = AutoFixer(dry_run=not args.apply)
    results = fixer.run()
    print(json.dumps(results, indent=2))
    if fixer.dry_run:
        print("\n[DRY RUN] No changes applied. Use --apply to apply fixes.", file=sys.stderr)
    else:
        print(f"\n[APPLIED] {len(results['applied'])} fixes.", file=sys.stderr)
        if fixer.backup_dir.exists():
            print(f"Backups: {fixer.backup_dir}", file=sys.stderr)
    return 0


if __name__ == "__main__":
    sys.exit(main())

View File

@@ -0,0 +1,245 @@
#!/usr/bin/env python3
"""
Claude Code Settings Audit - Main Orchestrator
Analyzes configuration for token efficiency and optimization.
"""
import json
import sys
import subprocess
from pathlib import Path
from datetime import datetime
SCRIPT_DIR = Path(__file__).parent
CONTEXT_LIMIT = 200_000

def run_analyzer(script_name: str) -> dict:
    """Run an analyzer script and return its output."""
    script_path = SCRIPT_DIR / script_name
    if not script_path.exists():
        return {"error": f"Script not found: {script_path}"}
    try:
        result = subprocess.run(
            [sys.executable, str(script_path)],
            capture_output=True,
            text=True,
            timeout=60,
        )
        if result.returncode != 0 and not result.stdout:
            return {"error": result.stderr or "Unknown error"}
        return json.loads(result.stdout)
    except subprocess.TimeoutExpired:
        return {"error": "Analysis timed out"}
    except json.JSONDecodeError as e:
        return {"error": f"Invalid JSON: {e}"}
    except Exception as e:
        return {"error": str(e)}
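The analyzer contract above — each child script prints a JSON report to stdout, and the orchestrator parses it — can be exercised in isolation. A minimal sketch, substituting an inline one-liner for a real analyzer script such as `analyze_tokens.py`:

```python
import json
import subprocess
import sys

# Stand-in analyzer: emits a JSON report on stdout, as a real analyzer script would.
child = "import json; print(json.dumps({'total_tokens': 1234, 'findings': {}}))"
result = subprocess.run(
    [sys.executable, "-c", child],
    capture_output=True, text=True, timeout=60,
)
report = json.loads(result.stdout)
print(report["total_tokens"])  # → 1234
```

Keeping the interface to plain JSON-on-stdout is what lets `run_analyzer` treat every analyzer uniformly and reduce any failure mode to an `{"error": ...}` dict.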

def calculate_health(token_report: dict, extensions_report: dict) -> str:
    """Determine overall health status."""
    total_tokens = token_report.get("total_tokens", 0)
    usage_pct = (total_tokens / CONTEXT_LIMIT) * 100
    critical_issues = len(token_report.get("findings", {}).get("critical", []))
    critical_issues += len(extensions_report.get("findings", {}).get("critical", []))
    if usage_pct > 30 or critical_issues > 2:
        return "Critical"
    elif usage_pct > 20 or critical_issues > 0:
        return "Needs Attention"
    return "Good"
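The thresholds above imply three bands; a standalone restatement of the same rules (assuming the 200K context limit) makes the boundaries concrete:

```python
CONTEXT_LIMIT = 200_000

def health(total_tokens: int, critical_issues: int) -> str:
    # Same rules as calculate_health, taking the counts directly
    usage_pct = (total_tokens / CONTEXT_LIMIT) * 100
    if usage_pct > 30 or critical_issues > 2:
        return "Critical"
    if usage_pct > 20 or critical_issues > 0:
        return "Needs Attention"
    return "Good"

print(health(30_000, 0))  # → Good (15% baseline, no critical issues)
print(health(50_000, 1))  # → Needs Attention (25%, one critical issue)
print(health(70_000, 0))  # → Critical (35% baseline)
```

Note that a single critical issue is enough to drop the status to "Needs Attention" even when token usage is well under target.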

def generate_report(token_report: dict, extensions_report: dict) -> str:
    """Generate markdown report."""
    timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    total_tokens = token_report.get("total_tokens", 0)
    available = CONTEXT_LIMIT - total_tokens
    usage_pct = (total_tokens / CONTEXT_LIMIT) * 100
    available_pct = 100 - usage_pct
    health = calculate_health(token_report, extensions_report)
    health_emoji = {"Good": "🟢", "Needs Attention": "🟡", "Critical": "🔴"}[health]

    # Collect all findings from both reports
    all_critical = []
    all_warnings = []
    all_passing = []
    all_recommendations = []
    for source in [token_report, extensions_report]:
        findings = source.get("findings", {})
        all_critical.extend(findings.get("critical", []))
        all_warnings.extend(findings.get("warnings", []))
        all_passing.extend(findings.get("passing", []))
        all_recommendations.extend(findings.get("recommendations", []))

    report = f"""# Claude Code Settings Audit Report

**Generated:** {timestamp}

---

## Token Budget Summary

| Component | Tokens | % of 200K | Status |
|-----------|--------|-----------|--------|
| CLAUDE.md | {token_report.get('claude_md_tokens', 0):,} | {token_report.get('claude_md_tokens', 0)/CONTEXT_LIMIT*100:.1f}% | {'🟢' if token_report.get('claude_md_tokens', 0) < 3000 else '🔴'} |
| MCP Servers | {token_report.get('mcp_tokens', 0):,} | {token_report.get('mcp_tokens', 0)/CONTEXT_LIMIT*100:.1f}% | {'🟢' if token_report.get('mcp_tokens', 0) < 10000 else '🟡'} |
| **Baseline Total** | **{total_tokens:,}** | **{usage_pct:.1f}%** | {health_emoji} |
| **Available for Work** | **{available:,}** | **{available_pct:.1f}%** | — |

**Target:** Baseline under 30% (60,000 tokens), Available over 70%

---

## Overall Health: {health_emoji} {health}

- Critical Issues: {len(all_critical)}
- Warnings: {len(all_warnings)}
- Passing Checks: {len(all_passing)}

---

## MCP Server Analysis

**Servers:** {token_report.get('mcp_count', 0)} configured

"""
    # MCP server details
    mcp_servers = token_report.get("mcp_servers", {})
    if mcp_servers:
        report += "| Server | Tokens | Instructions | Strategy |\n"
        report += "|--------|--------|--------------|----------|\n"
        for name, info in mcp_servers.items():
            instr = "✅" if info.get("has_instructions") else "❌"
            tokens = info.get("tokens", 0)
            strategy = info.get("strategy", "unknown")
            report += f"| {name} | ~{tokens:,} | {instr} | {strategy} |\n"
        report += "\n"

    # CLAUDE.md analysis
    report += """---

## CLAUDE.md Analysis

"""
    claude_files = token_report.get("claude_md_files", [])
    for cf in claude_files:
        status = "🟢" if cf.get("tokens", 0) < 3000 else "🔴"
        report += f"- **{cf.get('path', 'Unknown')}**: {cf.get('lines', 0)} lines, ~{cf.get('tokens', 0):,} tokens {status}\n"
    if not claude_files:
        report += "*No CLAUDE.md files found*\n"

    # Extensions
    report += f"""
---

## Extensions Analysis

- Commands: {extensions_report.get('commands_count', 0)}
- Skills: {extensions_report.get('skills_count', 0)}
- Agents: {extensions_report.get('agents_count', 0)}

"""
    # Findings
    if all_critical:
        report += "---\n\n## ❌ Critical Issues\n\n"
        for issue in all_critical:
            report += f"- {issue}\n"
        report += "\n"
    if all_warnings:
        report += "---\n\n## ⚠️ Warnings\n\n"
        for warning in all_warnings[:10]:
            report += f"- {warning}\n"
        if len(all_warnings) > 10:
            report += f"- *...and {len(all_warnings) - 10} more*\n"
        report += "\n"
    if all_passing:
        report += "---\n\n## ✅ Passing\n\n"
        for item in all_passing[:5]:
            report += f"- {item}\n"
        if len(all_passing) > 5:
            report += f"- *...and {len(all_passing) - 5} more*\n"
        report += "\n"

    # Recommendations
    if all_recommendations or all_critical:
        report += "---\n\n## Recommendations\n\n"
        priority = 1
        for issue in all_critical[:3]:
            report += f"{priority}. **Fix:** {issue}\n"
            priority += 1
        for rec in all_recommendations[:5]:
            report += f"{priority}. {rec}\n"
            priority += 1
        report += "\n"

    report += """---

## Next Steps

1. Run `python3 scripts/auto_fix.py` to preview fixes
2. Run `python3 scripts/auto_fix.py --apply` to apply fixes
3. Re-run audit to verify improvements

---

*Generated by Claude Code Settings Optimizer*
"""
    return report

def main():
    print("🔍 Running Claude Code Settings Audit...\n", file=sys.stderr)

    print("  Analyzing tokens...", file=sys.stderr)
    token_report = run_analyzer("analyze_tokens.py")

    print("  Analyzing extensions...", file=sys.stderr)
    extensions_report = run_analyzer("analyze_extensions.py")

    print("  Generating report...\n", file=sys.stderr)
    markdown_report = generate_report(token_report, extensions_report)
    print(markdown_report)

    # Save reports next to the skill's CLAUDE.md when available
    output_dir = SCRIPT_DIR.parent if (SCRIPT_DIR.parent / "CLAUDE.md").exists() else Path.cwd()
    report_path = output_dir / "settings-audit-report.md"
    json_path = output_dir / "settings-audit-report.json"
    full_report = {
        "timestamp": datetime.now().isoformat(),
        "tokens": token_report,
        "extensions": extensions_report,
        "total_baseline_tokens": token_report.get("total_tokens", 0),
        "health": calculate_health(token_report, extensions_report),
    }
    try:
        report_path.write_text(markdown_report)
        json_path.write_text(json.dumps(full_report, indent=2, default=str))
        print(f"📄 Report: {report_path}", file=sys.stderr)
        print(f"📊 JSON: {json_path}", file=sys.stderr)
    except IOError as e:
        print(f"Warning: Could not save report: {e}", file=sys.stderr)

    # Non-zero exit signals a Critical baseline to callers/CI
    return 1 if full_report["health"] == "Critical" else 0


if __name__ == "__main__":
    sys.exit(main())


@@ -7,6 +7,26 @@ description: Automated Google Tag Manager audit and validation toolkit. Use when
Automated audit workflow for GTM containers using **Chrome DevTools MCP** for browser inspection and **DTM Agent MCP** for GTM API operations.

## Prerequisites

### Chrome GTM Debug Profile (Required)

Before starting any GTM audit, launch the dedicated Chrome GTM Debug profile:

```bash
chrome-gtm
```

This launches Chrome with remote debugging enabled on port 9222, which is required for the chrome-devtools MCP server to connect.

| Item | Value |
|------|-------|
| **Profile Location** | `~/Library/Application Support/Chrome-GTM-Debug` |
| **Debug Port** | 9222 |
| **Launch Script** | `~/Utilities/chrome-gtm-debug.sh` |

**Note**: This is a separate Chrome instance from your regular browser. Install GTM-related extensions (Tag Assistant, etc.) in this profile for debugging work.
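The launch script's contents are not shown in this diff; a hypothetical sketch of what `~/Utilities/chrome-gtm-debug.sh` might contain (macOS Chrome install path assumed — adjust for your system):

```bash
#!/usr/bin/env bash
# Hypothetical sketch — the actual chrome-gtm-debug.sh may differ.
# Launch a dedicated Chrome instance with remote debugging on port 9222,
# using a separate profile directory so it never touches the regular browser.
"/Applications/Google Chrome.app/Contents/MacOS/Google Chrome" \
  --remote-debugging-port=9222 \
  --user-data-dir="$HOME/Library/Application Support/Chrome-GTM-Debug" \
  >/dev/null 2>&1 &
```

The separate `--user-data-dir` is what keeps this profile's extensions and sessions isolated from your day-to-day browser.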
## Required MCP Servers

This skill requires the following MCP servers to be configured:
@@ -16,6 +36,8 @@ This skill requires the following MCP servers to be configured:
| **chrome-devtools** | Browser debugging & inspection | `navigate_page`, `evaluate_script`, `list_network_requests`, `list_console_messages`, `click`, `hover`, `take_screenshot`, `performance_start_trace` |
| **dtm-agent** | GTM API operations | `dtm_status`, `dtm_list_tags`, `dtm_get_tag`, `dtm_list_triggers`, `dtm_get_trigger`, `dtm_list_variables`, `dtm_debug_performance`, `dtm_debug_preview` |
**Important**: The chrome-devtools MCP server requires Chrome to be running with `--remote-debugging-port=9222`. Always run `chrome-gtm` first before attempting to use chrome-devtools tools.

## Critical Workflow Rule

**ALWAYS use Chrome DevTools MCP FIRST before making GTM configuration changes.**
@@ -31,6 +53,13 @@ GTM triggering and parameter capturing issues are highly dependent on:
```
┌─────────────────────────────────────────────────────────────────┐
│ PHASE 0: SETUP │
├─────────────────────────────────────────────────────────────────┤
│ Run `chrome-gtm` to launch Chrome GTM Debug profile │
│ Verify: curl http://127.0.0.1:9222/json/version │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ PHASE 1: INSPECT (Chrome DevTools MCP) │
├─────────────────────────────────────────────────────────────────┤
│ 1. navigate_page → Load target URL │