Add 11-seo-comprehensive-audit orchestrator skill

New skill that runs a 6-stage SEO audit pipeline (Technical, On-Page,
Core Web Vitals, Schema, Local SEO, Search Console) and produces a
unified health score (0-100) with weighted categories. Includes Python
orchestrator script, slash command, and Notion integration for Korean
audit reports.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-13 02:37:51 +09:00
parent 7c5efea817
commit 930167dbbe
7 changed files with 777 additions and 0 deletions

File: 11-seo-comprehensive-audit/README.md

@@ -0,0 +1,75 @@
# SEO Comprehensive Audit
Comprehensive SEO audit tool: unified SEO diagnosis and health-score computation through a 6-stage pipeline.
## Overview
Orchestrates a unified 6-stage SEO audit by calling sub-skill scripts, merging results, computing a weighted health score (0-100), and pushing a summary report to Notion.
## Pipeline Stages
| # | Stage | Sub-Skill | Weight |
|---|-------|-----------|--------|
| 1 | Technical SEO | `12-seo-technical-audit` | 20% |
| 2 | On-Page SEO | `13-seo-on-page-audit` | 20% |
| 3 | Core Web Vitals | `14-seo-core-web-vitals` | 25% |
| 4 | Schema Validation | `16-seo-schema-validator` | 15% |
| 5 | Local SEO | `18-seo-local-audit` | 10% |
| 6 | Search Console | `15-seo-search-console` | 10% |
## Dual-Platform Structure
```
11-seo-comprehensive-audit/
├── code/ # Claude Code version
│ ├── CLAUDE.md # Action-oriented directive
│ ├── commands/
│ │ └── seo-comprehensive-audit.md # Slash command
│ └── scripts/
│ ├── seo_audit_orchestrator.py # Python orchestrator
│ └── requirements.txt
├── desktop/ # Claude Desktop version
│ ├── SKILL.md # MCP-based workflow
│ └── skill.yaml # Extended metadata
└── README.md
```
## Quick Start
### Claude Code
```bash
/seo-comprehensive-audit https://example.com
/seo-comprehensive-audit https://example.com --skip-local --skip-gsc
```
### Python Script
```bash
pip install -r code/scripts/requirements.txt
python code/scripts/seo_audit_orchestrator.py --url https://example.com
```
## Health Score Grading
| Score | Grade | Status |
|-------|-------|--------|
| 90-100 | A | Excellent |
| 80-89 | B+ | Good |
| 70-79 | B | Above Average |
| 60-69 | C | Needs Improvement |
| 40-59 | D | Poor |
| 0-39 | F | Critical |
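The bands above are a first-match threshold lookup; the orchestrator implements them as `GRADE_THRESHOLDS` in `seo_audit_orchestrator.py`, roughly:

```python
# Grade bands from the table above, checked top-down; the first floor the
# score meets determines the grade.
GRADE_THRESHOLDS = [
    (90, "A", "Excellent"),
    (80, "B+", "Good"),
    (70, "B", "Above Average"),
    (60, "C", "Needs Improvement"),
    (40, "D", "Poor"),
    (0, "F", "Critical"),
]

def get_grade(score):
    """Return (grade, status) for a 0-100 health score."""
    for floor, grade, status in GRADE_THRESHOLDS:
        if score >= floor:
            return grade, status
    return "F", "Critical"
```

For example, a health score of 72 falls in the 70-79 band and grades as B (Above Average).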
## Notion Output
Summary reports are saved to the OurDigital SEO Audit Log database:
- **Title**: `종합 SEO 감사 보고서 - [domain] - YYYY-MM-DD`
- **Database ID**: `2c8581e5-8a1e-8035-880b-e38cefc2f3ef`
- Individual pages created for Critical/High findings
## Triggers
- comprehensive SEO, full SEO audit
- 종합 SEO 감사, 사이트 감사
- SEO health check, SEO 건강 점수

File: 11-seo-comprehensive-audit/code/CLAUDE.md

@@ -0,0 +1,77 @@
# CLAUDE.md
## Overview
Comprehensive SEO audit orchestrator. Runs 6 sub-skill audits (Technical, On-Page, CWV, Schema, Local, GSC), merges results into a unified report with weighted health score (0-100), and pushes to Notion.
## Quick Start
```bash
# Install dependencies
pip install -r scripts/requirements.txt
# Run full audit
python scripts/seo_audit_orchestrator.py --url https://example.com
# Skip optional stages
python scripts/seo_audit_orchestrator.py --url https://example.com --skip-local --skip-gsc
# JSON output only (no Notion push)
python scripts/seo_audit_orchestrator.py --url https://example.com --json
```
## Scripts
| Script | Purpose |
|--------|---------|
| `seo_audit_orchestrator.py` | Orchestrate all 6 audit stages and compute health score |
## Pipeline
| Stage | Audit | Script |
|-------|-------|--------|
| 1 | Technical SEO | `12-seo-technical-audit/code/scripts/robots_checker.py`, `sitemap_validator.py` |
| 2 | On-Page SEO | `13-seo-on-page-audit/code/scripts/page_analyzer.py` |
| 3 | Core Web Vitals | `14-seo-core-web-vitals/code/scripts/pagespeed_client.py` |
| 4 | Schema Validation | `16-seo-schema-validator/code/scripts/schema_validator.py` |
| 5 | Local SEO | `18-seo-local-audit/` (prompt-driven) |
| 6 | Search Console | `15-seo-search-console/code/scripts/gsc_client.py` |
## Health Score Weights
| Category | Weight |
|----------|--------|
| Technical SEO | 20% |
| On-Page SEO | 20% |
| Core Web Vitals | 25% |
| Schema | 15% |
| Local SEO | 10% |
| Search Console | 10% |
## Notion Output (Required)
**IMPORTANT**: All audit reports MUST be saved to the OurDigital SEO Audit Log database.
### Database Configuration
| Field | Value |
|-------|-------|
| Database ID | `2c8581e5-8a1e-8035-880b-e38cefc2f3ef` |
| URL | https://www.notion.so/dintelligence/2c8581e58a1e8035880be38cefc2f3ef |
### Required Properties
| Property | Type | Description |
|----------|------|-------------|
| Issue | Title | `종합 SEO 감사 보고서 - [domain] - YYYY-MM-DD` |
| Site | URL | Audited website URL |
| Category | Select | Comprehensive Audit |
| Priority | Select | Based on health score |
| Found Date | Date | Audit date (YYYY-MM-DD) |
| Audit ID | Rich Text | Format: COMP-YYYYMMDD-NNN |
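As a sketch of how these properties translate into the JSON body of a Notion create-page request (property names assumed to match the database schema above; `build_audit_page` is illustrative, not part of the shipped scripts):

```python
from datetime import date

DATABASE_ID = "2c8581e5-8a1e-8035-880b-e38cefc2f3ef"

def build_audit_page(domain, site_url, priority, audit_id):
    """Build the JSON body POSTed to https://api.notion.com/v1/pages."""
    today = date.today().isoformat()  # YYYY-MM-DD
    return {
        "parent": {"database_id": DATABASE_ID},
        "properties": {
            "Issue": {"title": [{"text": {"content":
                f"종합 SEO 감사 보고서 - {domain} - {today}"}}]},
            "Site": {"url": site_url},
            "Category": {"select": {"name": "Comprehensive Audit"}},
            "Priority": {"select": {"name": priority}},
            "Found Date": {"date": {"start": today}},
            "Audit ID": {"rich_text": [{"text": {"content": audit_id}}]},
        },
    }
```

The request itself additionally needs an `Authorization: Bearer <token>` header and a `Notion-Version` header.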
### Language Guidelines
- Report content in Korean (한국어)
- Keep technical English terms as-is (e.g., SEO Audit, Core Web Vitals, Schema Markup)
- URLs and code remain unchanged

File: 11-seo-comprehensive-audit/code/commands/seo-comprehensive-audit.md

@@ -0,0 +1,151 @@
---
description: "Comprehensive SEO audit: technical, on-page, schema, CWV, local, and GSC in one unified report"
argument-hint: <url> [--skip-local] [--skip-gsc] [--skip-notion] [--json]
allowed-tools: Read, Write, Edit, Bash, Glob, Grep, WebFetch, WebSearch, Task
---
# Comprehensive SEO Audit
## Overview
Orchestrates a unified 6-stage SEO audit by calling sub-skill scripts sequentially, merging results, computing a weighted health score (0-100), and optionally pushing a summary to Notion.
## Usage
```bash
/seo-comprehensive-audit https://example.com
/seo-comprehensive-audit https://example.com --skip-local --skip-gsc
/seo-comprehensive-audit https://example.com --json
/seo-comprehensive-audit https://example.com --skip-notion
```
## Pipeline Stages
| # | Stage | Sub-Skill | Skip Flag |
|---|-------|-----------|-----------|
| 1 | Technical SEO | `12-seo-technical-audit` | — |
| 2 | On-Page SEO | `13-seo-on-page-audit` | — |
| 3 | Core Web Vitals | `14-seo-core-web-vitals` | — |
| 4 | Schema Validation | `16-seo-schema-validator` | — |
| 5 | Local SEO | `18-seo-local-audit` | `--skip-local` |
| 6 | Search Console | `15-seo-search-console` | `--skip-gsc` |
## Execution
### Option A: Python Orchestrator (Recommended)
```bash
# Find repo root
REPO_ROOT=$(git -C /path/to/our-claude-skills rev-parse --show-toplevel)
SKILLS="$REPO_ROOT/custom-skills"
# Run orchestrator
python "$SKILLS/11-seo-comprehensive-audit/code/scripts/seo_audit_orchestrator.py" \
--url https://example.com \
--skills-dir "$SKILLS"
```
### Option B: Manual Stage-by-Stage
Run each sub-skill script with `--json` flag, then synthesize:
```bash
# Stage 1: Technical SEO
python "$SKILLS/12-seo-technical-audit/code/scripts/robots_checker.py" --url "$URL" --json
python "$SKILLS/12-seo-technical-audit/code/scripts/sitemap_validator.py" --url "$URL/sitemap.xml" --json
# Stage 2: On-Page SEO
python "$SKILLS/13-seo-on-page-audit/code/scripts/page_analyzer.py" --url "$URL" --json
# Stage 3: Core Web Vitals
python "$SKILLS/14-seo-core-web-vitals/code/scripts/pagespeed_client.py" --url "$URL" --json
# Stage 4: Schema Validation
python "$SKILLS/16-seo-schema-validator/code/scripts/schema_validator.py" --url "$URL" --json
# Stage 5: Local SEO (prompt-driven, use WebFetch + WebSearch)
# Stage 6: Search Console (requires GSC API credentials)
```
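One way to synthesize the per-stage results: redirect each `--json` run to a file, then merge. A minimal sketch, assuming each stage's output was saved as a `stage_*.json` file (the filenames are hypothetical, chosen for this example):

```python
import json
from pathlib import Path

# Hypothetical per-stage output files, e.g. from `... --json > stage_technical.json`.
STAGE_FILES = {
    "technical": "stage_technical.json",
    "on_page": "stage_on_page.json",
    "core_web_vitals": "stage_cwv.json",
    "schema": "stage_schema.json",
}

def merge_stages(directory):
    """Collect score and issues from each stage file that exists."""
    stages = {}
    for name, filename in STAGE_FILES.items():
        path = Path(directory) / filename
        if not path.exists():
            continue  # stage skipped or failed; leave it out of the merge
        data = json.loads(path.read_text())
        stages[name] = {"score": data.get("score"), "issues": data.get("issues", [])}
    return stages
```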
## Health Score (Weighted 0-100)
| Category | Weight |
|----------|--------|
| Technical SEO | 20% |
| On-Page SEO | 20% |
| Core Web Vitals | 25% |
| Schema | 15% |
| Local SEO | 10% |
| Search Console | 10% |
Scores per category are normalized to 0-100. Skipped stages redistribute their weight proportionally.
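The weighting and redistribution can be sketched as follows, mirroring the `WEIGHTS` table in `seo_audit_orchestrator.py`:

```python
# Category weights from the table above; they sum to 1.0.
WEIGHTS = {
    "technical": 0.20,
    "on_page": 0.20,
    "core_web_vitals": 0.25,
    "schema": 0.15,
    "local_seo": 0.10,
    "search_console": 0.10,
}

def health_score(scores, skipped=frozenset()):
    """Weighted 0-100 score; skipped stages yield their weight proportionally."""
    active = {k: w for k, w in WEIGHTS.items() if k not in skipped}
    total = sum(active.values())
    if total == 0:
        return 0.0
    return round(sum(scores[k] * (w / total) for k, w in active.items()), 1)
```

With the sample stage scores from the output format below (85, 70, 60, 80, 65, 75) this yields 72.0; skipping Local SEO and Search Console renormalizes the remaining 80% of weight back to 100%.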
## Output Format
```json
{
"url": "https://example.com",
"audit_date": "2025-01-15",
"health_score": 72,
"grade": "B",
"stages": {
"technical": {"score": 85, "issues": [], "weight": 0.20},
"on_page": {"score": 70, "issues": [], "weight": 0.20},
"core_web_vitals": {"score": 60, "issues": [], "weight": 0.25},
"schema": {"score": 80, "issues": [], "weight": 0.15},
"local_seo": {"score": 65, "issues": [], "weight": 0.10},
"search_console": {"score": 75, "issues": [], "weight": 0.10}
},
"critical_issues": [],
"recommendations": []
}
```
## Grading Scale
| Score | Grade | Status |
|-------|-------|--------|
| 90-100 | A | Excellent |
| 80-89 | B+ | Good |
| 70-79 | B | Above Average |
| 60-69 | C | Needs Improvement |
| 40-59 | D | Poor |
| 0-39 | F | Critical |
## Notion Output (Required)
**IMPORTANT**: Summary report MUST be saved to the OurDigital SEO Audit Log database.
### Database Configuration
| Field | Value |
|-------|-------|
| Database ID | `2c8581e5-8a1e-8035-880b-e38cefc2f3ef` |
### Summary Page Title
`종합 SEO 감사 보고서 - [domain] - YYYY-MM-DD`
### Required Properties
| Property | Type | Value |
|----------|------|-------|
| Issue | Title | `종합 SEO 감사 보고서 - [domain] - YYYY-MM-DD` |
| Site | URL | Audited website URL |
| Category | Select | `Comprehensive Audit` |
| Priority | Select | Based on health score (Critical if <40, High if <60, Medium if <80, Low if ≥80) |
| Found Date | Date | Audit date (YYYY-MM-DD) |
| Audit ID | Rich Text | `COMP-YYYYMMDD-001` |
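The Priority rule in the table above maps to the thresholds used by `get_priority` in `seo_audit_orchestrator.py`:

```python
def get_priority(health_score):
    """Map a 0-100 health score to the Notion Priority select."""
    if health_score < 40:
        return "Critical"
    if health_score < 60:
        return "High"
    if health_score < 80:
        return "Medium"
    return "Low"
```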
### Individual Issues
Create one Notion page per Critical/High finding with:
- Appropriate Category (Technical SEO, On-page SEO, Performance, Schema/Structured Data, Local SEO)
- Priority matching the finding severity
- Audit ID linking back to the comprehensive audit
### Language Guidelines
- Report content in Korean (한국어)
- Keep technical English terms as-is (e.g., SEO Audit, Core Web Vitals, Schema Markup)
- URLs and code remain unchanged

File: 11-seo-comprehensive-audit/code/scripts/requirements.txt

@@ -0,0 +1,5 @@
# 11-seo-comprehensive-audit dependencies
# Sub-skill dependencies are managed in their own requirements.txt files
requests>=2.31.0
python-dotenv>=1.0.0
rich>=13.7.0

File: 11-seo-comprehensive-audit/code/scripts/seo_audit_orchestrator.py

@@ -0,0 +1,348 @@
#!/usr/bin/env python3
"""
SEO Comprehensive Audit Orchestrator
Runs 6 sub-skill audits sequentially, merges results, and computes a weighted
health score (0-100).
Usage:
python seo_audit_orchestrator.py --url https://example.com
python seo_audit_orchestrator.py --url https://example.com --skip-local --skip-gsc
python seo_audit_orchestrator.py --url https://example.com --json
"""
import argparse
import json
import subprocess
import sys
from datetime import datetime
from pathlib import Path
from urllib.parse import urlparse
# Health score weights (must sum to 1.0)
WEIGHTS = {
"technical": 0.20,
"on_page": 0.20,
"core_web_vitals": 0.25,
"schema": 0.15,
"local_seo": 0.10,
"search_console": 0.10,
}
GRADE_THRESHOLDS = [
(90, "A", "Excellent"),
(80, "B+", "Good"),
(70, "B", "Above Average"),
(60, "C", "Needs Improvement"),
(40, "D", "Poor"),
(0, "F", "Critical"),
]
def get_repo_root():
"""Find the git repository root."""
try:
        # Anchor the lookup at this script's location so the result does not
        # depend on the caller's working directory; also tolerate a missing git binary.
        result = subprocess.run(
            ["git", "-C", str(Path(__file__).resolve().parent),
             "rev-parse", "--show-toplevel"],
            capture_output=True, text=True, check=True
        )
        return Path(result.stdout.strip())
    except (subprocess.CalledProcessError, FileNotFoundError):
# Fallback: walk up from this script's location
current = Path(__file__).resolve().parent
while current != current.parent:
if (current / ".git").exists():
return current
current = current.parent
print("Error: Could not find git repository root.", file=sys.stderr)
sys.exit(1)
def run_script(script_path, args, stage_name):
"""Run a sub-skill script and return parsed JSON output."""
if not script_path.exists():
return {
"status": "skipped",
"reason": f"Script not found: {script_path}",
"score": None,
"issues": [],
}
cmd = [sys.executable, str(script_path)] + args + ["--json"]
try:
result = subprocess.run(
cmd, capture_output=True, text=True, timeout=120
)
if result.returncode == 0 and result.stdout.strip():
return json.loads(result.stdout.strip())
else:
return {
"status": "error",
"reason": result.stderr.strip() or f"Exit code {result.returncode}",
"score": None,
"issues": [],
}
except subprocess.TimeoutExpired:
return {
"status": "timeout",
"reason": f"{stage_name} timed out after 120s",
"score": None,
"issues": [],
}
except json.JSONDecodeError as e:
return {
"status": "error",
"reason": f"Invalid JSON output: {e}",
"score": None,
"issues": [],
}
except Exception as e:
return {
"status": "error",
"reason": str(e),
"score": None,
"issues": [],
}
def extract_score(result):
"""Extract a 0-100 score from a sub-skill result."""
if isinstance(result, dict):
# Try common score fields
for key in ("score", "health_score", "overall_score"):
if key in result and isinstance(result[key], (int, float)):
return min(100, max(0, result[key]))
# Try nested core_web_vitals score
if "core_web_vitals" in result:
cwv = result["core_web_vitals"]
if isinstance(cwv, dict) and "score" in cwv:
return min(100, max(0, cwv["score"]))
return None
def extract_issues(result):
"""Extract issues list from a sub-skill result."""
if isinstance(result, dict):
issues = result.get("issues", [])
if isinstance(issues, list):
return issues
return []
def compute_health_score(stages, skipped):
    """Compute the weighted health score over stages that produced a score.

    Weight from skipped stages, and from stages that errored or are
    prompt-driven (score is None), is redistributed proportionally across the
    scored stages so a missing stage does not silently drag the score down.
    """
    scored = {
        name: weight
        for name, weight in WEIGHTS.items()
        if name not in skipped and stages.get(name, {}).get("score") is not None
    }
    total_weight = sum(scored.values())
    if total_weight == 0:
        return 0
    weighted_sum = sum(
        stages[name]["score"] * (weight / total_weight)
        for name, weight in scored.items()
    )
    return round(weighted_sum, 1)
def get_grade(score):
"""Return grade and status for a given score."""
for threshold, grade, status in GRADE_THRESHOLDS:
if score >= threshold:
return grade, status
return "F", "Critical"
def get_priority(score):
"""Return Notion priority based on health score."""
if score < 40:
return "Critical"
elif score < 60:
return "High"
elif score < 80:
return "Medium"
return "Low"
def main():
parser = argparse.ArgumentParser(description="Comprehensive SEO Audit Orchestrator")
parser.add_argument("--url", required=True, help="URL to audit")
parser.add_argument("--skip-local", action="store_true", help="Skip Local SEO stage")
parser.add_argument("--skip-gsc", action="store_true", help="Skip Search Console stage")
parser.add_argument("--json", action="store_true", help="Output JSON only")
parser.add_argument("--skills-dir", help="Path to custom-skills directory")
args = parser.parse_args()
url = args.url
domain = urlparse(url).netloc or url
audit_date = datetime.now().strftime("%Y-%m-%d")
audit_id = f"COMP-{datetime.now().strftime('%Y%m%d')}-001"
# Resolve skills directory
if args.skills_dir:
skills_dir = Path(args.skills_dir)
else:
repo_root = get_repo_root()
skills_dir = repo_root / "custom-skills"
skipped = set()
if args.skip_local:
skipped.add("local_seo")
if args.skip_gsc:
skipped.add("search_console")
stages = {}
# Stage 1: Technical SEO
if not args.json:
print("[1/6] Running Technical SEO audit...")
robots_result = run_script(
skills_dir / "12-seo-technical-audit/code/scripts/robots_checker.py",
["--url", url], "robots_checker"
)
sitemap_url = f"{url.rstrip('/')}/sitemap.xml"
sitemap_result = run_script(
skills_dir / "12-seo-technical-audit/code/scripts/sitemap_validator.py",
["--url", sitemap_url], "sitemap_validator"
)
tech_score = extract_score(robots_result)
sitemap_score = extract_score(sitemap_result)
if tech_score is not None and sitemap_score is not None:
combined_tech_score = round((tech_score + sitemap_score) / 2)
elif tech_score is not None:
combined_tech_score = tech_score
elif sitemap_score is not None:
combined_tech_score = sitemap_score
else:
combined_tech_score = None
stages["technical"] = {
"score": combined_tech_score,
"weight": WEIGHTS["technical"],
"issues": extract_issues(robots_result) + extract_issues(sitemap_result),
"raw": {"robots": robots_result, "sitemap": sitemap_result},
}
# Stage 2: On-Page SEO
if not args.json:
print("[2/6] Running On-Page SEO audit...")
on_page_result = run_script(
skills_dir / "13-seo-on-page-audit/code/scripts/page_analyzer.py",
["--url", url], "page_analyzer"
)
stages["on_page"] = {
"score": extract_score(on_page_result),
"weight": WEIGHTS["on_page"],
"issues": extract_issues(on_page_result),
"raw": on_page_result,
}
# Stage 3: Core Web Vitals
if not args.json:
print("[3/6] Running Core Web Vitals audit...")
cwv_result = run_script(
skills_dir / "14-seo-core-web-vitals/code/scripts/pagespeed_client.py",
["--url", url], "pagespeed_client"
)
stages["core_web_vitals"] = {
"score": extract_score(cwv_result),
"weight": WEIGHTS["core_web_vitals"],
"issues": extract_issues(cwv_result),
"raw": cwv_result,
}
# Stage 4: Schema Validation
if not args.json:
print("[4/6] Running Schema validation...")
schema_result = run_script(
skills_dir / "16-seo-schema-validator/code/scripts/schema_validator.py",
["--url", url], "schema_validator"
)
stages["schema"] = {
"score": extract_score(schema_result),
"weight": WEIGHTS["schema"],
"issues": extract_issues(schema_result),
"raw": schema_result,
}
# Stage 5: Local SEO
if "local_seo" not in skipped:
if not args.json:
print("[5/6] Local SEO stage (prompt-driven, skipping in script mode)...")
stages["local_seo"] = {
"score": None,
"weight": WEIGHTS["local_seo"],
"issues": [],
"raw": {"status": "prompt_driven", "note": "Local SEO requires interactive analysis via 18-seo-local-audit"},
}
else:
if not args.json:
print("[5/6] Skipping Local SEO...")
# Stage 6: Search Console
if "search_console" not in skipped:
if not args.json:
print("[6/6] Running Search Console analysis...")
gsc_result = run_script(
skills_dir / "15-seo-search-console/code/scripts/gsc_client.py",
["--url", url], "gsc_client"
)
stages["search_console"] = {
"score": extract_score(gsc_result),
"weight": WEIGHTS["search_console"],
"issues": extract_issues(gsc_result),
"raw": gsc_result,
}
else:
if not args.json:
print("[6/6] Skipping Search Console...")
# Compute health score
health_score = compute_health_score(stages, skipped)
grade, status = get_grade(health_score)
priority = get_priority(health_score)
# Collect critical issues
critical_issues = []
for stage_name, stage_data in stages.items():
for issue in stage_data.get("issues", []):
if isinstance(issue, dict) and issue.get("type") in ("error", "critical"):
critical_issues.append({**issue, "stage": stage_name})
# Build output
output = {
"url": url,
"domain": domain,
"audit_date": audit_date,
"audit_id": audit_id,
"health_score": health_score,
"grade": grade,
"status": status,
"priority": priority,
"skipped_stages": list(skipped),
"stages": {
k: {
"score": v.get("score"),
"weight": v.get("weight"),
"issues": v.get("issues", []),
}
for k, v in stages.items()
},
"critical_issues": critical_issues,
"notion": {
"database_id": "2c8581e5-8a1e-8035-880b-e38cefc2f3ef",
"title": f"종합 SEO 감사 보고서 - {domain} - {audit_date}",
"category": "Comprehensive Audit",
"priority": priority,
"audit_id": audit_id,
},
}
print(json.dumps(output, ensure_ascii=False, indent=2))
if __name__ == "__main__":
main()

File: 11-seo-comprehensive-audit/desktop/SKILL.md

@@ -0,0 +1,109 @@
---
name: seo-comprehensive-audit
description: |
Comprehensive SEO audit orchestrator. Runs 6-stage audit pipeline (Technical, On-Page, CWV, Schema, Local, GSC) and produces a unified report with weighted health score.
Triggers: comprehensive SEO, full SEO audit, 종합 SEO 감사, site audit, SEO health check.
---
# Comprehensive SEO Audit
## Purpose
Orchestrate a full-spectrum SEO audit by running 6 specialized analyses and synthesizing results into a unified health score and actionable report.
## Pipeline Stages
| # | Stage | Source Skill | Default |
|---|-------|-------------|---------|
| 1 | Technical SEO | 12-seo-technical-audit | Always |
| 2 | On-Page SEO | 13-seo-on-page-audit | Always |
| 3 | Core Web Vitals | 14-seo-core-web-vitals | Always |
| 4 | Schema Validation | 16-seo-schema-validator | Always |
| 5 | Local SEO | 18-seo-local-audit | Skippable |
| 6 | Search Console | 15-seo-search-console | Skippable |
## Workflow
### 1. Initialization
1. Receive target URL from user
2. Confirm which stages to run (all 6 by default)
3. Set up audit tracking ID: `COMP-YYYYMMDD-NNN`
### 2. Execute Stages (Sequential)
For each active stage:
1. Run the sub-skill analysis
2. Collect JSON results
3. Extract score and issues
### 3. Synthesis
1. Compute weighted health score (0-100)
2. Assign grade (A/B+/B/C/D/F)
3. Prioritize critical and high-severity findings
4. Generate recommendations
### 4. Notion Report
1. Create summary page: `종합 SEO 감사 보고서 - [domain] - YYYY-MM-DD`
2. Create individual pages for Critical/High findings
3. Database: `2c8581e5-8a1e-8035-880b-e38cefc2f3ef`
## Health Score Weights
| Category | Weight |
|----------|--------|
| Technical SEO | 20% |
| On-Page SEO | 20% |
| Core Web Vitals | 25% |
| Schema | 15% |
| Local SEO | 10% |
| Search Console | 10% |
Skipped stages redistribute weight proportionally.
## Output Format
```markdown
## 종합 SEO 감사 보고서: [domain]
**Health Score**: [score]/100 ([grade])
**Date**: YYYY-MM-DD
**Audit ID**: COMP-YYYYMMDD-NNN
### Stage Results
| Stage | Score | Issues |
|-------|-------|--------|
| Technical SEO | XX/100 | N issues |
| On-Page SEO | XX/100 | N issues |
| Core Web Vitals | XX/100 | N issues |
| Schema | XX/100 | N issues |
| Local SEO | XX/100 | N issues |
| Search Console | XX/100 | N issues |
### Critical Findings
1. [Finding with recommendation]
### Recommendations (Priority Order)
1. [Action item]
```
## MCP Tool Usage
### Firecrawl
```
mcp__firecrawl__scrape: Fetch page content for on-page and schema analysis
```
### Notion
```
mcp__notion__*: Create audit report pages in SEO database
```
### Perplexity
```
mcp__perplexity__search: Research best practices for recommendations
```
## Limitations
- Local SEO stage requires manual input for NAP/GBP data
- Search Console stage requires GSC API credentials
- Health score accuracy improves when all 6 stages are active

File: 11-seo-comprehensive-audit/desktop/skill.yaml

@@ -0,0 +1,12 @@
# Skill metadata (extracted from SKILL.md frontmatter)
name: seo-comprehensive-audit
description: |
Comprehensive SEO audit orchestrator. Runs 6-stage pipeline (Technical, On-Page, CWV, Schema, Local, GSC) and produces a unified report with weighted health score (0-100).
Triggers: comprehensive SEO, full SEO audit, 종합 SEO 감사, site audit, SEO health check.
# Optional fields
allowed-tools:
- mcp__firecrawl__*
- mcp__perplexity__*
- mcp__notion__*