Fix SEO skills 19-34 bugs, add slash commands, enhance reference-curator (#3)

* Fix SEO skill 34 bugs, Korean labels, and transition Ahrefs refs to our-seo-agent

P0: Fix report_aggregator.py — wrong SKILL_REGISTRY[33] mapping, missing
CATEGORY_WEIGHTS for 7 categories, and a `break` bug in health score
parsing that exited the loop even on parse failure.

P1: Remove VIEW tab references from skill 20, expand skill 32 docs,
replace Ahrefs MCP references across all 14 skills (19-28, 31-34)
with our-seo-agent CLI data source references.

P2: Fix Korean labels in executive_report.py and dashboard_generator.py,
add tenacity to base requirements, sync skill 34 base_client.py with
canonical version from skill 12.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Add Claude Code slash commands for SEO skills 19-34 and fix stale paths

Create 14 new slash command files for skills 19-28, 31-34 so they
appear as /seo-* commands in Claude Code. Also fix stale directory
paths in 8 existing commands (skills 12-18, 29-30) that referenced
pre-renumbering skill directories.

Update .gitignore to track .claude/commands/ while keeping other
.claude/ files ignored.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Add 8 slash commands, enhance reference-curator with depth/output options

- Add slash commands: ourdigital-brand-guide, notion-writer, notebooklm-agent,
  notebooklm-automation, notebooklm-studio, notebooklm-research,
  reference-curator, multi-agent-guide
- Add --depth (light/standard/deep/full) with Firecrawl parameter mapping
- Add --output with ~/Documents/reference-library/ default and user confirmation
- Increase --max-sources default from 10 to 100
- Rename /reference-curator-pipeline to /reference-curator
- Simplify web-crawler-orchestrator label to web-crawler in docs

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Author: Andrew Yim
Date: 2026-02-24 14:12:57 +09:00
Committed by: GitHub
Parent: 59e5c519f5
Commit: 397fa2aa5d
33 changed files with 1699 additions and 48 deletions


@@ -0,0 +1,71 @@
---
description: Set up multi-agent collaboration framework (Claude, Gemini, Codex) with guardrails
argument-hint: [--quick] [--full]
---
# Multi-Agent Guide
Set up multi-agent collaboration framework for projects where multiple AI agents work together.
## Triggers
- "set up multi-agent", "agent guardrails", "multi-agent collaboration"
## Quick Setup (Recommended)
Rapid deployment with minimal questions:
1. Assess project structure
2. Ask which agents participate (Claude/Gemini/Codex/Human)
3. Create framework files
4. Customize ownership matrix
## Files Created
```
your-project/
├── .agent-state/
│   ├── tasks.yaml            # Task registry
│   └── locks.yaml            # Lock registry
├── tools/
│   └── check-ownership.py    # Ownership verification
├── MULTI_AGENT_FRAMEWORK.md  # Consolidated rules
├── GEMINI.md                 # Sub-agent directive (if selected)
└── CODEX.md                  # Sub-agent directive (if selected)
```
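The `check-ownership.py` tool can be pictured as a glob lookup against an ownership matrix. A hypothetical sketch — the generated tool reads the matrix from the project's framework files rather than hard-coding it, and the patterns here are illustrative:

```python
import fnmatch

# Hypothetical ownership matrix: glob pattern -> owning agent.
# First matching pattern wins; "**" is the lead-agent catch-all.
OWNERSHIP = {
    "docs/**": "gemini",
    "tests/**": "codex",
    "**": "claude",
}

def owner_of(path):
    """Return the agent that owns a path, per the matrix order."""
    for pattern, agent in OWNERSHIP.items():
        if fnmatch.fnmatch(path, pattern):
            return agent
    return None

def may_edit(agent, path):
    """True if `agent` is allowed to modify `path`."""
    return owner_of(path) == agent
```

A pre-commit hook would call `may_edit(os.environ["AGENT_AUTHOR"], path)` for each staged file and abort on a violation.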
## Agent Hierarchy
```
          ┌─────────────────┐
          │   Claude Code   │
          │  (Lead Agent)   │
          └────────┬────────┘
     ┌─────────────┼─────────────┐
     v             v             v
┌──────────┐  ┌──────────┐  ┌──────────┐
│  Gemini  │  │  Codex   │  │  Human   │
│(Research)│  │ (Speed)  │  │ (Review) │
└──────────┘  └──────────┘  └──────────┘
```
## Commit Message Format
```
[Agent] type(scope): description
Examples:
[Claude] feat(core): implement new feature
[Gemini] docs(api): update API documentation
[Codex] test(models): add unit tests
```
## Post-Setup
1. Set agent identity: `export AGENT_AUTHOR=claude`
2. Review ownership matrix in `MULTI_AGENT_FRAMEWORK.md`
3. Install pre-commit hooks: `pre-commit install` (optional)
## Source
Full details: `custom-skills/91-multi-agent-guide/README.md`
Related commands: `custom-skills/91-multi-agent-guide/commands/`


@@ -0,0 +1,62 @@
---
description: Q&A agent using NotebookLM's Gemini-powered analysis with source citations
---
# NotebookLM Agent
Q&A agent that answers questions using NotebookLM's Gemini-powered analysis with source citations.
## Prerequisites
```bash
pip install notebooklm-py
playwright install chromium
notebooklm login # One-time auth
```
## Commands
```bash
# List notebooks
notebooklm list
# Set context
notebooklm use <notebook_id>
# Ask questions
notebooklm ask "What are the key findings?"
notebooklm ask "Elaborate on point 2" # continues conversation
notebooklm ask "New topic" --new # new conversation
# With citations (JSON output)
notebooklm ask "Summarize" --json
# Query specific sources
notebooklm ask "Compare" -s source1 -s source2
```
## Autonomy
**Auto-run:** `list`, `status`, `source list`, `ask`
**Ask first:** `delete`, `source add`
## JSON Output Format
```json
{
  "answer": "Response with [1] [2] citations",
  "references": [
    {"source_id": "...", "citation_number": 1, "cited_text": "..."}
  ]
}
```
## Error Recovery
| Error | Fix |
|-------|-----|
| No context | `notebooklm use <id>` |
| Auth error | `notebooklm login` |
## Source
Full details: `custom-skills/50-notebooklm-agent/code/CLAUDE.md`


@@ -0,0 +1,57 @@
---
description: Programmatic control over NotebookLM notebooks, sources, and artifacts
---
# NotebookLM Automation
Complete programmatic control over NotebookLM notebooks, sources, and artifacts.
## Prerequisites
```bash
pip install notebooklm-py
playwright install chromium
notebooklm login
```
## Commands
### Notebooks
```bash
notebooklm list [--json]
notebooklm create "Title" [--json]
notebooklm rename <id> "New Name"
notebooklm delete <id>
notebooklm use <id>
```
### Sources
```bash
notebooklm source add "https://..." [--json]
notebooklm source add ./file.pdf
notebooklm source list [--json]
notebooklm source delete <id>
notebooklm source wait <id>
```
### Artifacts
```bash
notebooklm artifact list [--json]
notebooklm artifact wait <id>
notebooklm artifact delete <id>
```
## Environment Variables
| Variable | Purpose |
|----------|---------|
| `NOTEBOOKLM_HOME` | Custom config dir |
| `NOTEBOOKLM_AUTH_JSON` | Inline auth (CI/CD) |
## Autonomy
**Auto-run:** `list`, `status`, `create`, `use`, `source add`
**Ask first:** `delete`, `rename`
## Source
Full details: `custom-skills/51-notebooklm-automation/code/CLAUDE.md`


@@ -0,0 +1,66 @@
---
description: NotebookLM research workflows - web research, Drive search, auto-import, source extraction
---
# NotebookLM Research
Research workflows: web research, Drive search, auto-import, source extraction.
## Prerequisites
```bash
pip install notebooklm-py
playwright install chromium
notebooklm login
```
## Research Commands
```bash
# Web research
notebooklm source add-research "topic"
notebooklm source add-research "topic" --mode deep --import-all
notebooklm source add-research "topic" --mode deep --no-wait
# Drive research
notebooklm source add-research "topic" --from drive
# Status and wait
notebooklm research status
notebooklm research wait --import-all
```
## Source Extraction
```bash
notebooklm source fulltext <id>
notebooklm source guide <id>
```
## Research Modes
| Mode | Sources | Time |
|------|---------|------|
| `fast` | 5-10 | seconds |
| `deep` | 20+ | 2-5 min |
## Subagent Pattern
```python
# 1. Kick off deep research without blocking (shell command):
#      notebooklm source add-research "topic" --mode deep --no-wait
# 2. Spawn a subagent to wait for completion and import the results:
Task(
    prompt="Wait for research and import: notebooklm research wait -n {id} --import-all",
    subagent_type="general-purpose",
)
```
## Autonomy
**Auto-run:** `research status`, `source fulltext`, `source guide`
**Ask first:** `source add-research`, `research wait --import-all`
## Source
Full details: `custom-skills/53-notebooklm-research/code/CLAUDE.md`


@@ -0,0 +1,73 @@
---
description: Generate NotebookLM Studio content - audio, video, quizzes, flashcards, slides, mind maps
---
# NotebookLM Studio
Generate NotebookLM Studio content: audio, video, quizzes, flashcards, slides, infographics, mind maps.
## Prerequisites
```bash
pip install notebooklm-py
playwright install chromium
notebooklm login
```
## Generate Commands
```bash
# Audio
notebooklm generate audio "instructions"
notebooklm generate audio --format debate --length longer
# Video
notebooklm generate video --style whiteboard
# Quiz & Flashcards
notebooklm generate quiz --difficulty hard
notebooklm generate flashcards --quantity more
# Visual
notebooklm generate slide-deck --format detailed
notebooklm generate infographic --orientation portrait
notebooklm generate mind-map
# Data
notebooklm generate data-table "description"
notebooklm generate report --format study_guide
```
## Download Commands
```bash
notebooklm artifact list # Check status
notebooklm download audio ./podcast.mp3
notebooklm download video ./video.mp4
notebooklm download quiz --format markdown ./quiz.md
notebooklm download flashcards --format json ./cards.json
notebooklm download slide-deck ./slides.pdf
notebooklm download mind-map ./mindmap.json
```
## Styles & Formats
**Video:** `classic`, `whiteboard`, `kawaii`, `anime`, `pixel`, `watercolor`, `neon`, `paper`, `sketch`
**Audio:** `deep-dive`, `brief`, `critique`, `debate`
## Timing
| Type | Time |
|------|------|
| Mind map | Instant |
| Quiz | 5-15 min |
| Audio | 10-20 min |
| Video | 15-45 min |
## Autonomy
**Auto-run:** `artifact list`
**Ask first:** `generate *`, `download *`
## Source
Full details: `custom-skills/52-notebooklm-studio/code/CLAUDE.md`


@@ -0,0 +1,63 @@
---
description: Push markdown content to Notion pages or databases
---
# Notion Writer
Push markdown content to Notion pages or databases via the Notion API.
## Triggers
- "write to Notion", "export to Notion", "노션에 쓰기"
## Capabilities
| Feature | Input | Output |
|---------|-------|--------|
| Page Content Append | Markdown + Page URL | Appended blocks |
| Page Content Replace | Markdown + Page URL | Replaced content |
| Database Row Create | Markdown + DB URL + Title | New database row |
| Connection Test | API token | Connection status |
## Environment
- `NOTION_TOKEN` / `NOTION_API_KEY` - Notion integration token (required)
## Scripts
```bash
cd ~/Projects/our-claude-skills/custom-skills/32-notion-writer/code/scripts
# Test connection
python notion_writer.py --test
# Page info
python notion_writer.py --page PAGE_URL --info
# Write to page (append)
python notion_writer.py --page PAGE_URL --file content.md
# Replace page content
python notion_writer.py --page PAGE_URL --file content.md --replace
# Create database row
python notion_writer.py --database DB_URL --title "New Entry" --file content.md
# From stdin
cat report.md | python notion_writer.py --page PAGE_URL --stdin
```
## Markdown Support
Headings, bulleted/numbered lists, to-do items, quotes, code blocks (with language), dividers, paragraphs.
## API Limits
| Limit | Value |
|-------|-------|
| Blocks per request | 100 |
| Text per block | 2,000 chars |
| Requests/sec | ~3 |
The script automatically batches large content.
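That batching can be sketched in a few lines. The helper names below are illustrative, not the actual `notion_writer.py` internals:

```python
MAX_BLOCKS_PER_REQUEST = 100   # Notion API: blocks per append call
MAX_TEXT_PER_BLOCK = 2000      # Notion API: characters per rich-text run

def chunk_blocks(blocks):
    """Yield lists of at most MAX_BLOCKS_PER_REQUEST blocks."""
    for i in range(0, len(blocks), MAX_BLOCKS_PER_REQUEST):
        yield blocks[i:i + MAX_BLOCKS_PER_REQUEST]

def split_text(text):
    """Split text into pieces that fit a single rich-text run."""
    return [text[i:i + MAX_TEXT_PER_BLOCK]
            for i in range(0, len(text), MAX_TEXT_PER_BLOCK)] or [""]
```

Each chunk then becomes one append request, keeping well under the ~3 req/sec limit with a small sleep between calls.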
## Source
Full details: `custom-skills/32-notion-writer/code/CLAUDE.md`


@@ -0,0 +1,69 @@
---
description: OurDigital brand standards, writing style, and visual identity reference
---
# OurDigital Brand Guide
Reference skill for OurDigital brand standards, writing style, and visual identity.
## Triggers
- "ourdigital brand guide", "our brand guide"
- "ourdigital style check", "our style check"
## Brand Foundation
| Element | Content |
|---------|---------|
| **Brand Name** | OurDigital Clinic |
| **Tagline** | 우리 디지털 클리닉 \| Your Digital Health Partner |
| **Mission** | 디지털 마케팅 클리닉 (digital marketing clinic) for SMBs, 자영업자 (self-employed), 프리랜서 (freelancers), 비영리단체 (nonprofits) |
| **Promise** | 진단-처방-측정 가능한 성장 (diagnose, prescribe, measure — measurable growth) |
### Core Values
| Value (가치) | English | Clinic Metaphor |
|------|---------|--------------|
| 데이터 중심 | Data-driven | 정밀 검사 (precision diagnostics) |
| 실행 지향 | In-Action | 실행 가능한 처방 (actionable prescription) |
| 마케팅 과학 | Marketing Science | 근거 중심 의학 (evidence-based medicine) |
## Channel Tone Matrix
| Channel | Domain | Personality | Tone |
|---------|--------|-------------|------|
| Main Hub | ourdigital.org | Professional & Confident | Data-driven, Solution-oriented |
| Blog | blog.ourdigital.org | Analytical & Personal | Educational, Thought-provoking |
| Journal | journal.ourdigital.org | Conversational & Poetic | Reflective, Cultural Observer |
| OurStory | ourstory.day | Intimate & Reflective | Authentic, Personal Journey |
## Writing Style
### Korean: 철학-기술 융합체, 역설 활용, 수사적 질문, 우울한 낙관주의
### English: Philosophical-Technical Hybridization, Paradox as Device, Rhetorical Questions, Melancholic Optimism
**Do's:** Use paradox, ask rhetorical questions, connect tech to human implications, blend Korean/English naturally
**Don'ts:** Avoid purely declarative tone, don't separate tech from cultural impact, avoid simplistic optimism
## Visual Identity
| Token | Color | HEX | Usage |
|-------|-------|-----|-------|
| --d-black | D.Black | #221814 | Footer, dark backgrounds |
| --d-olive | D.Olive | #cedc00 | Primary accent, CTA buttons |
| --d-green | D.Green | #287379 | Links hover, secondary accent |
| --d-blue | D.Blue | #0075c0 | Links |
| --d-beige | D.Beige | #f2f2de | Light text on dark |
| --d-gray | D.Gray | #ebebeb | Alt backgrounds |
**Typography:** Korean: Noto Sans KR | English: Noto Sans, Inter | Grid: 12-column responsive
## Brand Compliance Check
1. **Tone Match**: Does it match the channel's personality?
2. **Value Alignment**: Reflects Data-driven, In-Action, Marketing Science?
3. **Philosophy Check**: Precision + Empathy + Evidence present?
4. **Language Style**: Appropriate blend of Korean/English terms?
5. **Visual Consistency**: Uses approved color palette?
## Source
Full details: `custom-skills/01-ourdigital-brand-guide/desktop/SKILL.md`


@@ -0,0 +1,233 @@
---
description: Full reference curation pipeline - discovery, crawl, store, distill, review, export with configurable depth
argument-hint: <topic|urls|manifest> [--depth light|standard|deep|full] [--output ~/Documents/reference-library/] [--max-sources 100] [--auto-approve] [--export-format project_files]
allowed-tools: WebSearch, WebFetch, Read, Write, Bash, Grep, Glob, Task
---
# Reference Curator Pipeline
Full-stack orchestration of the 6-skill reference curation workflow.
## Input Modes
| Mode | Input Example | Pipeline Start |
|------|---------------|----------------|
| **Topic** | `"Claude system prompts"` | reference-discovery |
| **URLs** | `https://docs.anthropic.com/...` | web-crawler (skip discovery) |
| **Manifest** | `./manifest.json` | web-crawler (resume) |
## Arguments
- `<input>`: Required. Topic string, URL(s), or manifest file path
- `--depth`: Crawl depth level (default: `standard`). See Depth Levels below
- `--output`: Output directory path (default: `~/Documents/reference-library/`)
- `--max-sources`: Max sources to discover (default: 100)
- `--max-pages`: Max pages per source to crawl (default varies by depth)
- `--auto-approve`: Auto-approve scores above threshold
- `--threshold`: Approval threshold (default: 0.85)
- `--max-iterations`: Max QA loop iterations per document (default: 3)
- `--export-format`: `project_files`, `fine_tuning`, `jsonl` (default: project_files)
- `--include-subdomains`: Include subdomains in site mapping (default: false)
- `--follow-external`: Follow external links found in content (default: false)
## Output Directory
**IMPORTANT: Never store output in any claude-skills, `.claude/`, or skill-related directory.**
The `--output` argument sets the base path for all pipeline output. If omitted, the default is `~/Documents/reference-library/`.
### Directory Structure
```
{output}/
├── {topic-slug}/                 # One folder per pipeline run
│   ├── README.md                 # Index with table of contents
│   ├── 00-page-name.md           # Individual page files
│   ├── 01-page-name.md
│   ├── ...
│   ├── {topic-slug}-complete.md  # Combined bundle (all pages)
│   └── manifest.json             # Crawl metadata
├── pipeline_state/               # Resume state (auto-managed)
│   └── run_XXX/state.json
└── exports/                      # Fine-tuning / JSONL exports
```
### Resolution Rules
1. If `--output` is provided, use that path exactly
2. If not provided, use `~/Documents/reference-library/`
3. **Before writing any files**, check if the output directory exists
4. If the directory does NOT exist, **ask the user for permission** before creating it:
- Show the full resolved path that will be created
- Wait for explicit user approval
- Only then run `mkdir -p <path>`
5. The topic slug is derived from the input (URL domain+path or topic string)
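The resolution rules (minus the interactive confirmation in steps 3-4, which stays manual by design) can be sketched as follows; the slug derivation here is a simplification, not the pipeline's exact algorithm:

```python
import os
import re

DEFAULT_OUTPUT = os.path.expanduser("~/Documents/reference-library")

def slugify(text):
    """Derive a topic slug from a URL or topic string (simplified)."""
    text = re.sub(r"^https?://", "", text).strip("/")
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

def resolve_output(user_output=None, topic="untitled"):
    """Rules 1-2: an explicit --output wins; else the default library."""
    base = os.path.expanduser(user_output) if user_output else DEFAULT_OUTPUT
    return os.path.join(base, slugify(topic))
```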
### Examples
```bash
# Uses default: ~/Documents/reference-library/glossary-for-wordpress/
/reference-curator https://docs.codeat.co/glossary/
# Custom path: /tmp/research/mcp-docs/
/reference-curator "MCP servers" --output /tmp/research
# Explicit home subfolder: ~/Projects/client-docs/api-reference/
/reference-curator https://api.example.com/docs --output ~/Projects/client-docs
```
## Depth Levels
### `--depth light`
Fast scan for quick reference. Main content only, minimal crawling.
| Parameter | Value |
|-----------|-------|
| `onlyMainContent` | `true` |
| `formats` | `["markdown"]` |
| `maxDiscoveryDepth` | 1 |
| `max-pages` default | 20 |
| Map limit | 50 |
| `deduplicateSimilarURLs` | `true` |
| Follow sub-links | No |
| JS rendering wait | None |
**Best for:** Quick lookups, single-page references, API docs you already know.
### `--depth standard` (default)
Balanced crawl. Main content with links for cross-referencing.
| Parameter | Value |
|-----------|-------|
| `onlyMainContent` | `true` |
| `formats` | `["markdown", "links"]` |
| `maxDiscoveryDepth` | 2 |
| `max-pages` default | 50 |
| Map limit | 100 |
| `deduplicateSimilarURLs` | `true` |
| Follow sub-links | Same-domain only |
| JS rendering wait | None |
**Best for:** Documentation sites, plugin guides, knowledge bases.
### `--depth deep`
Thorough crawl. Full page content including sidebars, nav, and related pages.
| Parameter | Value |
|-----------|-------|
| `onlyMainContent` | `false` |
| `formats` | `["markdown", "links", "html"]` |
| `maxDiscoveryDepth` | 3 |
| `max-pages` default | 150 |
| Map limit | 300 |
| `deduplicateSimilarURLs` | `true` |
| Follow sub-links | Same-domain + linked resources |
| JS rendering wait | `waitFor: 3000` |
| `includeSubdomains` | `true` |
**Best for:** Complete product documentation, research material, sites with sidebars/code samples hidden behind tabs.
### `--depth full`
Exhaustive crawl. Everything captured including raw HTML, screenshots, and external references.
| Parameter | Value |
|-----------|-------|
| `onlyMainContent` | `false` |
| `formats` | `["markdown", "html", "rawHtml", "links"]` |
| `maxDiscoveryDepth` | 5 |
| `max-pages` default | 500 |
| Map limit | 1000 |
| `deduplicateSimilarURLs` | `false` |
| Follow sub-links | All (same-domain + external references) |
| JS rendering wait | `waitFor: 5000` |
| `includeSubdomains` | `true` |
| Screenshots | Capture for JS-heavy pages |
**Best for:** Archival, migration references, preserving sites, training data collection.
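The four depth tables collapse into a single lookup structure. This is a reading aid mirroring the parameters listed above, not the pipeline's actual configuration file:

```python
# Depth presets summarizing the four tables above.
DEPTH_PRESETS = {
    "light": {
        "onlyMainContent": True, "formats": ["markdown"],
        "maxDiscoveryDepth": 1, "max_pages": 20, "map_limit": 50,
        "deduplicateSimilarURLs": True, "waitFor": 0,
        "includeSubdomains": False,
    },
    "standard": {
        "onlyMainContent": True, "formats": ["markdown", "links"],
        "maxDiscoveryDepth": 2, "max_pages": 50, "map_limit": 100,
        "deduplicateSimilarURLs": True, "waitFor": 0,
        "includeSubdomains": False,
    },
    "deep": {
        "onlyMainContent": False, "formats": ["markdown", "links", "html"],
        "maxDiscoveryDepth": 3, "max_pages": 150, "map_limit": 300,
        "deduplicateSimilarURLs": True, "waitFor": 3000,
        "includeSubdomains": True,
    },
    "full": {
        "onlyMainContent": False,
        "formats": ["markdown", "html", "rawHtml", "links"],
        "maxDiscoveryDepth": 5, "max_pages": 500, "map_limit": 1000,
        "deduplicateSimilarURLs": False, "waitFor": 5000,
        "includeSubdomains": True,
    },
}
```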
## Depth Comparison
```
light ████░░░░░░░░░░░░ Speed: fastest Pages: ~20 Content: main only
standard ████████░░░░░░░░ Speed: fast Pages: ~50 Content: main + links
deep ████████████░░░░ Speed: moderate Pages: ~150 Content: full page + HTML
full ████████████████ Speed: slow Pages: ~500 Content: everything + raw
```
## Pipeline Stages
```
1. reference-discovery (topic mode only)
2. web-crawler         ← depth controls this stage
3. content-repository
4. content-distiller <--------------+
5. quality-reviewer                 |
   +-- APPROVE -> export            |
   +-- REFACTOR --------------------+
   +-- DEEP_RESEARCH -> crawler ----+
   +-- REJECT -> archive
6. markdown-exporter
```
## Crawl Execution by Depth
When executing the crawl (Stage 2), apply the depth settings to the Firecrawl tools:
### Site Mapping (`firecrawl_map`)
```
firecrawl_map:
  url: <target>
  limit: {depth.map_limit}
  includeSubdomains: {depth.includeSubdomains}
```
### Page Scraping (`firecrawl_scrape`)
```
firecrawl_scrape:
  url: <page>
  formats: {depth.formats}
  onlyMainContent: {depth.onlyMainContent}
  waitFor: {depth.waitFor}          # deep/full only
  excludeTags: ["nav", "footer"]    # light/standard only
```
### Batch Crawling (`firecrawl_crawl`) - for deep/full only
```
firecrawl_crawl:
  url: <target>
  maxDiscoveryDepth: {depth.maxDiscoveryDepth}
  limit: {depth.max_pages}
  deduplicateSimilarURLs: {depth.dedup}
  scrapeOptions:
    formats: {depth.formats}
    onlyMainContent: {depth.onlyMainContent}
    waitFor: {depth.waitFor}
```
## Example Usage
```bash
# Quick scan of a single doc page
/reference-curator https://docs.example.com/api --depth light
# Standard documentation crawl (default)
/reference-curator "Glossary for WordPress" --max-sources 5
# Deep crawl capturing full page content and HTML
/reference-curator https://docs.codeat.co/glossary/ --depth deep
# Full archival crawl with all formats
/reference-curator https://docs.anthropic.com --depth full --max-pages 300
# Deep crawl with auto-approval and fine-tuning export
/reference-curator "MCP servers" --depth deep --auto-approve --export-format fine_tuning
```
## Related Sub-commands
Individual stages available at: `custom-skills/90-reference-curator/commands/`
- `/reference-discovery`, `/web-crawler`, `/content-repository`
- `/content-distiller`, `/quality-reviewer`, `/markdown-exporter`
## Source
Full details: `custom-skills/90-reference-curator/README.md`


@@ -0,0 +1,63 @@
---
description: AI search visibility - citations, brand radar, share of voice tracking
---
# SEO AI Visibility
Track brand visibility in AI-generated search answers with citation analysis and share of voice monitoring.
## Triggers
- "AI visibility", "AI search", "AI citations", "AI share of voice"
## Capabilities
1. **AI Impressions Tracking** - How often brand appears in AI answers
2. **AI Mentions Monitoring** - Brand mention frequency across AI engines
3. **Share of Voice** - AI search SOV vs competitors with trend analysis
4. **Citation Analysis** - Which domains and pages AI engines cite
5. **AI Response Analysis** - How the brand appears in AI-generated answers
6. **Competitor Comparison** - Side-by-side AI visibility benchmarking
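Share of voice here is each domain's slice of total AI mentions across the tracked engines. An illustrative sketch, not `ai_visibility_tracker.py`'s internals:

```python
def share_of_voice(mentions):
    """mentions: dict of domain -> AI mention count.
    Returns domain -> SOV percentage rounded to one decimal."""
    total = sum(mentions.values())
    if total == 0:
        return {d: 0.0 for d in mentions}
    return {d: round(100 * n / total, 1) for d, n in mentions.items()}
```

Trend analysis is then just this computation repeated over historical snapshots.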
## Scripts
```bash
# AI visibility overview
python custom-skills/27-seo-ai-visibility/code/scripts/ai_visibility_tracker.py \
--target example.com --json
# With competitor comparison
python custom-skills/27-seo-ai-visibility/code/scripts/ai_visibility_tracker.py \
--target example.com --competitor comp1.com --competitor comp2.com --json
# Historical trend (impressions/mentions)
python custom-skills/27-seo-ai-visibility/code/scripts/ai_visibility_tracker.py \
--target example.com --history --json
# Share of voice analysis
python custom-skills/27-seo-ai-visibility/code/scripts/ai_visibility_tracker.py \
--target example.com --sov --json
# AI citation analysis
python custom-skills/27-seo-ai-visibility/code/scripts/ai_citation_analyzer.py \
--target example.com --json
# Cited domains analysis
python custom-skills/27-seo-ai-visibility/code/scripts/ai_citation_analyzer.py \
--target example.com --cited-domains --json
# Cited pages analysis
python custom-skills/27-seo-ai-visibility/code/scripts/ai_citation_analyzer.py \
--target example.com --cited-pages --json
# AI response content analysis
python custom-skills/27-seo-ai-visibility/code/scripts/ai_citation_analyzer.py \
--target example.com --responses --json
```
## Output
- AI impressions and mentions with trend indicators
- Share of voice percentage vs competitors
- Cited domains and pages ranked by frequency
- AI response samples showing brand context
- Recommendations for improving AI visibility
- Reports saved to Notion SEO Audit Log (Category: AI Search Visibility)


@@ -0,0 +1,53 @@
---
description: SEO competitor intelligence and benchmarking
---
# SEO Competitor Intelligence
Competitor profiling, benchmarking, and threat scoring for comprehensive SEO competitive analysis.
## Triggers
- "competitor analysis", "competitive intel", "경쟁사 분석"
## Capabilities
1. **Competitor Profiling** - Auto-discover competitors, build profile cards (DR, traffic, keywords, backlinks, content volume)
2. **Head-to-Head Matrix** - Comparison across all SEO dimensions
3. **Keyword Overlap** - Shared, unique, and gap keyword analysis
4. **Threat Scoring** - 0-100 score based on growth trajectory, keyword overlap, DR gap
5. **Korean Market** - Naver Blog/Cafe presence detection for competitors
6. **Competitive Monitoring** - Traffic trends, keyword movement, content velocity over time
7. **Market Share** - Organic traffic-based market share estimation
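The threat-score inputs above suggest a weighted formula. A hypothetical sketch — the weights and caps are illustrative, not `competitor_profiler.py`'s actual formula:

```python
def threat_score(growth_pct, keyword_overlap_pct, dr_gap):
    """0-100 threat score from growth, keyword overlap, and DR gap.

    growth_pct: competitor traffic growth (e.g. 25 = +25%)
    keyword_overlap_pct: share of your keywords they also rank for (0-100)
    dr_gap: competitor DR minus your DR (positive = they are stronger)
    """
    growth = max(0.0, min(growth_pct, 100.0))    # cap contribution at 100
    overlap = max(0.0, min(keyword_overlap_pct, 100.0))
    dr = max(0.0, min(dr_gap, 50.0)) * 2         # a 50-point gap maxes out
    score = 0.4 * growth + 0.4 * overlap + 0.2 * dr
    return round(min(score, 100.0))
```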
## Scripts
```bash
# Auto-discover and profile competitors
python custom-skills/31-seo-competitor-intel/code/scripts/competitor_profiler.py \
--target https://example.com --json
# Specify competitors manually
python custom-skills/31-seo-competitor-intel/code/scripts/competitor_profiler.py \
--target https://example.com --competitor https://comp1.com --competitor https://comp2.com --json
# Include Korean market analysis
python custom-skills/31-seo-competitor-intel/code/scripts/competitor_profiler.py \
--target https://example.com --korean-market --json
# 30-day competitive monitoring
python custom-skills/31-seo-competitor-intel/code/scripts/competitive_monitor.py \
--target https://example.com --period 30 --json
# Traffic trend comparison (90 days)
python custom-skills/31-seo-competitor-intel/code/scripts/competitive_monitor.py \
--target https://example.com --scope traffic --period 90 --json
```
## Output
- Competitor profile cards with DR, traffic, keywords, referring domains
- Head-to-head comparison matrix
- Keyword overlap analysis (shared/unique/gap)
- Threat scores (0-100) per competitor
- Traffic trend and market share charts
- Alerts for significant competitive movements
- Saved to Notion SEO Audit Log (Category: Competitor Intelligence, Audit ID: COMP-YYYYMMDD-NNN)


@@ -0,0 +1,56 @@
---
description: Content audit, decay detection, gap analysis, and brief generation
---
# SEO Content Strategy
Content inventory, performance scoring, decay detection, topic gap analysis, cluster mapping, and SEO content brief generation.
## Triggers
- "content strategy", "content audit", "콘텐츠 전략"
## Capabilities
1. **Content Audit** - Inventory via sitemap crawl with performance scoring
2. **Content Decay Detection** - Identify pages losing traffic over time
3. **Content Type Classification** - Blog, product, service, landing, resource
4. **Topic Gap Analysis** - Find missing topics vs competitors with cluster mapping
5. **Editorial Calendar** - Priority-scored publishing calendar from gap analysis
6. **Content Brief Generation** - SEO briefs with H2/H3 outlines, keyword targets, and word count recommendations
7. **Korean Content Analysis** - Naver Blog format and review/후기 content patterns
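Decay detection can be pictured as a recent-vs-baseline traffic comparison. An illustrative sketch, not `content_auditor.py`'s actual logic; the window and threshold values are assumptions:

```python
def is_decaying(monthly_traffic, window=3, threshold=0.7):
    """True if the mean of the last `window` months falls below
    `threshold` x the mean of the preceding months."""
    if len(monthly_traffic) < 2 * window:
        return False  # not enough history to judge
    recent = monthly_traffic[-window:]
    baseline = monthly_traffic[:-window]
    base_avg = sum(baseline) / len(baseline)
    if base_avg == 0:
        return False
    return (sum(recent) / window) < threshold * base_avg
```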
## Scripts
```bash
# Full content audit
python custom-skills/23-seo-content-strategy/code/scripts/content_auditor.py \
--url https://example.com --json
# Detect decaying content
python custom-skills/23-seo-content-strategy/code/scripts/content_auditor.py \
--url https://example.com --decay --json
# Filter by content type
python custom-skills/23-seo-content-strategy/code/scripts/content_auditor.py \
--url https://example.com --type blog --json
# Content gap analysis with topic clusters
python custom-skills/23-seo-content-strategy/code/scripts/content_gap_analyzer.py \
--target https://example.com --competitor https://comp1.com --clusters --json
# Generate content brief for keyword
python custom-skills/23-seo-content-strategy/code/scripts/content_brief_generator.py \
--keyword "치과 임플란트 비용" --url https://example.com --json
# Brief with competitor analysis
python custom-skills/23-seo-content-strategy/code/scripts/content_brief_generator.py \
--keyword "dental implant cost" --url https://example.com --competitors 5 --json
```
## Output
- Content inventory with page count by type and average performance score
- Decaying content list with traffic trend data
- Topic gaps and cluster map with pillar/cluster pages
- Editorial calendar with priority scores
- Content briefs with outline, keywords, word count targets, and internal link suggestions
- Reports saved to Notion SEO Audit Log (Category: Content Strategy, ID: CONTENT-YYYYMMDD-NNN)


@@ -0,0 +1,54 @@
---
description: Crawl budget optimization and log analysis
---
# SEO Crawl Budget
Server access log analysis, bot profiling, and crawl budget waste identification.
## Triggers
- "crawl budget", "log analysis", "크롤 예산"
## Capabilities
1. **Log Parsing** - Parse Nginx, Apache, CloudFront access logs (streaming for >1GB files)
2. **Bot Identification** - Googlebot, Yeti/Naver, Bingbot, Daumoa/Kakao, and others by User-Agent
3. **Per-Bot Profiling** - Crawl frequency, depth distribution, status codes, crawl patterns
4. **Waste Detection** - Parameter URLs, low-value pages, redirect chains, soft 404s, duplicate URLs
5. **Orphan Pages** - Pages in sitemap but never crawled, crawled but not in sitemap
6. **Optimization Plan** - robots.txt suggestions, URL parameter handling, noindex recommendations
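Steps 1-2 amount to matching a combined-format log line and classifying its User-Agent. A minimal sketch — the real `log_parser.py` handles more log formats, streaming, and edge cases:

```python
import re

# UA patterns for the bots named above (illustrative subset).
BOT_PATTERNS = {
    "googlebot": re.compile(r"Googlebot", re.I),
    "naver": re.compile(r"Yeti", re.I),
    "bingbot": re.compile(r"bingbot", re.I),
    "kakao": re.compile(r"Daumoa", re.I),
}

# Nginx/Apache "combined" log format (simplified).
LINE_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" '
    r'(?P<status>\d{3}) \S+ "[^"]*" "(?P<ua>[^"]*)"'
)

def classify(user_agent):
    for name, pattern in BOT_PATTERNS.items():
        if pattern.search(user_agent):
            return name
    return "other"

def parse_line(line):
    """Parse one combined-format line into a dict, or None on mismatch."""
    m = LINE_RE.match(line)
    if not m:
        return None
    rec = m.groupdict()
    rec["bot"] = classify(rec["ua"])
    return rec
```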
## Scripts
```bash
# Parse Nginx access log
python custom-skills/32-seo-crawl-budget/code/scripts/log_parser.py \
--log-file /var/log/nginx/access.log --json
# Parse Apache log, filter by Googlebot
python custom-skills/32-seo-crawl-budget/code/scripts/log_parser.py \
--log-file /var/log/apache2/access.log --format apache --bot googlebot --json
# Parse gzipped log in streaming mode
python custom-skills/32-seo-crawl-budget/code/scripts/log_parser.py \
--log-file access.log.gz --streaming --json
# Full crawl budget analysis with sitemap comparison
python custom-skills/32-seo-crawl-budget/code/scripts/crawl_budget_analyzer.py \
--log-file access.log --sitemap https://example.com/sitemap.xml --json
# Waste identification only
python custom-skills/32-seo-crawl-budget/code/scripts/crawl_budget_analyzer.py \
--log-file access.log --scope waste --json
# Orphan page detection
python custom-skills/32-seo-crawl-budget/code/scripts/crawl_budget_analyzer.py \
--log-file access.log --sitemap https://example.com/sitemap.xml --scope orphans --json
```
## Output
- Bot request counts, status code distribution, top crawled URLs per bot
- Crawl waste breakdown (parameter URLs, redirects, soft 404s, duplicates)
- Orphan page lists (in sitemap not crawled, crawled not in sitemap)
- Efficiency score (0-100) with optimization recommendations
- Saved to Notion SEO Audit Log (Category: Crawl Budget, Audit ID: CRAWL-YYYYMMDD-NNN)


@@ -0,0 +1,56 @@
---
description: E-commerce SEO audit and product schema validation
---
# SEO E-Commerce
Product page SEO audit, product schema validation, category taxonomy analysis, and Korean marketplace presence checking.
## Triggers
- "e-commerce SEO", "product SEO", "이커머스 SEO"
## Capabilities
1. **Product Page Audit** - Titles, meta descriptions, image alt text, H1 structure
2. **Category Taxonomy Analysis** - Depth, breadcrumb implementation, faceted navigation
3. **Duplicate Content Detection** - Parameter URLs, product variants, pagination issues
4. **Pagination SEO** - Validate rel=prev/next, canonical tags, infinite scroll handling
5. **Product Schema Validation** - Product, Offer, AggregateRating, Review, BreadcrumbList
6. **Rich Result Eligibility** - Required and optional property completeness checks
7. **Korean Marketplace Presence** - Naver Smart Store, Coupang, Gmarket, 11번가 detection
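The schema validation in steps 5-6 boils down to checking required properties on the parsed JSON-LD. A simplified sketch assuming a single `Product` object with a single offer; the property lists follow Google's Product rich-result documentation, and the real `product_schema_checker.py` covers far more:

```python
import json

REQUIRED = {"name"}                          # Product-level requirements
OFFER_REQUIRED = {"price", "priceCurrency"}  # Offer-level requirements

def check_product(jsonld_str):
    """Return a list of missing-property issues for one JSON-LD blob."""
    data = json.loads(jsonld_str)
    if data.get("@type") != "Product":
        return ["not a Product schema"]
    issues = [f"missing: {p}" for p in sorted(REQUIRED - data.keys())]
    offer = data.get("offers")
    if not offer:
        issues.append("missing: offers")
    else:
        issues += [f"offers missing: {p}"
                   for p in sorted(OFFER_REQUIRED - offer.keys())]
    return issues
```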
## Scripts
```bash
# Full e-commerce SEO audit
python custom-skills/24-seo-ecommerce/code/scripts/ecommerce_auditor.py \
--url https://example.com --json
# Product page audit only
python custom-skills/24-seo-ecommerce/code/scripts/ecommerce_auditor.py \
--url https://example.com --scope products --json
# Category taxonomy analysis
python custom-skills/24-seo-ecommerce/code/scripts/ecommerce_auditor.py \
--url https://example.com --scope categories --json
# Korean marketplace presence check
python custom-skills/24-seo-ecommerce/code/scripts/ecommerce_auditor.py \
--url https://example.com --korean-marketplaces --json
# Validate product schema on single page
python custom-skills/24-seo-ecommerce/code/scripts/product_schema_checker.py \
--url https://example.com/product/123 --json
# Batch validate from sitemap (sample 50 pages)
python custom-skills/24-seo-ecommerce/code/scripts/product_schema_checker.py \
--sitemap https://example.com/product-sitemap.xml --sample 50 --json
```
## Output
- Product page issue list by severity (critical, high, medium, low)
- Category structure analysis (depth, breadcrumbs, faceted nav issues)
- Schema validation results (pages with/without schema, common errors)
- Rich result eligibility assessment
- Korean marketplace presence status (Naver Smart Store, Coupang, Gmarket)
- Reports saved to Notion SEO Audit Log (Category: E-Commerce SEO, ID: ECOM-YYYYMMDD-NNN)

View File

@@ -20,11 +20,11 @@ Keyword strategy and content architecture for gateway pages.
```bash
# Analyze keyword
python custom-skills/29-seo-gateway-architect/code/scripts/keyword_analyzer.py \
--topic "눈 성형"
# With location targeting
python custom-skills/29-seo-gateway-architect/code/scripts/keyword_analyzer.py \
--topic "눈 성형" --market "강남" --output strategy.json
```

View File

@@ -20,10 +20,10 @@ Generate SEO-optimized gateway pages from templates.
```bash
# Generate with sample data
python custom-skills/30-seo-gateway-builder/code/scripts/generate_pages.py
# Custom configuration
python custom-skills/30-seo-gateway-builder/code/scripts/generate_pages.py \
--config config/services.json \
--locations config/locations.json \
  --output ./pages
```

View File

@@ -20,15 +20,15 @@ Google Search Console data retrieval and analysis.
```bash
# Get search performance
python custom-skills/15-seo-search-console/code/scripts/gsc_client.py \
--site https://example.com --days 28
# Query analysis
python custom-skills/15-seo-search-console/code/scripts/gsc_client.py \
--site https://example.com --report queries --limit 100
# Page performance
python custom-skills/15-seo-search-console/code/scripts/gsc_client.py \
--site https://example.com --report pages --output pages_report.json
```

View File

@@ -0,0 +1,59 @@
---
description: International SEO - hreflang validation, content parity, multi-language audit
---
# SEO International Audit
Multi-language and multi-region SEO audit with hreflang validation and content parity analysis.
## Triggers
- "international SEO", "hreflang", "multi-language SEO", "다국어 SEO"
## Capabilities
1. **Hreflang Validation** - Bidirectional links, self-referencing, x-default, ISO code checks
2. **URL Structure Analysis** - ccTLD vs subdomain vs subdirectory with recommendations
3. **Content Parity** - Page count, key page availability, freshness comparison across languages
4. **Language Detection** - HTML lang attribute, Content-Language header, actual content analysis
5. **Redirect Logic Audit** - IP-based and Accept-Language redirect behavior
6. **Korean Expansion** - Priority markets (ko->ja, ko->zh, ko->en), CJK URL encoding, Naver/Baidu considerations
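The bidirectional check in capability 1 reduces to a reciprocity test over the crawled annotation map; a sketch, assuming pages have already been fetched and their hreflang annotations extracted:

```python
# Sketch: bidirectional hreflang check. `pages` maps each URL to the
# hreflang annotations found on it ({lang_code: target_url}); real
# extraction comes from the crawler, not this example.
def find_missing_returns(pages: dict) -> list:
    errors = []
    for url, annotations in pages.items():
        for lang, target in annotations.items():
            if target == url:
                continue  # self-reference, checked separately
            back = pages.get(target, {})
            # every alternate must link back to the referring URL
            if url not in back.values():
                errors.append((url, lang, target))
    return errors

pages = {
    "https://ex.com/": {"en": "https://ex.com/", "ko": "https://ex.com/ko/"},
    "https://ex.com/ko/": {"ko": "https://ex.com/ko/"},  # no return link to /
}
errors = find_missing_returns(pages)
```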
## Scripts
```bash
# Hreflang validation
python custom-skills/26-seo-international/code/scripts/hreflang_validator.py \
--url https://example.com --json
# With sitemap-based discovery
python custom-skills/26-seo-international/code/scripts/hreflang_validator.py \
--url https://example.com --sitemap https://example.com/sitemap.xml --json
# Check specific pages from file
python custom-skills/26-seo-international/code/scripts/hreflang_validator.py \
--urls-file pages.txt --json
# Full international audit
python custom-skills/26-seo-international/code/scripts/international_auditor.py \
--url https://example.com --json
# URL structure analysis only
python custom-skills/26-seo-international/code/scripts/international_auditor.py \
--url https://example.com --scope structure --json
# Content parity check only
python custom-skills/26-seo-international/code/scripts/international_auditor.py \
--url https://example.com --scope parity --json
# Korean expansion focus
python custom-skills/26-seo-international/code/scripts/international_auditor.py \
--url https://example.com --korean-expansion --json
```
## Output
- Hreflang error report (missing bidirectional, self-reference, x-default)
- URL structure recommendation
- Content parity matrix across languages (page count, freshness)
- Redirect logic assessment (forced vs suggested)
- International SEO score
- Reports saved to Notion SEO Audit Log (Category: International SEO)

View File

@@ -0,0 +1,50 @@
---
description: Keyword strategy and research for SEO campaigns
---
# SEO Keyword Strategy
Keyword expansion, intent classification, clustering, and competitor gap analysis. Supports Korean market with Naver autocomplete.
## Triggers
- "keyword research", "keyword strategy", "키워드 리서치"
## Capabilities
1. **Keyword Expansion** - Seed keyword expansion with matching, related, and suggested terms
2. **Intent Classification** - Classify keywords as informational, navigational, commercial, or transactional
3. **Topic Clustering** - Group keywords into topic clusters with volume aggregation
4. **Korean Suffix Expansion** - Expand with Korean suffixes (추천, 가격, 후기, 잘하는곳, 부작용, 전후)
5. **Volume Comparison** - Compare search volume across Korea vs global markets
6. **Keyword Gap Analysis** - Find keywords competitors rank for but target does not
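Capabilities 4 and 6 can be sketched as list expansion and set difference (suffixes mirror the list above; real volume data comes from the data source, not this example):

```python
# Sketch: Korean suffix expansion plus a simple keyword gap by set
# difference. Keyword sets here are illustrative placeholders.
SUFFIXES = ["추천", "가격", "후기", "잘하는곳", "부작용", "전후"]

def expand_korean(seed: str) -> list:
    return [seed] + [f"{seed} {s}" for s in SUFFIXES]

def keyword_gap(target_kws: set, competitor_kws: set) -> set:
    # keywords the competitor ranks for but the target does not
    return competitor_kws - target_kws

expanded = expand_korean("치과 임플란트")
gap = keyword_gap({"임플란트"}, {"임플란트", "임플란트 가격", "임플란트 후기"})
```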
## Scripts
```bash
# Basic keyword research
python custom-skills/19-seo-keyword-strategy/code/scripts/keyword_researcher.py \
--keyword "치과 임플란트" --country kr --json
# Korean market with suffix expansion
python custom-skills/19-seo-keyword-strategy/code/scripts/keyword_researcher.py \
--keyword "치과 임플란트" --country kr --korean-suffixes --json
# Volume comparison Korea vs global
python custom-skills/19-seo-keyword-strategy/code/scripts/keyword_researcher.py \
--keyword "dental implant" --country kr --compare-global --json
# Keyword gap vs competitor
python custom-skills/19-seo-keyword-strategy/code/scripts/keyword_gap_analyzer.py \
--target https://example.com --competitor https://competitor.com --json
# Multiple competitors with minimum volume filter
python custom-skills/19-seo-keyword-strategy/code/scripts/keyword_gap_analyzer.py \
--target https://example.com --competitor https://comp1.com \
--competitor https://comp2.com --min-volume 100 --json
```
## Output
- Keyword list with volume, keyword difficulty (KD), CPC, intent, and cluster assignment
- Topic clusters with aggregated volume
- Gap keywords with opportunity scores
- Reports saved to Notion SEO Audit Log (Category: Keyword Research, ID: KW-YYYYMMDD-NNN)

View File

@@ -0,0 +1,57 @@
---
description: Knowledge Graph & Entity SEO - Knowledge Panel, PAA, FAQ rich results
---
# SEO Knowledge Graph
Entity SEO analysis for Knowledge Panel presence, People Also Ask monitoring, and FAQ rich results tracking.
## Triggers
- "knowledge graph", "entity SEO", "Knowledge Panel", "PAA monitoring"
## Capabilities
1. **Knowledge Panel Detection** - Check entity presence in Google Knowledge Graph
2. **Entity Attribute Analysis** - Name, type, description, logo, social profiles, completeness score
3. **Wikipedia/Wikidata Check** - Article and QID presence verification
4. **Naver Presence** - Encyclopedia and knowledge iN (지식iN) coverage
5. **PAA Monitoring** - People Also Ask tracking for brand queries
6. **FAQ Rich Results** - FAQPage schema presence and SERP appearance tracking
7. **Entity Markup Audit** - Organization/Person/LocalBusiness schema and sameAs validation
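The completeness score in capability 2 is essentially a presence ratio over the tracked attributes; a sketch with an assumed attribute list:

```python
# Sketch: entity attribute completeness score (0-100). The attribute
# list mirrors the capability above; real data would come from the
# Knowledge Graph lookup, not this hand-built dict.
ATTRIBUTES = ["name", "type", "description", "logo", "social_profiles"]

def completeness_score(entity: dict) -> int:
    present = sum(1 for a in ATTRIBUTES if entity.get(a))
    return round(100 * present / len(ATTRIBUTES))

score = completeness_score({
    "name": "Samsung Electronics",
    "type": "Corporation",
    "description": "Electronics company",
    "logo": None,            # missing
    "social_profiles": [],   # empty counts as missing
})
```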
## Scripts
```bash
# Knowledge Graph analysis
python custom-skills/28-seo-knowledge-graph/code/scripts/knowledge_graph_analyzer.py \
--entity "Samsung Electronics" --json
# Korean entity check
python custom-skills/28-seo-knowledge-graph/code/scripts/knowledge_graph_analyzer.py \
--entity "삼성전자" --language ko --json
# Include Wikipedia/Wikidata check
python custom-skills/28-seo-knowledge-graph/code/scripts/knowledge_graph_analyzer.py \
--entity "Samsung" --wiki --json
# Full entity SEO audit
python custom-skills/28-seo-knowledge-graph/code/scripts/entity_auditor.py \
--url https://example.com --entity "Brand Name" --json
# PAA monitoring
python custom-skills/28-seo-knowledge-graph/code/scripts/entity_auditor.py \
--url https://example.com --entity "Brand Name" --paa --json
# FAQ rich result tracking
python custom-skills/28-seo-knowledge-graph/code/scripts/entity_auditor.py \
--url https://example.com --entity "Brand Name" --faq --json
```
## Output
- Knowledge Panel detection with attribute completeness score
- Wikipedia/Wikidata presence status
- Naver encyclopedia and knowledge iN coverage
- PAA questions list for brand keywords
- FAQ rich result tracking
- Entity schema audit (Organization, sameAs links)
- Reports saved to Notion SEO Audit Log (Category: Knowledge Graph & Entity SEO)

View File

@@ -0,0 +1,63 @@
---
description: SEO KPI framework - unified metrics, health scores, ROI estimation
---
# SEO KPI Framework
Unified KPI aggregation across all SEO dimensions with health scores, baselines, and ROI estimation.
## Triggers
- "SEO KPI", "SEO performance", "health score", "SEO ROI"
## Capabilities
1. **KPI Aggregation** - Unified metrics across 7 dimensions (traffic, rankings, engagement, technical, content, links, local)
2. **Health Score** - Weighted 0-100 score with trend indicators
3. **Baseline & Targets** - Establish baselines and set 30/60/90-day targets
4. **Performance Reporting** - Period-over-period comparison (MoM, QoQ, YoY)
5. **Executive Summary** - Top wins, concerns, and recommendations
6. **ROI Estimation** - Organic traffic cost valuation
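The trend indicators in capability 2 can be sketched as a banded comparison against the saved baseline (the ±2-point stability band is an assumption, not the script's actual rule):

```python
# Sketch: per-dimension trend indicator against a saved baseline.
def trend(current: float, baseline: float, band: float = 2.0) -> str:
    delta = current - baseline
    if delta > band:
        return "up"
    if delta < -band:
        return "down"
    return "stable"

# Illustrative (current, baseline) pairs per dimension.
trends = {dim: trend(cur, base) for dim, (cur, base) in {
    "traffic": (72.0, 65.0),
    "technical": (81.0, 82.5),
    "links": (40.0, 48.0),
}.items()}
```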
## Scripts
```bash
# Aggregate KPIs
python custom-skills/25-seo-kpi-framework/code/scripts/kpi_aggregator.py \
--url https://example.com --json
# Set baseline
python custom-skills/25-seo-kpi-framework/code/scripts/kpi_aggregator.py \
--url https://example.com --set-baseline --json
# Compare against baseline
python custom-skills/25-seo-kpi-framework/code/scripts/kpi_aggregator.py \
--url https://example.com --baseline baseline.json --json
# With ROI estimation
python custom-skills/25-seo-kpi-framework/code/scripts/kpi_aggregator.py \
--url https://example.com --roi --json
# Monthly performance report
python custom-skills/25-seo-kpi-framework/code/scripts/performance_reporter.py \
--url https://example.com --period monthly --json
# Quarterly report
python custom-skills/25-seo-kpi-framework/code/scripts/performance_reporter.py \
--url https://example.com --period quarterly --json
# Custom date range
python custom-skills/25-seo-kpi-framework/code/scripts/performance_reporter.py \
--url https://example.com --from 2025-01-01 --to 2025-03-31 --json
# Executive summary only
python custom-skills/25-seo-kpi-framework/code/scripts/performance_reporter.py \
--url https://example.com --period monthly --executive --json
```
## Output
- Unified KPI dashboard with health score (0-100)
- 7-dimension breakdown (traffic, rankings, engagement, technical, content, links, local)
- Trend indicators (up/down/stable) per dimension
- 30/60/90-day targets with progress tracking
- Executive summary with top wins and concerns
- Reports saved to Notion SEO Audit Log (Category: SEO KPI & Performance)

View File

@@ -0,0 +1,58 @@
---
description: Backlink audit, toxic link detection, and link gap analysis
---
# SEO Link Building
Backlink profile analysis, toxic link detection, competitor link gap identification, and Korean platform link mapping.
## Triggers
- "backlink audit", "link building", "링크 분석"
## Capabilities
1. **Backlink Profile Audit** - DR, referring domains, dofollow ratio
2. **Anchor Text Distribution** - Branded, exact-match, partial-match, generic, naked URL breakdown
3. **Toxic Link Detection** - PBN patterns, spammy domains, link farm identification
4. **Link Velocity Tracking** - New and lost referring domains over time
5. **Broken Backlink Recovery** - Find broken backlinks for reclamation
6. **Korean Platform Mapping** - Naver Blog, Naver Cafe, Tistory, Brunch link analysis
7. **Link Gap Analysis** - Find domains linking to competitors but not target
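A sketch of the anchor-text bucketing behind capability 2, with illustrative brand and keyword inputs (the auditor derives these from the actual profile):

```python
# Sketch: classify one anchor into the five buckets listed above.
def classify_anchor(anchor: str, brand: str, keyword: str) -> str:
    a = anchor.lower().strip()
    if a.startswith(("http://", "https://", "www.")):
        return "naked_url"
    if brand.lower() in a:
        return "branded"
    if a == keyword.lower():
        return "exact_match"
    if keyword.lower() in a:
        return "partial_match"
    return "generic"

labels = [classify_anchor(a, "Acme", "seo audit") for a in
          ["Acme Clinic", "seo audit", "best seo audit tool",
           "click here", "https://acme.example.com"]]
```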
## Scripts
```bash
# Full backlink audit
python custom-skills/22-seo-link-building/code/scripts/backlink_auditor.py \
--url https://example.com --json
# Check link velocity
python custom-skills/22-seo-link-building/code/scripts/backlink_auditor.py \
--url https://example.com --velocity --json
# Find broken backlinks for recovery
python custom-skills/22-seo-link-building/code/scripts/backlink_auditor.py \
--url https://example.com --broken --json
# Korean platform link analysis
python custom-skills/22-seo-link-building/code/scripts/backlink_auditor.py \
--url https://example.com --korean-platforms --json
# Link gap vs competitor
python custom-skills/22-seo-link-building/code/scripts/link_gap_finder.py \
--target https://example.com --competitor https://comp1.com --json
# Multiple competitors with minimum DR filter
python custom-skills/22-seo-link-building/code/scripts/link_gap_finder.py \
--target https://example.com --competitor https://comp1.com \
--competitor https://comp2.com --min-dr 30 --json
```
## Output
- Domain Rating, backlink stats, dofollow ratio
- Anchor text distribution percentages
- Toxic link list with detection reason
- Link velocity (new/lost last 30 days)
- Korean platform backlink counts
- Gap domains scored by DR, traffic, and relevance
- Reports saved to Notion SEO Audit Log (Category: Link Building, ID: LINK-YYYYMMDD-NNN)

View File

@@ -0,0 +1,57 @@
---
description: Site migration planning and post-migration monitoring
---
# SEO Migration Planner
Pre-migration risk assessment, redirect mapping, and post-migration traffic/indexation monitoring.
## Triggers
- "site migration", "domain move", "사이트 이전"
## Capabilities
1. **URL Inventory** - Full URL capture via Firecrawl crawl with status codes
2. **Traffic Baseline** - Per-page traffic and keyword baseline via our-seo-agent
3. **Redirect Mapping** - Old URL to new URL mapping with per-URL risk scoring
4. **Risk Assessment** - Per-URL risk based on traffic, backlinks, keyword rankings
5. **Pre-Migration Checklist** - Automated checklist generation
6. **Post-Migration Monitoring** - Traffic comparison, redirect health, indexation tracking
7. **Migration Types** - Domain move, platform change, URL restructure, HTTPS, subdomain consolidation
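The per-URL risk score in capability 4 can be sketched as a weighted blend of normalized traffic, backlink, and ranking signals (the weights and thresholds here are assumptions, not the planner's actual model):

```python
# Sketch: per-URL migration risk from traffic, backlinks, and the
# number of ranking keywords. Each signal is capped at 1.0.
def risk_level(traffic: int, backlinks: int, ranking_kws: int) -> str:
    score = (0.5 * min(traffic / 1000, 1)
             + 0.3 * min(backlinks / 50, 1)
             + 0.2 * min(ranking_kws / 20, 1))
    if score >= 0.6:
        return "high"
    if score >= 0.3:
        return "medium"
    return "low"

levels = {
    "/top-page": risk_level(5000, 120, 40),  # heavy traffic and links
    "/mid-page": risk_level(400, 10, 5),
    "/tail-page": risk_level(10, 0, 1),
}
```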
## Scripts
```bash
# Domain move planning
python custom-skills/33-seo-migration-planner/code/scripts/migration_planner.py \
--domain https://example.com --type domain-move --new-domain https://new-example.com --json
# Platform migration (e.g., WordPress to headless)
python custom-skills/33-seo-migration-planner/code/scripts/migration_planner.py \
--domain https://example.com --type platform --json
# URL restructuring
python custom-skills/33-seo-migration-planner/code/scripts/migration_planner.py \
--domain https://example.com --type url-restructure --json
# HTTPS migration
python custom-skills/33-seo-migration-planner/code/scripts/migration_planner.py \
--domain http://example.com --type https --json
# Post-launch traffic comparison
python custom-skills/33-seo-migration-planner/code/scripts/migration_monitor.py \
--domain https://new-example.com --migration-date 2025-01-15 --baseline baseline.json --json
# Quick redirect health check
python custom-skills/33-seo-migration-planner/code/scripts/migration_monitor.py \
--domain https://new-example.com --migration-date 2025-01-15 --json
```
## Output
- URL inventory with traffic/keyword baselines
- Redirect map (source -> target, status code, priority)
- Risk assessment (high/medium/low risk URL counts, overall risk level)
- Pre-migration checklist
- Post-migration: traffic delta, broken redirects, ranking changes, recovery timeline
- Alerts for traffic drops >20%
- Saved to Notion SEO Audit Log (Category: SEO Migration, Audit ID: MIGR-YYYYMMDD-NNN)

View File

@@ -20,11 +20,11 @@ On-page SEO analysis for meta tags, headings, content, and links.
```bash
# Full page analysis
python custom-skills/13-seo-on-page-audit/code/scripts/page_analyzer.py \
--url https://example.com/page
# Multiple pages
python custom-skills/13-seo-on-page-audit/code/scripts/page_analyzer.py \
--urls urls.txt --output report.json
```

View File

@@ -0,0 +1,55 @@
---
description: Keyword rank monitoring with visibility scores and alerts
---
# SEO Position Tracking
Monitor keyword rankings, detect position changes with threshold alerts, and calculate visibility scores.
## Triggers
- "rank tracking", "position monitoring", "순위 추적"
## Capabilities
1. **Position Tracking** - Retrieve current ranking positions for tracked keywords
2. **Change Detection** - Detect position changes with configurable threshold alerts
3. **Visibility Scoring** - Calculate visibility scores weighted by search volume
4. **Brand/Non-Brand Segments** - Segment keywords into brand vs non-brand
5. **Competitor Comparison** - Compare rank positions against competitors
6. **Ranking Reports** - Period-over-period trend analysis with top movers
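Capability 3's visibility score is a volume-weighted estimate of captured clicks; a sketch with an assumed CTR-by-position curve (the tracker's real curve may differ):

```python
# Sketch: volume-weighted visibility score. The CTR table is an assumed
# approximation; unranked keywords (position None) capture nothing.
CTR = {1: 0.32, 2: 0.16, 3: 0.10, 4: 0.07, 5: 0.06}

def visibility(rankings: list) -> float:
    # rankings: (position, monthly_volume) pairs
    total = sum(vol for _, vol in rankings)
    captured = sum(CTR.get(pos, 0.01 if pos and pos <= 20 else 0) * vol
                   for pos, vol in rankings)
    return round(100 * captured / total, 1) if total else 0.0

score = visibility([(1, 1000), (4, 500), (None, 300)])
```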
## Scripts
```bash
# Get current positions
python custom-skills/21-seo-position-tracking/code/scripts/position_tracker.py \
--target https://example.com --json
# With change threshold alerts (flag moves of ±5 positions or more)
python custom-skills/21-seo-position-tracking/code/scripts/position_tracker.py \
--target https://example.com --threshold 5 --json
# Filter by brand segment
python custom-skills/21-seo-position-tracking/code/scripts/position_tracker.py \
--target https://example.com --segment brand --json
# Compare with competitor
python custom-skills/21-seo-position-tracking/code/scripts/position_tracker.py \
--target https://example.com --competitor https://comp1.com --json
# 30-day ranking report
python custom-skills/21-seo-position-tracking/code/scripts/ranking_reporter.py \
--target https://example.com --period 30 --json
# Quarterly report with competitor comparison
python custom-skills/21-seo-position-tracking/code/scripts/ranking_reporter.py \
--target https://example.com --competitor https://comp1.com --period 90 --json
```
## Output
- Position distribution (top 3/10/20/50/100)
- Change summary (improved, declined, stable, new, lost)
- Threshold alerts for significant position changes
- Visibility score and trend over time
- Brand vs non-brand segment breakdown
- Reports saved to Notion SEO Audit Log (Category: Position Tracking, ID: RANK-YYYYMMDD-NNN)

View File

@@ -0,0 +1,59 @@
---
description: SEO reporting dashboard and executive reports
---
# SEO Reporting Dashboard
Aggregate all SEO skill outputs into executive reports and interactive HTML dashboards.
## Triggers
- "SEO report", "SEO dashboard", "보고서"
## Capabilities
1. **Report Aggregation** - Collect and normalize outputs from skills 11-33 into unified structure
2. **Cross-Skill Health Score** - Weighted scores across technical, on-page, performance, content, links, keywords
3. **HTML Dashboard** - Self-contained Chart.js dashboard with gauge, line, bar, pie, and radar charts
4. **Executive Report** - Korean-language summaries tailored to audience (C-level, marketing, technical)
5. **Priority Issues** - Top issues ranked across all audit dimensions
6. **Trend Analysis** - Period-over-period comparison narrative with audit timeline
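The cross-skill health score in capability 2 is a weighted average over category scores, with missing categories dropped and the weights renormalized; a sketch with illustrative weights (not the script's actual CATEGORY_WEIGHTS table):

```python
# Sketch: weighted overall health score across audit categories.
WEIGHTS = {"technical": 0.25, "on_page": 0.20, "performance": 0.20,
           "content": 0.15, "links": 0.10, "keywords": 0.10}

def overall_score(category_scores: dict) -> float:
    # drop categories with no audit data and renormalize the weights
    present = {c: s for c, s in category_scores.items()
               if s is not None and c in WEIGHTS}
    total_w = sum(WEIGHTS[c] for c in present)
    return round(sum(WEIGHTS[c] * s for c, s in present.items()) / total_w, 1)

score = overall_score({"technical": 80, "on_page": 70,
                       "performance": 60, "links": None})
```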
## Scripts
```bash
# Aggregate all skill outputs for a domain
python custom-skills/34-seo-reporting-dashboard/code/scripts/report_aggregator.py \
--domain https://example.com --json
# Aggregate with date range filter
python custom-skills/34-seo-reporting-dashboard/code/scripts/report_aggregator.py \
--domain https://example.com --from 2025-01-01 --to 2025-03-31 --json
# Generate HTML dashboard
python custom-skills/34-seo-reporting-dashboard/code/scripts/dashboard_generator.py \
--report aggregated_report.json --output dashboard.html
# C-level executive summary (Korean)
python custom-skills/34-seo-reporting-dashboard/code/scripts/executive_report.py \
--report aggregated_report.json --audience c-level --output report.md
# Marketing team report
python custom-skills/34-seo-reporting-dashboard/code/scripts/executive_report.py \
--report aggregated_report.json --audience marketing --output report.md
# Technical team report
python custom-skills/34-seo-reporting-dashboard/code/scripts/executive_report.py \
--report aggregated_report.json --audience technical --output report.md
```
## Output
- Aggregated JSON report with overall health score, category scores, top issues/wins
- Self-contained HTML dashboard (responsive, no external dependencies except Chart.js CDN)
- Korean executive summary in Markdown (tailored by audience level)
- Saved to Notion SEO Audit Log (Category: SEO Dashboard, Audit ID: DASH-YYYYMMDD-NNN)
## Workflow
1. Run audits with individual skills (11-33)
2. Aggregate with `report_aggregator.py`
3. Generate dashboard and/or executive report
4. Share HTML dashboard or Markdown report with stakeholders

View File

@@ -19,11 +19,11 @@ Generate JSON-LD structured data markup from templates.
```bash
# Generate from template
python custom-skills/17-seo-schema-generator/code/scripts/schema_generator.py \
--type LocalBusiness --output schema.json
# With custom data
python custom-skills/17-seo-schema-generator/code/scripts/schema_generator.py \
--type Article \
--data '{"headline": "My Article", "author": "John Doe"}' \
  --output article-schema.json
```

View File

@@ -20,15 +20,15 @@ JSON-LD structured data validation and analysis.
```bash
# Validate page schema
python custom-skills/16-seo-schema-validator/code/scripts/schema_validator.py \
--url https://example.com
# Validate local file
python custom-skills/16-seo-schema-validator/code/scripts/schema_validator.py \
--file schema.json
# Batch validation
python custom-skills/16-seo-schema-validator/code/scripts/schema_validator.py \
--urls urls.txt --output validation_report.json
```

View File

@@ -0,0 +1,46 @@
---
description: Google and Naver SERP feature detection and competitor mapping
---
# SEO SERP Analysis
Detect SERP features, map competitor positions, and score feature opportunities for Google and Naver.
## Triggers
- "SERP analysis", "SERP features", "검색 결과 분석"
## Capabilities
1. **Google SERP Feature Detection** - Featured snippet, PAA, knowledge panel, local pack, video carousel, ads, image pack, site links
2. **Competitor Position Mapping** - Map competitor domains and positions per keyword
3. **Content Type Distribution** - Analyze content types in results (blog, product, service, news, video)
4. **Opportunity Scoring** - Score SERP feature opportunities for target site
5. **Intent Validation** - Validate search intent from SERP composition
6. **Naver SERP Analysis** - Section detection (블로그, 카페, 지식iN, 스마트스토어, 브랜드존, 숏폼, 인플루언서)
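One way to sketch the opportunity scoring in capability 4: treat every feature present on the SERP but not owned by the target as an opportunity, weighted by feature value (the weights are illustrative, not the analyzer's actual table):

```python
# Sketch: SERP feature opportunity score for a target domain.
FEATURE_WEIGHTS = {"featured_snippet": 3, "paa": 2, "local_pack": 2,
                   "video_carousel": 1, "image_pack": 1}

def opportunity_score(serp_features: dict, target: str) -> int:
    # serp_features: feature -> owning domain, or None if absent
    return sum(w for f, w in FEATURE_WEIGHTS.items()
               if serp_features.get(f) not in (None, target))

score = opportunity_score(
    {"featured_snippet": "competitor.com", "paa": "other.com",
     "local_pack": None, "video_carousel": "example.com"},
    target="example.com",
)
```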
## Scripts
```bash
# Google SERP analysis
python custom-skills/20-seo-serp-analysis/code/scripts/serp_analyzer.py \
--keyword "치과 임플란트" --country kr --json
# Multiple keywords from file
python custom-skills/20-seo-serp-analysis/code/scripts/serp_analyzer.py \
--keywords-file keywords.txt --country kr --json
# Naver SERP analysis
python custom-skills/20-seo-serp-analysis/code/scripts/naver_serp_analyzer.py \
--keyword "치과 임플란트" --json
# Naver multiple keywords
python custom-skills/20-seo-serp-analysis/code/scripts/naver_serp_analyzer.py \
--keywords-file keywords.txt --json
```
## Output
- SERP feature presence map with ad counts
- Competitor positions with domain, URL, title, and content type
- Opportunity score and intent signals
- Naver section priority mapping and content type distribution
- Reports saved to Notion SEO Audit Log (Category: SERP Analysis, ID: SERP-YYYYMMDD-NNN)

View File

@@ -19,15 +19,15 @@ Technical SEO audit for robots.txt and sitemap validation.
```bash
# Check robots.txt
python custom-skills/12-seo-technical-audit/code/scripts/robots_checker.py \
--url https://example.com
# Validate sitemap
python custom-skills/12-seo-technical-audit/code/scripts/sitemap_validator.py \
--url https://example.com/sitemap.xml
# Crawl sitemap URLs
python custom-skills/12-seo-technical-audit/code/scripts/sitemap_crawler.py \
--sitemap https://example.com/sitemap.xml --output report.json
```

View File

@@ -20,15 +20,15 @@ Google PageSpeed Insights and Core Web Vitals analysis.
```bash
# Analyze single URL
python custom-skills/14-seo-core-web-vitals/code/scripts/pagespeed_client.py \
--url https://example.com
# Mobile and desktop
python custom-skills/14-seo-core-web-vitals/code/scripts/pagespeed_client.py \
--url https://example.com --strategy both
# Batch analysis
python custom-skills/14-seo-core-web-vitals/code/scripts/pagespeed_client.py \
--urls urls.txt --output vitals_report.json
```