# SEO Technical Audit

## Purpose

Analyze crawlability fundamentals: robots.txt rules, XML sitemap structure, and URL accessibility. Identify issues blocking search engine crawlers.

## Core Capabilities

1. **Robots.txt Analysis** - Parse rules, check for blocked resources
2. **Sitemap Validation** - Verify XML structure, URL limits, and `lastmod` dates
3. **URL Accessibility** - Check HTTP status codes, redirects, and broken links

## MCP Tool Usage

### Firecrawl for Page Data

- `mcp__firecrawl__scrape` - Fetch robots.txt and sitemap content
- `mcp__firecrawl__crawl` - Check accessibility across multiple URLs

### Perplexity for Best Practices

- `mcp__perplexity__search` - Research current SEO recommendations

## Workflow

### 1. Robots.txt Check

1. Fetch `[domain]/robots.txt` using Firecrawl
2. Parse User-agent rules and Disallow patterns
3. Identify blocked resources (CSS, JS, images)
4. Check for Sitemap declarations
5. Report critical issues (a parsing sketch follows this list)
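
A minimal parsing sketch in Python, stdlib only. It is simplified on purpose: it does not merge consecutive `User-agent` lines into shared rule groups the way the robots.txt spec does, and the asset-path heuristics (`.css`, `.js`, `/static`, `/assets`) are illustrative assumptions, not part of this skill:

```python
import urllib.request

def parse_robots(domain: str) -> dict:
    """Fetch and parse robots.txt: Disallow patterns per agent, Sitemap lines,
    and a rough list of disallowed paths that look like CSS/JS assets."""
    url = f"https://{domain}/robots.txt"
    text = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
    rules, sitemaps, agent = {}, [], None
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # strip comments and whitespace
        if ":" not in line:
            continue
        field, _, value = line.partition(":")
        field, value = field.strip().lower(), value.strip()
        if field == "user-agent":
            agent = value
            rules.setdefault(agent, [])
        elif field == "disallow" and agent and value:
            rules[agent].append(value)
        elif field == "sitemap":
            sitemaps.append(value)
    # Heuristic only: flag Disallow patterns that likely block CSS/JS assets
    blocked_assets = [p for patterns in rules.values() for p in patterns
                      if any(hint in p for hint in (".css", ".js", "/static", "/assets"))]
    return {"rules": rules, "sitemaps": sitemaps, "blocked_assets": blocked_assets}
```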

### 2. Sitemap Validation

1. Locate the sitemap (from robots.txt or `/sitemap.xml`)
2. Validate XML syntax
3. Check URL count (max 50,000 per sitemap file)
4. Verify `lastmod` date formats (W3C Datetime)
5. For a sitemap index, parse each child sitemap (see the sketch below)
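
A validation sketch in Python using only the standard library; the 50,000-URL cap comes from the sitemaps.org protocol. Assumptions: error handling is minimal, and gzipped sitemaps (`.xml.gz`) are not handled:

```python
import urllib.request
import xml.etree.ElementTree as ET
from datetime import datetime

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def validate_sitemap(url: str) -> dict:
    """Parse a sitemap, count URLs against the 50,000 limit, flag bad lastmod values."""
    xml_bytes = urllib.request.urlopen(url, timeout=10).read()
    try:
        root = ET.fromstring(xml_bytes)
    except ET.ParseError as exc:
        return {"valid": False, "error": str(exc)}
    # A sitemap index lists child sitemaps instead of page URLs
    if root.tag.endswith("sitemapindex"):
        children = [loc.text for loc in root.findall(".//sm:loc", NS)]
        return {"valid": True, "index": True, "children": children}
    entries = root.findall("sm:url", NS)
    issues = []
    if len(entries) > 50_000:
        issues.append(f"URL count {len(entries)} exceeds the 50,000 limit")
    for entry in entries:
        lastmod = entry.find("sm:lastmod", NS)
        if lastmod is None:
            continue
        try:
            # W3C Datetime allows date-only or full timestamps
            datetime.fromisoformat((lastmod.text or "").replace("Z", "+00:00"))
        except ValueError:
            issues.append(f"Bad lastmod value: {lastmod.text!r}")
    return {"valid": True, "index": False, "url_count": len(entries), "issues": issues}
```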

### 3. URL Accessibility Sampling

1. Extract URLs from the sitemap
2. Sample 50-100 URLs for large sites
3. Check HTTP status codes
4. Identify redirects and broken links
5. Report 4xx/5xx errors (a status-check sketch follows this list)
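
A status-check sketch in Python, stdlib only. The redirect handler surfaces 3xx responses instead of following them; the 10-second timeout, the default sample size of 50, and the use of HEAD requests are assumptions:

```python
import random
import urllib.error
import urllib.request

class NoRedirect(urllib.request.HTTPRedirectHandler):
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None  # surface 3xx responses instead of following them

def check_urls(urls, sample_size=50):
    """Sample sitemap URLs and bucket them by HTTP status class."""
    sample = random.sample(urls, min(sample_size, len(urls)))
    opener = urllib.request.build_opener(NoRedirect)
    results = {"2xx": [], "3xx": [], "4xx/5xx": []}
    for url in sample:
        # Some servers reject HEAD; fall back to GET if results look off
        request = urllib.request.Request(url, method="HEAD")
        try:
            status = opener.open(request, timeout=10).status
        except urllib.error.HTTPError as exc:
            status = exc.code  # with NoRedirect, 3xx also lands here
        except urllib.error.URLError:
            results["4xx/5xx"].append((url, "unreachable"))
            continue
        bucket = "2xx" if status < 300 else "3xx" if status < 400 else "4xx/5xx"
        results[bucket].append((url, status))
    return results
```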

## Output Format

```markdown
## Technical SEO Audit: [domain]

### Robots.txt Analysis
- Status: [Valid/Invalid/Missing]
- Sitemap declared: [Yes/No]
- Critical blocks: [List]

### Sitemap Validation
- URLs found: [count]
- Syntax: [Valid/Errors]
- Issues: [List]

### URL Accessibility (sampled)
- Checked: [count] URLs
- Success (2xx): [count]
- Redirects (3xx): [count]
- Errors (4xx/5xx): [count]

### Recommendations
1. [Priority fixes]
```

## Common Issues

| Issue | Impact | Fix |
|-------|--------|-----|
| No sitemap in robots.txt | Medium | Add a `Sitemap:` directive |
| Blocking CSS/JS | High | Allow Googlebot access |
| 404s in sitemap | High | Remove or fix URLs |
| Missing `lastmod` | Low | Add dates for freshness signals |
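
For illustration, a robots.txt fragment that applies the first two fixes; the asset paths and domain are placeholders, not values prescribed by this skill:

```
User-agent: Googlebot
Allow: /assets/css/
Allow: /assets/js/

Sitemap: https://example.com/sitemap.xml
```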

## Limitations

- Cannot access password-protected sitemaps
- Large sitemaps (10,000+ URLs) require sampling
- Does not check render-blocking issues (use the Core Web Vitals skill)

## Notion Output (Required)

All audit reports MUST be saved to the OurDigital SEO Audit Log:

- Database ID: `2c8581e5-8a1e-8035-880b-e38cefc2f3ef`
- Properties: Issue (title), Site (url), Category, Priority, Found Date, Audit ID
- Language: Korean with English technical terms
- Audit ID format: `[TYPE]-YYYYMMDD-NNN` (an ID-builder sketch follows)
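
A minimal ID-builder sketch in Python. The `seq` counter and the `ROBOTS` type code in the docstring are assumptions; this skill does not define the set of TYPE values:

```python
from datetime import date

def audit_id(audit_type: str, seq: int) -> str:
    """Build an ID matching [TYPE]-YYYYMMDD-NNN, e.g. ROBOTS-20260131-001."""
    return f"{audit_type.upper()}-{date.today():%Y%m%d}-{seq:03d}"
```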