---
name: seo-technical-audit
version: 1.0.0
description: Technical SEO auditor for crawlability fundamentals. Triggers: robots.txt, sitemap validation, crawlability, indexing check, technical SEO.
allowed-tools: mcp__firecrawl__*, mcp__perplexity__*, mcp__notion__*
---
# SEO Technical Audit
## Purpose
Analyze crawlability fundamentals: robots.txt rules, XML sitemap structure, and URL accessibility. Identify issues blocking search engine crawlers.
## Core Capabilities

- **Robots.txt Analysis** - Parse rules, check blocked resources
- **Sitemap Validation** - Verify XML structure, URL limits, dates
- **URL Accessibility** - Check HTTP status, redirects, broken links
## MCP Tool Usage

### Firecrawl for Page Data
- `mcp__firecrawl__scrape`: Fetch robots.txt and sitemap content
- `mcp__firecrawl__crawl`: Check accessibility of multiple URLs

### Perplexity for Best Practices
- `mcp__perplexity__search`: Research current SEO recommendations
## Workflow
### 1. Robots.txt Check
- Fetch `[domain]/robots.txt` using Firecrawl
- Parse User-agent rules and Disallow patterns
- Identify blocked resources (CSS, JS, images)
- Check for Sitemap declarations
- Report critical issues (a minimal parsing sketch follows this list)
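
The parsing logic for this step is simple enough to sketch in plain Python. This is a minimal illustration, assuming direct HTTP access rather than `mcp__firecrawl__scrape` (which the skill actually uses); `check_robots` and its return shape are hypothetical, and per-User-agent rule grouping is omitted for brevity.

```python
from urllib.request import urlopen

def check_robots(domain: str) -> dict:
    """Fetch and loosely parse robots.txt for the checks above."""
    url = f"https://{domain}/robots.txt"
    try:
        body = urlopen(url, timeout=10).read().decode("utf-8", "replace")
    except OSError:
        return {"status": "Missing", "sitemaps": [], "disallows": []}

    sitemaps, disallows = [], []
    for line in body.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments
        if ":" not in line:
            continue
        field, _, value = line.partition(":")
        field, value = field.strip().lower(), value.strip()
        if field == "sitemap":
            sitemaps.append(value)
        elif field == "disallow" and value:
            disallows.append(value)

    # Flag rules that may block assets Googlebot needs for rendering
    blocked_assets = [d for d in disallows
                      if any(t in d for t in (".css", ".js", "/assets", "/static"))]
    return {"status": "Valid", "sitemaps": sitemaps,
            "disallows": disallows, "blocked_assets": blocked_assets}
```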
### 2. Sitemap Validation
- Locate sitemap (from robots.txt or `/sitemap.xml`)
- Validate XML syntax
- Check URL count (max 50,000 per sitemap)
- Verify lastmod date formats
- For sitemap indexes: parse child sitemaps (sketched below)
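
A hedged sketch of these checks using only the standard library; in practice the skill fetches via Firecrawl, and `validate_sitemap` is an illustrative name, not part of the skill.

```python
import xml.etree.ElementTree as ET
from datetime import datetime
from urllib.request import urlopen

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def validate_sitemap(url: str) -> dict:
    xml = urlopen(url, timeout=10).read()
    try:
        root = ET.fromstring(xml)
    except ET.ParseError as e:
        return {"syntax": f"Errors: {e}"}

    # A sitemap index nests <sitemap> children instead of <url>
    if root.tag.endswith("sitemapindex"):
        children = [loc.text for loc in root.findall("sm:sitemap/sm:loc", NS)]
        return {"syntax": "Valid", "index": True, "child_sitemaps": children}

    urls = root.findall("sm:url", NS)
    issues = []
    if len(urls) > 50_000:
        issues.append(f"{len(urls)} URLs exceeds the 50,000 limit")
    for u in urls:
        lastmod = u.findtext("sm:lastmod", default=None, namespaces=NS)
        if lastmod:
            try:  # W3C datetime: YYYY-MM-DD or full ISO 8601
                datetime.fromisoformat(lastmod.replace("Z", "+00:00"))
            except ValueError:
                issues.append(f"bad lastmod: {lastmod}")
    return {"syntax": "Valid", "index": False,
            "url_count": len(urls), "issues": issues}
```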
### 3. URL Accessibility Sampling
- Extract URLs from sitemap
- Sample 50-100 URLs for large sites
- Check HTTP status codes
- Identify redirects and broken links
- Report 4xx/5xx errors (see the sketch below)
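
A sketch of the sampling pass, assuming direct HEAD requests are acceptable; the skill would normally batch this through `mcp__firecrawl__crawl`. The default sample size mirrors the 50-100 guideline above, and `sample_status` is a hypothetical helper. Note that `urlopen` follows redirects automatically, so redirects are detected by comparing the final URL to the requested one.

```python
import random
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def sample_status(urls: list[str], k: int = 100) -> dict:
    counts = {"2xx": 0, "3xx": 0, "4xx/5xx": 0, "unreachable": 0}
    errors = []
    for url in random.sample(urls, min(k, len(urls))):
        req = Request(url, method="HEAD")
        try:
            resp = urlopen(req, timeout=10)
        except HTTPError as e:
            counts["4xx/5xx"] += 1
            errors.append((url, e.code))
            continue
        except URLError:
            counts["unreachable"] += 1
            continue
        if resp.geturl() != url:
            counts["3xx"] += 1  # urlopen silently followed a redirect
        else:
            counts["2xx"] += 1
    return {"counts": counts, "errors": errors}
```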
## Output Format

```markdown
## Technical SEO Audit: [domain]

### Robots.txt Analysis
- Status: [Valid/Invalid/Missing]
- Sitemap declared: [Yes/No]
- Critical blocks: [List]

### Sitemap Validation
- URLs found: [count]
- Syntax: [Valid/Errors]
- Issues: [List]

### URL Accessibility (sampled)
- Checked: [count] URLs
- Success (2xx): [count]
- Redirects (3xx): [count]
- Errors (4xx/5xx): [count]

### Recommendations
1. [Priority fixes]
```
## Common Issues
| Issue | Impact | Fix |
|---|---|---|
| No sitemap in robots.txt | Medium | Add Sitemap: directive |
| Blocking CSS/JS | High | Allow Googlebot access |
| 404s in sitemap | High | Remove or fix URLs |
| Missing lastmod | Low | Add dates for freshness signals |
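
For reference, a robots.txt that applies the first two fixes above might look like this; the domain and paths are placeholders:

```txt
# Allow Googlebot to fetch rendering assets
User-agent: Googlebot
Allow: /*.css
Allow: /*.js

User-agent: *
Disallow: /admin/

# Declare the sitemap so crawlers can discover it
Sitemap: https://example.com/sitemap.xml
```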
## Limitations
- Cannot access password-protected sitemaps
- Large sitemaps (10,000+ URLs) require sampling
- Does not check render-blocking issues (use Core Web Vitals skill)