## Summary

- Add portable installation tool (`install.sh`) for cross-machine setup
- Add Claude.ai export files with proper YAML frontmatter
- Add multi-agent-guide v2.0 with consolidated framework template
- Rename `00-claude-code-setting` → `00-our-settings-audit` (avoid reserved word)
- Add YAML frontmatter to 25+ SKILL.md files for Claude Desktop compatibility

## Commits Included

- `93f604a` feat: Add portable installation tool for cross-machine setup
- `9b84104` feat: Add Claude.ai export for portable skill installation
- `f7ab973` fix: Add YAML frontmatter to Claude.ai export files
- `3fed49a` feat(multi-agent-guide): Add v2.0 with consolidated framework
- `3be26ef` refactor: Rename settings-audit skill and add YAML frontmatter

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
---
name: seo-technical-audit
description: Technical SEO analyzer for robots.txt, sitemap, and crawlability fundamentals. Triggers: technical SEO, robots.txt, sitemap validation, crawlability, URL accessibility.
---
# SEO Technical Audit
## Purpose
Analyze crawlability fundamentals: robots.txt rules, XML sitemap structure, and URL accessibility. Identify issues blocking search engine crawlers.
## Core Capabilities

- **Robots.txt Analysis**: Parse rules, check blocked resources
- **Sitemap Validation**: Verify XML structure, URL limits, dates
- **URL Accessibility**: Check HTTP status, redirects, broken links
## MCP Tool Usage

### Firecrawl for Page Data

- `mcp__firecrawl__scrape`: Fetch robots.txt and sitemap content
- `mcp__firecrawl__crawl`: Check accessibility of multiple URLs

### Perplexity for Best Practices

- `mcp__perplexity__search`: Research current SEO recommendations
## Workflow

### 1. Robots.txt Check

- Fetch `[domain]/robots.txt` using Firecrawl
- Parse User-agent rules and Disallow patterns
- Identify blocked resources (CSS, JS, images)
- Check for Sitemap declarations
- Report critical issues
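The parsing in step 1 can be sketched with Python's standard `urllib.robotparser` (a minimal illustration only: the `check_robots` helper name and the Googlebot user agent are assumptions, and in the skill itself the file is fetched via `mcp__firecrawl__scrape`):

```python
from urllib.robotparser import RobotFileParser

def check_robots(robots_txt: str, test_paths: list[str]) -> dict:
    """Parse robots.txt content; report blocked paths and Sitemap declarations."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    # Paths Googlebot may not fetch under the parsed Disallow rules
    blocked = [p for p in test_paths if not parser.can_fetch("Googlebot", p)]
    # Collect "Sitemap: <url>" declarations (case-insensitive key)
    sitemaps = [
        line.split(":", 1)[1].strip()
        for line in robots_txt.splitlines()
        if line.lower().startswith("sitemap:")
    ]
    return {"blocked": blocked, "sitemaps": sitemaps}
```

Checking representative asset paths (e.g. `/css/`, `/js/`) against the parsed rules is one way to surface the blocked-CSS/JS issue listed under Common Issues below.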
### 2. Sitemap Validation

- Locate the sitemap (from robots.txt or `/sitemap.xml`)
- Validate XML syntax
- Check URL count (max 50,000)
- Verify lastmod date formats
- For a sitemap index: parse child sitemaps
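The checks in step 2 can be sketched with the standard library's `xml.etree.ElementTree` (an illustrative helper, not the skill's implementation; the `validate_sitemap` name and returned fields are assumptions):

```python
import xml.etree.ElementTree as ET

# Namespace defined by the sitemaps.org protocol
SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def validate_sitemap(xml_text: str) -> dict:
    """Check XML syntax, the 50,000-URL limit, and lastmod presence."""
    try:
        root = ET.fromstring(xml_text)
    except ET.ParseError as exc:
        return {"valid": False, "error": str(exc)}
    # A sitemap index lists child <sitemap> entries instead of <url> entries
    is_index = root.tag == f"{SITEMAP_NS}sitemapindex"
    tag = f"{SITEMAP_NS}sitemap" if is_index else f"{SITEMAP_NS}url"
    entries = root.findall(tag)
    missing_lastmod = sum(
        1 for e in entries if e.find(f"{SITEMAP_NS}lastmod") is None
    )
    return {
        "valid": True,
        "is_index": is_index,
        "url_count": len(entries),
        "over_limit": len(entries) > 50_000,
        "missing_lastmod": missing_lastmod,
    }
```

For a sitemap index (`is_index` true), each child entry's `<loc>` would be fetched and fed back through the same validator.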
### 3. URL Accessibility Sampling

- Extract URLs from the sitemap
- Sample 50-100 URLs for large sites
- Check HTTP status codes
- Identify redirects and broken links
- Report 4xx/5xx errors
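The sampling and status bucketing in step 3 can be sketched as follows (helper names are illustrative; note that `urllib.request` follows redirects by default, so surfacing raw 3xx codes would need a custom opener):

```python
import random
from urllib import error, request

def sample_urls(urls: list[str], k: int = 50, seed: int = 0) -> list[str]:
    """Deterministically sample up to k URLs for large sitemaps."""
    if len(urls) <= k:
        return list(urls)
    return random.Random(seed).sample(urls, k)

def categorize(status_codes: list[int]) -> dict:
    """Bucket HTTP status codes into the report's categories."""
    return {
        "2xx": sum(1 for s in status_codes if 200 <= s < 300),
        "3xx": sum(1 for s in status_codes if 300 <= s < 400),
        "4xx_5xx": sum(1 for s in status_codes if s >= 400),
    }

def fetch_status(url: str, timeout: float = 10.0) -> int:
    """HEAD request; 4xx/5xx arrive as HTTPError, which carries the code."""
    req = request.Request(url, method="HEAD")
    try:
        with request.urlopen(req, timeout=timeout) as resp:
            return resp.status
    except error.HTTPError as exc:
        return exc.code
```

A fixed seed keeps the sample reproducible across audit runs, so re-checks after a fix hit the same URLs.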
## Output Format
## Technical SEO Audit: [domain]
### Robots.txt Analysis
- Status: [Valid/Invalid/Missing]
- Sitemap declared: [Yes/No]
- Critical blocks: [List]
### Sitemap Validation
- URLs found: [count]
- Syntax: [Valid/Errors]
- Issues: [List]
### URL Accessibility (sampled)
- Checked: [count] URLs
- Success (2xx): [count]
- Redirects (3xx): [count]
- Errors (4xx/5xx): [count]
### Recommendations
1. [Priority fixes]
## Common Issues
| Issue | Impact | Fix |
|---|---|---|
| No sitemap in robots.txt | Medium | Add Sitemap: directive |
| Blocking CSS/JS | High | Allow Googlebot access |
| 404s in sitemap | High | Remove or fix URLs |
| Missing lastmod | Low | Add dates for freshness signals |
## Limitations
- Cannot access password-protected sitemaps
- Large sitemaps (10,000+ URLs) require sampling
- Does not check render-blocking issues (use Core Web Vitals skill)
## Notion Output (Required)

All audit reports MUST be saved to the OurDigital SEO Audit Log:

- Database ID: `2c8581e5-8a1e-8035-880b-e38cefc2f3ef`
- Properties: Issue (title), Site (url), Category, Priority, Found Date, Audit ID
- Language: Korean with English technical terms
- Audit ID Format: `[TYPE]-YYYYMMDD-NNN`