# Firecrawl
MCP tool documentation for URL inventory crawling and redirect verification
## Available Commands
- `firecrawl_crawl` - Crawl an entire site to capture all URLs and status codes for the migration inventory
- `firecrawl_scrape` - Scrape individual pages to verify redirect health (status codes, chains, final URL)
## Configuration
- Requires the Firecrawl MCP server configured in Claude Desktop
- API access via the `mcp__firecrawl__*` tool prefix
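A Claude Desktop entry for the server typically looks roughly like the following. Treat this as an illustrative sketch: the exact package name, command, and environment variable depend on the Firecrawl MCP distribution you install, so check its own setup docs before copying.

```json
{
  "mcpServers": {
    "firecrawl": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": { "FIRECRAWL_API_KEY": "fc-YOUR-KEY" }
    }
  }
}
```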
## Examples
```python
# Crawl full site for URL inventory
mcp__firecrawl__firecrawl_crawl(url="https://example.com", limit=5000, scrapeOptions={"formats": ["links"]})

# Verify a redirect
mcp__firecrawl__firecrawl_scrape(url="https://old-example.com/page", formats=["links"])
```
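For sites larger than one crawl budget, crawls can be scoped to a path. This sketch uses `includePaths`, a Firecrawl crawl option for restricting the crawl by URL pattern; verify the option name against the version of the MCP server you are running:

```python
# Crawl only the blog section of a large site
# (includePaths is assumed to be supported by your Firecrawl version)
mcp__firecrawl__firecrawl_crawl(
    url="https://example.com",
    limit=5000,
    includePaths=["/blog/.*"],
    scrapeOptions={"formats": ["links"]},
)
```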
## Notes
- Crawl limit defaults to 5,000 URLs per run
- For larger sites, run multiple crawls with path-based filtering
- Redirect verification returns `status_code`, `final_url`, and `redirect_chain`
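The redirect fields above can be validated programmatically against a migration redirect map. A minimal sketch in Python, assuming each verification result is a dict carrying the `status_code`, `final_url`, and `redirect_chain` keys described in the notes; the `check_redirect` helper and the sample data are hypothetical, not part of the Firecrawl API:

```python
def check_redirect(result, expected_final_url, max_hops=2):
    """Validate one redirect verification result against the migration map.

    `result` is assumed to carry the fields noted above: status_code,
    final_url, and redirect_chain (a list of intermediate URLs).
    Returns a list of problems; an empty list means the redirect is healthy.
    """
    problems = []
    if result["status_code"] != 200:
        problems.append(f"final status {result['status_code']}")
    if result["final_url"] != expected_final_url:
        problems.append(
            f"landed on {result['final_url']}, expected {expected_final_url}"
        )
    if len(result["redirect_chain"]) > max_hops:
        problems.append(
            f"chain of {len(result['redirect_chain'])} hops exceeds {max_hops}"
        )
    return problems


# Hypothetical verification result for one migrated URL
sample = {
    "status_code": 200,
    "final_url": "https://example.com/new-page",
    "redirect_chain": ["https://old-example.com/page"],
}
print(check_redirect(sample, "https://example.com/new-page"))  # []
```

Flagging long chains matters during migrations because each extra hop adds latency and dilutes the signals search engines pass through the redirect.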