| description | argument-hint | allowed-tools |
|---|---|---|
| Review distilled content quality. Multi-criteria scoring with decision routing (approve/refactor/deep_research/reject). | `<distill-id\|all-pending> [--auto-approve] [--threshold 0.85]` | Read, Write, Bash, Glob, Grep |
# Quality Reviewer

Review distilled content for quality and route decisions.
## Arguments

- `<distill-id|all-pending>`: a specific distill ID, or `all-pending` to review everything pending
- `--auto-approve`: automatically approve scores above the threshold
- `--threshold`: approval threshold (default: 0.85)
## Review Criteria

### Scoring Dimensions
| Criterion | Weight | Checks |
|---|---|---|
| Accuracy | 25% | Factual correctness, up-to-date info, proper attribution |
| Completeness | 20% | Covers key concepts, includes examples, addresses edge cases |
| Clarity | 20% | Clear structure, concise language, logical flow |
| Prompt Engineering Quality | 25% | Demonstrates techniques, shows before/after, actionable |
| Usability | 10% | Easy to reference, searchable keywords, appropriate length |
### Score Calculation

```python
score = (
    accuracy * 0.25 +
    completeness * 0.20 +
    clarity * 0.20 +
    prompt_eng_quality * 0.25 +
    usability * 0.10
)
```
### Decision Thresholds
| Score | Decision | Action |
|---|---|---|
| ≥ 0.85 | APPROVE | Ready for export |
| 0.60-0.84 | REFACTOR | Re-distill with feedback |
| 0.40-0.59 | DEEP_RESEARCH | Gather more sources |
| < 0.40 | REJECT | Archive (low quality) |
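The weighted score and its threshold routing can be sketched in Python. This is a minimal sketch: the function and dictionary names are illustrative, not part of the command itself, and the weights and cutoffs simply mirror the two tables above.

```python
# Weights mirror the Scoring Dimensions table.
WEIGHTS = {
    "accuracy": 0.25,
    "completeness": 0.20,
    "clarity": 0.20,
    "prompt_eng_quality": 0.25,
    "usability": 0.10,
}

def overall_score(scores: dict) -> float:
    """Weighted sum of per-dimension scores (each 0.0 to 1.0)."""
    return sum(scores[dim] * weight for dim, weight in WEIGHTS.items())

def route(score: float) -> str:
    """Map an overall score to a decision per the thresholds table."""
    if score >= 0.85:
        return "approve"
    if score >= 0.60:
        return "refactor"
    if score >= 0.40:
        return "deep_research"
    return "reject"
```

For example, dimension scores of 0.90, 0.85, 0.95, 0.88, and 0.82 produce an overall score just under 0.89, which routes to `approve`.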
## Review Process

### 1. Load Distilled Content

```bash
source ~/.envrc
mysql -u "$MYSQL_USER" -p"$MYSQL_PASSWORD" reference_library -e \
  "SELECT * FROM distilled_content WHERE distill_id = $ID"
```

### 2. Evaluate Each Criterion

Score each dimension from 0.0 to 1.0.
3. Generate Assessment
{
"accuracy": 0.90,
"completeness": 0.85,
"clarity": 0.95,
"prompt_engineering_quality": 0.88,
"usability": 0.82,
"overall_score": 0.88,
"decision": "approve",
"feedback": "Well-structured with clear examples...",
"refactor_instructions": null
}
### 4. Log Review

```sql
INSERT INTO review_logs
  (distill_id, review_round, reviewer_type, quality_score,
   assessment, decision, feedback, refactor_instructions)
VALUES
  (?, 1, 'claude_review', ?, ?, ?, ?, ?);
```
### 5. Update Status

```sql
UPDATE distilled_content
SET review_status = 'approved'
WHERE distill_id = ?;
```
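Steps 4 and 5 can be combined into one parameterized write. The sketch below uses Python's built-in `sqlite3` as a stand-in for MySQL purely to stay self-contained: the inline table definitions are reduced approximations of the real schema, and `?` placeholders stand in for a MySQL driver's `%s`.

```python
import json
import sqlite3

# In-memory stand-in for the reference_library MySQL database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE review_logs (
    distill_id INTEGER, review_round INTEGER, reviewer_type TEXT,
    quality_score REAL, assessment TEXT, decision TEXT,
    feedback TEXT, refactor_instructions TEXT);
CREATE TABLE distilled_content (
    distill_id INTEGER PRIMARY KEY, review_status TEXT DEFAULT 'pending');
""")
conn.execute("INSERT INTO distilled_content (distill_id) VALUES (15)")

assessment = {"overall_score": 0.88, "decision": "approve",
              "feedback": "Well-structured with clear examples..."}

# Step 4: log the review round, storing the full assessment as JSON.
conn.execute(
    "INSERT INTO review_logs (distill_id, review_round, reviewer_type, "
    "quality_score, assessment, decision, feedback, refactor_instructions) "
    "VALUES (?, 1, 'claude_review', ?, ?, ?, ?, ?)",
    (15, assessment["overall_score"], json.dumps(assessment),
     assessment["decision"], assessment["feedback"], None),
)

# Step 5: update the content's review status to match the decision.
conn.execute(
    "UPDATE distilled_content SET review_status = 'approved' WHERE distill_id = ?",
    (15,),
)
conn.commit()
```

Parameterized queries keep the assessment text (which may contain quotes) from breaking the SQL and avoid injection issues.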
## Decision Routing

**APPROVE → markdown-exporter.** Content is ready for export.

**REFACTOR → content-distiller.** Re-distill with specific feedback:

```json
{"refactor_instructions": "Add more code examples for the API authentication section"}
```

**DEEP_RESEARCH → web-crawler.** Gather more sources:

```json
{"research_queries": ["Claude API authentication examples", "Anthropic SDK best practices"]}
```

**REJECT → Archive.** Mark as rejected, optionally noting the reason.
## Example Usage

```shell
# Review a single distill ID
/quality-reviewer 15

# Review all pending items, auto-approving those above the threshold
/quality-reviewer all-pending --auto-approve

# Review with a custom approval threshold
/quality-reviewer 42 --threshold 0.80
```