# Quality Reviewer
QA loop for reference library content. Scores distilled materials, routes decisions, and provides actionable feedback.
## Trigger Keywords
"review content", "quality check", "QA review", "assess distilled content", "check reference quality"
## Decision Flow
```
[Distilled Content]
         │
         ▼
┌──────────────────┐
│  Score Criteria  │ → accuracy, completeness, clarity, PE quality, usability
└──────────────────┘
         │
         ├── ≥ 0.85    → APPROVE       → markdown-exporter
         ├── 0.60-0.84 → REFACTOR      → content-distiller
         ├── 0.40-0.59 → DEEP_RESEARCH → web-crawler-orchestrator
         └── < 0.40    → REJECT        → archive
```
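The routing thresholds above are simple to express in code. A minimal sketch, assuming the final score is a float in [0, 1] (the shipped `scripts/route_decision.py` may implement this differently):

```python
def route(score: float) -> str:
    """Map a final weighted score to a routing decision.

    Thresholds mirror the decision flow diagram; this is an
    illustrative sketch, not the actual route_decision.py.
    """
    if score >= 0.85:
        return "approve"        # hand off to markdown-exporter
    if score >= 0.60:
        return "refactor"       # send back to content-distiller
    if score >= 0.40:
        return "deep_research"  # queue queries for web-crawler-orchestrator
    return "reject"             # archive with reason
```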
## Scoring Criteria
| Criterion | Weight | Checks |
|---|---|---|
| Accuracy | 0.25 | Factual correctness, up-to-date, attribution |
| Completeness | 0.20 | Key concepts, examples, edge cases |
| Clarity | 0.20 | Structure, concise language, logical flow |
| PE Quality | 0.25 | Techniques, before/after, explains why |
| Usability | 0.10 | Easy reference, searchable, appropriate length |
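The final score is the weighted average of the five criteria (the weights above sum to 1.0). A minimal sketch of the calculation, assuming per-criterion scores in [0, 1] (the shipped `scripts/calculate_score.py` may differ in detail):

```python
WEIGHTS = {
    "accuracy": 0.25,
    "completeness": 0.20,
    "clarity": 0.20,
    "pe_quality": 0.25,
    "usability": 0.10,
}

def final_score(criterion_scores: dict[str, float]) -> float:
    """Weighted average over the five criteria."""
    return sum(w * criterion_scores[name] for name, w in WEIGHTS.items())
```

For example, criterion scores of 0.90 / 0.75 / 0.80 / 0.70 / 0.85 give 0.25·0.90 + 0.20·0.75 + 0.20·0.80 + 0.25·0.70 + 0.10·0.85 = 0.795, which would route to REFACTOR.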
## Workflow
### Step 1: Load Pending Reviews

```bash
python scripts/load_pending_reviews.py --output pending.json
```
### Step 2: Score Content

```bash
python scripts/score_content.py --distill-id 123 --output assessment.json
```
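The format of `assessment.json` isn't specified in this doc; a plausible shape, with keys matching the scoring criteria table, might look like the following (hypothetical example, expressed as a Python literal):

```python
# Hypothetical assessment for --distill-id 123; the actual
# score_content.py output format may differ.
assessment = {
    "distill_id": 123,
    "scores": {
        "accuracy": 0.90,
        "completeness": 0.75,
        "clarity": 0.80,
        "pe_quality": 0.70,
        "usability": 0.85,
    },
    "notes": "Lacks before/after examples for the core techniques.",
}
```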
### Step 3: Calculate Final Score

```bash
python scripts/calculate_score.py --assessment assessment.json
```
### Step 4: Route Decision

```bash
python scripts/route_decision.py --distill-id 123 --score 0.78
```
Outputs:
- `approve` → Ready for export
- `refactor` → Return to distiller with instructions
- `deep_research` → Need more sources (queries generated)
- `reject` → Archive with reason
### Step 5: Log Review

```bash
python scripts/log_review.py --distill-id 123 --decision refactor --instructions "Add more examples"
```
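What `scripts/log_review.py` stores isn't specified here beyond "log review to database" (the toolkit's content-repository uses MySQL). A plausible record for the command above, labeled hypothetical:

```python
# Hypothetical review-log row; the real MySQL schema is not shown here.
review_record = {
    "distill_id": 123,
    "decision": "refactor",
    "instructions": "Add more examples",
    "score": 0.78,                          # from the route_decision step
    "reviewed_at": "2026-01-15T09:30:00Z",  # placeholder timestamp
}
```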
## PE Quality Checklist

When scoring `prompt_engineering_quality`:
- Demonstrates specific techniques (CoT, few-shot, etc.)
- Shows before/after examples
- Explains why techniques work
- Provides actionable patterns
- Includes edge cases and failure modes
- References authoritative sources
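One straightforward convention for turning this checklist into the PE Quality subscore is the fraction of items satisfied. This is an illustrative sketch, not necessarily how `scripts/score_content.py` computes it:

```python
PE_CHECKLIST = frozenset({
    "demonstrates_specific_techniques",
    "shows_before_after_examples",
    "explains_why_techniques_work",
    "provides_actionable_patterns",
    "includes_edge_cases_and_failure_modes",
    "references_authoritative_sources",
})

def pe_quality_score(items_passed: set[str]) -> float:
    """Fraction of checklist items the content satisfies, in [0, 1]."""
    return len(items_passed & PE_CHECKLIST) / len(PE_CHECKLIST)
```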
## Auto-Approve Rules

Content from Tier 1 sources scoring ≥ 0.80 may be auto-approved:
```yaml
# In config
quality:
  auto_approve_tier1_sources: true
  auto_approve_min_score: 0.80
```
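A minimal sketch of how this gate could sit in front of the threshold routing sketched earlier (the `source_tier` input and config shape are assumptions based on the snippet above):

```python
def decide(score: float, source_tier: int, config: dict) -> str:
    """Auto-approve eligible Tier 1 content, else fall back to thresholds."""
    quality = config.get("quality", {})
    if (
        quality.get("auto_approve_tier1_sources", False)
        and source_tier == 1
        and score >= quality.get("auto_approve_min_score", 0.80)
    ):
        return "approve"
    return route(score)  # threshold routing from the earlier sketch
```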
## Scripts
- `scripts/load_pending_reviews.py` - Get pending reviews
- `scripts/score_content.py` - Multi-criteria scoring
- `scripts/calculate_score.py` - Weighted average calculation
- `scripts/route_decision.py` - Decision routing logic
- `scripts/log_review.py` - Log review to database
- `scripts/generate_feedback.py` - Generate refactor instructions
## Integration
| From | Action | To |
|---|---|---|
| content-distiller | Distilled content | quality-reviewer |
| quality-reviewer | APPROVE | markdown-exporter |
| quality-reviewer | REFACTOR + instructions | content-distiller |
| quality-reviewer | DEEP_RESEARCH + queries | web-crawler-orchestrator |
| quality-reviewer | REJECT + reason | archive |