# Quality Reviewer

QA loop for reference library content. Scores distilled materials, routes decisions, and provides actionable feedback.

## Trigger Keywords

"review content", "quality check", "QA review", "assess distilled content", "check reference quality"

## Decision Flow

```
[Distilled Content]
        │
        ▼
┌─────────────────┐
│ Score Criteria  │ → accuracy, completeness, clarity, PE quality, usability
└─────────────────┘
        │
        ├── ≥ 0.85    → APPROVE       → markdown-exporter
        ├── 0.60-0.84 → REFACTOR      → content-distiller
        ├── 0.40-0.59 → DEEP_RESEARCH → web-crawler
        └── < 0.40    → REJECT        → archive
```

## Scoring Criteria

| Criterion | Weight | Checks |
|-----------|--------|--------|
| **Accuracy** | 0.25 | Factual correctness, up-to-date, attribution |
| **Completeness** | 0.20 | Key concepts, examples, edge cases |
| **Clarity** | 0.20 | Structure, concise language, logical flow |
| **PE Quality** | 0.25 | Techniques, before/after, explains why |
| **Usability** | 0.10 | Easy reference, searchable, appropriate length |

## Workflow

### Step 1: Load Pending Reviews

```bash
python scripts/load_pending_reviews.py --output pending.json
```

### Step 2: Score Content

```bash
python scripts/score_content.py --distill-id 123 --output assessment.json
```

### Step 3: Calculate Final Score

```bash
python scripts/calculate_score.py --assessment assessment.json
```

### Step 4: Route Decision

```bash
python scripts/route_decision.py --distill-id 123 --score 0.78
```

Outputs:

- `approve` → Ready for export
- `refactor` → Return to distiller with instructions
- `deep_research` → Need more sources (queries generated)
- `reject` → Archive with reason

### Step 5: Log Review

```bash
python scripts/log_review.py --distill-id 123 --decision refactor --instructions "Add more examples"
```

## PE Quality Checklist

When scoring `prompt_engineering_quality`:

- [ ] Demonstrates specific techniques (CoT, few-shot, etc.)
- [ ] Shows before/after examples
- [ ] Explains *why* techniques work
- [ ] Provides actionable patterns
- [ ] Includes edge cases and failure modes
- [ ] References authoritative sources

## Auto-Approve Rules

Tier 1 sources with a score ≥ 0.80 may auto-approve:

```yaml
# In config
quality:
  auto_approve_tier1_sources: true
  auto_approve_min_score: 0.80
```

## Scripts

- `scripts/load_pending_reviews.py` - Get pending reviews
- `scripts/score_content.py` - Multi-criteria scoring
- `scripts/calculate_score.py` - Weighted average calculation
- `scripts/route_decision.py` - Decision routing logic
- `scripts/log_review.py` - Log review to database
- `scripts/generate_feedback.py` - Generate refactor instructions

## Integration

| From | Action | To |
|------|--------|-----|
| content-distiller | Distilled content | quality-reviewer |
| quality-reviewer | APPROVE | markdown-exporter |
| quality-reviewer | REFACTOR + instructions | content-distiller |
| quality-reviewer | DEEP_RESEARCH + queries | web-crawler-orchestrator |
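
## Logic Sketch

The weighted average from Step 3 and the threshold routing from the decision flow can be sketched as follows. This is a minimal illustration, not the actual `calculate_score.py` / `route_decision.py` implementations: the weights, thresholds, and decision names come from the tables above, while the function signatures and the shape of the `assessment` dict are assumptions.

```python
# Weights mirror the Scoring Criteria table (they sum to 1.0).
WEIGHTS = {
    "accuracy": 0.25,
    "completeness": 0.20,
    "clarity": 0.20,
    "pe_quality": 0.25,
    "usability": 0.10,
}


def calculate_score(assessment: dict) -> float:
    """Weighted average of per-criterion scores, each in 0.0-1.0."""
    return sum(assessment[criterion] * weight
               for criterion, weight in WEIGHTS.items())


def route_decision(score: float, tier1_source: bool = False) -> str:
    """Map a final score to a decision per the decision-flow thresholds."""
    # Auto-approve rule: Tier 1 sources clear a lower bar (>= 0.80).
    if tier1_source and score >= 0.80:
        return "approve"
    if score >= 0.85:
        return "approve"        # -> markdown-exporter
    if score >= 0.60:
        return "refactor"       # -> content-distiller
    if score >= 0.40:
        return "deep_research"  # -> web-crawler
    return "reject"             # -> archive


# Example: the score 0.78 from Step 4 falls in the 0.60-0.84 band.
assessment = {"accuracy": 0.9, "completeness": 0.7, "clarity": 0.8,
              "pe_quality": 0.75, "usability": 0.8}
decision = route_decision(calculate_score(assessment))  # "refactor"
```

Keeping the weights in a single dict means the scoring table and the code cannot drift apart silently: a criterion added to one but not the other raises a `KeyError` instead of skewing the average.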