Andrew Yim 6d7a6d7a88 feat(reference-curator): Add portable skill suite for reference documentation curation
6 modular skills for curating, processing, and exporting reference docs:
- reference-discovery: Search and validate authoritative sources
- web-crawler-orchestrator: Multi-backend crawling (Firecrawl/Node/aiohttp/Scrapy)
- content-repository: MySQL storage with version tracking
- content-distiller: Summarization and key concept extraction
- quality-reviewer: QA loop with approve/refactor/research routing
- markdown-exporter: Structured output for Claude Projects or fine-tuning

Cross-machine installation support:
- Environment-based config (~/.reference-curator.env)
- Commands tracked in repo, symlinked during install
- install.sh with --minimal, --check, --uninstall modes
- Firecrawl MCP as default (always available)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-29 00:20:27 +07:00


# Quality Reviewer
QA loop for reference library content. Scores distilled materials, routes decisions, and provides actionable feedback.
## Trigger Keywords
"review content", "quality check", "QA review", "assess distilled content", "check reference quality"
## Decision Flow
```
[Distilled Content]
         │
         ▼
┌─────────────────┐
│ Score Criteria  │ → accuracy, completeness, clarity, PE quality, usability
└─────────────────┘
         │
         ├── ≥ 0.85    → APPROVE       → markdown-exporter
         ├── 0.60–0.84 → REFACTOR      → content-distiller
         ├── 0.40–0.59 → DEEP_RESEARCH → web-crawler-orchestrator
         └── < 0.40    → REJECT        → archive
```
## Scoring Criteria
| Criterion | Weight | Checks |
|-----------|--------|--------|
| **Accuracy** | 0.25 | Factual correctness, up-to-date, attribution |
| **Completeness** | 0.20 | Key concepts, examples, edge cases |
| **Clarity** | 0.20 | Structure, concise language, logical flow |
| **PE Quality** | 0.25 | Techniques, before/after, explains why |
| **Usability** | 0.10 | Easy reference, searchable, appropriate length |
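The weighted final score implied by this table can be sketched as a plain weighted sum. This is a minimal illustration of the math only; the actual internals of `calculate_score.py` are not shown here, and the key names are assumptions:

```python
# Criterion weights from the scoring table above (sum to 1.0).
WEIGHTS = {
    "accuracy": 0.25,
    "completeness": 0.20,
    "clarity": 0.20,
    "pe_quality": 0.25,
    "usability": 0.10,
}

def final_score(criterion_scores: dict[str, float]) -> float:
    """Weighted average of per-criterion scores, each in [0, 1]."""
    return sum(WEIGHTS[c] * criterion_scores[c] for c in WEIGHTS)

# Hypothetical assessment: strong accuracy, solid elsewhere.
example = {"accuracy": 0.9, "completeness": 0.8, "clarity": 0.8,
           "pe_quality": 0.8, "usability": 0.8}
```

With these example scores the weighted average lands at 0.825, i.e. in the REFACTOR band of the decision flow.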
## Workflow
### Step 1: Load Pending Reviews
```bash
python scripts/load_pending_reviews.py --output pending.json
```
### Step 2: Score Content
```bash
python scripts/score_content.py --distill-id 123 --output assessment.json
```
### Step 3: Calculate Final Score
```bash
python scripts/calculate_score.py --assessment assessment.json
```
### Step 4: Route Decision
```bash
python scripts/route_decision.py --distill-id 123 --score 0.78
```
Outputs:
- `approve` → Ready for export
- `refactor` → Return to distiller with instructions
- `deep_research` → Need more sources (queries generated)
- `reject` → Archive with reason
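The threshold routing above can be sketched as follows. The function name and return strings mirror the outputs listed here, but this is an illustration, not the actual `route_decision.py` API:

```python
def route(score: float) -> str:
    """Map a final quality score to a routing decision per the thresholds above."""
    if score >= 0.85:
        return "approve"        # ready for markdown-exporter
    if score >= 0.60:
        return "refactor"       # back to content-distiller with instructions
    if score >= 0.40:
        return "deep_research"  # generate queries for web-crawler-orchestrator
    return "reject"             # archive with reason
```

So a score of 0.78, as in the Step 4 example, routes to `refactor`.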
### Step 5: Log Review
```bash
python scripts/log_review.py --distill-id 123 --decision refactor --instructions "Add more examples"
```
## PE Quality Checklist
When scoring `prompt_engineering_quality`:
- [ ] Demonstrates specific techniques (CoT, few-shot, etc.)
- [ ] Shows before/after examples
- [ ] Explains *why* techniques work
- [ ] Provides actionable patterns
- [ ] Includes edge cases and failure modes
- [ ] References authoritative sources
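One hedged way to turn this checklist into a `pe_quality` sub-score is the fraction of items satisfied. The item identifiers below are invented for illustration, and the skill's scripts may well weight items differently:

```python
# Hypothetical identifiers for the six checklist items above.
PE_CHECKLIST = [
    "demonstrates_techniques",
    "before_after_examples",
    "explains_why",
    "actionable_patterns",
    "edge_cases_failure_modes",
    "authoritative_sources",
]

def pe_quality(satisfied: set[str]) -> float:
    """Fraction of PE checklist items satisfied, in [0, 1]."""
    return len(satisfied & set(PE_CHECKLIST)) / len(PE_CHECKLIST)
```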
## Auto-Approve Rules
Content from Tier 1 sources scoring ≥ 0.80 may be auto-approved:
```yaml
# In config
quality:
  auto_approve_tier1_sources: true
  auto_approve_min_score: 0.80
```
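Combined with the config above, the auto-approve check might look like this. The dictionary field names mirror the YAML keys; everything else (function name, tier encoding) is an assumption for illustration:

```python
def may_auto_approve(source_tier: int, score: float, config: dict) -> bool:
    """Auto-approve only Tier 1 sources that meet the configured minimum score."""
    q = config.get("quality", {})
    return (
        q.get("auto_approve_tier1_sources", False)
        and source_tier == 1
        and score >= q.get("auto_approve_min_score", 0.80)
    )

# Config as loaded from the YAML fragment above.
config = {"quality": {"auto_approve_tier1_sources": True,
                      "auto_approve_min_score": 0.80}}
```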
## Scripts
- `scripts/load_pending_reviews.py` - Get pending reviews
- `scripts/score_content.py` - Multi-criteria scoring
- `scripts/calculate_score.py` - Weighted average calculation
- `scripts/route_decision.py` - Decision routing logic
- `scripts/log_review.py` - Log review to database
- `scripts/generate_feedback.py` - Generate refactor instructions
## Integration
| From | Action | To |
|------|--------|-----|
| content-distiller | Distilled content | quality-reviewer |
| quality-reviewer | APPROVE | markdown-exporter |
| quality-reviewer | REFACTOR + instructions | content-distiller |
| quality-reviewer | DEEP_RESEARCH + queries | web-crawler-orchestrator |
| quality-reviewer | REJECT + reason | archive |