
Agent Critique Command

The agent critique command produces a comprehensive, peer-review-style critique of a paper, examining its methodology, experiments, claims, and reproducibility.

Basic Usage

scoutml agent critique ARXIV_ID [OPTIONS]

Examples

Full Critique

# Complete critique of all aspects
scoutml agent critique 1810.04805

Focused Critique

# Critique specific aspects
scoutml agent critique 2010.11929 \
  --aspects methodology \
  --aspects experiments

Options

Option     Type    Default  Description
--aspects  TEXT    all      Aspects to critique (repeatable): methodology, experiments, claims, reproducibility
--output   CHOICE  rich     Output format: rich or json
--export   PATH    None     Export the critique to a file

Critique Aspects

Methodology

scoutml agent critique 2103.00020 --aspects methodology

Examines:

  • Research design
  • Theoretical foundation
  • Approach validity
  • Assumptions made
  • Potential biases

Experiments

scoutml agent critique 2103.00020 --aspects experiments

Analyzes:

  • Experimental setup
  • Evaluation metrics
  • Baseline comparisons
  • Statistical significance
  • Ablation studies

Claims

scoutml agent critique 2103.00020 --aspects claims

Evaluates:

  • Main assertions
  • Evidence support
  • Generalization scope
  • Acknowledgment of limitations
  • Claim validity

Reproducibility

scoutml agent critique 2103.00020 --aspects reproducibility

Assesses:

  • Implementation details
  • Hyperparameter specification
  • Data availability
  • Code accessibility
  • Computational requirements

Critique Components

1. Summary

  • Paper overview
  • Main contributions
  • Research context

2. Strengths

  • Novel contributions
  • Strong methodology
  • Convincing results
  • Clear presentation

3. Weaknesses

  • Methodological issues
  • Experimental gaps
  • Unclear aspects
  • Missing comparisons

4. Detailed Analysis

  • Section-by-section review
  • Technical accuracy
  • Logical flow
  • Evidence quality

5. Recommendations

  • Improvement suggestions
  • Future work directions
  • Additional experiments
  • Clarifications needed

6. Overall Assessment

  • Significance rating
  • Technical quality
  • Clarity score
  • Impact potential

Use Cases

Paper Review

# Before reading a paper
scoutml agent critique 2301.08727 \
  --export critique.md

Research Validation

# Validate methodology
scoutml agent critique 1906.08237 \
  --aspects methodology \
  --aspects reproducibility

Literature Analysis

# Analyze multiple papers
for paper in 1810.04805 2005.14165 1910.10683; do
    scoutml agent critique $paper \
        --aspects claims \
        --export "critique_${paper}.md"
done

Teaching Critical Analysis

# Educational critique
scoutml agent critique 1706.03762 \
  --output rich

Advanced Usage

Comparative Critiques

# Critique related papers
papers=("2010.11929" "2102.05918" "2105.08050")

for paper in "${papers[@]}"; do
    echo "=== Critique of $paper ==="
    scoutml agent critique "$paper" \
        --aspects methodology \
        --aspects experiments \
        --output json > "critique_${paper}.json"
done
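Once the loop above has written one JSON file per paper, the methodology scores can be compared with a short post-processing script. This is a sketch: the field layout follows the JSON example later on this page, and the filenames match the loop above.

```python
import json
from pathlib import Path

def methodology_scores(directory="."):
    """Collect (arxiv_id, methodology score) pairs from critique_*.json files.

    Assumes the layout shown in the "JSON Output" section:
    critique.methodology.score holds a numeric score.
    """
    scores = []
    for path in sorted(Path(directory).glob("critique_*.json")):
        data = json.loads(path.read_text())
        arxiv_id = data["paper"]["arxiv_id"]
        score = data["critique"]["methodology"]["score"]
        scores.append((arxiv_id, score))
    # Highest-scoring methodology first
    return sorted(scores, key=lambda pair: pair[1], reverse=True)
```

Running this in the directory where the loop wrote its files yields a ranked list you can use to decide which paper's approach to study first.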

Reproducibility Focus

# Check implementation feasibility
scoutml agent critique 2103.00020 \
  --aspects reproducibility \
  --output rich

# Follow up with implementation
scoutml agent implement 2103.00020

Claims Verification

# Verify bold claims
scoutml agent critique 2301.12345 \
  --aspects claims \
  --aspects experiments \
  --export claims_analysis.md

Output Examples

Rich Output (Default)

Displays a structured critique with:

  • Color-coded sections
  • Severity indicators
  • Formatted lists
  • Clear hierarchy

JSON Output

scoutml agent critique 1810.04805 --output json

Returns:

{
  "paper": {
    "arxiv_id": "1810.04805",
    "title": "BERT: Pre-training of Deep..."
  },
  "critique": {
    "strengths": [
      "Novel bidirectional pre-training approach",
      "Strong empirical results across tasks"
    ],
    "weaknesses": [
      "Computational requirements not fully analyzed",
      "Limited analysis of what model learns"
    ],
    "methodology": {
      "score": 8.5,
      "issues": [...],
      "strengths": [...]
    },
    "overall_assessment": {
      "significance": "high",
      "technical_quality": "excellent",
      "recommendation": "strong accept"
    }
  }
}
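The JSON form is easy to digest programmatically. A minimal Python sketch, using the field names from the example above:

```python
import json

def summarize_critique(raw):
    """Condense a critique JSON document into a one-line verdict.

    `raw` is the JSON string produced by `--output json`; the field
    names follow the example shown above.
    """
    doc = json.loads(raw)
    critique = doc["critique"]
    assessment = critique["overall_assessment"]
    return (
        f"{doc['paper']['arxiv_id']}: "
        f"{len(critique['strengths'])} strengths, "
        f"{len(critique['weaknesses'])} weaknesses, "
        f"recommendation: {assessment['recommendation']}"
    )
```

Saved as a script that reads stdin, this could be piped directly after `scoutml agent critique ... --output json`.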

Interpretation Guide

Strength Indicators

  • Major Strength: Significant contribution
  • Moderate Strength: Solid aspect
  • Minor Strength: Nice addition

Weakness Severity

  • Critical: Fundamental flaw
  • Major: Significant issue
  • Moderate: Should address
  • Minor: Nice to fix

Overall Ratings

  • Significance: Impact on field
  • Technical Quality: Methodology rigor
  • Clarity: Presentation quality
  • Reproducibility: Implementation feasibility
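When triaging many critiques, the qualitative ratings can be reduced to a simple read/skip decision. The rating vocabulary here ("high", "medium", "low") is an assumption extrapolated from the JSON example earlier on this page; adjust it to whatever your output actually contains.

```python
# Assumed rating scale, based on the "significance": "high" value
# in the JSON example earlier on this page.
PRIORITY = {"low": 0, "medium": 1, "high": 2}

def worth_deep_read(assessment, min_significance="high"):
    """Return True when a paper's significance rating meets the threshold.

    `assessment` is the critique's overall_assessment dict.
    """
    rank = PRIORITY.get(assessment.get("significance", "low"), 0)
    return rank >= PRIORITY[min_significance]
```

Lowering `min_significance` to "medium" widens the reading list; the helper is purely illustrative and not part of scoutml itself.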

Best Practices

When to Use

  1. Before implementing - Check feasibility
  2. Paper selection - Choose best papers
  3. Research planning - Learn from critiques
  4. Teaching - Demonstrate critical thinking

Aspect Selection

  1. Use all aspects for comprehensive review
  2. Focus on methodology for theoretical papers
  3. Emphasize experiments for empirical work
  4. Check reproducibility before implementing
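The heuristics above can be captured in a small helper that builds the `--aspects` flags for a given paper type. Illustrative only; the aspect names are the four supported by the command, but the paper-type categories are an assumption.

```python
def aspect_flags(paper_type, pre_implementation=False):
    """Build --aspects flags following the selection heuristics above.

    paper_type: "theoretical" or "empirical"; anything else falls back
    to a comprehensive review of all four aspects.
    """
    if paper_type == "theoretical":
        aspects = ["methodology"]
    elif paper_type == "empirical":
        aspects = ["experiments"]
    else:
        aspects = ["methodology", "experiments", "claims", "reproducibility"]
    # Always check reproducibility before implementing
    if pre_implementation and "reproducibility" not in aspects:
        aspects.append("reproducibility")
    flags = []
    for aspect in aspects:
        flags += ["--aspects", aspect]
    return flags
```

The returned list can be appended to a `subprocess` invocation of `scoutml agent critique`.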

Acting on Critiques

  1. Don't dismiss papers based on weaknesses
  2. Learn from strengths for your work
  3. Address weaknesses in your implementation
  4. Use for improvement ideas

Common Workflows

Pre-Implementation Check

# 1. Critique the paper
scoutml agent critique 2103.00020 \
  --aspects reproducibility \
  --aspects methodology

# 2. If promising, get implementation
scoutml agent implement 2103.00020

# 3. Address limitations
scoutml agent solve-limitations 2103.00020

Paper Quality Assessment

# Batch critique for paper selection
mkdir -p critiques
while read -r paper_id; do
    scoutml agent critique "$paper_id" \
        --output json \
        --export "critiques/${paper_id}.json"
done < paper_list.txt

# Analyze results
jq '.critique.overall_assessment.significance' critiques/*.json
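If jq is unavailable, the same analysis can be done with a few lines of Python. The field names follow the JSON example earlier on this page, and the rating scale is the same assumed "low"/"medium"/"high" vocabulary.

```python
import json
from pathlib import Path

def rank_by_significance(critiques_dir="critiques"):
    """Pair each critique file with its significance rating, highest first.

    Field names follow the JSON example earlier on this page; the
    rating order is an assumed low/medium/high scale.
    """
    order = {"low": 0, "medium": 1, "high": 2}
    ranked = []
    for path in Path(critiques_dir).glob("*.json"):
        data = json.loads(path.read_text())
        significance = data["critique"]["overall_assessment"]["significance"]
        ranked.append((path.stem, significance))
    return sorted(ranked, key=lambda pair: order.get(pair[1], -1), reverse=True)
```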

Research Improvement

# Learn from top papers
scoutml search "your topic" --sota-only --limit 5 --output json | \
  jq -r '.[].arxiv_id' | \
  xargs -I {} scoutml agent critique {} --aspects methodology

Tips for Using Critiques

Reading Critiques

  1. Start with summary - Get overview
  2. Check major weaknesses - Understand limitations
  3. Note strengths - Learn best practices
  4. Read recommendations - Future directions

Learning from Critiques

  1. Pattern recognition - Common issues
  2. Methodology insights - What works
  3. Experimental design - Best practices
  4. Writing clarity - Presentation tips