Review Command

The review command generates AI-synthesized literature reviews on any research topic, analyzing trends, key contributions, and research directions.

Basic Usage

scoutml review TOPIC [OPTIONS]

Examples

Simple Review

# Basic literature review
scoutml review "federated learning"

# With year constraints
scoutml review "vision transformers" --year-min 2021

Comprehensive Review

# Detailed review with filters
scoutml review "self-supervised learning" \
  --year-min 2020 \
  --year-max 2023 \
  --min-citations 50 \
  --limit 100 \
  --export ssl_review.md

Options

Option           Type     Default  Description
--year-min       INTEGER  None     Minimum publication year
--year-max       INTEGER  None     Maximum publication year
--min-citations  INTEGER  0        Minimum citation count
--limit          INTEGER  50       Maximum number of papers to analyze
--output         CHOICE   rich     Output format: rich, markdown, or json
--export         PATH     None     Export the review to a file

Review Components

The AI-generated review includes:

Executive Summary

  • Topic overview
  • Key themes identified
  • Major breakthroughs
  • Current state of research

Historical Development

  • Timeline of major contributions
  • Evolution of approaches
  • Paradigm shifts

Key Papers Analysis

  • Seminal works
  • Recent advances
  • High-impact contributions

Methods and Techniques

  • Common approaches
  • Novel techniques
  • Comparative analysis

Applications and Domains

  • Real-world applications
  • Cross-domain adaptations
  • Industry adoption

Open Challenges

  • Unresolved problems
  • Current limitations
  • Future directions

Conclusions

  • Research trends
  • Promising directions
  • Recommendations

Output Formats

Rich Format (Default)

scoutml review "few-shot learning"

Interactive terminal display with:

  • Formatted sections
  • Highlighted key points
  • Color-coded information
  • Hierarchical structure

Markdown Format

scoutml review "few-shot learning" --output markdown --export review.md

Perfect for:

  • Documentation
  • Blog posts
  • Research proposals
  • Sharing with colleagues
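
Because the export is plain Markdown, it can be converted to other formats with standard tools. A small sketch, assuming pandoc is installed (PDF output additionally requires a LaTeX engine; filenames are illustrative):

# Export the review, then convert it with pandoc
scoutml review "few-shot learning" --output markdown --export review.md
pandoc review.md -o review.docx
pandoc review.md -o review.pdf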

JSON Format

scoutml review "few-shot learning" --output json

Structured data containing:

  • Section breakdowns
  • Paper references
  • Key findings
  • Statistical analysis
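
The exact JSON schema is not documented here, so treat any field names as assumptions. As a rough sketch, the output can be saved and explored with jq:

# Save the structured review and inspect its top-level keys
scoutml review "few-shot learning" --output json > review.json
jq 'keys' review.json

# Pull out one part of the review (".key_findings" is an assumed field name)
jq '.key_findings' review.json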

Topic Selection

Effective Topics

Good Topics:

# Specific techniques
scoutml review "contrastive self-supervised learning"

# Emerging fields
scoutml review "neural radiance fields"

# Application areas
scoutml review "transformers for time series"

Too Broad:

# Avoid overly general topics
scoutml review "machine learning"  # Too broad
scoutml review "deep learning"     # Too general

Multi-aspect Topics

# Intersection of fields
scoutml review "federated learning privacy preservation"

# Method + application
scoutml review "graph neural networks drug discovery"

# Problem-specific
scoutml review "catastrophic forgetting continual learning"

Advanced Usage

Comprehensive Literature Survey

# Full survey with maximum coverage
scoutml review "multimodal learning" \
  --year-min 2018 \
  --limit 200 \
  --min-citations 10 \
  --output markdown \
  --export multimodal_survey_2024.md

Recent Developments Only

# Focus on latest research
scoutml review "diffusion models" \
  --year-min 2023 \
  --limit 50 \
  --output markdown \
  --export diffusion_recent.md

High-Impact Analysis

# Only highly cited papers
scoutml review "neural architecture search" \
  --min-citations 100 \
  --limit 30 \
  --output rich

Use Cases

PhD Literature Review

# Comprehensive review for thesis
scoutml review "your thesis topic" \
  --year-min 2015 \
  --limit 150 \
  --output markdown \
  --export thesis_litreview.md

Grant Proposals

# Background section for grants
scoutml review "quantum machine learning" \
  --year-min 2020 \
  --min-citations 20 \
  --output markdown \
  --export grant_background.md

Technology Assessment

# Evaluate technology maturity
scoutml review "federated learning production systems" \
  --year-min 2021 \
  --output rich

Course Preparation

# Teaching material preparation
scoutml review "attention mechanisms" \
  --limit 75 \
  --output markdown \
  --export lecture_notes.md

Review Quality Tips

Optimal Parameters

  1. Paper Count: 50-100 for comprehensive reviews
  2. Time Range: 3-5 years for current state
  3. Citations: Adjust based on field maturity
  4. Topic Specificity: Specific enough to stay coherent, broad enough to return sufficient papers
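
Putting these guidelines together, a typical starting point for a comprehensive review of an established topic might look like the following (the topic, thresholds, and filename are illustrative, not prescriptive):

# ~100 papers, a recent multi-year window, light citation filter
scoutml review "graph neural networks" \
  --year-min 2021 \
  --min-citations 25 \
  --limit 100 \
  --output markdown \
  --export gnn_review.md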

Iterative Refinement

# Start broad
scoutml review "reinforcement learning" --limit 30

# Refine based on findings
scoutml review "model-based reinforcement learning" --limit 50

# Focus further
scoutml review "world models reinforcement learning" --limit 75

Example Outputs

Executive Summary Example

# Literature Review: Self-Supervised Learning in Computer Vision

## Executive Summary

Self-supervised learning has emerged as a dominant paradigm in computer vision,
eliminating the need for labeled data. Key developments include contrastive
methods (SimCLR, MoCo), clustering approaches (SwAV), and masked prediction
(MAE). The field has seen rapid progress with methods achieving near-supervised
performance on ImageNet...

Key Papers Section

## Key Papers

### Foundational Works
1. **Momentum Contrast (MoCo)** - He et al., 2020
   - Introduced momentum encoder for contrastive learning
   - Citations: 5000+
   - Impact: Established new baseline for self-supervised learning

2. **SimCLR** - Chen et al., 2020
   - Simplified contrastive learning framework
   - Citations: 4500+
   - Impact: Showed importance of data augmentation...

Best Practices

Topic Selection

  1. Be specific but not too narrow
  2. Include method/application combination
  3. Consider temporal aspects (recent vs historical)

Parameter Tuning

  1. Start with defaults (50 papers)
  2. Increase limit for comprehensive reviews
  3. Use citations to filter quality
  4. Constrain years for focused analysis

Export Strategy

  1. Always export important reviews
  2. Use markdown for editing/sharing
  3. Keep JSON for further analysis
  4. Version control your reviews
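
One lightweight way to follow these points is to keep exported reviews in a dated, git-tracked directory. A minimal sketch (the topic, paths, and commit message are illustrative):

# Export the review with a dated filename and commit it
mkdir -p reviews
scoutml review "diffusion models" \
  --output markdown \
  --export "reviews/diffusion_models_$(date +%Y-%m-%d).md"
git add reviews/
git commit -m "Add diffusion models literature review"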

Troubleshooting

Poor Quality Review

If the review seems shallow:

  1. Increase the paper limit (100+)
  2. Broaden the year range
  3. Lower the citation threshold
  4. Refine the topic description
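
For example, a widened re-run along these lines (topic and values are illustrative):

# More papers, a wider time window, and a lower citation bar
scoutml review "continual learning" \
  --year-min 2017 \
  --min-citations 5 \
  --limit 150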

Too Broad/Unfocused

If the review lacks coherence:

  1. Narrow the topic scope
  2. Add method/application constraints
  3. Focus on recent years
  4. Increase the citation threshold
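
For example, tightening both the topic and the time window (topic and values are illustrative):

# A more specific topic, recent years, and a stronger citation filter
scoutml review "parameter-efficient fine-tuning large language models" \
  --year-min 2022 \
  --min-citations 30 \
  --limit 50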