Compare Command

The compare command provides AI-powered side-by-side comparison of multiple research papers, highlighting key differences and similarities.

Basic Usage

scoutml compare PAPER_ID1 PAPER_ID2 [PAPER_ID3...] [OPTIONS]

Examples

Compare Two Papers

# Compare BERT and GPT-3
scoutml compare 1810.04805 2005.14165

Compare Multiple Papers

# Compare transformer variants
scoutml compare 1810.04805 2005.14165 1910.10683 2010.11929

Compare from File

# Create file with paper IDs
cat > papers.txt << EOF
1810.04805
2005.14165
1910.10683
EOF

# Compare all papers in file
scoutml compare --from-file papers.txt
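Before running a batch comparison, it can help to sanity-check the ID file. A minimal sketch — the `invalid_ids` name and the new-style NNNN.NNNNN pattern are illustrative assumptions:

```shell
# Print any line in a paper-ID file that does not look like a
# new-style arXiv ID (NNNN.NNNNN). An empty result means the file
# should be safe to pass to --from-file.
invalid_ids() {
    grep -Ev '^[0-9]{4}\.[0-9]{4,5}$' "$1"
}
```

Running `invalid_ids papers.txt` surfaces malformed lines before they reach scoutml; adjust the pattern if you also use old-style identifiers.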

Options

Option       Type    Default  Description
--from-file  PATH    None     Read paper IDs from file (one per line)
--output     CHOICE  rich     Output format: rich/json/markdown
--export     PATH    None     Export comparison to file

Comparison Aspects

The AI analyzes papers across multiple dimensions:

Key Contributions

  • Main innovations
  • Novel techniques introduced
  • Problems solved

Methodology

  • Approach differences
  • Architectural choices
  • Training strategies

Performance

  • Benchmark results
  • Computational requirements
  • Efficiency metrics

Strengths & Limitations

  • Advantages of each approach
  • Known limitations
  • Trade-offs

Output Formats

Rich Format (Default)

scoutml compare 1810.04805 2005.14165

Displays:

  • Formatted comparison tables
  • Colored sections
  • Clear visual hierarchy
  • Side-by-side analysis

Markdown Format

scoutml compare 1810.04805 2005.14165 --output markdown --export comparison.md

Perfect for:

  • Documentation
  • Reports
  • Sharing with teams
  • Version control

JSON Format

scoutml compare 1810.04805 2005.14165 --output json

Structured data for:

  • Programmatic analysis
  • Custom visualizations
  • Integration with other tools

Common Comparisons

Model Architecture Evolution

# Transformer evolution
scoutml compare 1706.03762 1810.04805 2005.14165

# CNN architectures
scoutml compare 1512.03385 1608.06993 1707.01083

Same Task, Different Approaches

# Object detection methods
scoutml compare 1506.01497 1612.03144 1804.02767

# Language modeling
scoutml compare 1810.04805 2005.14165 1910.10683

Competing Methods

# Self-supervised learning
scoutml compare 1911.05722 2002.05709 2006.07733

Advanced Usage

Systematic Comparisons

Compare papers systematically:

# Base paper vs variants
BASE="1810.04805"  # BERT
VARIANTS="1907.11692 1906.08237 2003.10555"

for variant in $VARIANTS; do
    echo "=== Comparing BERT vs $variant ==="
    scoutml compare $BASE $variant --output markdown \
        --export "bert_vs_${variant}.md"
done

Building Comparison Matrix

# Compare all pairs
papers=(1810.04805 2005.14165 1910.10683)

for i in "${!papers[@]}"; do
    for j in "${!papers[@]}"; do
        if [ $i -lt $j ]; then
            scoutml compare "${papers[$i]}" "${papers[$j]}" \
                --output json \
                --export "compare_${i}_${j}.json"
        fi
    done
done
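All-pairs batching grows quadratically: n papers yield n(n-1)/2 comparisons. A quick check before launching a long batch (the `count_pairs` helper is illustrative, not part of scoutml):

```shell
# Number of pairwise comparisons the nested loop above will run
# for n papers: n * (n - 1) / 2.
count_pairs() {
    echo $(( $1 * ($1 - 1) / 2 ))
}
```

`count_pairs 3` prints 3, but ten papers already mean 45 invocations, so consider trimming the list first.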

Literature Review Support

# Compare papers for literature review
scoutml search "federated learning" --limit 5 --output json | \
    jq -r '.[].arxiv_id' > fl_papers.txt

scoutml compare --from-file fl_papers.txt \
    --output markdown \
    --export fl_comparison.md

Interpretation Guide

Understanding Comparisons

The comparison highlights:

  1. Fundamental Differences: Core approach variations
  2. Incremental Improvements: Building on previous work
  3. Trade-offs: What each paper optimizes for
  4. Use Cases: When to use each approach

Key Metrics to Compare

  • Performance: Accuracy, F1, BLEU, etc.
  • Efficiency: Training time, inference speed
  • Resources: Memory, compute requirements
  • Scalability: How methods scale with data/model size

Best Practices

Choosing Papers to Compare

Good Comparisons:

  • Papers solving the same problem
  • Different approaches to similar tasks
  • Evolution of a method over time
  • Competing state-of-the-art methods

Less Useful Comparisons:

  • Completely unrelated domains
  • Papers from very different eras
  • Theoretical vs applied papers

Effective Comparison Workflows

  1. Start with search: Find related papers first
  2. Compare incrementally: Start with 2-3 papers
  3. Export results: Save comparisons for reference
  4. Iterate: Refine based on initial comparisons

Use Cases

Research Planning

# Compare existing approaches before starting project
scoutml compare 2103.00020 2111.06377 2201.12086 \
    --output markdown \
    --export vision_language_comparison.md

Method Selection

# Compare methods for your use case
scoutml compare 1810.04805 2005.14165 1910.01108 \
    --output rich

Literature Reviews

# Systematic comparison for survey papers
cat landmark_papers.txt | xargs scoutml compare \
    --output markdown \
    --export survey_comparison.md

Teaching & Presentations

# Create educational comparisons
scoutml compare 1706.03762 1810.04805 \
    --output markdown \
    --export transformer_vs_bert.md

Output Examples

Markdown Output

# Comparison: BERT vs GPT-3

## Key Contributions

### BERT (1810.04805)
- Bidirectional pre-training
- Masked language modeling
- Next sentence prediction

### GPT-3 (2005.14165)
- Scaled-up language modeling (175B parameters)
- Few-shot in-context learning
- Task transfer without fine-tuning

## Methodology Differences
...

JSON Output Structure

{
  "papers": [
    {"arxiv_id": "1810.04805", "title": "BERT: Pre-training..."},
    {"arxiv_id": "2005.14165", "title": "Language Models are..."}
  ],
  "comparison": {
    "key_contributions": {...},
    "methodology": {...},
    "performance": {...},
    "strengths_limitations": {...}
  }
}
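Exported JSON can be post-processed with jq, as in the search pipeline shown earlier. A sketch assuming the structure above — `sample.json` is a stand-in for a real `--export` file, and the field names come from the sample, not a guaranteed schema:

```shell
# Build a stand-in for an exported comparison file, then list the
# papers it covers, one arXiv ID per line.
cat > sample.json <<'EOF'
{
  "papers": [
    {"arxiv_id": "1810.04805", "title": "BERT: Pre-training..."},
    {"arxiv_id": "2005.14165", "title": "Language Models are..."}
  ],
  "comparison": {}
}
EOF

ids=$(jq -r '.papers[].arxiv_id' sample.json)
echo "$ids"
```

The same pattern extracts any section of the comparison, e.g. `jq '.comparison.performance'` for the benchmark portion.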

Tips for Better Comparisons

  1. Compare comparable papers: Same task, similar era
  2. Limit number: 2-4 papers for detailed comparison
  3. Use markdown export: For sharing and documentation
  4. Follow up with details: Use paper command for deep dives
  5. Consider chronology: Compare papers in temporal order

Related Commands

  • paper - Get detailed information about a paper
  • similar - Find papers similar to a given paper
  • search - Find papers to compare
  • review - Generate comprehensive literature review