External Papers & Reproductions

This page tracks our review and reproduction of important research papers from the broader scientific community, reflecting our commitment to validating and building upon existing research.

📄 7 Total Papers · 🔍 5 Triaged · 🔬 2 In Progress · 0 Reproduced

Browse Papers by Research Area

2025
Reproduction: In Progress · Priority: P2 · Area: Interpretability

On the Biology of a Large Language Model

By Lindsey J et al.
Published in: Transformer Circuits

Citation: Lindsey J et al. (2025). On the Biology of a Large Language Model. Transformer Circuits

Notes: Mechanistic interpretability research

2025
Reproduction: Triaged · Priority: P2 · Area: Interpretability

Tracing the thoughts of a large language model

By Anthropic Team
Published in: Anthropic Research

Citation: Anthropic Team. (2025). Tracing the thoughts of a large language model. Anthropic Research

Notes: Monitor for opportunities

2025
Reproduction: Triaged · Priority: P2 · Area: RAG Systems

Agentic RAG with uncertainty routing

By Lee S, Patel R
Published in: arXiv

Citation: Lee S & Patel R. (2025). Agentic RAG with uncertainty routing. arXiv:2501.01234

Notes: Wait for code release

2025
Reproduction: In Progress · Priority: P0 · Areas: Attention Mechanisms, Efficient Training

XAttention: Block Sparse Attention with Antidiagonal Scoring

By Xu R, Xiao G, Huang H, Guo J, Han S
Published in: arXiv

Citation: Xu R et al. (2025). XAttention: Block Sparse Attention with Antidiagonal Scoring. arXiv:2503.16428

Notes: High priority reproduction target
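
As context for this reproduction target, here is a minimal NumPy sketch of the block-scoring idea the title names: tile the attention-score matrix, rate each tile by the sum of its antidiagonal entries, and attend only to the highest-rated tiles. The tile size, the top-fraction selection rule, and all names below are illustrative assumptions, not the authors' implementation; the paper's actual method (arXiv:2503.16428) also computes these scores efficiently, which this sketch does not attempt.

```python
import numpy as np

def antidiagonal_block_scores(scores: np.ndarray, block: int = 8) -> np.ndarray:
    """Score each (block x block) tile of an attention-score map by summing
    the tile's antidiagonal entries. Illustrative sketch only."""
    n = scores.shape[0]
    assert scores.shape == (n, n) and n % block == 0
    nb = n // block
    out = np.empty((nb, nb))
    for bi in range(nb):
        for bj in range(nb):
            tile = scores[bi * block:(bi + 1) * block, bj * block:(bj + 1) * block]
            # trace of the left-right-flipped tile = sum of its antidiagonal
            out[bi, bj] = np.trace(np.fliplr(tile))
    return out

def select_blocks(scores: np.ndarray, block: int = 8, keep: float = 0.25) -> np.ndarray:
    """Boolean mask of tiles to attend to: the top `keep` fraction of tiles
    by antidiagonal score (ties may keep slightly more)."""
    s = antidiagonal_block_scores(scores, block)
    k = max(1, int(keep * s.size))
    threshold = np.partition(s.ravel(), -k)[-k]  # k-th largest score
    return s >= threshold

# Example: keep a quarter of the 8x8 tiles of a 64x64 attention map.
rng = np.random.default_rng(0)
q, kmat = rng.standard_normal((64, 32)), rng.standard_normal((64, 32))
mask = select_blocks(q @ kmat.T / np.sqrt(32), block=8, keep=0.25)
```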

2024
Reproduction: Triaged · Priority: P0 · Area: Security

Effective Prompt Extraction from Language Models

By Yiming Zhang, Nicholas Carlini, Daphne Ippolito

2024
Reproduction: Triaged · Priority: P3 · Area: Multimodal

Multimodal pretraining for medical imaging

By Zhang L et al.
Published in: ICLR

Citation: Zhang L et al. (2024). Multimodal pretraining for medical imaging. ICLR 2024

Notes: Out of scope due to privacy concerns

2024
Reproduction: Triaged · Priority: P0 · Area: Efficient Training

Scaling-efficient finetuning with sparse adapters

By Doe J, Smith A
Published in: NeurIPS

Citation: Doe J & Smith A. (2024). Scaling-efficient finetuning with sparse adapters. NeurIPS 2024

Notes: Aligned with cost-reduction goals

Suggest Papers for Reproduction

Found an interesting paper that should be reproduced? Help us validate and build upon existing research by suggesting papers for our reproduction pipeline.