Martyna Płomecka

Authored Publications
    CURIE: Evaluating LLMs on Multitask Long Context Scientific Understanding and Reasoning
    Jackson Cui, Zahra Shamsi, Gowoon Cheon, Xuejian Ma, Shutong Li, Maria Tikhanovskaya, Nayantara Mudur, Paul Raccuglia, Victor V. Albert, Haining Pan, Philippe Faist, Brian Rohr, Michael Statt, Drew Purves, Elise Kleeman, Ruth Alcantara, Matthew Abraham, Muqthar Mohammad, Ean Phing VanLee, Chenfei Jiang, Lizzie Dorfman, Eun-Ah Kim
    International Conference on Learning Representations (ICLR) (2025)
    Abstract: Scientific problem-solving involves synthesizing information while applying expert knowledge. We introduce CURIE, a scientific long-Context Understanding, Reasoning and Information Extraction benchmark to measure the potential of Large Language Models (LLMs) in scientific problem-solving and in assisting scientists in realistic workflows. The benchmark comprises ten challenging tasks with a total of 580 problem-and-solution pairs curated by experts in six disciplines - materials science, condensed matter physics, quantum computing, geospatial analysis, biodiversity, and proteins - covering both experimental and theoretical workflows in science. We evaluate a range of closed and open LLMs on CURIE tasks, which require domain expertise, comprehension of long in-context information, and multi-step reasoning. While Gemini Flash 2.0 and Claude-3 show consistently high comprehension across domains, the popular GPT-4o and Command R+ fail dramatically on protein sequencing tasks. With the best performance at 32%, there is much room for improvement for all models. We hope that the insights gained from CURIE can guide the future development of LLMs in the sciences. Evaluation code and data are available at https://github.com/google/curie