Jiefeng Chen
Jiefeng Chen is a Research Scientist at Google Cloud AI Research, working on building more efficient and reliable Large Language Model (LLM)-based agents for real-world applications. For a complete list of publications and the latest updates, please check out his primary homepage or visit his Google Scholar page.
Authored Publications
Abstract
Automating data visualization from natural language is crucial for data science, yet current systems struggle with complex, multi-file datasets and iterative refinement. Existing approaches, including simple single- or multi-agent systems, often oversimplify the task, focusing on initial query parsing while failing to robustly manage data complexity, code errors, or final visualization quality. In this paper, we reframe this challenge as a collaborative multi-agent problem. We introduce CoDA, a multi-agent system that employs specialized LLM agents for metadata analysis, task planning, code generation, and iterative reflection. We formalize this pipeline, demonstrating how metadata-focused analysis bypasses token limits and quality-driven refinement ensures robustness. Extensive evaluations show CoDA achieves substantial accuracy gains, outperforming competitive baselines by up to 49.0%. This work advocates that future visualization automation should evolve from isolated code generation to integrated, collaborative agentic workflows.
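The abstract above describes CoDA only at a high level, so the following Python sketch illustrates what such a metadata-first, multi-agent visualization loop could look like. The `call_llm` stub, the agent roles, and the prompts are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of a CoDA-style multi-agent visualization pipeline.
# `call_llm` is a placeholder for any chat-completion API.

def call_llm(role: str, prompt: str) -> str:
    """Placeholder LLM call; replace with a real chat-completion client."""
    return f"[{role} output for: {prompt[:40]}...]"

def summarize_metadata(file_paths):
    # Metadata agent: describe schemas/columns instead of passing raw files,
    # which keeps the prompt within the context window.
    return call_llm("metadata", f"Summarize schemas of {file_paths}")

def coda_pipeline(query, file_paths, max_rounds=3):
    metadata = summarize_metadata(file_paths)
    plan = call_llm("planner", f"Plan a visualization for '{query}' given {metadata}")
    code = call_llm("coder", f"Write plotting code for plan: {plan}")
    for _ in range(max_rounds):
        critique = call_llm("reflector", f"Check errors and chart quality of: {code}")
        if "OK" in critique:  # quality-driven stopping criterion
            break
        code = call_llm("coder", f"Revise code given critique: {critique}")
    return code

print(coda_pipeline("monthly sales by region", ["sales.csv", "regions.json"]))
```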
Abstract
While Large Language Models (LLMs) have shown remarkable advancements in reasoning and tool use, they often fail to generate optimal, grounded solutions under complex constraints. Real-world travel planning exemplifies these challenges, evaluating agents' abilities to handle constraints that are explicit, implicit, and even evolving based on interactions with dynamic environments and user needs. In this paper, we present ATLAS, a general multi-agent framework designed to effectively handle the complex, constraint-aware nature of real-world travel planning tasks. Our framework introduces a principled approach to the fundamental challenges of constraint-aware planning through dedicated mechanisms for dynamic constraint management, iterative plan critique, and adaptive interleaved search. ATLAS demonstrates state-of-the-art performance on the TravelPlanner benchmark, improving the final pass rate from 17.8% to 44.4% over its best alternative. More importantly, this is the first work to be evaluated on, and to demonstrate quantitative effectiveness in, real-world travel planning with live information search and multi-turn feedback. In this realistic setting, ATLAS demonstrates its ability to adapt to multi-turn user feedback, achieving an 84% final pass rate that significantly outperforms baselines including ReAct (59%) and a monolithic agent (27%).
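As a rough illustration of the constraint-management, plan-critique, and interleaved-search mechanisms described above, here is a hedged Python sketch; `call_llm`, `search_tool`, and the prompts are hypothetical stand-ins rather than the ATLAS codebase.

```python
# Illustrative sketch of ATLAS-style constraint-aware planning: a constraint
# ledger is updated from user turns, candidate plans are critiqued against it,
# and live search is interleaved with plan revision.

def call_llm(role, prompt):
    """Placeholder for a chat-completion call."""
    return f"[{role}: {prompt[:40]}...]"

def search_tool(query):
    """Placeholder for a live information search (flights, hotels, etc.)."""
    return f"[results for {query}]"

def plan_trip(request, user_turns, max_iters=4):
    constraints = [call_llm("extractor", f"List explicit constraints in: {request}")]
    plan = call_llm("planner", f"Draft plan for {request} using {search_tool(request)}")
    for turn in [None] + list(user_turns):
        if turn is not None:  # dynamic constraint management across turns
            constraints.append(call_llm("extractor", f"New constraints in: {turn}"))
        for _ in range(max_iters):
            critique = call_llm("critic", f"Does {plan} satisfy {constraints}?")
            if "PASS" in critique:
                break
            evidence = search_tool(f"fix issues: {critique}")  # interleaved search
            plan = call_llm("planner", f"Revise {plan} given {critique} and {evidence}")
    return plan

print(plan_trip("3-day Kyoto trip under $1500", ["Actually I need vegetarian food"]))
```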
SETS: Leveraging Self-Verification and Self-Correction for Improved Test-Time Scaling
Xinyun Chen
Transactions on Machine Learning Research (TMLR) (2025)
Abstract
Recent advancements in Large Language Models (LLMs) have created new opportunities to enhance performance on complex reasoning tasks by leveraging test-time computation. However, existing scaling methods have key limitations: parallel methods like repeated sampling are often inefficient and quickly saturate, while sequential methods like SELF-REFINE struggle to improve after a few rounds. Although combining these approaches shows promise, current methods require fine-tuned reward and revision models. This paper proposes Self-Enhanced Test-Time Scaling (SETS), a simple yet effective approach that overcomes these limitations by strategically combining parallel and sequential techniques and fully leveraging LLMs' self-improvement abilities. SETS exploits the inherent self-verification and self-correction capabilities of LLMs, unifying sampling, verification, and correction within a single framework. This facilitates efficient and scalable test-time computation for enhanced performance on complex tasks without any model training. Our comprehensive experimental results on challenging benchmarks spanning planning, reasoning, math, and coding demonstrate that SETS achieves significant performance improvements and more advantageous test-time scaling behavior than the alternatives.
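Since SETS combines parallel sampling with self-verification and self-correction without any model training, the loop can be sketched compactly. The snippet below is a minimal illustration assuming a generic `call_llm` stub and majority-vote aggregation; it is not the paper's exact prompting or aggregation scheme.

```python
# Minimal sketch of the SETS idea: sample several candidate solutions in
# parallel, let the model verify each one, self-correct the ones that fail,
# and aggregate by majority vote.

from collections import Counter

def call_llm(prompt):
    """Placeholder for an LLM call with temperature > 0."""
    return "42"

def sets(question, num_samples=4, max_corrections=2):
    finals = []
    for _ in range(num_samples):            # parallel sampling
        answer = call_llm(f"Solve: {question}")
        for _ in range(max_corrections):    # sequential self-correction
            verdict = call_llm(f"Verify this answer to '{question}': {answer}")
            if "correct" in verdict.lower():
                break
            answer = call_llm(f"The answer {answer} failed because {verdict}. Fix it.")
        finals.append(answer)
    return Counter(finals).most_common(1)[0][0]   # aggregate by majority vote

print(sets("What is 6 * 7?"))
```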
TUMIX: Augmenting LLM Reasoning with a Dynamic Tool-Use Mixture
Chuchu Fan
Na Li
Chi Wang
Ji Yin
Yongchao Chen
Rui Meng
2025
Abstract
Integrating tools like Code Interpreter and Search has significantly improved the reasoning of Large Language Models (LLMs), as shown by leading models such as OpenAI's ChatGPT Agent, Google's Gemini-Pro, and xAI's Grok 4. However, the research community still lacks practical guidance on fully leveraging these tools. The main challenge lies in finding an effective method to fully exploit the benefits of textual reasoning, coding, and searching when facing distinct kinds of questions. To address this, we propose an ensemble-based framework that runs multiple agents in parallel, each exploring different answer paths with distinct tool-use strategies. Agents iteratively share and refine their answers by considering the original question and previous responses. Our proposed method, Tool-Use Mixture (TUMIX), achieves significant gains over other representative tool-augmented test-time scaling methods such as Self-MoA, Symbolic-MoE, DEI, SciMaster, and GSA. With nearly equal inference cost, TUMIX delivers an average +3.55% accuracy improvement over the best baseline on Gemini-2.5-Pro and Gemini-2.5-Flash across key reasoning benchmarks (HLE, GPQA, AIME 24&25), where coding and search can effectively support reasoning when applied properly. We find that agent diversity and quality are crucial, and can be further improved by querying LLMs to automatically optimize agent designs. To reduce costs, TUMIX halts refinement once sufficient confidence is reached, preserving nearly the same performance at just 49% of the inference cost. With further scaling, TUMIX can achieve even higher performance, though at substantially greater cost.
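The following sketch illustrates the tool-use mixture idea in Python: heterogeneous agents answer in parallel, see each other's previous answers across refinement rounds, and stop early once agreement is high. The agent strategies, the `call_llm` stub, and the agreement threshold are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch of a TUMIX-style tool-use mixture with confidence-based
# early stopping of the refinement rounds.

from collections import Counter

def call_llm(prompt):
    return "answer"          # placeholder LLM call

def run_agent(strategy, question, peer_answers):
    context = f"Peers said: {peer_answers}" if peer_answers else ""
    if strategy == "code":
        context += " (you may write and run code)"
    elif strategy == "search":
        context += " (you may issue search queries)"
    return call_llm(f"{question}\n{context}\nStrategy: {strategy}")

def tumix(question, strategies=("text", "code", "search"), rounds=3, agree=0.8):
    answers = [run_agent(s, question, None) for s in strategies]
    for _ in range(rounds):
        top, count = Counter(answers).most_common(1)[0]
        if count / len(answers) >= agree:   # halt refinement once agreement is high
            return top
        answers = [run_agent(s, question, answers) for s in strategies]
    return Counter(answers).most_common(1)[0][0]

print(tumix("Which year did the HLE benchmark appear?"))
```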
Abstract
Agents based on large language models (LLMs) for machine learning engineering (MLE) can automatically implement ML models via code generation. However, existing approaches to building such agents often rely heavily on inherent LLM knowledge and employ coarse exploration strategies that modify the entire code structure at once. This limits their ability to select effective task-specific models and perform deep exploration within specific components, such as experimenting extensively with feature engineering options. To overcome these limitations, we propose MLE-STAR, a novel approach to building MLE agents. MLE-STAR first leverages external knowledge by using a search engine to retrieve effective models from the web, forming an initial solution, then iteratively refines it by exploring various strategies targeting specific ML components. This exploration is guided by ablation studies analyzing the impact of individual code blocks. Furthermore, we introduce a novel ensembling method using an effective strategy suggested by MLE-STAR. Our experimental results show that MLE-STAR achieves medals in 64% of the Kaggle competitions on MLE-bench Lite, significantly outperforming the best alternative.
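A hedged sketch of the retrieve-then-refine loop described above is given below; `web_search`, `call_llm`, `evaluate`, and the fixed block names are placeholders, and the ablation-guided targeting is simplified relative to the paper.

```python
# Illustrative sketch (not the released MLE-STAR code): retrieve candidate
# models from the web, build an initial solution, then repeatedly pick the
# code block whose ablation hurts the score most and refine that component.

def web_search(query):
    return ["gradient-boosting baseline", "tabular transformer"]  # placeholder

def call_llm(prompt):
    return "def pipeline(): ...\n"    # placeholder code generation

def evaluate(code):
    return 0.5                        # placeholder validation score

def ablation_impact(code, block):
    # Score drop when a component is stubbed out; larger drop = more important.
    return evaluate(code) - evaluate(code.replace(block, "pass"))

def mle_star(task, blocks=("features", "model", "ensemble"), iters=3):
    candidates = web_search(f"state of the art models for {task}")
    code = call_llm(f"Write an ML pipeline for {task} using {candidates}")
    for _ in range(iters):
        target = max(blocks, key=lambda b: ablation_impact(code, b))
        code = call_llm(f"Improve only the '{target}' component of:\n{code}")
    return code

print(mle_star("predict house prices"))
```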
Abstract
Inference-time scaling has been successful in enhancing large language model (LLM) performance by increasing computation at test time, but it often relies on external verifiers or is not optimized for manageable computational budgets. To address these limitations, we propose DynScaling, which introduces two primary innovations: an integrated parallel-sequential sampling strategy and a bandit-based dynamic budget allocation framework. The integrated sampling strategy unifies parallel and sequential sampling by constructing synthetic sequential reasoning chains from initially independent parallel responses, promoting diverse and coherent reasoning trajectories. The dynamic budget allocation framework formulates the allocation of computational resources as a multi-armed bandit problem, adaptively distributing the inference budget across queries based on the uncertainty of previously sampled responses, thereby maximizing computational efficiency. By synergizing these components, DynScaling effectively improves LLM performance under practical resource constraints without the need for external verifiers. Experimental results demonstrate that DynScaling consistently surpasses existing verifier-free inference scaling baselines in both task performance and computational cost.
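The bandit-based budget allocation can be made concrete with a small example: treat each query as an arm, score arms by response disagreement plus a UCB-style exploration bonus, and spend each unit of budget on the highest-scoring query. The snippet below is a simplified sketch with a placeholder sampler, not the paper's exact allocation rule.

```python
# Toy multi-armed-bandit budget allocation across queries: the budget goes to
# the query whose sampled responses are most uncertain (least agreement).

import math, random
from collections import Counter

def sample_response(query):
    return random.choice(["A", "A", "B"])          # placeholder LLM sample

def uncertainty(responses):
    if not responses:
        return 1.0
    top = Counter(responses).most_common(1)[0][1]
    return 1.0 - top / len(responses)              # disagreement rate

def dynscaling(queries, total_budget=30, c=0.5):
    pulls = {q: [] for q in queries}
    for t in range(1, total_budget + 1):
        def score(q):                               # uncertainty + UCB-style exploration bonus
            n = len(pulls[q])
            bonus = c * math.sqrt(math.log(t) / n) if n else float("inf")
            return uncertainty(pulls[q]) + bonus
        q = max(queries, key=score)
        pulls[q].append(sample_response(q))
    return {q: Counter(r).most_common(1)[0][0] for q, r in pulls.items()}

print(dynscaling(["q1", "q2", "q3"]))
```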
Astute RAG: Overcoming Imperfect Retrieval Augmentation and Knowledge Conflicts for Large Language Models
Fei Wang
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (ACL 2025), to appear
Abstract
Retrieval-Augmented Generation (RAG), while effective in integrating external knowledge to address the limitations of large language models (LLMs), can be undermined by imperfect retrieval, which may introduce irrelevant, misleading, or even malicious information. Despite its importance, previous studies have rarely explored the behavior of RAG through joint analysis of how errors from imperfect retrieval are attributed and propagated, and how potential conflicts arise between the LLMs' internal knowledge and external sources. Through controlled analysis under realistic conditions, we find that imperfect retrieval augmentation may be inevitable and quite harmful. We identify the knowledge conflicts between LLM-internal and external knowledge from retrieval as a bottleneck to overcome in the post-retrieval stage of RAG. To render LLMs resilient to imperfect retrieval, we propose Astute RAG, a novel RAG approach that adaptively elicits essential information from LLMs' internal knowledge, iteratively consolidates internal and external knowledge with source-awareness, and finalizes the answer according to information reliability. Our experiments using Gemini and Claude demonstrate that Astute RAG significantly outperforms previous robustness-enhanced RAG methods. Notably, Astute RAG is the only approach that matches or exceeds the performance of LLMs without RAG under worst-case scenarios. Further analysis reveals that Astute RAG effectively resolves knowledge conflicts, improving the reliability and trustworthiness of RAG systems.
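To make the three stages concrete, here is a minimal sketch of an elicit-consolidate-finalize loop; the `call_llm` stub and prompts are hypothetical, and the real method's source-aware consolidation is richer than this simplification.

```python
# Hedged sketch of the Astute RAG recipe as described: elicit the model's own
# internal knowledge, consolidate it with retrieved passages while tracking
# sources, and answer from the most reliable consolidated group.

def call_llm(prompt):
    return "consolidated answer"   # placeholder LLM call

def astute_rag(question, retrieved_passages, rounds=2):
    internal = call_llm(f"From your own knowledge only, answer: {question}")
    docs = [("internal", internal)] + [("retrieved", p) for p in retrieved_passages]
    for _ in range(rounds):        # iterative, source-aware consolidation
        docs_text = "\n".join(f"[{src}] {text}" for src, text in docs)
        merged = call_llm(
            "Group consistent information, flag conflicts, and note each source:\n"
            + docs_text + f"\nQuestion: {question}"
        )
        docs = [("consolidated", merged)]
    return call_llm(f"Pick the most reliable group and answer: {docs[0][1]}")

print(astute_rag("Who wrote 'The Selfish Gene'?", ["Wikipedia: Richard Dawkins, 1976."]))
```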
Abstract
Data science, which transforms raw data into actionable insights, is critical for data-driven decision-making. However, these tasks are often complex, involving steps like exploring multiple data sources and synthesizing findings to deliver clear answers. While large language model (LLM) agents show significant promise in automating this process, they often struggle with heterogeneous data formats and generate sub-optimal analysis plans, as verifying plan correctness is inherently difficult without ground-truth labels for such open-ended tasks. To overcome these limitations, we introduce DS-STAR, a novel data science agent. Specifically, DS-STAR makes three key contributions: (1) a data file analysis module that automatically reads and extracts context from diverse data formats, including unstructured types; (2) a verification step where an LLM-based judge evaluates the sufficiency of the analysis plan at each stage; and (3) a sequential planning mechanism that starts with a simple, executable plan and iteratively refines it based on DS-STAR's feedback until its sufficiency is confirmed. This iterative refinement allows DS-STAR to reliably navigate complex analyses involving varied data sources. Our experiments show that DS-STAR achieves state-of-the-art performance, improving accuracy on the challenging DABStep benchmark from 41.0% to 45.2% and on Kramabench from 31.3% to 44.7%. These results demonstrate the effectiveness of our approach for practical, multi-step data science tasks.
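A rough Python sketch of the plan-then-verify loop described above follows; the `call_llm` stub, role prompts, and file names are assumptions for illustration only.

```python
# Illustrative sketch of a DS-STAR-style loop: summarize each data file, start
# from a minimal executable plan, and extend it step by step until an LLM
# judge deems the plan sufficient for the question.

def call_llm(role, prompt):
    return "SUFFICIENT"            # placeholder LLM call

def describe_file(path):
    # Data file analysis: extract lightweight context (columns, sample rows,
    # or a text snippet for unstructured files) instead of the full content.
    return call_llm("analyzer", f"Describe the structure of {path}")

def ds_star(question, file_paths, max_steps=8):
    context = {p: describe_file(p) for p in file_paths}
    plan = [call_llm("planner", f"First simple step for '{question}' given {context}")]
    for _ in range(max_steps):
        verdict = call_llm("judge", f"Is plan {plan} sufficient to answer '{question}'?")
        if "SUFFICIENT" in verdict:
            break
        plan.append(call_llm("planner", f"Next step, given plan {plan} and gap: {verdict}"))
    return plan

print(ds_star("Which merchant had the highest fees in 2023?", ["payments.csv", "fees.md"]))
```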
ASPEST: Bridging the Gap Between Active Learning and Selective Prediction
Somesh Jha
Transactions on Machine Learning Research (TMLR) (2024)
Abstract
Selective prediction aims to learn a reliable model that abstains from making predictions when uncertain. These predictions can then be deferred to humans for further evaluation. An everlasting challenge for machine learning is that, in many real-world scenarios, the distribution of test data differs from that of the training data. This results in more inaccurate predictions and often increased dependence on humans, which can be difficult and expensive. Active learning aims to lower the overall labeling effort, and hence human dependence, by querying the most informative examples. Selective prediction and active learning have been approached from different angles, with the connection between them missing. In this work, we introduce a new learning paradigm, active selective prediction, which aims to query more informative samples from the shifted target domain while increasing accuracy and coverage. For this new paradigm, we propose a simple yet effective approach, ASPEST, that utilizes ensembles of model snapshots with self-training, using their aggregated outputs as pseudo-labels. Extensive experiments on numerous image, text, and structured datasets, which suffer from domain shifts, demonstrate that ASPEST can significantly outperform prior work on selective prediction and active learning (e.g., on the MNIST→SVHN benchmark with a labeling budget of 100, ASPEST improves the AUACC metric from 79.36% to 88.84%) and achieves better utilization of humans in the loop.
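One round of the active selective prediction loop can be sketched with numpy as below: aggregate snapshot predictions, send the least confident examples to human labelers, keep high-confidence pseudo-labels for self-training, and abstain on the rest. The snapshot models, thresholds, and selection rule are simplified stand-ins for ASPEST's actual procedure.

```python
# Hedged sketch of one round of active selective prediction with an ensemble
# of model snapshots; all models and data here are synthetic placeholders.

import numpy as np

rng = np.random.default_rng(0)

def snapshot_predict(snapshot_seed, x):
    """Placeholder ensemble member: returns class probabilities per example."""
    r = np.random.default_rng(snapshot_seed + int(x.sum() * 1e3) % 1000)
    p = r.random(2)
    return p / p.sum()

def aspest_round(unlabeled_x, snapshots, label_budget, threshold=0.8):
    probs = np.stack([[snapshot_predict(s, x) for x in unlabeled_x] for s in snapshots])
    mean_probs = probs.mean(axis=0)                       # ensemble aggregation
    confidence = mean_probs.max(axis=1)
    query_idx = np.argsort(confidence)[:label_budget]     # ask humans about the least confident
    pseudo_idx = np.where(confidence >= threshold)[0]     # self-train on confident pseudo-labels
    pseudo_labels = mean_probs[pseudo_idx].argmax(axis=1)
    accept = confidence >= threshold                      # selective prediction: abstain otherwise
    return query_idx, pseudo_idx, pseudo_labels, accept

x = rng.random((20, 5))
print(aspest_round(x, snapshots=[1, 2, 3], label_budget=3))
```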
Is Forgetting Less a Good Inductive Bias for Forward Transfer?
Timothy Nguyen
Dilan Gorur
Arslan Chaudhry
International Conference on Learning Representations (ICLR) (2023)
Abstract
One of the main motivations for studying continual learning is that the problem setting allows a model to accrue knowledge from past tasks in order to learn new tasks more efficiently. However, recent studies suggest that the key metric that continual learning algorithms optimize, reduction in catastrophic forgetting, does not correlate well with the forward transfer of knowledge. We believe that the conclusion reached by previous works is due to the way they measure forward transfer. We argue that the measure of forward transfer to a task should not be affected by the restrictions placed on the continual learner in order to preserve knowledge of previous tasks. Instead, forward transfer should be measured by how easy it is to learn a new task given a set of representations produced by continual learning on previous tasks. Under this notion of forward transfer, we evaluate different continual learning algorithms on a variety of image classification benchmarks. Our results indicate that less forgetful representations lead to better forward transfer, suggesting a strong correlation between retaining past information and learning efficiency on new tasks. Further, we found less forgetful representations to be more diverse and discriminative compared to their forgetful counterparts.
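The proposed measurement can be illustrated with a toy numpy example: freeze the representation learned on previous tasks and check how well a simple linear probe trained only on the new task performs on top of it. The synthetic data and the two stand-in representations below are purely illustrative assumptions, not the paper's protocol.

```python
# Toy sketch of measuring forward transfer via a linear probe on frozen
# representations; a representation that kept the task-relevant feature
# supports the new task better than one that dropped it.

import numpy as np

rng = np.random.default_rng(0)

def frozen_features(x, w_frozen):
    """Representation produced by continual learning on previous tasks (fixed)."""
    return np.tanh(x @ w_frozen)

def forward_transfer_score(x_new, y_new, w_frozen):
    feats = frozen_features(x_new, w_frozen)
    # Linear probe via least squares: only the new task's head is trained.
    w_probe, *_ = np.linalg.lstsq(feats, y_new, rcond=None)
    preds = (feats @ w_probe > 0.5).astype(int)
    return (preds == y_new).mean()      # higher accuracy -> better forward transfer

x = rng.normal(size=(200, 16))
y = (x[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)
w_less_forgetful = np.eye(16)[:, :8]    # stand-in: keeps the task-relevant feature
w_forgetful = np.eye(16)[:, 8:]         # stand-in: dropped the task-relevant feature
print(forward_transfer_score(x, y, w_less_forgetful),
      forward_transfer_score(x, y, w_forgetful))
```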