Understanding transformer reasoning capabilities via graph algorithms

Abstract

Large language models (LLMs) have emerged as a groundbreaking technology, demonstrating remarkable capabilities across a wide range of tasks. Motivated by recent efforts to integrate structured data into LLMs, this paper investigates how effectively transformer models, the backbone of LLMs, can solve diverse graph reasoning tasks. We introduce a novel representational hierarchy that categorizes graph reasoning problems into three classes: retrieval, parallelizable, and search tasks, each solvable by transformers under a different scaling regime. We prove that single-layer transformers with small embedding dimensions can solve retrieval tasks such as node and edge counting, whereas parallelizable tasks such as connectivity require logarithmic depth. For search tasks such as shortest path, logarithmic-depth transformers remain sufficient, but only with larger embedding dimensions. We empirically validate this hierarchy on the GraphQA benchmark, demonstrating that transformers excel at global graph reasoning tasks such as connectivity, triangle counting, and shortest path, outperforming specialized graph neural networks and even larger LLMs augmented with graph-based soft prompts. Our work provides theoretical insights and empirical evidence for the effectiveness of transformers in graph reasoning, culminating in actionable recommendations and a detailed comparison with existing baselines.