What algorithms do LLMs actually learn and use to solve problems? Studies addressing this question are sparse, as research priorities are focused on improving performance through scale, leaving a theoretical and empirical gap in understanding emergent algorithms. This position paper proposes AlgEval: a framework for systematic research into the algorithms that LLMs learn and use. AlgEval aims to uncover algorithmic primitives, reflected in latent representations, attention, and inference-time compute, and their algorithmic composition to solve task-specific problems. We highlight potential methodological paths and a case study toward this goal, focusing on emergent search algorithms. Our case study illustrates both the formation of top-down hypotheses about candidate algorithms, and bottom-up tests of these hypotheses via circuit-level analysis of attention patterns and hidden states. The rigorous, systematic evaluation of how LLMs actually solve tasks provides an alternative to resource-intensive scaling, reorienting the field toward a principled understanding of underlying computations. Such algorithmic explanations offer a pathway to human-understandable interpretability, enabling comprehension of the model's internal reasoning beyond performance measures. This can in turn lead to more sample-efficient methods for training and improving performance, as well as novel architectures for end-to-end and multi-agent systems.
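To make the bottom-up, circuit-level test mentioned above concrete, the following is a minimal sketch (not the paper's actual pipeline; the model, prompt, and head-selection heuristic are illustrative assumptions) of capturing per-layer attention patterns and hidden states, the raw material for testing hypotheses about candidate algorithms.

```python
# Minimal sketch: extract attention patterns and hidden states from a small LM
# for circuit-level analysis. Model name, prompt, and the entropy heuristic
# below are placeholder assumptions, not the paper's method.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # hypothetical stand-in for the model under study
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# A toy graph-search prompt, in the spirit of the emergent-search case study.
prompt = "Find a path from A to D given edges A-B, B-C, C-D."
inputs = tok(prompt, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, output_attentions=True, output_hidden_states=True)

# out.attentions: one [batch, heads, seq, seq] tensor per layer.
# out.hidden_states: one [batch, seq, d_model] tensor per layer (plus embeddings).
for layer, attn in enumerate(out.attentions):
    # Example diagnostic: per-head entropy of the final token's attention
    # distribution; a low-entropy head attends sharply to a few tokens,
    # flagging it as a candidate for closer circuit-level inspection.
    last = attn[0, :, -1, :]                      # [heads, seq]
    ent = -(last * (last + 1e-9).log()).sum(-1)   # entropy per head
    print(f"layer {layer}: most focused head {ent.argmin().item()}")
```

Such captured attention maps and hidden states can then be compared against the token-visitation or state-tracking patterns a hypothesized search algorithm would predict.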