What is it about?

Evolutionary algorithms (EAs) are powerful tools for solving complex optimization problems, but their performance depends heavily on how they choose and adjust their search strategies (called "operators"). Traditionally, this adjustment is done either by fixed rules (which lack adaptability) or by trained AI models (which require extensive computational resources). In this paper, we introduce LAOS, a novel framework that uses large language models (LLMs), like those behind advanced chatbots, to dynamically select the best search strategies for EAs. LAOS works by:

1. Providing context: it feeds the LLM real-time information about the optimization process (e.g., progress, population diversity).
2. Learning from experience: it stores past optimization attempts in a "memory bank" to help the LLM make informed decisions.
3. Balancing exploration and exploitation: the LLM intelligently switches between trying new strategies and refining known good ones.

We tested LAOS on numerical and combinatorial problems (such as scheduling and routing) and found that it outperforms traditional methods in both speed and solution quality. Importantly, LAOS requires no pre-training, making it efficient and practical.
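The three steps above can be sketched in miniature. This is a hypothetical illustration, not the LAOS implementation: the real framework prompts an LLM with a textual description of the search state, whereas here a simple scoring stub (`select_operator`) stands in for the LLM's decision, and the "memory bank" is just a record of each operator's past improvements.

```python
import random

# Two toy search operators (stand-ins for an EA's mutation strategies).
OPERATORS = {
    "gaussian": lambda x: [v + random.gauss(0, 0.1) for v in x],
    "uniform_reset": lambda x: [random.uniform(-5, 5) if random.random() < 0.2 else v
                                for v in x],
}

def sphere(x):
    """Minimisation target: f(x) = sum of squares."""
    return sum(v * v for v in x)

def select_operator(memory, explore_rate=0.3):
    """Stub for the LLM call: balance exploration (random pick) against
    exploitation (operator with the best average past improvement)."""
    if random.random() < explore_rate:
        return random.choice(list(OPERATORS))
    avg = {op: sum(d) / len(d) for op, d in memory.items() if d}
    return max(avg, key=avg.get) if avg else random.choice(list(OPERATORS))

def run(generations=200, seed=0):
    random.seed(seed)
    parent = [random.uniform(-5, 5) for _ in range(5)]
    best = sphere(parent)
    memory = {op: [] for op in OPERATORS}  # "memory bank" of past experience
    for _ in range(generations):
        op = select_operator(memory)           # step 3: explore vs. exploit
        child = OPERATORS[op](parent)
        improvement = best - sphere(child)     # step 1: observe search state
        memory[op].append(improvement)         # step 2: store the experience
        if improvement > 0:
            parent, best = child, sphere(child)
    return best

if __name__ == "__main__":
    print(run())
```

In LAOS itself, the choice in `select_operator` is made by an LLM that reads a prompt summarizing the optimization state and the stored history, so the selection policy adapts without any pre-training.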


Why is it important?

LAOS is the first framework to use large language models (LLMs) for adaptive operator selection in evolutionary algorithms, eliminating the costly training that traditional reinforcement-learning methods require. Unlike rule-based approaches, LAOS leverages an LLM's reasoning to dynamically adjust search strategies based on real-time optimization states and historical experience. Its novel dual-layer memory system guides decisions without extra training, achieving superior performance across numerical and combinatorial problems. This work bridges LLMs and optimization, offering a faster, more flexible way to automate complex problem-solving in fields like logistics and engineering.

Perspectives

As the lead author, I’m particularly excited about how LAOS demonstrates the untapped potential of LLMs beyond traditional language tasks. Seeing an LLM intelligently guide an optimization algorithm—almost like a human expert analyzing trends and making strategic adjustments—was a thrilling validation of this approach. There were challenges, of course, especially in designing prompts that reliably extract the LLM’s “reasoning” for operator selection. But the results suggest something bigger: that LLMs could become versatile co-pilots for algorithmic design, not just text generators. I’m eager to explore how this synergy between LLMs and evolutionary computation might reshape adaptive systems in the future.

Yisong Zhang
Harbin Institute of Technology

Read the Original

This page is a summary of: LAOS: Large Language Model-Driven Adaptive Operator Selection for Evolutionary Algorithms, July 2025, ACM (Association for Computing Machinery),
DOI: 10.1145/3712256.3726450.

