Abstract
Dynamic Optimization Problems (DOPs) have been widely studied using Evolutionary Algorithms (EAs). Yet, a clear and rigorous definition of DOPs is lacking in the Evolutionary Dynamic Optimization (EDO) community. In this paper, we propose a unified definition of DOPs based on the idea of multiple decision making discussed in the Reinforcement Learning (RL) community. We draw a connection between EDO and RL by arguing that both fields study DOPs according to our definition. We point out that existing EDO and RL research has mainly focused on certain types of DOPs. We then develop a conceptualized benchmark problem aimed at the systematic study of various DOPs. Experimental studies on the benchmark reveal that EDO and RL methods are specialized in certain types of DOPs and, more importantly, that new algorithms for DOPs can be developed by combining the strengths of both EDO and RL methods.
Original language | English |
---|---|
Title of host publication | Proceedings of the 2014 IEEE Congress on Evolutionary Computation, CEC 2014 |
Publisher | IEEE |
Pages | 1550-1557 |
Number of pages | 8 |
ISBN (Print) | 978-1-4799-6626-4 |
Publication status | Published - 2014 |
Event | 2014 IEEE Congress on Evolutionary Computation, Beijing, China. Duration: 6 Jul 2014 → 11 Jul 2014 |
Congress
Congress | 2014 IEEE Congress on Evolutionary Computation |
---|---|
Abbreviated title | CEC 2014 |
Country/Territory | China |
City | Beijing |
Period | 6/07/14 → 11/07/14 |