The 18th Asia Pacific Symposium on Intelligent and Evolutionary Systems
10-12th November 2014, Singapore
List of IES 2014 Tutorials
T1 - Collaborative Learning and Optimization
Learning and optimization are two essential tasks that computational intelligence aims to address. Numerous techniques have been developed for these two purposes separately. Intrinsically, learning and optimization are interrelated. On the one hand, learning can be formulated as a model-centric or data-centric optimization problem. On the other hand, optimization can be regarded as an adaptive learning process. Recent years have seen tremendous efforts to hybridize learning and optimization techniques so that each benefits from the other. Modern problems, characterized by fast-growing scale, complexity, and uncertainty, can seldom be solved by learning alone, by optimization alone, or by simple hybridizations of the two, which calls for an in-depth investigation of the synergy between learning and optimization. In this context, the IEEE CIS task force on “collaborative learning and optimization” was established in 2011 to provide an international forum for academic and industrial researchers from both the learning and optimization communities to collaboratively explore promising directions for this synergy.
T2 - Evaluating the Evolutionary Algorithms for Numerical Optimization: Foundations, Recent Advances, and Future Challenges
Evolutionary Algorithms (EAs) are expected to be good black-box optimizers. Their performance should remain statistically appreciable on a wide range of, or at least on some well-defined classes of, optimization problems. Before an EA can be published in a reputed journal, it usually needs to go through a number of tests to detect its strengths and weaknesses. Such investigation also identifies the problem class to which the algorithm is most applicable and the problem characteristics that may mislead it and prevent an effective search. Since the early days of research on and with EAs for real-parameter optimization, a popular approach has been to investigate their performance on a number of mathematical functions, also called benchmark functions, which are expected to capture various aspects of the complexities of real-world problems.
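As a concrete illustration of such a benchmark function (the particular choice of the Rastrigin function here is ours, not taken from the tutorial), a minimal sketch in Python:

```python
import math

def rastrigin(x):
    """Rastrigin function: a classic highly multimodal benchmark for
    real-parameter EAs, with global minimum 0 at the origin and many
    regularly spaced local optima that can deceive a search algorithm."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi)
                             for xi in x)

print(rastrigin([0.0, 0.0]))  # global optimum: 0.0
```

Benchmark suites typically combine many such functions (unimodal, multimodal, separable, rotated, shifted) so that an EA's strengths and weaknesses can be profiled systematically.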
This tutorial begins with a comprehensive overview of benchmarking EAs using mathematical test functions. The talk then discusses the evolution of the benchmarking procedure itself, along with the complexities and downsides of modern-day test problems. It also elaborates on the performance measures used for comparing the search abilities of various EAs. The discussion then focuses on the statistical methods currently in use to judge the significance of the results returned by an EA. The talk concludes with a few open issues that need the attention of EC researchers. The tutorial is mainly centered on EAs for solving single-objective, box-constrained function optimization problems involving continuous variables.
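One common statistical method for comparing two EAs is a non-parametric rank-based test on their final objective values over repeated runs. A minimal sketch of the Wilcoxon rank-sum (Mann-Whitney U) statistic with a normal approximation follows; the function name and the no-ties assumption are ours for illustration, and the tutorial itself does not prescribe any single test:

```python
import math

def rank_sum_z(a, b):
    """Two-sided Wilcoxon rank-sum z-score comparing two samples of EA
    results (e.g. best fitness over 30 runs each). Assumes no tied
    values and uses the large-sample normal approximation."""
    # Tag each value with its sample of origin, then sort to obtain ranks.
    combined = sorted((v, 0 if i < len(a) else 1)
                      for i, v in enumerate(list(a) + list(b)))
    # Sum the 1-based ranks belonging to sample a.
    r_a = sum(rank for rank, (_, label) in enumerate(combined, start=1)
              if label == 0)
    n_a, n_b = len(a), len(b)
    u = r_a - n_a * (n_a + 1) / 2            # Mann-Whitney U for sample a
    mean_u = n_a * n_b / 2                   # mean of U under H0
    sd_u = math.sqrt(n_a * n_b * (n_a + n_b + 1) / 12)
    return (u - mean_u) / sd_u

# Sample a is uniformly better (lower fitness) than sample b,
# giving a strongly negative z-score.
print(rank_sum_z([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))
```

A z-score far from zero (e.g. |z| > 1.96 at the 5% level) suggests the two algorithms' result distributions genuinely differ, rather than the observed gap being run-to-run noise.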
Dr. Ke Tang
University of Science and Technology of China, China
Dr. Kai Qin
RMIT University, Australia
Dr. Swagatam Das
Indian Statistical Institute, Kolkata