Configuring irace using surrogate configuration benchmarks: Supplementary material

by Nguyen Dang, Leslie Pérez Cáceres, Thomas Stützle, Patrick De Causmaecker
2017

Table of Contents
  1. Abstract
  2. Supplementary data
  3. Surrogate Benchmarks
  4. Real Benchmarks
  5. The irace package
  6. References

1. Abstract:

Over recent years, several tools for the automated configuration of parameterized algorithms have been developed. These tools, also called configurators, have themselves parameters that influence their search behavior and make them malleable to different kinds of configuration tasks. The default values of these parameters are set manually based on the experience of the configurator's developers. Studying the impact of these parameters or configuring them is very expensive, as it would require many executions of these tools on configuration tasks, each often taking many hours or days of computation. In this work, we tackle this problem using a meta-tuning process based on surrogate benchmarks that are much faster to evaluate. This paper studies the feasibility of this process using the popular irace configurator as the method to be meta-configured. We first study the consistency between the real and surrogate benchmarks using three measures: the prediction accuracy of the surrogate models, the homogeneity of the benchmarks, and the list of important algorithm parameters. Afterwards, we use irace to configure irace on those surrogates. Experimental results indicate the feasibility of this process and a clear potential improvement of irace over its default configuration.

Keywords: surrogate benchmarks, automated parameter configuration, empirical performance models



2. Supplementary data:

Detailed results of the experiments described in the paper can be found in the appendix.

3. Surrogate Benchmarks:

Algorithm | Parameters                                              | Instance set                   | #Instances             | Cost metric      | Cut-off time
CPLEX     | 76 (int: 6, real: 4, categorical: 66; conditional: 4)   | BIGMIX, REG, CORLAT, RCW       | 1000, 1000, 1000, 1000 | Running time     | 300 seconds
SPEAR     | 26 (int: 4, real: 10, categorical: 12; conditional: 9)  | SWV, IBM                       | 604, 765               | Running time     | 300 seconds
ACOTSP    | 11 (int: 4, real: 4, categorical: 3; conditional: 5)    | TSP, sizes 1000-3000           | 50                     | Solution quality | 20 seconds
All CPLEX and SPEAR benchmarks are taken from the Empirical Performance Models (EPMs) project. Due to the large amount of memory required to store each surrogate model, the number of instances for the CPLEX benchmarks is limited to 1000. The (arbitrarily) chosen lists of instances can be downloaded here: BIGMIX, REG, CORLAT, RCW.

The instance set of ACOTSP is composed of 10 instances of each size (number of cities): 1000, 1500, 2000, 2500 and 3000. The performance dataset of the 1000 algorithm configurations used to train the ACOTSP surrogate model can be downloaded here.

All surrogate benchmarks are built using the random forest regression model provided at http://www.cs.ubc.ca/labs/beta/Projects/EPMs/. Note that for ACOTSP, the log-transformation on the input performance data is disabled.
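To make this construction concrete, below is a minimal sketch of training such a surrogate, using scikit-learn's RandomForestRegressor as a stand-in for the random forest implementation distributed at the EPMs page above; the function names, the numeric encoding of (configuration, instance) pairs and the hyperparameters are illustrative assumptions, not the exact setup used in the paper.

    # Minimal EPM-style surrogate sketch (assumptions noted above).
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def train_surrogate(X, y, log_transform=True):
        """X: one numerically encoded row per (configuration, instance)
        pair; y: observed cost (running time or solution quality)."""
        if log_transform:  # disabled for ACOTSP, as noted above
            y = np.log10(y)
        model = RandomForestRegressor(n_estimators=100, random_state=0)
        model.fit(X, y)
        return model

    def predict_cost(model, x, log_transform=True):
        # Query the surrogate for one encoded (configuration, instance) pair.
        pred = model.predict(np.asarray(x).reshape(1, -1))[0]
        return 10 ** pred if log_transform else pred

Once trained, the model can be saved (e.g., with pickle) and queried in place of real algorithm runs, which is what makes the surrogate benchmarks cheap to evaluate.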

4. Real Benchmarks:

Algorithm | Parameters                                              | Instance set                   | #Instances       | Cost metric      | Cut-off time
CPLEX     | 76 (int: 6, real: 4, categorical: 66; conditional: 4)   | CORLAT, Regions100, Regions200 | 1000, 1000, 1000 | Running time     | 300, 5, 300 seconds (respectively)
SPEAR     | 26 (int: 4, real: 10, categorical: 12; conditional: 9)  | SWV                            | 604              | Running time     | 300 seconds
ACOTSP    | 11 (int: 4, real: 4, categorical: 3; conditional: 5)    | TSP                            | 50               | Solution quality | 20 seconds
ILS*      | 43 (int: 14, real: 8, categorical: 21; conditional: 40) | Permutation Flowshop           | 30               | Solution quality | 3 seconds
Lingeling | 137 (int: 0, real: 0, categorical: 137; conditional: 0) | SAT                            | 299              | Running time     | 300 seconds
* The ILS algorithm is executed through the EMILI framework. The framework is currently under development; to obtain the code, please contact Federico Pagnozzi.
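For these real benchmarks, every evaluation executes the target algorithm on an instance with the cut-off time listed in the table. The sketch below shows such a wrapper in the style of an irace target runner; the ./solver binary, the argument layout and the policy of reporting the cut-off itself on a timeout are simplifying assumptions for illustration.

    #!/usr/bin/env python3
    # Hypothetical wrapper: run a solver with a cut-off and print the cost.
    import subprocess
    import sys
    import time

    def run_with_cutoff(instance, params, cutoff):
        start = time.time()
        try:
            subprocess.run(["./solver", instance] + params,
                           timeout=cutoff,
                           stdout=subprocess.DEVNULL,
                           stderr=subprocess.DEVNULL,
                           check=False)
            return time.time() - start  # running time as the cost
        except subprocess.TimeoutExpired:
            return cutoff               # timed out: report the cut-off

    if __name__ == "__main__":
        # Assumed argument layout: <instance> <cutoff> <param> ...
        instance = sys.argv[1]
        cutoff = float(sys.argv[2])
        params = sys.argv[3:]
        print(run_with_cutoff(instance, params, cutoff))

For the quality-based benchmarks (ACOTSP, ILS) the wrapper would instead parse the solution quality from the solver output and print that value.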

5. The irace package:

Version 2.0 of the irace package was used in this work. Information about the irace package is available at http://iridia.ulb.ac.be/irace/. For questions about the package, please write to our Google group: The irace package Google group.
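In the meta-tuning experiments, irace's evaluations on a surrogate benchmark can be served by a target runner that queries the trained model instead of executing the algorithm. Below is a minimal sketch; the argument layout follows irace's usual target-runner convention (configuration ID, instance ID, seed, instance, then the parameter values), while the pickle file name and the purely numeric encoding are assumptions carried over from the sketch in Section 3.

    #!/usr/bin/env python3
    # Hypothetical surrogate-backed target runner: prints the predicted cost.
    import pickle
    import sys

    import numpy as np

    def main():
        conf_id, inst_id, seed, instance = sys.argv[1:5]
        params = [float(v) for v in sys.argv[5:]]  # numeric values only (assumption)
        with open("surrogate_model.pkl", "rb") as f:
            model = pickle.load(f)
        # Encode exactly as at training time; here, parameter values plus
        # an instance index stand in for the real encoding.
        x = np.array(params + [float(inst_id)]).reshape(1, -1)
        print(model.predict(x)[0])  # irace reads this cost from stdout

    if __name__ == "__main__":
        main()

Because such an evaluation takes milliseconds rather than up to the 300-second cut-off of a real run, entire irace runs on the surrogate benchmarks become cheap enough to be used inside a meta-tuning loop.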

6. References:

[1] López-Ibáñez, M., Pérez Cáceres, L., Dubois-Lacoste, J., Birattari, M. and Stützle, T.: The irace Package: Iterated Racing for Automatic Algorithm Configuration. Operations Research Perspectives (2016).
[2] Pérez Cáceres, L., López-Ibáñez, M., Hoos, H. and Stützle, T.: An experimental study of adaptive capping in irace. Technical Report TR/IRIDIA/2016-007, IRIDIA, Université Libre de Bruxelles, Belgium (2017).
[3] López-Ibáñez, M., Pérez Cáceres, L., Dubois-Lacoste, J., Birattari, M. and Stützle, T.: The irace Package: User Guide. Technical Report TR/IRIDIA/2016-004, IRIDIA, Université Libre de Bruxelles, Belgium (2016).
[4] Birattari, M., Yuan, Z., Balaprakash, P. and Stützle, T.: F-race and iterated F-race: An overview. In: Experimental Methods for the Analysis of Optimization Algorithms, 311-336 (2010).
[5] Pérez Cáceres, L., López-Ibáñez, M. and Stützle, T.: An analysis of parameters of irace. In: Proceedings of EvoCOP 2014, 14th European Conference on Evolutionary Computation in Combinatorial Optimization, volume 8600 of LNCS, 37-48 (2014).