On 2008-06-06 at 15:00:00 (Brussels time)
Abstract
During my PhD at IDSIA, we developed an algorithm selection method for solving decision problems. Our approach fits into the more general framework of algorithm portfolios: computational resources are allocated to the candidate solvers, which run in parallel. The allocation of CPU time is based on a model of algorithm performance (the runtime distribution) and is updated repeatedly during the solution of each instance (dynamic selection). Survival analysis techniques allow us to keep the cost of learning low: each problem instance is solved only once, and the time spent by unsuccessful algorithms is treated as censored (incomplete) information. The model is updated every time a problem instance is solved (online selection). The resulting exploration-exploitation trade-off is represented as a bandit problem, which makes it possible to add other selection methods as additional "arms" while keeping a bound on the regret. Over the next year at IRIDIA, we will extend our selection technique to optimisation problems, in cooperation with the Statistics Institute of UCL (Louvain-la-Neuve) for the statistical aspects.
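As a rough illustration of how such a dynamic portfolio can work, the sketch below interleaves two solvers and re-allocates each CPU-time slice using a Kaplan-Meier estimate of their runtime distributions, treating interrupted runs as censored observations and reserving a small uniform share for bandit-style exploration. It is a toy sketch under assumed names and interfaces (the solvers, the step(dt) callback, and all constants are hypothetical), not the method described in the reference below.

```python
# Minimal, self-contained sketch (NOT the authors' actual implementation):
# solvers share each CPU-time slice in proportion to a Kaplan-Meier estimate
# of their probability of finishing within it, learned online from earlier
# runs; stopped runs are kept as censored observations (survival analysis).

def km_survival(samples, t):
    """Kaplan-Meier estimate of P(runtime > t).
    samples: list of (time, finished); finished=False marks a censored run."""
    surv = 1.0
    for time, finished in sorted(samples):
        if time > t:
            break
        at_risk = sum(1 for u, _ in samples if u >= time)
        if finished and at_risk > 0:
            surv *= 1.0 - 1.0 / at_risk
    return surv

def slice_shares(histories, spent, slice_len, eps=0.1):
    """Split the next slice among solvers in proportion to their estimated
    probability of finishing within it, given the CPU time each has already
    received; eps keeps a uniform share for exploration (bandit-style)."""
    probs = []
    for hist, t in zip(histories, spent):
        s_now, s_later = km_survival(hist, t), km_survival(hist, t + slice_len)
        probs.append((s_now - s_later) / s_now if s_now > 0 else 0.0)
    n, total = len(histories), sum(probs)
    if total == 0.0:
        return [1.0 / n] * n                  # no evidence yet: uniform split
    return [eps / n + (1 - eps) * p / total for p in probs]

def solve_instance(solvers, histories, slice_len=1.0, max_time=100.0):
    """Interleave the solvers, re-allocating every slice (dynamic selection);
    record the winner's runtime and censored times for the others (online
    model update). solvers maps a name to a hypothetical step(dt) -> bool
    that advances that solver by dt seconds and reports whether it finished."""
    names = list(solvers)
    spent = {name: 0.0 for name in names}
    used = 0.0
    while used < max_time:
        shares = slice_shares([histories[n] for n in names],
                              [spent[n] for n in names], slice_len)
        for name, share in zip(names, shares):
            dt = share * slice_len
            spent[name] += dt
            used += dt
            if solvers[name](dt):             # this solver solved the instance
                histories[name].append((spent[name], True))
                for other in names:
                    if other != name:         # ran, did not finish: censored
                        histories[other].append((spent[other], False))
                return name, spent
    return None, spent

if __name__ == "__main__":
    def make_solver(hidden_runtime):          # toy solver with a fixed runtime
        state = {"t": 0.0}
        def step(dt):
            state["t"] += dt
            return state["t"] >= hidden_runtime
        return step

    histories = {"solver_a": [], "solver_b": []}
    for _ in range(5):                        # online: model reused across instances
        solvers = {"solver_a": make_solver(3.0), "solver_b": make_solver(8.0)}
        winner, spent = solve_instance(solvers, histories)
        print(winner, {k: round(v, 2) for k, v in spent.items()})
```

In this toy run the first instance is solved with a uniform split; from the second instance on, the learned runtime distribution shifts most of each slice towards the faster solver while the other still receives a small exploratory share.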
Keywords
meta-learning, algorithm selection, bandit problem, survival analysis, optimisation
References
- Matteo Gagliolo and Jürgen Schmidhuber (2006). Learning Dynamic Algorithm Portfolios. Annals of Mathematics and Artificial Intelligence, 47(3-4):295-328. See http://www.idsia.ch/idsiareport/IDSIA-02-07.pdf