Date: May 13, 2024, 3 p.m. - Type: Thesis - Thi Quynh Trang VO - Amphi 3 - Pôle commun
Algorithms and Machine Learning for fair and classical combinatorial optimization
Combinatorial optimization is a field of mathematics concerned with finding an optimal solution within a finite set of objects. It has crucial applications in many fields, including applied mathematics, software engineering, theoretical computer science, and machine learning. Branch-and-cut is one of the most widely used algorithms for solving combinatorial optimization problems exactly. In this thesis, we focus on the computational aspects of branch-and-cut while studying two critical dimensions of combinatorial optimization: the fairness of solutions and the integration of machine learning.
In Part I, we study two common approaches to dealing with fairness in combinatorial optimization, an issue that has gained significant attention in the past decades. The first approach is balanced combinatorial optimization, which seeks a fair solution by minimizing the difference between the largest and smallest components used. Due to the difficulty of bounding these components, to the best of our knowledge, no general exact framework based on mixed-integer linear programming (MILP) has been proposed for balanced combinatorial optimization. To address this gap, we present a branch-and-cut algorithm and a novel class of local cutting planes tailored to balanced combinatorial optimization problems. We demonstrate the effectiveness of the proposed framework on the Balanced Traveling Salesman Problem. Additionally, we introduce bounding algorithms and variable-fixing mechanisms to further accelerate the method. The second approach to handling fairness is Ordered Weighted Average (OWA) combinatorial optimization, which integrates the OWA operator into the objective function. Because of the ordering operator, OWA combinatorial optimization is nonlinear even when its original constraints are linear. Two MILP formulations of different sizes have been introduced in the literature to linearize the OWA operator. We provide theoretical and empirical comparisons of these two formulations for OWA combinatorial optimization. In particular, we prove that the formulations are equivalent in terms of their linear programming relaxations, and we show empirically that, for OWA combinatorial optimization problems, the formulation with more variables can be solved faster with branch-and-cut.
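To make the two fairness measures above concrete, here is a minimal sketch in plain Python of our own choosing (the function names and example data are illustrative assumptions, not taken from the thesis): the balanced criterion measures the spread between the largest and smallest cost components of a solution, while the OWA criterion applies weights to the cost components after sorting them.

# Illustrative sketch only (not the thesis code): the two fairness
# objectives discussed above, evaluated on the cost components used
# by a candidate solution.

def balanced_objective(component_costs):
    # Balanced criterion: difference between the largest and smallest
    # component used (to be minimized for an even, fair solution).
    return max(component_costs) - min(component_costs)

def owa_objective(component_costs, owa_weights):
    # Ordered Weighted Average: weights are applied to the costs after
    # sorting them from largest to smallest; non-increasing weights put
    # more emphasis on the worst-off components, which models fairness.
    sorted_costs = sorted(component_costs, reverse=True)
    return sum(w * c for w, c in zip(owa_weights, sorted_costs))

# Example: cost components selected by some feasible solution.
costs = [4.0, 7.0, 5.0, 6.0]
print(balanced_objective(costs))                    # 3.0
print(owa_objective(costs, [0.4, 0.3, 0.2, 0.1]))   # 6.0

The nonlinearity mentioned above comes from the sorting step: the OWA value is a piecewise linear function of the cost components, which is why the MILP formulations in the literature introduce additional variables and constraints to linearize it.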
In Part II, we develop methods for applying machine learning to enhance fundamental decision problems in branch-and-cut, with a focus on cut generation. Cut generation refers to the decision of whether to generate cuts or to branch at each node of the search tree. We empirically demonstrate that this decision significantly impacts branch-and-cut performance, especially for combinatorial cuts that exploit the facial structure of the convex hull of feasible solutions. We then propose a general framework combining supervised and reinforcement learning to learn effective strategies for generating combinatorial cuts in branch-and-cut. Our framework has two components: a cut detector to predict cut existence and a cut evaluator to choose between generating cuts and branching. Finally, we provide experimental results showing that the proposed method outperforms commonly used strategies for cut generation, even on instances larger than those used for training.
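As a purely schematic illustration of the two learned components described above, the following sketch shows how a node-level choice between cutting and branching could be wired together; the names cut_detector, cut_evaluator, predict_proba, and estimate_values are hypothetical placeholders for the learned models, not an API from the thesis.

# Illustrative sketch only (hypothetical component names): combine a
# supervised cut detector with a reinforcement-learned cut evaluator
# to decide, at each branch-and-cut node, whether to cut or to branch.

def decide_at_node(node_features, cut_detector, cut_evaluator):
    # Supervised component: estimated probability that a violated
    # combinatorial cut exists at this node.
    p_cut_exists = cut_detector.predict_proba(node_features)
    if p_cut_exists < 0.5:
        return "branch"  # separation is unlikely to find a cut; branch instead
    # Reinforcement-learned component: estimated long-term value of
    # generating cuts versus branching from this node.
    value_cut, value_branch = cut_evaluator.estimate_values(node_features)
    return "cut" if value_cut >= value_branch else "branch"

In this reading, the detector filters out nodes where expensive separation routines would likely fail, and the evaluator arbitrates the remaining cases based on learned estimates of their downstream effect on the search tree.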
Jury:
- Dritan NACE, Professor at the Université de Technologie de Compiègne (Reviewer)
- Axel PARMENTIER, Researcher (HDR) at the École Nationale des Ponts et Chaussées (Reviewer)
- Sophie DEMASSEY, Associate Professor (HDR) at the École des Mines de Paris (Examiner)
- Kim Thang NGUYEN, Professor at the Université Grenoble Alpes (Examiner)
- Fatiha BENDALI, Professor at the Université Clermont Auvergne (Examiner)
- Mourad BAIOU, Senior Researcher (Directeur de recherche) at CNRS, LIMOS UMR 6158 (Co-supervisor)
- Viet Hung NGUYEN, Professor at the Université Clermont Auvergne (Co-supervisor, thesis director).