Main research interests: Game AI, Combinatorial optimization mixed with Machine Learning, Decision-making under uncertainty, and Constraint-based Meta-heuristics.
I apply my research on artificial intelligence to games, in particular real-time strategy games. Games are very convenient testbeds for developing and improving AI methods.
There are many reasons to choose games as an application domain. Among others: games are environments where one must make a series of decisions with a clear success metric (score or win/loss); large collections of game data (replays) are often freely available without confidentiality issues; and games enable cooperative or competitive human-machine and machine-machine interactions, potentially on a massive scale.
Most importantly, games can be seen as a simplification of the world: simpler rules, smaller spaces, more limited interactions, yet rich and complex enough to contain many challenging and relevant scientific problems.
Machine Learning and combinatorial optimization frameworks such as Constraint Programming are two AI approaches with different purposes, but they can nurture each other.
Currently, modeling such problems through Constraint Programming requires too much expertise. This is a real obstacle to its democratization, both in academia and in industry.
Thus, I investigate how Machine Learning can make the modeling of combinatorial optimization problems easier. My latest result on this topic is an interpretable method for learning error functions implementing constraints in Cost Function Networks.
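To fix ideas, here is a hand-written error function for the classic AllDifferent constraint, in the spirit of Cost Function Networks: it returns 0 when the constraint is satisfied and a positive cost growing with the degree of violation otherwise. This is only an illustrative toy written for this page, not one of the learned functions from my work.

```python
from itertools import combinations

def all_different_error(assignment):
    """Error function for AllDifferent: number of pairs of
    variables sharing the same value (0 iff satisfied)."""
    return sum(1 for a, b in combinations(assignment, 2) if a == b)

print(all_different_error([1, 2, 3, 4]))  # satisfied -> 0
print(all_different_error([1, 2, 2, 2]))  # three equal pairs -> 3
```

Such smooth violation measures are more informative to a solver than a hard satisfied/violated flag, which is what makes learning them worthwhile.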
Constraint Programming can also bring something to Machine Learning: I am also interested in expressing Meta-Interpretive Learning with Constraint Programming, allowing constraint solvers to be used to solve Machine Learning problems.
For this topic, I mainly study single-stage decision-making problems that an agent must solve under uncertainty, where the uncertainty lies in the values of some stochastic variables controlled by a third-party agent (such as the environment in which our agent evolves). These stochastic values only affect the objective function the agent tries to maximize or minimize, not the constraints it must satisfy.
Studying single-stage decision-making problems means that a decision must be made before the still-unknown stochastic values are revealed. Once these values are known, the agent can only observe the consequences of its decision, without the possibility of refining or correcting it as in multi-stage decision-making processes. Although multi-stage decision-making problems are interesting and deserve a proper study, I think single-stage problems remain relevant and capture all one-shot decisions that must be made recurrently.
My latest result on this topic mixes Constraint Programming and Decision Theory to model decision-making problems under uncertainty as regular combinatorial optimization problems, allowing us to use regular constraint solvers to solve them.
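The general idea can be sketched as follows: when stochastic variables only affect the objective, a candidate decision can be scored by its expected objective value over sampled scenarios, reducing the problem to an ordinary optimization over deterministic feasible decisions. The objective and names below are hypothetical toys, not the models from my work.

```python
import random

def objective(decision, scenario):
    """Toy objective: payoff depends on our decision and on a
    stochastic value controlled by the environment."""
    return decision * scenario - decision ** 2

def expected_objective(decision, scenarios):
    """Score a single-stage decision by its average objective
    over sampled scenarios (a Monte Carlo expectation)."""
    return sum(objective(decision, s) for s in scenarios) / len(scenarios)

random.seed(0)
# Sample the stochastic variable before any decision is made.
scenarios = [random.gauss(10, 2) for _ in range(1000)]

# Deterministic constraints define the feasible decisions (here: 0..10);
# the solver then simply maximizes the expected objective.
best = max(range(11), key=lambda d: expected_objective(d, scenarios))
print(best)
```

In my actual work the expectation is folded into the constraint model itself, so an off-the-shelf solver handles the search instead of the brute-force `max` above.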
There exist two types of algorithms to solve Constraint Programming problems: complete solvers, which explore the entire search tree while pruning subtrees that contain no solution, and incomplete solvers, also called meta-heuristics. Although unable to prove the optimality of a solution, meta-heuristics find good-quality solutions quickly in practice and scale to large, industrial-size problems.
Meta-heuristics comprise a large number of different algorithm families. In particular, I study Local Search algorithms, designing and implementing new ones. I am mostly interested in constraint-based local search: these algorithms exploit characteristics of the problem structure to improve both runtimes and solution quality.
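As a minimal illustration of constraint-based local search, here is the textbook min-conflicts scheme on n-queens, where an error function derived from the constraints (the number of attacking pairs) guides each move. This is a generic sketch of the family of methods, not one of my own algorithms.

```python
import random

def conflicts(queens, col, row):
    """Error of placing the queen of column `col` on `row`:
    number of other queens attacking that square."""
    return sum(1 for c, r in enumerate(queens)
               if c != col and (r == row or abs(r - row) == abs(c - col)))

def min_conflicts(n=8, max_steps=10000, seed=42):
    rng = random.Random(seed)
    queens = [rng.randrange(n) for _ in range(n)]  # one queen per column
    for _ in range(max_steps):
        violated = [c for c in range(n) if conflicts(queens, c, queens[c]) > 0]
        if not violated:
            return queens  # every constraint satisfied
        col = rng.choice(violated)
        # Repair move: put this queen on the row minimizing its error.
        queens[col] = min(range(n), key=lambda row: conflicts(queens, col, row))
    return None  # no solution found within the step budget

solution = min_conflicts()
print(solution)
```

The key constraint-based ingredient is that `conflicts` measures *how much* a constraint is violated, so the search can always move toward lower error rather than wander blindly.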