We address multi-robot safe mission planning in uncertain dynamic environments. This problem arises in several applications, including safety-critical exploration, surveillance, and emergency rescue missions. Computing a multi-robot optimal control policy is challenging not only because of the complexity of incorporating dynamic uncertainties while planning, but also because of the exponential growth in problem size as a function of the number of robots. Leveraging recent work on obtaining a tractable safety-maximizing plan for a single robot, we propose a scalable two-stage framework to solve the problem at hand. Specifically, the problem is split into a low-level single-agent control problem and a high-level task allocation problem. The low-level problem uses an efficient approximation of stochastic reachability for a Markov decision process to derive the optimal control policy under dynamic uncertainty. The task allocation is solved using polynomial-time forward and reverse greedy heuristics, as well as in a distributed auction-based manner. By leveraging the properties of our safety objective function, we provide provable performance bounds on the safety of the approximate solutions produced by these two heuristics. We evaluate the theory with extensive numerical case studies.

Index Terms—stochastic reachability, optimal control, task allocation, greedy algorithms, multi-robot systems
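
To make the high-level allocation step concrete, the following is a minimal sketch of a forward greedy assignment loop that maximizes a monotone team safety objective. The function names (forward_greedy_allocation, safety_value) and the sum-of-safety-values objective are illustrative assumptions, not the paper's actual formulation; in the framework described above, the per-robot scores would come from the approximate stochastic-reachability values computed at the low level.

```python
# Illustrative sketch of a forward greedy task-allocation loop.
# Assumptions (not from the paper): each robot receives exactly one task,
# tasks may be shared, and the team objective is the sum of per-robot
# safety values returned by a user-supplied safety_value(robot, task).
from itertools import product

def forward_greedy_allocation(robots, tasks, safety_value):
    """Greedily assign each robot the task with the largest marginal gain
    in the team safety objective."""
    assignment = {}            # robot -> task
    unassigned = set(robots)

    def team_objective(assign):
        # Assumed team objective: sum of individual safety values in [0, 1].
        return sum(safety_value(r, t) for r, t in assign.items())

    while unassigned:
        base = team_objective(assignment)
        best_gain, best_pair = float("-inf"), None
        for r, t in product(unassigned, tasks):
            trial = dict(assignment)
            trial[r] = t
            gain = team_objective(trial) - base
            if gain > best_gain:
                best_gain, best_pair = gain, (r, t)
        r, t = best_pair
        assignment[r] = t
        unassigned.remove(r)
    return assignment
```

A reverse greedy variant would instead start from a full (redundant) assignment and iteratively remove the pair whose deletion costs the least safety; both run in polynomial time in the number of robots and tasks.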