Autonomous motion planning requires the ability to safely reason about learned trajectory predictors, particularly in settings where an agent can influence other agents' behavior. These learned predictors are essential for anticipating the future states of uncontrollable agents, whose decision-making processes can be difficult to model analytically. Uncertainty quantification of these predictors is therefore crucial for safe planning and control. In this work, we introduce a framework for interactive motion planning in unknown dynamic environments with probabilistic safety assurances. We adapt a model predictive controller (MPC) to distribution shifts that arise in learned trajectory predictors when other agents react to the ego agent's plan. Our approach leverages tools from conformal prediction (CP) to detect when another agent's behavior deviates from the training distribution and employs robust CP to quantify the uncertainty in trajectory predictions during these interactions. We propose a method for estimating interaction-induced distribution shifts at runtime and use the Huber quantile for enhanced outlier detection. Using a KL divergence ambiguity set that upper bounds the distribution shift, our method constructs prediction regions that retain probabilistic assurances under the shifts induced by interactions with the ego agent. We evaluate our framework in interactive scenarios involving navigation around autonomous vehicles in the BITS simulator, demonstrating enhanced safety and reduced conservatism.
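
To make the robust CP step concrete, the following is a minimal sketch of how a KL-robust prediction radius could be computed from calibration nonconformity scores. It assumes the standard reduction of an f-divergence (here KL) ambiguity set to a Bernoulli problem via the data-processing inequality; the names `worst_case_coverage` and `robust_quantile`, the finite-sample correction, and the parameter `rho` (the assumed KL bound on the shift) are illustrative assumptions rather than the paper's exact algorithm, and the runtime shift estimation and the Huber quantile are not shown.

```python
import numpy as np
from scipy.optimize import brentq

def worst_case_coverage(beta, rho):
    # Worst-case probability, under any shifted distribution Q with
    # KL(Q || P) <= rho, of an event whose probability under P is beta;
    # by the data-processing inequality this reduces to a Bernoulli KL problem.
    eps = 1e-12
    def kl_gap(z):
        z = min(max(z, eps), 1 - eps)
        return z * np.log(z / beta) + (1 - z) * np.log((1 - z) / (1 - beta)) - rho
    if kl_gap(eps) <= 0:  # shift budget large enough to drive coverage to zero
        return 0.0
    # smallest z with KL(Bernoulli(z) || Bernoulli(beta)) <= rho
    return brentq(kl_gap, eps, beta)

def robust_quantile(scores, alpha, rho):
    # Prediction-region radius from calibration nonconformity scores that keeps
    # (1 - alpha) coverage for any test distribution within KL radius rho.
    n = len(scores)
    # Inflate the nominal level so its worst-case coverage still meets 1 - alpha.
    beta = brentq(lambda b: worst_case_coverage(b, rho) - (1 - alpha),
                  1 - alpha, 1 - 1e-9)
    level = min(np.ceil((n + 1) * beta) / n, 1.0)  # finite-sample conformal correction
    return np.quantile(scores, level, method="higher")

# Usage sketch: calibration scores could be prediction errors of the trajectory
# predictor on in-distribution interactions; rho is the assumed KL shift bound.
rng = np.random.default_rng(0)
scores = np.abs(rng.normal(size=500))
radius = robust_quantile(scores, alpha=0.1, rho=0.05)
```

The returned radius could then be used to inflate the predicted trajectories of other agents into prediction regions that the MPC treats as collision-avoidance constraints, with a larger assumed `rho` yielding larger, more conservative regions.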