Visualizing runners' trajectories from video data is not straightforward because the video data do not explicitly indicate which runners appear in each video. Only visual cues related to a runner, such as the runner's unique identifier (the bib number), are available. To this end, we propose two automatic runner detection methods: scene text recognition, which identifies runners by detecting and reading their bib numbers, and person re-identification, which detects runners based on their appearance. To evaluate the proposed methods, we create a ground-truth database from the video dataset, recording the videos and frame intervals in which each runner appears. The video dataset was recorded by nine cameras at different locations during the Campus Run 2018 event. The experimental results show that the scene text recognition method achieves an F1-score of up to 74.05, while the person re-identification method achieves an F1-score of up to 87.76. We therefore conclude that the person re-identification method outperforms the scene text recognition method.
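The abstract does not specify how the F1-score is computed against the ground-truth frame intervals; the sketch below shows one plausible frame-level formulation, where the interval representation, the runner-keyed dictionaries, and the `f1_score` helper are illustrative assumptions rather than the paper's actual evaluation code.

```python
from typing import Dict, List, Tuple

Intervals = List[Tuple[int, int]]  # list of (start_frame, end_frame) pairs, inclusive


def frame_set(intervals: Intervals) -> set:
    """Expand frame intervals into a set of individual frame indices."""
    frames = set()
    for start, end in intervals:
        frames.update(range(start, end + 1))
    return frames


def f1_score(predicted: Dict[str, Intervals], ground_truth: Dict[str, Intervals]) -> float:
    """Frame-level F1-score over all runners.

    Both arguments map a runner ID (e.g. a bib number) to the frame
    intervals in which that runner is detected / actually appears.
    """
    tp = fp = fn = 0
    for runner in set(predicted) | set(ground_truth):
        pred = frame_set(predicted.get(runner, []))
        true = frame_set(ground_truth.get(runner, []))
        tp += len(pred & true)  # frames correctly attributed to the runner
        fp += len(pred - true)  # frames wrongly attributed
        fn += len(true - pred)  # frames missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0


# Toy example: runner "1024" is predicted in frames 10-60, while the ground
# truth says 20-70, yielding both false positives and missed frames.
print(round(f1_score({"1024": [(10, 60)]}, {"1024": [(20, 70)]}), 4))
```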