Event cameras are promising sensors for on-line and real-time vision tasks due to their high temporal resolution, low latency, and elimination of redundant static data. Many vision algorithms use some form of spatial convolution (i.e., spatial pattern detection) as a fundamental component. However, event cameras require additional care, as the visual signal is asynchronous and sparse. While elegant methods have been proposed for event-based convolutions, they are unsuitable for real scenarios due to their inefficient processing pipelines and the resulting low event throughput. This paper presents an efficient implementation that decouples the event-based computations from the computationally heavy convolutions, increasing the maximum event processing rate by 15.92× to over 10 million events/second while maintaining the event-based paradigm of asynchronous input and output. Results on public datasets with modern 640 × 480 event-camera recordings show that the proposed implementation achieves real-time processing with minimal impact on the convolution result, whereas the prior state of the art incurs a latency of over 1 second.
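The decoupling idea can be pictured as a producer/consumer pipeline: a lightweight front end does O(1) work per incoming event, while a separate worker applies the heavy convolution to batched updates and emits asynchronous output events. The sketch below is illustrative only; the two-thread structure, the `on_event`/`conv_back_end` names, the Laplacian kernel, and the 1 ms batching period are assumptions for the example, not details taken from the paper.

```python
# Illustrative sketch of decoupled event-based convolution (assumed design:
# two threads sharing a delta buffer; kernel and batching period are made up).
import threading
import time
import numpy as np
from scipy.signal import convolve2d

H, W = 480, 640                               # modern event-camera resolution
KERNEL = np.array([[0.,  1., 0.],
                   [1., -4., 1.],             # example Laplacian kernel
                   [0.,  1., 0.]])

delta = np.zeros((H, W))                      # accumulated polarity per pixel
lock = threading.Lock()

def on_event(x, y, t, p):
    """Cheap asynchronous front end: O(1) work per event, no convolution."""
    with lock:
        delta[y, x] += 1.0 if p else -1.0

def conv_back_end(emit, period_s=1e-3):
    """Heavy stage, decoupled from event arrival: convolve accumulated
    updates in batches and emit output events where the response changed."""
    global delta
    while True:
        time.sleep(period_s)
        with lock:                            # swap buffers under the lock
            batch, delta = delta, np.zeros((H, W))
        if not batch.any():
            continue                          # no events since the last batch
        # By linearity, convolving the delta gives the *change* in response.
        response = convolve2d(batch, KERNEL, mode="same")
        for y, x in zip(*np.nonzero(response)):
            emit(x, y, time.time(), response[y, x])  # asynchronous output

threading.Thread(target=conv_back_end, args=(print,), daemon=True).start()
on_event(320, 240, time.time(), True)         # feed one example event
time.sleep(0.01)                              # let the back end drain it
```

In this arrangement the per-event cost is a single buffer update, so event throughput is bounded by memory writes rather than by the convolution; the convolution cost is amortized over each batch, which is the essence of the decoupling the abstract describes.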