[Title]
Ev-ReconNet: Visual Place Recognition Using Event Camera With Spiking Neural Networks
[Authors]
Hyeongi Lee, Hyoseok Hwang
– Department of Software Convergence, Kyung Hee University
[Abstract]
In this article, we exploit the advantages of an event camera to tackle the visual place recognition (VPR) problem. The event camera's high measurement rate, low latency, and high dynamic range make it well suited to overcoming the limitations of conventional vision sensors. However, applying existing convolutional neural network (CNN)-based algorithms such as NetVLAD requires converting the asynchronous event stream into synchronous image frames, which causes a loss of temporal information. To address this problem, we propose a method that employs the asynchronous characteristic of spiking neural networks (SNNs) to leverage the temporal nature of event streams. Our preprocessing module converts the event stream into event images and event tensors. SNN-based reconstruction networks, converted from CNNs, reconstruct edge images from the event tensors that are robust to changes in the external environment. VPR is then performed by matching features extracted from the reconstructed images by NetVLAD, which serves as the feature extraction network in this study, against those of the database. We evaluate VPR performance against previous methods on the DDD17 and Brisbane-Event-VPR datasets; experimental results demonstrate that the proposed method achieves higher matching accuracy than previous methods, especially under adverse weather conditions. We also verify that SNNs improve both performance and energy efficiency over CNNs. Our code is available at https://github.com/AIRLABkhu/EvReconNet.
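To make the preprocessing step described above concrete, the sketch below shows one common way to accumulate an asynchronous stream of (x, y, t, polarity) events into a 2-D event image and a time-binned event tensor. This is a minimal illustration under assumed conventions (fixed-bin accumulation, polarity in {0, 1}, hypothetical function names); the paper's actual preprocessing module may differ in its binning and normalization choices.

    # Minimal sketch of event-stream preprocessing (illustrative only).
    import numpy as np

    def events_to_tensor(xs, ys, ts, ps, height, width, num_bins=5):
        """Accumulate an event stream (x, y, t, polarity) into a
        (num_bins, H, W) tensor of signed event counts."""
        tensor = np.zeros((num_bins, height, width), dtype=np.float32)
        # Normalize timestamps to [0, num_bins) so each event falls in a bin.
        duration = max(ts.max() - ts.min(), 1e-9)
        bins = ((ts - ts.min()) / duration * (num_bins - 1e-6)).astype(np.int64)
        # Map polarity {0, 1} to {-1, +1} and add each event to its cell.
        np.add.at(tensor, (bins, ys, xs), 2.0 * ps - 1.0)
        return tensor

    def events_to_image(xs, ys, height, width):
        """Collapse the stream into a single 2-D event-count image."""
        img = np.zeros((height, width), dtype=np.float32)
        np.add.at(img, (ys, xs), 1.0)
        return img

The bins of such a tensor map naturally onto the discrete time steps consumed by an SNN, which is what allows the reconstruction network to retain temporal information that a single synchronous frame would discard.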