
Implicit Event-RGBD Neural SLAM

Venue: arXiv

Delin Qu1,2* Chi Yan2,5* Dong Wang2 Jie Yin3 Qizhi Chen2 Dan Xu5 Yiting Zhang2 Bin Zhao2,4† Xuelong Li2

1Fudan University 2Shanghai AI Laboratory 3Shanghai Jiao Tong University 4Northwestern Polytechnical University 5Hong Kong University of Science and Technology

Abstract

Implicit neural SLAM has achieved remarkable progress recently. Nevertheless, existing methods face significant challenges in non-ideal scenarios, such as motion blur or lighting variation, which often lead to convergence failures, localization drift, and distorted mapping. To address these challenges, we propose EN-SLAM, the first event-RGBD implicit neural SLAM framework, which effectively leverages the high rate and high dynamic range of event data for tracking and mapping. Specifically, EN-SLAM introduces a differentiable CRF (Camera Response Function) rendering technique that generates distinct RGB and event camera data from a shared radiance field, optimized by learning a unified implicit representation with the captured event and RGBD supervision. Moreover, based on the temporal difference property of events, we propose a temporal aggregating optimization strategy for event joint tracking and global bundle adjustment, capitalizing on the consecutive difference constraints of events and significantly enhancing tracking accuracy and robustness. Finally, we construct a simulated dataset, DEV-Indoors, and a real captured dataset, DEV-Reals, together containing 6 scenes and 17 sequences with practical motion blur and lighting changes for evaluation. Experimental results show that our method outperforms SOTA methods in both tracking ATE and mapping ACC while running in real time at 17 FPS in various challenging environments. Project page:

https://delinqu.github.io/EN-SLAM.
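The abstract's core idea, rendering both RGB and event observations from one shared radiance field, rests on the standard event-camera model: an event fires where the log-intensity change since the last event exceeds a contrast threshold, while RGB intensity comes from passing radiance through a camera response function. The sketch below illustrates those two decoding branches in NumPy; the gamma-curve CRF, the contrast threshold, and all names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def crf(radiance, gamma=2.2):
    """Illustrative camera response function: a simple gamma curve
    mapping shared radiance in [0, 1] to RGB intensity (assumption,
    not the paper's learned CRF)."""
    return np.clip(radiance, 0.0, 1.0) ** (1.0 / gamma)

def events_from_log_intensity(log_prev, log_curr, threshold=0.2):
    """Standard event-camera model: emit +1/-1 polarity where the
    log-intensity change exceeds the contrast threshold, else 0."""
    delta = log_curr - log_prev
    polarity = np.sign(delta) * (np.abs(delta) >= threshold)
    return polarity.astype(int)

# Shared radiance at two instants (e.g. rendered from the same field
# at consecutive timestamps for three pixels).
r0 = np.array([0.10, 0.50, 0.90])
r1 = np.array([0.30, 0.50, 0.40])

rgb = crf(r1)                                              # RGB branch
ev = events_from_log_intensity(np.log(r0 + 1e-6),          # event branch
                               np.log(r1 + 1e-6))
# ev → [1, 0, -1]: brightening, unchanged, and dimming pixels
```

Because both branches are differentiable almost everywhere in the radiance, supervision from either sensor can flow back into the same underlying representation, which is the property the shared-field design exploits.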

