
Fast and Accurate 3D Object Detection for Lidar-Camera-Based Autonomous Vehicles Using One Shared Voxel-Based Backbone

Abstract
Currently, most LiDAR-camera-based 3D object detectors rely on two heavy neural networks to extract view-specific features, and a LiDAR-camera-based 3D detector with only one neural network has not yet been implemented. To tackle this issue, this paper presents an early-fusion method that exploits both LiDAR and camera data for fast 3D object detection with only one backbone, achieving a good balance between accuracy and efficiency. We propose a novel point feature fusion module that directly extracts point-wise features from raw RGB images and fuses them with the corresponding point cloud without an image backbone. In this paradigm, the backbone that would extract RGB image features is abandoned, which reduces the large computational cost. Our method first voxelizes the point cloud into a 3D voxel grid and employs two strategies to reduce information loss during voxelization. The first is to use a small voxel size of (0.05 m, 0.05 m, 0.1 m) along the X-, Y-, and Z-axes, respectively; the second is to project point-cloud features (e.g., intensity or height) onto the RGB images. Extensive experiments on the KITTI benchmark suite show that the proposed approach outperforms state-of-the-art LiDAR-camera-based methods on the three classes in 3D performance (Easy, Moderate, Hard): cars (88.04%, 77.60%, 76.23%), pedestrians (66.65%, 60.49%, 54.51%), and cyclists (75.87%, 60.07%, 54.51%). Additionally, the proposed model runs at 17.8 frames per second (FPS), which is almost 2× faster than state-of-the-art LiDAR-camera fusion methods.
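The point-wise fusion and voxelization described in the abstract can be illustrated with a short sketch. The snippet below is a minimal, hypothetical illustration rather than the authors' implementation: it projects LiDAR points into the image plane with a camera projection matrix, samples per-point RGB values, concatenates them with the raw point features, and assigns the fused points to a voxel grid using the (0.05 m, 0.05 m, 0.1 m) voxel size stated in the abstract. The function names, the assumption that points are already in the rectified camera frame, and the KITTI-style point-cloud range are all illustrative choices, not details taken from the paper.

```python
import numpy as np

def fuse_points_with_image(points, image, P):
    """Append per-point RGB features sampled from the camera image.

    points : (N, 4) array of [x, y, z, intensity], assumed to be in the
             rectified camera frame (hypothetical preprocessing step)
    image  : (H, W, 3) uint8 RGB image
    P      : (3, 4) camera projection matrix (e.g. KITTI's P2)
    """
    # Project 3D points onto the image plane.
    hom = np.hstack([points[:, :3], np.ones((points.shape[0], 1))])  # (N, 4)
    uvw = hom @ P.T                                                  # (N, 3)
    u = uvw[:, 0] / uvw[:, 2]
    v = uvw[:, 1] / uvw[:, 2]

    # Keep only points in front of the camera and inside the image bounds.
    h, w, _ = image.shape
    valid = (uvw[:, 2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)

    # Sample RGB values; points outside the image get zeros.
    rgb = np.zeros((points.shape[0], 3), dtype=np.float32)
    rgb[valid] = image[v[valid].astype(int), u[valid].astype(int)] / 255.0

    # Point-wise fusion: each point becomes [x, y, z, intensity, r, g, b].
    return np.hstack([points, rgb]).astype(np.float32)


def voxelize(fused, voxel_size=(0.05, 0.05, 0.1),
             pc_range=(0.0, -40.0, -3.0, 70.4, 40.0, 1.0)):
    """Assign each fused point a voxel index on the grid from the abstract.

    pc_range is a typical KITTI-style detection range, assumed here only
    for illustration.
    """
    mins = np.array(pc_range[:3])
    maxs = np.array(pc_range[3:])
    size = np.array(voxel_size)

    inside = np.all((fused[:, :3] >= mins) & (fused[:, :3] < maxs), axis=1)
    fused = fused[inside]
    coords = ((fused[:, :3] - mins) / size).astype(np.int32)  # (M, 3) indices
    return fused, coords
```

Because the RGB features are attached to each point before voxelization, the single voxel-based backbone sees the fused features directly, which is what lets the separate image backbone be dropped.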
Author(s)
Lihua Wen; Kang-Hyun Jo
Issued Date
2021
Type
Article
Keyword
Cameras; Detectors; Feature extraction; Fuses; KITTI benchmark; Laser radar; LiDAR-camera-based 3D detector; Object detection; one backbone; point-wise fusion; single stage; Three-dimensional displays
DOI
10.1109/ACCESS.2021.3055491
URI
https://oak.ulsan.ac.kr/handle/2021.oak/9130
https://ulsan-primo.hosted.exlibrisgroup.com/primo-explore/fulldisplay?docid=TN_cdi_doaj_primary_oai_doaj_org_article_a1f01e4216664520a98d671493b2c156&context=PC&vid=ULSAN&lang=ko_KR&search_scope=default_scope&adaptor=primo_central_multiple_fe&tab=default_tab&query=any,contains,Fast%20and%20Accurate%203D%20Object%20Detection%20for%20Lidar-Camera-Based%20Autonomous%20Vehicles%20Using%20One%20Shared%20Voxel-Based%20Backbone&offset=0&pcAvailability=true
Publisher
IEEE ACCESS
Location
United States
Language
English
ISSN
2169-3536
Citation Volume
9
Citation Number
1
Citation Start Page
22080
Citation End Page
22089
Appears in Collections:
Engineering > IT Convergence
Access and License
  • Access type: Open
File List
  • No related files exist.

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.