What is it about?

In this work, we introduce transformers into scene flow estimation on point clouds, the task of predicting the displacement vector field between two consecutive point cloud frames of a scene. We propose a new point attention mechanism guided by the relative position between the query and target points, and build a neural network with a pyramidal structure on top of it. Experiments are conducted on the FlyingThings3D and KITTI Scene Flow 2015 datasets.
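As a toy illustration of the task setup (the tensor shapes and variable names below are assumptions for illustration, not taken from the paper), scene flow estimation takes two consecutive point cloud frames and predicts one 3-D displacement vector per point of the first frame:

```python
# Minimal sketch of the scene flow estimation task (assumed shapes, not the paper's code).
import torch

pc1 = torch.rand(8192, 3)      # frame t   : N points with xyz coordinates
pc2 = torch.rand(8192, 3)      # frame t+1 : M points (point correspondence is unknown)
flow = torch.zeros_like(pc1)   # predicted displacement vector field for frame t
warped = pc1 + flow            # warping frame t by the predicted flow should align it with frame t+1
```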


Why is it important?

Relative position between points is important information for the scene flow estimation task. In our RPP attention, the matrices for linear feature projection are learned as functions of the relative position between the query and target points rather than as constant parameters (fixed hyperplanes), which allows the correlation between points to be constructed more finely. Experimental results show that our method largely outperforms previous state-of-the-art scene flow estimation methods.
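The sketch below illustrates the general idea of a position-guided projection in point attention. It is a minimal, assumed implementation (the class name, the MLP that generates the projection matrices, and the scoring function are ours for illustration), not the RPPformer-Flow code:

```python
# Sketch: attention in which the projection applied to each target-point feature is
# generated from the query-target relative position, instead of being a fixed weight matrix.
import torch
import torch.nn as nn

class RelPosGuidedAttention(nn.Module):
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        # Assumed parameterization: an MLP maps each 3-D relative position to a
        # dim x dim projection matrix; the paper's exact design may differ.
        self.weight_gen = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, dim * dim),
        )
        self.score = nn.Linear(dim, 1)
        self.dim = dim

    def forward(self, q_xyz, q_feat, t_xyz, t_feat):
        # q_xyz: (N, 3) query positions,  q_feat: (N, C) query features
        # t_xyz: (M, 3) target positions, t_feat: (M, C) target features
        rel = q_xyz[:, None, :] - t_xyz[None, :, :]                        # (N, M, 3) relative positions
        W = self.weight_gen(rel).view(*rel.shape[:2], self.dim, self.dim)  # (N, M, C, C) per-pair projections
        proj = torch.einsum('nmcd,md->nmc', W, t_feat)                     # project each target feature
        scores = self.score(torch.tanh(proj + q_feat[:, None, :])).squeeze(-1)  # (N, M) attention logits
        attn = torch.softmax(scores, dim=-1)                               # attention weights per query
        return torch.einsum('nm,nmc->nc', attn, proj)                      # (N, C) aggregated features
```

Generating a full C x C matrix for every query-target pair is memory-heavy; the actual method presumably uses a more compact parameterization, but the sketch conveys the core idea of replacing fixed projection weights with position-dependent ones.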

Read the Original

This page is a summary of: RPPformer-Flow: Relative Position Guided Point Transformer for Scene Flow Estimation, October 2022, ACM (Association for Computing Machinery),
DOI: 10.1145/3503161.3547771.
You can read the full text via the DOI above.
