Omni-Perception

Omnidirectional Collision Avoidance for Legged Locomotion in Dynamic Environments

No robots were harmed!

Zifan Wang1   Teli Ma1   Yufei Jia3   Xun Yang1   Jiaming Zhou1  
Wenlong Ouyang1   Qiang Zhang1,4   Junwei Liang1,2†  
1The Hong Kong University of Science and Technology (Guangzhou)
2The Hong Kong University of Science and Technology
3Department of Electronic Engineering, Tsinghua University
4Beijing Innovation Center of Humanoid Robotics Co., Ltd.

Omni-Perception enables legged robots to achieve omnidirectional collision avoidance in dynamic environments through direct processing of raw LiDAR point clouds.

Abstract

Agile locomotion in complex 3D environments requires robust spatial awareness to safely avoid diverse obstacles such as aerial clutter, uneven terrain, and dynamic agents. We propose Omni-Perception, an end-to-end locomotion policy that achieves 3D spatial awareness and omnidirectional collision avoidance by directly processing raw LiDAR point clouds.

At its core is PD-RiskNet (Proximal-Distal Risk-Aware Hierarchical Network), a novel perception module that interprets spatio-temporal LiDAR data for environmental risk assessment. We develop a high-fidelity LiDAR simulation toolkit with realistic noise modeling and fast raycasting, enabling scalable training and effective sim-to-real transfer.

System Framework

System Architecture: Our framework processes raw LiDAR point clouds through PD-RiskNet to generate risk-aware locomotion policies for omnidirectional collision avoidance.

Method

Validation Scenarios

Effective omnidirectional collision avoidance is demonstrated across diverse environmental challenges, including aerial, transparent, slender, and ground obstacles.

We introduce PD-RiskNet, a hierarchical network architecture designed to process the spatio-temporal point clouds acquired by a legged robot's LiDAR sensor, differentiating between proximal and distal regions to quantify environmental risk for locomotion. The first step partitions the raw point cloud P_raw into two subsets: the proximal point cloud and the distal point cloud. The partition is made at a vertical angle threshold θ, with near-field points (higher θ) separated from far-field points (lower θ), effectively isolating dense local geometry from sparse distant observations.
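As an illustration of this first step, here is a minimal sketch of the proximal/distal split, assuming the scan arrives as Cartesian points in the sensor frame and that θ is measured as the downward (depression) angle of each return; the threshold value and function name are illustrative, not taken from the paper.

import numpy as np

def split_proximal_distal(points, theta_thresh_deg=15.0):
    # Split a raw LiDAR scan into proximal and distal subsets by vertical angle.
    # points: (N, 3) XYZ returns in the sensor frame (z up).
    # theta_thresh_deg: illustrative threshold on the downward (depression) angle
    # of each return; steep rays (higher angle) hit near-field geometry, while
    # shallow rays (lower angle) reach far-field structure.
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    horizontal_range = np.sqrt(x ** 2 + y ** 2)
    depression_deg = np.degrees(np.arctan2(-z, horizontal_range))
    proximal_mask = depression_deg > theta_thresh_deg  # dense local geometry
    return points[proximal_mask], points[~proximal_mask]

# Toy scan: two close ground returns and two distant obstacle returns.
scan = np.array([
    [0.3,  0.0, -0.4],
    [0.5,  0.2, -0.6],
    [5.0,  1.0,  0.2],
    [8.0, -2.0,  1.0],
])
proximal, distal = split_proximal_distal(scan)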

High-Fidelity LiDAR Simulation

We developed a custom rendering framework supporting diverse LiDAR models with realistic scan patterns, self-occlusion effects, and optimized mesh management for massively parallel simulation.
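The paper's exact scan-pattern and noise models are not reproduced here; as a hedged sketch, the snippet below shows how per-ray directions for a generic spinning LiDAR could be generated and how ideal ray-cast ranges could be perturbed with Gaussian noise and random dropout before being fed to the policy. All parameter values and function names are assumptions for illustration.

import numpy as np

def spinning_lidar_directions(n_channels=16, horiz_res_deg=0.4,
                              vert_fov_deg=(-15.0, 15.0)):
    # Unit ray directions for a simple spinning-LiDAR scan pattern.
    # Channel count, resolution, and FOV are illustrative defaults, not
    # the sensor model used in the paper.
    azimuths = np.deg2rad(np.arange(0.0, 360.0, horiz_res_deg))
    elevations = np.deg2rad(np.linspace(vert_fov_deg[0], vert_fov_deg[1], n_channels))
    az, el = np.meshgrid(azimuths, elevations)
    dirs = np.stack([np.cos(el) * np.cos(az),
                     np.cos(el) * np.sin(az),
                     np.sin(el)], axis=-1)
    return dirs.reshape(-1, 3)

def add_range_noise(ranges, sigma=0.02, dropout_prob=0.01, rng=None):
    # Perturb ideal ray-cast ranges (float array, meters) with Gaussian
    # noise and random dropout, mimicking missing returns.
    rng = np.random.default_rng() if rng is None else rng
    noisy = ranges + rng.normal(0.0, sigma, size=ranges.shape)
    drop = rng.random(ranges.shape) < dropout_prob
    noisy[drop] = np.nan
    return noisy

# Usage: cast spinning_lidar_directions() against the simulated scene with the
# simulator's own raycasting API to obtain ideal ranges, then pass them through
# add_range_noise before constructing the point cloud observation.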

PD-RiskNet Architecture: Our perception module partitions raw LiDAR point clouds into proximal and distal regions, processing each with specialized sampling strategies and temporal networks for spatio-temporal feature extraction.
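For intuition only, here is a small PyTorch sketch of a two-branch encoder in the spirit of this caption: each region is subsampled to a fixed budget, encoded per point, pooled, and aggregated over time by a recurrent unit before fusion. Layer sizes, the random subsampling, and the GRU-based temporal aggregation are assumptions, not the authors' exact design.

import torch
import torch.nn as nn

class PDRiskNetSketch(nn.Module):
    # Illustrative two-branch (proximal/distal) encoder; hyperparameters are
    # placeholders rather than the published architecture.
    def __init__(self, n_points=256, feat_dim=64, hidden_dim=128):
        super().__init__()
        self.n_points = n_points
        self.mlp_prox = nn.Sequential(nn.Linear(3, feat_dim), nn.ReLU(),
                                      nn.Linear(feat_dim, feat_dim))
        self.mlp_dist = nn.Sequential(nn.Linear(3, feat_dim), nn.ReLU(),
                                      nn.Linear(feat_dim, feat_dim))
        # One recurrent unit per branch aggregates features over the scan history.
        self.gru_prox = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.gru_dist = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(2 * hidden_dim, hidden_dim)  # fused risk feature

    def _sample(self, pts):
        # Random subsampling to a fixed point budget; farthest-point or voxel
        # sampling could be substituted for either branch.
        idx = torch.randint(pts.shape[1], (self.n_points,), device=pts.device)
        return pts[:, idx, :]

    def forward(self, prox_seq, dist_seq):
        # prox_seq, dist_seq: (batch, time, n_raw_points, 3) point-cloud histories.
        feats = []
        for seq, mlp, gru in ((prox_seq, self.mlp_prox, self.gru_prox),
                              (dist_seq, self.mlp_dist, self.gru_dist)):
            b, t = seq.shape[:2]
            frames = seq.reshape(b * t, seq.shape[2], 3)
            per_point = mlp(self._sample(frames))       # (b*t, n_points, feat)
            per_frame = per_point.max(dim=1).values     # permutation-invariant pooling
            _, h = gru(per_frame.reshape(b, t, -1))     # temporal aggregation
            feats.append(h[-1])
        return self.head(torch.cat(feats, dim=-1))      # feeds the locomotion policy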

BibTeX

@misc{wang2025omniperception,
      title={Omni-Perception: Omnidirectional Collision Avoidance for Legged Locomotion in Dynamic Environments}, 
      author={Zifan Wang and Teli Ma and Yufei Jia and Xun Yang and Jiaming Zhou and Wenlong Ouyang and Qiang Zhang and Junwei Liang},
      year={2025},
      eprint={2505.19214},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2505.19214}, 
}