
Unified Autonomy Stack

Welcome to the documentation for the Unified Autonomy Stack. The stack presents an autonomy architecture integrating perception, planning, and navigation algorithms developed and field-tested at the Autonomous Robots Lab across multiple robot configurations. It comprises the software for the core algorithms along with drivers, utilities, and tools for simulation and testing. We currently support rotary-wing systems (e.g., multirotors) and certain ground systems (e.g., legged robots), with extension to other configurations, such as underwater robots, coming soon. The software distributed as part of this stack has been thoroughly tested in real-world scenarios, demonstrating robust autonomous operation in challenging GPS-denied environments.


Overview

The Unified Autonomy Stack is designed to provide a robust and flexible foundation for autonomous operations in various environments. It features:

  • Multi-modal Perception: Fusion of LiDAR, radar, vision, and IMU data for robust Simultaneous Localization and Mapping (SLAM), alongside integration of Vision-Language Models (VLMs) for high-level interaction.
  • Planning: Efficient graph-based path planning algorithms tailored for volumetric exploration, visual inspection, and waypoint navigation in complex environments. The planning framework extends to aerial, ground, and underwater robots.
  • Multi-layered Safe Navigation: Combination of map-based path planning with learning-based reactive navigation and safety layers.
    • SDF-NMPC and RL: Neural MPC- and Reinforcement Learning-based map-free approaches for safe navigation.
    • Last-resort Safety: Control Barrier Functions for filtering out unsafe commands.
  • Multi-platform Support: Designed for both aerial and ground robots, with planned extension to underwater systems.
  • Containerized Deployment: Docker-based deployment for easy setup and reproducibility across different platforms.
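To give a flavor of the "last-resort safety" layer above, the sketch below shows how a Control Barrier Function can filter an unsafe command with minimal modification. The 1-D single-integrator model, function name, and parameters are illustrative assumptions for exposition, not the stack's actual implementation.

```python
def cbf_filter(x: float, u_des: float, d_max: float, alpha: float = 1.0) -> float:
    """Filter a desired velocity command for a 1-D robot at position x
    so it never crosses the boundary at d_max (illustrative model).

    Safe set:      h(x) = d_max - x >= 0
    Dynamics:      x_dot = u, hence h_dot = -u
    CBF condition: h_dot + alpha * h >= 0  =>  u <= alpha * (d_max - x)

    With a single input and a single constraint, the usual CBF quadratic
    program reduces to the closed-form clipping below.
    """
    u_max = alpha * (d_max - x)   # largest forward velocity still satisfying the CBF
    return min(u_des, u_max)      # minimally modify the desired command

# Far from the boundary the command passes through unchanged;
# near the boundary it is clipped toward zero.
print(cbf_filter(x=0.0, u_des=1.0, d_max=5.0))  # 1.0 (unconstrained)
print(cbf_filter(x=4.5, u_des=1.0, d_max=5.0))  # 0.5 (clipped)
```

In the full stack this filter acts as the final layer: commands from the map-based planner or the learning-based reactive policies are passed through the safety filter before reaching the controller.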

Getting Started

Please navigate through the tabs to explore the setup:

  • Installation: Instructions for installation and setup.
  • Deployment: Docker-based deployment instructions.
  • Examples: Examples for testing the stack in simulation and on datasets.

the descriptions of the subsystems:

and indicative results and datasets:

  • Prior Results: Previous experiences which create the foundation for the Unified Autonomy Stack.
  • Indicative Results: Results of the Unified Autonomy Stack on real robots.
  • Datasets: Relevant datasets for offline evaluation.

Technical Report

For a comprehensive description of the Unified Autonomy Stack, please refer to the Technical Report.

Contact