Unified Autonomy Stack
Welcome to the documentation for the Unified Autonomy Stack.
This stack presents an autonomy architecture integrating perception, planning, and navigation algorithms developed and field-tested at the Autonomous Robots Lab across diverse robot configurations. The stack consists of software for the core algorithms, along with drivers, utilities, and tools for simulation and testing. We currently support rotary-wing systems (e.g., multirotors) and certain ground systems (e.g., legged robots), with extension to other configurations, such as underwater robots, coming soon. The software distributed as part of this stack has been thoroughly tested in real-world scenarios, demonstrating robust autonomous operation in challenging GPS-denied environments.

Overview
The Unified Autonomy Stack is designed to provide a robust and flexible foundation for autonomous operations in various environments. It features:
- Multi-modal Perception: Fusing LiDAR, radar, vision, and IMU data for robust Simultaneous Localization and Mapping (SLAM), alongside integration of Vision-Language Models (VLMs) for high-level interaction (a minimal fusion sketch follows this list).
- Planning: Efficient graph-based path planning algorithms tailored for volumetric exploration, visual inspection, and waypoint navigation in complex environments; the planning framework extends to aerial, ground, and underwater robots (see the graph-search sketch after this list).
- Multi-layered Safe Navigation: Combining map-based path planning with learning-based reactive navigation and safety layers.
- SDF-NMPC and RL: Neural MPC- and Reinforcement Learning-based map-free approaches for safe navigation.
- Last-resort Safety: Control Barrier Functions (CBFs) for filtering out unsafe commands (see the safety-filter sketch after this list).
- Multi-platform Support: Designed for both aerial and ground robots, with a planned extension to underwater systems.
- Containerized Deployment: Docker-based deployment for easy setup and reproducibility across different platforms.
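
The Multi-modal SLAM tab details the estimation stack itself; as a minimal, self-contained illustration of the fusion principle behind multi-modal perception, the sketch below combines two independent Gaussian estimates of the same quantity. The 1-D setup, sensor names, and numbers are illustrative assumptions, not the stack's estimator.

```python
import numpy as np

def fuse(mean_a, var_a, mean_b, var_b):
    """Fuse two independent Gaussian estimates of the same quantity.

    This is the product-of-Gaussians update at the heart of
    filtering-based sensor fusion: the fused estimate is weighted
    toward the more certain sensor, and its variance shrinks.
    """
    k = var_a / (var_a + var_b)        # weight given to sensor b
    mean = mean_a + k * (mean_b - mean_a)
    var = (1.0 - k) * var_a
    return mean, var

# Illustrative numbers: a precise LiDAR range and a noisier radar range.
print(fuse(10.2, 0.01, 10.8, 0.25))   # -> (~10.22, ~0.0096)
```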
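Likewise, the Planning tab documents the actual planners; the sketch below only illustrates the graph-based paradigm with a textbook Dijkstra search over a hand-built waypoint graph. The node names, edge costs, and graph structure are hypothetical.

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm over a weighted adjacency dict.

    graph: {node: [(neighbor, edge_cost), ...]}
    Returns (total_cost, [start, ..., goal]), or (inf, []) if unreachable.
    """
    frontier = [(0.0, start, [start])]    # (cost so far, node, path)
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge_cost in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(frontier, (cost + edge_cost, neighbor, path + [neighbor]))
    return float("inf"), []

# Illustrative waypoint graph (edge costs in meters, names hypothetical).
waypoints = {
    "home":     [("corridor", 4.0), ("atrium", 7.5)],
    "corridor": [("atrium", 2.0), ("shaft", 6.0)],
    "atrium":   [("shaft", 3.0)],
    "shaft":    [],
}
print(shortest_path(waypoints, "home", "shaft"))
# -> (9.0, ['home', 'corridor', 'atrium', 'shaft'])
```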
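Finally, for the last-resort safety layer, the sketch below shows the core Control Barrier Function idea: minimally modifying a desired command so that a safety constraint keeps holding. It assumes single-integrator dynamics, one circular obstacle, and a hand-picked gain; the composite CBF filter actually deployed in the stack is described in the Navigation tab.

```python
import numpy as np

def cbf_filter(u_des, x, obstacle, radius, alpha=1.0):
    """Minimally modify u_des so that h_dot >= -alpha * h holds.

    Single-integrator dynamics x_dot = u with barrier
        h(x) = ||x - obstacle||^2 - radius^2   (h >= 0 means safe),
    so h_dot = 2 (x - obstacle)^T u. The QP
        min ||u - u_des||^2  s.t.  a^T u >= -alpha * h,  a = 2 (x - obstacle),
    has the closed-form projection used below.
    """
    a = 2.0 * (x - obstacle)
    h = float(np.dot(x - obstacle, x - obstacle)) - radius**2
    violation = np.dot(a, u_des) + alpha * h
    if violation >= 0.0:
        return u_des                  # desired command is already safe
    # Project u_des onto the constraint boundary a^T u = -alpha * h.
    return u_des - (violation / np.dot(a, a)) * a

# Illustrative numbers: robot commanded straight at a nearby obstacle.
x = np.array([0.0, 0.0])
obstacle = np.array([2.0, 0.0])
u_des = np.array([1.0, 0.0])
print(cbf_filter(u_des, x, obstacle, radius=1.0))
# -> [0.75 0.  ]  (forward speed capped by the barrier condition)
```

Because such a filter only intervenes when the desired command would violate the barrier condition, it composes naturally with any upstream planner or learned policy, which is what makes it suitable as a last-resort layer.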
Getting Started
Please navigate through the tabs to explore the setup guides:
- Installation: Instructions for installation and setup.
- Deployment: Docker-based deployment instructions.
- Examples: Examples for testing the stack in simulation and on datasets.
the subsystem descriptions:
- Architecture: High-level overview of the system components and data flow.
- Multi-modal SLAM: Details on the estimation stack.
- VLM: Details on the VLM stack.
- Planning: Explanation of the planning stack.
- Navigation: Information on Neural MPC, RL, and the Composite CBF Safety Filter for safe navigation.
- Simulation: Simulation environments and setup.
- Multi-platform Support: Verified on diverse aerial and ground robots, with the ambition to eventually cover most morphologies across air, land, and sea.
and the indicative results and datasets:
- Prior Results: Previous results and experience that form the foundation of the Unified Autonomy Stack.
- Indicative Results: Results of the Unified Autonomy Stack on real robots.
- Datasets: Relevant datasets for offline evaluation.
Technical Report
For a comprehensive description of the Unified Autonomy Stack, please refer to the Technical Report.
Contact
- Mihir Dharmadhikari: mihir.dharmadhikari@ntnu.no
- Nikhil Khedekar: nikhil.v.khedekar@ntnu.no
- Mihir Kulkarni: mihir.kulkarni@ntnu.no
- Morten Nissov: morten.nissov@ntnu.no
- Angelos Zacharia: angelos.zacharia@ntnu.no
- Martin Jacquet: martin.jacquet@ntnu.no
- Albert Gassol Puigjaner: albert.g.puigjaner@ntnu.no
- Philipp Weiss: philipp.weiss@ntnu.no
- Kostas Alexis: konstantinos.alexis@ntnu.no