Installation#
This page describes how to install nvblox_torch (Python) and nvblox (C++).
Supported Platforms#
The following platforms are supported:
| | x86 + dGPU | Jetson (ARM64) |
| --- | --- | --- |
| nvblox_torch | ✅ | ❌ |
| nvblox | ✅ | ✅ |
We support systems with the following configurations:
x86 + discrete GPU: Ubuntu 20.04, 22.04, or 24.04 with CUDA 11.4 - 12.8
Jetson (ARM64): Jetpack 5 or 6
A minimum NVIDIA driver version is imposed by the version of CUDA you have installed. See the support table here to find the minimum driver version for your platform.
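As a quick check, the following command reports the driver version currently installed (the first line of its output also shows the highest CUDA version that driver supports):
nvidia-smi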
nvblox_torch#
There are two ways to install nvblox_torch:
pip is the preferred way to install nvblox_torch on Supported Platforms.
Source installation is only recommended for developers who need to modify nvblox_torch or for platforms that are not supported via pip.
Install nvblox_torch via pip#
To install nvblox_torch via pip on a supported platform, run the commands matching your Ubuntu release and CUDA version:
# Ubuntu 24.04, CUDA 12
sudo apt-get install python3-pip libglib2.0-0 libgl1 # Open3D dependencies
pip3 install https://github.com/nvidia-isaac/nvblox/releases/download/v0.0.8/nvblox_torch-0.0.8rc5+cu12ubuntu24-863-py3-none-linux_x86_64.whl

# Ubuntu 22.04, CUDA 12
sudo apt-get install python3-pip libglib2.0-0 libgl1 # Open3D dependencies
pip3 install https://github.com/nvidia-isaac/nvblox/releases/download/v0.0.8/nvblox_torch-0.0.8rc5+cu12ubuntu22-863-py3-none-linux_x86_64.whl

# Ubuntu 22.04, CUDA 11
sudo apt-get install python3-pip libglib2.0-0 libgl1 # Open3D dependencies
pip3 install https://github.com/nvidia-isaac/nvblox/releases/download/v0.0.8/nvblox_torch-0.0.8rc5+cu11ubuntu22-863-py3-none-linux_x86_64.whl
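If you are unsure which Ubuntu release or CUDA version your machine has, the following is a quick check (nvcc is only available if the CUDA toolkit is installed):
lsb_release -rs   # Ubuntu release, e.g. 22.04
nvcc --version    # CUDA toolkit version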
(Optional) You can verify the installation by running our tests:
cd $(python3 -c "import site; print(site.getsitepackages()[0])")/nvblox_torch
pytest -s
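Alternatively, a minimal smoke test (assuming the wheel was installed into the Python environment you are invoking) is simply importing the package:
python3 -c "import nvblox_torch; print('nvblox_torch import OK')"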
You’re all set! You can now run the 3D Reconstruction example.
Install nvblox_torch from Source (in Docker)#
The source installation is recommended for developers who need to modify nvblox_torch or for platforms that are not supported via pip.
We provide a docker image for building and developing inside.
First clone the repository:
git clone git@github.com:nvidia-isaac/nvblox.git
Then build and run the docker container:
cd nvblox
./docker/run_docker.sh
To build the C++ library, run:
mkdir -p /workspaces/nvblox/build
cd /workspaces/nvblox/build
cmake ..
make -j$(nproc)
To install nvblox_torch in development/editable mode, run:
cd /workspaces/nvblox/nvblox_torch
pip3 install -e .
(Optional) You can verify the installation by running our tests:
cd /workspaces/nvblox/nvblox_torch
pytest -s
You’re all set! You can now run the 3D Reconstruction example.
nvblox#
We support two installation methods for building the nvblox C++ library: from source inside our development docker container, or from source outside docker (see the two sections below).
After installing either way, you’re ready to Run an Example.
Install nvblox from Source (in Docker)#
The steps to build nvblox in the development container are the same as the instructions in Install nvblox_torch from Source (in Docker): nvblox is built inside our development container as the first part of installing nvblox_torch.
One difference is that on Jetson platforms we need to disable building of the pytorch wrapper, which is (currently) only supported on x86 platforms. Also note that the Jetson docker build only supports Jetpack 6.2. The unmodified (x86) and modified (Jetson) build commands are:
# x86
mkdir -p /workspaces/nvblox/build
cd /workspaces/nvblox/build
cmake ..
make -j$(nproc)

# Jetson
mkdir -p /workspaces/nvblox/build
cd /workspaces/nvblox/build
cmake .. -DBUILD_PYTORCH_WRAPPER=0
make -j$(nproc)
(Optional) To confirm building was a success, run the tests:
cd nvblox
ctest
You’re now ready to Run an Example.
Install nvblox from Source (Outside Docker)#
These instructions describe how to install the nvblox core library from source, outside of our development container.
Note
We recommend using Install nvblox from Source (in Docker), as it will handle all the dependencies for you. The docker image sets up a controlled environment in which we know things work. While we’ve tested the following instructions on many systems (see Supported Platforms), results may vary.
To start, install our dependencies:
sudo apt-get update && sudo apt-get install cmake git jq gnupg apt-utils software-properties-common build-essential sudo python3-pip wget python3-dev git-lfs
Note that for Ubuntu 20.04, we need to install a more recent version of cmake than is available in the default repositories. We provide a script to add the relevant repositories and install a more recent version: docker/install_cmake.sh. Note that running this script will replace any previously installed version of cmake.
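After running the script, you can confirm which cmake version is now on your path with a quick check:
cmake --version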
Now follow the instructions in Install nvblox from Source (in Docker) to build the code and run the tests.
You’re now ready to Run an Example.
Advanced Build Options#
This section details build options for advanced nvblox users.
Modifying maximum feature size#
The library supports integrating generic image features into the reconstructed voxel map.
The maximum supported length of image feature vectors is a compile-time constant which defaults to 128.
To change the default, call cmake with the following flag:
cmake -DNVBLOX_FEATURE_ARRAY_NUM_ELEMENTS=XYZ ..
Note that increasing this number will approximately linearly increase memory usage for applications using deep feature mapping.
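For example, to raise the limit to 256-element feature vectors, which roughly doubles feature-related memory use compared to the default of 128, you could configure with:
cmake -DNVBLOX_FEATURE_ARRAY_NUM_ELEMENTS=256 ..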
Building for Post-CXX11 ABI#
The library is built with the pre-cxx11 ABI by default in order to maintain compatibility with manylinux201X wheels. To build with the post-cxx11 ABI, call cmake with the following flag:
cmake -DPRE_CXX11_ABI_LINKABLE=OFF ..
Disabling pytorch wrapper#
If you don’t need the pytorch wrapper, or you’re on a system without pytorch installed, you can disable it by calling cmake with the following flag:
cmake -DBUILD_PYTORCH_WRAPPER=0 ..
Other docker containers#
We build and test in the following docker images, so if you would like to install inside a docker container without using our development container, these images are guaranteed to work:
nvcr.io/nvidia/cuda:12.8.0-devel-ubuntu24.04
nvcr.io/nvidia/cuda:12.6.1-devel-ubuntu22.04
nvcr.io/nvidia/cuda:11.8.0-devel-ubuntu22.04
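As a rough sketch (assuming a machine with the NVIDIA Container Toolkit set up), building inside one of these images could look like the following; the package list and flags beyond those documented above are illustrative:
docker run --gpus all -it --rm nvcr.io/nvidia/cuda:12.6.1-devel-ubuntu22.04 bash
# inside the container:
apt-get update && apt-get install -y git git-lfs cmake build-essential python3-dev
git clone https://github.com/nvidia-isaac/nvblox.git
cd nvblox && mkdir build && cd build
cmake .. -DBUILD_PYTORCH_WRAPPER=0   # no pytorch in a bare CUDA image
make -j$(nproc)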
Build a Redistributable Library#
By default, the nvblox library only builds for the Compute Capability (CC) of the GPU in the machine it’s being built on. Sometimes it is desirable to build a library that can be used across multiple machines that contain GPUs with different architectures. We, for example, build nvblox for several architectures for packaging into our pip package nvblox_torch, such that it can be used on a variety of machines.
To build binaries that can be used across multiple machines like this, you can use the CMAKE_CUDA_ARCHITECTURES flag and set it to a semicolon-separated list of architectures to support.
For example, to build for Compute Capability (CC) 7.2 and 7.5, you would run:
cmake .. -DCMAKE_CUDA_ARCHITECTURES="75;72"
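To find the Compute Capability of the GPU in a given machine, recent drivers let you query it directly (older drivers may not support this query field):
nvidia-smi --query-gpu=compute_cap --format=csv,noheader   # e.g. prints 7.5; use 75 in CMAKE_CUDA_ARCHITECTURES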