Hailo offers a containerized development environment based on Docker, but it is also possible to virtualize and containerize the development environment using LXD, developed by Canonical.
- Differences between LXD and Docker
Here are the aspects in which LXD excels as a development environment compared to Docker:
- While Docker focuses on running single-application containers, LXD runs system containers that behave more like lightweight virtual machines (comparable to KVM guests). This provides greater flexibility and an environment closer to the host: you can use package-management tools such as apt inside the container, just as in your host development environment.
- LXD supports various Linux distributions as guest operating systems. Regardless of the host OS version, you can run different Ubuntu OS versions such as Ubuntu 18.04 (bionic) or Ubuntu 20.04 (focal), as well as other distributions like CentOS.
- Creating and managing snapshots to preserve the state of the container is straightforward in LXD. Rollbacks and migrations are also easy to perform, as shown in the example below.
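A typical snapshot workflow looks like this (the container and snapshot names are illustrative):
# Take a snapshot of the current container state
lxc snapshot ubuntu-container clean-install
# List existing snapshots along with other container details
lxc info ubuntu-container
# Roll back to the saved state later
lxc restore ubuntu-container clean-install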
Similar to Docker, LXD containers share the host kernel and drivers, allowing access from the containers to resources such as GPUs and Hailo devices.
- Installing LXD
For installing and initializing LXD, please refer to the LXD documentation.
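On Ubuntu, a typical installation uses the snap package; the following is a minimal sketch (see the official documentation for the full initialization options):
# Install LXD and run the interactive initialization
sudo snap install lxd
sudo lxd init
# Allow the current user to run lxc commands without sudo
sudo usermod -aG lxd "$USER"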
- OS Installation and Initial Configuration
To share the host GPU with the container, you need to install nvidia-container-toolkit on the host machine, just as with Docker.
# Install nvidia-container-toolkit
if [[ ! -e /etc/apt/sources.list.d/nvidia-container-toolkit.list ]]; then
    DIST=$(
        . /etc/os-release
        echo "$ID$VERSION_ID"
    ) \
    && curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
    && curl -s -L "https://nvidia.github.io/libnvidia-container/experimental/${DIST}/libnvidia-container.list" \
        | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' \
        | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
    sudo apt update && sudo apt install -y nvidia-container-toolkit
fi
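After installation, you can check that the toolkit can talk to the host driver (assuming the NVIDIA driver is already installed on the host):
# Print the container CLI version and query the host GPU state
nvidia-container-cli --version
nvidia-container-cli info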
The following sample script creates a container named ubuntu-container and renames the image's default ubuntu user to hailo.
- It transfers ~/.ssh/id_ed25519.pub as the public key for accessing the container via SSH.
- It enables running X applications from within the container (the DISPLAY environment variable must be set properly).
- It shares the /dev/hailo0 device for use within the container through a mount.
# Create container
lxc launch images:ubuntu/jammy/amd64 ubuntu-container
# Create non-privileged user
lxc exec ubuntu-container -- sed -i "s/ubuntu/hailo/g" /etc/passwd
lxc exec ubuntu-container -- sed -i "s/ubuntu/hailo/g" /etc/shadow
lxc exec ubuntu-container -- sed -i "s/ubuntu/hailo/g" /etc/group
lxc exec ubuntu-container -- sed -i "s/ubuntu/hailo/g" /etc/gshadow
lxc exec ubuntu-container -- mv /home/ubuntu "/home/hailo"
lxc exec ubuntu-container -- mkdir -p "/home/hailo/.ssh"
lxc exec ubuntu-container -- chmod 700 "/home/hailo/.ssh"
lxc file push ~/.ssh/id_ed25519.pub "ubuntu-container/home/hailo/.ssh/authorized_keys"
lxc exec ubuntu-container -- chmod 600 "/home/hailo/.ssh/authorized_keys"
lxc exec ubuntu-container -- chown -R "hailo:hailo" "/home/hailo/.ssh"
lxc exec ubuntu-container -- passwd "hailo"
# Update timezone to Japan
lxc exec ubuntu-container -- timedatectl set-timezone Asia/Tokyo
# Add user to video group to use GPU
lxc exec ubuntu-container -- usermod -aG video "hailo"
# Map the host UID/GID to the container user so folders can be shared
CONTAINER_UID=$(id -u)
CONTAINER_GID=$(id -g)
echo -e "uid ${CONTAINER_UID} 1000\ngid ${CONTAINER_GID} 1000" | lxc config set ubuntu-container raw.idmap -
# Disable autostart so the X11 socket can be mounted
lxc config set ubuntu-container boot.autostart false
# Restart the container to apply the shared-folder (idmap) settings
lxc restart ubuntu-container
# Enable GPU access from the container
if type nvidia-smi &>/dev/null; then
lxc config device add ubuntu-container nvgpu gpu gid="$(getent group video | cut -d: -f3)"
# Do not set this to true if the NVIDIA driver is installed inside the container.
lxc config set ubuntu-container nvidia.runtime true
fi
# Mount the Hailo character device
lxc config device add ubuntu-container hailo0 unix-char path="/dev/hailo0" mode=666 required=false
# Restart the Avahi daemon so the container can be reached via zeroconf
lxc exec ubuntu-container -- systemctl enable avahi-daemon
lxc exec ubuntu-container -- systemctl restart avahi-daemon.service
lxc exec ubuntu-container -- systemctl restart avahi-daemon.socket
# Restart container
lxc restart ubuntu-container
To access the created container, use SSH. (Because Avahi is installed, the guest can be reached by its hostname.local name.)
$ lxc list
+------------------------+---------+----------------------+-----------------------------------------------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------------------------+---------+----------------------+-----------------------------------------------+-----------+-----------+
| ubuntu-container | RUNNING | 10.218.82.239 (eth0) | fd42:76c8:e070:476e:216:3eff:fe8c:52be (eth0) | CONTAINER | 0 |
+------------------------+---------+----------------------+-----------------------------------------------+-----------+-----------+
$ ssh -i ~/.ssh/id_ed25519 hailo@ubuntu-container.local
Last login: Tue Jan 9 13:10:08 2024 from 10.218.82.1
Linux ubuntu-container 5.15.0-71-generic x86_64
16:39:49 up 3 days, 20:15, 3 users, load average: 0.03, 0.05, 0.06
You can also access the GPU and hailo device inside the container.
$ nvidia-smi
Tue Jan 9 17:24:35 2024
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.223.02 Driver Version: 470.223.02 CUDA Version: 11.4 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Quadro P1000 Off | 00000000:01:00.0 Off | N/A |
| 34% 27C P8 N/A / N/A | 209MiB / 4040MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
$ ls -al /dev/hailo*
Permissions Size User Group Date Modified Name
crw-rw-rw- 234,0 root root 5 Jan 20:24 /dev/hailo0
Once you have manually installed the CUDA toolkit, cuDNN, the DFC, and HailoRT on the guest OS, you can develop within the container just as you would on the host OS.
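As a rough sketch, the Hailo tools are typically installed from Python wheels downloaded from the Hailo Developer Zone (the wheel file names below are placeholders; substitute the versions you downloaded):
# Inside the container: prepare a Python virtual environment
sudo apt install -y python3-pip python3-venv
python3 -m venv ~/hailo_venv
source ~/hailo_venv/bin/activate
# Install the Dataflow Compiler (DFC) and HailoRT wheels (placeholder file names)
pip install hailo_dataflow_compiler-<version>-py3-none-linux_x86_64.whl
pip install hailort-<version>-cp310-cp310-linux_x86_64.whl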
It is also possible to achieve the same functionality as Docker's shared directories with LXD.
For example, to share ~/workspace between the host and the container, execute the following command.
$ lxc config device add ubuntu-container workspace disk source=${HOME}/workspace path=/home/hailo/workspace
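You can confirm the share from either side; for instance:
# Create a file on the host and check that it is visible inside the container
$ touch ~/workspace/hello.txt
$ lxc exec ubuntu-container -- ls -l /home/hailo/workspace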
It is also possible to access a Jupyter Notebook running inside the container from the host OS browser.
In this case, the LXD proxy feature is used.
In the following example, we forward port 18888 on the host to port 8888 in the guest OS.
$ lxc config device add ubuntu-container jupyternb proxy listen="tcp:0.0.0.0:18888" connect="tcp:127.0.0.1:8888" bind=host
With this configuration, you can access the Jupyter Notebook running inside the guest OS by accessing port 18888 of the host.
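For this to work, Jupyter Notebook must be listening on port 8888 inside the guest; a minimal invocation (run inside the container, using standard Jupyter options) looks like this:
# Listen on localhost:8888 inside the container; the LXD proxy forwards host port 18888 here
$ jupyter notebook --no-browser --ip=127.0.0.1 --port=8888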
Similarly, TensorBoard and other tools can be exposed in the same way, as in the example below.
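For instance, a TensorBoard instance listening on its default port 6006 in the guest could be exposed on host port 16006 (the port numbers are illustrative):
$ lxc config device add ubuntu-container tensorboard proxy listen="tcp:0.0.0.0:16006" connect="tcp:127.0.0.1:6006" bind=host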
LXD offers various other features as well.
While you may need to install some tools manually, it offers greater flexibility in how you build containers than Docker does. Consider exploring LXD to suit your requirements.