Expand LXD block volumes on ZFS
Unfortunately, this procedure is not documented in either the original project's documentation or the now-official Canonical LXD documentation.
Volumes attached to our VM
> $ lxc config device show umbrel
disk-umbrel-data:
  path: /media/host1-nvme1/encrypted/btcnode
  source: /media/host1-nvme1/encrypted/umbrel-data/
  type: disk
root:
  path: /
  pool: zfs-nvme1-lxd-pool1
  size: 20GiB
  type: disk
umbrel-docker_root:
  pool: zfs-nvme1-lxd-pool1
  source: umbrel-docker_root
  type: disk
Current ZFS block volume usage from the VM
ubuntu@umbrel:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           756M  796K  755M   1% /run
/dev/sda2        20G   19G  236M  99% /
tmpfs           3.7G     0  3.7G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            50M   14M   37M  27% /run/lxd_agent
/dev/sdb        9.8G  9.8G     0 100% /mnt/umbrel-docker_root
We can see the volume is full: 9.8G used out of 9.8G available.
Shut down the VM
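The VM must be powered off before its block volume is touched. Assuming the instance is named umbrel, as in the configuration above, a plain lxc stop is enough:

```shell
# Stop the VM cleanly so the block volume is no longer in use
lxc stop umbrel

# Optionally confirm it is stopped before proceeding
lxc list umbrel
```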
Find the corresponding dataset
LXD cannot directly resize volumes unless they are root volumes, so we need to use ZFS commands instead. The first step is finding the location of our ZFS block volume.
> $ sudo zfs list -r host1-nvme1/encrypted/lxd-pool1
NAME USED AVAIL REFER MOUNTPOINT
host1-nvme1/encrypted/lxd-pool1 58.6G 176G 192K legacy
host1-nvme1/encrypted/lxd-pool1/buckets 192K 176G 192K legacy
host1-nvme1/encrypted/lxd-pool1/containers 5.51G 176G 192K legacy
host1-nvme1/encrypted/lxd-pool1/containers/gitea-alpn 466M 176G 463M legacy
host1-nvme1/encrypted/lxd-pool1/containers/mirror-alpn 1.85G 176G 1.66G legacy
host1-nvme1/encrypted/lxd-pool1/containers/wallabag 3.21G 176G 1.21G legacy
host1-nvme1/encrypted/lxd-pool1/custom 15.3G 176G 192K legacy
host1-nvme1/encrypted/lxd-pool1/custom/default_docker-staging-ubt-docker_root 7.86M 176G 5.10M -
host1-nvme1/encrypted/lxd-pool1/custom/default_docker-staging-ubt-docker_root2 88K 176G 88K -
host1-nvme1/encrypted/lxd-pool1/custom/default_umbrel-docker_root 15.3G 176G 7.03G -
host1-nvme1/encrypted/lxd-pool1/custom/default_umbrel-docker_root2 2.48M 176G 2.48M -
host1-nvme1/encrypted/lxd-pool1/deleted 1.19G 176G 192K legacy
host1-nvme1/encrypted/lxd-pool1/deleted/buckets 192K 176G 192K legacy
host1-nvme1/encrypted/lxd-pool1/deleted/containers 192K 176G 192K legacy
host1-nvme1/encrypted/lxd-pool1/deleted/custom 192K 176G 192K legacy
host1-nvme1/encrypted/lxd-pool1/deleted/images 1.19G 176G 192K legacy
host1-nvme1/encrypted/lxd-pool1/deleted/images/e169139be5538f21387dd0fa12f01473a8eaf365800a88211a9011f54984619d 216K 99.8M 208K legacy
host1-nvme1/encrypted/lxd-pool1/deleted/images/e169139be5538f21387dd0fa12f01473a8eaf365800a88211a9011f54984619d.block 1.19G 176G 1.19G -
host1-nvme1/encrypted/lxd-pool1/deleted/virtual-machines 192K 176G 192K legacy
host1-nvme1/encrypted/lxd-pool1/images 192K 176G 192K legacy
host1-nvme1/encrypted/lxd-pool1/virtual-machines 36.6G 176G 192K legacy
host1-nvme1/encrypted/lxd-pool1/virtual-machines/docker-staging-ubt 17.2M 82.8M 8.13M legacy
host1-nvme1/encrypted/lxd-pool1/virtual-machines/docker-staging-ubt.block 4.94G 176G 2.24G -
host1-nvme1/encrypted/lxd-pool1/virtual-machines/umbrel 15.3M 84.7M 8.13M legacy
host1-nvme1/encrypted/lxd-pool1/virtual-machines/umbrel.block 31.6G 176G 13.5G -
Resize the block volume
zfs set volsize=20G host1-nvme1/encrypted/lxd-pool1/custom/default_umbrel-docker_root
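After setting the new size, it is worth confirming that the property took effect before starting the VM again; zfs get reads it back (same dataset path as above):

```shell
# Confirm the zvol was grown to the requested size
sudo zfs get volsize host1-nvme1/encrypted/lxd-pool1/custom/default_umbrel-docker_root
```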
Start the VM again and expand fs
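Boot the VM back up and get a shell inside it to run the filesystem commands that follow (sketch, assuming the instance name umbrel):

```shell
# Start the VM and open a shell inside it
lxc start umbrel
lxc exec umbrel -- bash
```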
We can see the VM accurately reports the new block device size.
ubuntu@umbrel ~> lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda      8:0    0   20G  0 disk
├─sda1   8:1    0  100M  0 part /boot/efi
└─sda2   8:2    0 19.9G  0 part /
sdb      8:16   0   20G  0 disk /mnt/umbrel-docker_root
sdc      8:32   0   20G  0 disk
The filesystem is still full, though: it must be grown to fill the whole block device.
ubuntu@umbrel:~$ df -h
Filesystem            Size  Used Avail Use% Mounted on
tmpfs                 756M  796K  755M   1% /run
/dev/sda2              20G   19G  236M  99% /
tmpfs                 3.7G     0  3.7G   0% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
tmpfs                  50M   14M   37M  27% /run/lxd_agent
lxd_disk-umbrel-data  827G  651G  177G  79% /media/host1-nvme1/encrypted/btcnode
/dev/sdb              9.8G  9.8G     0 100% /mnt/umbrel-docker_root
/dev/sda1              99M  8.3M   91M   9% /boot/efi
We will follow a standard procedure, well described in the RHEL documentation and strictly identical for Debian derivatives.
Start by unmounting the partition. This step is mandatory for the offline resize that follows.
ubuntu@umbrel ~> sudo umount /mnt/umbrel-docker_root
Then run a forced check on the volume; resize2fs will refuse to run unless the filesystem has been checked first.
ubuntu@umbrel ~ [1]> sudo e2fsck -f /dev/sdb
e2fsck 1.46.5 (30-Dec-2021)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/sdb: 467297/655360 files (0.1% non-contiguous), 2617344/2621440 blocks
And resize.
ubuntu@umbrel ~> sudo resize2fs /dev/sdb
resize2fs 1.46.5 (30-Dec-2021)
Resizing the filesystem on /dev/sdb to 5242880 (4k) blocks.
The filesystem on /dev/sdb is now 5242880 (4k) blocks long.
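As a sanity check, the figure reported by resize2fs matches the size we set with zfs set volsize: 5242880 blocks of 4 KiB is exactly 20 GiB.

```shell
# 5242880 blocks x 4096 bytes per block = 21474836480 bytes
echo $((5242880 * 4096))          # prints 21474836480
echo $((20 * 1024 * 1024 * 1024)) # 20 GiB in bytes: also 21474836480
```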
Mount the filesystem again.
ubuntu@umbrel ~> sudo mount /dev/sdb /mnt/umbrel-docker_root
Check the available space.
ubuntu@umbrel ~> df -h
Filesystem            Size  Used Avail Use% Mounted on
tmpfs                 756M  872K  755M   1% /run
/dev/sda2              20G   20G  121M 100% /
tmpfs                 3.7G     0  3.7G   0% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
tmpfs                  50M   14M   37M  27% /run/lxd_agent
lxd_disk-umbrel-data  827G  651G  177G  79% /media/host1-nvme1/encrypted/btcnode
/dev/sda1              99M  8.3M   91M   9% /boot/efi
/dev/sdb               20G  9.8G  8.9G  53% /mnt/umbrel-docker_root
We can now see that the filesystem uses all the available space on the block device!
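Assuming the same instance, pool, and dataset names used throughout, the whole procedure condenses to a short sequence. This is a sketch to adapt, not a script to run blindly:

```shell
# On the LXD host: stop the VM, grow the zvol, start the VM again
lxc stop umbrel
sudo zfs set volsize=20G host1-nvme1/encrypted/lxd-pool1/custom/default_umbrel-docker_root
lxc start umbrel

# Inside the VM: grow the ext4 filesystem to fill the device
sudo umount /mnt/umbrel-docker_root
sudo e2fsck -f /dev/sdb
sudo resize2fs /dev/sdb
sudo mount /dev/sdb /mnt/umbrel-docker_root
```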