Andrii's Blog
Notes on how I created an LVM cache volume
The process of setting up the LVM cache didn't go smoothly this time, so it deserves documenting.
I had a freshly installed EndeavourOS on my 256GB SSD:
lsblk -f
NAME FSTYPE FSVER LABEL UUID FSAVAIL FSUSE% MOUNTPOINTS
nvme0n1
├─nvme0n1p1 vfat FAT32 FA59-CAAE 932,6M 9% /efi
└─nvme0n1p2 ext4 1.0 endeavouros f0ea116f-xxx 205,1G 3% /
I acquired a 3TB HDD and planned to use it as primary data storage. Additionally, I intended to allocate 100GB of my SSD to be the cache volume for my primary storage.
Partitioning
TL;DR: just use GParted.
Unfortunately, when I was installing EndeavourOS, I didn't think about partitioning and LVM volume groups, so I ended up creating only one root partition, nvme0n1p2. Before I could create the LVM cache volume, I had to split the root partition. After splitting, the nvme0n1p2 partition would end up around 155GB, and the new nvme0n1p3 partition would be about 100GB and serve as the cache for my newly purchased 3TB HDD.
Since I didn't have any data yet, I decided to go "Leroy Jenkins" mode and just follow what ChatGPT instructed me to do.
After booting with a live USB, I went to the terminal and started the
partitioning with the parted tool.
parted /dev/nvme0n1
(parted) print
Model: BC711 NVMe SK hynix 256GB (nvme)
Disk /dev/nvme0n1: 256GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 2097kB 1076MB 1074MB fat32 EFI boot, esp
2 1076MB 256GB 255GB ext4 endeavouros
(parted)
Make sure to shrink the filesystem before shrinking the partition, otherwise the resize will fail and you risk data loss:
resize2fs /dev/nvme0n1p2 80G
Unfortunately, ChatGPT didn't instruct me to shrink the filesystem first, which caused a lot of problems: I had to undo the resizing and start over. Luckily, it didn't bork the installed filesystem. I deliberately chose 80GB, much smaller than the ~155GB I planned for the nvme0n1p2 partition, because after resizing the partition we grow the filesystem again to fill all the available space.
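One detail worth knowing: resize2fs refuses to shrink a filesystem that hasn't been checked recently, so a forced check comes first. A minimal sketch of the shrink step, assuming the same device path as my setup (adjust for yours, and run it from a live USB since a mounted root filesystem cannot be shrunk):

```shell
# Force a filesystem check; resize2fs requires this before an offline shrink
e2fsck -f /dev/nvme0n1p2
# Shrink the filesystem well below the target partition size
resize2fs /dev/nvme0n1p2 80G
```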
The next step was to shrink the 2nd partition (nvme0n1p2) using parted's resizepart subcommand:
parted /dev/nvme0n1 resizepart 2 156GB
Then, we create a new partition with:
parted /dev/nvme0n1 mkpart primary ext4 156GB 100%
The last step was to grow the pre-shrunken filesystem to occupy the whole partition:
resize2fs /dev/nvme0n1p2
Failing to pre-shrink the filesystem on the first attempt caused some problems down the line. After recovering from this failure, I decided not to risk breaking the system beyond repair and just used GParted, which does all the calculations and filesystem resizing for you.
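Whichever tool you end up using, it's worth verifying the result before moving on. A quick sanity-check sketch (same device paths as above):

```shell
# Re-read the partition table and confirm the new layout
parted /dev/nvme0n1 print
# Verify the resized filesystem is consistent after all the shuffling
e2fsck -f /dev/nvme0n1p2
```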
Creating LVM cache
After partitioning, I had three partitions on my SSD:
nvme0n1
├─nvme0n1p1 vfat FAT32 FA59-CAAE 932,6M 9% /efi
├─nvme0n1p2 ext4 1.0 endeavouros f0ea11xxxx 120,1G 13% /
└─nvme0n1p3 ext4 1.0 cache 09aiXoxxxx 80G 0%
To create the LVM cache, I followed instructions from this YouTube video.
First, I attached the 3TB HDD, which turned out to be /dev/sda. Then I booted
into my OS. Here are the commands that I executed:
# Create physical volume for my HDD
pvcreate /dev/sda1
# Create physical volume for my SSD
pvcreate /dev/nvme0n1p3
# Create volume group
vgcreate hybrid /dev/nvme0n1p3 /dev/sda1
# Create a logical volume spanning the whole HDD
lvcreate --extents 100%PVS --name storage_primary hybrid /dev/sda1
# Create the cache volume in one command. Behind the scenes it also creates a
# logical volume for the cache metadata of an appropriate size; in my case
# this volume was 36MB.
# If you want finer control, you could issue more fine-grained commands, e.g.:
# lvcreate --size 80G --name storage_cache hybrid /dev/nvme0n1p3
# lvcreate --size 80M --name cache_meta hybrid /dev/nvme0n1p3
# lvconvert --type cache-pool --poolmetadata hybrid/cache_meta hybrid/storage_cache
# lvconvert --type cache --cachemode writeback --cachepool hybrid/storage_cache hybrid/storage_primary
lvcreate --type cache --cachemode writeback --extents 100%FREE --name storage_cache hybrid/storage_primary /dev/nvme0n1p3
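To check that the cache actually got wired up, lvs can report on the cache's internal volumes and behavior. A sketch, assuming standard LVM reporting fields (the exact column set available depends on your lvm2 version):

```shell
# -a shows hidden internal volumes (the _cdata/_cmeta pool components);
# the extra columns report the cache mode and read hit/miss statistics
lvs -a -o +cache_mode,cache_read_hits,cache_read_misses hybrid
```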
# Create filesystem
mkfs.ext4 /dev/mapper/hybrid-storage_primary
mkdir /data
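I didn't capture the mount command itself, but at this point the new volume gets mounted at /data (the mapper path follows from the volume group and LV names chosen above):

```shell
# Mount the cached logical volume at its new mount point
mount /dev/mapper/hybrid-storage_primary /data
```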
Check the UUID of the new volume:
lsblk -f
NAME FSTYPE FSVER LABEL UUID FSAVAIL FSUSE% MOUNTPOINTS
sda
└─sda1 LVM2_member LVM2 001 faK7Vv-xxx
└─hybrid-storage_primary_corig
└─hybrid-storage_primary ext4 1.0 440aca3xxx 2,5T 0% /data
nvme0n1
├─nvme0n1p1 vfat FAT32 FA59-CAxxx 930,5M 9% /efi
├─nvme0n1p2 ext4 1.0 endeavouros f0ea116xxx 119,7G 13% /
└─nvme0n1p3 LVM2_member LVM2 001 09aiXo-xxx
├─hybrid-storage_cache_cpool_cdata
│ └─hybrid-storage_primary ext4 1.0 440aca3xxx 2,5T 0% /data
└─hybrid-storage_cache_cpool_cmeta
└─hybrid-storage_primary ext4 1.0 440aca3xxx 2,5T 0% /data
Update /etc/fstab so the volume is mounted at boot:
UUID=440aca3xxx /data ext4 defaults 0 2
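Before rebooting, it's worth validating the new fstab entry so a typo doesn't drop you into an emergency shell. A quick check (findmnt output shape will vary by system):

```shell
# Mount everything from fstab that isn't mounted yet; an error here means a bad entry
mount -a
# Confirm /data is mounted from the expected device
findmnt /data
```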
Conclusion
By documenting this process, I aim to create a reference for myself the next time I set up an LVM cache. This was the second time I've done it, and even though the previous setup was less than three months ago, I couldn't recall the exact steps. With this guide, next time I should be able to follow the procedure smoothly and avoid the pitfalls I hit this time.