Storage Groups
Alteeve Wiki :: How To :: Storage Groups
In an Anvil! cluster, a "storage group" is a logical grouping of LVM volume groups (VGs). When a server is created, the selected storage group tells the cluster where to pull the physical storage from to create the server's virtual hard drive. Most Anvil! nodes have only one VG per subnode, and those VGs are generally the same size. As such, the Anvil! is able to link them into a storage group automatically.
Multi-VG Example
Consider a more complex storage configuration, however.
Let's use the example of subnodes with two RAID arrays: one made of high-speed flash drives, and the other made of lower-speed bulk storage drives. In this case, there will be two storage devices on each subnode:
Block Device | Purpose |
---|---|
/dev/sda | Bulk storage, slow drives. The subnode's OS is installed on this array. |
/dev/sdb | Smaller but faster drives, reserved for specific guest servers with high speed storage requirements. |
During the OS installation stage, /dev/sda3 is used as the LVM physical volume, or "PV".
Subnode | Physical Volume | Size | Volume Group | Type |
---|---|---|---|---|
an-a01n01 | /dev/sda3 | 10 TiB | an-a01n01_vg0 | Bulk storage, slow drives. The subnode's OS is installed on this array. |
an-a01n02 | /dev/sda3 | 10 TiB | an-a01n02_vg0 | Bulk storage, slow drives. The subnode's OS is installed on this array. |
After the OS is installed, the /dev/sdb1 partition is used as the PV backing a second VG:
Subnode | Physical Volume | Size | Volume Group | Type |
---|---|---|---|---|
an-a01n01 | /dev/sdb1 | 500 GiB | an-a01n01_vg1 | Smaller but faster drives, reserved for specific guest servers with high-speed storage requirements. |
an-a01n02 | /dev/sdb1 | 500 GiB | an-a01n02_vg1 | Smaller but faster drives, reserved for specific guest servers with high-speed storage requirements. |
Virtual Hard Drives
When a server is created, a virtual hard drive is created to back the storage of that server. Behind the scenes, though, an LVM logical volume (LV) is created on each subnode, and those LVs are used to back the replicated storage device.
Consider an example server we'll call 'srv01-database'. Its storage looks like this:
Subnode | Volume Group | Logical Volume | Replicated Volume |
---|---|---|---|
an-a01n01 | an-a01n01_vg0 | /dev/an-a01n01_vg0/srv01-database_0 | /dev/drbd/by-res/srv01-database/0 |
an-a01n02 | an-a01n02_vg0 | /dev/an-a01n02_vg0/srv01-database_0 | /dev/drbd/by-res/srv01-database/0 |
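The device paths above follow a predictable pattern. As an illustration, the per-subnode LV path and the replicated DRBD path can be composed like this (these helper functions are hypothetical, not part of the Anvil! tooling):

```python
# Hypothetical helpers illustrating the naming pattern shown in the
# table above; they are not part of the Anvil! API.

def lv_path(vg: str, server: str, volume: int) -> str:
    """Per-subnode logical volume backing one virtual disk."""
    return f"/dev/{vg}/{server}_{volume}"

def drbd_path(server: str, volume: int) -> str:
    """Replicated DRBD device the guest server actually uses."""
    return f"/dev/drbd/by-res/{server}/{volume}"

print(lv_path("an-a01n01_vg0", "srv01-database", 0))
# → /dev/an-a01n01_vg0/srv01-database_0
print(drbd_path("srv01-database", 0))
# → /dev/drbd/by-res/srv01-database/0
```

Note that both subnodes' LVs back the same replicated volume; the guest only ever sees the single DRBD device.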
At this point, you might be asking yourself: "How did the new server decide to use the bulk storage array? What if I wanted to use the high-speed storage?"
This is where storage groups come into play!
Storage Groups: Linking VGs
In the example above, the high-speed storage VGs were both 500 GiB, and both bulk storage VGs were 10 TiB. As the sizes matched, the Anvil! would auto-create two storage groups:
Subnode | Volume Group | Storage Group | Type |
---|---|---|---|
an-a01n01 | an-a01n01_vg0 | Storage Group 1 | Bulk storage, slow drives. The subnode's OS is installed on this array. |
an-a01n02 | an-a01n02_vg0 | Storage Group 1 | Bulk storage, slow drives. The subnode's OS is installed on this array. |
an-a01n01 | an-a01n01_vg1 | Storage Group 2 | Smaller but faster drives, reserved for specific guest servers with high-speed storage requirements. |
an-a01n02 | an-a01n02_vg1 | Storage Group 2 | Smaller but faster drives, reserved for specific guest servers with high-speed storage requirements. |
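The auto-grouping logic can be sketched as follows. This is an illustrative approximation, not the Anvil! source; the 1 GiB tolerance mirrors the "close fit" matching behaviour described later in this article.

```python
# Illustrative sketch of how matching-sized VGs from two subnodes could
# be auto-paired into storage groups. Names, sizes, and the helper
# itself are for demonstration only.
GIB = 1024 ** 3

def auto_group(vgs_n1, vgs_n2, tolerance=1 * GIB):
    """Pair VGs whose sizes are within `tolerance` bytes of each other.

    Each input is a list of (vg_name, size_in_bytes) tuples; returns a
    list of (vg_from_node1, vg_from_node2) storage-group pairs.
    """
    groups = []
    unmatched = list(vgs_n2)
    for name1, size1 in vgs_n1:
        for candidate in unmatched:
            name2, size2 = candidate
            if abs(size1 - size2) <= tolerance:
                groups.append((name1, name2))
                unmatched.remove(candidate)
                break
    return groups

pairs = auto_group(
    [("an-a01n01_vg0", 10240 * GIB), ("an-a01n01_vg1", 500 * GIB)],
    [("an-a01n02_vg0", 10240 * GIB), ("an-a01n02_vg1", 500 * GIB)],
)
print(pairs)
# → [('an-a01n01_vg0', 'an-a01n02_vg0'), ('an-a01n01_vg1', 'an-a01n02_vg1')]
```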
Note: The auto-created storage groups have generic names. You can rename them. Example: 'anvil-manage-storage-groups --anvil an-anvil-01 --group "Storage Group 1" --new-name "Bulk Storage 1"'.
If you have mismatched hardware across the subnodes in your Anvil! node, your storage groups may not auto-form. If this happens, you can use 'anvil-manage-storage-groups' to manually group VGs into storage groups. Please see the man page for details.
Use of Storage Groups
With your storage groups created, you can select one when provisioning a new server to tell the Anvil! which type of storage to use.
You can also use 'anvil-manage-server-storage' to add a second (or third) virtual hard drive to an existing server.
Consider the example 'srv01-database' server from before. Its first disk, the one the guest OS was installed on, came from the bulk 'Storage Group 1'. Now let's say you want to add a second hard drive using 100 GiB of the high-speed storage to back the database engine and its data. You can do so with a command like this:
anvil-manage-server-storage --server srv01-database --disk vdb --add 100GiB --storage-group "Storage Group 2"
Your server will now have two hard drives, one from each storage group!
Subnode | Volume Group | Logical Volume | Replicated Volume |
---|---|---|---|
an-a01n01 | an-a01n01_vg0 | /dev/an-a01n01_vg0/srv01-database_0 | /dev/drbd/by-res/srv01-database/0 |
an-a01n02 | an-a01n02_vg0 | /dev/an-a01n02_vg0/srv01-database_0 | /dev/drbd/by-res/srv01-database/0 |
an-a01n01 | an-a01n01_vg1 | /dev/an-a01n01_vg1/srv01-database_1 | /dev/drbd/by-res/srv01-database/1 |
an-a01n02 | an-a01n02_vg1 | /dev/an-a01n02_vg1/srv01-database_1 | /dev/drbd/by-res/srv01-database/1 |
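One point worth noting: because every virtual disk is replicated to all subnodes in the storage group, the space available for a new disk is bounded by the member VG with the least free space. A small illustrative check (the helper and the free-space figures are hypothetical):

```python
# Hypothetical helper: a new replicated disk must fit on *every*
# member VG, so the usable free space of a storage group is the
# minimum free space across its members.
GIB = 1024 ** 3

def sg_free_space(member_free_bytes):
    """Usable free space of a storage group, in bytes."""
    return min(member_free_bytes)

# Illustrative per-subnode free space for the two member VGs:
free = [152 * GIB, 152 * GIB]
print(sg_free_space(free) >= 100 * GIB)  # room for a 100 GiB disk → True
```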
An Important Note on Write Order
Note: One of the important features of the Anvil! is write ordering.
In this example, the two virtual hard drives exist on entirely separate RAID arrays. Even so, the order in which writes are sent to the two volumes (the two virtual hard drives) is maintained. This is critical to protecting data integrity.
Consider the scenario where you have a write-ahead log on volume 0, and the database data is on volume 1. The write sequence to the database would be:
1. Write the transaction to be done to volume 0
2. Write the actual transaction to the database's data set on volume 1
3. Clear the completed transaction from the log on volume 0
In some systems, the storage would see two writes to volume 0 and one write to volume 1, and decide that it's more efficient to group the writes. If this happened, the actual sequence of writes would be:
1. Write the transaction to be done to volume 0
2. Clear the completed transaction from the log on volume 0
3. Write the actual transaction to the database's data set on volume 1
That's ... not good. If there were an unexpected failure between those writes, you could end up in a situation where the write-ahead log has been written and then cleared as complete, but the system shut down mid-transaction while working on the real data. This could leave your database in an inconsistent or corrupt state!
The Anvil! ensures that write ordering is maintained across disks to make sure this never happens.
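The write-ahead pattern above can be sketched at the application level. This is only an illustration of the sequence itself, using two ordinary files to stand in for the two volumes; it is not how the Anvil! or DRBD enforces ordering internally.

```python
# Sketch of the write-ahead sequence described above. The os.fsync()
# calls are the ordering barriers: each step must reach stable storage
# before the next one starts.
import os
import tempfile

def apply_transaction(log_path: str, data_path: str, record: bytes) -> None:
    # 1. Write the pending transaction to the log (volume 0).
    with open(log_path, "wb") as log:
        log.write(record)
        log.flush()
        os.fsync(log.fileno())
    # 2. Write the actual transaction to the data set (volume 1).
    with open(data_path, "ab") as data:
        data.write(record)
        data.flush()
        os.fsync(data.fileno())
    # 3. Only now clear the completed transaction from the log (volume 0).
    with open(log_path, "wb") as log:
        log.flush()
        os.fsync(log.fileno())

tmp = tempfile.mkdtemp()
log_file = os.path.join(tmp, "wal")
data_file = os.path.join(tmp, "db")
apply_transaction(log_file, data_file, b"UPDATE accounts ...")
print(os.path.getsize(log_file))  # log cleared again → prints 0
```

If the three steps were reordered as in the "grouped writes" example, a crash between steps 2 and 3 would leave an empty log and incomplete data, which is exactly the failure the Anvil!'s write ordering prevents.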
Working With Storage Groups
In most systems, "Storage Group 1" will be auto-assembled during node setup. If you have additional VGs, or if for some reason the auto-assembly doesn't work, you can manage storage groups manually.
Using Striker
Using anvil-manage-storage-groups
The command line tool anvil-manage-storage-groups allows for management of SGs from the command line.
Showing Storage Info
First, let's see all the information:
anvil-manage-storage-groups --show --dr
Anvil Node: [an-anvil-01] - Demo VM Anvil! Node 1
- Subnode: [an-a01n01] volume groups;
- [an-a01n01_vg0], size: [248.41 GiB], free: [152.68 GiB], internal UUID: [m3Qkdi-ts8I-ZOwG-Dw41-117f-X0fz-g8gI1f]
- Subnode: [an-a01n02] volume groups;
- [an-a01n02_vg0], size: [248.41 GiB], free: [152.68 GiB], internal UUID: [LG8Wib-poB1-ygyo-fVUj-yTks-Mpq4-4qs057]
Anvil Node: [an-anvil-02] - Demo VM Anvil! Node 2
- Subnode: [an-a02n01] volume groups;
- [an-a02n01_vg0], size: [248.41 GiB], free: [50.54 GiB], internal UUID: [6Hcf5w-rzEz-rFO9-O7fe-eXUV-j82L-P1h5FW]
- Subnode: [an-a02n02] volume groups;
- [an-a02n02_vg0], size: [248.41 GiB], free: [50.54 GiB], internal UUID: [DDupu5-xg3R-0P5V-EEAf-ERpv-CaM7-IaoWyB]
Anvil Node: [an-anvil-03] - Demo VM Anvil! Node 3
- Subnode: [an-a03n01] volume groups;
- [an-a03n01_vg0], size: [249.00 GiB], free: [147.20 GiB], internal UUID: [19Sd1Q-HA8B-Z7iy-PsEM-NzgV-t7xb-zvJm5M]
- Subnode: [an-a03n02] volume groups;
- [an-a03n02_vg0], size: [249.00 GiB], free: [147.20 GiB], internal UUID: [10GAjf-E2yA-81kr-d1sV-JerO-Fbbn-ZwtDgV]
Disaster Recovery Hosts:
- DR Host: [an-a01dr01] VGs:
- [an-a01dr01], size: [77.88 GiB], free: [4.00 MiB], internal UUID: [BlabZw-Q4Ji-a208-LeFR-CIns-w4Ee-nqJCrJ]
- DR Host: [an-a03dr01] VGs:
- [an-a03dr01], size: [81.80 GiB], free: [4.00 MiB], internal UUID: [Nn4LOd-Mb3P-4cs0-GaVS-AClk-gVsy-U1FxI7]
Here we see three nodes and two DR hosts. For now, let's simplify and just look at an-anvil-01.
anvil-manage-storage-groups --show --anvil an-anvil-01
Anvil Node: [an-anvil-01] - Demo VM Anvil! Node 1
- Subnode: [an-a01n01] volume groups;
- [an-a01n01_vg0], size: [248.41 GiB], free: [152.68 GiB], internal UUID: [m3Qkdi-ts8I-ZOwG-Dw41-117f-X0fz-g8gI1f]
- Subnode: [an-a01n02] volume groups;
- [an-a01n02_vg0], size: [248.41 GiB], free: [152.68 GiB], internal UUID: [LG8Wib-poB1-ygyo-fVUj-yTks-Mpq4-4qs057]
Here we see two VGs, but no storage groups. So let's manually create "Storage Group 1".
anvil-manage-storage-groups --anvil an-anvil-01 --add --group "Storage Group 1"
The storage group: [Storage Group 1] on the Anvil! node: [an-anvil-01] has been created with the UUID: [a92f4dde-de77-48b3-ae86-e6e8af80e3ad].
Now we can check that the group exists:
anvil-manage-storage-groups --show --anvil an-anvil-01
Anvil Node: [an-anvil-01] - Demo VM Anvil! Node 1
- Subnode: [an-a01n01] volume groups;
- [an-a01n01_vg0], size: [248.41 GiB], free: [152.68 GiB], internal UUID: [m3Qkdi-ts8I-ZOwG-Dw41-117f-X0fz-g8gI1f]
- Subnode: [an-a01n02] volume groups;
- [an-a01n02_vg0], size: [248.41 GiB], free: [152.68 GiB], internal UUID: [LG8Wib-poB1-ygyo-fVUj-yTks-Mpq4-4qs057]
- Storage group: [Storage Group 1], UUID: [a92f4dde-de77-48b3-ae86-e6e8af80e3ad]
- <no members in this storage group yet>
The group now exists, but it has no members yet. So let's add the subnode an-a01n01's VG, an-a01n01_vg0. To do this, we'll use the internal UUID m3Qkdi-ts8I-ZOwG-Dw41-117f-X0fz-g8gI1f.
anvil-manage-storage-groups --anvil an-anvil-01 --add --group "Storage Group 1" --member m3Qkdi-ts8I-ZOwG-Dw41-117f-X0fz-g8gI1f
Added the volume group: [an-a01n01_vg0] on the host: [an-a01n01] to the storage group: [Storage Group 1]. The new member UUID is: [04a08c72-ded9-4394-a31b-12e714ebf42a].
Now verify it's added:
anvil-manage-storage-groups --show --anvil an-anvil-01
Anvil Node: [an-anvil-01] - Demo VM Anvil! Node 1
- Subnode: [an-a01n01] volume groups;
- [an-a01n01_vg0], size: [248.41 GiB], free: [152.68 GiB], internal UUID: [m3Qkdi-ts8I-ZOwG-Dw41-117f-X0fz-g8gI1f]
- Subnode: [an-a01n02] volume groups;
- [an-a01n02_vg0], size: [248.41 GiB], free: [152.68 GiB], internal UUID: [LG8Wib-poB1-ygyo-fVUj-yTks-Mpq4-4qs057]
- Storage group: [Storage Group 1], UUID: [a92f4dde-de77-48b3-ae86-e6e8af80e3ad]
- [an-a01n01]:[an-a01n01_vg0]
- [an-a01n02]:[an-a01n02_vg0]
Oh hey, look at this! See how [an-a01n02]:[an-a01n02_vg0] was also added? If we look in the logs:
2025/04/08 00:55:06:[27727]:Database.pm:6762; [ Note ] - The Anvil!: [an-anvil-01]'s storage group: [Storage Group 1] didn't have an entry for the host: [an-a01n02.alteeve.com]. The volume group: [LG8Wib-poB1-ygyo-fVUj-yTks-Mpq4-4qs057] is a close fit and not in another storage group, so adding it to this storage group now.
This happened because the sizes of the two VGs are very close (within 1 GiB of each other).