Storage Groups
Revision as of 06:33, 19 September 2024
Alteeve Wiki :: How To :: Storage Groups
In an Anvil! cluster, a "storage group" is a logical group of LVM Volume Groups. When a server is created, the selected storage group tells the cluster where to pull the physical storage from to create the server's virtual hard drive. Most Anvil! nodes have only one VG per subnode, and they are generally the same size. As such, the Anvil! is able to link them into a storage group automatically.
Multi-VG Example
Consider a more complex storage configuration, however.
Let's use the example of subnodes with two RAID arrays; the first is made up of lower speed bulk storage drives, and the second of high speed flash drives. In this case, there will be two storage devices on the nodes:
| Block Device | Purpose |
|---|---|
| /dev/sda | Bulk storage, slow drives. The subnode's OS is installed on this array. |
| /dev/sdb | Smaller but faster drives, reserved for specific guest servers with high speed storage requirements. |
During the OS installation stage, /dev/sda3 is used as the LVM physical volume, or "PV".
| Subnode | Physical Volume | Size | Volume Group | Type |
|---|---|---|---|---|
| an-a01n01 | /dev/sda3 | 500 GiB | an-a01n01_vg0 | Bulk storage, slow drives. The subnode's OS is installed on this array. |
| an-a01n02 | /dev/sda3 | 500 GiB | an-a01n02_vg0 | Bulk storage, slow drives. The subnode's OS is installed on this array. |
After the OS is installed, the /dev/sdb1 partition is used as the PV backing a second VG:
| Subnode | Physical Volume | Size | Volume Group | Type |
|---|---|---|---|---|
| an-a01n01 | /dev/sdb1 | 10 TiB | an-a01n01_vg1 | Smaller but faster drives, reserved for specific guest servers with high speed storage requirements. |
| an-a01n02 | /dev/sdb1 | 10 TiB | an-a01n02_vg1 | Smaller but faster drives, reserved for specific guest servers with high speed storage requirements. |
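The second VG described above is created with the standard LVM tools. A hedged sketch of that configuration step, using the device and VG names from the tables (these are generic LVM commands, not an Anvil!-specific tool; run as root on the subnode, and repeat on the peer with its own VG name):

```shell
# Initialize /dev/sdb1 as an LVM physical volume, then create the second
# volume group on it. Names follow the an-a01n01 example from the tables;
# this is an illustrative sketch, and the commands require root and real
# block devices, so adapt before running.
pvcreate /dev/sdb1                  # mark the partition as a PV
vgcreate an-a01n01_vg1 /dev/sdb1    # create the second VG on that PV
vgs                                 # confirm both VGs are now present
```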
Virtual Hard Drives
When a server is created, a virtual hard drive is created to back the storage of that server. Behind the scenes, though, an LVM logical volume is created on each subnode, and these are used to back the replicated storage device.
Consider an example server we'll call 'srv01-database'. Its storage looks like this:
| Subnode | Volume Group | Logical Volume | Replicated Volume |
|---|---|---|---|
| an-a01n01 | an-a01n01_vg0 | /dev/an-a01n01_vg0/srv01-database_0 | /dev/drbd/by-res/srv01-database/0 |
| an-a01n02 | an-a01n02_vg0 | /dev/an-a01n02_vg0/srv01-database_0 | /dev/drbd/by-res/srv01-database/0 |
At this point, you might be asking yourself: "How did the new server decide to use the bulk storage array? What if I wanted to use the high speed storage?"
This is where Storage Groups come into play!
Storage Groups; Linking VGs
Note: The Anvil! tries to auto-form storage groups by matching VG sizes. There is no requirement to name volume groups in any specific way, so VG names are not a reliable way to form storage groups; size is the only way to guess which VGs belong together.
In the example above, both bulk storage VGs were 500 GiB, and both high-speed storage VGs were 10 TiB. As the sizes matched, the Anvil! would auto-create two Storage Groups:
| Subnode | Volume Group | Storage Group | Type |
|---|---|---|---|
| an-a01n01 | an-a01n01_vg0 | Storage Group 1 | Bulk storage, slow drives. The subnode's OS is installed on this array. |
| an-a01n02 | an-a01n02_vg0 | Storage Group 1 | Bulk storage, slow drives. The subnode's OS is installed on this array. |
| an-a01n01 | an-a01n01_vg1 | Storage Group 2 | Smaller but faster drives, reserved for specific guest servers with high speed storage requirements. |
| an-a01n02 | an-a01n02_vg1 | Storage Group 2 | Smaller but faster drives, reserved for specific guest servers with high speed storage requirements. |
Note: The auto-created storage groups have generic names. You can rename the storage groups. Example: 'anvil-manage-storage-groups --anvil an-anvil-01 --group "Storage Group 1" --new-name "Bulk Storage 1"'.
If you have mismatched hardware in the subnodes of your Anvil! node, then your storage groups may not auto-form. If this happens, you can use anvil-manage-storage-groups to manually group VGs into SGs. Please see the man page for details.
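The auto-forming logic pairs a VG on each subnode that reports the same size. A minimal sketch of that matching idea, with a hypothetical helper and inline sizes mirroring the tables above (this is illustrative, not the Anvil!'s actual code):

```shell
# Hypothetical sketch: two VGs are assumed to belong to the same storage
# group when their sizes match across subnodes. match_vgs and the sizes
# below are made up for illustration only.
match_vgs() {
    # $1/$2: VG and size on the first subnode; $3/$4: VG and size on the second
    if [ "$2" = "$4" ]; then
        echo "group: $1 + $3 ($2)"
    else
        echo "no match: $1 ($2) vs $3 ($4)"
    fi
}

match_vgs an-a01n01_vg0 500GiB an-a01n02_vg0 500GiB   # group: an-a01n01_vg0 + an-a01n02_vg0 (500GiB)
match_vgs an-a01n01_vg1 10TiB  an-a01n02_vg1 10TiB    # group: an-a01n01_vg1 + an-a01n02_vg1 (10TiB)
```

When the sizes differ (mismatched hardware), no pairing is guessed, which is exactly the case where manual grouping is needed.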
Use of Storage Groups
With your storage groups created, you can use them when provisioning a server to tell the Anvil! which type of storage to use for the new server.
You can also use anvil-manage-server-storage to add a second (or third) virtual hard drive to an existing server.
Consider the example 'srv01-database' server from before. In that case, the first disk, the one the guest OS was installed on, came from the bulk 'Storage Group 1'. Now let's say you want to add a second hard drive, using 100 GiB from the high speed storage, to back the database engine and data. You can do so with a command like this:
- anvil-manage-server-storage --server srv01-database --disk vdb --add 100GiB --storage-group "Storage Group 2"
Your server will now have two hard drives, one from each storage group!
| Subnode | Volume Group | Logical Volume | Replicated Volume |
|---|---|---|---|
| an-a01n01 | an-a01n01_vg0 | /dev/an-a01n01_vg0/srv01-database_0 | /dev/drbd/by-res/srv01-database/0 |
| an-a01n02 | an-a01n02_vg0 | /dev/an-a01n02_vg0/srv01-database_0 | /dev/drbd/by-res/srv01-database/0 |
| an-a01n01 | an-a01n01_vg1 | /dev/an-a01n01_vg1/srv01-database_1 | /dev/drbd/by-res/srv01-database/1 |
| an-a01n02 | an-a01n02_vg1 | /dev/an-a01n02_vg1/srv01-database_1 | /dev/drbd/by-res/srv01-database/1 |
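The table follows a regular naming pattern: virtual disk N of a server is backed by one LV per subnode and exposed as one DRBD volume. The helper functions below just build those path strings to make the pattern explicit; they are illustrative, not part of the Anvil! tools:

```shell
# Build the LV and DRBD device paths seen in the table above. These
# helper names are made up for illustration.
lv_path()   { echo "/dev/${1}_vg${2}/${3}_${4}"; }   # subnode, VG index, server, disk number
drbd_path() { echo "/dev/drbd/by-res/${1}/${2}"; }   # server (DRBD resource), disk number

lv_path an-a01n01 1 srv01-database 1    # /dev/an-a01n01_vg1/srv01-database_1
drbd_path srv01-database 1              # /dev/drbd/by-res/srv01-database/1
```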
An Important Note on Write Order
Note: One of the important features of the Anvil! is write ordering.
In this example, the two virtual hard drives exist on entirely separate RAID arrays. Even so, the order in which writes are sent to the two volumes (the two virtual hard drives) is maintained. This is critical to protecting data integrity.
Consider the scenario where you have a write-ahead log on volume 0, and the database data is on volume 1. The write sequence to the database would be:
1. Write the transaction to be done to volume 0
2. Write the actual transaction to the database's data set on volume 1
3. Clear the completed transaction from the log on volume 0
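The safe sequence above can be sketched with two plain files standing in for the two volumes (a toy illustration, not Anvil! code; the transaction text is made up):

```shell
# Write-ahead sequence sketch: log the intent, apply it, then clear the log.
# Two files in a temp directory stand in for volumes 0 and 1.
workdir=$(mktemp -d)
wal="$workdir/volume0_wal"      # write-ahead log on "volume 0"
data="$workdir/volume1_data"    # database data set on "volume 1"

printf 'txn-1: credit account A\n' > "$wal" && sync   # 1. log the intended transaction
cat "$wal" >> "$data"           && sync               # 2. apply it to the data set
: > "$wal"                      && sync               # 3. clear the completed entry
```

After the three steps, the data file holds the transaction and the log is empty; if a crash happens between steps 1 and 3, the surviving log entry tells recovery what to replay.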
In some systems, the storage would see two writes to volume 0 and one write to volume 1, and decide that it's more efficient to group the writes. If this happened, the actual sequence of writes would be:
1. Write the transaction to be done to volume 0
2. Clear the completed transaction from the log on volume 0
3. Write the actual transaction to the database's data set on volume 1
That's ... not good. If there were an unexpected failure between these writes, you could end up in a situation where the write-ahead log entry was written and then cleared as complete, yet the system went down mid-transaction while working on the real data. This could leave your database in an inconsistent or corrupt state!
The Anvil! ensures that write ordering is maintained across disks to make sure this never happens.