Configuring a Disaster Recovery Host

A Disaster Recovery host, aka "DR Host", is a physical server that is installed in a physically different location from the production Anvil! cluster.

The purpose of the DR host is to provide a fall-back location to run servers should the production cluster suffer a catastrophic failure. Consider failure scenarios like;

  • Accidental fire suppression discharge
  • Transformer failure feeding the data-center
  • Localized fire in the cluster location

In such scenarios, the facility might still be perfectly able to function, but all cluster equipment is damaged or destroyed.

The DR host is often installed in an opposite corner of the facility, in another building on campus, or in an entirely different city. Wherever it happens to be, the DR host can be pressed into service!

Storage replication to the DR host is streaming and ordered, but it is not synchronous. This way, the latency of the remote connection does not impact day-to-day performance, but the data being replicated to the DR host is ordered, even across multiple virtual disks. As such, the DR host may be allowed to fall a few seconds behind production, but the data will always be consistent.

What this means is that your servers will boot on the DR host, file system journals will replay, database write-ahead logs will work, and your applications will start, no different than if the machine had simply rebooted.

The time to get the DR site online is measured in minutes; far faster than recovering even from onsite backups onto standby hardware!

DR Host Hardware Considerations

In an ideal configuration, there would be a dedicated DR host to match the hardware of each Anvil! node. In such a setup, a full fail-over to the DR site would be possible without any loss in performance.

For those with stricter budgets, a DR host's hardware can be sized so that only a subset of core production servers is protected.

Another possible configuration is to have one (or a few) much larger machines, each of which can provide DR hosting for two or more production nodes. Of course, performance in such a configuration may be impacted, so it is strongly advised to test that performance will be acceptable prior to deployment.

Connecting a DR Host to an Anvil! Node

The first step is to link a DR host to a node. Initially, a DR host is "floating", meaning it's connected to the cluster but not assigned to any node.

Note: In a future release, DR functions will be moved into the Striker UI. Until then, management is done via the command line.

We will use the command line tool anvil-manage-dr.

Note: The anvil-manage-dr tool can be run from any machine in the cluster.

We can check the current associations using the '--show' switch.

an-a01n01 # anvil-manage-dr --show
Anvil! Nodes
- Node Name: [an-anvil-01], Description: [Demo VM Anvil!]
 - No linked DR hosts yet.

-=] DR Hosts
- Name: [an-a01dr01.alteeve.com]

-=] Servers
- Server name: [srv01-bar] on Anvil! Node: [an-anvil-01]
- Server name: [srv02-win2019] on Anvil! Node: [an-anvil-01]
- Server name: [srv03-el6] on Anvil! Node: [an-anvil-01]
- Server name: [srv04-min] on Anvil! Node: [an-anvil-01]

In this example cluster, there is one node called "an-anvil-01", and one DR host called "an-a01dr01.alteeve.com". There are four servers that we'll protect later.

Linking a DR Host to a Node

The first step is to "link" the DR host "an-a01dr01" to the node "an-anvil-01". When a DR is linked, it simply tells the cluster that the DR host is a candidate target for protecting servers.

an-a01n01 # anvil-manage-dr --anvil an-anvil-01 --dr-host an-a01dr01 --link
The DR host: [an-a01dr01] has been linked to the Anvil! node: [an-anvil-01].

We can confirm that this link has been created by re-running "--show".

an-a01n01 # anvil-manage-dr --show
Anvil! Nodes
- Node Name: [an-anvil-01], Description: [Demo VM Anvil!]
 - Linked: [an-a01dr01], link UUID: [daea7173-748a-4874-8126-5858d6226e5b]

-=] DR Hosts
- Name: [an-a01dr01.alteeve.com]

-=] Servers
- Server name: [srv01-bar] on Anvil! Node: [an-anvil-01]
- Server name: [srv02-win2019] on Anvil! Node: [an-anvil-01]
- Server name: [srv03-el6] on Anvil! Node: [an-anvil-01]
- Server name: [srv04-min] on Anvil! Node: [an-anvil-01]

Excellent!
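
Should you ever need to reverse this association, for example to re-point the DR host at a different node, the command below is a hypothetical sketch; it assumes an '--unlink' switch that mirrors '--link', so confirm the exact switch against anvil-manage-dr's own documentation before using it.

# Hypothetical: assumes '--unlink' mirrors '--link'. Verify before use.
an-a01n01 # anvil-manage-dr --anvil an-anvil-01 --dr-host an-a01dr01 --unlink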

Adding a DR Host's Volume Group to a Node's Storage Group

In an Anvil! cluster, storage groups are groupings of LVM volume groups across subnodes and DR hosts. They are used to determine where to create the backing logical volumes for a server's virtual hard drive.

When a server is "protected" by a DR host, a new logical volume is created on that DR host and it is then added to the server's replicated storage. In order to know which VG to use when creating these new LVs, the DR host's volume group(s) must be added to the node's storage group(s).

Consider this example;

Server: srv01-database on an-a01n01.

  Storage Group     Subnode/DR Host   Volume Group
  Storage Group 1   an-a01n01         an-a01n01_vg0
                    an-a01n02         an-a01n02_vg0

When we use anvil-manage-storage-groups, we can see this in detail;

an-a01n01 # anvil-manage-storage-groups --anvil an-a01n01
Anvil Node: [an-anvil-01] - Demo VM Anvil!
- Subnode: [an-a01n01] volume groups;
 - [an-a01n01_vg0], size: [248.41 GiB], free: [62.68 GiB], internal UUID: [Lzlhon-E4gr-2PEE-IInb-JmGr-uRID-gLrtDy]
- Subnode: [an-a01n02] volume groups;
 - [an-a01n02_vg0], size: [248.41 GiB], free: [62.68 GiB], internal UUID: [z2an5E-JHq9-p1Hl-ln02-17P7-ciZk-OtjDZI]
- Storage group: [Storage group 1], UUID: [c6f1b34d-052b-49e3-a4a2-b9e5b31f3280]
 - [an-a01n01]:[an-a01n01_vg0]
 - [an-a01n02]:[an-a01n02_vg0]

Disaster Recovery Hosts:
- DR Host: [an-a01dr01] VGs:
 - [an-a01dr01_vg0], size: [248.41 GiB], free: [170.53 GiB], internal UUID: [N9TvqT-IhEr-cjb0-Xs3l-2dQG-enDv-DZugyK]
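
The "internal UUID" values shown above are read from LVM, and the one for the DR host's VG is what we will pass to '--member' below. As a sanity check, you can read the same value directly on the DR host with the standard 'vgs' tool; a minimal sketch (run on the DR host, assuming the "internal UUID" is the LVM VG UUID, which its format strongly suggests):

# Show each VG's name, size, free space and UUID; vg_uuid should match the
# "internal UUID" reported above (assumption: they are the same LVM UUID).
an-a01dr01 # vgs -o vg_name,vg_size,vg_free,vg_uuid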

This example is pretty simple, as there is only one volume group per subnode and there's only one volume group on the DR host. So adding the DR host's VG to the storage group is simple;

an-a01n01 # anvil-manage-storage-groups --anvil an-anvil-01 --group "Storage group 1" --add --member N9TvqT-IhEr-cjb0-Xs3l-2dQG-enDv-DZugyK
Added the volume group: [an-a01dr01_vg0] on the host: [an-a01dr01] to the storage group: [Storage group 1]. The new member UUID is: [ac276cff-982a-4b4a-8956-c3291c81ef73].

If we look again, the storage group now includes the DR host's VG.

Server: srv01-database on an-a01n01.

  Storage Group     Subnode/DR Host   Volume Group
  Storage Group 1   an-a01n01         an-a01n01_vg0
                    an-a01n02         an-a01n02_vg0
                    an-a01dr01        an-a01dr01_vg0

The more detailed view;

an-a01n01 # anvil-manage-storage-groups --show
Anvil Node: [an-anvil-01] - Demo VM Anvil!
- Subnode: [an-a01n01] volume groups;
 - [an-a01n01_vg0], size: [248.41 GiB], free: [62.68 GiB], internal UUID: [Lzlhon-E4gr-2PEE-IInb-JmGr-uRID-gLrtDy]
- Subnode: [an-a01n02] volume groups;
 - [an-a01n02_vg0], size: [248.41 GiB], free: [62.68 GiB], internal UUID: [z2an5E-JHq9-p1Hl-ln02-17P7-ciZk-OtjDZI]
- Storage group: [Storage group 1], UUID: [c6f1b34d-052b-49e3-a4a2-b9e5b31f3280]
 - [an-a01dr01]:[an-a01dr01_vg0]
 - [an-a01n01]:[an-a01n01_vg0]
 - [an-a01n02]:[an-a01n02_vg0]

Disaster Recovery Hosts:
- DR Host: [an-a01dr01] VGs:
 - [an-a01dr01_vg0], size: [248.41 GiB], free: [170.53 GiB], internal UUID: [N9TvqT-IhEr-cjb0-Xs3l-2dQG-enDv-DZugyK]

We're now ready to protect servers!

Protecting a Server

Protecting a server is the process of configuring the cluster to allow the server to run on a DR host. This involves copying the server's "definition file" to the DR host, and extending the server's replicated storage to the DR host.
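
For the curious, the "definition file" is the server's libvirt XML definition. A minimal sketch for viewing it, assuming the server is currently defined in libvirt on the active subnode:

# Assumes 'srv01-bar' is defined in libvirt on this subnode.
an-a01n01 # virsh dumpxml srv01-bar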

Replication Protocols

To extend the storage replication to the DR host, we must decide how we want the storage to replicate. This is controlled by selecting a "protocol".

The two subnodes in an Anvil! node always replicate using the 'sync' protocol. This is fine, as the dedicated link between the subnodes ensures that storage replication happens very quickly, and it means no data is lost in the case of a catastrophic and unexpected fault.

The replication link to the DR host, however, could have higher latency and/or lower bandwidth. In such cases, using the 'sync' protocol would cause a performance hit, as a write to disk would not be complete until the data reaches persistent storage on the DR host. For this reason, two alternative replication protocols are supported; 'short-throw' and 'long-throw'.

Note: The 'long-throw' protocol uses a closed-source utility and, as such, requires a licence. Contact us for more information.

DR Replication Protocols

  Protocol      Benefit                                      Drawback
  sync          Maximum data protection                      Storage performance limited to slowest network latency/bandwidth
  short-throw   Minimal performance hit over DR link         DR host "falls behind" production
  long-throw    Supports high-latency, low bandwidth links   DR is allowed to fall further behind. Requires a license

Storage performance inside a server depends on when a write to disk is considered "complete". That is to say: your application sends data to the storage driver in your server. The driver passes the data down to the hypervisor, and then waits for the hypervisor to report that the data has been written. Once that happens, your application is told the write is complete and it can move on to other tasks.

The time between initiating the write and getting the write complete message is what determines how fast or slow your storage is.
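
To get a feel for this from inside a Linux guest, you can force every write to be fully acknowledged before the next one starts and see how long a batch takes; the result will reflect whichever replication protocol is in play. A rough sketch using standard 'dd' (the test file path is arbitrary; delete it afterwards):

# Each 4 KiB write must be acknowledged before the next begins (oflag=dsync),
# so the reported time/throughput reflects per-write completion latency.
dd if=/dev/zero of=/root/write-latency-test bs=4k count=1000 oflag=dsync
rm -f /root/write-latency-test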

Short-Throw; The Default

In the vast majority of cases, short-throw is the best option, and it is the default replication protocol.

In this case, a write to disk is considered "complete" when the data has been stored on persistent storage on both subnodes, and when it's buffered for transmission to the DR host. Thanks to this, data is fully redundant between the subnodes, but the DR host is allowed to fall behind.

If the production Anvil! is destroyed while data is still in the transmit buffer, that data will be lost. How much data is lost is determined by how much data was in the buffer at the time of the destruction. The network transmit buffer is relatively small though, so generally this translates to just a few seconds of lost data.

Note: As mentioned earlier, write-ordering is maintained, even across multi-disk servers. As such, the data on the DR will be consistent and crash safe.

Sync; When Data is Critical

If your environment is one where the value of the generated data outweighs performance, then you will want to use the 'sync' protocol.

In this case, the guest is told the write is complete only when the data has reached persistent storage on both subnodes and the DR host. This is functionally like running a three-disk RAID Level 1 array, but with the data being spread across three machines in two physical locations.

The main drawback here is that disk performance will effectively be set by the bandwidth and latency of the network connection between the production node and the DR host. As such, you will want the shortest, fastest network link between the two sites that you can get.
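
A quick way to estimate what 'sync' would cost is to measure the round-trip latency to the DR host over the link that will carry the replication traffic; every synchronous write pays at least that round trip. A minimal sketch (the target is this example's DR host; substitute the address on whichever network carries replication in your deployment):

# Round-trip time to the DR host; under 'sync', each write completion
# includes at least one such round trip plus the write on the far side.
an-a01n01 # ping -c 10 an-a01dr01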

Long-Throw; A Long Distance Relationship

The core difference between short-throw and long-throw is where the data to transmit is cached.

In short-throw, the network transmit buffer is relatively small, and if the buffer fills up faster than it can be drained, write performance inside your server will be affected. This isn't an issue in most standard server installs, but if it does affect you, there are two options.

  1. Tune the network performance of the connection used to send data to the DR host. Red Hat has an excellent guide for this: RHEL 9 - Tuning the network performance (https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/monitoring_and_managing_system_status_and_performance/tuning-the-network-performance_monitoring-and-managing-system-status-and-performance#testing-the-tcp-throughput-using-iperf3_tuning-tcp-connections-for-high-throughput) (RH Subscription Required). A quick way to measure the link's raw throughput is sketched after this list.
  2. Purchase a DRBD Proxy license to enable the long-throw protocol. Note that the license is per node, not per server.
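
The guide linked in option 1 includes testing the TCP throughput of the link with 'iperf3'. A minimal sketch of that kind of test (assumes the iperf3 package is installed on both ends and that its default port, 5201, is reachable between the sites):

# On the DR host: start an iperf3 server.
an-a01dr01 # iperf3 -s
# On the production subnode: run a 30 second throughput test against it.
an-a01n01 # iperf3 -c an-a01dr01 -t 30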

With long-throw, system RAM is used to create a dramatically larger transmit buffer.

If you find that tuning alone is not sufficient to avoid buffer exhaustion, this is the best option to maintain good performance. However, having more data in cache means that the DR host is allowed to fall further behind.
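
Under the hood, these protocols are implemented with DRBD; reading the descriptions above, 'sync' and 'short-throw' appear to correspond to DRBD's protocol C and protocol A respectively, with 'long-throw' adding DRBD Proxy (that mapping is our interpretation, not something the tools report in these terms). Once a server has been protected, you can inspect the generated DRBD resource configuration on a subnode to see what was applied:

# Dump the full DRBD configuration for this resource, including the
# connection to the DR host and its replication settings.
an-a01n01 # drbdadm dump srv01-bar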

Protecting srv01-bar

The first server we'll protect is srv01-bar.

We can verify that the server is NOT being protected yet by checking 'anvil-watch-drbd';

an-a01n01 # anvil-watch-drbd
Sync progress:
  Res              To           Vol
- srv01-bar     -> an-a01n02 ->   0: [0;0]  //  ds:UpToDate/UpToDate, ro:Secondary/Primary
- srv02-win2019 -> an-a01n02 ->   0: [0;0]  //  ds:UpToDate/UpToDate, ro:Secondary/Primary
- srv03-el6     -> an-a01n02 ->   0: [0;0]  //  ds:UpToDate/UpToDate, ro:Secondary/Primary
- srv04-min     -> an-a01n02 ->   0: [0;0]  //  ds:UpToDate/UpToDate, ro:Secondary/Primary
- srv04-min     -> an-a01n02 ->   1: [0;0]  //  ds:UpToDate/UpToDate, ro:Secondary/Primary
* Total transfer speed is about: [0 MiB/sec]

Above we see that srv01-bar is replicating to its peer subnode an-a01n02 only, not yet to the DR host, which is normal.
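
From here, the next step is to protect the server using one of the replication protocols described above. The invocation below is only a sketch; the '--protect', '--server' and '--protocol' switches are assumed by analogy with the '--link' usage earlier, so confirm them against anvil-manage-dr's documentation before running anything.

# Hypothetical invocation; '--protect', '--server' and '--protocol' are
# assumed switch names. Verify against the tool's documentation first.
an-a01n01 # anvil-manage-dr --server srv01-bar --protect --protocol short-throw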


 
