Channel: VMware Communities : All Content - Site Recovery Manager
Viewing all 2572 articles

Changes in 1 recovery plan affect other recovery plans


Hi, hoping someone out there can shed some light on this. We're using vSphere 5.5 U2 with SRM 5.8 in a FlexPod environment. We've had SRM for a couple of years and have always found it odd that when we create multiple recovery plans, say one for production DR (all VMs) and one for testing a small selection of VMs (perhaps just the accounts system), changing the priority or power state of a VM in one recovery plan (say the test one) changes the priority and power state in all the other recovery plans too. This is a real pain if you want a DR recovery plan but your test, or application-specific, recovery plans keep changing the settings on you.

 

I'm at VMworld 2015 and have spoken to three separate techies, and they've all said the same thing: I should be able to create two recovery plans containing the same VMs, with the priority and power-on state allowed to differ in each one. Ironically, I tested the same scenario in an SRM hands-on lab and, hey presto, I reproduced the same behaviour and left the same techies scratching their heads.

 

Am I missing something here? Everyone says we should be able to do what I'm trying to do, but no one seems able to explain why we can't. I suspect I'm going to have to log a call, but I'm curious whether others are experiencing the same thing.

 

Cheers

 

Peter


SRM Install failure - SRM host/vCenter time skew issue


Hello,

 

I'm having an issue installing SRM 6.0.0-2700495 in my vCenter 6 environment (vCenter Server 6.0.0 build 2776510, appliance).

 

The error is:

 

The time skew between the Site Recovery Manager host and the vCenter Server is too large.

 

This is very odd, since both the vCenter appliance and the host it's running on are configured to use an NTP server (time.windows.com). I have also tried setting the vCenter appliance to sync its time with the host. As you can see from the screenshot, there does appear to be a time difference of approximately 5 minutes.
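For reference, the comparison the installer appears to be making can be sketched in a few lines of Python. The 5-minute tolerance below is an illustrative assumption, not SRM's documented threshold.

```python
from datetime import datetime, timedelta

# Assumed tolerance: the exact skew SRM accepts is not documented here,
# so this 5-minute limit is only an illustration.
MAX_SKEW = timedelta(minutes=5)

def clock_skew(srm_time: datetime, vc_time: datetime) -> timedelta:
    """Absolute difference between the two hosts' clock readings."""
    return abs(srm_time - vc_time)

def within_tolerance(srm_time: datetime, vc_time: datetime,
                     max_skew: timedelta = MAX_SKEW) -> bool:
    """True if the skew is small enough for registration to proceed."""
    return clock_skew(srm_time, vc_time) <= max_skew
```

With the roughly 5-minute difference visible in the screenshot, a check like this would sit right at a typical tolerance, so the error is probably not a red herring: the two clocks really do disagree, even though both claim to use NTP.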

 

Any suggestions? Is this error just another red herring, or do I have an actual time sync issue?

 

Thanks.

Trouble registering SRM with vCenter Server


I'm installing SRM 5.8 on a VM (Windows 2012 R2) that talks to a vCenter 5.5 instance. I've successfully installed SRM at the primary site, but at the secondary site the install fails at "Installing certificate" with the error: Failed to register SRM with vCenter Server. Cannot reach host for vCenter 192.168.xxx.xxx. I have repeatedly validated the login credentials and IP address, accepted the new certificate through the installer, validated the FQDN, etc., but it continually fails here. Any ideas what is causing this issue?

VR server registration failed: "TCP connection to server at... failed"


Hi!

 

I am trying to deploy an SRM 5 environment with vSphere Replication. Unfortunately, I ran into a problem at the final step: when I try to register the VR server, I get an error message: "TCP connection to server at '172.19.13.23:8123' failed."

 

I followed the SRM 5 evaluation guide, with one change: I don't use vCenter Linked Mode. The guide says there is no reason to use Linked Mode with SRM 5, so I left it out; the SRM servers are in a DMZ and there is no AD for the vCenters, so that shouldn't be a problem. But when it comes to deploying the VR server, the documentation says to "Deploy it to the Recovery Site." Here is the problem: I cannot deploy the VR server from the Protected Site to the Recovery Site, because my vCenters are not linked to each other.

 

So I deployed it to the Protected Site instead, configured it, and got the error message above.

 

I tried a lot of things:

I migrated the deployed VR server to the Recovery Site, no use.

Network connection is OK. (I can ping each VM from every other VM; there is only one VLAN.)

On the deployed VR server I used netcat to connect to the local port 8123, and it seemed OK.
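A local netcat on the VR appliance only proves the daemon is listening; it does not prove the path from the SRM server is open. A small Python sketch (host and port taken from the error message above) can be run from the SRM server itself to test the actual path:

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example, using the endpoint from the error message:
# tcp_port_open("172.19.13.23", 8123)
```

If this returns False from the SRM server but a local check on the VR appliance succeeds, the problem is on the network path (firewall, routing) rather than in the VR service itself.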

 

I asked one of my colleagues who tried to deploy the same SRM 5 setup, and he said he had used Linked Mode but had run into the same problem.

 

Has anyone else met this problem?

 

Regards,

Attila

Supported array topology


Hello,

 

I've seen many documents about supported SRM topologies, but all from the perspective of SRM itself; my question is about the array-based replication (ABR) topology that SRM supports. Let me explain the scenario:

 

Production site:

1 array named Storage_1_Site_1 with 50 TB of capacity

 

Recovery site:

1 array named Storage_1_Site_2 with 20 TB of capacity

1 array named Storage_2_Site_2 with 30 TB of capacity


Storage_1_Site_1 has 20 TB replicating to Storage_1_Site_2 and 30 TB replicating to Storage_2_Site_2, and this 1-to-N configuration is fully supported by the storage vendor.


Problem #1: in SRM at the production site we can only add the array manager pair once, for example Storage_1_Site_1 -> Storage_1_Site_2; when we try to add Storage_1_Site_1 -> Storage_2_Site_2, SRM tells us it found a duplicate array. Yet with the first array manager added, SRM detects that two remote arrays exist and discovers all the replicated volumes.


Problem #2: since SRM correctly detects all replicated volumes, we're able to create the protection groups. But if we fail over volumes from Storage_2_Site_2 to Storage_1_Site_1 and then try to reprotect, SRM unmaps the volumes located at Storage_1_Site_2 from all hosts at the recovery site :-(


Can I recover just one server that is part of a protection group?


vCenter 5.5

ESXi 5.5u2

SRM 5.8

Recoverpoint 4.1

 

1 RecoverPoint-replicated LUN that contains all tier 1 servers; the LUN is replicated to the DR site.

 

1 protection group, because you can't create more than one on the same LUN.

 

Question: is there a way to create recovery plans that recover only a subset of the servers residing on the replicated LUN, without removing protection for the not-to-be-recovered VMs at the protection group level?

 

example

 

I would like one recovery plan that brings up all of the servers, and also a recovery plan that brings up only the servers belonging to the same application, such as a database server with its corresponding application server.

Cleanup Operation - Should it Delete VM


Hi All -

 

I have a quick question regarding running the "Cleanup" operation after a test recovery plan. Everything works exactly as planned. However, at the recovery site the LUNs unmount, but the VMs remain in that vCenter inventory. Is this expected behavior, and if so, is it by design that you have to go in and manually delete these VMs?

Unable to create a protection group


This is an initial install of SRM 5.8 with RecoverPoint, vCenter Appliance 5.5, ESXi 5.5, and EMC SRA 2.2.

 

I have installed an SRM server at each site, configured RecoverPoint (it is replicating, and I changed control of the consistency group to SRM), created the mappings, configured the placeholder datastore, installed the SRA at each site, and configured the array pairs.

 

When I try to add a protection group for a LUN that is being replicated with RecoverPoint and has only one powered-on VM on it, I can see the array pair, I check the LUN, and I can see the VM I want to protect. Everything looks fine, but when I click Finish nothing happens. It never creates the protection group; the dialog simply closes and nothing else happens.

 

Any ideas?


Test recovery cleanup fails frequently if one of the hosts loses connection to a placeholder datastore.


Hi Guys,

 

I ran a test on a cluster with one host and one standalone host at the recovery site. Each host frequently loses its connection to the placeholder datastore while executing cleanup, and the error message "Error - No hosts with hardware version '8' and datastore(s) '"xxx"' which are powered on and not in maintenance mode are available." shows up in the SRM history.

 

I found the issue similar to one described in the SRM 5.1 release notes (VMware Site Recovery Manager 5.1 Release Notes).

 

However, I ran the test with SRM 5.5 and cannot find the issue in the SRM 5.5 release notes. Has this issue been fixed?

 

Furthermore, I found a similar issue here (Site Recovery Manager 5.5 Documentation Center) with the error message "Error - No hosts with hardware version '7' and datastore(s) '"xxx"' which are powered on and not in maintenance mode are available." But that issue is about recovery and test recovery, not about cleanup.

PS: I could only clean up successfully by running the test recovery and then waiting 15 minutes before the cleanup, as the workaround mentions.

 

Please help me figure out two questions:

1. Am I hitting the same issue as the one described in the Site Recovery Manager 5.5 Documentation Center?

2. Will this issue also happen with hardware version '8'?


Thanks in advance


SRM API for discovering replication devices?


Hi

I was wondering if anyone has a solution for automating the discovery of new SRA devices.

(attachment: srm.png)

 

I am trying to automate an end-to-end scenario where a newly created datastore, replicated by the storage array, is used for a new protection group.

But the problem is that the replicated datastore isn't visible until the Discover button is pushed.

I have been searching for a workaround for some time now.

I can't find an API for this procedure.

Maybe there is a system setting to enable it?
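In the absence of a public discovery API, one workaround pattern is to poll: trigger whatever discovery mechanism is available (a vendor CLI, an SRA rescan, or UI automation of the Discover button) and wait until the datastore shows up. The sketch below is hypothetical; `discover` and `list_replicated_datastores` are placeholder callables, not real SRM API calls.

```python
import time

def wait_for_datastore(discover, list_replicated_datastores, name,
                       timeout_s=600, interval_s=30):
    """Repeatedly trigger discovery until the named datastore appears.

    `discover` and `list_replicated_datastores` are hypothetical hooks
    for whatever mechanism you actually have; this is a polling pattern,
    not an SRM API.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        discover()  # the equivalent of pressing the Discover button
        if name in list_replicated_datastores():
            return True
        time.sleep(interval_s)
    return False
```

Once the datastore is visible, the protection group creation step can proceed; the polling simply bridges the gap the missing API leaves.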

Datastore group For SRM


Hi ,

 

I have one query: how many datastore groups can be created per VMware cluster?

 

In our environment we are using Site Recovery Manager with storage (array-based) replication. We need to create multiple protection groups, but the storage team says they can create only one datastore group per HA cluster. If we need multiple protection groups in SRM, we need additional datastore groups in the same cluster. Kindly provide your suggestions.

 

Thanks,

Nepoleon S

SRM 6.1 Site Pair


Wondering if anyone has been able to pair sites with SRM 6.1.

 

2015-09-17T16:37:34.491Z [02980 warning 'HmsProxy'] Dr::Providers::Hbr::HmsProxy::HmsServerMonitor::CheckRemotePair: Unable to find the remote site

2015-09-17T16:37:34.491Z [02980 verbose 'HmsProxy'] Dr::Providers::Hbr::HmsProxy::HmsProxyImpl::SetRemoteInfoFailed: Unable to get remote HMS server info, error=

--> (dr.fault.RemoteSiteNotFound) {

-->    faultCause = (vmodl.MethodFault) null,

-->    msg = ""

--> }

Change VRM site names


I accidentally used the same VRM site name for both sites. How can I change it? I tried doing it from the web link, but it will not change.

Newly created datastore not showing up


I'm building out a new NetApp LUN that needs to be replicated to our DR site. I've built the volume and LUN, and configured the SnapMirror relationship. The SnapMirror is running successfully as scheduled. But when I went to create the SRM protection group, I had no datastore groups to select.

 

I'm sure there is something I'm overlooking. What do I need to do to get the datastore to show up as an option?

 

We have a pretty standard configuration.

vSphere/vCenter 5.5

SRM 5.8

NetApp 8.2.3

SRM 6.x Certificate chain issue vCenter 6.0 Update 1


Does anyone have a fix for the Web Client in vSphere 6 Update 1 to correct the certificate chain issue with SRM 6.x? I have upgraded SRM from 5.8 to 6.x, and vCenter to 6 Update 1, but when clicking on the SRM link in the Web Client and going to Sites, it comes up with the external PSC SDK link and "certificate chain not verified."

 

I have an external PSC server for two vCenters and have replaced all SSL certificates with custom PKI certificates from our root CA server. Everything is set up correctly, and SRM appears to be running, since I see its tasks show up in the client every now and then.
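One way to narrow this down is to check what certificate chain the PSC SDK endpoint actually presents, independently of the Web Client. The sketch below uses Python's standard ssl module; the host, port, and CA bundle path are placeholders for your environment.

```python
import socket
import ssl

def chain_verifies(host, port, ca_bundle=None):
    """Attempt a verified TLS handshake; True only if the presented
    certificate chain validates against ca_bundle (or the system trust
    store when ca_bundle is None)."""
    ctx = ssl.create_default_context(cafile=ca_bundle)
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except (ssl.SSLError, OSError):
        return False
```

If this fails against your custom root CA bundle, a common cause of "certificate chain not verified" is an endpoint that serves only the leaf certificate without the issuing intermediates, so that is worth checking first.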

 

Any help is appreciated.


SRM 6.0.0.1 - Recovery Plan - IP Customization


Hi,

Please help me: I have a problem with manual IP customization during the recovery process.

I always get an error: Cannot complete customization, possibly due to a scripting runtime error or invalid script parameters (error code 255). IP settings may have been partially applied.

I found this kind of error in the release notes, but that was for SRM 5.5.

I've attached the log.

 

Regards,

Sebastian

SRM 5.5 - vSphere replication - Error: Unable to reverse replication for the Virtual Machine. A snapshot operation cannot be performed


Hi all, we have just run a test DR failover of a couple of VMs from our protected site to our failover site, and everything failed over perfectly: both VMs came online and all applications are working well. However, when we came to reprotect the VMs, we got a couple of errors.

 

One of the VMs seems to have gone through the reprotect process fine; it's still running but hasn't moved past 89% for some time. The VM does have a couple of large (just under 2 TB) VMDKs. Is the slow progress just a direct result of the large VMDKs?

 

More worrying, though, is the other VM, which won't reprotect at all. It generates an error each time I click the 'Reprotect' button (Error: Unable to reverse replication for the Virtual Machine. A snapshot operation cannot be performed).


Does anyone have any ideas as to the cause?

 

Thanks in advance for any assistance.

 

Andy

Moving off SRM array-based replication to the vSphere Replication appliance


Is there any documentation and/or experience with moving off an EMC RecoverPoint appliance (RPA) to the vSphere Replication appliance with SRM?

Any information and/or experience with this scenario?

 

SRM 5.0 | vCenter 5.0 with EMC RecoverPoint appliance

 

SRM 6.0 | vCenter 6.0 with vSphere Replication Appliance

Impact of Migrating SRM Recovery Site VMs


Hi All--

 

I had a quick question regarding moving virtual machines at the recovery site. During various test recoveries the VMs remain at the recovery site in a powered-off state. However, when I need to do any host maintenance or move these VMs, it gives a warning.

 

WARNING: One or more of the entities involved in the virtual machine migration action are managed by a solution.

Solution Site Recovery Manager manages the selected virtual machine. You should not modify the virtual machine directly. Use the management console of the solution to make changes.

Do you want to proceed?

 

My question: will this have any impact from a recovery and/or protected site perspective? Also, I cannot find in the SRM portion of the Web Client how to migrate a VM.

Shared SRM for distinct versions of vCenter and ESXi clusters and Recovery Time estimator


Hello Community,

 

I'm quite new to SRM and may not have thoroughly read the manual, so expect some novice-level questions.

I just wanted confirmation regarding an implementation of SRM version 5.0 (it may need an upgrade due to compatibility issues with storage arrays) for various vSphere clusters:

  • 5 clusters vSphere 5.0 managed by a vCenter server v5.0,
  • 6 clusters vSphere 5.5 managed by a vCenter server v5.5.

 

In case I want to implement a single set of clusters at my DR site (at version 5.5, at least), I'll have to install two SRM plugins:

  1. one to enable pairing with the one in vCenter 5.0
  2. and another one to enable pairing with the plugin in vCenter 5.5. Right?

 

Note: migration and upgrade of the v5.0 VMs and upgrade of the v5.0 ESXi hosts are already underway, but progressing very slowly due to organizational complexities, so upgrading as a prerequisite is unfortunately not an option.

 

Another question: have you ever seen any kind of recovery time estimator for SRM? Of course, I guess it would need a lot of parameters or assumptions: hardware platform; number and versions of ESXi hosts, vCenters and SRM; number and configuration of VMs; number of datastores; VMs per datastore; VM start-ups per host; level of customization with specific scripts in recovery plans; number of recovery plans; controlled or real disaster recovery; etc.

Nothing is better than real-life testing, but such an estimator would help a lot in some internal decision-making processes.
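Not aware of an official estimator, but as a rough illustration of the kind of model one could build, here is a toy back-of-envelope calculation in Python. Every parameter (boot time, power-on concurrency, storage promotion time, per-VM customization cost) is an assumption to be replaced with values measured in your own test recoveries; this is not an SRM formula.

```python
import math

def estimate_recovery_minutes(num_vms,
                              avg_boot_min=2.0,          # assumed average guest boot time
                              concurrent_power_ons=10,   # assumed power-on concurrency
                              storage_promote_min=10.0,  # assumed time to promote replica LUNs
                              per_vm_custom_min=0.5):    # assumed IP customization cost per VM
    """Toy recovery-time model: storage promotion, then power-on waves,
    then per-VM customization overlapped across the concurrency limit."""
    power_on_waves = math.ceil(num_vms / concurrent_power_ons)
    boot_time = power_on_waves * avg_boot_min
    customization = num_vms * per_vm_custom_min / concurrent_power_ons
    return storage_promote_min + boot_time + customization
```

Plugging in 600 VMs with these assumed defaults gives on the order of 160 minutes, which mostly shows how strongly power-on concurrency and boot time dominate; datastore count and recovery-plan scripting would add further terms that only testing can calibrate.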

 

In our situation we would start with simple recovery plans (fewer than 3), handling approximately 1,200 VMs in total, with only IP address changes, using the VMware product versions mentioned above. Platform details:

Clusters v5.0: 32 Dell PE M610, 11 M620 and 7 M910; 114 CPUs and 336 cores in total; 10.7 TB RAM; 600 VMs; 195 datastores (sigh), all FC SAN, 138 TB used.

Clusters v5.5: 4 HP BL460c G8 and 44 IBM/Lenovo Flex x240; 96 CPUs and 400 cores in total; 18.5 TB RAM; 600 VMs; 90 datastores, all FC SAN, 161 TB used.

All replication is array-based, on a mix of EMC VNX, HP 3PAR and IBM Storwize FC SAN arrays.

Later, we'll implement application/service-based recovery plans.

 

Thanks a lot for your support.

Regards,

Cyril
