Move Cloud Sync from old to new NAS w/out starting over? Or other possibilities?


DS218+ 8GB RAM, DS212
I've completed migrating all content to my new DS218+ from DS212.

I had cloud sync to google set up for my photos on the DS212. Ideally there is a way to enable the same sync on the new DS218+ w/out it going nuts and trying to re-copy everything (we're talking >60,000 photos) up to the Google Cloud. I don't want it to have to start over again.

I don't see any option to back up or copy my Cloud Sync config/status from the DS212 to the DS218+.

Has anyone done this, and was there a way to move the Cloud Sync over to the new NAS w/out starting over?

If that's not possible, as an alternative I can keep all of my Cloud Sync duties as they are on the DS212, and just sync any new content from the DS218+ to the DS212. But that raises a similar question about avoiding "starting over." I currently have exactly matching directories/content on the DS218+ and DS212. If I set up a Local Folder and USB (single version) recurring sync from the DS218+ photo folder(s) to the DS212 folders, is there a way to keep Hyper Backup from thinking it has to copy every file from the DS218+ to the DS212 the first time it runs? Since all of the folders/files on the two NAS are currently exactly the same, that would be a huge waste of time.
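For anyone wanting to sanity-check that the two NAS really hold identical trees before trying any trick, an rsync dry run lists exactly what a first sync pass would need to transfer. A minimal sketch below, using throwaway temp directories as stand-ins for the real shares (on real hardware you'd substitute paths like /volume1/photo, with the remote side reached over SSH); all names here are made up for illustration:

```shell
# Stand-ins for the two NAS photo shares (real use: substitute e.g.
# /volume1/photo on each NAS, remote side over SSH).
SRC=$(mktemp -d)
DST=$(mktemp -d)
mkdir -p "$SRC/Testsub1" "$DST/Testsub1"
echo photo > "$SRC/Testsub1/a.jpg"
cp -p "$SRC/Testsub1/a.jpg" "$DST/Testsub1/a.jpg"   # -p preserves mtime, as a completed sync would

# --dry-run makes no changes; --itemize-changes prints one line per item
# rsync *would* transfer. Directory attribute-only lines end in '/', so
# filtering them out leaves just the files that would actually be copied.
CHANGES=$(rsync -a --dry-run --itemize-changes "$SRC/" "$DST/" | grep -v '/$' | wc -l)
echo "files that would be copied: $CHANGES"
```

If the count is zero, the trees already match by size and modification time, and a first sync run should have nothing to copy.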

Thanks for any wisdom on this...
I'm going to move the question below to its own thread, so it isn't lost here. I think this is pretty important info for folks; it sure has complicated my life so far. :)
I'm actually figuring out, I think, that my real problem is that there is no way to start syncing two existing folders on two different NAS.

From what I've read, if you tell Hyper Backup to sync existing FolderA on NAS-Source to existing FolderA on NAS-Target, when Hyper Backup sees there is already a FolderA on NAS-Target, rather than putting content there, it will create a new directory called FolderA-1 and put all the files from the source NAS FolderA there.

So as far as I can tell there is no way to get Hyper Backup to compare the contents of an existing FolderA on the Target NAS to FolderA on the source NAS, and add the changes to the target folder.

Have I got this right?

Is there a way around this limitation?

Will I feel stoopid when you tell me the easy way around this? (I know the answer to this one.) :)
OK so I experienced something like this t'other day. Short answer: if you're moving the data between drives, I found it easier and quicker just to let the new task recreate the sync'ed backup.

It's easy to move or re-use a multi-version vault: just relink it to a new task. But I found moving a single-version folder was stupendously slow and prone to stop part way through when using File Station. I normally use CCC on Mac to ensure copies are sync'ed correctly, but that would've meant bringing the files to the Mac and back ... it might have been quicker!

My plan had been to make a copy of the task's folder from SHR storage to USB, then delete the SHR one. On my source NAS I made a new single-version task with a new name (it becomes the folder name) and destination on the remote NAS's USB drive. I made sure the destination folder was created, for permissions etc, and didn't run the task (or stopped it very quickly, don't recall which). Then when the copy jobs were finished I was going to rename it as the new task's destination, after renaming the new folder to something else (so I could check permission etc if it was working).

I gave up when I realised that all the Plex Media Server small database/cache files were taking ages and that it would probably be just as quick to let the new task create the sync'ed copy itself. I wasn't wrong as it completed that evening and well before the normal midnight schedule.

If you're not moving the data but wanting to use the same shared folder then you could try what I was attempting (after I'd got the files moved):
  1. Rename original task folder so its name is freed up, e.g. myfolder -> myfolder_temp
  2. Make new single-version task, e.g. myfolder, run it enough to get the folder created and task config files.
  3. Rename this new folder to save it, e.g. myfolder -> myfolder_saved
  4. Rename original task folder back, e.g. myfolder_temp -> myfolder
  5. Try running the HB task on the source NAS
I've no idea if it will work. If it refuses then I'd try swapping over the config files so the new task config files are being used.
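The renames in the steps above are all plain folder moves on the target NAS, so they can be sketched as shell commands (run over SSH, for example). This is only a simulation in a temp directory using the hypothetical myfolder names from the steps; step 2 is really done in DSM by creating and briefly running the new task, which is faked here with mkdir:

```shell
BASE=$(mktemp -d)                 # stand-in for the share holding the task folder
mkdir -p "$BASE/myfolder"         # pretend: the original single-version task folder
echo data > "$BASE/myfolder/img.jpg"

# 1. Free up the original folder name
mv "$BASE/myfolder" "$BASE/myfolder_temp"
# 2. In DSM: create the new task and run it just long enough to create
#    its folder and config files (simulated here with a bare mkdir)
mkdir -p "$BASE/myfolder"
# 3. Park the freshly created folder so it can be inspected later
mv "$BASE/myfolder" "$BASE/myfolder_saved"
# 4. Put the original data back under the name the new task expects
mv "$BASE/myfolder_temp" "$BASE/myfolder"
# 5. Back in DSM: run the Hyper Backup task on the source NAS and see
#    whether it picks up the existing data
```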
@fredbert - Thanks for the suggestion! I've done some initial testing and it appears what you recommend does work w/some slight modifications. This is great, and will save me (or my NAS I mean) a ton of time/cpu cycles. Thanks very much.

The one change I have to make is a result of Hyper Backup putting its backups into a new folder it creates (required in the backup task settings). Example below of what I'm going to do to enable backup of my photos:
  1. Create the photos backup task on the source NAS, start it, but as you suggest, cancel it after it starts copying files over to the target.
  2. Go to the target NAS and confirm the new backup folder was created by the backup task in the target folder.
  3. Move all of my photos on the target NAS into that new backup folder
  4. Restart the backup task on the source NAS; it "sees" that the files it's been asked to back up are already there, so it doesn't try to copy anything over, and the backup completes very quickly.
I tried this with about 9GB of data. Created and ran the backup task normally, and things were going slowly as expected as the data was copied over. Then I stopped the task, moved the existing data already on the target NAS into place in the new backup task directory, and re-ran the backup. It completed in less than a minute.
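Steps 2-3 above boil down to one move on the target NAS: relocating the pre-existing copies into the directory the cancelled task created. A sketch below in a temp directory, reusing the Test / xxxxx example names from this thread (substitute your own share and task directory, and check the partial run's layout first, since the backup may nest the source folder itself inside the task directory):

```shell
TARGET=$(mktemp -d)                               # stand-in for the Test share on the target NAS
mkdir -p "$TARGET/Testsub1" "$TARGET/Testsub2"    # data that already matches the source
echo photo > "$TARGET/Testsub1/pic.jpg"

mkdir -p "$TARGET/xxxxx"                          # simulates the directory the cancelled task created

# Step 3: move the existing data inside the task's directory, so the
# restarted backup finds everything already in place
mv "$TARGET/Testsub1" "$TARGET/Testsub2" "$TARGET/xxxxx/"
```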

Thanks very much for the cool suggestion, saves me a bunch of time/cpu cycles to get my new NAS to NAS backup in place.

Hyperbackup: Interesting details

HyperBackup (as far as I can tell) always creates a new directory in the selected target directory for each backup task. The backup data then goes into that new directory.

Example below, both the source and target NAS have the exact same folder structure:

Test (root level folder w/all my data)
- Testsub1
- Testsub2

I created an rsync copy (single version) task on the source NAS and told it to use Test as the target on the target NAS. During the backup task creation steps I am required to provide a Directory name to HyperBackup to tell it where to place the backup.

When the backup task runs on the new NAS, a folder with that directory name is created inside the target folder (Test in this case) on the target NAS, and the backup files from the source NAS are placed in that new folder.

Some shots below - xxxxx set as Directory in the task settings and resulting xxxxx folder created on the target NAS in the Test directory. The content coming from the source NAS will always be placed in that new directory created by the rsync task. Hopefully that makes sense.

So on the target NAS you end up with:

- xxxxx (Directory specified in the backup task settings)
  - Test
    - Testsub1
    - Testsub2

[Screenshot: Backup Directory setting (2020-03-13 14_24_19-Missy.jpg)]

[Screenshot: Resulting directory on target NAS (2020-03-13 14_27_22-Missy2.jpg)]
@fredbert - OK, something isn't right (maybe in my head?).

I deleted all of my backup tasks on my source NAS from last night, and deleted the target folder on the target NAS so I could do some more fresh testing this AM, wanted to make sure things were completely solid on this.

However, now in one of the two new backup tasks I created, I am not able to get this process to work. Followed the same steps as above: create the target shared folder on the target NAS, create the backup task on the source NAS, run the backup task partway and cancel it, move the existing files on the target NAS into the backup folder, re-run the backup task and let it finish. The backup doesn't work:

1. When I add files to the backup source and re-run the backup task, they aren't getting backed up to the target.
2. Files that weren't in the target directory the first time I let the backup task run completely aren't getting backed up.

Not sure what's going on - it seemed to be working fine last night. I'll have to keep looking at this to make sure I'm not missing something obvious, but it definitely is not backing up files properly at the moment.

The other backup task I created today works as expected (same as last night). Odd.

The only difference between the two tasks is that, when created, the task that isn't working included all the sub-folders under one folder, but the folder containing the sub-folders wasn't included. I then added the containing folder in the backup task settings.

I'm going to recreate the backup task that is failing and include the folder holding the sub-folders from the start and see if that resolves the issue.
OK, recreated the backup that was behaving strangely and it works as expected, like the other new backup task.

So lesson learned: if I don't select everything correctly the first time, delete the task and start over; apparently backup tasks may not play nicely when you update the folder content included in the task.
Thanks for the info, I'll check that next time.

I'm now doing a similar investigation of the Folder Sync options, and that appears to be doable via a similar process to what Fred recommended, w/the bonus that the target NAS doesn't need a new backup-directory-name folder above the backed-up files; I can back them up directly into the same structure on the target as on the source. Mo bettah.

So looks like I'll be able to "fool" folder sync by renaming directories/moving content around and it will skip copying over what's already there, and just sync new content. Testing now...this is a great distractor from the real world at the moment, I have to admit I'm mining this rabbit hole for everything it's worth. :D
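One cheap sanity check after "fooling" the sync with renames and moves: a recursive diff between source and target reports any mismatch outright, rather than leaving you to discover it when the sync silently re-copies. Sketch with temp-dir stand-ins (real shares would be compared over SSH or an NFS mount; all paths here are illustrative):

```shell
SRC=$(mktemp -d)
DST=$(mktemp -d)
mkdir -p "$SRC/Testsub1"
echo photo > "$SRC/Testsub1/a.jpg"
cp -rp "$SRC/." "$DST/"            # simulate a target that was already synced

# -r recurses into subdirectories; -q reports only whether files differ
if diff -rq "$SRC" "$DST" >/dev/null; then RESULT=match; else RESULT=mismatch; fi
echo "$RESULT"
```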
