VERY slow restore Hyper Backup

Hi guys. I'd like to pick your brains.
We had to do a restore for a NAS. It is 1.6 TB of data, and the restore is for everything.
The repository is on a DS412+ (no additional memory) with a 3x10 TB SHR volume.
Only one NIC is connected.
Internet connection: 350/50 Mbit.

The target is a DS216+II, no additional memory, and it is in the same network.

What we have done so far:
- The DS216 was remote (internet connection 50/50 Mbit). The max speed we got was 5 MB/s. Seems fair, but too slow.
- We brought the DS216 into the source network: average speed 5-15 MB/s.
That is where I was surprised. I did not expect 100 MB/s. But 15???

So our approach is to do some testing/checking:
- Set up NIC teaming on the source the proper way, then test and see what happens (see the sketch below).
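A minimal way to verify that a bond actually came up after teaming the NICs, assuming SSH access to the Linux-based DSM and that the bond got the default name bond0 (both assumptions; the same information is also visible in DSM's network settings):

```python
# Hedged sketch: read the kernel bonding status file to confirm the team is up.
# Assumes the Linux bonding driver and the default interface name "bond0".
from pathlib import Path

bond = Path("/proc/net/bonding/bond0")
if bond.exists():
    for line in bond.read_text().splitlines():
        # Keep only the lines that show the mode, link state and negotiated speed.
        if line.startswith(("Bonding Mode", "MII Status", "Slave Interface", "Speed")):
            print(line)
else:
    print("No bond0 found -- NIC teaming is not configured on this box")
```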

Another thing we considered is the internal speed of the DS412+. But looking at the Resource Monitor we do not get the feeling there is an issue: CPU is below 20% all the time, RAM below 55%, disk utilisation below 25%, and volume utilisation below 30%.

Could it be that, because this backup was started a long, long time ago, the recalculation takes a long time?
If you have any other suggestions, please let me know.
 
Well, with a remote restore and the upload/download speeds on your source and destination, you couldn't get more than about 5 MB/s over a 50 Mbit connection, so that's OK.
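As a back-of-the-envelope check (the ~10% protocol overhead here is an assumed ballpark, not a measurement from this setup):

```python
# Rough conversion from link speed (Mbit/s) to usable file-transfer speed (MB/s).
def max_transfer_mb_s(link_mbit_s: float, overhead: float = 0.10) -> float:
    return link_mbit_s / 8 * (1 - overhead)

print(f"{max_transfer_mb_s(50):.1f} MB/s")    # ~5.6 MB/s -- matches the remote restore
print(f"{max_transfer_mb_s(1000):.1f} MB/s")  # ~112 MB/s -- gigabit LAN ceiling, far above the 5-15 MB/s seen locally
```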

Locally the speed was better, but still: what kind of files are you restoring? Are they small? A large number of small files and a small number of large files are two very different scenarios.

It could simply be that the size and number of files in the backup are the reason for the slower speed(s).
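If you want to check which scenario you are in, a quick sketch like the one below gives a rough size distribution; the share path is a placeholder you would have to adjust:

```python
# Sketch: count files per size bucket to see whether the data set is dominated
# by many small files (slow to restore) or a few large ones (fast to restore).
import os
from collections import Counter

SHARE = "/volume1/public"  # placeholder -- point this at the share in question
buckets = Counter()
total_bytes = 0

for root, _, files in os.walk(SHARE):
    for name in files:
        try:
            size = os.path.getsize(os.path.join(root, name))
        except OSError:
            continue  # skip files that vanish or cannot be read
        total_bytes += size
        if size < 1 << 20:
            buckets["small (<1 MiB)"] += 1
        elif size < 100 << 20:
            buckets["medium (1-100 MiB)"] += 1
        else:
            buckets["large (>100 MiB)"] += 1

print(dict(buckets), f"- total {total_bytes / 1e9:.1f} GB")
```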
 
I have seen slow backups in the past. A lesson learned from the restore exercise:

I split the backups, effectively separating the "current" documents from the older/sleeping ones.
It is now possible to quickly restore "this year's" documents from a small backup set.
So we can start working within hours, then start the 2nd and 3rd restore when the first wave is done. Nobody needs all the data on the same day.
 
Can you explain how you do this?
 
I can only explain how we organised it; it will certainly be different in other use cases. You should puzzle out what works for you based on what data you need quickly in case of a disaster.

  1. Data (.doc, .xls, etc.) is organised in folders by year, so users currently work in the 2021 folder(s) on the public share. This year and the previous year (2020) go into a daily Hyper Backup run, say task number 1. This task also includes the packages and configs. It has day/week/month/year versioning (around 20 versions max). The task is small and is also run twice to two different external locations.
  2. All previous years are backed up in task number 2. These documents typically do not change; this backup is big and may take longer to restore, which is a calculated risk. It is a weekly task here.
  3. The video and photo folders, program backup folders and other stuff are backed up in HB task number 3. As this data is not crucial to restore quickly, it is all in one backup, and it is accepted that a restore may take some days.

At the beginning of the year, the folder list in HB task 2 is changed to add the oldest year folder from task 1, and, once that is OK, that folder is removed from backup number 1; then the new year folder(s) are added to task number 1.
If you need versioning for longer than 2 years, keep backup 1 and start a new one.
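To make the yearly shuffle concrete, here is an illustration of the bookkeeping only: the folder names, the 2015 start year and the /public layout are assumptions, and in practice the folder lists are edited by hand in the Hyper Backup task settings, not by a script.

```python
# Illustration of which year folders belong to the daily task (1) and the weekly task (2).
from datetime import date

def task_folders(today: date):
    current, previous = today.year, today.year - 1
    task1 = [f"/public/{current}", f"/public/{previous}"]    # daily: this year + last year
    task2 = [f"/public/{y}" for y in range(2015, previous)]  # weekly: everything older
    return task1, task2

t1, t2 = task_folders(date(2022, 1, 3))
print(t1)  # ['/public/2022', '/public/2021'] -- 2020 has just rolled out of the daily task
print(t2)  # ['/public/2015', ..., '/public/2020'] -- now covered only by the weekly task
```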

I try to keep backup tasks smaller (<500 GB) and needed to puzzle a bit to schedule them so they do not overlap.
I also feel more comfortable, as an issue with a multi-TB backup potentially leads to big data loss. My perception now is that the risk is spread a bit better as well.
 
