Solved: File size limitation with Cloud Station?

What is the file size limit for Cloud Station? I can't get a file that is just over 1TB in size to sync between Cloud Station Server and Cloud Station ShareSync.

Additional details/backstory:
I bought a DS1019+ and am using my old DS213+ to keep a remote copy of certain files/folders for redundancy. I had previously used Cloud Sync with an AWS S3 bucket and it worked fine, but I figured if I could eliminate the bucket fees, I'd be better off in the long term.

There are two folders I've set up for syncing: "Surveillance" and "PCBackup", both of which are self-explanatory from their contents. I have no apparent issues with the "Surveillance" folder. The "PCBackup" folder contains a system image of my main PC, and this is where I run into problems. It will sync over the smaller files but ignores the main file, which is just over 1TB in size. No errors are reported and the system reports everything is in sync, which is concerning as well.

I've verified that my ShareSync folder file-size limits are set to 0 (unlimited), and I've found nothing that suggests there is a size limit, but apparently there is one. I've also set this up twice (deleting the database in between) to make sure it wasn't a fluke.
 
Maybe switch to Hyper Backup instead, considering you're using two Synology NAS units, if it's not something that has to be in sync within minutes? HB does a great incremental backup anyway, and it will be fast on any successive backups.

The surveillance folder needs to be as close to real-time as possible. If my house is robbed and the NAS is stolen or if the house burns down taking my NAS with it, I want to have as much video of the incident as possible.

My concern is more about what the limit actually is, so I can make an informed decision about using Cloud Station. Right now I can only assume all files are successfully syncing, since that is what it reports. But I know this is not the case, so I'm a bit concerned that something vital may not get synced.

I fell back to trying Hyper Backup for the PCBackup folder last night, but it runs very slow: 12 hours so far and it's only 34% complete, likely because I have compression and encryption turned on. The network shouldn't be an issue, since the DS1019+ is running a bonded 2Gb (2×1GbE) connection and is on the same switch as the DS213+ right now. The downside of HB for this folder is that I won't have new computer images every day or even every week, so there's no schedule I can set that reliably runs when I put a new image in the folder without wasting a lot of resources. I also don't care about versions, since this is the last line of defense in my backup strategy and the file sizes are too large for my old NAS, which only has two 4TB drives.
 
Re file size limitations: There is an upper limit on the file size, however: 10GB per file. So Cloud Station is not the right approach for transferring large files to a NAS. Try:
  • direct file transfer (File Station, a mounted shared network disk)
  • or Hyper Backup, as Rusty recommended.
Cloud Station is not built for "replication" of files this big.
I keep my bigger files in LUNs, but that uses the iSCSI protocol, which is faster than SMB (Windows) or HTTPS (Cloud Station), so again, Cloud Station is not the right approach.

Your connection architecture between the two NAS is based on a LAN connection, am I right?
Let's calculate the transfer speed:
1TB file × 8 bits × 1,048,576 (Tb to Mb) = 8,388,608 Mb
A fast HDD setup in the DS213+ can perform at about 1300Mbps (using both 1GbE ports with link aggregation),
so 8,388,608 Mb / 1300Mbps ≈ 1.8 hours of "theoretical" clean file replication; that is the minimum possible time for this task, before compression and encryption overhead (see the quick sketch below).
... but life is like a box of chocolates, so you have to check the boundaries of your setup (HDD performance, NAS compression/encryption, NAS Ethernet performance); lots of bottlenecks are possible.
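
For anyone who wants to rerun that estimate with their own numbers, here is a minimal Python sketch of the same arithmetic (the 1300Mbps figure is the assumed effective throughput from above, not a measured value):

# Theoretical minimum time to replicate a 1TB file over a LAN.
# Assumes ~1300Mbps effective throughput (bonded GbE + fast HDDs)
# and ignores compression/encryption overhead.
file_tb = 1
megabits = file_tb * 8 * 1048576   # TB -> terabits -> megabits
throughput_mbps = 1300             # assumed effective rate
hours = megabits / throughput_mbps / 3600
print(f"~{hours:.1f} hours")       # ~1.8 hours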

Re security reasons: Do you have both NAS in the same house (same place, or different rooms)? I ask based on your description (use of the same switch). How can you solve the problem you stated:
"If my house is robbed and the NAS is stolen or if the house burns down taking my NAS with it, I want to have as much video of the incident as possible."
... if the backup is still in the same "risk location"?
 
Re file size limitations: There is an upper limit on the file size, however: 10GB per file. So Cloud Station is not the right approach for transferring large files to a NAS. Try:
  • direct file transfer (File Station, a mounted shared network disk)
  • or Hyper Backup, as Rusty recommended.

The 10GB limit per file is good to know, thank you! It seems like a bug that their handling of a file that's too large is to just ignore it and report everything as synced, rather than raising an error or warning. Does Synology document this limitation anywhere that you know of? The closest thing I found was not a direct reference to a maximum file size but rather a setting range:
  • Filter by file size: In the box next to Don't sync files over, enter a file size between 1~10240 (MB). 0 means unlimited.

I also found conflicting information in the Synology release notes for Cloud Station, which state for version 3.2-3475: "Files greater than 10GB can be synced."


Re security reasons: Do you have both NAS in the same house (same place, or different rooms)? I ask based on your description (use of the same switch). How can you solve the problem you stated:
"If my house is robbed and the NAS is stolen or if the house burns down taking my NAS with it, I want to have as much video of the incident as possible."
... if the backup is still in the same "risk location"?
Good catch. :) I'm testing the configuration with the two NAS units next to each other right now to reduce troubleshooting complexity. Once everything is working smoothly, the DS213+ will be moved off-site. Right now, files are still going up to an AWS S3 bucket until this is figured out.

The speed I'm getting with Hyper Backup is a pitiful ~7MB/sec on average for some reason. It's been running for 15 hours now and has just surpassed 500GB (41%). It is being conservative and not using all of the resources available to it.
 
Re the 10GB sync limit:
Either way, syncing a 10GB file is a killer for any WAN user with a common internet download speed of up to 50Mbps (nominal speed, not real):
10GB × 8 × 1024 = 81,920 Mb
That's about 27 minutes at an average 50Mbps,
or 54 minutes at 25Mbps,
or 108 minutes at half that again... (see the quick sketch below)
The main point: while a single file is syncing, you can't sync another, so the abnormal waiting time (you must also factor in the bottleneck of the other side's upload speed) can't support collaboration, which is the primary target of cloud file syncing. This cloud sync is not a "valid" backup approach for the 1TB file size you need.
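
The same arithmetic for a few nominal speeds, as a quick Python sketch (the speeds are illustrative, not measurements from this thread):

# Sync time for a single 10GB file at a few nominal download speeds.
file_gb = 10
megabits = file_gb * 8 * 1024      # GB -> gigabits -> megabits
for mbps in (50, 25, 12.5):        # illustrative nominal speeds
    minutes = megabits / mbps / 60
    print(f"{mbps:>5} Mbps -> ~{minutes:.0f} min")
# roughly 27, 55 and 109 minutes (rounded to 27/54/108 above)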
If the value of the data is really high (for security reasons), try High Availability (a NAS cluster), but you would have to purchase a different NAS set. Everything must start from the preparation stage: defining the primary target. As I understand it, your target is pretty demanding (1TB with encryption/compression + sync). How much you can pay for that security must be another part of your evaluation. But don't expect a cheap DS to help you with this.

Last, a performance consideration from my side:
You can't count on the small DS213+, with its dual-core 1GHz CPU (2 threads only) and plain DDR RAM (512MB, expandable to 1GB), to provide the same performance for a 1TB transfer as your source DS1019+ (4C/4T at 1.5GHz, DDR3 1866). Plus, the DS1019+ is under really massive pressure from the 1TB encryption/compression and backup tasks. That is the next bottleneck. I don't know what kind of disks or RAID you have; the switch is also a frequent bottleneck.
 
I've had no issues syncing files of 25GB+ with Cloud Station ShareSync (soon to be Drive ShareSync) to NAS devices in both local and remote locations.
I tested and dropped a 50.56GB file into the same synced folder and it went across without issue. But it still ignores the 1.04TB file.
 
Is it possible the file in question contains characters unsupported by Cloud Station?
Or do you need to whitelist/add the specific file extension to the filter list?
Good suggestion, but I checked. The filename is essentially a GUID with no special characters and has a .vhdx extension. Other .vhdx files in the folder sync across without issue.
 
Update... it started working! After sitting there for many hours doing nothing, last night it started pushing that 1TB file across. While it isn't fast (~14MB/sec), it's twice the speed I was getting with Hyper Backup. I don't currently have encryption turned on, though, so I'm not sure if that is the difference; I'll test that later. I have no idea why it takes so long to queue up, though.

while a single file is syncing, you can't sync another
This is not quite accurate. It is single-threaded per folder, but if you are syncing multiple folders, the other folders are not held up. In my case, this has had no effect on the syncing of my surveillance folder.

You can't count on the small DS213+, with its dual-core 1GHz CPU (2 threads only) and plain DDR RAM (512MB, expandable to 1GB), to provide the same performance for a 1TB transfer as your source DS1019+
There are no issues with the DS213+ doing just this job. I see no severe memory, CPU, or network pressure on the NAS. This is the ONLY thing this NAS is used for, and it will be the third location for the computer image files, so even going a bit slow is OK. I appreciate your concern and feedback, though.

My first line of defense is creating an image of my main drive to another drive in the computer (the fastest backup).
My second line of defense is copying the image to my DS1019+ manually, so that there is no direct network connection through which something like ransomware could also reach my backups. This location will hold several images over time.
My third and final line of defense is syncing the latest backup off-site to the DS213+ in case of robbery, fire, or another disaster resulting in the loss of both my NAS and computer.
 
Wondering if there was an inconspicuous background/indexing process that it was working through before syncing the file.

Glad to hear it started syncing! I have about 150k+ files and 16TB+ to sync between 3 devices, and I've learned patience is key with Drive when handling large sync jobs (especially on the first configuration).

The only other thing I thought of is whether the file was somehow showing as in use and unable to sync during that time. Though I thought an error message or notification appears when that is the case (at least it does for the Drive client).
 
