Data capacity eaters

You can check the source of your data capacity eaters in the volume in question, e.g. /volume1:

du -h -s *

You will get a list of all working directories together with the capacity they eat, like this:

2.3G @ActiveBackup
672K @AntiVirus
36K @MailScanner
16K @S2S
3.8M @SYNO.FileStatio.core
748K @SYNO.FileStatio.core.gz
23M @SynoDrive
12K @SynoFinder-LocalShadow
1.3M @SynologyApplicationService
40M @SynologyDriveShareSync
.... and the list is actually much longer
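The alphabetical listing above is hard to eyeball, so it helps to rank the eaters by size. A minimal sketch, assuming GNU coreutils (DSM ships them; `sort -h` is not available in BusyBox builds) and the /volume1 path from this thread:

```shell
#!/bin/sh
# Rank capacity eaters instead of eyeballing the alphabetical list.
# -x stays on one filesystem, -s summarizes each entry, and sort -rh
# orders human-readable sizes largest first.
du -xhs /volume1/* 2>/dev/null | sort -rh | head -n 20
```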

And here you can find some trouble:

e.g. my Syno Drive DB is about 78.3GB (on this NAS),
but according to this listing it eats:

194G @synologydrive
+ the size of all Shared folders under Drive operation

When you go one level down into the subfolder /volume1/@synologydrive
and run the same command, you will get:

64K @clientd
194G @sync
12K app_integration
506M log
Then into @sync:
0 dbapi.lock
9.0G delta
13M file
36K job-db.sqlite
32K job-db.sqlite-shm
4.0M job-db.sqlite-wal
4.0K log
12M log-db.sqlite
32K log-db.sqlite-shm
96K log-db.sqlite-wal
0 logdb.lock
165M node_delta
12K notification-db.sqlite
32K notification-db.sqlite-shm
0 notification-db.sqlite-wal
0 notificationdb.lock
464K old_view
184G repo
20K syncfolder-db.sqlite
148K user-db.sqlite
32K user-db.sqlite-shm
384K user-db.sqlite-wal
408M view
4.7M view-route-db.sqlite
32K view-route-db.sqlite-shm
1.1M view-route-db.sqlite-wal

So there are two candidates to check:
9.0G delta
184G repo
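Instead of cd-ing down level by level, GNU du can walk several levels in one pass. A sketch, assuming the thread's /volume1/@synologydrive path:

```shell
#!/bin/sh
# -d 2 (--max-depth=2) prints totals two directory levels deep in one
# run, so @sync and its children (repo, delta, ...) show up together.
du -xh -d 2 /volume1/@synologydrive 2>/dev/null | sort -rh | head -n 15
```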

Then into repo:

4.0K -
1.1G 1
5.2G 2
3.6G 3
469M 4
574M 5
3.6G 6
11G 7
2.1G 8
1.5G 9
4.0K A
4.0K B
4.0K C
4.0K D
4.0K E
4.0K F
4.0K G
13M H
24M I
4.0K J
4.0K K
4.0K L
4.0K M
4.0K N
4.0K O
536K P
12M Q
4.0M R
988K S
61M T
151M U
72M V
43M W
21M X
15M Y
4.7M Z
4.0K _
211M a
5.4G b
2.4G c
313M d
11M e
3.4M f
1.7G g
11G h
3.4G i
16G j
79M k
17G l
1005M m
41G n
54G o
2.7G p
180M q
73M r
445M s
420M t
4.0K u
4.0K v
4.0K w
4.0K x
4.0K y
4.0K z
Here is the next level of this data-eating mess.

I will find a way to understand it. If you have an idea - share it.
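One way to sanity-check the 184G figure is to sum the repo buckets yourself and compare with the parent total. A sketch, assuming GNU du (`-b` for exact byte counts) and the path layout shown above:

```shell
#!/bin/sh
# Sum exact byte counts of every repo bucket and print the total in GiB;
# it should land close to what du reports for repo itself.
du -xsb /volume1/@synologydrive/@sync/repo/* 2>/dev/null \
    | awk '{ sum += $1 } END { printf "total: %.1f GiB\n", sum / (1024^3) }'
```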
-- post merged: --

The same goes for iSCSI, which uses /volume2 for storage (incl. snapshots).
Following the previous example, it eats 244GB in /volume1/@iSCSI.
In my setup I have just two LUNs = 2x 20GB thin provisioned, plus two snapshots per LUN,
which is OK, because in /volume2/@iSCSI I have 39GB of data.

/volume1/@docker ... the next candidate, with 22GB ... really beyond any imaginable boundary.

I need to find a way to understand this.
-- post merged: --

The next candidate to review is @sharesnap, with almost 1TB.
According to a rather vague Syno KB entry, it is "Snapshots of a shared folder: @sharesnap".
-- post merged: --

The same @sharesnap is on Volume3 as well, with almost 300GB,
and @synologydrive .... I have no idea why that one is there too.
Ticket issued ... now I need to "throw the dice" on who from the Syno helpdesk will handle it :cool:

Never mind, solved:
- /volume1/@iSCSI seems to be a leftover: it was a target that previously served single LUNs on my volume1, deleted a year ago. Now only /volume2/@iSCSI is the defined target for my LUNs (and there the capacity value is OK).
- The source of the capacity eating is /volume1/@iSCSI/EP_unmap.
- It contains just a single file whose name is constructed like the LUN target name.

- After renaming the folder /volume1/@iSCSI/EP_unmap ... everything is OK and my two LUNs are running without trouble.

To remove the structures under the old @iSCSI folder:

rm -rf /volume1/@iSCSI/*

- it will save you trouble (make sure first that it really is the stale copy, not the active target).

This mess was created by the Synology architecture of the iSCSI package: when you delete a LUN via the DSM GUI, it never deletes the LUN data from your volume. OMG.
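Before running the rm -rf above on your own box, it may be worth confirming the tree really is stale. A cautious sketch (the lsof check assumes lsof is installed, which stock DSM may not have; paths are from this thread):

```shell
#!/bin/sh
# Rename first - cheap and reversible - and only delete once the LUNs
# have run cleanly from /volume2 for a while.
STALE=/volume1/@iSCSI   # verify this path on your own setup

[ -d "$STALE" ] || { echo "nothing at $STALE"; exit 0; }

# Abort if any process still holds a file open under the stale tree.
if command -v lsof >/dev/null 2>&1 && lsof +D "$STALE" >/dev/null 2>&1; then
    echo "files still open under $STALE - do NOT delete" >&2
    exit 1
fi

mv "$STALE" "${STALE}.old" \
    && echo "renamed; test your LUNs, then: rm -rf ${STALE}.old"
```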

/volume2 is a btrfs-based pool.
99.9% of the capacity in /volume2/@sharesnap is contained in .... the /volume2/@sharesnap/photo subfolder,
which is the same size (-1%) as
/volume2/photo .... which is one of my Shared folders.

No DLNA service in use,
no Moments in use,

just a pure SMB share for my RAW target.

/volume2/@sharesnap/photo contains:
file: desktop.ini .... reasonable, from one of the Win workstations
file: GMT-2020.07.06-15.37.30 ..... something from 6th July 2020, 15:37
and yep - this is a snapshot of the entire main "photo" folder from that time.

The question is: which subsystem is the snapshot producer?
- no snapshot for this folder is set up in the Snapshot package
- there is only a one-way rsync to another NAS
It's about 36 days of response time from Syno support.

The data eatery magic is now partially explained, but only for the Syno Drive package:

..... The repo and delta folders contain file chunks from the versioning database.

The Admin Console doesn't exactly use du, but apparently uses similar functions, though the developers will not disclose which exactly.

The Admin Console displays the data usage of the @synologydrive folders.

Verdict: "I have no idea, dear customer."

My message for all the crazy Syno Drive specialists:
please stop recommending Syno Drive for photo or even movie editing workflows! Here is one more reason.
> 99.9% of the capacity in /volume2/@sharesnap is contained in ....

IIRC, @sharesnap reports the base folder capacity... That sounded unclear, so an example...

I have 500GB in /photos, which is snapshotted and Snapshot Replication is activated. During the following week I add 2GB of pics to that folder ... @sharesnap reports 502GB. Whether this is a Synology thing or a btrfs thing, IDK. What I do know is that my reported disk consumption "exceeds" my volume size.
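A likely explanation for du "exceeding" the volume: du counts every btrfs snapshot at full size because it cannot see shared extents. btrfs's own accounting can separate shared from exclusive data. A sketch, assuming btrfs-progs is available and using the paths from this thread:

```shell
#!/bin/sh
# btrfs filesystem du prints Total / Exclusive / Set shared columns.
# "Exclusive" is the only capacity that deleting the snapshot would
# actually free; "Set shared" is data shared with the live folder,
# which plain du counts twice.
if command -v btrfs >/dev/null 2>&1; then
    btrfs filesystem du -s /volume2/@sharesnap/photo /volume2/photo \
        || echo "run this on the NAS against the real btrfs paths" >&2
else
    echo "btrfs-progs not found; run this on the NAS itself" >&2
fi
```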
