You can check which directories are eating your capacity in the root of a volume (e.g. /volume1): get a list of all working directories, including how much each one eats, with du -h -s *
like this (the real list is much longer):
2.3G @ActiveBackup
672K @AntiVirus
36K @MailScanner
16K @S2S
3.8M @SYNO.FileStatio.core
748K @SYNO.FileStatio.core.gz
23M @SynoDrive
12K @SynoFinder-LocalShadow
1.3M @SynologyApplicationService
40M @SynologyDriveShareSync
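Eyeballing a long du listing is error-prone; piping it through sort -rh ranks the eaters largest-first. A minimal sketch, demonstrated in a throwaway scratch directory (on the NAS you would run the same du | sort pipeline directly in /volume1):

```shell
# Rank directory sizes, largest first: du -sh per entry, then sort -rh
# on the human-readable sizes.
scratch=$(mktemp -d)
mkdir -p "$scratch/@big" "$scratch/@small"
dd if=/dev/zero of="$scratch/@big/blob" bs=1024 count=2048 2>/dev/null   # ~2 MiB
dd if=/dev/zero of="$scratch/@small/blob" bs=1024 count=8 2>/dev/null    # ~8 KiB
ranked=$(cd "$scratch" && du -sh -- * | sort -rh)
printf '%s\n' "$ranked"
rm -rf "$scratch"
```

The first line printed is the biggest eater, so that is the directory to descend into next.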
And here you can find some trouble:
e.g. my Synology Drive DB is about 78.3GB (on this NAS),
but according to this list it eats:
194G @synologydrive
plus the size of all of the shared folders under Drive's operation.
When you go one level down, into /volume1/@synologydrive,
and run the same command, you get:
64K @clientd
194G @sync
12K app_integration
506M log
Then into @sync:
0 dbapi.lock
9.0G delta
13M file
36K job-db.sqlite
32K job-db.sqlite-shm
4.0M job-db.sqlite-wal
4.0K log
12M log-db.sqlite
32K log-db.sqlite-shm
96K log-db.sqlite-wal
0 logdb.lock
165M node_delta
12K notification-db.sqlite
32K notification-db.sqlite-shm
0 notification-db.sqlite-wal
0 notificationdb.lock
464K old_view
184G repo
20K syncfolder-db.sqlite
148K user-db.sqlite
32K user-db.sqlite-shm
384K user-db.sqlite-wal
408M view
4.7M view-route-db.sqlite
32K view-route-db.sqlite-shm
1.1M view-route-db.sqlite-wal
So there are two candidates to check:
9.0G delta
184G repo
Then into repo, here is the next level of this data-eating mess:
4.0K -
1.1G 1
5.2G 2
3.6G 3
469M 4
574M 5
3.6G 6
11G 7
2.1G 8
1.5G 9
4.0K A
4.0K B
4.0K C
4.0K D
4.0K E
4.0K F
4.0K G
13M H
24M I
4.0K J
4.0K K
4.0K L
4.0K M
4.0K N
4.0K O
536K P
12M Q
4.0M R
988K S
61M T
151M U
72M V
43M W
21M X
15M Y
4.7M Z
4.0K _
211M a
5.4G b
2.4G c
313M d
11M e
3.4M f
1.7G g
11G h
3.4G i
16G j
79M k
17G l
1005M m
41G n
54G o
2.7G p
180M q
73M r
445M s
420M t
4.0K u
4.0K v
4.0K w
4.0K x
4.0K y
4.0K z
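The 0-9 / a-z layout of repo looks like a bucketed store of version data, though that is only a guess from the directory names. Counting files per bucket shows whether a big bucket holds many small blobs or a few large ones; a sketch on a scratch tree shaped like repo/:

```shell
# Rank buckets by file count (tab-separated: count, then bucket path).
scratch=$(mktemp -d)
mkdir -p "$scratch/repo/a" "$scratch/repo/n"
touch "$scratch/repo/a/f1"
touch "$scratch/repo/n/f1" "$scratch/repo/n/f2" "$scratch/repo/n/f3"
counts=$(for d in "$scratch"/repo/*/; do
    printf '%s\t%s\n' "$(find "$d" -type f | wc -l | tr -d ' ')" "$d"
done | sort -rn)
printf '%s\n' "$counts"
rm -rf "$scratch"
```

On the NAS, the same loop over /volume1/@synologydrive/@sync/repo/*/ would show where the 184G actually lives.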
I will try to find a way to understand it. If you have an idea, please share it.
Thanks
The same goes for iSCSI, which uses /volume2 for storage (incl. snapshots),
but following the previous example it eats 244GB in /volume1/@iSCSI.
In my setup I have just two LUNs (2x20GB, thin provisioned) and two snapshots per LUN,
which is OK on the data side, because in /volume2/@iSCSI I have 39GB of data space.
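A rough ceiling check backs up the suspicion: even under the deliberately pessimistic assumption that every thin LUN is fully written and every snapshot has completely diverged from it (copy-on-write snapshots normally share blocks, so real usage should be far lower), the space cannot exceed:

```shell
# Pessimistic ceiling for thin-provisioned LUNs plus snapshots:
# each snapshot can at most hold one full copy of its LUN.
luns=2; lun_gb=20; snaps_per_lun=2
ceiling_gb=$(( luns * lun_gb * (1 + snaps_per_lun) ))
echo "worst-case ceiling: ${ceiling_gb}GB"
```

So the 244GB in /volume1/@iSCSI is roughly double even this worst-case bound.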
/volume1/@docker is the next candidate, with 22GB ... really beyond any imaginable boundary.
I need to find a way to understand this.
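The "run du, descend into the biggest entry, repeat" routine used above can be wrapped in a small helper; a sketch (the biggest function name is made up here, and on the NAS you would start it at e.g. /volume1):

```shell
# Print the largest immediate subentry of a directory.
biggest() {
    du -s -- "$1"/* 2>/dev/null | sort -rn | head -n 1 | cut -f2-
}

# Demo on a scratch tree with one clearly dominant subtree.
scratch=$(mktemp -d)
mkdir -p "$scratch/@docker/overlay" "$scratch/@tmp"
dd if=/dev/zero of="$scratch/@docker/overlay/layer" bs=1024 count=1024 2>/dev/null
top=$(biggest "$scratch")
printf 'biggest entry: %s\n' "$top"
rm -rf "$scratch"
```

Calling biggest repeatedly on its own output walks straight down the heaviest path of a tree.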
The next candidate to review is @sharesnap, with almost 1TB.
According to a rather vague Synology KB article it holds "Snapshots of a shared folder: @sharesnap"; if the volume is Btrfs, running btrfs subvolume list /volume1 should enumerate those snapshots.
The same @sharesnap sits on volume3, with almost 300GB,
and @synologydrive too .... I have no idea why that one is there as well.