1. How the Issue Occurred:
A. Context:
Testing SMB tunnel connection:
The goal was to map a remote SMB shared folder (/volume2/photo on the NAS) via a Cloudflare Zero Trust Tunnel to a Windows 11 workstation.
Deployment:
A cloudflared Docker container was deployed on Synology DSM 7 using the official Cloudflare image, Docker Compose, and Portainer.
The Cloudflare Tunnel Docker Compose configuration was hardened, based on helpful guidance from Jim's YouTube video.
B. Expectations:
SMB sharing should have been accessible via the defined Cloudflare tunnel, enabling secure remote mapping of the SMB folder as a network drive.
C. Implementation:
- Cloudflared Configuration:
Enhanced with additional security measures as outlined in the video.
The deployment was successful.
The tunnel was added to an existing setup with a public hostname defined for the SMB service (a minimal Compose sketch follows after this list).
- Win11 SMB Connection:
PowerShell commands (Get-SmbShare, Get-SmbShareAccess, New-SmbMapping) were used for the mapping; a sketch follows below.
The mapping attempts failed: PowerShell responses were extremely slow and the connection never completed.
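For context, a minimal sketch of the kind of Compose service involved; this is an assumption based on the official cloudflare/cloudflared image, not the actual hardened configuration from the video:
Code:
services:
  cloudflared:
    image: cloudflare/cloudflared:latest
    command: tunnel --no-autoupdate run
    environment:
      - TUNNEL_TOKEN=${TUNNEL_TOKEN}   # tunnel token from the Zero Trust dashboard
    restart: unless-stopped
And a sketch of the Windows-side mapping attempt against a local cloudflared forwarder (see the Cloudflare note below); the drive letter and share name are illustrative:
Code:
# map the tunnelled share to a local drive letter
New-SmbMapping -LocalPath "Z:" -RemotePath "\\localhost\photo"
# verify what Windows thinks the mapping state is
Get-SmbMapping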
Don't forget, there is official info in the Cloudflare documentation:
If you are using a Windows machine and cannot specify the port for SMB, you might need to disable the local server. The local server on a client machine uses the same default port 445 for CIFS/SMB. By listening on that port, the local server can block the cloudflare access connection.
(Source: SMB · Cloudflare Zero Trust docs, developers.cloudflare.com)
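On the client side, the documented pattern is to run cloudflared as a local forwarder and point SMB at localhost; the hostname below is illustrative:
Code:
# forward the tunnel hostname to local port 445
# (requires the local SMB server to be disabled, per the note above)
cloudflared access tcp --hostname smb.example.com --url localhost:445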
2. Symptoms of the Issue:
Post-SMB testing:
The SMB folder /volume2/photo on the NAS appeared empty (0 MB) despite previously containing 1.4 TB of data.
The folder was still visible in DSM and via SSH.
Code:
ls -l /volume2/photo    # the folder exists, but the listing shows no content
ls -la /volume2/photo   # including hidden entries: still nothing
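As an independent cross-check of the 0 MB figure reported by DSM, something like:
Code:
du -sh /volume2/photo   # summarize the on-disk size of the share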
3. Tests Conducted and Results:
A. Log Analysis:
DSM Logs (/var/log/messages):
SMB Logs (log.smbd, log.nmbd):
No direct errors linked to /volume2
No errors related to access or content of /volume2/photo
No entries about /volume2/photo or changes to its permissions.
Inspected BTRFS logs to verify metadata integrity and track anomalies.
Since I have the entire /var/log/ directory mapped, analyzing the log content was quick and easy, which makes it all the more surprising that no trace turned up.
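For illustration, the kind of search this boils down to; the Samba log location is an assumption and may differ between DSM versions:
Code:
# scan the DSM system log and Samba logs for anything touching the share
grep -iE 'volume2/photo' /var/log/messages
grep -iE 'volume2|photo' /var/log/samba/log.smbd /var/log/samba/log.nmbd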
B. Diagnostics via SSH:
BTRFS Scrubbing:
Code:
btrfs property get /volume2/photo   # returned ro=false (read/write enabled)
Code:
btrfs scrub start /volume2
btrfs scrub status /volume2
Scrubbed 1.88 TiB on /volume2 with no errors reported.
BTRFS-related checks:
Inspected specific subvolumes for missing references:
Code:
btrfs subvolume list -p /volume2   # list subvolumes with their parent IDs
Code:
btrfs property get /volume2/photo   # again ro=false (read/write enabled)
The btrfs inspect-internal dump-tree command outputs the entire metadata tree of the filesystem, which can be overwhelming. To refine the output and show only data with missing references, you can use additional tools and filters.
Unfortunately, btrfs inspect-internal dump-tree itself does not have built-in filtering for missing references. However, you can combine it with grep or other parsing utilities to focus on entries that indicate issues.
Code:
# cachedev_1 is the device-mapper node DSM uses for this volume
btrfs inspect-internal dump-tree /dev/mapper/cachedev_1 | grep -E 'orphan|unreferenced|refcount 0'
C. SMART Diagnostics on the /volume2 drives:
Ran every kind of smartctl test I know of, including the extended SMART test.
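For reference, the usual sequence; the device path is illustrative and differs per NAS (e.g. /dev/sata1 on newer DSM):
Code:
smartctl -t long /dev/sda   # start the extended (long) self-test
smartctl -a /dev/sda        # full SMART report, including the self-test log
smartctl -H /dev/sda        # quick overall health verdict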
4. Findings:
Data were not physically lost:
This was confirmed by an independent BTRFS recovery tool that quickly identified the data and restored it to its original state.
However, the files were not visible and carried no deletion flag.
BTRFS is consistent:
Scrubbing validated the integrity of both data and metadata.
Hypotheses:
Altered BTRFS metadata:
Permission (ACL) manipulation or other changes during the SMB tunnel testing could have affected data visibility (see the ACL check sketched below).
DSM behavior during restart:
DSM may have attempted to synchronize quotas or restore ACL settings, leading to anomalies.
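One way to inspect the DSM ACLs on the share; synoacltool ships with DSM, though its output format varies by version:
Code:
synoacltool -get /volume2/photo   # dump the Synology ACL entries for the shared folder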
5. Final Actions:
Run a complete extended SMART test, plus an independent surface-test tool after removing the disks from the NAS. Just to be sure.
Restore all data from Hyper Backup back to the volume.
Conclusion:
Without clear evidence from the logs, this event is documented as an anomaly.
If you attempt this kind of SMB-over-tunnel setup, do it with extreme caution, and be sure to take a snapshot or Hyper Backup beforehand.
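A minimal safeguard sketch, assuming the share is its own BTRFS subvolume (on DSM the supported route is Snapshot Replication; the snapshot name is illustrative):
Code:
# create a read-only snapshot of the share before experimenting
btrfs subvolume snapshot -r /volume2/photo /volume2/photo_pretest_snap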