Question "Destination corrupted" error in HyperBackup

Glad to help.
Mine took less than 24 hours to recover 300MB on a DS215j.
 
I had a similar issue for a DS418j (source) and DS416 (destination). You could try the following steps. YMMV.

"Src" is the name for the source NAS and "Dst" is the name for the destination NAS.

Before proceeding,
a. Disable any backup schedules.
b. Check that your disks and memory are all performing well. If there is any hardware fault, fix it first.

SSH into Dst to perform the following steps.
1. Change to root.
> sudo -i

2. Change into the backup directory.
> cd /volume1/backups/Dst/

3. Check the status. You should get "detect-bad".
> sqlite3 Config/target_info.db "select status from target_info"
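(Optional) If you want a bit more context before starting the repair, you can dump the whole row. This is just standard sqlite3 usage; the exact columns in target_info may differ between DSM versions.
> sqlite3 -header Config/target_info.db "select * from target_info"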

4. Change to HyperBackup Vault's bin folder and run synoimgbkptool.
> cd /var/packages/HyperBackupVault/target/bin
nohup ./synoimgbkptool -r /volume1/backups -t Dst -R detect > /volume1/\@tmp/recover.output &
You may get "nohup: ignoring input and redirecting stderr to stdout". It's ok to ignore this.
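If you want to watch the recovery's progress while it runs, you can follow the output file the command above redirects to (press Ctrl-C to stop following; that does not stop the recovery itself):
> tail -f /volume1/@tmp/recover.output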

5. Wait for synoimgbkptool to complete running. It can take a very long time, 3 to 4 days. You can check with the following command.
> ps aux | grep synoimgbkptool

If a synoimgbkptool line still shows up in the output, the tool is still running.

Otherwise, once only the grep command itself appears in the output, it has finished.
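If pgrep is available on your DSM version, a slightly cleaner check (same idea as the ps command above) is:
> pgrep -fl synoimgbkptool
An empty result means the tool has finished. At that point you can also re-run the sqlite3 query from step 3 to see whether the status has changed from "detect-bad".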

You can now exit the SSH session to Dst.

Next, SSH into Src.
1. Change to root.
> sudo -i

2. Change to HyperBackup's config directory.
> cd /var/synobackup/config/

3. Edit task_state.conf (e.g., using vi) so that it contains the following.
last_state="Backupable"
state="Backupable"


You can now exit SSH to Src.

Finally, log in to the admin portal. Start HyperBackup and choose "Check backup integrity". It can take a long time (3 to 4 days). Once that's completed without errors, you should be able to resume your backup schedule.

Hope that helps.
Yes! That really helped. Nobody else on the internet (not even Google) seems to know this.

More context: my (local) backup probably failed because it was interfering with a CloudSync task copying the backup files to remote storage (OneDrive). Maybe. I don't know.

Thank you so much! And sorry for bumping this thread; it was worth it.
 
I had a similar issue for a DS418j (source) and DS416 (destination). You could try the following steps. YMMV.

[...]

Hope that helps.
Three years later and this was the lifesaver! No idea what caused my issue on a DS420+ to DS216 backup, but this worked flawlessly. Thanks so much for saving my current backup and sparing me all the hassle of creating a new one.
 
I had a similar issue for a DS418j (source) and DS416 (destination). You could try the following steps. YMMV.

[...]

Hope that helps.
Just made a synoforum account to say thank you.

I updated to DSM 7.2 and out of nowhere HyperBackup started failing to complete the backup tasks. I did a file system check and a SMART test on the external USB backup drive and everything was fine. I had a spare USB drive, so I did a new backup just in case. I also ran a memory test on the NAS and everything was good. Using HyperBackup Explorer on my PC, I did a quick sanity check of the files and everything looked good there too.

To troubleshoot the original backup task I started unselecting things from the HyperBackup task to see whether the failure was due to a specific file or app. There were several apps and folders that were causing the task to fail. I managed to fix some by stopping and starting the app, uninstalling and reinstalling it, and so on. The folder failures were really due to the apps failing to back up: the folders initially would not back up even when I selected and unselected them manually, but after fixing most of the failing apps, all the shared folders started backing up again too. In the end there were two apps that I was never able to get to back up again.

After doing all that I decided to run an integrity check on the backup to see if it would fix the issue, but the check failed and HyperBackup flagged the backup as "Restore Only". I searched the whole internet, including Synology's KB, and found nothing on how to fix the backup. I was about to give up and was starting to accept that I would have to upload the backups to the cloud (Backblaze) again, which initially took me around 3 to 4 months. But thankfully I found your post.

I followed your steps for the local USB storage drive and it fixed whatever was wrong with the original HyperBackup task. I ran the integrity check and it passed. All the folders and apps are backing up fine now. Thanks again for the help.

HyperBackup is a great solution to include in the NAS software, and I'll give Synology credit for that; the ease of use is great too. However, the error and failure logs need a lot of improvement. They say basically nothing helpful for fixing the issue, and Synology should improve them. They should also add a way to attempt to repair a backup directly from the web interface instead of just telling you to redo the whole backup. It's not just the time it might take to do another backup from scratch and upload it; you also lose all the versioning history the original backup had. That was my other big concern. That backup had files from 2020, and all of that would be gone. In other words, having a way to fix the backup would be great. I do understand the philosophy behind the current implementation of nuking and starting from scratch, i.e., zero trust in a backup or device that failed the integrity check. But give users the ability to decide what to do, maybe through an advanced menu or something.

Again, thanks for the post and the help. There is little info online related to HyperBackup troubleshooting, and this post was like finding an oasis in the middle of the desert.
 
I had a similar issue for a DS418j (source) and DS416 (destination). You could try the following steps. YMMV.

[...]

Hope that helps.

Yeah, that helped a whole lot!

Only one thing to add for others in the future: my source is an RS2324+ running DSM 7.2-64570 Update 3, and my task_state.conf is located at /volume1/@appdata/HyperBackup/config/task_state.conf instead.
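If you're not sure which path applies to your DSM version, a quick (generic, nothing Synology-specific) way to locate the file is:
> sudo find /var/synobackup /volume1/@appdata/HyperBackup -name task_state.conf 2>/dev/null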
 
It's crazy that this is what we are reduced to in order to fix what is clearly a HyperBackup problem. Kudos to @BoxingHyena for the fix, but this is kind of nuts.
Yeah. I can't believe there's no feature to let us replace the corrupt file with the current copy or something. It's pretty dangerous to completely disable the entire backup. And it's not clear why I'm seeing so many corrupt backups. Is it RAM? It's near-impossible to run a RAM test remotely -- you need a machine on the same LAN. Why can't the backups be more resilient?
 
This has been happening with my DS718+ for about 2 years now: HyperBackup jobs to various destinations failing for unknown reasons with "Destination corrupt" errors. I had a C2 backup subscription that failed 3 different times. Each time, Synology support was no help at all. We tested the RAM and hard drives multiple times over about a 6-month period (each time the backups failed), but the hardware passed the tests every time. Their final solution each time was for me to delete the C2 backup, which they couldn't do for some reason, and start a new HyperBackup job to upload ~120GB of files all over again. After the 3rd failure, I finally had enough and cancelled the C2 Backup subscription.

I moved to just doing local backups to an external hard drive, and those have been failing too: three times as of last week. Again, Synology support was no help. I jumped through the hoops again, testing my RAM and hard drives multiple times, but all tests came back good. Their conclusion was that my external hard drive must be bad, so they advised me to buy a new one, which cost me $150. A week went by with the new external hard drive and, you guessed it, HyperBackup failed again with the same "Destination corrupt" error as usual. At this point there can't possibly be anything wrong except HyperBackup itself. I don't know what to do at this point.

I tried to set up Backblaze as a backup destination, but I must have made an account ages ago. Unfortunately, I cannot get into the account because logging in tries to send a code to a phone number that I haven't had in over 7 years, and Backblaze support turned out to be no help. They gave me a laundry list of things to provide to recover my account, but I have no idea what credit card number I used 7 years ago after multiple moves and bank changes, or what charges I made to Backblaze, and they requested a recovery key from my initial account setup that I was never given. They won't even delete the account entirely so that I can recreate it with my up-to-date information. Their solution was for me to create a new email address and Backblaze account and start over. I'm not interested in doing that, so are there any other cloud backup services worth pursuing?

On a side note, I'm not sure if this has anything to do with the Hyperbackup issues, but after BTRFS was released to use on Synology devices, I wiped my NAS and did a factory reset, then rebuilt with a BTRFS volume. The storage pool/volume failed numerous times over a year period with no recovery possible, so I had to literally factory reset my NAS multiple times. I have a post somewhere on here explaining it in detail if anyone cares.

So, in conclusion, my DS718+ cannot use BTRFS without the storage pool/volume failing at random and requiring a complete factory reset to fix. I also cannot use HyperBackup without it failing at random, losing my entire backup, and having to delete the backup task and start my backups over from scratch. I'm still a Synology fan, but my confidence in their hardware is at an all-time low at this point.

Thanks for letting me rant!

Tim
 
IMO this is a software thing. @tjohns34 do you know if this started after 7.2 or 7.1 or something?

I never had this happen under the 6.x software, but I think this started happening somewhere around the 7.2 update.

That said, I also had a weird memory error thing. I bought some ECC memory that passed the memory test, but after digging into the logs I saw a rare ECC error reported, usually just one per boot. I had the RAM replaced a day ago and no longer see that same ECC error, so perhaps it was related to that.

Which is a long-winded way of saying the RAM can pass the memtest and still be bad, throwing ECC errors. So look for "ECC" in your logs to see if it has thrown any.
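A minimal way to do that over SSH; log file locations vary by DSM version, so treat these paths as a guess and adjust to whatever exists on your box:
> dmesg | grep -i ecc
> grep -i ecc /var/log/messages /var/log/kern.log 2>/dev/null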
 
IMO this is a software thing. @tjohns34 do you know if this started after 7.2 or 7.1 or something?

[...]

Hey zombiephysicist, thanks for the reply.

I'm not exactly sure what DSM I had when the issue started, but I am leaning towards it being a software issue as well.

I replaced the 2GB RAM stick in my DS718+ with an 8GB kit (2x 4GB sticks) as soon as I took it out of the box, and it worked for years with no HyperBackup problems. In the interest of covering all the bases I can, I just took my DS718+ apart and replaced the 8GB RAM kit with the original 2GB stick. I'll set up my HyperBackup jobs again and see what happens. I definitely did not have ECC RAM. I did take a look through my logs, but I haven't found any errors besides the failing backup errors, which of course report "Destination corrupt".

If Hyperbackup starts failing again, I'll come back here and post about it.

Thanks again!

Tim
 
I keep getting more "destination corrupt" errors on my HyperBackup tasks. Hmmm... I'm assuming any possible memory corruption would be at the destination NAS, right?
 
I keep getting more "destination corrupt" errors on my HyperBackup tasks. Hmmm... I'm assuming any possible memory corruption would be at the destination NAS, right?

Hi Ken830. That is a reasonable assumption for sure. In my case, though, the destinations I was using were an external hard drive and Synology's own C2 Storage service, and I kept getting "Destination corrupt" errors with both. I hope I'm not mentioning it too soon, but I haven't had any of the errors when backing up to my external hard drive since I re-installed the original RAM that came with my NAS. It's only been a week, but I am somewhat hopeful that fixed my problem. We shall see. Going from 8GB of RAM back down to 2GB had no impact whatsoever on performance, but I'm not running any containers or VMs on my NAS, just file storage/backup, Plex, QuickConnect, HyperBackup, Download Station, Surveillance Station, and VPN Server. My Resource Monitor shows I still have ~200MB of RAM free!

Did you happen to upgrade the RAM in your NAS? Just curious. That would be a common factor in these "Destination corrupt" issues if you have. If so, maybe swap the upgraded RAM kit for the original RAM that came with your NAS, like I did, and see what happens.

Tim
 
Hi Ken830. That is a reasonable assumption for sure.

[...]

Yeah. I always upgrade the RAM on every new Synology NAS.

My destination NAS is my DS1815+, which has had 2x 8GB Crucial DDR3 PC3-12800 SO-DIMMs (CT2K8G3S160BM) since early 2015. That used to be my main NAS and it ran HyperBackup tasks to Google Drive without any issues for many years (except for that HyperBackup bug last year that falsely flagged everyone's Google Drive backups as corrupted). Not sure if I can locate the original stick of RAM, but I will definitely try.

For the remote NAS, I'm planning to run the memtest, but I need to repurpose a PC at the remote location (my parents' place) and get remote access to it to run Synology Assistant. I can't run the memtest remotely, even over Tailscale.

My main NAS is the RS2423+ and that has 2x16GB DDR4-3200 ECC UDIMM 1.2V CL22 (MTA9ASF2G72AZ-3G2R). These are ECC DIMMs, so unlikely to be a problem? How long would the memtest take for 32GB of RAM? Downtime is very disruptive.
 
I suspect it is a software issue with the latest HyperBackup too; I have been running it for years with zero issues. My backup task runs twice daily and I never had a problem. Then I updated to the version they released recently, and the errors began. Too much of a coincidence in my opinion, and there is also the fact that one of the releases had a bug that was corrupting backups. Hopefully it gets figured out.
 
I had a similar issue for a DS418j (source) and DS416 (destination). You could try the following steps. YMMV.

[...]

Hope that helps.
I signed up for an account here to thank you for this post. I had the same "Destination corrupted" error, and running synoimgbkptool with the recovery option fixed it. So, thank you!

However, I'm curious: when running an integrity check via the GUI, multiple instances of synoimgbkptool are run. Given that, why isn't there a way to run it with the recovery option from the GUI? It seems quite silly that I had to stumble upon this post in an unofficial forum when it should just be a checkbox in the GUI.
 
I had a similar issue for a DS418j (source) and DS416 (destination). You could try the following steps. YMMV.

[...]

Hope that helps.
For me, on the Src, /var/synobackup/config/ does not exist.

Any ideas?

Edit: With DSM 7.x the file is here: /volume1/@appdata/HyperBackup/config/task_state.conf
 
I had a similar issue for a DS418j (source) and DS416 (destination). You could try the following steps. YMMV.

[...]

Hope that helps.
This is a godsend. Thank you so much.

I will add that I have 20+ tasks and it was not obvious which one needed to have its "last_state" and "state" changed.
To find that, you can use the following command on the Src NAS:
> cat /usr/syno/etc/synobackup.conf
It will show a list of all tasks and you should be able to find the one with issues.
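If the file is long, you could also narrow it down with grep. This assumes the per-task sections use ini-style [headers] and similar state keys to task_state.conf, which may not hold on every DSM version:
> grep -n -E '^\[|state' /usr/syno/etc/synobackup.conf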

Cheers,
Edouard
 
