Excel temporary files stay on Drive when user has no delete permission


Operating system: Windows
Mobile operating system: Android
I'm looking for someone who can help me create a script to delete the 0 KB files that Excel creates when opening any Excel file. Let me explain.

Every time you open an Excel file, it creates a temporary ghost file called an AutoRecovery file. The problem on Drive is that for people who don't have DELETE access, each time they view or edit an Excel sheet, the temporary 0 KB AutoRecovery file stays there. For a regular user this is probably not an issue, since it can be resolved in Excel by disabling the feature, but with over 90 users and thousands of directories and files, that's not an option for me.

A Synology engineer replied:
This problem happens only for users who have no delete permissions. This is expected: the temporary file is deleted on the client side, but it remains on the server side because of the missing delete permissions.

This temporary file has no filename extension, so there is no way to set up a filter not to sync this kind of temporary file.

In this case, you really need to delete them one by one. Or you may consider writing a script and setting up a Scheduled Task to move these 0-byte files to a specific folder. You could then check the files under this folder to make sure they are all expected, and remove them manually.


I'm not sure how to write a script to delete these 0 KB files, and sometimes they are 1 MB.
Here are some examples of the files: users edited these Excel files, but the AutoRecovery file stayed.


anyone have an idea?


Try this command from an SSH session, but replace the <your folder> with whatever it should be.

find "<your folder>" -size 0 -type f -wholename "*/*" ! -wholename "*/*.*" -ls

find is the command to search for stuff from <your folder> and below it.

This command looks for zero-sized (-size 0) regular files (-type f) whose full path matches */* but doesn't match */*.* ... this finds files that have no extension. We then present the found files as if we had run a list (-ls) command.
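If it helps to see the pattern in action, here is a small throwaway demonstration with made-up file names. One caveat worth knowing: because * in -wholename also matches "/", a dot anywhere in the path (including in a folder name) excludes a file, which is why the demo folder name deliberately contains no dot.

```shell
# Throwaway demo folder with no dot in its name (a dot anywhere in the
# path would make every file match */*.* and be excluded).
demo=$(mktemp -d /tmp/finddemoXXXXXX)
touch "$demo/Book1.xlsx"        # 0 bytes but has an extension: should NOT match
touch "$demo/5D02AC51"          # 0 bytes, no extension: should match
mkdir -p "$demo/sub"
touch "$demo/sub/ACF1B2C3"      # nested, no extension: should also match

# Same test as the command above: zero-byte regular files with no dot
# anywhere in the path. Lists only the two extension-less files.
find "$demo" -size 0 -type f -wholename "*/*" ! -wholename "*/*.*" -ls
```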

Using a single -wholename, -name, or -iname search with subtler regular expressions could be done, but I wasn't having any success and haven't had time to tweak this to be more elegant.

So this just lists the found files, within nested folders, that, hopefully, match the files you want to delete or move elsewhere.

This command replaces the -ls list command with -delete, which will delete the found files. Be careful using this!
find "<your folder>" -size 0 -type f -wholename "*/*" ! -wholename "*/*.*" -delete

A bit safer is to move the found files by passing them to a command (-exec executes the following). This one moves (mv) each found file ({}) to a folder you define (<holding folder>), finishing the command with \;.
find "<your folder>" -size 0 -type f -wholename "*/*" ! -wholename "*/*.*" -exec mv {} "<holding folder>" \;
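One thing to watch with the move approach: AutoRecovery-style files in different subfolders can share the same name, and a plain mv would silently overwrite the first with the second in the holding folder. A hedged variation (made-up demo paths below) is mv -n (no-clobber), which skips the collision and leaves the second file in place for a later pass.

```shell
# Demo: two 0-byte files with the SAME name in different subfolders.
src=$(mktemp -d /tmp/srcdemoXXXXXX)
hold=$(mktemp -d /tmp/holddemoXXXXXX)
mkdir -p "$src/a" "$src/b"
touch "$src/a/RECOVERY" "$src/b/RECOVERY"

# mv -n refuses to overwrite an existing file in the holding folder, so
# one RECOVERY is moved and the name-clashing one stays behind.
find "$src" -size 0 -type f -wholename "*/*" ! -wholename "*/*.*" \
    -exec mv -n {} "$hold" \;
```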

Provided you are sure that you have updated these commands with the correct paths for the folders then you can use them in a Scheduled Task.

I normally start the Scheduled Task script with a first line that says it's running in a Bash shell. Does no harm.
#!/bin/bash
your commands here....
Very thorough as always, Mr. @fredbert. I was thinking of similar commands (not as detailed, though) with
-maxdepth 1
to limit the search to the named directory; otherwise, if I'm not mistaken, the find command will dive into subdirectories too.
What do you think? You’re the scriptnator man :)

I have a feeling that @TwistedEndz wants to use the graphical DSM to add the script, which is possible but won’t be as detailed. I think he’s after a one-liner, to nuke the zero files :)
something like:
find /directory_name -maxdepth 1 -type f -size 0k -delete

Whatever you do, @TwistedEndz, please be careful. @fredbert's script that moves the files is the way to go if you're fine with accessing your terminal.
Folder depth can be limited if this isn't a large nested structure. The -wholename test can also be used to restrict the match to known subfolders at some point below the starting point, such as "/volume1/homes/*/Drive/*/<etc etc etc>".

I would test it out on the SSH command line first. Then add a Scheduled Task as a script: doing the list test first and sending the output to a file...

find "<your folder>" -size 0 -type f -wholename "*/*" ! -wholename "*/*.*" -ls > /volume1/homes/<myuser>/files_output.txt

Once I'm happy that only the right files are being found, I'd modify the script action.


The trick is knowing when it's safe to delete the temporary files. Run at night when hopefully no-one is working or has left a file open? Add a test to restrict to files older than XX?
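One way to sketch that age restriction (demo folder and file names are made up) is find's -mmin test: -mmin +60 keeps the command away from anything modified in the last hour, so a recovery file for a workbook that is still open should be left alone.

```shell
demo=$(mktemp -d /tmp/agedemoXXXXXX)
touch "$demo/FRESH"                    # just created: skipped by -mmin +60
touch -d "2 hours ago" "$demo/STALE"   # backdated (GNU touch): old enough to match

# Same filter as before, plus -mmin +60 to only list files untouched
# for more than 60 minutes.
find "$demo" -size 0 -type f -mmin +60 \
    -wholename "*/*" ! -wholename "*/*.*" -ls
```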

SynoForum.com is an unofficial Synology forum for NAS owners and enthusiasts.