Sync aborts if even one single unreadable folder is encountered

I found a couple of discussions about this issue from back in 2017.

The issue is marked as partially fixed/solved, but I am still encountering it in 2023. Specifically, when trying to sync, if rclone encounters a folder that is unreadable due to permissions, it doesn't just skip that folder and continue with the other, readable folders. Instead, it aborts the sync for all of the sibling folders in that part of the directory tree.

So if you have a folder "parentfolder00" containing 99 subfolders named "subfolder01", "subfolder02", ... "subfolder99", and you include the parent in a sync, then if subfolder01 is unreadable for any reason, the sync skips the other 98 folders alongside it as well, even though they are perfectly readable and error-free. Folders above this level of the directory tree will continue to be synced, but subfolder02 through subfolder99 will be ignored entirely, even though they and their contents are readable.

In fact, it isn't only the sync command that is affected by this bug. You can reproduce it with a simple 'lsf' command like so:

rclone lsf /home/fred/parentfolder00

The output is:

2023/07/04 23:47:35 ERROR : : lstat /home/fred/parentfolder00/subfolder01: permission denied
2023/07/04 23:47:35 ERROR : : error listing: failed to read directory entry: failed to read directory "/home/fred/parentfolder00/subfolder01": lstat /home/fred/parentfolder00/subfolder01: permission denied
2023/07/04 23:47:35 Failed to lsf with 3 errors: last error was: error in ListJSON: failed to read directory entry: failed to read directory "/home/fred/parentfolder00/subfolder01": lstat /home/fred/parentfolder00/subfolder01: permission denied

A simple lsf fails to show even the existence of the other 98 subfolders as soon as it encounters the single unreadable one.

Back in the 2017 threads there is mention of implementing a --skip-unreadable flag, but it never appears to have been created. Really, skipping unreadable folders and continuing with the rest should just be rclone's default behavior, but at the very least a flag would be useful for handling these cases.

Is there any known fix/workaround for this issue, other than specifically excluding the unreadable folders? That works fine if there are only one or two, but if there are dozens, it would be ideal to have rclone simply skip/ignore them every time.

What rclone version are you using?

This doesn't seem to be the case for me:

cd /tmp
mkdir unreadable
cd unreadable/
for i in $(seq -w 99); do mkdir $i ; done
for i in $(seq -w 99); do touch $i/$i.txt ; done
rclone lsf -R .
chmod 000 01
rclone lsf -R .

This produces a listing of everything except the 01 directory.

I'm trying this with rclone v1.63.0

I am using version 1.62.2 - Here is some additional info about the situation I am encountering:

One of the problematic folders appears to be unreadable because it is corrupted in some way. Strangely I can read it as a regular user and see some details about it:

ls -ald .gvfs
dr-x------ 2 user1 user1 0 Jul  4 14:26 .gvfs

Attempting to do the same as root is denied:

sudo ls -ald .gvfs
ls: cannot access '.gvfs': Permission denied

Listing all folders as root shows some info about it (other folders redacted):

sudo ls -al
d?????????   ? ?     ?               ?            ?  .gvfs

This listing is what makes me think the folder is corrupted.

Similarly to the above, running

rclone lsf /home/user1/

as a regular user works fine and lists all folders. But running it as root produces:

sudo rclone lsf /home/user1/
2023/07/06 14:57:26 ERROR : : lstat /home/user1/.gvfs: permission denied
2023/07/06 14:57:26 ERROR : : error listing: failed to read directory entry: failed to read directory "/home/user1/.gvfs": lstat /home/user1/.gvfs: permission denied
2023/07/06 14:57:26 Failed to lsf with 3 errors: last error was: error in ListJSON: failed to read directory entry: failed to read directory "/home/user1/.gvfs": lstat /home/user1/.gvfs: permission denied

Any ideas? I need to run rclone as root/sudo in order to back up some system folders and files which are owned by root, so that is my only option. However, it aborts as soon as it encounters a corrupted/unreadable folder, and the backup ultimately fails.

Currently I can work around the issue by specifically excluding the corrupted folder:

sudo rclone lsf --exclude .gvfs /home/user1/

...which also works for the sync command. This is fine as long as there are only one or two such corrupted folders and I know exactly what they are, so I can exclude them in advance. But if rclone encounters such folders unpredictably during a sync, I won't be able to know their names ahead of time. So it would be useful for such folders to just be skipped gracefully, instead of halting the sync.
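As a side note on the workaround: rclone's --exclude-from flag reads patterns from a file, so a growing list of known-bad folders can be maintained without editing the command each time. A minimal sketch (the destination remote:dest is just a placeholder name):

```shell
# Keep known-bad folder names in a filter file, one pattern per line.
# New corrupted folders can simply be appended here later.
cat > /tmp/rclone-excludes.txt <<'EOF'
.gvfs/**
EOF

# The same file then works for listing and syncing alike, e.g.:
#   sudo rclone lsf  --exclude-from /tmp/rclone-excludes.txt /home/user1/
#   sudo rclone sync --exclude-from /tmp/rclone-excludes.txt /home/user1/ remote:dest
cat /tmp/rclone-excludes.txt
```

This doesn't help with folders whose names can't be predicted, but it keeps the exclusion list in one place.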

Just as a side note, some tools such as restic/borg do seem able to gracefully skip over these same folders while still copying everything else, so in theory it should be possible to implement the same behavior in rclone... I think?

I think I've managed to fix this issue - please give this a go

v1.64.0-beta.7133.34d754bdd.fix-local-lstat on branch fix-local-lstat (uploaded in 15-30 mins)

I haven't been able to recreate the problem you are having locally. You have a directory entry which can be listed but gives an error when lstat() is called on it. I don't think this is very common and I haven't managed to make one with normal unix tools.

The fix is tested and confirmed working: all other directory entries are now listed properly, and only the corrupted entry reports an unreadable error. Thanks very much for taking the time to address such an odd corner-case issue, which probably extremely few users ever encounter, but which was causing sync problems for me because of the corrupted folders.

Indeed, it's a very strange situation: these corrupted folder entries cannot even be deleted with rm -rf, either as root or as a regular user. I initially thought it was just a permission problem, but it turns out to be a far rarer scenario. It seems a difficult thing either to create or to repair, and I'm not exactly sure what events produced it in the first place. Web searching suggests possibly trying to erase the inode entry directly, which I'll try later. But in any event, I am glad to have rclone syncing working again despite the corruption weirdness. Thanks again.

Thanks for testing :slight_smile:

I've merged this to master now which means it will be in the latest beta in 15-30 minutes and released in v1.63.1