Read failure error, despite trying to exclude the problematic item

Dear all

I see this message:

ERROR : : error reading source directory: failed to read directory "": lstat
/home/<my user name>/.gvfs: permission denied

Yet my job uses the --filter-from option with a file that contains - /.gvfs/**.

Rclone 1.46
Linux Mint 19.1 x64 Cinnamon

Your rule excludes the files in that folder but not the folder itself, so rclone probably still had to read it to check the pattern.

Ah, so do I want the following?

- /.gvfs

i.e., no slash, no asterisks? If so, will that still exclude any folder anywhere that is called .gvfs? Thanks.
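For reference, rclone’s filtering docs make a distinction that matters here: a pattern with a leading / is anchored to the root of the transfer, while a pattern without one can match at any depth. Roughly:

```
# anchored: matches the contents of .gvfs at the root of the source only
- /.gvfs/**
# unanchored: matches the contents of any folder named .gvfs, at any depth
- .gvfs/**
```

So - /.gvfs matches only the top-level entry, and dropping the slash widens the match to any location.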

What’s the actual command you are running? What version of rclone?

Thanks. I gave the Rclone version above.

The actual command, well, it’s a bash script, and the most relevant bits of the script are:



			"--filter-from $RL_LSTS/dots_filter-from" \
			'--max-depth=3' \
			'--max-size 25M' \

And the filter file contains, as I said:

- /.gvfs/**

But really it’s the exclusion syntax that I would like help with.

Using .gvfs in the filter list gives the same error.

OK, I think I have found the problem. My filter-from file had

+ /.*
+ /.*/*

after various attempts at excluding .gvfs.

It does seem to me though that the filtering documentation could be more lucid.

And… in fact, no, I am still getting the error. So I’ll try the above, i.e. putting the ‘include all .items’ before the ‘exclude .gvfs’, and I will use this format for the latter: .gvfs/**.

The filter rules are checked in order starting at the top for each item, so you want the excludes first probably.
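Concretely, since the first matching rule wins for each item, a filter-from file along these lines (a sketch using the patterns discussed above) puts the excludes where they can take effect:

```
# rules are tried top to bottom; the first match decides
# 1. excludes for things never wanted
- /.gvfs/**
# 2. includes for the dot items that are wanted
+ /.*
+ /.*/*
# 3. finally, exclude everything else
- *
```

With - * last, it only catches items that no earlier rule matched; placed first, it would shadow every rule below it.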

I think this problem may be in the local backend though - what is failing is the lstat on the directory .gvfs - that is rclone trying to read info about the directory itself, not its contents.

Can you do stat .gvfs and post the results? I’d like to see exactly what the permissions are.

Thank you, Nick.

stat .gvfs in the relevant folder yields stat: cannot stat '.gvfs': Permission denied

I would like my job to circumvent that error: I wish rclone to tell me about other problems, but not about this sort of permissions problem. (Indeed, I have rclone set to mail me the result of jobs, and I don’t want to hear by mail about that problem, for it is not that I want the folder - which has to do with network stuff - backed up. Now ‘has to do with network stuff’ is vague; that is because I have a poor understanding of gvfs. I have .gvfs items on my system because I resorted to using gvfs when other networking methods broke.)

What’s the ls -al on the file look like? I tried to reproduce it and I can stat the file but get the same error.

Odd question, but why would there be a file the user doesn’t own in their home!

@, er, Animosity:

Thank you for trying to reproduce the problem.

why would there be a file the user doesn’t own in their home!

I don’t know; as I said, I do not understand the file system in question. (I just know that unlike other methods of networking that I have tried, it does not hang all the time.)

What’s the ls -al on the file look like?

$ ls -al .gvfs
ls: cannot access ‘.gvfs’: Permission denied
$ sudo ls -al .gvfs
[sudo] password for [user]:
total 4
dr-x------ 2 root root 0 Apr 10 15:51 .
drwx------ 98 [user] [user] 4096 Apr 11 00:23 ..

Sorry, I meant:

sudo ls -al | grep .vfs

and run that in the directory that contains it.

It should give you:

[felix@gemini new]$ sudo ls -al | grep blah
drwx------  2 root  root     6 Apr 10 16:47 .blah

.gvfs is used by some gnome desktop thing for mounting FUSE filesystems.

So there might actually be something mounted on .gvfs.

What if you add the -x or --one-file-system flag to rclone to tell it to stick to one filesystem - does that help?
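In terms of the command, that would look something like this (a sketch only; the remote name, paths and filter file name are taken from earlier posts):

```
rclone copy /home/<user> enc-b2:/X1/home/<user> \
    --one-file-system \
    --filter-from "$RL_LSTS/dots_filter-from" \
    --max-depth 3 --max-size 25M
```

--one-file-system tells rclone not to descend into directories that sit on a different filesystem, which is what a FUSE mount on .gvfs would be.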

Can you post the contents of your filter-from file?

Thanks, ncw.

I have bookmarks in my file manager, Nemo, for various remote shares. Those shares get mounted (if that is the terminology I want) when I click those bookmarks (and not before). At that point, an icon for the share appears on my desktop. When I have been getting the error in question, no such share was mounted.

Here is the filter-from file (as it is currently; I’ve been fiddling with it).

# ==============================================
# TYPE: filter-from
#
# + <include pattern>
# - <exclude pattern>
# Add /** to exclude a folder whatever its path.
# Root (/) is the topmost folder specified by the backup job:
#   /home/<user>
#   dots
# Rules are processed in the order they are defined.
# ==============================================

- .cache
- .dropbox
# keep getting errors with this one:
- .gvfs
- .gnupg
- .lock
- .old
- .Trash-0
- cache

- /.bash_history
- /.recoll/**
- /.xsession-errors

# Excludes everything else; we do NEED THAT.
# (May be a good idea also, in the rclone command, to set a shallow 'max-depth'.)
- *

+ /.*
+ /.*/*


Using --one-file-system seems not to help, as one can see from the following output from my backup script.

NAME of script                  TEST                                                     
JOB to run                      dots_COPY                                 
SOURCE path                     /home/<user>                                           
DESTINATION path                enc-b2:/X1/home/<user>                                 
COMMAND TYPE                    COPY                                                     
OPTIONS, constant               --skip-links --fast-list --checkers 12                   
                                --drive-chunk-size=256K --log-level=NOTICE               
                                --local-no-check-updated --retries=2 --timeout=4m        
                                --tpslimit 0 --tpslimit-burst 1 --drive-use-trash=false  
OPTIONS, variable               --one-file-system --filter-from                          
                                --max-depth=3 --max-size 25M                             
OPTIONS, log and display        --log-file /tmp/tmp.MdcaGB9zhF -P --stats=0              
MAIL                            true                                                     
Rclone version                  rclone v1.46                                             

0 / 0 Bytes, -, 0 Bytes/s, ETA -                                                                                           

dots_COPY - finished WITH ERROR CODE 1 in 0 hour(s),5 minute(s) and 13 second(s).

2019/04/11 12:42:39 ERROR : : error reading source directory: failed to read directory "": lstat /home/<user>/.gvfs: permission denied
2019/04/11 12:42:39 ERROR : Attempt 1/2 failed with 1 errors
2019/04/11 12:45:12 ERROR : : error reading source directory: failed to read directory "": lstat /home/<user>/.gvfs: permission denied
2019/04/11 12:45:12 ERROR : Attempt 2/2 failed with 1 errors

Can you share the output of the command I asked about? Regardless of how the file is created, the user should still own it.

I think what you had originally

- /.gvfs/**

Should be correct here, eg:


$ rclone ls .
2019/04/11 13:20:24 Failed to ls: failed to open directory "unreadable": open /tmp/unreadable: permission denied
$ rclone ls . --filter '- /unreadable/**'
#files are listed without error
$ stat unreadable
  File: unreadable
  Size: 4096      	Blocks: 8          IO Block: 4096   directory
Device: fd01h/64769d	Inode: 6234126     Links: 2
Access: (0500/dr-x------)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2019-04-11 09:53:49.571689831 +0100
Modify: 2019-04-11 09:53:49.571689831 +0100
Change: 2019-04-11 09:54:28.683814656 +0100
 Birth: -
$  sudo ls -al | grep .vfs
dr-x------   2 root     root          0 Apr 10 15:51 .gvfs

Thanks, but I am confused on two fronts.

(1) Is - <foo>/** the correct syntax for excluding all instances, in all locations, of a folder foo?
(2) I do not understand your example (your ‘eg’ and the material that followed it).

I think you are trying to fix the wrong thing.

You should change that file to be owned by the user.

Are you even using gvfs-mount or something like that? That would generate that folder, and it seems it ran as root, which it should not have.

Thanks. All that I did, gvfs-wise, was to put some gvfs-style addresses into my file manager’s location bar and then bookmark the result. I will try giving my user more access to the relevant folders. I will report back.