Failed to set modification time of cached file

This is going to be hard to articulate, because I can't provide the code I am using, but here goes.

I am running some python that downloads a bunch of stuff from one of my VPS instances, and copies it to encrypted google drive storage.

I do this via `rclone mount crypt: ~/rclone --vfs-cache-mode writes` and then downloading to the ~/rclone path.

My Python script makes use of win32-setctime to ensure the timestamps on my downloaded files stay the same as their source.
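For context, the modification-time half of that step can be done with plain `os.utime` (a minimal sketch; `preserve_mtime` and the epoch value are illustrative, and win32-setctime is only needed for the separate creation-time field on Windows):

```python
import os
import tempfile

def preserve_mtime(path, source_mtime):
    # os.utime sets (atime, mtime) in one call; on Windows the separate
    # creation-time field is what win32-setctime handles
    os.utime(path, (source_mtime, source_mtime))

# demo on a throwaway file with an illustrative epoch timestamp
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"downloaded bytes")
    path = f.name

preserve_mtime(path, 1594738574)
assert int(os.path.getmtime(path)) == 1594738574
os.unlink(path)
```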

As of today, I am getting the following error:

2020/07/14 15:56:14 ERROR : rclone/file.ext: Failed to set modification time of cached file: chtimes /home/user/.cache/rclone/vfs/crypt/file.ext : no such file or directory

I believe I am experiencing this:

However I am now on 1.52.2 and still experiencing the same issue.

Can you try the latest beta? That part of the VFS code has been extensively re-written since v1.52.2.

If it is still a problem there then I'll need a bit more to go on to reproduce. Ideally a sequence of steps I can follow to reproduce the problem. Note that you can use rclone touch to set timestamps which might be useful in any repro!

I'm now running the latest beta, and still getting the same error:

2020/07/24 23:10:25 ERROR : [path to file]: vfs cache: failed to set modification time of cached file: chtimes /home/jm/.cache/rclone/vfs/[path to file]: no such file or directory

Essentially what my script is doing is crawling a web directory, and checking files in that directory against my rclone bucket, and if they are duplicates, it does not re-download them.

The relevant part is:

    import os

    def check_duplicates(download_path, content_length):
        found = False
        if os.path.isfile(download_path):
            content_length = int(content_length)
            local_size = os.path.getsize(download_path)
            if local_size == content_length:
                found = True
        return found

So, basically, I thought what should be happening is that rclone would be pulling the entire file into the VFS cache to check its size. The VFS directory definitely fills with files.
I don't do any operations on these files other than checking their length while using rclone mount, so I would expect them to be purged quickly (I don't see updated modification times or anything of that nature either). However, these errors suggest to me that rclone is actually not pulling the files into the VFS cache?

I also get this error in the beta (but not the current release):

2020/07/24 23:17:14 ERROR : [path to file]: vfs cache: setModTime: failed to save item info: vfs cache item: failed to write metadata: open [cache path]/vfsMeta/[path to file]: no such file or directory

You don't open the file or read from it in that snippet so it shouldn't be loading the file into the VFS at all. Do you do something else with the file?

Do you think you could show (or send me) a log showing the problem?

Ideally as small as possible!

Alternatively can you find a sequence of shell commands which replicates this problem?

Hmm, I wonder if that is related. That seems to be saying that the directory it was trying to make the metadata in disappeared.

Do you run other rclones, simultaneously with the rclone mount, which use --vfs-cache-mode on the same remote:?

The VFS doesn't download any file that the OS touches? I assumed any attempt to access the file (or its metadata) would cause it to be pulled into the VFS cache...

I do not run multiple instances of rclone on that machine. I do occasionally have that directory mounted with --read-only on my laptop whilst the directory is mounted with --vfs-cache-mode xxx on a cloud instance.

The script simply checks to see if the remote file exists locally by comparing path and size and, if so, it skips the download. I don't do anything else with already-downloaded files after they have been compared... if that routine finds a match based on those criteria, the script does not touch that saved file and does nothing to it. The script then restarts to check the next file in the directory.
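That flow can be sketched as follows (hypothetical names throughout; `already_have` condenses the earlier snippet, and the listing and paths are stand-ins for the real crawler):

```python
import os
import tempfile

def already_have(download_path, content_length):
    # same check as the earlier snippet: the path exists on the mount
    # and its size matches the remote's Content-Length
    return (os.path.isfile(download_path)
            and os.path.getsize(download_path) == int(content_length))

# hypothetical web-directory listing: (file name, Content-Length header)
listing = [("a.ext", "3"), ("b.ext", "5")]

mount_root = tempfile.mkdtemp()  # stand-in for the rclone mount path
with open(os.path.join(mount_root, "a.ext"), "wb") as f:
    f.write(b"abc")              # a.ext was already downloaded

# only b.ext still needs downloading; a.ext matches on size
to_fetch = [name for name, length in listing
            if not already_have(os.path.join(mount_root, name), length)]
assert to_fetch == ["b.ext"]
```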

What type of log are you looking for?
Rclone's output consists entirely of what I sent above, but I'm happy to send more.

Could this be a permissions issue? I do not run rclone as root. I do not use --allow-other either.
Permissions on the cache are `drwx------ 3 user user`.
A quick `chmod 777` of the vfs directory didn't affect the issue, but if it's worth diving down that rabbit hole, I can try.

You are running --vfs-cache-mode writes I think?

If the OS reads a file it will not be brought into the cache. The cache is only used for file writes.

If you are using the beta then you can use --vfs-cache-mode full and it will bring files into the cache (or parts of them) if they are read.
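To make that concrete, the size comparison the script performs is a pure metadata operation (a sketch using an illustrative temp file; nothing here opens the file for reading):

```python
import os
import tempfile

def sizes_match(path, expected_length):
    # stat(2) touches metadata only -- the file is never opened, so
    # under --vfs-cache-mode writes this pulls nothing into the cache;
    # an open()/read() is what would populate it under
    # --vfs-cache-mode full
    return os.stat(path).st_size == int(expected_length)

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"12345")
    path = f.name

assert sizes_match(path, "5")
assert not sizes_match(path, "6")
os.unlink(path)
```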

Ideally for a complete log (from start to quit of rclone) with -vv of a single event.

Probably not, it is the wrong error message. Is the cache directory owner the same as the one running rclone?

Mounting with --no-modtime and --no-seek seems to have fixed this, but I have no idea why...

I’d love to send more info but I have no idea what to send as the only type of error I saw was the one I copy/pasted earlier.

I also see a lot of:
Lookup: node=<nil>, err=no such file or directory

What does that indicate?

It occurs as each file in the rclone FUSE mount is compared with the remote HTTP source.

--no-modtime stops rclone setting the times on files, which is the operation that produced the error you were seeing.

I think the error is probably harmless, I'm sure we'll track it down eventually.

That is the OS looking for files which aren't there so nothing to worry about.

I ran into the same messages... just to provide (maybe) some additional info.

On my setup I have an rclone mount that is "exposed" to 4 instances of my app through mergerfs (combining the rclone mount with a separate local folder for each instance).

Rclone mount: /remote/
Local folders: /local1/, /local2/, /local3/, /local4/

mergerfs: /merge1/ is /local1/ + /remote/folder1/

my mount command (using systemd) is this:

/usr/bin/rclone mount \
    --log-level=${LOGLEVEL} --log-file=${LOGFILE} \
    --config=/home/<USER>/.config/rclone/rclone.conf \
    --allow-other \
    --poll-interval 15s \
    --timeout 1m \
    --cache-chunk-path=/caches/${CACHE_FOLDER}/chunks \
    --cache-dir=/caches/${CACHE_FOLDER}/vfs \
    --cache-db-path=/caches/${CACHE_FOLDER}/db \
    --drive-use-trash=false \
    --user-agent mybox \
    --gid=1000 --uid=1000 \
    --tpslimit ${TPSLIMIT} --transfers ${TRANSFERS} \
    --vfs-read-chunk-size ${CHUNK_SIZE} --vfs-read-chunk-size-limit ${CHUNK_TOTAL_SIZE} \
    --vfs-cache-mode writes \
    --dir-cache-time=360m --cache-info-age=1d \
    --cache-rps=${RPSLIMIT} \
    ${REMOTE}:/ ${LOCAL_PATH}

Besides the 4 instances of my app running, I have a bash script that takes care of moving files from the local folders to the remote.
It runs continuously, but is fed a list of files that are at least about 3 minutes old since creation.
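That age gate can be sketched like this (hypothetical Python; the real feeder is bash, and mtime is assumed as the stand-in for creation time):

```python
import os
import tempfile
import time

def old_enough(path, min_age_seconds=180):
    # only feed the mover files at least ~3 minutes past their last
    # write, so rclone is not asked to move something still being
    # written; mtime stands in for creation time, which Linux does not
    # expose portably
    return time.time() - os.path.getmtime(path) >= min_age_seconds

# demo: a freshly created file is not old enough yet
with tempfile.NamedTemporaryFile(delete=False) as f:
    fresh = f.name
assert not old_enough(fresh)
os.unlink(fresh)
```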
every now and then I see this in rclone.log

2020/08/14 18:45:48 INFO  :  <folder2/file> : Moved (server side)
2020/08/14 18:45:48 ERROR : <folder2/file>: Failed to set modification time of cached file: chtimes /caches/gdrive/vfs/vfs/<REMOTE>/crypto/<folder2/file>: no such file or directory

After the fact, I manually check if the file and folder exist, both in local and remote, and they do.

If this is, in fact, something different, let me know and I'll start another thread 🙂


This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.