Rclone mount log full of `Doesn't support copy` and `Couldn't delete: No such file or directory`

What is the problem you are having with rclone?

I am using rclone mount and the log is full of errors like these two:

ERROR : oji8hlo6nkbit7mqhkhbfo1cr8/fuk028kg7ppq1i7bll7fkh0eqc/dhacat5443po1je3cq0op49n2iih1htslopaofs5de0in4fm0li0/9036l1dgodng3tiv8ips7v52bk/al5vvjg31bem54hmqm0071b2asv27mus266ma5venh2vcrb3o26kdd7kavhnbon5dif0t3nbk2a44: parent remote (Local file system at /srv/rclone/upload) doesn't support Copy
May 26 14:53:20 lpt rclone[2922208]: ERROR : some/file/name: Couldn't delete: remove /srv/rclone/upload/oji8hlo6nkbit7mqhkhbfo1cr8/fuk028kg7ppq1i7bll7fkh0eqc/dhacat5443po1je3cq0op49n2iih1htslopaofs5de0in4fm0li0/9036l1dgodng3tiv8ips7v52bk/al5vvjg31bem54hmqm0071b2asv27mus266ma5venh2vcrb3o26kdd7kavhnbon5dif0t3nbk2a44: no such file or directory

The file system at /srv/rclone/upload is a standard XFS file system:

/dev/nvme0n1p2 on / type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)

What does "doesn't support Copy" mean?

What is your rclone version (output from rclone version)

root@lpt:/tmp/tmp.rwofCK7IEc# rclone version
rclone v1.55.1
- os/type: linux
- os/arch: amd64
- go/version: go1.16.3
- go/linking: static
- go/tags: none

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Ubuntu 20.04.2 LTS (Focal Fossa) amd64

Which cloud storage system are you using? (eg Google Drive)

Backblaze B2 (via its S3-compatible API)

The command you were trying to run (eg rclone copy /tmp remote:tmp)

ExecStart=/usr/bin/rclone mount \
        --config=/root/.config/rclone/rclone.conf \
        --allow-other \
        --cache-tmp-upload-path=/srv/rclone/upload \
        --cache-chunk-path=/srv/rclone/chunks \
        --cache-workers=8 \
        --cache-dir=/srv/rclone/cache \
        --cache-db-path=/srv/rclone/db/backblaze-j1900-cache.db \
        --no-modtime \
        --rc \
        --rc-enable-metrics \
        --rc-addr=127.0.0.1:5572 \
        backblaze-j1900-crypt:/ /srv/rclone/data \
        --s3-no-check-bucket 
ExecStop=/bin/fusermount -uz /srv/rclone/data
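
For reference, after editing the unit I apply it with the usual systemd reload; the service name here is just illustrative:

systemctl daemon-reload
systemctl restart rclone-mount.service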

The rclone config contents with secrets removed.

[backblaze-j1900-data]
type = s3
provider = Other
env_auth = false
access_key_id = <redacted>
secret_access_key = <redacted>
endpoint = https://s3.us-west-002.backblazeb2.com
acl = private

[backblaze-j1900-cache]
type = cache
remote = backblaze-j1900-data:j1900-data
info_age = 1y
chunk_total_size = 250G

[backblaze-j1900-crypt]
type = crypt
remote = backblaze-j1900-cache:
filename_encryption = standard
directory_name_encryption = true
password = <redacted>

A log from the command with the -vv flag

ERROR : oji8hlo6nkbit7mqhkhbfo1cr8/fuk028kg7ppq1i7bll7fkh0eqc/dhacat5443po1je3cq0op49n2iih1htslopaofs5de0in4fm0li0/9036l1dgodng3tiv8ips7v52bk/al5vvjg31bem54hmqm0071b2asv27mus266ma5venh2vcrb3o26kdd7kavhnbon5dif0t3nbk2a44: parent remote (Local file system at /srv/rclone/upload) doesn't support Copy
May 26 14:53:20 lpt rclone[2922208]: ERROR : some/file/name: Couldn't delete: remove /srv/rclone/upload/oji8hlo6nkbit7mqhkhbfo1cr8/fuk028kg7ppq1i7bll7fkh0eqc/dhacat5443po1je3cq0op49n2iih1htslopaofs5de0in4fm0li0/9036l1dgodng3tiv8ips7v52bk/al5vvjg31bem54hmqm0071b2asv27mus266ma5venh2vcrb3o26kdd7kavhnbon5dif0t3nbk2a44: no such file or directory

I very much appreciate your help!

Welcome to the forum.

Do the errors disappear if you remove the --cache-tmp-upload-path option, at least the "doesn't support Copy" one?
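
For example, your ExecStart with just that option dropped and everything else as you posted it:

ExecStart=/usr/bin/rclone mount \
        --config=/root/.config/rclone/rclone.conf \
        --allow-other \
        --cache-chunk-path=/srv/rclone/chunks \
        --cache-workers=8 \
        --cache-dir=/srv/rclone/cache \
        --cache-db-path=/srv/rclone/db/backblaze-j1900-cache.db \
        --no-modtime \
        --rc \
        --rc-enable-metrics \
        --rc-addr=127.0.0.1:5572 \
        backblaze-j1900-crypt:/ /srv/rclone/data \
        --s3-no-check-bucket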

Your observation seems similar to the one in issue #3206, which was addressed by the commit from pull request #4242.

hello and welcome to the forum,

what do you plan to do with the mount?

the cache remote has been deprecated and has known bugs that will never get fixed, as documented here.
so unless you are 1000% sure you need it and can deal with the bugs, switch to the vfs cache.
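
for example, a rough sketch of the same mount using the vfs cache instead (flag names per the vfs docs; you would also point the crypt remote straight at backblaze-j1900-data:j1900-data rather than at the cache remote, and the 250G here is just copied from your cache config):

ExecStart=/usr/bin/rclone mount \
        --config=/root/.config/rclone/rclone.conf \
        --allow-other \
        --cache-dir=/srv/rclone/cache \
        --vfs-cache-mode full \
        --vfs-cache-max-size 250G \
        --no-modtime \
        backblaze-j1900-crypt:/ /srv/rclone/data \
        --s3-no-check-bucket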

Hey, yes, that helped; both errors went away. It is actually closer to what I want, because writes now don't return until they have finished. Thank you!

Thanks for pointing that out. The reason I am using it is that I want a read cache: I pay for download bandwidth but do not have enough local storage to mirror everything.

So your suggestion is to use the VFS cache described here?

Is it possible to configure vfs-cache-max-age=infinity? I want this to be an LRU cache with a size limit, not one that does anything based on the age of objects (except maybe occasionally checking whether they are current, ideally just through metadata, since otherwise this becomes expensive).
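
Something like this is what I have in mind; I do not know whether an "infinity" value is accepted, so a very large duration as a stand-in (flag names from the VFS docs):

        --vfs-cache-mode full \
        --vfs-cache-max-size 250G \
        --vfs-cache-max-age 8760h \

i.e. roughly a year of max-age, so that eviction is effectively driven only by the size limit. If I read the docs right, when the cache exceeds the size limit the least recently accessed files are removed first, which is the LRU behaviour I am after.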

i do not use any cache for streaming media.

if removing --cache-tmp-upload-path fixed the problem and the mount otherwise works as you want, then there is no need to switch to the vfs cache.


Ok. Thank you both for the help!

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.