Rclone mount S3 Cloudflare R2 hangs

What is the problem you are having with rclone?

I'm trying to mount Cloudflare R2 so that I can browse it locally and then back it up with Duplicacy, but the mount command just hangs, printing a Statfs line from time to time. Meanwhile, the mount point is not accessible:

/mnt> l | grep cloud
ls: cannot access 'cloudflare_r2': Permission denied
d?????????  ? ?    ?       ?            ? cloudflare_r2/

I'm able to rclone sync or rclone ls instantly, but mount just hangs. Any ideas?

Run the command 'rclone version' and share the full output of the command.

rclone version
rclone v1.60.0
- os/version: opensuse-leap 15.3 (64 bit)
- os/kernel: 5.19.2-x86_64-linode156 (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.19.2
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Cloudflare R2

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone mount "Cloudflare R2:/" /mnt/cloudflare_r2/ -vv

The rclone config contents with secrets removed.

[Cloudflare R2]
type = s3
provider = Cloudflare
access_key_id = 
secret_access_key = 
endpoint = 

A log from the command with the -vv flag

rclone mount "Cloudflare R2:/" /mnt/cloudflare_r2/ -vv
2022/10/24 17:02:59 DEBUG : rclone: Version "v1.60.0" starting with parameters ["rclone" "mount" "Cloudflare R2:/" "/mnt/cloudflare_r2/" "-vv"]
2022/10/24 17:02:59 DEBUG : Creating backend with remote "Cloudflare R2:/"
2022/10/24 17:02:59 DEBUG : Using config file from "/root/.config/rclone/rclone.conf"
2022/10/24 17:02:59 DEBUG : fs cache: renaming cache item "Cloudflare R2:/" to be canonical "Cloudflare R2:"
2022/10/24 17:02:59 INFO  : S3 root: poll-interval is not supported by this remote
2022/10/24 17:02:59 DEBUG : S3 root: Mounting on "/mnt/cloudflare_r2/"
2022/10/24 17:02:59 DEBUG : : Root:
2022/10/24 17:02:59 DEBUG : : >Root: node=/, err=<nil>
2022/10/24 17:03:20 DEBUG : : Statfs:
2022/10/24 17:03:20 DEBUG : : >Statfs: stat={Blocks:274877906944 Bfree:274877906944 Bavail:274877906944 Files:1000000000 Ffree:1000000000 Bsize:4096 Namelen:255 Frsize:4096}, err=<nil>
2022/10/24 17:03:23 DEBUG : : Statfs:
2022/10/24 17:03:23 DEBUG : : >Statfs: stat={Blocks:274877906944 Bfree:274877906944 Bavail:274877906944 Files:1000000000 Ffree:1000000000 Bsize:4096 Namelen:255 Frsize:4096}, err=<nil>
2022/10/24 17:04:20 DEBUG : : Statfs:
2022/10/24 17:04:20 DEBUG : : >Statfs: stat={Blocks:274877906944 Bfree:274877906944 Bavail:274877906944 Files:1000000000 Ffree:1000000000 Bsize:4096 Namelen:255 Frsize:4096}, err=<nil>
2022/10/24 17:04:23 DEBUG : : Statfs:
2022/10/24 17:04:23 DEBUG : : >Statfs: stat={Blocks:274877906944 Bfree:274877906944 Bavail:274877906944 Files:1000000000 Ffree:1000000000 Bsize:4096 Namelen:255 Frsize:4096}, err=<nil>
2022/10/24 17:05:20 DEBUG : : Statfs:
2022/10/24 17:05:20 DEBUG : : >Statfs: stat={Blocks:274877906944 Bfree:274877906944 Bavail:274877906944 Files:1000000000 Ffree:1000000000 Bsize:4096 Namelen:255 Frsize:4096}, err=<nil>
2022/10/24 17:05:23 DEBUG : : Statfs:
2022/10/24 17:05:23 DEBUG : : >Statfs: stat={Blocks:274877906944 Bfree:274877906944 Bavail:274877906944 Files:1000000000 Ffree:1000000000 Bsize:4096 Namelen:255 Frsize:4096}, err=<nil>

I just tried mounting a small subdirectory in case rclone was trying to read the whole bucket or something, and it's still hanging.

hi,

rclone mount does not return to the command prompt; to run it in the background, you might try --daemon.

If a different user needs to access the mount, you might try --allow-other.
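A sketch combining both suggestions, reusing the remote and mount point from the thread (the log-file path is just an example):

```shell
# Run the mount in the background and let users other than the mounting
# user access it. When mounting as a non-root user, --allow-other also
# requires "user_allow_other" to be enabled in /etc/fuse.conf.
rclone mount "Cloudflare R2:/" /mnt/cloudflare_r2/ \
    --daemon \
    --allow-other \
    -vv --log-file /var/log/rclone-r2.log
```

With --daemon the process detaches, so the -vv output goes to the file given by --log-file instead of the terminal.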


Thanks, it was --allow-other. I was mounting as root and accessing the mount as a different user. Once I accessed it as root or added --allow-other, it worked with no problem.

Now I'm curious if there's a way to optimize for speed here so that duplicacy backs up this mount as quickly as possible.

well, that can be hard to answer; there are so many variables.
imho, you need to establish a baseline first:
--- what is the result of a speed test?
--- using rclone copy, what is the average transfer speed to Cloudflare?
--- how does that compare to the speed through the rclone mount?
--- what is the mix of files: many small ones, a few large ones, or what?
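One rough way to get those numbers (a sketch; the `speedtest/` prefix is just a scratch path, and 100 MiB is an arbitrary test size):

```shell
# Create a 100 MiB test file.
dd if=/dev/urandom of=/tmp/rclone-test.bin bs=1M count=100

# Baseline: how fast is a direct rclone copy to the remote?
time rclone copy /tmp/rclone-test.bin "Cloudflare R2:speedtest/" -P

# Compare: how fast is reading the same file back through the mount?
time cp /mnt/cloudflare_r2/speedtest/rclone-test.bin /tmp/rclone-test-back.bin

# Clean up the test object.
rclone delete "Cloudflare R2:speedtest/rclone-test.bin"
```

If the direct copy and the read through the mount are far apart, the mount settings are the place to look; if both are slow, it's the network or the remote.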

might increase --transfers and --chunk-size.

fwiw, i prefer to back up to local storage and then rclone sync/copy/move --immutable those files to the cloud.
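A minimal sketch of that approach, assuming Duplicacy writes its storage to a hypothetical local directory `/backup/duplicacy`:

```shell
# Back up locally first, then push the finished backup set to R2.
# --immutable refuses to modify objects that already exist, which suits
# append-mostly backup storage and guards against accidental overwrites.
rclone copy /backup/duplicacy "Cloudflare R2:duplicacy-backup" \
    --immutable \
    --transfers 16 \
    -P
```

This keeps the backup tool working against fast local disk and leaves the cloud transfer to rclone, where its speed can be tuned independently.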

and depending on how Duplicacy works, you might need a VFS file cache mode such as --vfs-cache-mode=writes. check the debug log for messages indicating that it's needed.
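A sketch of a mount tuned for a backup workload; the flag values are starting points to experiment with, not recommendations:

```shell
# --vfs-cache-mode writes buffers writes on local disk, which is needed by
# programs that seek within open files (as many backup tools do).
# --s3-chunk-size raises the multipart upload chunk size; --transfers raises
# the number of parallel transfers. Both trade memory for throughput.
rclone mount "Cloudflare R2:/" /mnt/cloudflare_r2/ \
    --allow-other \
    --vfs-cache-mode writes \
    --vfs-cache-max-size 10G \
    --transfers 16 \
    --s3-chunk-size 16M \
    -vv
```

--vfs-cache-max-size caps how much local disk the write cache may use; size it to the largest files the backup produces.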

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.