What is the problem you are having with rclone?
I am experiencing issues with rclone mount getting stuck. After trying different parameters, I see that when reading from the AWS S3 backend it always gets stuck at the same point.
In summary, the setup:
- A container runs rclone mount with `--vfs-cache-mode full`.
- This container uses a volume mounted with `rshared`, so that changes made from other containers are visible to rclone.
- An application running in a separate container accesses the files in the directory (this application mounts the same Docker volume as rclone).
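For reference, the volume wiring between the two containers looks roughly like the sketch below. This is a hypothetical reconstruction, not our exact deployment: the image name `my-app-image`, the host path `/mnt/dy-volumes`, and the container names are placeholders.

```shell
# rclone container: the bind mount uses rshared propagation so the FUSE
# mount rclone creates under /dy-volumes propagates back to the host and
# into any other container sharing the same bind.
docker run -d --name rclone-mounter \
  --cap-add SYS_ADMIN --device /dev/fuse \
  --mount type=bind,source=/mnt/dy-volumes,target=/dy-volumes,bind-propagation=rshared \
  rclone/rclone mount MOUNT_REMOTE:bucket/path /dy-volumes/home/smu/work/workspace \
  --vfs-cache-mode full --allow-other

# application container: sees the FUSE mount through the same shared bind,
# and changes it makes are visible to rclone.
docker run -d --name user-app \
  --mount type=bind,source=/mnt/dy-volumes,target=/dy-volumes,bind-propagation=rshared \
  my-app-image
```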
I attached an inotify monitor to the path (in the rclone container) just before opening my project from the application (see file-inotify_monitoring-log in the attached gist). Please note that the event starts at `08:48:36`, as shown by inotify.
Result: from this point onwards the app hangs forever; there are no errors in my app that point to anything being wrong.
Run the command 'rclone version' and share the full output of the command.
Note that this is running in the official rclone Docker container (see file-docker_inspect-log in the attached gist).
rclone --version
rclone v1.73.1
- os/version: alpine 3.23.3 (64 bit)
- os/kernel: 6.8.0-1050-aws (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.26.0
- go/linking: static
- go/tags: none
Which cloud storage system are you using? (eg Google Drive)
Using AWS S3
The command you were trying to run (eg rclone copy /tmp remote:tmp)
Note: credentials are regenerated on each run so they don't need obfuscating
rclone --config /tmp/rclone.conf mount MOUNT_REMOTE:osparc-master-click/07049712-2380-11f1-8938-0242ac170027/c2a482a7-5957-5d0b-a5ed-6889921e2fda/workspace /dy-volumes/home/smu/work/workspace --vfs-cache-mode full --vfs-read-ahead 16M --vfs-cache-max-size 482947891200 --vfs-cache-min-free-space 5G --vfs-cache-poll-interval 1m --vfs-write-back 5s --cache-dir /vfs-cache/0 --dir-cache-time 10m --attr-timeout 1m --no-modtime --max-buffer-memory 16M --retries 3 --retries-sleep 30s --transfers 200 --buffer-size 16M --checkers 8 --s3-upload-concurrency 5 --s3-chunk-size 16M --order-by size,mixed --rc --rc-addr=0.0.0.0:8000 --rc-enable-metrics --rc-user='044ce39e-ddc3-4ca6-a71c-64151cf602d4' --rc-pass='e24c68ec-bf84-48e7-824a-0e4abda601b5' --allow-non-empty --allow-other --max-connections 300 --vfs-read-chunk-streams 100 --vfs-read-chunk-size 4M -vv --s3-upload-cutoff 0
Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.
[MOUNT_REMOTE]
type = s3
access_key_id = XXX
secret_access_key = XXX
region = us-east-1
acl = private
provider = AWS
### Double check the config for sensitive info before posting publicly
A log from the command that you were trying to run with the -vv flag
Gist containing all logs: asking for help with rclone mount issue · GitHub
Log from the command (CSV format): file-rclone_mount_container-log
Log from inotify monitoring the folder at the time of the event: file-inotify_monitoring-log
For reference, I was using the following command to generate the above:
inotifywait -m -r --timefmt '%T' --format '%T %w%f %e' /dy-volumes/home/smu/work/workspace/WPT\ Exposure.smash_Results/
Additional context about my use case
- Our platform already has a system in place, based on rclone sync, for downloading and uploading files (it works like a charm). We want to replace it with rclone mount because users have very big working directories.
- One of the core ideas behind the platform is to make sure that whatever data users have is given back to them when they reopen their "applications". The platform's main goal is to let non-technical users run different types of applications with zero setup. We have no say over how the files are accessed, what they contain, or how big they are.
- This link shows the sh command used in the rclone container (it mainly deals with edge cases and allows a clean shutdown).
- The remote control interface is used to figure out when rclone is no longer active before shutting down the user's application (and rclone). The POST `core/stats` endpoint is monitored to determine that the rclone mount has finished before shutting it down.
- Since the lifecycle is tied to rclone's internals, when rclone gets stuck, the platform also can't cleanly close the user's application and gets stuck as well.
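The shutdown check can be sketched as below. This is a simplified stand-in for our actual logic, reusing the rc address and credentials from the mount command above; `core/stats` only includes a `"transferring"` array while transfers are in flight, which is what the helper keys on (a crude string check rather than proper JSON parsing, for illustration only).

```shell
RC_URL="http://localhost:8000/core/stats"
RC_USER='044ce39e-ddc3-4ca6-a71c-64151cf602d4'
RC_PASS='e24c68ec-bf84-48e7-824a-0e4abda601b5'

# Decide "idle" from a core/stats JSON payload: the "transferring" key is
# only present while uploads/downloads are active.
stats_idle() {
  case "$1" in
    *'"transferring"'*) return 1 ;;  # transfers still in flight
    *) return 0 ;;
  esac
}

# Fetch current stats from the rc interface and test them.
rclone_idle() {
  stats="$(curl -s -u "$RC_USER:$RC_PASS" -X POST "$RC_URL")" || return 1
  stats_idle "$stats"
}
```

The platform would then loop on something like `until rclone_idle; do sleep 5; done` before tearing the containers down; when the mount is stuck, that loop (and hence the shutdown) never completes.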
I can easily:
- try out new rclone parameters and report with a gist containing the results
- run more monitoring tools in the containers