Sharing cache-dir between multiple identical `rclone mount`s, each in a user's homedir

Aloha, I'm wondering if it's (relatively) safe+sane to share a single --cache-dir=/tmp/rclone.cachedir between multiple rclone mount processes (identical other than the destination mount point)?

Until now, I have been using the default cache-dir (in users' home directories), but because my users have very high overlap in daily file access patterns, I'd like to use a shared cache-dir, IF it's safe to have multiple rclone mount commands share one dir!

The reason this isn't a single mount point linked into user homedirs: I'm using a hacked-up version of csi-rclone (GitHub - wunderio/csi-rclone: CSI driver for rclone), and there's one CSI pod per k8s hardware node, which mounts onto K8s/GKE-assigned temporary file paths that are then injected into the pods. Thus, there's a single "docker" container on each hardware node that's running 1-20 instances of rclone mount s3:/mybucket /tmp/[DIFFERENT_HERE_PER_USER]/mybucket.
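
To make that concrete, each node's container effectively ends up running something like the following (the pod UIDs here are illustrative placeholders, not real paths, and the shared flags are elided):

rclone mount s3:/mybucket /var/lib/kubelet/pods/<POD_UID_USER_A>/volumes/kubernetes.io~csi/mybucket/mount --daemon ...
rclone mount s3:/mybucket /var/lib/kubelet/pods/<POD_UID_USER_B>/volumes/kubernetes.io~csi/mybucket/mount --daemon ...
# ...between 1 and 20 of these, identical apart from the mount target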

I'd like to have all these rclone mount commands on the same docker container share a single --cache-dir=/tmp/rclone.cachedir: is this a tested configuration, known to be insane, known to be sane, or...?

Run the command 'rclone version' and share the full output of the command.

rclone v1.58.0

Which cloud storage system are you using? (eg Google Drive)

S3 with an accelerated endpoint

The command you were trying to run (eg rclone copy /tmp remote:tmp)

The rclone config contents with secrets removed.

(No config file is used: the :s3: on-the-fly remote keeps all the S3 settings on the command line, so the mount command below doubles as the config.)

# NOTE: the mount target path below varies per user, i.e. each
# `rclone mount` process is mounted to a different temp path.
rclone \
  mount \
    :s3:ceres-flights \
    /var/lib/kubelet/pods/b330532b-a3b1-44aa-8f22-bee6c9b98365/volumes/kubernetes.io~csi/ceres-flights/mount \
    --allow-non-empty=true \
    --allow-other=true \
    --attr-timeout=4s \
    --buffer-size=64M \
    --cache-chunk-clean-interval=15m \
    --cache-dir=/tmp/ceres-flights.cache-dir \
    --cache-info-age=72h \
    --checksum=false \
    --daemon \
    --dir-cache-time=15m0s \
    --fast-list=true \
    --human-readable=true \
    --max-read-ahead=256M \
    --poll-interval=0 \
    --s3-chunk-size=256M \
    --s3-disable-checksum=true \
    --s3-endpoint=https://s3.us-west-2.amazonaws.com \
    --s3-env-auth=true \
    --s3-memory-pool-flush-time=5m0s \
    --s3-memory-pool-use-mmap=true \
    --s3-provider=AWS \
    --s3-region=us-west-2 \
    --s3-upload-concurrency=2 \
    --s3-upload-cutoff=256M \
    --s3-use-accelerate-endpoint=true \
    --stats-one-line-date=true \
    --streaming-upload-cutoff=64M \
    --transfers=32 \
    --umask=2 \
    --update=true \
    --use-mmap=true \
    --use-server-modtime=true \
    --user-agent=rclone-ceres-flights/v1 \
    --vfs-cache-max-age=4h \
    --vfs-cache-max-size=64G \
    --vfs-cache-mode=full \
    --vfs-read-ahead=256M \
    --vfs-read-chunk-size-limit=1G \
    --vfs-read-chunk-size=256M \
    --vfs-write-back=30s \
    --vfs-write-wait=4s \
    --write-back-cache=true

Thanks for your help! rclone mount is VERY cool, and I'm really really enjoying the quality of the documentation. Thank you so much; it's amazing how far I've gotten with a complex system design without having to reach out :bowing_man:

No, it's a one-to-one mapping, so each rclone instance should have its own cache dir; they are not implemented to be shared.
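
For example (a minimal, untested sketch; TARGET_PATH and MOUNT_ID are hypothetical placeholders, e.g. MOUNT_ID derived from the pod UID the CSI driver already knows), you could still keep every cache on one filesystem by giving each mount its own subdirectory under a shared parent:

# Hypothetical sketch: one cache subdirectory per mount instance;
# remaining flags as in the command above.
rclone mount :s3:ceres-flights "$TARGET_PATH" \
  --cache-dir=/tmp/ceres-flights.cache-dir/"$MOUNT_ID" \
  --vfs-cache-mode=full

The caches won't be deduplicated across users, and note that --vfs-cache-max-size then applies per mount, so 20 mounts could consume up to 20x that limit in total.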

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.