Rclone Container Help

Posted this on r/docker, but haven't gotten much of a response:

I have one container running rclone, which just mounts my remote drive to a volume. I want to use that volume in another container (for now, a test Ubuntu container). But when I launch the second container, the directory the volume is mapped to is empty. Here are my run commands:

docker run -d \
    --name rclonemount \
    --volume /srv/dev-disk-by-label-xdata/docker/rclone:/rclone \
    --volume /etc/passwd:/etc/passwd:ro \
    --volume /etc/group:/etc/group:ro \
    --volume rclone-cvault:/mnt/c-vault \
    --device /dev/fuse \
    --cap-add SYS_ADMIN \
    --security-opt apparmor:unconfined \
    rclone/rclone \
    mount c-vault: /mnt/c-vault --config /rclone/config/rclone.conf --allow-other --attr-timeout 1000h --buffer-size 64M --dir-cache-time 1000h --poll-interval 15s --log-level INFO --log-file /rclone/logs/mount.log --timeout 1h --umask 002 --rc --rc-addr 127.0.0.1:5572
docker run -dti --name=t1 --volume rclone-cvault:/mnt/c-vault weaveworks/ubuntu

After exec-ing into the rclonemount container, I can see that /mnt/c-vault (the dir mapped to the named volume) has the correct files and dirs from my remote drive. Excellent.

But if I exec into container t1, /mnt/c-vault (also mapped to the named volume) shows as empty. Any reason why this is occurring?

This has come up before on the forum...

I think this should solve your problem - let us know what happens!
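
If I remember right, the reason it looks empty is that the rclone FUSE mount is created inside the first container after the volume has already been attached, and the default mount propagation is private, so the mount never reaches the named volume's directory that the second container sees. A quick way to confirm (rough commands, output paraphrased, not copied from a real run):

docker exec rclonemount sh -c 'mount | grep /mnt/c-vault'
# should show something like: c-vault: on /mnt/c-vault type fuse.rclone (rw,...)
docker exec t1 sh -c 'mount | grep /mnt/c-vault'
# shows nothing - t1 only sees the plain (empty) volume directory underneath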

I was able to solve it (kind of) by using the :shared flag. I was hoping to avoid binding to a local directory and just use a named volume instead, but this works well enough. I also created a mergerfs container (kennyparsons/docker-mergerfs) to handle my local buffer merged with my rclone remote. It works really well.

version: "2"
services:

  mergerfs:
    image: kennyparsons/docker-mergerfs
    container_name: mergerfs
    hostname: mergerfs
    cap_add:
      - SYS_ADMIN
    devices:
      - /dev/fuse
    restart: always
    volumes:
      - /root/test/folder1:/mnt/folder1
      - /root/test/folder2:/mnt/folder2
      - /srv/dev-disk-by-label-xdata/docker/rclone/mnt/c-vault:/mnt/c-vault
      - /srv/dev-disk-by-label-xdata/docker/rclone/mnt/gvault:/mnt/gvault:shared
    environment:
      - MOUNTPOINT=gvault
      - OPTIONS=async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=auto-full
      - SOURCEDIRS=/mnt/folder1:/mnt/folder2:/mnt/c-vault

  rclonemount:
    container_name: rclonemount
    hostname: rclonemount
    image: rclone/rclone
    cap_add:
      - SYS_ADMIN
    devices:
      - /dev/fuse
    security_opt:
      - apparmor:unconfined
    volumes:
      - /srv/dev-disk-by-label-xdata/docker/rclone:/rclone
      - /etc/passwd:/etc/passwd:ro
      - /etc/group:/etc/group:ro
      - /srv/dev-disk-by-label-xdata/docker/rclone/mnt/c-vault:/mnt/c-vault:shared
    command: "mount c-vault: /mnt/c-vault --config /rclone/config/rclone.conf --allow-other --attr-timeout 1000h --buffer-size 64M --dir-cache-time 1000h --poll-interval 15s --log-level INFO --log-file /rclone/logs/mount.log --timeout 1h --umask 002 --rc --rc-addr 127.0.0.1:5572"

Great

Sorry, my Docker-fu isn't powerful enough for that question :wink:

Great, and thanks for sharing the config.

@Animosity022 this whole project stemmed from your recommendation in your GitHub scripts/mounts. I'm trying to containerize the services to be as flexible as possible when deploying. Have a look and let me know what you think.
