Rclone Docker volume ignores path when server is restarted

What is the problem you are having with rclone?

I installed the rclone Docker plugin as a managed plugin. I configured the volume to use the path bucket/dir (config details are at the bottom), and everything works fine. When I list the files in the volume (docker exec borg ls -la /backup) I see the files as expected.

When I restart the server and attempt to list the files again, I see the buckets at the root of the remote instead; something like this:

.
..
bucket
another-bucket
yet-another-bucket

After a server restart, the volume that gets remounted completely ignores the path that was configured. I have to delete all containers using the volumes, delete the volumes themselves (I have a few of them), and recreate them in order to get the correct path mounted again.
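
For reference, the manual recovery looks roughly like this (container and volume names are taken from the compose file below; adjust to your setup):

docker-compose down        # stop and remove the containers using the volume
docker volume rm backup    # drop the stale volume so the plugin forgets the bad mount
docker-compose up -d       # recreate the volume and containers with the correct path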

Version

rclone/docker-volume-rclone:amd64 (latest as of today), installed as a managed plugin per the Docker Volume Plugin documentation

Note: this issue has persisted for several months now; I just retested with the latest release of the plugin.
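
For completeness, the managed install was done roughly as described in the rclone docs, with an alias so the driver is addressable as rclone:latest in compose:

sudo mkdir -p /var/lib/docker-plugins/rclone/config
sudo mkdir -p /var/lib/docker-plugins/rclone/cache
docker plugin install rclone/docker-volume-rclone:amd64 args="-v" --alias rclone --grant-all-permissions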

Which cloud storage system are you using?

S3-compatible object storage on OVH

The rclone config contents with secrets removed. (There is no standalone rclone.conf; the remote is defined entirely through driver_opts in docker-compose.)

services:
  borg:
    image: ghcr.io/borgmatic-collective/borgmatic
    container_name: borg
    volumes:
      - backup:/backup
      - **dir-to-backup**:/data
    env_file: .env
    restart: unless-stopped


volumes:
  backup:
    name: backup
    driver: rclone:latest
    driver_opts:
      type: s3
      path: bucket/dir
      allow_other: 'true'
      vfs-cache-mode: full
      poll-interval: 0
      s3-provider: Other
      s3-env-auth: 'false'
      s3-access-key-id: **secret**
      s3-secret-access-key: **secret**
      s3-acl: private
      s3-region: de
      s3-location-constraint: de
      s3-endpoint: https://s3.de.io.cloud.ovh.net/

Output of docker volume inspect backup

I inspected the volume both before and after the restart; in both cases the Options show the path as expected:

[
    {
        "CreatedAt": "2025-05-02T22:07:07Z",
        "Driver": "rclone:latest",
        "Labels": {
            "com.docker.compose.project": "tools",
            "com.docker.compose.version": "1.27.4",
            "com.docker.compose.volume": "backup"
        },
        "Mountpoint": "/mnt/backup",
        "Name": "backup",
        "Options": {
            "allow_other": "true",
            "path": "**bucket**/**dir**",
            "poll-interval": "0",
            "s3-access-key-id": "**secret**",
            "s3-acl": "private",
            "s3-endpoint": "https://s3.de.io.cloud.ovh.net/",
            "s3-env-auth": "false",
            "s3-location-constraint": "de",
            "s3-provider": "Other",
            "s3-region": "de",
            "s3-secret-access-key": "**secret**",
            "type": "s3",
            "vfs-cache-mode": "full"
        },
        "Scope": "local",
        "Status": {
            "Mounts": []
        }
    }
]

Yes, this has always happened to me. Unlike Docker NFS volumes and others, the rclone volume does not like reconnecting to the remote after the server goes down. You have to delete the volume and remake it.

I just incorporate that into the workflow via Ansible.

- name: Register downloads volume
  community.docker.docker_volume_info:
    name: '{{ downloads_volume }}'
  register: downloads_volume_result

- name: Remove existing downloads volume
  when: downloads_volume_result.exists
  community.docker.docker_volume:
    name: '{{ downloads_volume }}'
    state: absent
  register: remove_download_volume
  retries: 5
  delay: 10
  until: remove_download_volume is succeeded

Then I just remake the volume as I spin up the containers that depend on it, as sketched below.
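
The recreate step is a plain docker_volume task; a minimal sketch, assuming driver options mirroring the compose file above (downloads_path is an illustrative variable):

- name: Create downloads volume
  community.docker.docker_volume:
    name: '{{ downloads_volume }}'
    state: present
    driver: 'rclone:latest'
    driver_options:
      type: s3
      path: '{{ downloads_path }}'
      allow_other: 'true'
      vfs-cache-mode: full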
