No data persistence with docker swarm

What is the problem you are having with rclone?

Currently running rclone as an S3-backed volume driver for Docker Swarm. The configuration is three servers writing to one bucket. It works, sort of: when I shut down or move the service that uses the shared volume (in this case, Portainer), it asks me to create credentials again, which leads me to think it isn't properly reusing the data in the S3 bucket.

Run the command 'rclone version' and share the full output of the command.

Note: I was previously running an old dev version; following the answer below, I've updated:

rclone v1.69.2
- os/version: debian 12.10 (64 bit)
- os/kernel: 6.1.0-33-arm64 (aarch64)
- os/type: linux
- os/arch: arm64 (ARMv8 compatible)
- go/version: go1.24.2
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Hetzner S3

The command you were trying to run (eg rclone copy /tmp remote:tmp)

Here's the Docker Compose stack file I'm trying to run:

    version: '3.9'
    services:
      agent:
        image: portainer/agent:lts
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
          - /var/lib/docker/volumes:/var/lib/docker/volumes
        networks:
          - agent_network
        deploy:
          mode: global
          placement:
            constraints: [node.platform.os == linux]
      portainer:
        image: portainer/portainer-ce:lts
        command: -H tcp://tasks.agent:9001 --tlsskipverify
        ports:
          - "9443:9443"
          - "9000:9000"
          - "8000:8000"
        volumes:
          - portainer_data:/data
        networks:
          - agent_network
        deploy:
          mode: replicated
          replicas: 1
          placement:
            constraints: [node.role == manager]
    networks:
      agent_network:
        driver: overlay
        attachable: true
    volumes:
      portainer_data:
        driver: rclone
        driver_opts:
          remote: 's3fs:portainer-data'
          allow_other: 'true'
          vfs_cache_mode: full
          poll_interval: 0
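For what it's worth, a quick way to check whether the data actually lands in the bucket (rather than only in a node-local VFS cache) is to inspect both the Docker volume and the bucket directly. A sketch, assuming the rclone volume plugin is installed under the alias `rclone` and the stack was deployed with a name such as `portainer` (the stack name prefix is a placeholder, adjust for your deployment):

```shell
# Confirm the rclone volume plugin is installed and enabled on this node
docker plugin ls

# Inspect the volume Docker created from the stack file
docker volume inspect portainer_portainer_data

# List what Portainer has actually written into the bucket
rclone ls s3fs:portainer-data
```

If `rclone ls` shows the Portainer data files from every node, the bucket side is fine and the problem is more likely in how the volume is mounted when the service moves.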

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

    [s3fs]
    type = s3
    provider = Other
    access_key_id = REDACTED
    secret_access_key = REDACTED
    endpoint = hel1.your-objectstorage.com
    acl = private
    region = hel1
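A sanity check on the remote itself, independent of Docker, can rule out the rclone config (remote and bucket names taken from the config above; `persistence-test.txt` is just a throwaway name):

```shell
# List buckets visible to the configured credentials
rclone lsd s3fs:

# Create an empty marker file in the bucket, then confirm it is visible
rclone touch s3fs:portainer-data/persistence-test.txt
rclone ls s3fs:portainer-data
```

Run the last command from each of the three servers; if the marker file shows up everywhere, the rclone config is consistent across nodes.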

A log from the command that you were trying to run with the -vv flag

No specific log; as mentioned, it works, but when the Portainer service switches servers it acts as if it were a fresh install.

welcome to the forum,

that is a many-years-old, custom-compiled dev version of rclone from an out-of-date repository.

  1. uninstall that ancient version
  2. https://rclone.org/install/#script-installation
  3. test again
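For step 2, the script installation the linked page describes boils down to:

```shell
# Script install from https://rclone.org/install/#script-installation
sudo -v ; curl https://rclone.org/install.sh | sudo bash
```

Then `rclone version` should report a current stable release.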

Hello,

I updated rclone and am now running the following version:

rclone v1.69.2
- os/version: debian 12.10 (64 bit)
- os/kernel: 6.1.0-33-arm64 (aarch64)
- os/type: linux
- os/arch: arm64 (ARMv8 compatible)
- go/version: go1.24.2
- go/linking: static
- go/tags: none

I also tried with a simple SFTP share and hit the same problem. So it's either my Docker Compose config or my rclone config. I'm at a loss ...