Rclone gets killed randomly when trying to sync from AWS S3 to DO Spaces

What is the problem you are having with rclone?

Using Dockerized rclone/rclone:1.69

Trying to sync from source (AWS S3) to backup (DigitalOcean Spaces)

It randomly gets killed after a few hours of running.

Run the command 'rclone version' and share the full output of the command.

/data # rclone version
rclone v1.69.1
- os/version: alpine 3.21.2 (64 bit)
- os/kernel: 5.10.25-linuxkit (aarch64)
- os/type: linux
- os/arch: arm64 (ARMv8 compatible)
- go/version: go1.24.0
- go/linking: static
- go/tags: none

Are you on the latest version of rclone? You can validate by checking the version listed here: Rclone downloads

Yes. Latest version

Which cloud storage system are you using? (eg Google Drive)

Using AWS S3 as the primary source and DigitalOcean Spaces as the backup.

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone sync s3:wenote-b spaces:wenote-b --metadata --verbose

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

/data # rclone config redacted
[s3]
type = s3
env_auth = false
access_key_id = XXX
secret_access_key = XXX
region = eu-central-1
location_constraint = eu-central-1
acl = private
provider = AWS

[spaces]
type = s3
env_auth = false
access_key_id = XXX
secret_access_key = XXX
endpoint = fra1.digitaloceanspaces.com
acl = private
provider = DigitalOcean

[wasabi]
type = s3
env_auth = false
access_key_id = XXX
secret_access_key = XXX
region = eu-central-1
endpoint = s3.eu-central-1.wasabisys.com
acl = private
provider = Wasabi
### Double check the config for sensitive info before posting publicly
/data # 

A log from the command that you were trying to run with the -vv flag

/data # rclone sync s3:wenote-b spaces:wenote-b --metadata --verbose -v
2025/03/11 16:34:51 DEBUG : rclone: Version "v1.69.1" starting with parameters ["rclone" "sync" "s3:wenote-b" "spaces:wenote-b" "--metadata" "--verbose" "-v"]
2025/03/11 16:34:51 DEBUG : Creating backend with remote "s3:wenote-b"
2025/03/11 16:34:51 DEBUG : Using config file from "/config/rclone/rclone.conf"
2025/03/11 16:34:51 DEBUG : Creating backend with remote "spaces:wenote-b"
2025/03/11 16:34:54 DEBUG : user-1004/android-wenote-sqlite.zip: Size and modification time the same (differ by 0s, within tolerance 1ns)
2025/03/11 16:34:54 DEBUG : user-1004/android-wenote-sqlite.zip: Unchanged skipping
2025/03/11 16:34:54 DEBUG : user-10/android-wenote-sqlite.zip: Size and modification time the same (differ by 0s, within tolerance 1ns)
2025/03/11 16:34:54 DEBUG : user-10/android-wenote-sqlite.zip: Unchanged skipping
2025/03/11 16:34:54 DEBUG : user-100/android-wenote-sqlite.zip: Size and modification time the same (differ by 0s, within tolerance 1ns)

P.S. I have ensured there are sufficient resources when running rclone. Here is the output from top:

Mem: 1145624K used, 891108K free, 124056K shrd, 10968K buff, 272344K cached
CPU:   2% usr   1% sys   0% nic  95% idle   0% io   0% irq   0% sirq
Load average: 0.25 0.34 0.33 3/891 119
  PID  PPID USER     STAT   VSZ %VSZ CPU %CPU COMMAND
  101    14 root     S    1258m  63%   2   4% rclone sync s3:wenote-b spaces:wenote-b --metadata --verbose
   14     0 root     S     1792   0%   3   0% sh
  111     0 root     S     1764   0%   3   0% sh
  119   111 root     R     1692   0%   0   0% top
    1     0 root     S     1684   0%   0   0% /usr/sbin/crond -f -l 8

Welcome to the forum,

Hi, the number of files should be in the range of 600,000 to 700,000

The files are usually image files, each less than 1 MB.

Currently, when it fails, I just re-run the same command.

My question is: will the destination eventually be properly synced, or will some files be corrupted?

Thanks. I will try your suggestions.

If rclone is getting killed with OOM, you have to assume there could be corrupt partial uploads taking up space in the cloud.

--s3-leave-parts-on-error

I didn't use --s3-leave-parts-on-error in my command, and it seems the default value is false.

If DigitalOcean Spaces implements this correctly, I assume there won't be any corrupted files.

Even if there is corruption, can rclone detect it and replace the corrupted file from the source?

IMO, it's not correct to assume that.
If rclone hard crashes, then rclone might not be able to "call abort upload on a failure".
My point is there is no way to know for sure.


Maybe or maybe not; never tested that.
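For what it's worth, one way to verify the destination independently is rclone check, which compares sizes and hashes (where available) between the two remotes without transferring data. A minimal sketch, reusing the bucket names from this thread:

rclone check s3:wenote-b spaces:wenote-b --one-way --missing-on-dst missing.txt --differ differ.txt

Anything reported in missing.txt or differ.txt could then be re-copied from the source with another sync or copy run.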

Remove unfinished multipart uploads

Hi, I managed to resolve the OOM issue by following the steps at
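In case it helps anyone finding this thread later, cleaning up stray multipart uploads with rclone itself is roughly along these lines (a sketch, assuming DigitalOcean Spaces supports the same S3 backend commands; the exact steps in the linked post may differ):

rclone backend list-multipart-uploads spaces:wenote-b
rclone backend cleanup -o max-age=24h spaces:wenote-b

The first command lists any unfinished multipart uploads in the bucket; the second removes those older than 24 hours.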

Going forward, may I know what the recommended way is to keep both S3 and DO Spaces in sync daily?

Is using

rclone sync s3:wenote-b spaces:wenote-b --metadata --verbose

still recommended? Or should I use the workaround mentioned in Big syncs with millions of files · rclone/rclone Wiki · GitHub?

Thank you.

Short term, the workaround you mentioned is really the only way.
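For reference, the wiki workaround boils down to listing both sides up front and only walking the differences, roughly like this (a sketch using the bucket names from this thread; see the wiki page for the authoritative steps):

rclone lsf --files-only -R s3:wenote-b | sort > src
rclone lsf --files-only -R spaces:wenote-b | sort > dst
comm -23 src dst > need-to-transfer
comm -13 src dst > need-to-delete
rclone copy --files-from need-to-transfer --no-traverse --metadata s3:wenote-b spaces:wenote-b
rclone delete --files-from need-to-delete spaces:wenote-b

This avoids holding both full listings in memory at once during the sync, which is what tends to trigger the OOM on buckets with hundreds of thousands of objects. Note it only catches files missing on one side by name; files that exist on both sides but differ would still need a check or sync pass.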

Moving forward, hopefully the proper fix already mentioned earlier will be included in an upcoming rclone release. If you want, you can try it now. Here you can download a beta version with the relevant patch.

And please provide feedback here: