Switching from Synology Cloud Sync to rclone

I have been using Synology Cloud Sync for a while to sync data from my NAS to S3.

Is it possible to switch from Cloud Sync to rclone and have it not reupload data? One problem I can see is that the two products use different metadata tags to track source timestamps.

Thank you in advance for any ideas/experiences…

it should be possible.
need to run a simple test on just one single file using -vv --dry-run


might try https://rclone.org/s3/#avoiding-head-requests-to-read-the-modification-time
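a quick sketch of that single-file test. src: and dst: are placeholder remote names, not from this thread; substitute your own:

```shell
# dry-run one file so nothing is uploaded; -vv explains why rclone
# would (or would not) transfer it
rclone copy --dry-run -vv "src:path/to/one-file.bin" "dst:bucket/folder"

# if the mtime comparison fails, try the fallback comparisons too
rclone copy --dry-run -vv --size-only "src:path/to/one-file.bin" "dst:bucket/folder"
rclone copy --dry-run -vv --checksum  "src:path/to/one-file.bin" "dst:bucket/folder"
```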

Yeah I tried --dry-run and it looks like it wants to reupload all the data again.

Thanks for the link to that documentation. It looks promising. Will try it out and report back…

Tried the --checksum option but it shows:

--checksum is in use but the source and destination have no hashes in common

(Source is SMB, target is S3. Not sure of the problem.)

The --size-only option appears to work ok, but maybe not the ideal option.

Surprising (to me) is that Synology Cloud Sync and rclone both use the S3 metadata tag x-amz-meta-mtime. Guess there are differences in precision or something.

i am also surprised.
i have a couple of synboxes, but never used their cloud sync.


for a deeper look, can use

  • rclone lsf --format= on the source and on the dest file
  • --dump=headers

What should I be using after the equals sign? (Before the next parameter, which would be the source or destination specifier, of course.)

pick the same file, on the source and the dst.
for both, post output of rclone lsf --format=sthpM -vv

Hash works on S3 but apparently not on SMB.

S3:

# rclone lsf --format=sthpM -vv aws-rclone:/redacted-s3-bucket-name/Duplicati/redacted-folder-name/duplicati-20170927T234403Z.dlist.zip.aes
2025/11/04 09:00:35 DEBUG : rclone: Version "v1.71.2" starting with parameters ["rclone" "lsf" "--format=sthpM" "-vv" "aws-rclone:/redacted-s3-bucket-name/Duplicati/redacted-folder-name/duplicati-20170927T234403Z.dlist.zip.aes"]
2025/11/04 09:00:35 DEBUG : Creating backend with remote "aws-rclone:/redacted-s3-bucket-name/Duplicati/redacted-folder-name/duplicati-20170927T234403Z.dlist.zip.aes"
2025/11/04 09:00:35 DEBUG : Using config file from "/root/.config/rclone/rclone.conf"
2025/11/04 09:00:35 DEBUG : fs cache: renaming child cache item "aws-rclone:/redacted-s3-bucket-name/Duplicati/redacted-folder-name/duplicati-20170927T234403Z.dlist.zip.aes" to be canonical for parent "aws-rclone:redacted-s3-bucket-name/Duplicati/redacted-folder-name"
[...clip...]
158733;2017-09-27 16:46:00;611e89e459f1668c4a246ed4b04e8864;duplicati-20170927T234403Z.dlist.zip.aes;{"btime":"2021-11-11T02:05:23Z","content-type":"application/octet-stream","mtime":"2017-09-27T16:46:00-07:00","tier":"STANDARD_IA"}

SMB:

# rclone lsf --format=sthpM -vv nas:/Backups/Duplicati/redacted-folder-name/duplicati-20170927T234403Z.dlist.zip.aes
2025/11/04 08:59:45 DEBUG : rclone: Version "v1.71.2" starting with parameters ["rclone" "lsf" "--format=sthpM" "-vv" "nas:/Backups/Duplicati/redacted-folder-name/duplicati-20170927T234403Z.dlist.zip.aes"]
2025/11/04 08:59:45 DEBUG : Creating backend with remote "nas:/Backups/Duplicati/redacted-folder-name/duplicati-20170927T234403Z.dlist.zip.aes"
2025/11/04 08:59:45 DEBUG : Using config file from "/root/.config/rclone/rclone.conf"
2025/11/04 08:59:45 DEBUG : smb://rclone@nas:445/Backups/Duplicati/redacted-folder-name: Using root directory "Backups/Duplicati/redacted-folder-name"
2025/11/04 08:59:45 DEBUG : fs cache: renaming child cache item "nas:/Backups/Duplicati/redacted-folder-name/duplicati-20170927T234403Z.dlist.zip.aes" to be canonical for parent "nas:Backups/Duplicati/redacted-folder-name"
[...clip...]
2025/11/04 08:59:45 ERROR : duplicati-20170927T234403Z.dlist.zip.aes: Failed to read hash: hash type not supported
158733;2017-09-27 16:46:00;;duplicati-20170927T234403Z.dlist.zip.aes;{}

i have a howto guide about that.
https://forum.rclone.org/t/how-to-access-smb-samba-with-rclone/42754


both files have the same 2017-09-27 16:46:00 ??


post the output of rclone config redacted

Yes, although rclone is probably getting that date from the object metadata tag on the S3 side, right? (Which, if true, means maybe it is interpreting the Cloud Sync timestamp properly?)

[aws-rclone]
type = s3
provider = AWS
access_key_id = XXX
secret_access_key = XXX
region = us-west-2
acl = private

[nas]
type = smb
host = XXX
user = XXX
pass = XXX

I’ll take a look at the guide you posted… thanks!

maybe.
for just that one single file, run rclone copy --dry-run -vv and post the debug log.

Interesting, I might need to test with more files.

# rclone copy --dry-run -vv nas:/Backups/Duplicati/redacted-folder-name/duplicati-20170927T234403Z.dlist.zip.aes aws-rclone:/redacted-s3-bucket-name/Duplicati/redacted-folder-name
2025/11/04 11:27:41 DEBUG : rclone: Version "v1.71.2" starting with parameters ["rclone" "copy" "--dry-run" "-vv" "nas:/Backups/Duplicati/redacted-folder-name/duplicati-20170927T234403Z.dlist.zip.aes" "aws-rclone:/redacted-s3-bucket-name/Duplicati/redacted-folder-name"]
2025/11/04 11:27:41 DEBUG : Creating backend with remote "nas:/Backups/Duplicati/redacted-folder-name/duplicati-20170927T234403Z.dlist.zip.aes"
2025/11/04 11:27:41 DEBUG : Using config file from "/root/.config/rclone/rclone.conf"
2025/11/04 11:27:41 DEBUG : smb://rclone@nas:445/Backups/Duplicati/redacted-folder-name: Using root directory "Backups/Duplicati/redacted-folder-name"
2025/11/04 11:27:41 DEBUG : fs cache: renaming child cache item "nas:/Backups/Duplicati/redacted-folder-name/duplicati-20170927T234403Z.dlist.zip.aes" to be canonical for parent "nas:Backups/Duplicati/redacted-folder-name"
2025/11/04 11:27:41 DEBUG : Creating backend with remote "aws-rclone:/redacted-s3-bucket-name/Duplicati/redacted-folder-name"
2025/11/04 11:27:41 DEBUG : fs cache: renaming cache item "aws-rclone:/redacted-s3-bucket-name/Duplicati/redacted-folder-name" to be canonical "aws-rclone:redacted-s3-bucket-name/Duplicati/redacted-folder-name"
2025/11/04 11:27:41 DEBUG : duplicati-20170927T234403Z.dlist.zip.aes: Size and modification time the same (differ by 0s, within tolerance 1ms)
2025/11/04 11:27:41 DEBUG : duplicati-20170927T234403Z.dlist.zip.aes: Unchanged skipping
2025/11/04 11:27:41 NOTICE: 
Transferred:              0 B / 0 B, -, 0 B/s, ETA -
Elapsed time:         0.0s

2025/11/04 11:27:41 DEBUG : 8 go routines active
2025/11/04 11:27:41 DEBUG : smb://rclone@nas:445/Backups/Duplicati/redacted-folder-name: Closing 1 unused connections

There are 589 files in this folder, and rclone wants to xfer 501 of them due to the modification timestamp being more than 1ms out of sync. The amount out of sync varies wildly:

Modification times differ by 19522h56m28s: 2019-10-26 21:23:36 -0700 PDT, 2022-01-17 15:20:04 +0000 UTC
Modification times differ by 15588h17m38s: 2020-04-07 20:02:26 -0700 PDT, 2022-01-17 15:20:04 +0000 UTC
Modification times differ by 3.6994964s: 2025-10-31 00:00:25.3005036 -0700 PDT, 2025-10-31 07:00:29 +0000 UTC
Modification times differ by 1.5182517s: 2025-11-03 13:00:08.4817483 -0800 PST, 2025-11-03 21:00:10 +0000 UTC
Modification times differ by 3218h18m58s: 2021-09-05 06:01:06 -0700 PDT, 2022-01-17 15:20:04 +0000 UTC
Modification times differ by 7976h18m57s: 2021-02-18 23:01:07 -0800 PST, 2022-01-17 15:20:04 +0000 UTC
Modification times differ by 1.1254861s: 2025-11-03 01:00:11.8745139 -0800 PST, 2025-11-03 09:00:13 +0000 UTC
Modification times differ by 1.5920738s: 2025-11-03 23:00:13.4079262 -0800 PST, 2025-11-04 07:00:15 +0000 UTC
Modification times differ by 2602h19m28s: 2021-09-30 22:00:37 -0700 PDT, 2022-01-17 15:20:05 +0000 UTC
Modification times differ by 1.3693084s: 2025-10-29 12:00:20.6306916 -0700 PDT, 2025-10-29 19:00:22 +0000 UTC
Modification times differ by 19522h56m29s: 2019-10-26 21:23:36 -0700 PDT, 2022-01-17 15:20:05 +0000 UTC
Modification times differ by 1348h17m1.944625s: 2021-11-22 03:03:03.055375 -0800 PST, 2022-01-17 15:20:05 +0000 UTC
Modification times differ by 1.6245446s: 2025-07-22 22:24:54.3754554 -0700 PDT, 2025-07-23 05:24:56 +0000 UTC
Modification times differ by 844.4459ms: 2025-10-29 00:00:24.1555541 -0700 PDT, 2025-10-29 07:00:25 +0000 UTC
Modification times differ by 1.5546133s: 2025-11-03 09:00:16.4453867 -0800 PST, 2025-11-03 17:00:18 +0000 UTC

I'll look at the metadata tag more closely, but maybe Synology Cloud Sync adjusted how they tag at some point. I have been using it for almost 8 years...
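If I read the docs right, rclone's comparison tolerance (the "within tolerance 1ms" above) can be widened with --modify-window, which would absorb the near-miss cases that differ by a second or four, though obviously not the ones that are off by hours or years. A sketch against the same folder:

```shell
# treat mtimes within 5s of each other as equal; the entries in
# the log above that differ by hours/years would still transfer
rclone copy --dry-run -vv --modify-window 5s \
    "nas:/Backups/Duplicati/redacted-folder-name" \
    "aws-rclone:/redacted-s3-bucket-name/Duplicati/redacted-folder-name"
```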

yeah, i can see that now.
maybe as a test, try using checksums.

I just realized something. A few years back I switched from B2 to S3, and I almost certainly used rclone to facilitate the transfer. That's why some of the S3 objects have the x-amz-meta-mtime metadata tag, not because Synology Cloud Sync uses it. :person_facepalming:

I don't see any metadata tag for the objects uploaded in the past few years, so I guess Cloud Sync doesn't do it that way at all.
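Per the rclone S3 docs (if I'm reading them right), rclone stores x-amz-meta-mtime as floating-point seconds since the Unix epoch, so a raw tag value can be decoded with GNU date. Using the epoch value for the 2017-09-27 16:46:00 -0700 file above as an example:

```shell
# x-amz-meta-mtime is "seconds-since-epoch(.nanoseconds)"
mtime_meta="1506555960.0"              # example value, derived from the file above
epoch="${mtime_meta%.*}"               # drop the fractional part
date -u -d "@${epoch}" +"%Y-%m-%dT%H:%M:%SZ"   # GNU date
# prints 2017-09-27T23:46:00Z, i.e. 2017-09-27 16:46:00 -0700
```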

Checksum option sounds good, but I need to see if I can get it to work on the SMB side. (I haven't looked at your how-to yet.)

ah ok, now, that solves the puzzle.


what is the total size of all files to be synced?

Looks like about 780MB, or 40% of the total folder size, would need to be transferred. This folder is quite small compared to the others. I definitely don't want to reupload data. I sync several TBs right now with Cloud Sync.

I checked your how-to and that makes sense. One question: will rclone update the tagging on the S3 side so that it doesn't have to verify checksums next time?

thanks.


not sure what you are asking?
do the test yourself and then you will know...


and check out --update and --use-server-modtime
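a rough sketch of those two flags together, same redacted paths as above. --use-server-modtime compares against the S3 upload time instead of reading the mtime tag, and --update skips files that are newer on the destination:

```shell
# no per-object HEAD request for the mtime tag; the destination's
# upload (LastModified) time is used instead, and --update skips
# files whose destination copy is newer than the source
rclone copy --dry-run -vv --update --use-server-modtime \
    "nas:/Backups/Duplicati/redacted-folder-name" \
    "aws-rclone:/redacted-s3-bucket-name/Duplicati/redacted-folder-name"
```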

Fair enough.

The answer is no. Using --checksum it will realize it doesn't have to upload the file again, but there is no effort to update the metadata tag on the remote side. As such, I will have to continue to use the --checksum option going forward, which is less than ideal.

Would be nice if it could somehow just update the metadata tag so that future rclone runs don't need to calculate checksums for the source side.

yeah, calculating checksums takes time and resources.

perhaps a few ideas.

  1. i run rclone on my synbox itself, so check-summing would be local to the nas. no need for the nas: smb remote.
  2. let's say you run rclone once a day:
    to reduce the number of files to be checked, might try --max-age=1d
user99@bnas:~$ rclone version
rclone v1.71.2
- os/version: unknown
- os/kernel: 4.4.180+ (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.25.3
- go/linking: static
- go/tags: none
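the daily --max-age idea from point 2 might look like this, with the same paths from earlier in the thread as placeholders. note --max-age only considers source files modified within the window, so anything older that changed out-of-band would be missed; an occasional full --checksum pass is still a good idea:

```shell
# daily run: only consider source files modified in the last 24h,
# and use checksums instead of mtimes for those
rclone copy -vv --max-age=1d --checksum \
    "nas:/Backups/Duplicati/redacted-folder-name" \
    "aws-rclone:/redacted-s3-bucket-name/Duplicati/redacted-folder-name"
```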