Move local source to different server

What is the problem you are having with rclone?

I'm planning to migrate my NAS from an old single-HDD Synology Diskstation to a Raspberry Pi 4 4GB with an external SSD. This should drastically reduce the total power consumption.

I've been using rclone to back up the ~500GB of data on the NAS (via SMB) to B2 for the past few years.

What is the suggested way of moving the local rclone source to new hardware? I wouldn't want rclone to assume that all files on the new installation need to be re-uploaded to B2. Does rclone store some file hashes in a hidden folder that I should transfer to the new server?

I can't imagine that simply copying the data files via SMB to the new server and also copying the rclone.conf will do the trick; presumably it needs some more magic?

Any help is very much appreciated. Thanks.

Run the command 'rclone version' and share the full output of the command.

rclone v1.64.0
- os/version: unknown
- os/kernel: 4.4.302+ (aarch64)
- os/type: linux
- os/arch: arm64 (ARMv8 compatible)
- go/version: go1.21.1
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

B2

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

[rcloneB2]
type = b2
account = XXX
key = XXX
endpoint =

[rcloneB2_crypt]
type = crypt
remote = rcloneB2:rclone-diskstation-2018
filename_encryption = standard
directory_name_encryption = true
password = XXX
password2 = XXX

It is actually exactly that. Copy the data across via SMB/SFTP/NFS/rsync or whatever works best for you - done. No magic needed.

Nope - unless you used a hasher remote, but I can't see one in your config.
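(For context: a hasher remote wraps another remote and caches checksums locally, and that cache would indeed be worth migrating. A minimal sketch of what such a config section could look like - the remote name and path here are hypothetical, not from your config:)

```ini
# Hypothetical example of a hasher remote - NOT part of the config above.
# It wraps a local path and caches SHA1 checksums so they need not be recomputed.
[local_hashed]
type = hasher
remote = /volume1/data
hashes = sha1
max_age = off
```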

As long as you point the same source root directory at the same destination root directory, nothing will change regarding your B2 backup.
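In other words, assuming the data lands at an equivalent path on the Pi (the path below is hypothetical), the same command you ran on the Diskstation keeps working:

```shell
# Hypothetical mount point on the new Pi - adjust to your setup.
# Same source root, same remote root as before, so nothing gets re-uploaded:
rclone sync /mnt/ssd/data rcloneB2_crypt:
```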

Thanks. Since I assume rclone won't just compare file names and file sizes of the local source and the B2 destination, will it then create hashes of all files and compare them to the hashes stored in the cloud? That might be a long process on a Pi, won't it? Is there any way to prevent this or speed it up? Or is there another trick that allows rclone to quickly compare the contents of all files?

Wrong assumption. Comparing names and sizes is exactly what rclone does by default, plus modtime. You can force a hash comparison if you wish.
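A toy illustration of why the forced hash route is so much slower (sha1sum merely stands in for rclone's hashing; the path is made up): a checksum has to read every byte of every file, while the default check only touches metadata.

```shell
# Toy demo, not rclone itself.
mkdir -p /tmp/hashdemo
printf 'hello\n' > /tmp/hashdemo/a.txt

# Checksum comparison: reads the whole file to compute the hash.
sha1sum /tmp/hashdemo/a.txt

# Default comparison: size + modtime come from metadata, no file read needed.
stat -c '%s %Y' /tmp/hashdemo/a.txt
```

Over 500GB that difference is what you would feel on a Pi.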

From your description, your Synology NAS was no speed demon either. It's difficult to say which one will be faster.

You can use whichever comparison you trust, but if you're not sure what you are doing I suggest you stick to the default (size and modtime).

-c, --checksum     Check for changes with size & checksum (if available, or fallback to size only)
    --ignore-size  Ignore size when skipping; use modtime or checksum
    --size-only    Skip based on size only, not modtime or checksum

You can also always try with --dry-run and experiment with various settings.
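For example (source path hypothetical - substitute your own), you can compare what each strategy would transfer without uploading anything:

```shell
# Nothing is uploaded with --dry-run; rclone only reports what it would do.
rclone sync /mnt/ssd/data rcloneB2_crypt: --dry-run               # default: size + modtime
rclone sync /mnt/ssd/data rcloneB2_crypt: --dry-run --checksum    # force checksum comparison
rclone sync /mnt/ssd/data rcloneB2_crypt: --dry-run --size-only   # quickest: size only
```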

Thanks, I will stick to the default. So whenever rclone syncs, it compares the file sizes and modtimes of all files on the source and destination? I was overcomplicating it then: I thought rclone had a hidden place where hashes for all files (and a hash for each directory based on its contents) were stored, and that a sync would first compare the directory hashes to quickly identify where something had changed. But it seems that comparing all file metadata is a much quicker process than I thought.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.