I'm planning to migrate my NAS from an old single-HDD Synology DiskStation to a Raspberry Pi 4 (4 GB) with an external SSD. This should drastically reduce total power consumption.
I've been using rclone to back up the ~500GB of data on the NAS/SMB to B2 for the past few years.
What is the suggested way of moving the local rclone source to new hardware? I wouldn't want rclone to assume that all files on the new installation need to be re-uploaded to B2. Does rclone store file hashes in a hidden folder that I should transfer to the new server?
I can't imagine that simply copying the data files via SMB to the new server and also copying the rclone.conf will do the trick — or does it need some more magic?
Any help is very much appreciated. Thanks.
Please run 'rclone version' and share the full output.
Thanks. Since I assume rclone won't just compare file names and file sizes between the local source and B2, it will presumably create hashes of all files and compare them to the hashes stored in the cloud? This might be a long process on a Pi, won't it? Is there any way to prevent this or speed it up? Or is there another trick that allows rclone to quickly compare the contents of all files?
Wrong assumption — comparing file names and sizes (plus modtime) is exactly what rclone does by default. You can force checksum comparison if you wish.
From your description, your Synology NAS was no speed demon either, so it's difficult to say which machine will be faster.
You can use whichever comparison you trust, but if you're not sure what you're doing I suggest you stick to the default (size and modtime).
-c, --checksum Check for changes with size & checksum (if available, or fallback to size only).
--ignore-size Ignore size when skipping; use mod-time or checksum
--size-only Skip based on size only, not mod-time or checksum
You can also always try with --dry-run and experiment with various settings.
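For example (the remote name and paths here are just placeholders — adjust them to your setup):

```shell
# Preview what a default sync (size + modtime comparison) would do,
# without transferring anything:
rclone sync /mnt/ssd/data B2:my-bucket --dry-run -v

# Force a full checksum comparison instead. This hashes every local
# file, which can take a long time for ~500GB on a Pi:
rclone sync /mnt/ssd/data B2:my-bucket --checksum --dry-run -v
```

If the dry-run reports no transfers, the real sync will skip everything too.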
Thanks, I will stick to the default. So whenever rclone syncs, it compares the file sizes and modtimes of all files on source and destination? I was overcomplicating it then — I thought rclone had a hidden place where hashes for all files (and a hash for each directory based on its contents) are stored, and that a sync would first compare the directory hashes to quickly identify where something has changed. But it seems that comparing all the file metadata is a much quicker process than I thought.
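After copying everything over via SMB and moving rclone.conf, I'll make sure the copy preserves modification times (otherwise every file would look changed to rclone) and then verify against B2 before the first real sync — something like this (remote name and path are placeholders for my setup):

```shell
# Compare the new local copy against the bucket without transferring
# anything; --size-only avoids hashing all ~500GB on the Pi:
rclone check /mnt/ssd/data B2:my-bucket --size-only
```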