Checksumming locally rather than remotely?

Here's my scenario:
• Synology NAS. It has a slow CPU, and only 2 GB of RAM but tens of TB of data to be copied to B2.
• Linux machine with a fast CPU, lots of RAM, and a fast SSD.

Using rclone with B2 as a remote on the Synology works, but checksumming is quite slow due to the limited CPU power.

On the Linux machine, I tried using the Synology as an sftp remote (with --sftp-path-override) and B2 as another remote, and it worked functionally, but the sha1sum process on the Synology was still the bottleneck.

Is there an rclone configuration where I can copy files in whole from the Synology to the Linux server, then perform the checksumming on the Linux machine which is far more powerful and decide if they need to be uploaded or not?
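For reference, the setup described above might look roughly like this (remote names, hostnames, and paths are assumptions, not the poster's actual config):

```shell
# Hypothetical rclone.conf on the Linux machine:
#
# [syno]
# type = sftp
# host = nas.local
# user = admin
#
# [b2]
# type = b2
# account = <account id>
# key = <application key>

# Run on the Linux machine. --sftp-path-override tells rclone the real
# filesystem path on the NAS so it can shell out to sha1sum there --
# which is exactly the step that pegs the Synology's CPU:
rclone copy syno:/volume1/data b2:my-bucket \
    --sftp-path-override /volume1/data
```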

You can force --checksum (assuming checksums are available for comparison on your target destination or can be calculated locally). If you do this, the checksumming will be done on the machine that runs rclone.

So I suppose if you store on the NAS but want the Linux machine to checksum for you, you could have it be the uploading conduit. Of course the NAS still has to read the files. This is IO intensive and unavoidable, but it's not CPU intensive.
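One way to read this suggestion, sketched below: mount the NAS share on the Linux machine so rclone sees the data as a local path. rclone then runs entirely on the Linux box, hashing files with its fast CPU as it reads them over the network, and compares against the SHA-1 that B2 stores for each file. The mount point, share path, and bucket name here are assumptions:

```shell
# Mount the NAS share on the Linux machine (NFS shown; SMB works too):
sudo mount -t nfs nas.local:/volume1/data /mnt/nas

# With --checksum, rclone hashes each file locally on the Linux machine
# and uploads only files whose SHA-1 differs from what B2 already has.
# The NAS only serves file reads (IO, not CPU):
rclone copy /mnt/nas b2:my-bucket --checksum
```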

hello and welcome to the forum,
@thestigma has a good suggestion, that could work.

if your goal is to upload the data on the synology NAS to B2, the NAS itself has native support for B2.

Rclone by default will use size/modtime comparisons rather than checksums to work out if a file needs to be uploaded or not.

With a recent version of rclone these work fine on B2. I wonder why you aren't using that?
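For contrast, a sketch of that default behaviour, run directly on the Synology (the bucket name and path are assumptions):

```shell
# Default: rclone compares size + modification time only, so no sha1sum
# process ever runs on the NAS and the slow CPU is not the bottleneck:
rclone copy /volume1/data b2:my-bucket
```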
