SFTP/S3 Performance Tuning

I'm looking for suggestions on tuning rclone serve sftp performance on an S3 backend. I'm testing on two different EC2 machines in an AWS VPC (i.e., fast networking) using command lines similar to the following:

  • server: rclone serve sftp s3:bucket-name --read-only
  • client: rclone copy sftp:folder ./folder -P --transfers 10

Copying 30 GB of data from S3 through the SFTP server to local disk works fine, but it is noticeably slower than serving the same data over HTTP (i.e., rclone serve http). Are there any flags I can explore to improve the SFTP performance?
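
For comparison, the HTTP test looked roughly like this (server-host and the port are placeholders, and the client side uses rclone's on-the-fly :http: remote via --http-url rather than a configured remote):

  • server: rclone serve http s3:bucket-name --read-only --addr :8080
  • client: rclone copy --http-url http://server-host:8080 :http:folder ./folder -P --transfers 10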

hello,
sftp can do checksumming of the files it transfers, perhaps that overhead is slowing the transfer?
"SFTP supports checksums if the same login has shell access and md5sum or sha1sum as well as echo are in the remote's PATH."
check the log file to see if checksums are being calculated (example below)
if you do not want to use checksums, try --no-checksum
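
e.g. to confirm in the log whether hashing is the overhead, you could run the server with debug logging, something like this (the log file name is just an example):

  • server: rclone serve sftp s3:bucket-name --read-only -vv --log-file sftp.log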

is there a reason you need to use sftp?
i guess you do not want the client to access the s3 bucket?


Thanks for the suggestion to disable checksums! I'll give that a try.

Since rclone serve http is fast enough, I will likely use that over SFTP, but it seems odd that there is such a large difference in speed. I would still like to offer SFTP as an option for my clients in case that is their preferred interface. Access to the S3 bucket is not a concern, as I will make it read-only - this scenario is purely for copying data out.

Disabling checksums via --no-checksum improved performance by 3-4x. Thanks for the tip!
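
For anyone finding this later, the final commands look roughly like this (I'm assuming --no-checksum goes on the serve side, since it is listed among the VFS flags for rclone serve; bucket and folder names are placeholders):

  • server: rclone serve sftp s3:bucket-name --read-only --no-checksum
  • client: rclone copy sftp:folder ./folder -P --transfers 10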
