Slow speed transfer FTP to S3


#1

I am performing a sync from an FTP source to an S3 target with the parameters below, but the transfer rate is very low, below 3 Mb/s. Could you help me?

Command applied:

/usr/bin/rclone sync -uv --timeout=600s --drive-chunk-size=512M --drive-upload-cutoff=1G --buffer-size=1G --ignore-existing --transfers=10 --checkers=64 --ignore-size --progress source: destination:bucket-1502971486 --log-file /var/log/rclone/sync_storage-PROD.log

Rclone version
rclone v1.45

os/arch: linux/amd64
go version: go1.11.2

Client OS
Debian GNU/Linux 7

Thank you


#2

I see you made an issue about this too.

I’ll reply here and close the issue.

I am performing a sync from an FTP source to an S3 target with the parameters below, but the transfer rate is very low, below 3 Mb/s. Could you help me?

How are you measuring the transfer?

Rclone normally measures transfers in megabytes per second (MB/s), not megabits.
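The distinction matters when comparing figures: a rate reported in megabits is 8× smaller than the same number in megabytes. A quick sanity-check of the conversion (the 24 Mbit/s figure here is just an illustrative value, not from the thread):

```shell
# Convert megabits/s to megabytes/s (1 byte = 8 bits).
mbit=24                  # example link speed in Mbit/s
mbyte=$((mbit / 8))      # same speed in MB/s
echo "${mbit} Mbit/s = ${mbyte} MB/s"
```

So a tool showing "3 MB/s" is actually moving about 24 Mbit/s on the wire.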

Try transferring from the FTP server to your local disk - what sort of speed do you get there?
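One way to test that, sketched with the `source:` remote name from your original command (the local destination path is just an example):

```shell
# Copy a sample of files from the FTP remote to local disk and watch
# the rate --progress reports; this isolates the FTP read speed.
rclone copy source: /tmp/rclone-speedtest --progress
```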

--drive-chunk-size=512M --drive-upload-cutoff=1G

Remove these options as you aren’t using the google drive backend

--buffer-size=1G

This seems excessive? 1 GB of memory per transfer? I’d remove it completely, as the default of 16 MB works quite well.
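Putting both suggestions together, a trimmed version of the original command might look like this (a sketch; remote names, bucket, and log path are copied from the original post):

```shell
# Same sync, with the Google Drive-only flags and the oversized
# buffer removed; buffer-size falls back to the 16M default.
/usr/bin/rclone sync -uv --timeout=600s --ignore-existing \
  --transfers=10 --checkers=64 --ignore-size --progress \
  source: destination:bucket-1502971486 \
  --log-file /var/log/rclone/sync_storage-PROD.log
```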


#3

How are you measuring the transfer?
10 TB.

I would like to know if we can force the transfer rate to a higher value. Here is the adjustment made in the script/command:

/usr/bin/rclone sync -uv --timeout=600s --ignore-existing --transfers=20 --checkers=6 --ignore-size --progress source: destination:bucket --log-file /var/log/rclone/sync_storage-PROD.log

Best Regards


#4

rclone is going to attempt to transfer as fast as your source can provide the data and the destination can accept it. Unless you’ve deliberately limited rclone it will be attempting at the highest rate per connection. You may want to check the bandwidth from your client to both the S3 and the ftp source. Your client will download locally while uploading to the S3 bucket so the bottleneck is somewhere there.