How to prevent rclone transfers to Google Workspace from crashing ruTorrent?

FeralHosting, my seedbox provider, recently changed their kernel, and rclone transfers to Google Workspace now generate so many IO requests that running even 3 simultaneous file transfers crashes ruTorrent and stops it from downloading anything. Setting a bandwidth limit of 60 MB/s makes no difference. The only workaround I've found is setting transfers to 1, so only one file transfers at a time. But that caps the transfer speed at around 36 MB/s, and the whole transfer takes 10 hours instead of the 4 hours it takes with 3 files at once.

Are there any other settings I can use to limit the IO strain of rclone so I can run 2-3 transfers at once? I found the two settings below, but are there others?

  1. --tpslimit: I think the default for --tpslimit is 10 transactions per second, meaning that by default rclone would make up to 10 API requests per second to the cloud storage provider.

  2. --drive-chunk-size: I think the default for --drive-chunk-size is 8M (8 megabytes), meaning that by default rclone would upload files to Google Drive in 8-megabyte chunks.

What should I change the above settings to?
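For reference, a reduced-IO invocation might look like the sketch below. The remote name `gdrive:` and both paths are placeholders, and the flag values are illustrative starting points to tune, not settings confirmed anywhere in this thread:

```shell
# Placeholders: the "gdrive:" remote and both paths are hypothetical.
# The flags aim to cut request pressure: fewer parallel transfers,
# a hard cap on API transactions per second, larger Drive upload
# chunks (fewer HTTP requests per file), and a bandwidth ceiling.
CMD="rclone copy /path/to/finished gdrive:seedbox \
  --transfers 2 --tpslimit 4 --drive-chunk-size 64M --bwlimit 30M -P"
echo "$CMD"   # printed here for review; run it once the flags look right
```

Larger chunks trade memory for fewer requests (each transfer buffers one chunk in RAM), so on a shared seedbox there is a ceiling on how far --drive-chunk-size can be pushed.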

why not run a few simple tests on your own: copy a single file, changing the flags and values.
you'll find the optimum sweet spot in a few minutes.

how do you know that this is the exact issue, rather than feralhosting throttling rclone connections?
is that documented somewhere?

They told me that's what happened and there's no way for them to fix it.

how are you transferring the files?

command line terminal
or
https://forum.rclone.org/t/how-to-move-a-downloaded-torrent-to-the-cloud/23493

in any event, as i suggested,
take a single source file, transfer it to gdrive, and change the flags and values
until you find the sweet spot.
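One way to script that experiment is sketched below. The test file path and `gdrive:` remote are placeholders; this just prints a timing command per candidate chunk size, to be pasted in and compared one at a time:

```shell
# Generate one timing command per candidate chunk size.
# Path and remote name are placeholders; adjust to your setup.
CMDS=$(for chunk in 16M 32M 64M 128M; do
  echo "time rclone copyto /path/to/testfile gdrive:bench/test-$chunk --transfers 1 --drive-chunk-size $chunk"
done)
printf '%s\n' "$CMDS"
```

The same loop idea extends to --tpslimit values once a good chunk size is found; varying one flag at a time keeps the comparison meaningful.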

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.