Estimated transactions/sec with --tpslimit 0?

Forgive my lack of understanding here...

I'm uploading a website to GDrive as a backup, and there are about a million files that are < 4kb to upload. With -P, I keep getting a message mid-transfer that just says "killed."
I haven't searched the docs to find out what that is (I know that would be wise, but this isn't mission critical or urgent).

I'm thinking about just rate limiting the process with --transfers 1 and setting a --tpslimit of some kind.

Trouble is, I can't figure out what a reasonable tpslimit might be. Is there any estimate of how many transactions per second occur with it left at the default of 0 (unlimited)?
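For concreteness, this is the sort of rate-limited invocation I have in mind (the source path and remote name here are just placeholders):

tl;dr sketch, untested:

rclone copy /var/www/site gdrive:backup --transfers 1 --tpslimit 2 -P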

Even better, is there a recommended way to push these up to GDrive efficiently?


A million files is going to be awfully slow with Google Drive, as it does 2-3 files per second at best.

There really isn't much you need to do other than run rclone copy as the defaults handle everything pretty well.
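For reference, a plain copy is all it takes; something like this (source path and remote name are placeholders, and the defaults of --transfers 4 and --checkers 8 are sensible for Drive):

rclone copy /var/www/site gdrive:backup -P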

I'm not sure what you mean by 'killed' as that isn't something from rclone.

I'm guessing you're running out of memory and the OOM killer is terminating rclone.

You may want to tar those files and upload a tar file. Otherwise that will be very slow.

You can pipe tar's output into rclone rcat, which uploads from standard input, to do it in one shot.

Kinda like
tar -zcvf - / --exclude-from=/tmp/exc | rclone rcat remote:/data.tar.gz -v
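And if you ever need the files back, the same streaming approach works in reverse; an untested sketch, reusing the remote path from the command above:

rclone cat remote:/data.tar.gz | tar -zxvf -

That streams the archive down and unpacks it on the fly, so you never need local space for the tarball itself.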

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.