--max-transfer not that strict

Hi, right now I’m using --max-transfer 749G as part of my uploading scripts and it works like a charm. Nevertheless, I’ve observed that Google doesn’t (ever??) complain immediately about the 750G limit. My understanding is that it only complains after the 750G limit has been reached, and only if you then try to upload new files. In other words, it seems like they let you finish transferring your current files (I guess it’s just a very permissive check, in order to allow files of up to 5TB to be uploaded).

My problem with --max-transfer as it is right now is that most of the time I have three big files at a significant percentage done (let’s say 70% of three 60GB files) when it just stops, so in the end I’ve only transferred something like 630GB.

If my assumptions are right, my suggestion would be to have a flag (--finish-initiated-transfers??) to tell rclone that we want to finish the current transfers but not start new ones, instead of just stopping.
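Just to illustrate what I mean, a hypothetical invocation could look like this (the flag name is made up, of course; the source and remote are the ones from my own setup):

 # Hypothetical flag: stop picking up new files once 749G has been transferred,
 # but let the transfers already in flight run to completion.
 rclone move /local-media jim: --max-transfer 749G --finish-initiated-transfers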

It’s better to just use --bwlimit and let it run over the course of a day; you don’t hit the limit that way.
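To put rough numbers on that (a sketch only; the paths and remote are placeholders, and you should check the arithmetic against your own quota): at --bwlimit 8M you move about 8 MiB/s × 86400 s ≈ 675 GiB in 24 hours, which stays under the 750G/day quota, so the transfer can simply run continuously:

 # Placeholders: substitute your own source path and remote name.
 rclone move /path/to/source yourremote: --bwlimit 8M --transfers 3 -v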

Well, in my case I want these uploads to happen overnight, as they have some impact on my server performance. That’s the reason I’m using --bwlimit 60M, so I upload to my two accounts while nobody is using the server (still leaving half my bandwidth for other apps).

I’ve tried limiting --bwlimit to 8M, but it’s not ideal for me as I can still see a performance impact. I can try again with nice and ionice to see if the performance issues go away.

Anyhow, I still think the suggested flag could be useful in scenarios where you want to ship the data ASAP (for example, a setup across two different servers).

You can always reduce your transfers so you have less waste at the end, if that’s an issue.
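For example (a sketch with the same cap you are using, placeholders for source and remote): with a single transfer slot, the worst case left unfinished at the cutoff is one partial file instead of three:

 rclone move /path/to/source yourremote: --transfers 1 --max-transfer 749G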

The goal of --max-transfer, as I understand it, is to stop at the max and not continue.

I kind of disagree with you; the whole point of using that is to avoid getting API bans, IMHO. If those bans don’t come as long as you don’t try to upload new files after the limit has been reached, why not offer a mechanism to do so?

Regarding the reduction of transfers: yes, I can use 1 transfer and minimise the waste, but it’s still going to be waste that I’m not sure we need to have.

So in this case, you are speaking about a particular backend and a very particular issue. The goal of --max-transfer is broader than the single Google Drive backend, though.

You don’t get ‘bans’ for hitting the limit. You get a 403 rate limit error when you hit your quota for the day:

 Not deleting source as copy failed: googleapi: Error 403:
 User rate limit exceeded., userRateLimitExceeded

You have to think a bit more broadly too, as rclone has no idea if you’ve transferred 100G already, 500G already, or 0G already, since it’s session based.

I don’t use any limits or anything; I let it fail overnight if it does, and just pick up the following night if I have an issue with hitting a quota.

I also do not understand how limiting bw to 8M creates more load on your system than using 60M as that doesn’t make much sense.

If you think it’s a nice feature, by all means open a GitHub issue and put in the feature request.

The flag’s help text also confirms that it is a ‘maximum’:

--max-transfer SizeSuffix                      Maximum size of data to transfer. (default off)

I think this is the issue you are looking for

Comments welcome there 🙂

@ncw I didn’t find that, thanks for linking it

You have to think a bit more broadly too, as rclone has no idea if you’ve transferred 100G already, 500G already, or 0G already, since it’s session based.

Agreed, but of course I’m talking in the context of a single run; if I call it twice I’m not expecting it to work.

I also do not understand how limiting bw to 8M creates more load on your system than using 60M as that doesn’t make much sense.

It doesn’t create more load, but in both cases my disk usage goes higher than 90% and using Plex/Emby becomes impossible.

Anyhow, I’ve been playing around with nice and ionice, and it looks like I can now still use Plex while uploading, even at 60M, so I might move back to 8M and keep it running as you suggest.

By limiting the bandwidth to 8M, the IO jumps up? I’m not sure why that would be the case. Are you using iotop or something to see that? What’s the exact command you were running, and what is the OS?
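If you want to double-check where the disk IO is actually coming from, something like iotop will show it per process (flags as I remember them; check the man page):

 # -o: only show processes currently doing IO, -a: accumulated totals since iotop started
 sudo iotop -o -a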

I’m running rclone (mount and scripts) in a Docker container; the base image is Alpine, with s6-overlay and crontab on top. My host is a Hetzner SB28 from the server auction, running Ubuntu Server 18.04:

1 x Dedicated Root Server SB28
* Intel Core i7-2600
* 2x HDD SATA 3.0 TB Enterprise
* 2x RAM 8192 MB DDR3
* NIC 1 Gbit (Intel 82574L)
* Location: FSN1

The command I’m running is this one:

/usr/sbin/rclone move /local-media jim: \
  --config /config/rclone.conf \
  --log-file /logs/upload-media.log \
  --checkers 3 \
  --fast-list \
  -v \
  --exclude /downloads/** \
  --max-transfer 749G \
  --drive-chunk-size 128M \
  --bwlimit 8M \
  --tpslimit 3 \
  --transfers 3 \
  --delete-empty-src-dirs

The information comes from netdata, so I’m not sure it’s 100% accurate, but every time it shows those numbers the server is unusable, and it only happens while uploading.
Anyhow, I’ve now added nice and ionice to the upload scripts and it looks like they no longer leave Plex unresponsive (even at a 60M bwlimit):

nice -n 19 ionice -c2 -n7
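Put together with the move command above, that ends up looking roughly like this (trimmed to the relevant flags):

 # nice 19 = lowest CPU priority; ionice -c2 -n7 = best-effort class, lowest IO priority
 nice -n 19 ionice -c2 -n7 /usr/sbin/rclone move /local-media jim: \
   --config /config/rclone.conf \
   --bwlimit 60M \
   --max-transfer 749G \
   --transfers 3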
