Rclone v1.60.1 stalls and does not exit correctly when using --max-transfer

I am using rclone with these flags:

--delete-empty-src-dirs --max-transfer=100G --cutoff-mode=soft --transfers 1 --checkers 1

However, after the first file has been transferred (450GB), rclone just stalls, repeating the same info over and over:

2022/12/12 09:58:51 INFO  :
Transferred:      455.572 GiB / 473.516 GiB, 96%, 209.381 KiB/s, ETA 1d57m46s
Errors:                 2 (no need to retry)
Checks:                 3 / 3, 100%
Deleted:                1 (files), 0 (dirs)
Renamed:                1
Elapsed time:  21h52m30.6s

I would expect rclone to just exit at this point.

This behaviour appeared after I upgraded to the latest rclone version (v1.60.1); the version I used until recently didn't exhibit the problem. It was probably about a year old, but I overwrote the binary, so I am unable to trace its exact version number.

$ rclone --version
rclone v1.60.1
- os/version: unknown
- os/kernel: 3.10.105 (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.19.3
- go/linking: static
- go/tags: none

I tried to add a --max-duration=3h flag also, but this changed nothing - rclone still just stalls after the first file has been transferred.

Is there an additional flag I need to add to the command line to get --max-transfer and --max-duration to work?

Hi Steffen,

I don't think we can help you or fix a potential bug without getting all the info requested in the forum template.

I am specifically missing the full debug log and the redacted config of the used remote.

Thanks - I was hoping this was a known issue or maybe easy to reproduce.

I will add some debug logging when it happens next; it should happen again in a day or two.

Ok, I have uploaded an example of a full log file here.

https://drive.google.com/file/d/1IrtzAV8b1tl9T9sF1B9vGpAr9P-6FSxa/view

The transfers that stall all share this part, after the first file transfer is done:

2022/12/13 18:17:31 INFO  : 
Transferred:   	  562.432 GiB / 900.175 GiB, 62%, 10.986 MiB/s, ETA 8h44m40s
Checks:                 0 / 1, 0%
Transferred:            0 / 10002, 0%
Elapsed time:   18h21m0.7s
Checking:

Transferring:
 *            file1.tar.gz: 99% /562.455Gi, 10.986Mi/s, 2s

2022/12/13 18:17:31 DEBUG : file1.tar.gz: Sending chunk 603912667136 length 8388608
2022/12/13 18:17:32 DEBUG : file1.tar.gz: Sending chunk 603921055744 length 8388608
2022/12/13 18:17:33 DEBUG : file1.tar.gz: Sending chunk 603929444352 length 1742848
2022/12/13 18:17:37 DEBUG : file1.tar.gz: md5 = 69ebc1605bc1537f340773ab4e8133en OK
2022/12/13 18:17:37 INFO  : file1.tar.gz: Copied (new)
2022/12/13 18:17:41 INFO  : 
Transferred:   	  562.455 GiB / 900.175 GiB, 62%, 6.905 MiB/s, ETA 13h54m42s
Checks:                 0 / 1, 0%
Deleted:                1 (files), 0 (dirs)
Transferred:            1 / 10002, 0%
Elapsed time:  18h21m10.7s
Checking:
 *            file1.tar.gz: checking

2022/12/13 18:17:45 INFO  : file1.tar.gz: Deleted
2022/12/13 18:17:45 ERROR : dir1/file2.tar.gz: Not deleting source as copy failed: max transfer limit reached as set by --max-transfer
2022/12/13 18:17:45 NOTICE: max transfer limit reached as set by --max-transfer - stopping transfers
2022/12/13 18:17:51 INFO  : 
Transferred:   	  562.455 GiB / 900.201 GiB, 62%, 3.621 MiB/s, ETA 1d2h31m40s
Errors:                 2 (no need to retry)
Checks:                 3 / 3, 100%
Deleted:                1 (files), 0 (dirs)
Renamed:                1
Transferred:            1 / 10002, 0%
Elapsed time:  18h21m20.7s

2022/12/13 18:18:01 INFO  : 
Transferred:   	  562.455 GiB / 900.201 GiB, 62%, 1.899 MiB/s, ETA 2d2h34m52s
Errors:                 2 (no need to retry)
Checks:                 3 / 3, 100%
Deleted:                1 (files), 0 (dirs)
Renamed:                1
Transferred:            1 / 10002, 0%
Elapsed time:  18h21m30.7s

So basically, rclone appears to have the (correct) intention of stopping the remaining transfers:

2022/12/13 18:17:45 NOTICE: max transfer limit reached as set by --max-transfer - stopping transfers

But, for some reason, it never actually stops the transfers or ends the session.

You've asked it specifically not to stop transfers once it hits the limit.

Specifying --cutoff-mode=soft tells rclone to stop starting new transfers once it reaches the limit, while letting the in-flight transfer finish.
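For what it's worth, the soft-cutoff semantics can be sketched roughly like this (a simplified Python illustration, not rclone's actual implementation; all names here are made up):

```python
# Simplified sketch of --cutoff-mode=soft semantics (not rclone's real code).
# Once the transferred-byte total reaches the limit, the file already in
# flight is allowed to finish, but no new transfers are started.

def soft_cutoff_copy(files, max_transfer):
    """files: list of (name, size) tuples; returns the names actually copied."""
    transferred = 0
    copied = []
    for name, size in files:
        if transferred >= max_transfer:
            break  # soft mode: stop *starting* new transfers
        # The in-flight file completes even if it pushes the total past
        # the limit (unlike --cutoff-mode=hard, which aborts mid-file).
        transferred += size
        copied.append(name)
    return copied

# Example: 100 GiB limit; a 450 GiB first file still completes.
GIB = 1024 ** 3
result = soft_cutoff_copy([("file1", 450 * GIB), ("file2", 10 * GIB)], 100 * GIB)
print(result)  # only "file1" is copied; "file2" is never started
```

That matches the behaviour in the log above: file1 finishes (and blows past 100G), then file2 is refused with "max transfer limit reached".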

Thanks, Animosity022 - yep, I have asked rclone specifically to complete the ongoing transfer gracefully (which it does), and then stop starting new transfers (which it also does).

So the problem I have left is that rclone stalls forever at this point (simply starts doing nothing) instead of just exiting as expected.

It's checksumming the file to validate the md5sums match.

You'll see Disk IO going on.

Nope, no activity at all. The disk is quiet after the file is deleted at 18:17:45.

2022/12/13 18:17:45 INFO : file1.tar.gz: Deleted

What's your rclone.conf look like minus the secrets/keys? Basically what Ole asked for earlier would be helpful.

Guessing it's a Google Drive remote based on the name.

My small test case works fine.

felix@gemini:~/test$ rclone copy /home/felix/test GD:blah2 -vvv --max-transfer=1b --cutoff-mode=soft --transfers 1
2022/12/13 14:36:58 DEBUG : Setting --config "/opt/rclone/rclone.conf" from environment variable RCLONE_CONFIG="/opt/rclone/rclone.conf"
2022/12/13 14:36:58 DEBUG : rclone: Version "v1.60.1" starting with parameters ["rclone" "copy" "/home/felix/test" "GD:blah2" "-vvv" "--max-transfer=1b" "--cutoff-mode=soft" "--transfers" "1"]
2022/12/13 14:36:58 DEBUG : Creating backend with remote "/home/felix/test"
2022/12/13 14:36:58 DEBUG : Using config file from "/opt/rclone/rclone.conf"
2022/12/13 14:36:58 DEBUG : Creating backend with remote "GD:blah2"
2022/12/13 14:36:58 DEBUG : Google drive root 'blah2': Waiting for checks to finish
2022/12/13 14:36:58 DEBUG : Google drive root 'blah2': Waiting for transfers to finish
2022/12/13 14:37:00 DEBUG : hosts: md5 = f0b78c96a144844b9258ea1f50f0e940 OK
2022/12/13 14:37:00 INFO  : hosts: Copied (new)
2022/12/13 14:37:00 NOTICE: max transfer limit reached as set by --max-transfer - stopping transfers
2022/12/13 14:37:00 ERROR : Can't retry any of the errors - not attempting retries
2022/12/13 14:37:00 INFO  :
Transferred:   	        545 B / 1.064 KiB, 50%, 544 B/s, ETA 1s
Errors:                 2 (no need to retry)
Transferred:            1 / 3, 33%
Elapsed time:         2.1s

2022/12/13 14:37:00 DEBUG : 5 go routines active
2022/12/13 14:37:00 Failed to copy with 2 errors: last error was: max transfer limit reached as set by --max-transfer

I guess the config file is as plain as they come:

[google_drive_dest]
type = drive
scope = drive
service_account_file = /path/file.json
team_drive = SOMELETTERSANDNUMBERS

Thanks for testing - I cannot recreate the problem for just small files either:

$ mkdir a
$ truncate -s 5M a/5MB.file
$ mkdir a/a
$ truncate -s 5M a/a/5MB.file
$ truncate -s 5M a/a/10MB.file
$ rclone --config rclone.conf move a google_drive_dest:incoming "--delete-empty-src-dirs" "--max-transfer=1M" "--cutoff-mode=soft" "--transfers" "1" "--checkers" "1" "--stats" "10s" "-vvv" "--bwlimit" "11M" "--copy-links"
2022/12/13 20:46:14 INFO  : Starting bandwidth limiter at 11Mi Byte/s
2022/12/13 20:46:14 DEBUG : rclone: Version "v1.60.1" starting with parameters ["rclone" "--config" "rclone.conf" "move" "a" "google_drive_dest:incoming" "--delete-empty-src-dirs" "--max-transfer=1M" "--cutoff-mode=soft" "--transfers" "1" "--checkers" "1" "--stats" "10s" "-vvv" "--bwlimit" "11M" "--copy-links"]
2022/12/13 20:46:14 DEBUG : Creating backend with remote "a"
2022/12/13 20:46:14 DEBUG : Using config file from "rclone.conf"
2022/12/13 20:46:14 DEBUG : local: detected overridden config - adding "{12rtk}" suffix to name
2022/12/13 20:46:14 DEBUG : fs cache: renaming cache item "a" to be canonical "local{12rtk}:/source/a"
2022/12/13 20:46:14 DEBUG : Creating backend with remote "google_drive_dest:incoming"
2022/12/13 20:46:15 DEBUG : Google drive root 'incoming': Waiting for checks to finish
2022/12/13 20:46:15 DEBUG : Google drive root 'incoming': Waiting for transfers to finish
2022/12/13 20:46:19 DEBUG : 5MB.file: md5 = 5f363e0e58a95f06cbe9bbc662c5dfb6 OK
2022/12/13 20:46:19 INFO  : 5MB.file: Copied (new)
2022/12/13 20:46:19 INFO  : 5MB.file: Deleted
2022/12/13 20:46:19 ERROR : a/5MB.file: Not deleting source as copy failed: max transfer limit reached as set by --max-transfer
2022/12/13 20:46:19 NOTICE: max transfer limit reached as set by --max-transfer - stopping transfers
2022/12/13 20:46:19 ERROR : Local file system at /source/a: not deleting directories as there were IO errors
2022/12/13 20:46:19 ERROR : Attempt 1/3 failed with 3 errors and: not deleting directories as there were IO errors
2022/12/13 20:46:20 DEBUG : Google drive root 'incoming': Waiting for checks to finish
2022/12/13 20:46:20 DEBUG : Google drive root 'incoming': Waiting for transfers to finish
2022/12/13 20:46:20 ERROR : a/5MB.file: Not deleting source as copy failed: max transfer limit reached as set by --max-transfer
2022/12/13 20:46:20 NOTICE: max transfer limit reached as set by --max-transfer - stopping transfers
2022/12/13 20:46:20 ERROR : Local file system at /source/a: not deleting directories as there were IO errors
2022/12/13 20:46:20 ERROR : Attempt 2/3 failed with 3 errors and: not deleting directories as there were IO errors
2022/12/13 20:46:20 DEBUG : Google drive root 'incoming': Waiting for checks to finish
2022/12/13 20:46:20 DEBUG : Google drive root 'incoming': Waiting for transfers to finish
2022/12/13 20:46:20 ERROR : a/5MB.file: Not deleting source as copy failed: max transfer limit reached as set by --max-transfer
2022/12/13 20:46:20 NOTICE: max transfer limit reached as set by --max-transfer - stopping transfers
2022/12/13 20:46:20 ERROR : Local file system at /source/a: not deleting directories as there were IO errors
2022/12/13 20:46:20 ERROR : Attempt 3/3 failed with 3 errors and: not deleting directories as there were IO errors
2022/12/13 20:46:20 INFO  :
Transferred:            5 MiB / 5 MiB, 100%, 1.250 MiB/s, ETA 0s
Errors:                 3 (retrying may help)
Checks:                 5 / 5, 100%
Deleted:                1 (files), 0 (dirs)
Renamed:                1
Transferred:            1 / 1, 100%
Elapsed time:         6.0s

2022/12/13 20:46:20 DEBUG : 9 go routines active
2022/12/13 20:46:20 Failed to move with 3 errors: last error was: not deleting directories as there were IO errors
$

A difference I notice is that the part with the IO errors is not seen in the long-running rclone transfer that stalls:

2022/12/13 20:46:19 ERROR : Local file system at /source/a: not deleting directories as there were IO errors
2022/12/13 20:46:19 ERROR : Attempt 1/3 failed with 3 errors and: not deleting directories as there were IO errors

Once the file is there and you aren't using a crypt remote, you should be able to pull the same md5sum from Drive.

rclone md5sum GD:hosts
f0b78c96a144844b9258ea1f50f0e940  hosts

I'd be curious whether Drive is doing something on its side that takes a long time and rclone is waiting on it. I don't really use Google Drive anymore, nor have I ever transferred files that big there.

Ah, you mean rclone could be waiting for something to happen at the server side (Google)?

I wonder if there is perhaps a flag to disable this waiting-for-something, then?

At this point rclone already deleted the file locally anyway, so I fail to see the point in waiting around any further.

Perhaps - seems like an odd / longshot imo.

Assuming you care about the data, I don't think so. You can:

      --ignore-checksum                      Skip post copy check of checksums

Hmmm, that's a good point - if it deleted the file, I'd assume the checksum had already been confirmed, but I'm not 100% sure. That's probably something to flesh out as well.

What's the goal with --max-transfer? If I had to guess, I'd imagine there is something wonky with the soft cutoff mode and the items in flight, as that's been changed around in the last few releases.

I find some comfort in the fact that this seems to affect large files only, although in a so-far unexplained way.

I probably won't be pursuing this any further - thanks for your help! :slight_smile:

Ah, it is related to the 750GB maximum daily upload quota per user that Google is enforcing.

I just want a user to transfer files from a set of different folders during a day, and this parameter is a nice way of preventing rclone from getting "stuck" transferring files from one folder only.
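A rough sketch of that daily-quota batching idea (hypothetical Python, just to illustrate the arithmetic; actual uploads would still be done by rclone, with --max-transfer / --cutoff-mode=soft as a safety net):

```python
# Hypothetical planner: pick folders for today's run so the batch stays
# under Google Drive's 750 GiB/day per-user upload quota. This only does
# the size arithmetic; it does not perform any transfers.

GIB = 1024 ** 3
DAILY_QUOTA = 750 * GIB

def plan_daily_batch(folder_sizes, budget=DAILY_QUOTA):
    """folder_sizes: list of (folder, bytes); returns (folders for today, bytes used)."""
    batch, used = [], 0
    for folder, size in folder_sizes:
        if used + size > budget:
            continue  # defer this folder to a later day
        batch.append(folder)
        used += size
    return batch, used

folders = [("a", 400 * GIB), ("b", 500 * GIB), ("c", 300 * GIB)]
batch, used = plan_daily_batch(folders)
print(batch, used // GIB)  # ['a', 'c'] 700
```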

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.