--cutoff-mode=soft not working as expected (I think)

What is the problem you are having with rclone?

Using --cutoff-mode=soft, --max-transfer=100G and --transfers=8. Once the max transfer limit is reached, it seems the transfers are killed before they finish. What I was expecting is that no new transfers would start and the ongoing transfers would be allowed to finish. The issue happens every single time, so it's easy to reproduce.

What is your rclone version (output from rclone version)

rclone v1.53.0

  • os/arch: linux/amd64
  • go version: go1.15

Which OS you are using and how many bits (eg Windows 7, 64 bit)

4.19.0-8-amd64 #1 SMP Debian 4.19.98-1 (2020-01-26) x86_64

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy 'pyplexmove1:Plex/backup/!Finished' 'pyteammove1crypt:Current/backup/!Finished' \
--stats-file-name-length 0 \
--checkers 4 \
--transfers 8 \
--size-only \
--stats=1m \
--max-transfer=100G \
--cutoff-mode=soft

The rclone config contents with secrets removed.

type = drive
client_id = Redacted
client_secret = Redacted
scope = drive
token = Redacted
team_drive =  Redacted
root_folder_id = 
service_account_file =  Redacted 

type = crypt
remote = pyteammove1:encrypt
password =  Redacted
password2 =  Redacted

A log from the command with the -vv flag

This is not a -vv log; it's just an excerpt of the final part of the -v log where the failure happens. I'm working on getting a -vv log.

Transferred:   	  102.506G / 270.556 GBytes, 38%, 79.561 MBytes/s, ETA 36m2s
Transferred:           33 / 71, 46%
Elapsed time:      22m3.6s
 * File 7: 96% /3.906G, 9.943M/s, 14s
 * File 8: 38% /4.219G, 9.624M/s, 4m34s
 * File 1: 35% /4.246G, 9.977M/s, 4m40s
 * File 2: 24% /4.043G, 10.047M/s, 5m12s
 * File 3: 23% /4.125G, 10.232M/s, 5m17s
 * File 4: 21% /3.892G, 10.347M/s, 5m1s
 * File 5:  9% /4.182G, 9.612M/s, 6m42s
 * File 6: 10% /3.974G, 10.719M/s, 5m40s

2020/09/04 09:32:20 INFO  : File 7: Copied (new)
2020/09/04 09:32:20 ERROR : Cancelling sync due to fatal error: Max transfer limit reached as set by --max-transfer
2020/09/04 09:32:20 ERROR : File 5: Failed to copy: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks&supportsAllDrives=true&uploadType=resumable&upload_id=ABg5-UzEf6g4-qLBdmFukaddb4PCTdwANdHOypxJl8W9sZfHaDejqy5McpzeWJL8PV_0Q-_TLjmaC6FWig4m9__TM-0H0VFp2w": context canceled
2020/09/04 09:32:20 ERROR : File 4: Failed to copy: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks&supportsAllDrives=true&uploadType=resumable&upload_id=ABg5-UwyLCdE54srakSqysILlJaw6eUTDLiszc-OI0xDy7OyjnxGNPVz_iLHjceky4UFEZiTAg-oZaebA2Ad9c_P99jAdNIdjQ": context canceled
2020/09/04 09:32:20 ERROR : File 2: Failed to copy: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks&supportsAllDrives=true&uploadType=resumable&upload_id=ABg5-UxWJQ2Q0-L3tZe-LN_4oG5QG-pm0Q5iWzeib1C4rZxMjeCCQIlFeqaLx3G0rdyYGy4mWNrcxBZQOrsRYLY3ioFI8dG4Kg": context canceled
2020/09/04 09:32:20 ERROR : File 6: Failed to copy: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks&supportsAllDrives=true&uploadType=resumable&upload_id=ABg5-UxvQ6i7Sbm6Bb06qSxwuDZlNcveSlxK0sXwBTEN4M2ao2xHNhmLT92mEHkbrlIkwqiOVD-huGCywasLrFkfqomfMMYpsg": context canceled
2020/09/04 09:32:20 ERROR : File 3: Failed to copy: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks&supportsAllDrives=true&uploadType=resumable&upload_id=ABg5-UwQfHEFEiTjJQhVoynxhNRKV-SY-wE1uE2L1kpRL45euE7VanupOwmsiHDTipCil4uyLWseNOqxiDjM14DhoLTWOe2vqQ": context canceled
2020/09/04 09:32:20 ERROR : File 1: Failed to copy: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks&supportsAllDrives=true&uploadType=resumable&upload_id=ABg5-UyCejR6uc_wnmMg-VQLpEco7K2kuhQWZJsr2gAWhGA1KubooZguOLXcn793oDjDCFY6C0oG2knrUg42N77ZTY1Km32Bzw": context canceled
2020/09/04 09:32:20 ERROR : File 8: Failed to copy: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks&supportsAllDrives=true&uploadType=resumable&upload_id=ABg5-UxWhnF-YJwixjLY_m8FfsKNGPTyqfZvrLZC5Xh9jv4H7I8QRVT0RPbAvAg5Ua6lsMMzw_s-OXFKpUlgLj4wWAHD_Bsoqw": context canceled
2020/09/04 09:32:20 ERROR : Fatal error received - not attempting retries
2020/09/04 09:32:20 INFO  : 
Transferred:   	  103.657G / 245.948 GBytes, 42%, 79.546 MBytes/s, ETA 30m31s
Errors:                 9 (fatal error encountered)
Transferred:           34 / 63, 54%
Elapsed time:     22m18.7s

2020/09/04 09:32:20 Failed to copy with 9 errors: last error was: Max transfer limit reached as set by --max-transfer

Right, but I thought --cutoff-mode=soft would make it so that at 100G no new transfers would start and the ongoing transfers would be allowed to finish.

This is the flag info that makes me think it should work like this:


This modifies the behavior of --max-transfer. Defaults to --cutoff-mode=hard.

Specifying --cutoff-mode=hard will stop transferring immediately when Rclone reaches the limit.

Specifying --cutoff-mode=soft will stop starting new transfers when Rclone reaches the limit.

Specifying --cutoff-mode=cautious will try to prevent Rclone from reaching the limit.

I think the --max-transfer flag by itself is equivalent to --max-transfer plus --cutoff-mode=hard, which stops all transfers immediately as you pointed out. Adding --cutoff-mode=soft is intended to change that behavior to let the ongoing transfers finish, or at least that is what I get from the documentation. I'm not sure this is actually the case, though; there might be some other intended function for the --cutoff-mode flag.
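The way I read the documented semantics, the three modes can be modeled as a small decision function. This is an illustrative sketch of my reading, not rclone's actual code — all the names here are made up:

```go
package main

import "fmt"

// CutoffMode mirrors the three documented modes. This whole file is an
// illustrative model of the documented semantics, not rclone's code.
type CutoffMode int

const (
	Hard CutoffMode = iota
	Soft
	Cautious
)

// allowNewTransfer reports whether a new transfer of size bytes may start,
// given bytes already transferred, bytes pending in ongoing transfers,
// and the --max-transfer limit (all in bytes).
func allowNewTransfer(mode CutoffMode, transferred, pending, size, max int64) bool {
	switch mode {
	case Hard, Soft:
		// Both stop starting new transfers once the limit is reached;
		// hard additionally aborts in-flight transfers (not modeled here).
		return transferred < max
	case Cautious:
		// Refuse any transfer that would push the total past the limit.
		return transferred+pending+size < max
	}
	return false
}

func main() {
	const G = int64(1 << 30)
	// At 99G transferred, soft mode still starts a 4G file...
	fmt.Println(allowNewTransfer(Soft, 99*G, 0, 4*G, 100*G))
	// ...but cautious refuses it, since 99G + 4G would exceed 100G.
	fmt.Println(allowNewTransfer(Cautious, 99*G, 0, 4*G, 100*G))
}
```

The key difference between hard and soft in this model is only what happens to transfers already in flight, which is exactly the part the logs above suggest is not working.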

OK, now I understand what you mean.

Can you post a debug log, not just the output from the command line?



I was really excited about cutoff-mode, but it seems to either be missing options or I didn't understand it correctly.

Use case:
Let's say I want to sync a folder (1 TB) on Gdrive containing 50 GB files.
I would run a sync with --max-transfer=750G --transfers 10, for example.

What I notice:

  • --cutoff-mode=hard : stops all transfers when reaching 750G
    ---> because it starts copying a new file as soon as the first one finishes, the amount actually synced could be far less, probably around 500G
  • --cutoff-mode=soft : stops all new transfers when reaching the limit
    ---> using 750G, this will be the same as hard, as the remaining transfers get API errors
    ---> using 500G, I see "Cancelling sync due to fatal error: Max transfer limit reached as set by --max-transfer" after reaching 500G, so I don't see the soft part (same as the OP of this thread)
  • --cutoff-mode=cautious : stops all new transfers before reaching the limit
    ---> using 750G, I'm not sure I got the stop condition there.

What I would expect:

  • --cutoff-mode=hard : stop all transfers when reaching 750G
  • --cutoff-mode=soft : stop all new transfers when reaching a limit (new argument?), stop all transfers when reaching 750G
  • --cutoff-mode=cautious : stop all new transfers that would overshoot the limit, counting the bytes remaining on the other transfers, so it should never reach 750G.
    This mode could have 2 options:
  1. either look for the next file that could fit

  2. or stop looking and just finish the ongoing transfers
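Option 1 (keep scanning the queue for a file that still fits) could be sketched like this — a hypothetical, simplified, sequential model, not rclone code; cautiousPick and the size slice are made-up names:

```go
package main

import "fmt"

// cautiousPick is a hypothetical sketch of option 1: instead of aborting
// on the first file that would overshoot the limit, skip it and keep
// scanning the queue for files that still fit. Sizes in arbitrary units.
func cautiousPick(fileSizes []int64, maxTransfer int64) (copiedSizes []int64) {
	var transferred int64
	for _, size := range fileSizes {
		if transferred+size > maxTransfer {
			continue // too big to fit under the limit - skip, don't abort
		}
		transferred += size // stand-in for the actual copy
		copiedSizes = append(copiedSizes, size)
	}
	return copiedSizes
}

func main() {
	// 50-unit files against a 120-unit limit: two fit, the third is
	// skipped, and a later 20-unit file still gets through.
	fmt.Println(cautiousPick([]int64{50, 50, 50, 20}, 120))
}
```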

    rclone version
    rclone v1.52.3

    • os/arch: linux/amd64
    • go version: go1.14.7


Perhaps @kageurufu can take a peek? :slight_smile:

So for cautious, it's what I thought. If the new transfer plus the remaining transfers exceed the max, it stops everything.

if fs.Config.MaxTransfer >= 0 && (accounting.Stats(ctx).GetBytes() >= int64(fs.Config.MaxTransfer) ||
		(fs.Config.CutoffMode == fs.CutoffModeCautious && accounting.Stats(ctx).GetBytesWithPending()+src.Size() >= int64(fs.Config.MaxTransfer))) {
		return nil, accounting.ErrorMaxTransferLimitReachedFatal
}

Looking at the commit, it seems it was never intended to keep the ongoing transfers, or to look for a file that would satisfy the size limit, even though that seems like a good feature. :wink:

The following seems to match --cutoff-mode=hard:
if fs.Config.MaxTransfer >= 0 && accounting.Stats(ctx).GetBytes() >= int64(fs.Config.MaxTransfer)

but the accounting code has this:

if fs.Config.CutoffMode == fs.CutoffModeHard {
	acc.values.max = int64((fs.Config.MaxTransfer))
}

Maybe it should compare against acc.values.max in the condition instead? :thinking:

I haven't figured out where it spawns new transfers, but it certainly misses a condition there, at least for CutoffMode soft.

PS: I just realised @kageurufu did the PR. Hopefully you guys won't mind if I try to understand. Go is not in my skill set, or I would have gladly helped further :slight_smile:

PS2: https://github.com/rclone/rclone/issues/2672#issuecomment-606281513 might be why the soft cutoff behaves like a hard one when running with several transfers.

To summarize:

  • Issue with cutoff=soft: all in-progress transfers stop after the first one finishes.
  • Issue with cutoff=cautious: all in-progress transfers stop if the next transfer would reach max-transfer, instead of skipping it.
    Neither situation terminates gracefully; both end with an error.

I found a related topic from June:

I looked around a bit more and might have been mistaken.

The hard mode seems to be triggered by:
bytesUntilLimit = acc.values.max - acc.stats.GetBytes()


// of bytes remaining to read.
func (acc *Account) checkReadBefore() (bytesUntilLimit int64, err error) {
	// Check to see if context is cancelled
	if err = acc.ctx.Err(); err != nil {
		return 0, err
	}
	if acc.values.max >= 0 {
		bytesUntilLimit = acc.values.max - acc.stats.GetBytes()
		if bytesUntilLimit < 0 {
			return bytesUntilLimit, ErrorMaxTransferLimitReachedFatal
		}
	} else {
		bytesUntilLimit = 1 << 62
	}
	// Set start time.
	if acc.values.start.IsZero() {
		acc.values.start = time.Now()
	}
	return bytesUntilLimit, nil
}

The incorrect soft mode is enforced by this:
fs.Config.MaxTransfer >= 0 && accounting.Stats(ctx).GetBytes() >= int64(fs.Config.MaxTransfer)

  • fs.Config.MaxTransfer : from the command argument, not the account
  • accounting.Stats(ctx).GetBytes() : bytes from successfully transferred files

If so, instead of returning ErrorMaxTransferLimitReachedFatal, would it be preferable to only drop the new transfer with a message like 'Skipping: max transfer limit already reached'? When there are no more files in the queue, it will stop gracefully once the other files are done. This would also fix cautious, by accepting files that are small enough instead of erroring on the first one.
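The skip-instead-of-abort idea could look roughly like this — a hypothetical, simplified sequential sketch (syncSoft and the file-size slice are made-up names, and real rclone runs --transfers workers concurrently):

```go
package main

import "fmt"

// syncSoft is a hypothetical sketch of the proposal: once the limit is
// reached, skip queueing further files instead of raising a fatal error,
// so the run ends gracefully. Sizes are in arbitrary units.
func syncSoft(fileSizes []int64, maxTransfer int64) (copied, skipped int) {
	var transferred int64
	for _, size := range fileSizes {
		if transferred >= maxTransfer {
			// Instead of ErrorMaxTransferLimitReachedFatal:
			// "Skipping: max transfer limit already reached"
			skipped++
			continue
		}
		transferred += size // stand-in for actually copying the file
		copied++
	}
	return copied, skipped
}

func main() {
	// With a 100-unit limit and four 40-unit files, the third copy
	// crosses the limit mid-flight and only the fourth is skipped.
	fmt.Println(syncSoft([]int64{40, 40, 40, 40}, 100))
}
```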

What do you think?

PS: I haven't figured out why ErrorMaxTransferLimitReachedFatal cancels the other transfers...

@ncw Should I create an issue on GitHub? (Yeah, that's a ping in disguise :blush:)

I have a vague memory that there is a bug in this area but I can't find it, so feel free to make an issue.

This is the bit of code you are looking for...

What that says is that if the transfer limit is hit then it ends with accounting.ErrorMaxTransferLimitReachedFatal - that isn't such a good idea, as it will stop everything immediately, as you've seen.

What we want is a different sort of error which signals to the sync routines that it should no longer queue any new files, or maybe this check should be in the sync routine.
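That distinction could be sketched with two sentinel errors — names here are illustrative, not necessarily what rclone ended up using:

```go
package main

import (
	"errors"
	"fmt"
)

// Two sentinel errors: one means "abort everything", the other means
// "stop queueing new files but let in-flight transfers finish".
// Illustrative names only, not rclone's actual identifiers.
var (
	ErrLimitReachedFatal    = errors.New("max transfer limit reached - aborting")
	ErrLimitReachedGraceful = errors.New("max transfer limit reached - no new transfers")
)

// handle shows how a sync routine could react differently to each error.
func handle(err error) string {
	switch {
	case errors.Is(err, ErrLimitReachedFatal):
		return "cancel context: kill in-flight transfers"
	case errors.Is(err, ErrLimitReachedGraceful):
		return "stop queueing: let in-flight transfers finish"
	default:
		return "retry or report"
	}
}

func main() {
	fmt.Println(handle(ErrLimitReachedGraceful))
}
```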

If you stick it in an issue, I'll have a go at fixing it!

Will do it this evening :slight_smile: Thank you


Were you able to create the issue?

Will be in 1.54. :grinning:

Great. Thanks for taking over. I saw all the testing you did over at GitHub.
