--max-duration and --retries not working correctly

What is the problem you are having with rclone?

I created a Task Scheduler job that opens a .bat file and runs at 02:00 AM:

cd "C:\Users\Administrator\Desktop\Rclone 1.57\"
start rclone sync X gdrivecrypt:Y --fast-list --check-first --checkers=10 --order-by modtime,ascending --bwlimit 2M --log-file=HD5.txt --log-level DEBUG --max-duration=6h -P --retries-sleep=30m --cutoff-mode=soft

At 08:00 AM rclone should cut off the current transfers in soft mode and then just stop (i.e. wait for the in-progress transfers to finish, then exit). I'm also using --retries-sleep=30m because sometimes my internet disconnects, and when that happens I want rclone to retry the entire sync.

But what is happening is that rclone logs "context deadline exceeded" as an ERROR for the in-progress transfers, and then, 30 minutes later (the retry sleep time), because there was an ERROR, rclone starts the sync again with a fresh 6h deadline.

I want --retries to stop treating --max-duration + --cutoff-mode=soft as an ERROR, to break this endless loop of ERROR + new deadline.

Many Thanks!

What is your rclone version (output from rclone version)

rclone v1.57.0

  • os/version: Microsoft Windows Server 2019 Standard 1809 (64 bit)
  • os/kernel: 10.0.17763.2300 (x86_64)
  • os/type: windows
  • os/arch: amd64
  • go/version: go1.17.2
  • go/linking: dynamic
  • go/tags: cmount

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone sync X gdrivecrypt:Y --fast-list --check-first --checkers=10 --order-by modtime,ascending --bwlimit 2M --log-file=HD5.txt --log-level DEBUG --max-duration=6h -P --retries-sleep=30m --cutoff-mode=soft

A log from the command with the -vv flag

2021/11/26 08:00:04 ERROR : Filmes/Filmes Internacionais/The Conjuring 2 (2016)/The Conjuring 2 1080p BluRay DD5.1 x264-HDMaNiAcS.mkv: Failed to copy: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks&supportsAllDrives=true&uploadType=resumable&upload_id=ADPycduNbDTL0pK5-7QIsGSlTYmdcxvdFsM9voM0s5mfqMHlFknFdRfH2_k52Dc6L6jnh8QkhhlRtVHqvfOpwB76zws": context deadline exceeded
2021/11/26 08:00:04 ERROR : Filmes/Filmes Internacionais/Batman v Superman Dawn of Justice (2016)/Batman.v.Superman.Dawn.of.Justice.2016.Extended.1080p.BluRay.DD5.1.x264-HDMaNiAcS.mkv: Failed to copy: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks&supportsAllDrives=true&uploadType=resumable&upload_id=ADPycdvtR7FZ2WEXk0VA6DZFGb7xW_FYS_NyP2WT-sRsoCUraQKH8A1OS9Idqhg21TG3F8L9VWc5NdDteRk6H0goQdQ": context deadline exceeded
2021/11/26 08:00:04 ERROR : Filmes/Filmes Internacionais/Equals (2015)/Equals.2015.1080p.BluRay.DTS.x264-HDMaNiAcS.mkv: Failed to copy: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks&supportsAllDrives=true&uploadType=resumable&upload_id=ADPycdv57aYwPmo6eJmtFE7jELSJ5JCYAncwidh79n_vlTEDTWMMIgm4iLZ52ordZNO56Mcg5CsTxsE4MadK3XCfz0g": context deadline exceeded
2021/11/26 08:00:04 ERROR : Filmes/Filmes Internacionais/Captain America Civil War (2016)/Captain.America.Civil.War.2016.1080p.BluRay.DTS.x264-HDMaNiAcS.mkv: Failed to copy: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks&supportsAllDrives=true&uploadType=resumable&upload_id=ADPycdtvLclPI6OD5MnCWGe4yuWvsBx6ODV4kFgvSabqXnNH2KVqKx5uttJ-ohBWADC8Xqua6ebQXWeznhuBY5_vBkM": context deadline exceeded
2021/11/26 08:00:04 ERROR : Encrypted drive 'gdrivecrypt:HD5': not deleting files as there were IO errors
2021/11/26 08:00:04 ERROR : Encrypted drive 'gdrivecrypt:HD5': not deleting directories as there were IO errors
2021/11/26 08:00:04 ERROR : Attempt 1/3 failed with 4 errors and: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks&supportsAllDrives=true&uploadType=resumable&upload_id=ADPycdtvLclPI6OD5MnCWGe4yuWvsBx6ODV4kFgvSabqXnNH2KVqKx5uttJ-ohBWADC8Xqua6ebQXWeznhuBY5_vBkM": context deadline exceeded
2021/11/26 08:30:04 INFO  : Encrypted drive 'gdrivecrypt:HD5': Running all checks before starting transfers
2021/11/26 08:30:04 INFO  : Encrypted drive 'gdrivecrypt:HD5': Transfer session deadline: 2021/11/26 14:30:04
2021/11/26 08:30:05 DEBUG : Creating backend with remote "gdrive:XXXXX"

Can you post a full log and not just a snippet? I can only get a small glimpse of what's going on.

Sure. I had to use another website to post the log because pastebin wouldn't let me, for some unknown reason.

https://controlc.com/5408fff4

I removed some portions because the total size was 15MB.

I'm getting a lot of low level retries because I'm syncing 5 hard disks at the same time (the log is for just 1 of them). Maybe Google Drive has a limit on API requests?

thanks!

That's also why we ask for an rclone.conf, as I don't know if you have your own client ID/secret.

Check your Quota page to see what your limit of hits per 100 seconds is, as it's different for different folks. If you aren't using your own client ID and secret, you want to do that.
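For reference, a drive remote with its own credentials looks roughly like this in rclone.conf (placeholder values here; your crypt remote layered on top stays unchanged):

[gdrive]
type = drive
client_id = 123456789.apps.googleusercontent.com
client_secret = your-client-secret
scope = drive
token = {"access_token":"...","expiry":"..."}

rclone config will fill in client_id/client_secret for you when you create or edit the remote.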

The log seems to just stop before it's finished, so that's only a partial log again :frowning:

Perhaps just pick one directory and run that end to end, and share the full log if that recreates your issue.

Maybe you don't want rclone to do a high level retry at all, so set --retries 1? (The number is how many total attempts, so it's 1 rather than 0!)
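For example, your command from above with that one change (--retries-sleep dropped, since it no longer applies with a single attempt):

rclone sync X gdrivecrypt:Y --fast-list --check-first --checkers=10 --order-by modtime,ascending --bwlimit 2M --log-file=HD5.txt --log-level DEBUG --max-duration=6h -P --retries 1 --cutoff-mode=soft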

--retries 1 probably will fix this problem, but it kills the possibility of a high level retry if my internet disconnects, so it fixes one thing and breaks another.

I just want rclone not to treat the still-running transfers as an error when --max-duration and --cutoff-mode=soft are used, triggering --retries. I want a --retries attempt only when there is a legitimate error.

I will post a full log like Animosity022 asked, with a short transfer/time, to see if it helps.

Thanks!

Full log:

rclone: Version "v1.57.0" starting with parameters ["rclone" "sync" "J:\PoolPart.ccce5054-e57c-4e3f-97db-f5132c7e2df7\Plexflix\Filmes\Filmes Internacionais\A Writer's Odyssey (2021)" "gdrivecrypt:HD6\Filmes\Filmes Internacionais\A Writer's Odyssey (2021)" "--fast-list" "--check-first" "--checkers=10" "--order-by" "modtime,ascending" "--bwlimit" "5M" "--log-file=HDTeste.txt" "--log-level" "DEBUG" "--max-duration=10m" "-P" "--retries-sleep=20m" "--cutoff-mode=soft"]

I stopped it after a few hours because it's an endless loop.

From what I saw, --max-duration isn't working at all: after 10 minutes the transfer just stops, and even with --cutoff-mode=soft it doesn't let the transfer finish. After 20 minutes it starts the transfer again from the beginning.

Looks like the problem is bigger than I thought: --max-duration isn't working, and --retries isn't respecting it.

--max-duration=TIME
Rclone will stop scheduling new transfers when it has run for the duration specified.
Defaults to off.
When the limit is reached any existing transfers will complete.
Rclone won't exit with an error if the transfer limit is reached.

I'm not sure it's doing what you think it is.

max-duration with soft will continue all transfers until they finish:

Rclone will stop scheduling new transfers when it has run for the duration specified.
Specifying `--cutoff-mode=soft` will stop starting new transfers when Rclone reaches the limit.

You also have some retries going on in there as it's only transferring one file.

So you have this here:

2021/11/28 14:26:38 ERROR : a.writers.odyssey.2021.imax.version.1080p.bluray.x264-cinephilia.mkv: Failed to copy: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks&supportsAllDrives=true&uploadType=resumable&upload_id=ADPycdtRAMszNoALiNCVcw98nmjVqP_Fa5PJSt6WpykhU3Mt_XWPJ0sJm_8Zji0cOHiG1gpPrblNFryY0u1tr3Qw3JA": context deadline exceeded
2021/11/28 14:26:38 ERROR : Encrypted drive 'gdrivecrypt:HD6/Filmes/Filmes Internacionais/A Writer's Odyssey (2021)': not deleting files as there were IO errors
2021/11/28 14:26:38 ERROR : Encrypted drive 'gdrivecrypt:HD6/Filmes/Filmes Internacionais/A Writer's Odyssey (2021)': not deleting directories as there were IO errors
2021/11/28 14:26:38 ERROR : Attempt 1/3 failed with 1 errors and: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks&supportsAllDrives=true&uploadType=resumable&upload_id=ADPycdtRAMszNoALiNCVcw98nmjVqP_Fa5PJSt6WpykhU3Mt_XWPJ0sJm_8Zji0cOHiG1gpPrblNFryY0u1tr3Qw3JA": context deadline exceeded

and you have a 20 minute retry timer, which makes it wait 20 minutes before trying again, as you can see from the next lines.

2021/11/28 14:26:38 ERROR : Attempt 1/3 failed with 1 errors and: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks&supportsAllDrives=true&uploadType=resumable&upload_id=ADPycdtRAMszNoALiNCVcw98nmjVqP_Fa5PJSt6WpykhU3Mt_XWPJ0sJm_8Zji0cOHiG1gpPrblNFryY0u1tr3Qw3JA": context deadline exceeded
2021/11/28 14:46:38 INFO  : Encrypted drive 'gdrivecrypt:HD6/Filmes/Filmes Internacionais/A Writer's Odyssey (2021)': Running all checks before starting transfers
2021/11/28 14:46:38 INFO  : Encrypted drive 'gdrivecrypt:HD6/Filmes/Filmes Internacionais/A Writer's Odyssey (2021)': Transfer session deadline: 2021/11/28 14:56:38
2021/11/28 14:46:39 DEBUG : Encrypted drive 'gdrivecrypt:HD6/Filmes/Filmes Internacionais/A Writer's Odyssey (2021)': Waiting for checks to finish
2021/11/28 14:46:39 INFO  : Encrypted drive 'gdrivecrypt:HD6/Filmes/Filmes Internacionais/A Writer's Odyssey (2021)': Checks finished, now starting transfers
2021/11/28 14:46:39 DEBUG : Encrypted drive 'gdrivecrypt:HD6/Filmes/Filmes Internacionais/A Writer's Odyssey (2021)': Waiting for transfers to finish
2021/11/28 14:46:39 DEBUG : qblkamf54gbmv5aci1e72153v4669261384uaf3q7258ulvuvjg33j2caovu4lposee40p6miisud0njr7qb5kfatc08mev4gp6946np46jn3vv2iur5cn65jci140cq: Sending chunk 0 length 8388608
2021/11/28 14:46:41 DEBUG : qblkamf54gbmv5aci1e72153v4669261384uaf3q7258ulvuvjg33j2caovu4lposee40p6miisud0njr7qb5kfatc08mev4gp6946np46jn3vv2iur5cn65jci140cq: Sending chunk 8388608 length 8388608
2021/11/28 14:46:42 DEBUG : qblkamf54gbmv5aci1e72153v4669261384uaf3q7258ulvuvjg33j2caovu4lposee40p6miisud0njr7qb5kfatc08mev4gp6946np46jn3vv2iur5cn65jci140cq: Sending chunk 16777216 length 8388608

See the 1/3 attempts.

In 10 minutes you fail again:

2021/11/28 14:56:38 ERROR : a.writers.odyssey.2021.imax.version.1080p.bluray.x264-cinephilia.mkv: Failed to copy: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks&supportsAllDrives=true&uploadType=resumable&upload_id=ADPycdtLS1Gz_1Q5umHmQkl-yv7AKEm3AYPXW_Qos3Oh1dMYrZp_z163AeW7CnlfTKLLcrwlK3kL_IcXsSi8JswqeLA": context deadline exceeded
2021/11/28 14:56:38 ERROR : Encrypted drive 'gdrivecrypt:HD6/Filmes/Filmes Internacionais/A Writer's Odyssey (2021)': not deleting files as there were IO errors
2021/11/28 14:56:38 ERROR : Encrypted drive 'gdrivecrypt:HD6/Filmes/Filmes Internacionais/A Writer's Odyssey (2021)': not deleting directories as there were IO errors
2021/11/28 14:56:38 ERROR : Attempt 2/3 failed with 1 errors and: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks&supportsAllDrives=true&uploadType=resumable&upload_id=ADPycdtLS1Gz_1Q5umHmQkl-yv7AKEm3AYPXW_Qos3Oh1dMYrZp_z163AeW7CnlfTKLLcrwlK3kL_IcXsSi8JswqeLA": context deadline exceeded
2021/11/28 15:16:38 INFO  : Encrypted drive 'gdrivecrypt:HD6/Filmes/Filmes Internacionais/A Writer's Odyssey (2021)': Running all checks before starting transfers

and it sleeps again for 20 minutes

2021/11/28 14:56:38 ERROR : Attempt 2/3 failed with 1 errors and: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks&supportsAllDrives=true&uploadType=resumable&upload_id=ADPycdtLS1Gz_1Q5umHmQkl-yv7AKEm3AYPXW_Qos3Oh1dMYrZp_z163AeW7CnlfTKLLcrwlK3kL_IcXsSi8JswqeLA": context deadline exceeded
2021/11/28 15:16:38 INFO  : Encrypted drive 'gdrivecrypt:HD6/Filmes/Filmes Internacionais/A Writer's Odyssey (2021)': Running all checks before starting transfers
2021/11/28 15:16:38 INFO  : Encrypted drive 'gdrivecrypt:HD6/Filmes/Filmes Internacionais/A Writer's Odyssey (2021)': Transfer session deadline: 2021/11/28 15:26:38
2021/11/28 15:16:39 DEBUG : Encrypted drive 'gdrivecrypt:HD6/Filmes/Filmes Internacionais/A Writer's Odyssey (2021)': Waiting for checks to finish
2021/11/28 15:16:39 INFO  : Encrypted drive 'gdrivecrypt:HD6/Filmes/Filmes Internacionais/A Writer's Odyssey (2021)': Checks finished, now starting transfers
2021/11/28 15:16:39 DEBUG : Encrypted drive 'gdrivecrypt:HD6/Filmes/Filmes Internacionais/A Writer's Odyssey (2021)': Waiting for transfers to finish
2021/11/28 15:16:39 DEBUG : qblkamf54gbmv5aci1e72153v4669261384uaf3q7258ulvuvjg33j2caovu4lposee40p6miisud0njr7qb5kfatc08mev4gp6946np46jn3vv2iur5cn65jci140cq: Sending chunk 0 length 8388608
2021/11/28 15:16:40 DEBUG : qblkamf54gbmv5aci1e72153v4669261384uaf3q7258ulvuvjg33j2caovu4lposee40p6miisud0njr7qb5kfatc08mev4gp6946np46jn3vv2iur5cn65jci140cq: Sending chunk 8388608 length 8388608
2021/11/28 15:16:42 DEBUG : qblkamf54gbmv5aci1e72153v4669261384uaf3q7258ulvuvjg33j2caovu4lposee40p6miisud0njr7qb5kfatc08mev4gp6946np46jn3vv2iur5cn65jci140cq: Sending chunk 16777216 length 8388608
2021/11/28 15:16:46 DEBUG : qblkamf54gbmv5aci1e72153v4669261384uaf3q7258ulvuvjg33j2caovu4lposee40p6miisud0njr7qb5kfatc08mev4gp6946np46jn3vv2iur5cn65jci140cq: Sending chunk 25165824 length 8388608
2021/11/28 15:16:48 DEBUG : qblkamf54gbmv5aci1e72153v4669261384uaf3q7258ulvuvjg33j2caovu4lposee40p6miisud0njr7qb5kfatc08mev4gp6946np46jn3vv2iur5cn65jci140cq: Sending chunk 33554432 length 8388608

So you made 2 out of 3 attempts, based on the retry timer you have set up. By default, it'll try 3 times on a failed file, so you need to wait longer since your values are wonky.
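Laid out from the timestamps in your log, each attempt cycle is --max-duration (10m) plus --retries-sleep (20m), i.e. 30 minutes:

14:26:38  attempt 1 hits the 10m deadline and fails
14:46:38  after the 20m sleep, attempt 2 starts (deadline 14:56:38)
14:56:38  attempt 2 hits the deadline and fails
15:16:38  after another 20m sleep, attempt 3 starts

So three attempts take roughly 3 x 30m = 90 minutes end to end.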

grep Attempt wnrZf4si.txt
2021/11/28 14:26:38 ERROR : Attempt 1/3 failed with 1 errors and: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks&supportsAllDrives=true&uploadType=resumable&upload_id=ADPycdtRAMszNoALiNCVcw98nmjVqP_Fa5PJSt6WpykhU3Mt_XWPJ0sJm_8Zji0cOHiG1gpPrblNFryY0u1tr3Qw3JA": context deadline exceeded
2021/11/28 14:56:38 ERROR : Attempt 2/3 failed with 1 errors and: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks&supportsAllDrives=true&uploadType=resumable&upload_id=ADPycdtLS1Gz_1Q5umHmQkl-yv7AKEm3AYPXW_Qos3Oh1dMYrZp_z163AeW7CnlfTKLLcrwlK3kL_IcXsSi8JswqeLA": context deadline exceeded

It's doing exactly what you asked it to, unfortunately, which is why a full debug log is super helpful: we can walk through the problem and explain it.

Hello, thanks for all the help :grinning:

New complete log file as requested:

2021/11/28 22:19:24 DEBUG : rclone: Version "v1.57.0" starting with parameters ["rclone" "sync" "J:\PoolPart.ccce5054-e57c-4e3f-97db-f5132c7e2df7\Plexflix\Filmes\Filmes Internacionais\A Writer's Odyssey (2021)" "gdrivecrypt:HD6\Filmes\Filmes Internacionais\A Writer's Odyssey (2021)" "--fast-list" "--check-first" "--checkers=10" "--order-by" "modtime,ascending" "--bwlimit" "5M" "--log-file=HDTeste1.txt" "--log-level" "DEBUG" "--max-duration=5m" "-P" "--retries-sleep=5m" "--cutoff-mode=soft"]

I still think --max-duration=TIME is broken:

--max-duration=TIME
Rclone will stop scheduling new transfers when it has run for the duration specified.
Defaults to off.
When the limit is reached any existing transfers will complete.
Rclone won't exit with an error if the transfer limit is reached.

I'm trying to upload 1 file. After 5 minutes rclone should stop scheduling new transfers (since it's only 1 file, there's nothing to schedule, so OK, that part works) and it should let the 1 file's upload finish (an "existing transfer" according to the text). This isn't happening: after 5 minutes the upload fails with "ERROR : a.writers.odyssey.2021.imax.version.1080p.bluray.x264-cinephilia.mkv: Failed to copy: Post alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks&supportsAllDrives=true&uploadType=resumable&upload_id=ADPycdtYBNEpuhKVwuB-WuXPJhd6s4wtwmqrP0DC_rNsCp8Df2WaTcVvzeHqN0dRr9945abuaLlUVPgoIc03OUo3pPk: context deadline exceeded"

After that it waits 5 minutes (the sleep time) and restarts the entire sync; then it starts the transfer again, but since the time isn't enough to upload the file, after 5 minutes (the --max-duration time) it stops again. In the end it will never upload any files if --max-duration is lower than the time needed to upload the entire file.

If this isn't a bug, then --max-duration doesn't do what I need. I need some way to make rclone stop opening new transfers once it has run for X time, without it raising an error and starting everything again on a new timer; that's all. Also, some mechanism to retry the entire sync in case of an internet blackout.

I'm not fluent in English, so maybe I'm misunderstanding something.

Thanks!

It's for the exact reason I gave above in my previous post.

It tries 3 times.

etexter@seraphite Downloads % grep Attempt L4s3ptNg.txt
2021/11/28 22:24:28 ERROR : Attempt 1/3 failed with 1 errors and: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks&supportsAllDrives=true&uploadType=resumable&upload_id=ADPycdtYBNEpuhKVwuB-WuXPJhd6s4wtwmqrP0DC_rNsCp8Df2WaTcVvzeHqN0dRr9945abuaLlUVPgoIc03OUo3pPk": context deadline exceeded
2021/11/28 22:34:28 ERROR : Attempt 2/3 failed with 1 errors and: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks&supportsAllDrives=true&uploadType=resumable&upload_id=ADPycdtery7BPxq0IuMyUBKKNvAk1-zeubxuZfIM8pcLRW_4TxeGo6Xj88GXIlVkCYPNMAIonKswTFAkBb-Unhio0YY": context deadline exceeded
2021/11/28 22:44:28 ERROR : Attempt 3/3 failed with 1 errors and: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks&supportsAllDrives=true&uploadType=resumable&upload_id=ADPycdtkLmjO8Y5GiTSBUqE8By4YMxNT85FKe6oMDPNb3n5oOwi8GbvXRVYyQn6gKB1JGXxgBC1Y2kivR7ZO7OGR0aI": context deadline exceeded
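
The arithmetic lines up: each cycle is --max-duration (5m) plus --retries-sleep (5m), i.e. 10 minutes, which is exactly the spacing of those timestamps (22:24:28, 22:34:28, 22:44:28).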

:thinking:

Yes, and I want rclone to stop counting this transfer as "1 errors", because I want the transfer to finish, like it says in the rclone documentation :thinking:

When the limit is reached any existing transfers will complete.

I don't want errors.

@ncw - is this a regression from --max-duration flag is not respected · Issue #4504 · rclone/rclone · GitHub?

Can you try the same thing with 1.55? It was fixed there, and I'm curious if there is a regression.

I don't think it is an exact regression of that, but it is certainly related.

It appears --max-duration and --cutoff-mode soft are broken :frowning: They are acting like --cutoff-mode hard instead.

A bit of git bisect indicates this was broken by this commit.
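For anyone following along, the bisect workflow is roughly this (a sketch; the good/bad endpoints here are illustrative, not the exact ones I used):

git bisect start
git bisect bad master
git bisect good v1.52.3
# at each step git checks out a candidate commit: build it, run a quick
# sync with --max-duration=1m --cutoff-mode=soft, then mark the result
go build
git bisect good    # or: git bisect bad
git bisect reset   # once git has printed the first bad commit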

I've had a go at fixing this - could you try this please @bavaja ?

v1.58.0-beta.5914.84bc6d33b.fix-sync-maxduration-soft on branch fix-sync-maxduration-soft (uploaded in 15-30 mins)

Shouldn't this stop after 5 seconds or am I doing something incorrect?

./rclone copy jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv GD: --max-duration=5s --cutoff-mode=hard -vv --bwlimit 1M
2021/11/29 11:49:53 INFO  : Starting bandwidth limiter at 1Mi Byte/s
2021/11/29 11:49:53 DEBUG : rclone: Version "v1.58.0-beta.5914.84bc6d33b.fix-sync-maxduration-soft" starting with parameters ["./rclone" "copy" "jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv" "GD:" "--max-duration=5s" "--cutoff-mode=hard" "-vv" "--bwlimit" "1M"]
2021/11/29 11:49:53 DEBUG : Creating backend with remote "jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv"
2021/11/29 11:49:53 DEBUG : Using config file from "/Users/etexter/.config/rclone/rclone.conf"
2021/11/29 11:49:53 DEBUG : fs cache: adding new entry for parent of "jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv", "/Users/etexter/Downloads"
2021/11/29 11:49:53 DEBUG : Creating backend with remote "GD:"
2021/11/29 11:49:53 DEBUG : jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv: Need to transfer - File not found at Destination
2021/11/29 11:49:54 DEBUG : jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv: Sending chunk 0 length 329269226
2021/11/29 11:50:53 INFO  :
Transferred:   	   59.934 MiB / 314.016 MiB, 19%, 1023.904 KiB/s, ETA 4m14s

Thank you!

This 1.58 beta version fixed the problem; everything is working fine now :grinning:

Just a question: --cutoff-mode soft is redundant, right? In theory --max-duration is soft by default...

2021/11/29 22:16:06 DEBUG : a.writers.odyssey.2021.imax.version.1080p.bluray.x264-cinephilia.mkv: md5 = b2ed515f22e23f9d0253e9155ed86cc1 OK
2021/11/29 22:16:06 INFO  : a.writers.odyssey.2021.imax.version.1080p.bluray.x264-cinephilia.mkv: Copied (new)
2021/11/29 22:16:06 DEBUG : Waiting for deletions to finish
2021/11/29 22:16:06 ERROR : Can't retry any of the errors - not attempting retries
2021/11/29 22:16:06 INFO  : 
Transferred:   	   18.692 GiB / 18.692 GiB, 100%, 5.769 MiB/s, ETA 0s
Errors:                 1 (no need to retry)
Transferred:            1 / 1, 100%
Elapsed time:     53m59.2s

https://pastebin.com/EeyX7EpW

Thanks for the help @Animosity022 and @ncw

Does cutoff-mode work with --max-duration or only with --max-transfer? The docs seem to indicate only --max-transfer:

--cutoff-mode=hard|soft|cautious
This modifies the behavior of --max-transfer. Defaults to --cutoff-mode=hard.

Specifying --cutoff-mode=hard will stop transferring immediately when Rclone reaches the limit.

Specifying --cutoff-mode=soft will stop starting new transfers when Rclone reaches the limit.

Specifying --cutoff-mode=cautious will try to prevent Rclone from reaching the limit.

Ah yes, well spotted both of you.

--max-duration is specified as being "soft" by default and --cutoff-mode is specified as only working with --max-transfer...

What the patch above does is make --cutoff-mode work with --max-duration as well as --max-transfer, which is fine, but it also changes the default from soft to hard for --max-duration, which is not...

Not sure what to do here...

  1. I could make the default for --cutoff-mode switch depending on whether --max-duration or --max-transfer is specified, plus an error message if you specify both without a --cutoff-mode (see the sketch after this list). This is certainly complicating our lives if rclone gains any more --max-XXX flags!

  2. I could leave the patch as it is, which changes the default --cutoff-mode for --max-duration. This is a behaviour change. However, it's been effectively defaulting to hard since v1.53.0 (released 2020-09-02) and no-one has complained, so it won't actually change the behaviour of rclone as it is now.

  3. I could stop --cutoff-mode influencing --max-duration and keep it soft like it is at the moment. I think --cutoff-mode is probably useful with --max-duration though.
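
As a minimal sketch of what option 1 could look like (Go, with hypothetical names; this is not rclone's actual config code):

package main

import (
	"errors"
	"fmt"
	"time"
)

type CutoffMode int

const (
	CutoffModeHard CutoffMode = iota
	CutoffModeSoft
	CutoffModeCautious
)

// effectiveCutoffMode picks a per-flag default: soft for --max-duration,
// hard for --max-transfer, and an error if both limits are set without
// an explicit --cutoff-mode.
func effectiveCutoffMode(explicit bool, mode CutoffMode, maxDuration time.Duration, maxTransfer int64) (CutoffMode, error) {
	if explicit {
		return mode, nil // the user chose explicitly, so respect it
	}
	switch {
	case maxDuration > 0 && maxTransfer > 0:
		return 0, errors.New("--cutoff-mode must be set when using both --max-duration and --max-transfer")
	case maxDuration > 0:
		return CutoffModeSoft, nil
	default:
		return CutoffModeHard, nil
	}
}

func main() {
	mode, err := effectiveCutoffMode(false, 0, 6*time.Hour, 0)
	fmt.Println(mode, err) // prints "1 <nil>", i.e. soft
}

You can see why it complicates things: every new --max-XXX flag would need its own branch here.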

Thoughts? I think option 2 is probably my favourite but I'd like to hear what you all think.

With the patch, I'm not seeing that though unless I'm doing something wrong. My log from above shows it is acting as 'soft' when I have hard set.

./rclone copy jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv GD: --max-duration=5s --cutoff-mode=hard -vv --bwlimit 1M
2021/11/29 11:49:53 INFO  : Starting bandwidth limiter at 1Mi Byte/s
2021/11/29 11:49:53 DEBUG : rclone: Version "v1.58.0-beta.5914.84bc6d33b.fix-sync-maxduration-soft" starting with parameters ["./rclone" "copy" "jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv" "GD:" "--max-duration=5s" "--cutoff-mode=hard" "-vv" "--bwlimit" "1M"]
2021/11/29 11:49:53 DEBUG : Creating backend with remote "jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv"
2021/11/29 11:49:53 DEBUG : Using config file from "/Users/etexter/.config/rclone/rclone.conf"
2021/11/29 11:49:53 DEBUG : fs cache: adding new entry for parent of "jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv", "/Users/etexter/Downloads"
2021/11/29 11:49:53 DEBUG : Creating backend with remote "GD:"
2021/11/29 11:49:53 DEBUG : jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv: Need to transfer - File not found at Destination
2021/11/29 11:49:54 DEBUG : jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv: Sending chunk 0 length 329269226
2021/11/29 11:50:53 INFO  :
Transferred:   	   59.934 MiB / 314.016 MiB, 19%, 1023.904 KiB/s, ETA 4m14s

I'd expect hard to stop after 5 seconds.

I'd just default to the docs: leave --max-duration as soft, so anything new simply stops being scheduled, and not add more flags. So I'm more for 3, as I'd hope folks are using it based on the docs, and that would be the expected behavior.

I'd imagine no one notices because, if you are doing a long transfer, you wouldn't even notice the last few files aborting unless you were really checking your logs. I think the OP was testing with a very small set, so that was how it was noticed.

Maybe you could make hard/soft/cautious default the same for both --max-transfer and --max-duration, and then make it possible to use --cutoff-mode soft/hard/cautious to change the behavior of both if the user wants to.

It's probably the most laborious option, but this way the user can do everything they want.

In my case I need soft; it doesn't matter if I have to use a new command or switch, my command is already too big to care :rofl:

Thanks!