Max-duration option is triggering exit with error

What is the problem you are having with rclone?

The --max-duration option is triggering an exit code other than zero, while the docs state that the exit code should be 0.

--max-duration=TIME

Rclone will stop scheduling new transfers when it has run for the duration specified.

Defaults to off.

When the limit is reached any existing transfers will complete.

Rclone won't exit with an error if the transfer limit is reached.

Source: Documentation

Some context:

I'm currently copying a large amount of data from a drive, and I'm using the pfidr34/docker-rclone image (a Docker image that uses rclone to run cron syncs with monitoring) to monitor the progress. This image lets me set up a healthchecks.io URL to receive notifications for successful and failed copies. Unfortunately, I'm encountering an issue where the copy process stops and exits with an error due to the --max-duration parameter that I have set for each scheduled transfer, which triggers a failure notification. I was hoping to clarify the expected exit code for the --max-duration parameter, as the documentation indicates that it should exit with code 0.

Run the command 'rclone version' and share the full output of the command.

rclone v1.63.0

  • os/version: alpine 3.10.0 (64 bit)
  • os/kernel: 5.7.0-0.bpo.2-amd64 (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.20.5
  • go/linking: static
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy ${LOCAL_FS} gdrive:/ --log-level ${LOG_LEVEL} --check-first --order-by size,descending  --stats 5m --use-mmap --timeout 2m --config /config/rclone.conf --max-duration=${MAX_DURATION} --max-transfer ${MAX_TRANSFER} --bwlimit ${BWLIMIT} --drive-skip-gdocs --drive-acknowledge-abuse --cutoff-mode SOFT --fast-list --drive-server-side-across-configs --drive-stop-on-download-limit

The rclone config contents with secrets removed.

[gdrive]
type = drive
scope = drive
service_account_file = /config/keys/new/blackhole-4.json
team_drive = ateamdriveid

A log from the command with the -vv flag

2023/07/13 10:33:51 INFO  : movies/The Lighthouse (2019)/The.Lighthouse.2019.1080p.BluRay.REMUX.AVC.DTS-HD.MA.5.1-FGT.mkv: Multi-thread Copied (new)
2023/07/13 10:33:52 ERROR : Local file system at /mnt/disk1: max transfer duration reached as set by --max-duration
2023/07/13 10:33:52 ERROR : Cancelling sync due to fatal error: max transfer duration reached as set by --max-duration
2023/07/13 10:33:52 ERROR : Fatal error received - not attempting retries
2023/07/13 10:33:52 INFO  : 
Transferred:   	  117.969 GiB / 30.448 TiB, 0%, 0 B/s, ETA -
Errors:                 1 (fatal error encountered)
Checks:                24 / 24, 100%
Transferred:            4 / 9433, 0%
Elapsed time:   1h38m49.7s

2023/07/13 10:33:52 Failed to copy: max transfer duration reached as set by --max-duration

and what was the actual exit code?

I'll try to debug the code later, but looking at the sync.sh file from the Docker image I'm using, it's clear the exit code is different from 0:

  # Wrap up healthchecks.io call with complete or failure signal
  if [ -z "$CHECK_URL" ]
  then
    echo "INFO: Define CHECK_URL with https://healthchecks.io to monitor $RCLONE_CMD job"
  else
    if [ "$RETURN_CODE" == 0 ]
    then
      if [ ! -z "$OUTPUT_LOG" ] && [ ! -z "$HC_LOG" ] && [ -f "$LOG_FILE" ]
      then
        echo "INFO: Sending complete signal with logs to healthchecks.io"
        m=$(tail -c 10000 "$LOG_FILE")
        wget $CHECK_URL -O /dev/null --post-data="$m"
      else
        echo "INFO: Sending complete signal to healthchecks.io"
        wget $CHECK_URL -O /dev/null --post-data="SUCCESS"
      fi
    else
      if [ ! -z "$OUTPUT_LOG" ] && [ ! -z "$HC_LOG" ] && [ -f "$LOG_FILE" ]
      then
        echo "INFO: Sending failure signal with logs to healthchecks.io"
        m=$(tail -c 10000 "$LOG_FILE")
        wget $FAIL_URL -O /dev/null --post-data="$m"
      else
        echo "INFO: Sending failure signal to healthchecks.io"
        wget $FAIL_URL -O /dev/null --post-data="Check container logs"
      fi
    fi
  fi

Source: https://github.com/pfidr34/docker-rclone/blob/master/sync.sh

It would be great to check, as I think --max-duration=${MAX_DURATION} and --max-transfer ${MAX_TRANSFER} can mess things up when used together - and they should not.

Doesn't look like it. I just ran a test without --max-transfer and it exited with a fatal error.

rclone copy gdrive: /mnt/disk1 --config /config/rclone.conf --log-level INFO --check-first --order-by size,descending --stats 5m --use-mmap --timeout 2m --config /config/rclone.conf --max-duration=1h --bwlimit 25M --drive-skip-gdocs --drive-acknowledge-abuse --cutoff-mode SOFT --fast-list --drive-server-side-across-configs --drive-stop-on-download-limit

Transferred: 117.182 GiB / 30.333 TiB, 0%, 0 B/s, ETA -
Errors: 1 (fatal error encountered)
Checks: 28 / 28, 100%
Transferred: 4 / 9429, 0%
Elapsed time: 1h38m10.3s

2023/07/13 18:23:45 Failed to copy: max transfer duration reached as set by --max-duration

You are correct that it does not behave as documented:

$ rclone copy . crypt:test --max-duration=5s -P
2023-07-14 11:19:22 NOTICE:: Failed to cancel multipart upload: ....: context deadline exceeded)
2023-07-14 11:19:22 ERROR : test: Failed to copy:....: context deadline exceeded
2023-07-14 11:19:22 ERROR : Encrypted drive 'crypt:test': max transfer duration reached as set by --max-duration
2023-07-14 11:19:22 ERROR : Fatal error received - not attempting retries
Transferred:   	    4.501 MiB / 4.501 MiB, 100%, 1.125 MiB/s, ETA 0s
Errors:                 2 (fatal error encountered)
Elapsed time:         5.9s
2023/07/14 11:19:22 Failed to copy with 2 errors: last error was: max transfer duration reached as set by --max-duration

# current transfer has been cancelled

$ echo $?
7

$ rclone copy . crypt:test --max-duration=5s -P --cutoff-mode=soft
2023-07-14 11:11:19 ERROR : Encrypted drive 'crypt:test': max transfer duration reached as set by --max-duration
2023-07-14 11:11:19 ERROR : Cancelling sync due to fatal error: max transfer duration reached as set by --max-duration
2023-07-14 11:11:19 ERROR : Fatal error received - not attempting retries
Transferred:   	   30.007 MiB / 30.007 MiB, 100%, 645.276 KiB/s, ETA 0s
Errors:                 1 (fatal error encountered)
Transferred:            1 / 1, 100%
Elapsed time:        42.4s
2023/07/14 11:11:19 Failed to copy: max transfer duration reached as set by --max-duration

# current transfer finished

$ echo $?
7
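For context on why both runs exit with 7: the sync layer constructs the max-duration error as a fatal error, and the exit-code resolution in cmd.go maps any fatal error without a more specific case to 7, regardless of --cutoff-mode. A minimal illustration of that classification (the error construction mirrors what fs/sync does; the exit-code mapping itself is only paraphrased in the comment):

    package main

    import (
        "errors"
        "fmt"

        "github.com/rclone/rclone/fs/fserrors"
    )

    func main() {
        // fs/sync builds the sentinel like this: a plain error marked
        // as fatal so that no retries are attempted.
        errMaxDuration := fserrors.FatalError(errors.New("max transfer duration reached as set by --max-duration"))

        // cmd.go has no dedicated case for this error, so its switch
        // falls through to the fatal-error branch -> exit code 7.
        fmt.Println(fserrors.IsFatalError(errMaxDuration)) // true
    }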

Docs:

--max-duration=TIME

Rclone will stop scheduling new transfers when it has run for the duration specified.

Defaults to off.

When the limit is reached any existing transfers will complete.

Rclone won't exit with an error if the transfer limit is reached.

So there are a few issues here.

  1. It is not documented that --cutoff-mode applies to --max-duration as well:

--cutoff-mode=hard|soft|cautious

This modifies the behavior of --max-transfer. Defaults to --cutoff-mode=hard.

  2. Contrary to the --max-duration description, it does not just stop scheduling new transfers but terminates immediately, and existing transfers are not completed - but since --cutoff-mode works, this is only a documentation problem.

  3. Even when --cutoff-mode=soft is used, rclone exits with error code 7.

I will update the documentation, but I am not sure if error code 7 is by design or a bug - @ncw? IMHO we should add a new error code 10:

List of exit codes

  • 0 - success
  • 1 - Syntax or usage error
  • 2 - Error not otherwise categorised
  • 3 - Directory not found
  • 4 - File not found
  • 5 - Temporary error (one that more retries might fix) (Retry errors)
  • 6 - Less serious errors (like 461 errors from dropbox) (NoRetry errors)
  • 7 - Fatal error (one that more retries won't fix, like account suspended) (Fatal errors)
  • 8 - Transfer exceeded - limit set by --max-transfer reached
  • 9 - Operation successful, but no files transferred
  • 10 - Duration exceeded - limit set by --max-duration reached
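
In code terms this is a small change: lib/exitcode keeps these values in an iota block, so the proposal amounts to appending one constant. A sketch, assuming constant names that mirror the documented meanings:

    // Sketch of lib/exitcode with the proposed value appended.
    package exitcode

    const (
        Success            = iota // 0 - success
        UsageError                // 1 - syntax or usage error
        UncategorizedError        // 2 - error not otherwise categorised
        DirNotFound               // 3 - directory not found
        FileNotFound              // 4 - file not found
        RetryError                // 5 - temporary error
        NoRetryError              // 6 - less serious errors
        FatalError                // 7 - fatal error
        TransferExceeded          // 8 - limit set by --max-transfer reached
        NoFilesTransferred        // 9 - successful, but no files transferred
        DurationExceeded          // 10 - proposed: --max-duration reached
    )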

I have updated the docs:

Thanks! It would be great to have a different status code for the --max-duration option though, to differentiate between user-intended behavior and actual fatal errors.

Exit code 7 is a fatal error.

It would be easy to make a new exit code for duration exceeded, or maybe we should reword exit code 8 to include max duration too, as the concept is the same: the sync stopped because a user-configured limit was reached.

Here is the magic if you want to have a go @kapitainsky

    case errors.Is(err, accounting.ErrorMaxTransferLimitReached):
        os.Exit(exitcode.TransferExceeded)

That would need making the error public - I'd probably move it to accounting to live with accounting.ErrorMaxTransferLimit.
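
Putting those pieces together, here is a compilable sketch of the proposed wiring. Everything below uses stand-in names (the exit-code constants, the exported sentinel, and the trimmed-down resolveExitCode are illustrative, not rclone's actual code):

    package main

    import (
        "errors"
        "fmt"
        "os"
    )

    // Stand-ins for the lib/exitcode values involved.
    const (
        exitFatalError       = 7
        exitDurationExceeded = 10 // the proposed new code
    )

    // ErrorMaxDurationReached is how the sentinel might look once it is
    // public (e.g. moved to accounting, next to the transfer one).
    var ErrorMaxDurationReached = errors.New("max transfer duration reached as set by --max-duration")

    // resolveExitCode mirrors the shape of the switch in cmd.go: match
    // the sentinel before falling back to the generic fatal-error code.
    func resolveExitCode(err error) int {
        switch {
        case err == nil:
            return 0
        case errors.Is(err, ErrorMaxDurationReached):
            return exitDurationExceeded
        default:
            return exitFatalError
        }
    }

    func main() {
        err := fmt.Errorf("Failed to copy: %w", ErrorMaxDurationReached)
        os.Exit(resolveExitCode(err)) // exits 10 instead of 7
    }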

It should definitely be a different error code :) If the user is using both options, it is always better to be able to distinguish the cause of the error.

hahah.. I have tried already

I can unwrap it in cmd.go:

_, unwrapped := fserrors.Cause(err)
fmt.Printf("err = %#v\n", unwrapped)

err = &errors.errorString{s:"max transfer duration reached as set by --max-duration"}

but I'm struggling to match it back to the original error..
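
The %#v output shows the flattened cause as a bare *errors.errorString, which is exactly the problem: errors.Is needs an exported sentinel value to compare against, and the wrap chain has to be preserved along the way. A small generic illustration of both points (not rclone code):

    package main

    import (
        "errors"
        "fmt"
    )

    // An exported sentinel that callers can reference.
    var ErrMaxDuration = errors.New("max transfer duration reached as set by --max-duration")

    func main() {
        // Wrapped with %v the chain is lost - errors.Is cannot match.
        opaque := fmt.Errorf("Failed to copy: %v", ErrMaxDuration)
        fmt.Println(errors.Is(opaque, ErrMaxDuration)) // false

        // Wrapped with %w the chain is preserved - errors.Is matches.
        wrapped := fmt.Errorf("Failed to copy: %w", ErrMaxDuration)
        fmt.Println(errors.Is(wrapped, ErrMaxDuration)) // true
    }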

Actually, I will include it in my PR, and with your help it can be fixed.

@dantebarba

I have a working version ready (with exit code 10 for --max-duration). If you need it urgently, compile it from this PR:

Otherwise, hopefully it will be included in a future release.

I've merged this to master now, which means it will be in the latest beta in 15-30 minutes and released in v1.64 - thank you :)

