I am sure I am reading this wrong, but if I am correct, --max-transfer and --cutoff-mode should automatically resume uploading after 24 hours. That's not the case, though, and I have to restart rclone to get the uploads going again.
Run the command 'rclone version' and share the full output of the command.
rclone v1.60.0-beta.6419.1107da724
os/version: alpine 3.15.6 (64 bit)
os/kernel: 4.15.0-192-generic (x86_64)
os/type: linux
os/arch: amd64
go/version: go1.19
go/linking: static
go/tags: none
Which cloud storage system are you using? (eg Google Drive)
Google Drive
The command you were trying to run (eg rclone copy /tmp remote:tmp)
2022/09/08 12:40:33 ERROR : file1.log: vfs cache: failed to upload try #170, will retry in 5m0s: vfs cache: failed to transfer file from cache to remote: max transfer limit reached as set by --max-transfer
2022/09/08 12:40:33 ERROR : file2.log: vfs cache: failed to upload try #170, will retry in 5m0s: vfs cache: failed to transfer file from cache to remote: max transfer limit reached as set by --max-transfer
As I understand it,
once the max transfer limit is reached, rclone will not transfer any more files,
and based on that log snippet, that is what rclone is telling you.
One possible workaround is to set --bwlimit so that you never hit 750 GiB in 24 hours.
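The arithmetic behind that suggestion can be checked quickly. The 750 GiB figure is Google Drive's daily upload quota; the rate below is just integer division, so it rounds down to a safe value:

```shell
# How many MiB/s can we sustain and still stay under 750 GiB per day?
# 750 GiB = 750 * 1024 MiB; a day has 86400 seconds.
echo $(( 750 * 1024 / 86400 ))  # prints 8 (MiB/s, rounded down)
```

Note that rclone's --bwlimit takes byte units, so --bwlimit 8M means 8 MiB/s, not 8 Mbit/s.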
DRIVE = replace it with your Drive remote
FOLDER = replace it with your mount point (you need to know where your mount point is and change it accordingly)
I run this script via a cronjob:
```
15 0 * * * /home/copy-rclone.sh > /dev/null 2>&1
```
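The script itself isn't shown in the post; a minimal sketch of what /home/copy-rclone.sh might contain, based on the copy command and log-file idea described here (the log path is an assumption, not the poster's actual script):

```shell
#!/bin/sh
# Hypothetical sketch of copy-rclone.sh: copy the local FOLDER to the DRIVE
# remote once per day, capped so we stay under the daily upload quota.
# Logging to a file lets you check later whether a run hit API errors.
rclone copy -v /home/FOLDER DRIVE: \
  --bwlimit 8M \
  --ignore-existing \
  --exclude _unpack/ \
  --log-file /home/copy-rclone.log
```

Because cron starts a fresh run every day, a run that dies at the quota simply picks up the remaining files the next day (--ignore-existing skips what was already copied).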
If I reach the limit, it stops and starts again with the next cron run. I don't know if this is a solution for you?
The log file is helpful for checking that everything is running well. Sometimes I get an error from the Google API if I download too much and reach my limit, but after 4 errors the copy stops as well.
If you use --bwlimit, you need to set it up like this:
```
rclone copy -v /home/FOLDER DRIVE: --bwlimit 8M --ignore-existing --exclude _unpack/
```
--bwlimit 8M = 8 MiB/s, which works out to roughly 700 GB in 24 hours, just under the 750 GB daily limit