I'm making a script and want rclone to stop transferring after a set number of files. So every 100 files transferred, stop the transfer.
I see there's '--max-transfer SizeSuffix', but that limits by size. I want in-flight files to complete, and I want to copy at least a certain number of files rather than a certain amount of data.
What is your rclone version (output from rclone version)
Which OS you are using and how many bits (eg Windows 7, 64 bit)
rclone v1.53.3
os/arch: linux/arm
go version: go1.15.5
Which cloud storage system are you using? (eg Google Drive)
GDrive
The command you were trying to run (eg rclone copy /tmp remote:tmp)
Unfortunately, the way I want my script to work, I'm always going to have data to be transferred, let alone TBs worth. Would there be another flag that might do x amount over time?
If you mean amount transferred over time then --bwlimit might be what you are looking for? That will limit the transfer to so many bytes per second. So if you set --bwlimit 1M rclone will transfer no more than 1 MB per second.
If you want to limit files per unit time, you can't do that exactly, but --tpslimit will get pretty close. There are usually 2-4 transactions per uploaded file, so you can set an approximate files-per-second limit with it.
If you set --max-backlog 100, that will mean rclone keeps 100 files in the backlog of files to transfer, so it probably isn't what you want.
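For example (the source path and remote name here are just placeholders):

```shell
# Limit transfer speed to 1 MB per second:
rclone copy /path/to/source remote:dest --bwlimit 1M

# Approximate a files-per-second cap: uploads typically cost 2-4
# transactions each, so --tpslimit 4 works out to roughly 1-2 files/s.
rclone copy /path/to/source remote:dest --tpslimit 4
```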
No, --bwlimit isn't going to work how I want. I need max upload speed in my case. And isn't --tpslimit the same as --transfers?
For --max-backlog, I would like it to transfer 100 at a time. Would it look for 100 files in the queue, upload those, then stop? And would files already uploaded not count towards the 100 in the queue?
No, TPS is transactions per second. So you have transfers/checkers/getting directory listings/fetching file sizes/etc. all going on, and each of those is a transaction.
You can limit transfers/checkers, which limits the number of things happening at the same time.
If you limit TPS, that impacts everything.
Say you have 1000 files.
You can configure the number of simultaneous uploads with --transfers.
--max-backlog is how far ahead rclone builds its list of things to transfer. So if you set --max-backlog 10, it will iterate through those 1000 files, keeping 10 files in the backlog as the next items to be uploaded.
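As a concrete sketch (the source path and remote name are placeholders):

```shell
# 4 simultaneous uploads; rclone reads ahead and keeps only the next
# 10 files queued in the backlog - but it still runs as one command
# until the whole source has been copied.
rclone copy /path/to/source remote:dest --transfers 4 --max-backlog 10
```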
I see. So I think --tpslimit isn't going to be what I want; --max-backlog is closer. Let me explain exactly what the script does. I have photos and videos in a folder. Let's say it has 1000 files, a mix of videos and photos of different sizes.
What I want rclone to do is upload, say, 25 photos/videos at a time regardless of size, then run a totally different set of commands. After those commands are done, upload another 25 photos/videos, stop rclone completely, run the separate commands, then upload another 25. In a nutshell, in pseudocode:
rclone upload 25 photos
print done
rclone upload another set of 25 photos
print done
If I'm understanding correctly, with --max-backlog it'll upload 10 at a time like you mentioned, but it'll still be running the same rclone command until all 1000 files are uploaded in one shot.
So like this:
rclone upload 10
print done
but "done" won't get printed until all 1000 files are uploaded.
#!/bin/bash
SOURCE=/path/to/source
DEST=remote:dest
FILES=/tmp/files
BATCH=25
rclone lsf --files-only "${SOURCE}" > "${FILES}"
while [ -s "${FILES}" ]; do
    # Get the first ${BATCH} files
    head -n "${BATCH}" "${FILES}" > "${FILES}-batch"
    # Cut the ${BATCH} off the top of `${FILES}` (tail -n +K starts at line K)
    tail -n +"$((BATCH + 1))" "${FILES}" > "${FILES}-new"
    mv "${FILES}-new" "${FILES}"
    # Now transfer this batch
    rclone copy --files-from-raw "${FILES}-batch" "${SOURCE}" "${DEST}"
done
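The tail slicing in that loop is easy to get off by one: `tail -n +K` starts printing at line K, so skipping the first batch needs +$((BATCH + 1)). A quick sanity check of the batching logic on a fake file list (no rclone involved):

```shell
#!/bin/sh
# Demonstrate the head/tail batch slicing on a fake file list:
# 7 "files", batches of 3.
FILES=$(mktemp)
printf '%s\n' f1 f2 f3 f4 f5 f6 f7 > "${FILES}"
BATCH=3
BATCHES=""
while [ -s "${FILES}" ]; do
    head -n "${BATCH}" "${FILES}" > "${FILES}-batch"
    # tail -n +K starts printing at line K, so skip the batch with +$((BATCH + 1))
    tail -n +"$((BATCH + 1))" "${FILES}" > "${FILES}-new"
    mv "${FILES}-new" "${FILES}"
    BATCHES="${BATCHES}[$(paste -s -d, "${FILES}-batch")]"
done
echo "${BATCHES}"   # prints [f1,f2,f3][f4,f5,f6][f7]
rm -f "${FILES}" "${FILES}-batch"
```

Your "totally different set of commands" would then go inside the loop, after the rclone copy line.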