What is the problem you are having with rclone?
I am trying to upload files (mostly raw files and videos) from an external SSD to Google Drive via the copy command, i.e. make a backup copy of the files from the external drive to my Google Drive.
For context, I have about 2.69 TiB of files to move (checked this with the rclone size command). I used the copy command, but it does not appear to attempt to transfer all the files. It will start copying and the progress shows something like 2 / 443 GiB, which is obviously not the total amount of data on the drive. The total it decides to upload also seems fairly random, sometimes showing up as out of 600 GiB, or 1 TiB, etc. I cannot figure out why it is not trying to upload the whole drive, especially when the rclone size command seemingly knows the drive holds that much data.
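Roughly, the commands I am running are the following (gdrive is the name I gave the remote; the exact copy command is pasted further down):
rclone size "/Volumes/drive"
rclone copy "/Volumes/drive" gdrive:/Backups/ExternalDrive --progress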
Run the command 'rclone version' and share the full output of the command.
rclone v1.68.1
os/version: darwin 15.3.1 (64 bit)
os/kernel: 24.3.0 (arm64)
os/type: darwin
os/arch: arm64 (ARMv8 compatible)
go/version: go1.23.1
go/linking: dynamic
go/tags: cmount
Which cloud storage system are you using? (eg Google Drive)
Google Drive
The command you were trying to run (eg rclone copy /tmp remote:tmp)
Add -vv --log-file /path/to/rclone.log to your command, then either look through the log yourself or post it here and somebody will have a look.
Most likely your transfer is being seriously throttled by Google. To start with, create your own client_id, then recreate your remote.
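If it helps, the short version of the client_id step (gdrive being the remote name used later in this thread; the detailed walkthrough is in rclone's Google Drive docs) is: create an OAuth client ID and secret in the Google Cloud Console, then feed them into the existing remote and re-authorize, roughly:
rclone config update gdrive client_id YOUR_CLIENT_ID client_secret YOUR_CLIENT_SECRET
rclone config reconnect gdrive:
(YOUR_CLIENT_ID and YOUR_CLIENT_SECRET are placeholders for the values the Google Cloud Console gives you.)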
Also, delete these. I think it is too much for Gdrive. If everything works with the defaults, you can experiment later with increasing it. But you might even have to set it lower than the default. You have to try.
Updating rclone to the latest version is also recommended. It is v1.69.1 now.
Sorry, I am new to rclone and not really familiar with terminal stuff in general. What do I post from the log? Is the log wherever I specify it to be? And also, what do I do about creating my own client ID?
I pasted the command as such: rclone copy "/Volumes/drive" gdrive:/Backups/ExternalDrive --progress --drive-chunk-size=128M -vv --log-file /Users/myuserfolder/rclone.log
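A side note on that command: with the chunk-size flag dropped as suggested above, it would look roughly like
rclone copy "/Volumes/drive" gdrive:/Backups/ExternalDrive --progress -vv --log-file /Users/myuserfolder/rclone.log
The log then ends up at /Users/myuserfolder/rclone.log; the last part of it, plus any lines containing ERROR, is usually enough to post here.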
Keep in mind that Gdrive has a hard upload limit of something like 750 GB per 24 hours.
Depending on the upload speed of your internet connection, you might hit that limit.
The workaround is to limit the upload bandwidth using --bwlimit=8.5M.
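Put together with the command from earlier in the thread, that would be something like
rclone copy "/Volumes/drive" gdrive:/Backups/ExternalDrive --progress --bwlimit 8.5M -vv --log-file /Users/myuserfolder/rclone.log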
I understand the upload limit - but to my understanding, when that limit is hit, I can just rerun the command after the 24 hr cooldown, correct? Or is it just better to limit the bandwidth?
Also, it still doesn't upload the full amount - I just tried and it was out of 1.1 TiB, but still not the full 2.69 TiB.
Am I doing the copy line wrong? Should I limit it to one subfolder at a time?
I know kapitaninsky mentioned this, but I am unsure how to run it - do I just run the copy command and put this after it? Where is the log, or does the path just create the log wherever I set it to? And what do I send from that? Sorry, new to this.
No problem. To better understand what is going on, I would run rclone check /Volumes/drive gdrive:/Backups/ExternalDrive --combined=/path/to/combined.txt
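If I remember the report format right, every path in combined.txt gets a one-character prefix, so the file will contain lines like these (the paths are made-up examples, and the notes in parentheses are not part of the file):
= photos/IMG_0001.CR3   (identical on both sides)
+ videos/clip_0002.mov  (only on the source, i.e. still to be uploaded)
- old/removed_file.jpg  (only on the destination)
* photos/IMG_0003.CR3   (present on both sides but different)
Counting the + lines tells you how much is actually still left to upload.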
Do you have more than 10 000 files to transfer? If yes, see
--max-backlog
from the Global Flags.
You can set --max-backlog bigger at the cost of using more memory (RAM).
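For example, the earlier copy command with a much bigger backlog would look roughly like this (the number is just an example; it only needs to be larger than the number of files on the drive):
rclone copy "/Volumes/drive" gdrive:/Backups/ExternalDrive --progress --max-backlog 500000 -vv --log-file /Users/myuserfolder/rclone.log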
Also, rclone only transfers and shows the files transferred in the current run. So already uploaded files do not show up in the transferred amount, nor are they queued to be uploaded again.
I’ve run into the same problem, and it’s honestly been a huge headache. You’d think uploading an entire external drive would be straightforward with Rclone, but between permission issues, weird rate limits, and inconsistent behavior, it’s anything but.
What’s frustrating is the lack of clear error messages — it just stops or skips files without much context. I’ve tried tweaking flags, chunk sizes, even using service accounts, but the results are still unreliable. This kind of thing really kills confidence in what’s otherwise a powerful tool.
Would love to see better documentation or at least more robust error handling around these kinds of bulk uploads.