I am uploading/backing up my movie collection. Most of my files are 20 GB+, with some reaching almost 100 GB per file. I'm currently running a very basic command and want to know whether I should add any flags to my upload command to optimize for files this large.
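For reference, this is the kind of basic command I mean, plus the sort of large-file flags I've seen mentioned for Google Drive (paths are placeholders, and the flag values are just guesses on my part, not recommendations):

```shell
# Basic form I'm using today (placeholder paths):
rclone copy /mnt/media gdrive:media -P

# Candidate flags I've read about for big files on Google Drive:
#   --drive-chunk-size 256M  upload in larger chunks (uses more RAM per transfer)
#   --transfers 2            fewer parallel files, since each file is huge
rclone copy /mnt/media gdrive:media -P --drive-chunk-size 256M --transfers 2
```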
Run the command 'rclone version' and share its full output.
yes
Which cloud storage system are you using? (eg Google Drive)
Google Drive
The command you were trying to run (eg rclone copy /tmp remote:tmp)
Can you elaborate on creating a client ID / secret? I believe when I set up the config I just hit Enter (entered nothing). What will this do for uploads?
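For reference, my remote's section in rclone.conf currently looks roughly like this, with the two fields I skipped left empty (the remote name is a placeholder):

```ini
# ~/.config/rclone/rclone.conf
[gdrive]
type = drive
scope = drive
# These two are what I left blank by hitting Enter during config:
client_id =
client_secret =
```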
And yes, pretty much all my files are 20 GB+. What value would you recommend for --checkers? I'm only on a 25 Mbps upload pipe, and I'm getting a steady 3 MB/s upload speed, which essentially saturates my upload, but that's OK.
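As a quick sanity check on those numbers (just arithmetic):

```python
# 25 Mbit/s upload, 8 bits per byte -> theoretical max in MB/s
upload_mbps = 25
max_mb_per_s = upload_mbps / 8
print(max_mb_per_s)  # 3.125, so a steady ~3 MB/s really is near saturation
```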
I have just one or two questions. I want to try the --checkers flag because I'm going to play around with rclone at work, where I have a 1 Gbps upload pipe. I also want to use --checksum (or -c), because the integrity of the data is important to me: once it's uploaded, I'm deleting some of the data to free up room. My question is: does it matter where some of these flags go? Some seem to work when put right after the rclone command, while others seem to work only at the very end. Anyway, bearing in mind the --checkers and --checksum flags, this is the command I used. rclone seemed to accept it, but I want to make sure both flags are actually working. This is what I used to test:
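To make the placement question concrete, these are the two orderings I mean (paths are placeholders; I'm assuming both are equivalent, which is what I'd like confirmed):

```shell
# Flags immediately after the subcommand:
rclone copy --checksum --checkers 16 /mnt/media gdrive:media

# Same flags at the very end:
rclone copy /mnt/media gdrive:media --checksum --checkers 16
```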
If I simply use the rclone copy command, delete files on the source but not the remote, then run rclone copy again to the same remote path when the folder I'm using fills up, will rclone see that the old files are no longer on the source and delete them from the remote?
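To make that question concrete, this is the workflow I have in mind (placeholder paths):

```shell
# Initial upload:
rclone copy /mnt/media gdrive:media

# ...later, delete some local files to free up space...

# Run again with the same paths -- will files now missing locally
# be removed from gdrive:media, or left alone?
rclone copy /mnt/media gdrive:media
```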