How to "automate" rclone?

The cache mode most importantly determines whether you can open files in read/write mode (as opposed to read-only or write-only modes).
For example, if you want to open a Word document on Google Drive (crypted or not) and make some changes to it, what happens depends on your cache mode.
With no cache (the default) you are basically limited to uploading and downloading files; no direct modifications are allowed. You have to download the file, make your changes, and then upload it again. That works, but it is a hassle...

With --vfs-cache-mode writes you can modify files normally, as if it were a normal hard drive (in this case, changed files are temporarily written to the hard drive, then uploaded automatically). "writes" mode is fully compatible with all the functions the OS expects a disk to be able to perform. Lower modes are not.
I do not recommend "full" cache mode, however - for various reasons. Do not make the mistake of thinking it is "better". Avoid it unless you have a very good reason to use it.

To sum up - use --vfs-cache-mode writes unless you know you need something different, because it is the best fully compatible mode.
You can control where the temporary files go with --cache-dir "C:\VFScache" or an equivalent path (the quote marks are required).
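Putting those together, a typical mount command might look like this (the remote name, drive letter, and cache path are just placeholders for your own setup):

```shell
# Example only - "gdrive:", X:, and the cache path are placeholders.
rclone mount "gdrive:" X: --vfs-cache-mode writes --cache-dir "C:\VFScache"
```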

As for chunk-sizes...

There are two different uses of this.
One is in the "cache backend". This is a special remote used for caching. I recommend not using it unless you know you need it.

The second use-case is for uploading (not downloading) on Google Drive (and some other remotes).
This determines how large a block of data is transferred at a time.
The default is 8MB. This is quite low unless you have a slow upload (for example a DSL connection).
It basically makes the connection reset every 8MB, and on a faster upload it never quite reaches its full speed because of how TCP inherently behaves (it needs a few seconds to "ramp up").
I generally recommend 64M or 128M if you have a faster upload, because this will greatly increase the total upload speed. The faster the upload, the larger the chunk size should ideally be - but above 128M/256M there is little to no benefit.
The only downside is that, for example, 64M will use up to 64M of RAM during the upload (per connection - default 4).

You can set this by using the following setting in rclone.conf:
chunk_size = 64M
You can insert it anywhere in the "block" of settings for that remote.

--drive-chunk-size 64M will do the same thing as a command-line flag, if you prefer that.

Yes, sure - I always read the documentation, then I ask if I don't understand something.

Ok thanks! I understood :slight_smile:

Understood. I have 20mbps upload, but if I play/do other things I limit it to 5mbps when I need to. So I should go with 64M?

Another question. I have the "Crypt" remote that I use to store and encrypt my files on "RRROBA - ccsf". I want to server-side copy it to another folder (RRROBA - swccd) as a backup. How should I do this?
I mean.
I have "Crypt" that is "RRROBA - ccsf", so if I do
rclone mount "Crypt:" X:
In "X:" I can see the decrypted files
Maybe I have to make "Crypt2" that is "RRROBA - swccd", then do
rclone copy "Crypt2:" "RRROBA - swccd:"
And then
rclone mount "Crypt2:" X:
To access the files?

EDIT: Another question

Can I use the same folder for multiple mounts?
I mean, can I do

rclone mount folder1: x: --cache-dir "C:\VFScache"

rclone mount folder: y: --cache-dir "C:\VFScache"

rclone mount folder: z: --cache-dir "C:\VFScache"

If you can afford 64M x the number of transfers you use (default 4), then 64M is the "best bang for the buck" setting, yes. Close to ideal speeds with relatively limited RAM impact. More is always better, but the benefit diminishes rapidly as you allocate more memory. I would say stick with 64M to 128M, depending on available system RAM.

In order for a server-side transfer to be possible, one of these two criteria must be true:

  • Both source and destination are unencrypted
  • Both source and destination are encrypted with the same key/salt

The reason is that the server can only copy/move files. It can not change them - so if files have to be decrypted/re-encrypted, that MUST happen via the local PC.

For example.... assume the following remotes:

  • Gdrive1:
  • Gcrypt1: (Gdrive1:/Crypt)
  • Gdrive2:
  • Gcrypt2: (Gdrive2:/Crypt)

rclone sync Gcrypt1: Gcrypt2: will work, but not server-side. The data will be decrypted (locally), then re-encrypted (locally).

rclone sync Gdrive1:/Crypt Gdrive2:/Crypt would work server-side, however. In this case we are blindly syncing files regardless of encryption - so as long as the decryptor on the other side has the same settings, it will be fine.


Sure! Thanks

Ok, understood! So I have to use the underlying (non-crypt) remotes to copy, and the crypt remote to mount.

You're really helping me a lot, thanks. I always read the documentation, but this is my first time with "this kind of software" and I can't understand some things simply by reading the documentation.

I edited this later - I think you didn't see it. Is it ok to do this?

Also, I have to do a thing.
I want to do a copy from the local folder "D:\Roba" to the remote "Roba - ccsf:\Roba", but I don't want the subfolder "D:\Roba\RRROBA" to be copied. So I've found this post

And tried this command

rclone copy --dry-run -P -v --filter='- /{RRROBA}/' "D:\Roba" "Roba - ccsf:\Roba"

But the log says this

2020/02/07 12:37:57 Failed to load filters: malformed rule "'-"

Use this - on Windows, cmd does not strip single quotes, so they end up inside the filter rule:
--filter="- /{RRROBA}/"

This works thanks :slight_smile:

Well.. Another question
Is there a way to see the files in a GDrive crypted folder on my Android smartphone?

I would start a new post and ask the question there.


Well, another question (again) lol

Can I use dedupe on a LOCAL folder like D:\Music?

Go ahead and try it yourself.
Of course, use --dry-run.

Yes - each remote will get its own subfolder inside C:\VFScache (typically something like C:\VFScache\vfs\folder1, C:\VFScache\vfs\folder2, and so on).
This happens automatically, so there can not be a collision as long as the remote names are unique.

Yes - I know there are at least 2-3 solid Android apps that incorporate rclone.
Unfortunately I have not explored them much yet, so it is a little difficult for me to tell you which is "best".
I would do as asdfdsa says and start a new topic on this - then I am sure you will get good suggestions to try.

Here is just one I remembered the name of, as an example:

I had the OP create a new post about this, so share your thoughts there instead of continuing in this post.

You should understand what rclone dedupe is- because I think you are misunderstanding what it does.

rclone dedupe is specifically designed to remove duplicates of files in cloud systems that allow files with identical names to exist in the same place (unlike most PC operating systems). Most commonly this can happen on Google Drive, and it is usually not a big problem - but occasionally deduping is a good idea, especially if there are multiple users or systems changing files.

This is NOT a traditional deduping tool like you would use for music or pictures to remove multiple copies of files. It will NOT remove 2 identical files that are stored in 2 different folders, for example. It will literally only clean up instances of 2 files with the same name in the same place (something that can not happen on your local PC on most filesystems).

If that is what you actually want to do, you should go find a dedupe tool and run that on your mount. That will work fine :slight_smile: rclone dedupe is just not "that kind of dedupe".
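If you do want to try rclone dedupe anyway, a safe way to preview it first (the remote name is a placeholder):

```shell
# --dry-run shows what dedupe would do without changing anything.
rclone dedupe --dry-run "gdrive:"
```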


Understood, thanks :slight_smile:
Guys, you've really helped me a lot - now everything works and I have 0 problems (at the moment, at least!)
I still have some questions about some "functionalities", but nothing "too important":
1) If I do "rclone copy gdrive1: gdrive2:" the server-side copy works.
If I mount gdrive1, mount gdrive2, and manually copy the files, the server-side copy DOESN'T work. Is that normal? Am I making some error?

2) I have a local folder with some songs that I copy to gdrive. This is the "structure":
Main folder -> artist folders -> album folders -> songs
So, just to make an example with random numbers to try to explain it better: in the main folder there are 10 artist folders, every artist has 10 album folders, every album has 10 songs.

I have changed the local structure by deleting the "album folders", so now there is the main folder and 10 artist folders with 100 songs each inside.
If I do "rclone sync local:MainFolder gdrive:MainFolder", will it delete all the files in "gdrive:MainFolder" and then upload them again, or will it recognize that the files are the same and only the folders have changed, so it will just "server-side move" the files?

Thanks :slight_smile:

Up, can you answer me please? :slight_smile:

This is not possible given how mounts currently work. If you are curious I can go into the technical details, but in short, it is not possible. It would at least require an rclone-aware application that integrated into the OS and could intercept copy operations. Maybe that will be a thing eventually, and it would be very nice, but it doesn't exist now.

The next best thing for you may be to use the rclone web GUI, however. It has a GUI where you can copy files between remotes, and it supports server-side operations because it is aware of how rclone functions.

Or do it via the command line / a script. These are basically the options you have right now that allow server-side to function.

If you do a "sync" command from A to B, then rclone will make B exactly identical to A.
It will do this in the way that requires the least amount of work and data movement, so syncing twice in a row, for example, would result in the second sync doing nothing.

In other words - it will compare the file attributes of all the files between A and B (size, modtime - and hash, if available), and then only transfer the files that do not already exist on B, or that exist but are not the latest version (in case a file was changed on A after uploading).
Note that it will also delete any files on B that were not on A! So be careful when using sync, because if you misunderstand a command you are running, it could accidentally delete data you didn't mean to.
You can use the flag --dry-run to do a "simulation run" of what would happen first, if you want to be sure you are doing what you intended.

copy does the same as sync, but does not delete any files on B
move does the same as copy, but deletes the files from A after copying.
It is not uncommon for people to confuse the sync and copy functionality, so be careful not to make that mistake.
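As a quick sketch (the remote names are placeholders), a cautious workflow is to preview the sync first:

```shell
# Show what would be transferred/deleted without actually doing it.
rclone sync "A:" "B:" --dry-run -v
# If the output looks right, run the same command without --dry-run.
```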


I am always curious, but I don't want to waste your time, thanks for all your help :slight_smile:

I have tried the GUI but it "doesn't work". I mean, maybe I am making some error, but:

I don't see my configs

And I can't explore my remote
I used the command

rclone rcd --rc-web-gui

I understood the difference perfectly, thanks :slight_smile: My question was a little "more specific", but maybe my example wasn't one of the best. I'll try to write it again in another way.
I have a local folder with only a folder with a file inside it
Daily I do
rclone sync "localfolder:" "remotefolder:"
So my remote folder becomes
What happens if I manually cut the file into the "upper folder", so my local folder becomes
And then I do
rclone sync "localfolder:" "remotefolder:" ?

  1. rclone deletes "remotefolder/folder/text.txt" and then re-uploads text.txt, so the result will be "remotefolder/text.txt"
  2. rclone recognizes that the file is the same, so it will "server-side move" text.txt to the upper folder, and the result will be "remotefolder/text.txt" without needing to re-upload any file

Thanks :slight_smile:

Where are you keeping your rclone.conf file?
Are you setting up rclone under a user account and then running the webUI as a system account? (If so, they will reference different config files if you use the default location.) That is my best guess for what is happening.
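One way to check is to run this from both the user account and the system account and compare the output:

```shell
# Prints the path of the rclone.conf file currently in use.
rclone config file
```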

This depends on what options you use.
By default it will delete and then re-upload in this case.

However, assuming that your 2 remotes can have shared hash-sums (and a local filesystem can create any hash-sum type on the CPU), then you can use --track-renames,
and this will allow rclone to be smarter about it and just server-side move the existing file instead.

As the name indicates, this works not only for files that were moved, but also for files that were renamed (but otherwise stayed identical to before).

So why not always use --track-renames if you can? Well, on a filesystem that stores hashes natively this can be used basically "for free", but most normal user filesystems do not store hashes natively (that's more of a thing for advanced server systems). This means that in order to know them, rclone has to read the entirety of all the files it needs to sync and calculate the hashes on the CPU. The CPU load is not a problem, but if you are syncing hundreds of gigabytes or more many times a day, then this may be more disk activity and wear on your drives than it is worth to save some bandwidth. But that depends on how fast your bandwidth is, how good your drives are, etc.
And if you are just syncing a bunch of small files, it will be trivial anyway...

So there is not necessarily a "right" answer about what is best to use.
The exception is when syncing a cloud remote to another cloud remote that uses the same hash type. In that case they have hashes natively and thus it's "free", so using it will only give you benefits. I use this for all my Gdrive-to-Gdrive transfers. If you try to use --track-renames and hashes can not be used (for example between two clouds using different hash types), rclone will just tell you that it is impossible and fall back to the normal mode of operation.
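For example, reusing the remote names from the earlier (assumed) setup, a Gdrive-to-Gdrive sync with rename tracking could look like:

```shell
# Renamed/moved files are server-side moved instead of re-uploaded.
# --dry-run is added so you can preview it first.
rclone sync "Gdrive1:" "Gdrive2:" --track-renames --dry-run
```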


Ehm, I don't know what this is lol

Understood. Let me say that you explain things really, REALLY well - thanks for all of your help :slight_smile:
So, I'm going to use --track-renames for all my server-side copies
And use it "manually" only when I need it

I have another question... rclone uses my disk a lot.
This is my "script" at the moment:

cd C:\Rclone
rclone sync "D:\Roba\RRROBA" "RRROBA - g-suit:\RRROBA" --checksum --drive-stop-on-upload-limit
rclone copy "D:\Roba" "Roba - ccsf:\Roba" --filter="- /{RRROBA}/" --checksum --drive-stop-on-upload-limit -v -P

::Server-side copy

rclone copy "Roba - swccd:" "Streaming - ccsf:" --checksum --drive-stop-on-upload-limit -v -P
rclone copy "Streaming - swccd:" "Streaming - ccsf:" --checksum --drive-stop-on-upload-limit -v -P
rclone copy "Streaming - ccsf:" "Streaming - swccd:" --checksum --drive-stop-on-upload-limit -v -P
rclone copy "Streaming - ccsf:" "Streaming - g-suit:" --checksum --drive-stop-on-upload-limit -v -P
rclone copy "Streaming - g-suit:" "Streaming - ccsf:" --checksum --drive-stop-on-upload-limit -v -P

So, I have 2 "backups" and 5 "server-side copies"
This script repeats every 30 minutes
It uses my HDD a lot... 18MB/s.
Is there a way to "optimize" the process so it uses the disk less? Thanks