The cache mode primarily determines whether you can open files in read/write mode (as opposed to read-only or write-only modes).
For example, if you want to open a Word document on Google Drive (encrypted or not) and make some changes to it, whether that works depends on your cache mode. With no cache you have to download the file, make your changes, and then upload it again. That works - but it is a hassle...
With no cache (default) you are basically limited to uploading and downloading files. No direct modifications allowed.
With --vfs-cache-mode writes you can modify files normally, as if the remote were a normal hard drive. (In this case, changed files are temporarily written to the local hard drive and then uploaded automatically.) "writes" mode is fully compatible with all the operations the OS expects a disk to be able to perform. Lower modes are not.
I do not recommend "full" cache mode however - for various reasons. Do not make the mistake of thinking this is "better". Avoid this unless you have a very good reason to use it.
To sum up - use --vfs-cache-mode writes unless you know you need something different, because this is the best 100% compatible mode.
You can control where the temporary files go with --cache-dir "C:\VFScache" or an equivalent path. (The quote signs are required.)
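As a sketch, a full mount command using "writes" mode together with a custom cache location might look like this (the remote name and drive letter are placeholders, not taken from your setup):

```shell
rclone mount MyRemote: X: --vfs-cache-mode writes --cache-dir "C:\VFScache"
```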
As for chunk-sizes...
There are two different uses of this.
One is in the "caching backend". This is a special remote for caching. I recommend not using it unless you know you need to.
The second use-case is for uploading (not downloading) on Google Drive (and some other remotes).
This determines how large a block of data is transferred at a time.
The default is 8MB. This is quite low unless you have a slow upload (for example DSL connection).
That basically makes the connection reset every 8MB, and on a faster upload it never quite reaches its full speed because of how TCP inherently behaves (it needs a few seconds to "ramp up" the speed).
I generally recommend 64M or 128M if you have a faster upload speed, because this will greatly increase the total upload speed. The faster your upload, the higher the chunk size should ideally be - but above 128M/256M there is little to no benefit.
The only downside is that, for example, 64M will use up to 64M of RAM during the upload (per connection - default 4).
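To see the RAM impact concretely, here is a quick sketch of the arithmetic (the chunk size and transfer count are just the example values from above):

```shell
# Each parallel transfer buffers up to one full chunk in RAM.
chunk_mib=64    # e.g. --drive-chunk-size 64M
transfers=4     # rclone's default number of parallel transfers
ram_mib=$((chunk_mib * transfers))
echo "Peak upload buffer: ${ram_mib} MiB"
```

So 64M chunks with the default 4 transfers can peak around 256 MiB of RAM just for upload buffers.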
You can set this by using the following setting in rclone.conf: chunk_size = 64M
You can insert this anywhere in the block of settings for that remote.
--drive-chunk-size 64M will do the same thing as a command-flag if you prefer that.
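For illustration, the relevant block of rclone.conf might then look something like this (the remote name "gdrive" is a placeholder for whatever your remote is called, alongside its existing settings):

```
[gdrive]
type = drive
chunk_size = 64M
```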
Yes sure, I always read the documentation, then I ask if I don't understand something..
Ok thanks! I understood
Understood. I have 20 Mbps upload, but if I play/do other things I limit it to 5 Mbps when I need to. So I should go with 64M?
Another question. I have the "Crypt" remote that I use to store and encrypt my files to "RRROBA - ccsf". I want to server-side copy it to another folder (RRROBA - swccd) to do a backup of it. How should I do this?
I have "Crypt" that is "RRROBA - ccsf", so if I do rclone mount "Crypt:" X:
In "X:" I can see the decrypted files
Maybe I have to make "Crypt2" that is "RRROBA - swccd", then do rclone copy "Crypt2:" "RRROBA - swccd:"
And then rclone mount "Crypt2:" X:
To access the files?
EDIT: Another question
Can I use the same folder for multiple mounts?
I mean, can I do
rclone mount folder1: x: --cache-dir "C:\VFScache"
If you can afford 64M x the number of transfers you use (default 4), then 64M is the "best bang for the buck" setting, yes. Close to ideal speeds with relatively limited RAM impact. More is always better, but the benefit diminishes rapidly as you allocate more memory. I would say stick with 64M to 128M depending on available system RAM.
In order for a server-side transfer to be possible, one of these 2 criteria must be true:
Both source and destination are unencrypted
Both source and destination are encrypted with the same key/salt
The reason is that the server can only copy/move files. It can not change them - so if files have to be decrypted/re-encrypted, that MUST happen via the local PC.
For example, assume two Google Drive remotes, Gdrive1: and Gdrive2:, each with a crypt remote (Gcrypt1: and Gcrypt2:) pointing at a /Crypt folder on it:
rclone sync Gcrypt1: Gcrypt2: will work, but not server-side. The data will be decrypted (locally), then re-encrypted (locally).
rclone sync Gdrive1:/Crypt Gdrive2:/Crypt would work server-side however. In this case we are blindly syncing the files regardless of encryption - so as long as the decryptor on the other side has the same settings, it will be fine.
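A sketch of what such remotes might look like in rclone.conf (the names and the /Crypt folder come from the example above; the password lines are placeholders, and the key/salt must match for server-side to work between the crypts):

```
[Gdrive1]
type = drive

[Gdrive2]
type = drive

[Gcrypt1]
type = crypt
remote = Gdrive1:/Crypt
password = <same password>
password2 = <same salt>

[Gcrypt2]
type = crypt
remote = Gdrive2:/Crypt
password = <same password>
password2 = <same salt>
```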
Yes - each remote will get its own subfolder inside C:\VFScache.
This happens automatically, so there can not be a collision as long as remote names are unique.
Yes - I know there are at least 2-3 solid android apps that incorporate rclone.
Unfortunately I have not explored these a lot yet, so it is a little difficult for me to tell you which is "best".
I would do as asdfdsa says and start a new topic on this - then I am sure you will get good suggestions to try.
Here is just one I remembered the name of - as an example:
You should understand what rclone dedupe is - because I think you are misunderstanding what it does.
rclone dedupe is specifically designed to remove duplicates of files in Cloud-systems that can allow files with identical names to exist (unlike most PC operating systems). Most commonly this can happen on Google Drive sometimes and is not a big problem - but occasionally deduping is a good idea, especially if there are multiple users or systems changing files.
This is NOT a traditional deduping tool like you would use for music or pictures to remove multiple copies of files. It will NOT remove 2 identical files that are stored in 2 different folders for example. It will literally only clean up instances of 2 files with the same name - in the same place (something that can not happen on your local PC on most filesystems).
If that is what you actually want to do, you should go find a deduper software and run that on your mount. That will work fine; rclone dedupe is just not "that kind" of dedupe.
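As a sketch, you can check a remote for same-name duplicates without changing anything first, then pick a resolution mode (the remote name "gdrive" is a placeholder):

```shell
# Preview what dedupe would do, without modifying anything
rclone dedupe --dry-run gdrive:

# Or automatically keep the newest copy of each duplicated name
rclone dedupe --dedupe-mode newest gdrive:
```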
Guys you helped me really a lot, now it all works and I have 0 problems (at the moment, at least!)
I still have some questions about some "functionalities", but nothing "too important"
1) If I do "rclone copy gdrive1: gdrive2:" the server-side copy works.
If I mount gdrive1, mount gdrive2 and manually copy files between them, the server-side copy DOESN'T work. Is that normal? Am I making some error?
I have a local folder with some songs that I copy to gdrive. This is the "structure"
Main folder - > folders of artists - > folders of albums - > songs
So, just to make an example with random numbers to explain it better: in the main folder there are 10 artist folders, every artist has 10 album folders, every album has 10 songs.
I have changed the local structure by deleting the "album folders", so now there is the main folder and 10 artist folders with 100 songs each directly inside.
If I do "rclone sync local:MainFolder gdrive:MainFolder" will it delete all the files in the "gdrive:MainFolder" and then upload them again or just recognize that there are the same files and only the folders have changed, so it will just "server side move" the files?
This is not possible given how mounts currently work. If you are curious I can go into the technical details, but in short it is not possible. It would at least require an rclone-aware application that integrated into the OS and could intercept copying operations. Maybe that will be a thing eventually and would be very nice, but it doesn't exist now.
The next best thing for you may be to use the rclone webGUI, however. This has a GUI where you can copy files between remotes, and it supports server-side because it is aware of how rclone functions.
Or do it via commandline / script. These are basically the options you have as of now that allow for server-side to function.
If you do a "sync" command from A to B, then rclone will make B exactly identical to A.
It will do this in the way that requires the least amount of work and data movement, so syncing twice in a row for example would always result in the second sync doing nothing.
In other words - it will compare the file attributes of all the files between A and B (size, modtime - and hash if available), and then only transfer those files that do not already exist on B, or exist but are not the latest version (in case they were changed on A after uploading). Note that it will also delete any files on B that were not on A! So be careful when using sync, because if you misunderstand a command you are running, it could accidentally delete data you didn't mean to.
You can use the flag --dry-run to do a "simulation run" of what would happen first, if you want to be sure you are doing what you intended.
copy does the same as sync, but does not delete any files on B
move does the same as copy, but deletes the files from A after copying.
It is not uncommon for people to confuse the sync and copy functionality, so make sure you don't make that mistake.
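A minimal way to preview each of these before running them for real (the local path and remote name are placeholders):

```shell
# sync: make destination identical to source (can delete on the destination!)
rclone sync localfolder gdrive:MainFolder --dry-run

# copy: same, but never deletes anything on the destination
rclone copy localfolder gdrive:MainFolder --dry-run

# move: like copy, but deletes the source files after transferring
rclone move localfolder gdrive:MainFolder --dry-run
```

Drop --dry-run once the preview shows exactly what you expect.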
I understood the difference perfectly, thanks. My question was a little more specific, but my example maybe wasn't one of the best. I'll try to write it again in another way.
I have a local folder with inside only a folder with a file "localfolder/folder/text.txt"
Daily I do rclone sync "localfolder:" "remotefolder:"
So my remote folder becomes "remotefolder/folder/text.txt"
What happens if I manually cut the file into the "upper folder", so my local folder becomes "localfolder/text.txt"
And then I do rclone sync "localfolder:" "remotefolder:" ?
1) rclone deletes "remotefolder/folder/text.txt" and then re-uploads text.txt, so the result will be "remotefolder/text.txt"
2) or rclone recognizes that the file is the same, so it will "server-side move" text.txt to the upper folder, and the result will be "remotefolder/text.txt" without needing to re-upload any file?