How to "automatize" rclone?

I tried to use --log-file but I can't find the log file...

you have to specify the location of the log file

you should read these pages and learn about the many flags you can use.
https://rclone.org/docs/#log-file-file
and
https://rclone.org/flags/

and use --dry-run when testing
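for example, something like this (the paths and remote name here are just placeholders, not your actual setup):

rclone sync C:\Music MyRemote:Music --dry-run --log-file C:\rclone\rclone.log --log-level INFO

that will write everything rclone would have done into the log file without actually changing anything.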

Well,
now it works, I don't really know why. Here is the log:

2020/02/05 22:16:33 NOTICE: 320kbps/Diablo Swing Orchestra/Pandora's Piñata: Duplicate directory found in destination - ignoring
2020/02/05 22:17:06 NOTICE: 320kbps/Diablo Swing Orchestra/Pandora's Piñata: Duplicate directory found in destination - ignoring
2020/02/05 22:22:39 NOTICE: 320kbps/Diablo Swing Orchestra/Pandora's Piñata: Duplicate directory found in destination - ignoring
2020/02/05 22:43:10 NOTICE: 320kbps/Diablo Swing Orchestra/Pandora's Piñata: Duplicate directory found in destination - ignoring
2020/02/05 22:43:58 NOTICE: Gameplay/Gameplay Encodificati/Borderlands 2/2019-05-03 18-27-12.m4v: Duplicate object found in destination - ignoring
2020/02/05 22:43:58 NOTICE: Gameplay/Gameplay Encodificati/Sea of Thieves/2019-02-14 23-03-32.m4v: Duplicate object found in destination - ignoring
2020/02/05 22:43:58 NOTICE: Gameplay/Gameplay Encodificati/Sea of Thieves/OmG cHeAtErZzzZZ 30fps.m4v: Duplicate object found in destination - ignoring
2020/02/05 22:43:58 NOTICE: Gameplay/Gameplay Encodificati/Sea of Thieves/Sot-2.m4v: Duplicate object found in destination - ignoring

Gonna delete those duplicates somehow... dedupe?

sure, dedupe, but test with --dry-run
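for example (the remote and path are placeholders, and pick the --dedupe-mode you actually want):

rclone dedupe --dedupe-mode newest MyRemote:320kbps --dry-run

with --dry-run it only reports what it would rename or delete, so read the output before running it for real.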

Why should I try with --dry-run? Isn't md5sum very reliable?

md5sum does not protect YOU from making mistakes with YOUR rclone command.
read this.


"I know I should have use the --dry-run option. I usually do, but today I'm not being smart."

Yes, understood. Just to get a confirmation: md5sum is reliable, right? I mean, if the dedupe command detects and deletes duplicates, they are actually duplicates and there are no errors, right?

i have not used the dedupe command.

i know others have used it and i never heard about any problems, except human error.
just run it with --dry-run, read the log and be sure rclone will do what you expected.

Understood, thanks
I have other questions, I don't know if I should open another topic or continue asking here...
For example, I want to encrypt all the files in a specific remote folder. I have already seen this
https://rclone.org/crypt/
But I don't understand how to encrypt the files in those folders / how to upload files to those folders while encrypting them...
Also, is there a way to mount an encrypted remote folder but have it decrypted in the local mount?

as i understand crypt,
first, you have to create a new remote and then copy/move files into the new remote.

yes, you can mount an encrypted remote
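as a rough sketch, with example remote names that are not from your config:

rclone copy C:\MyFiles Gcrypt:backup -P --dry-run
rclone mount Gcrypt: X:

the copy encrypts the files on the way up, the mount shows them to you decrypted.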

Both dedupe and MD5 are reliable. While MD5 may not be the MOST robust checksum from a technical perspective, that is irrelevant in a private collection where the files number in the millions, not hundreds of billions. It will, for all practical purposes, not fail you in this context.

--dry-run is still a very useful tool regardless.
Mostly because even the most intelligent humans can make mistakes, and a sync command especially - if formatted badly (against your intentions) - can delete data you did not intend to delete.

If that happens then the only recourse you have is to un-delete that data from the "trashbin" that some (most) cloud providers have. The problem is that you may not notice until it is too late (usually 15-30 days later). That works, but it can be a major hassle.

Therefore, any command that includes the potential for deletion of data (move and sync) should be thoroughly tested and preferably left to a script after you ensure it is working as intended. Being sloppy and/or using them manually can too easily lead to manual mistakes that make you go "EEEEK! I didn't really mean to do that!".
Just because rclone does exactly as you told it to does not mean YOU did not make a mistake :slight_smile:
That is mostly what --dry-run is made for. Use it for a test-run of any command you are not 100% comfortable with, even after you have double-checked it.
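Once a command has passed its --dry-run test you can put it in a small script and schedule it - that is also the usual way to "automate" rclone. A minimal Windows batch sketch (every name and path here is an assumption, adjust to your own setup):

@echo off
rem sync the local music folder to the cloud and keep a log of what happened
rclone sync "D:\Music" "MyRemote:Music" --log-file "C:\rclone\sync.log" --log-level INFO

You can then run that .bat from Windows Task Scheduler at whatever interval you like.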

That is how crypt remotes work by default.
When you use (or mount) a crypt remote it will always:

  • Encrypt all files being uploaded
  • Decrypt all files being downloaded

This also extends to filenames and folder-names, so even though the files are stored with indecipherable garbage-names, they will appear perfectly normal when you view them through the crypt remote.

The idea is that you set up a crypt remote and then it will take care of all the encryption/decryption automatically without you having to worry about it.

The only thing you really need to worry about is to NOT mix encrypted and unencrypted files (as that might cause them to not be visible at all).
You should definitely have a specific folder dedicated to all your encrypted files.
For example Gdrive:/Crypt

You can then have 2 remotes. One Gdrive: (for unencrypted files) and one Gcrypt: (Gdrive:/Crypt) that contains only encrypted files. I wholeheartedly recommend that you do not mix the two. It is possible - from a technical perspective, but it could be very confusing to anyone but an expert.
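For illustration, that pair of remotes could look roughly like this in rclone.conf (the names, the scope and the redacted passwords are all just examples, not your actual config):

[Gdrive]
type = drive
scope = drive

[Gcrypt]
type = crypt
remote = Gdrive:Crypt
filename_encryption = standard
directory_name_encryption = true
password = *** redacted ***
password2 = *** redacted ***

Anything you copy to Gcrypt: then ends up encrypted inside the Crypt folder on Gdrive:.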


Understood thanks :slight_smile:

Well, I mean,
I have a folder named "RRROBA - ccsf" and I want to encrypt all the files that I put into it. So I created a new remote, named it "RRROBA - ccsf:", and it works.
Now I want to mount this folder, and I managed to do it.


As you can see, there is only one file and it is encrypted. Is there a way to see the files as decrypted while mounted? Or another way to browse those files without having to download them?

you need to mount a remote of type crypt

that file looks encrypted but you have mounted the wrong remote.

can you do rclone config and post the name of the remotes?

can you share your rclone mount command?
can you share your rclone copy command that copied the file to the crypted remote?

Yea... I mean... assuming that this mount is a CRYPT remote it will encrypt anything you drag&drop to it. (or copy to it in any other way).

It will still LOOK normal via the mount because the name/contents are decrypted for you automatically.
But if you go to the webGUI of the service you use and look at those files (for example the Google Drive webpage) then this file will look like garbage (encrypted).

Once a crypt remote has been set up you can use it as if it was not encrypted - but it is, in the background. You just won't see it via the mount because all that is automatic.
To see the "real" files you have to look at the files outside of the crypt remote.

Does that make sense - or do you have followup questions? :slight_smile:

If in doubt - just post your configs from rclone.conf and REDACT any clientID, client secret and crypt password/salt.

Oops, understood, now it works :slight_smile:

Sure thanks!

Btw this is the copy command

rclone copy G:\Propix.txt Crypt: -P -v

And the mount command is

rclone mount Crypt: J:

I have not understood when I have to enable the cache and why...
Also, I have seen some settings about "file chunks" or similar but... well, I don't understand, sorry

about the cache, it is optional and depends on what you want to do.

if you want to copy files, you do not need cache.
you can use windows explorer, second copy, fastcopy, robocopy, double commander and many other programs without cache.

for more details read https://rclone.org/commands/rclone_mount/ and then ask your questions.

The cache mode most importantly determines if you can open files in read/write mode. (as opposed to read-only or write-only modes).
For example - if you want to open a Word document on Google Drive (crypted or not) and make some changes to it - it will depend on your cache mode. With no cache you will have to download the file, then make changes and then upload it again. That works - but it is a hassle...
With no cache (default) you are basically limited to uploading and downloading files. No direct modifications allowed.

With --vfs-cache-mode writes you can modify files normally, like it was a normal harddrive. (in this case, changed files will be temporarily written to the harddrive - then uploaded automatically). "writes" mode is fully compatible with all functions the OS expects a disk to be able to perform. Lower modes are not.
I do not recommend "full" cache mode however - for various reasons. Do not make the mistake of thinking this is "better". Avoid this unless you have a very good reason to use it.

To sum up - use --vfs-cache-mode writes unless you know you need something different, because this is the best 100% compatible mode.
You can control where the temporary files go with --cache-dir "C:\VFScache" or an equivalent path. (the quote signs are required).
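Put together with the mount command you posted earlier, it might look like this (the cache folder location is just an example):

rclone mount Crypt: J: --vfs-cache-mode writes --cache-dir "C:\VFScache"

Files you edit on J: are then staged in C:\VFScache and uploaded automatically in the background.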

As for chunk-sizes...

There are two different uses of this.
One is in the "caching backend". This is a special remote for caching. I recommend not using it unless you know you need to.

The second use-case is for uploading (not downloading) on Google Drive (and some other remotes).
This determines how large a block of data is transferred at a time.
The default is 8MB. This is quite low unless you have a slow upload (for example DSL connection).
That basically makes the connection reset every 8MB, and on a faster upload it never quite gets to reach its full speed because of how TCP inherently behaves (it needs a few seconds to "ramp up" the speed).
I generally recommend 64M or 128M if you have a faster upload speed because this will greatly increase the total upload speed. The higher the upload speed, the higher the chunk size should ideally be - but above 128M/256M there is little to no benefit.
The only downside to this is that, for example, 64M will use up to 64M of RAM during the upload (per connection - default 4).

You can set this by using the following setting in rclone.conf
chunk_size = 64M
You can insert this anywhere in the "block" of settings for that remote.

--drive-chunk-size 64M will do the same thing as a command-flag if you prefer that.
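For example (the path and remote name are placeholders):

rclone copy "D:\Videos" MyGdrive:Videos --drive-chunk-size 64M -P

Keep in mind --drive-chunk-size only applies to Google Drive remotes - other backends have their own equivalent chunk-size options.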

Yes sure, I always read the documentation, then I ask if I don't understand something...

Ok thanks! I understood :slight_smile:

Understood, I have 20 Mbps upload but if I play/do other things I limit it to 5 Mbps when I need to. So should I go with 64M?

Another question. I have the "Crypt" remote that I use to store and encrypt my files to "RRROBA - ccsf". I want to server-side copy it to another folder (RRROBA - swccd) to do a backup of it. How should I do this?
I mean.
I have "Crypt" that is "RRROBA - ccsf", so if I do
rclone mount "Crypt:" X:
In "X:" I can see the decrypted files
Maybe I have to make "Crypt2" that is "RRROBA - swccd", then do
rclone copy "Crypt2:" "RRROBA - swccd:"
And then
rclone mount "Crypt2:" X:
To access the files?

EDIT: Another question

Can I use the same folder for multiple mounts?
I mean, can I do

rclone mount folder1: x: --cache-dir "C:\VFScache"

rclone mount folder2: y: --cache-dir "C:\VFScache"

rclone mount folder3: z: --cache-dir "C:\VFScache"