GDrive vs Crypt Troubles


I have rclone installed on my Unraid server and I've created two remotes:

google = Google Drive
secure = Crypt Google Drive

I've had success copying files from my server to the google remote, but I'm totally stumped trying to copy to the secure one.

I've tested both of the following commands...

rclone --verbose copy "/mnt/user/Arcade" secure/"Arcade"
rclone --verbose copy "/mnt/user/Arcade" google:secure/"Arcade"

When I view the remotes via Krusader, I can see that the files are being created within google/secure/encryptedfilenames

However, when I click directly on the secure remote, it is empty. I'm totally confused, clearly... any help would be greatly appreciated

(I'm a newb, so please let me know if I am not explaining this clearly)

It sounds like you are just copying unencrypted files to a folder that you are looking at through a crypt remote.

In that case, rclone will try to decrypt files that aren't encrypted in the first place. This will fail miserably of course, and rclone will not even try to show you the garbled data - and will instead omit these files. If you check your log it will no doubt be full of errors indicating this. ERROR - could not decrypt blah blah, invalid base32 at byte something something...

Solution: copy the files to a crypt remote that points to the encrypted folder. Then the files will be encrypted when they are copied - and they will be viewable since they no longer fail to decrypt. If you need help on how to set up the crypt remote, just let me know.

Today's lesson - always keep your encrypted and unencrypted files in separate folders :slight_smile:


Thanks for the thorough reply. I'm still a bit confused as I acclimate to the rclone language. If you'd elaborate with some instructions, I'd really appreciate it.

My goal is an unencrypted gdrive remote and an encrypted one. nothing more

If you are confused about what I said, I suggest you just show me the content of your rclone.conf file.
If you do not know where to find it, run
rclone config file
and it will tell you where.

!!warning!! - rclone.conf may contain some sensitive information, such as clientID, clientSecret, token and crypt keys. You should (for your own security/privacy) [REDACT] these lines so you can safely show the rest to me.

In order to preserve the format, you can select the text and hit the "preformatted text" button when you post it.

Then I can see your current setup - and make any appropriate changes for you if needed. It eliminates a whole lot of guessing on my part.

[google]
type = drive
scope = drive
token = REDACTED
root_folder_id = REDACTED

[secure]
type = crypt
remote = google:secure
filename_encryption = standard
directory_name_encryption = true
password = REDACTED
password2 = REDACTED
I am fairly sure this is incorrect and needs to be
remote = google:/secure
(meaning - a folder named "secure" inside my google drive's root directory)

google:bucket/folder syntax does exist, but only for bucket-based remotes, and Gdrive is not bucket-based, so it needs to be a normal path structure.

Try making that change to the config file and save, and see if that helps you.
The command you should use to copy afterwards should then be:
rclone --verbose -P copy "/mnt/user/Arcade" secure:/Arcade

I made these changes:

  • Added -P for you so you get a display of the transfer
  • Removed the quote marks around your Arcade folder
  • Corrected the destination to use your secure crypt instead of google

You get a big shiny gold star for today, sir! Thank you so much. That seems to be working. Now in my GDrive root directory I just have /secure, which contains gibberish when viewed directly and my decrypted files when viewed via Krusader.

Can you just confirm that that is what is supposed to be happening?


Is this an acceptable script for a large upload?

rclone --transfers=32 --checkers=16 --drive-chunk-size=128M --max-backlog 999999 --bwlimit 8M --verbose -P copy "/mnt/user/Arcade" secure:/Arcade

UPDATE: Likely have an issue in my script above. Getting tons of errors and it seems to keep trying the same files over and over?

That sounds like what was intended, yes. Is this suitable for you, or would you prefer it structured in another way?

No, it is not. Way too many transfers for gdrive. Try 4 (which is the default) or maybe 5 at most. Leave checkers at 8 (default).

gdrive has a hard limit on file accesses per second, which is 2-3 files/sec.
That means you can only START that many transfers per second though. Many more could run at the same time, but they could not all be initiated at the same time. Realistically about 4-5 is what it can handle. Going above will just end up waiting for gdrive to allow it. Going WAY above will just diminish performance as the API will be overloaded with requests it can't execute anyway.

Otherwise, it seems ok
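To picture why extra --transfers just sit idle, here is a rough token-bucket sketch. This is purely my own illustration with made-up numbers (rate, burst), not Google's or rclone's actual code:

```python
# Illustrative token-bucket limiter: at most `rate` new transfer starts per
# second, with a small burst allowance. Numbers here are made up for the demo.
import time

class StartRateLimiter:
    def __init__(self, rate=2.5, burst=5):
        self.rate = rate              # tokens replenished per second (~2-3/sec)
        self.burst = burst            # maximum stored tokens
        self.tokens = burst
        self.last = time.monotonic()

    def try_start(self):
        now = time.monotonic()
        # Refill tokens for the time elapsed, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True               # this transfer may begin now
        return False                  # must wait; extra --transfers queue here

limiter = StartRateLimiter()
started = sum(limiter.try_start() for _ in range(32))
print(started)  # only the small burst starts immediately; the other 27 wait
```

Asking for 32 transfers does not make the bucket refill faster - the extra workers just block on the limiter, which is why going way above ~4-5 gains nothing.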


Thanks, I'm definitely making progress now! So my /Arcade local folder has two large subfolders

  • Games #1
  • Games #2

If I try and upload them one at a time, instead of just the root folder, I'm back to my old problem. Are you seeing any issues with the formatting below?

rclone --transfers=4 --checkers=8 --max-backlog 999999 --bwlimit 8M --verbose -P copy "/mnt/user/Arcade/Games #1/" "secure:/Arcade/Games #1"

What's the correct way to copy a nested local folder, but maintain the subfolder hierarchy?

No, I don't see anything obviously wrong here. I might miss a small typo or something, but you seem to understand the syntax. What error do you get from this?

That is the way rclone works by default.
Let's say you have a folder /work, and inside that two folders with files:

/work/client
/work/project

If you rclone copy /work then you will also be copying "client", "project" and all subfolders and all files in those folders. This is exactly how copying a folder in any OS works, so you should be familiar with it.

The folder structure is preserved. The subfolders and files on the destination will mirror whatever the structure on the source was.


Quick sidebar question... I'm not seeing any checks running with the command below. Am I missing something?

rclone --transfers=4 --checkers=8 --bwlimit 8M --verbose -P copy /mnt/user/Arcade secure:/Arcade

Regarding paths, my question is really about adding folders selectively after you've created the root directory. For example, I sync my entire local /Arcade folder and all its subdirectories. Then, later, I want to copy a totally unrelated folder /BananaPancakes into /Arcade, resulting in /Arcade/BananaPancakes

What's that command look like?

When there are both transfers and checks happening, and -P and -v are enabled, only the transfers are printed out as their own lines, I think (probably just to save on the spam). If you look at the -P status there should be a "checked" line there that counts upwards (assuming that there are any files to check on the destination, of course). If you are transferring only files whose names don't match anything on the destination, there won't be any checks to do before transfer.

So I'm pretty sure you are just misunderstanding it a little :slight_smile:

That is pretty trivial. Just something like this:
rclone --transfers=4 --checkers=8 --bwlimit 8M --verbose -P copy /mnt/user/Arcade/BananaPancakes secure:/Arcade/BananaPancakes

But you don't have to copy it from that specific location. The source can be anywhere, like...
rclone --transfers=4 --checkers=8 --bwlimit 8M --verbose -P copy /home/snacks/BananaPancakes secure:/Arcade/BananaPancakes

Neither do they need to have the same name...
rclone --transfers=4 --checkers=8 --bwlimit 8M --verbose -P copy /home/snacks/BananaPancakes secure:/Arcade/MyFavoriteDeserts

So just in general, these paths behave exactly like any normal path on your system. There is nothing special about how they are used or behave, except that remotes start with remotename:


UPDATE: OK, just stopped and restarted the transfer. Now I'm seeing checks, so I get what you're saying. Checks occur when files already exist. Do I need to do anything regarding CRC checks or transfer verification? Or will it automatically retransmit data if there is a transfer error? I think I incorrectly assumed I needed to tell it to validate data.

So I'm not seeing any checks. Checks = 0.

I think the paths stuff is starting to make sense. I just got thrown off by that original error that caused havoc with my original commands (that, plus my lack of understanding regarding the formatting of remote:/).

Well, checks will only happen if there are already files by the same name as those you are moving there.

This is what happens under the hood:
(1) rclone makes a list of all the local files
(2) rclone asks the remote for a list of all the files in the destination folder you designated
(3) rclone compares these two lists. Are there any overlapping names in the same places? If yes, then we have to check them to see if we should skip, only edit attributes, or upload. This could be needed for 0 files, or all the files, depending on what is in the destination folder from before.
(4) rclone now knows what to do for each file - and it executes the plan in the most efficient way (not overwriting a file with an identical file for example, as that would be a waste of time and resources).
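The steps above can be sketched roughly like this. This is a simplified illustration of the compare-and-plan idea (my own toy code, not rclone internals), with files modeled as name -> (size, modtime):

```python
# Toy version of the list/compare/plan step: decide per file whether to
# transfer, skip, or update, before doing any actual work.
def plan_sync(source, dest):
    actions = {}
    for name, (size, mtime) in source.items():
        if name not in dest:
            actions[name] = "transfer"   # new file: upload, nothing to check
        elif dest[name] == (size, mtime):
            actions[name] = "skip"       # identical by size+modtime: skip
        else:
            actions[name] = "update"     # differs: re-upload (or fix attrs)
    return actions

source = {"a.rom": (100, 1), "b.rom": (200, 2)}
print(plan_sync(source, {}))             # empty destination: all transfers
print(plan_sync(source, dict(source)))   # second run: all skips, finishes fast
```

The second call is the "repeat the same transfer" experiment below: every file matches, so the plan is all skips and rclone finishes almost instantly.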

Try this experiment:

  • First upload some files to a new empty folder. There should be 0 checks, but several transfers
  • Then, repeat the exact same transfer again. This time there will be several checks, but no transfers (because all the files were already there, so rclone just skipped them all after comparing). This second time you will notice rclone finishes very fast...

As you might imagine, this means you can cancel any transfer at any time and just run the same command again, and rclone will figure it out and resume where it left off... you never need to wait for a whole operation to finish if you don't want to. Nothing will break and very little progress will be lost.

rclone will check stuff like this (size and modtime) automatically. It won't count a file as transferred until it is sure it has arrived healthy and whole. It will also usually check the checksum (which is the most accurate) if it is a "free" operation (if both the source and destination have precalculated hashes; your local system almost certainly does not).

You can force a checksum check if you add the flag --checksum. Your local system can use the CPU to calculate this on the fly even if it does not have a filesystem that stores checksum data. Doing this always requires all files to be read fully, so it may require the harddrive to work a bit more.

Feel free to use --checksum if you are a little paranoid about it, but it is not really necessary. First of all size+modtime is already pretty accurate. Secondly there are several layers of error protection at work at the transport-layer (TCP) and protocol layer (HTTP). It will be very rare for all these to not be able to detect an error.
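To make the cost difference concrete, here is a small sketch of my own (not rclone code): the size+modtime comparison touches only file metadata, while a checksum has to read every byte of the file.

```python
# size+modtime comparison vs checksum: the first is a metadata lookup,
# the second must stream the entire file through a hash function.
import hashlib
import os
import tempfile

def cheap_compare(path, expected_size, expected_mtime):
    st = os.stat(path)                      # metadata only, no file reads
    return st.st_size == expected_size and int(st.st_mtime) == expected_mtime

def md5_of(path, chunk_size=1 << 20):
    h = hashlib.md5()
    with open(path, "rb") as f:             # must read the whole file
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")
    path = f.name
print(md5_of(path))   # md5 of b"hello": 5d41402abc4b2a76b9719d911017c592
os.remove(path)
```

For a 10TB library that read cost adds up, which is why --checksum is optional rather than the default comparison.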

rclone does this by default. If errors happen - there are multiple layers of safety that can detect it - and if that happens rclone will retransmit the data. How much progress you lose and need to re-do depends on the upload-chunk size. Unless you are on a very unstable connection this is something you don't have to worry or adjust because transmit errors of one sort or another will happen very infrequently.

somewhat unrelated sidenote:
If you want speed however, I certainly recommend upping the upload chunk size from 8M to 64M as this can give you a pretty massive boost in bandwidth utilization (for files larger than 8MB):

--drive-chunk-size 64M (can alternatively be set in the config using a slightly different format if you prefer)

Do be aware that 64M chunks mean you can potentially use (64MB x numberOfTransfers) megabytes of RAM though. For example 256MB on 4 transfers. Just make sure you don't run out of RAM or rclone will crash.
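The RAM estimate above is just simple arithmetic, worst case assuming every transfer buffers a full chunk at once:

```python
# Worst-case RAM for upload buffers: chunk size times concurrent transfers.
chunk_size_mb = 64
transfers = 4
ram_mb = chunk_size_mb * transfers
print(ram_mb)  # 256 (MB), matching the 256MB figure above
```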


My sincerest thanks. Your responses have been very helpful!

You are very welcome, and welcome back to the forum!
You can show appreciation using the <3 button when you feel like spreading the love! :stuck_out_tongue:


Lol, that's a lot of love. Thanks. Feel all fuzzy on the inside now :wink:

So my 10TB transfer just finished. Any issues here -- I see one error that seems to have sorted itself out -- or green light to delete the local copy?

Transferred: 0 / 0 Bytes, -, 0 Bytes/s, ETA -
Checks: 0 / 1780817, 100%
Elapsed time: 7.1s
Transferred: 0 / 0 Bytes, -, 0 Bytes/s, ETA -
Checks: 0 / 1780827, 100%
Elapsed time: 7.1s
Transferred: 0 / 0 Bytes, -, 0 Bytes/s, ETA -
Checks: 0 / 1780828, 100%
Elapsed time: 7.1s
Transferred: 0 / 0 Bytes, -, 0 Bytes/s, ETA -
Checks: 0 / 1780828, 100%
Elapsed time: 7.1s
Transferred: 0 / 0 Bytes, -, 0 Bytes/s, ETA -
Checks: 0 / 1780828, 100%
Elapsed time: 7.1s
Transferred: 0 / 0 Bytes, -, 0 Bytes/s, ETA -
Checks: 0 / 1780839, 100%
Elapsed time: 7.1s
Transferred: 0 / 0 Bytes, -, 0 Bytes/s, ETA -
Checks: 0 / 1780839, 100%
Elapsed time: 7.1s
Transferred: 0 / 0 Bytes, -, 0 Bytes/s, ETA -
Checks: 0 / 1780839, 100%
Elapsed time: 7.1s
2019-11-27 17:27:32 INFO : Encrypted drive 'secure:/Arcade': Waiting for checks to finish
Transferred: 0 / 0 Bytes, -, 0 Bytes/s, ETA -
Checks: 0 / 1780842, 100%
Elapsed time: 7.1s
2019-11-27 17:27:32 INFO : Encrypted drive 'secure:/Arcade': Waiting for transfers to finish
Transferred: 0 / 0 Bytes, -, 0 Bytes/s, ETA -
Checks: 0 / 1780842, 100%
Elapsed time: 7.1s
2019-11-27 17:27:32 ERROR : Attempt 3/3 succeeded
Transferred: 0 / 0 Bytes, -, 0 Bytes/s, ETA -
Checks: 0 / 1780842, 100%
Elapsed time: 7.1s
Transferred: 0 / 0 Bytes, -, 0 Bytes/s, ETA -
Checks: 0 / 1780842, 100%
Elapsed time: 7.1s
2019/11/27 17:27:32 INFO :
Transferred: 0 / 0 Bytes, -, 0 Bytes/s, ETA -
Checks: 0 / 1780842, 100%
Elapsed time: 7.1s

Script Finished Wed, 27 Nov 2019 17:27:32 -0500

That error was retried and you got no errors in the end so all good!

You can always try rclone check to make sure everything is uploaded properly.


Yup - no errors here... those will pop up very clearly as "ERROR: blah blah blah".

Do note however that not all errors are critical errors. Some are kind of expected to happen - like the occasional re-transfer of a file, or the occasional 403 error due to making too many requests of the API too fast. These are not problems unless you see a lot of them and they start to go into 4/10 or 5/10 retries or above. 1/10 and 2/10 retries will just inevitably happen sometimes without anything being "wrong" in your setup.

So in short - don't enable debug-log and stare yourself blind trying to remove every error. Ask if you see any particularly prevalent ones or have an error description you are worried about.