Crypt & Cache - Mount while copying (Windows 10)

I have less than 24 hours of rclone experience, so I’m still trying to understand what is going on, but I’m having issues trying to mount and copy at the same time in separate command prompts on Windows 10.

I have a PMS (Plex Media Server) which has been using external HDDs, but I realized yesterday that I have an alumni account with Google Drive, which offers unlimited storage. I have installed WinFSP and rclone, and I set up gdrive in rclone, followed by gcache, and finally gcrypt. With these items completed I have been able to successfully copy files from my local external HDDs to Google Drive, and they are indeed encrypted. PMS is also able to read these files when rclone has gcrypt mounted. I followed this page on how to do all of the above tasks: https://bytesized-hosting.com/pages/rclone-gdrive
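
For reference, my config ended up looking roughly like this - remote names are from the tutorial, the cache wraps the drive remote, and the crypt wraps a /crypt folder inside the cache. The cache option values are just what I believe the tutorial defaults are, and the IDs, token and passwords are redacted:

[gdrive]
type = drive
client_id = clientid
client_secret = secret
token = {redacted}

[gcache]
type = cache
remote = gdrive:
chunk_size = 5M
info_age = 1d
chunk_total_size = 10G

[gcrypt]
type = crypt
remote = gcache:/crypt
filename_encryption = standard
directory_name_encryption = true
password = password
password2 = password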

The issue I’m having is that I cannot open a command prompt and run “rclone mount --allow-other --allow-non-empty gcrypt: Q: &”, then open a new command prompt and run “rclone copy --verbose F:\ gcrypt:”, without getting errors that prevent the second command from running. Thus I cannot have PMS read the mounted drive and copy files to Google Drive at the same time; I assume this should be possible judging by everything else I’ve read. Can anyone point me to my issue?

Also, once gcache is set up, does all of the caching just happen, or are there flags you need to pass to the mount command to actually have things cached when streaming from Google Drive to PMS?

See below for the errors I get when trying to mount and then copy, or copy then mount (they are about the same):

C:\Users\Rob>rclone mount --allow-non-empty --allow-other --read-only gcrypt: Q: &

2019/04/06 13:15:17 bolt.Close(): funlock error: The handle is invalid.
2019/04/06 13:15:17 ERROR : C:\Users\Rob\AppData\Local\rclone\cache-backend\gcache.db: Error opening storage cache. Is there another rclone running on the same remote? failed to open a cache connection to "C:\Users\Rob\AppData\Local\rclone\cache-backend\gcache.db": timeout
2019/04/06 13:15:18 bolt.Close(): funlock error: The handle is invalid.
2019/04/06 13:15:18 ERROR : C:\Users\Rob\AppData\Local\rclone\cache-backend\gcache.db: Error opening storage cache. Is there another rclone running on the same remote? failed to open a cache connection to "C:\Users\Rob\AppData\Local\rclone\cache-backend\gcache.db": timeout
2019/04/06 13:15:18 Failed to create file system for "gcrypt:": failed to make remote gcache:"/crypt" to wrap: failed to start cache db: failed to open a cache connection to "C:\Users\Rob\AppData\Local\rclone\cache-backend\gcache.db": timeout

--allow-non-empty is bad as it allows you to overmount things. I’d never use it personally.

The error means you already have an rclone process going, as the cache backend’s cache.db only allows one process at a time.

If you truly want to use the cache, you’d need to make another remote for the copy. I wouldn’t use cache at all.

Don’t I need cache to avoid API bans?

No, you do not need cache.

Neither for upstream nor downstream to Plex? It was my understanding that Plex will generate too many API requests without some caching, which results in a 24-hour ban. So you’d remove the gcache config altogether and go straight to gdrive > gcrypt?

Yes, you go GD straight to crypt. Cache works because it uses chunked reading, and standard rclone has had chunked reading since the middle of 2018.
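
To be concrete, that chunked reading is controlled by the --vfs-read-chunk-size flags on a plain mount; something like this is a reasonable starting point (the values here are just an example, not required):

rclone mount --read-only --vfs-read-chunk-size 32M --vfs-read-chunk-size-limit off gcrypt: Q: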

Assuming I used the tutorial linked above, how can I run rclone copy F:\ gcrypt: and bypass the cache, since gcrypt is built on top of gcache, if I understand correctly? Which I probably don’t.

No, the tutorial goes from GD -> Cache -> Crypt.

You want to make a new remote that goes directly to your GD.

Don’t I want to create a new one that is gdrive > gcrypt so the files still get encrypted going to Drive? Then I’ll point Plex to the gdrive > gcache > gcrypt chain?

I don’t use cache in my config at all.

I use a straight GD -> Crypt:

[GD]
type = drive
client_id = clientid
client_secret = secret
token = {"access_token":"token","token_type":"Bearer","refresh_token":"1token","expiry":"2019-04-06T17:16:30.34393804-04:00"}

[gcrypt]
type = crypt
remote = GD:media
filename_encryption = standard
password = password
password2 = password
directory_name_encryption = true

I have a folder in the root of my GD and all my media is encrypted.
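
With that layout, uploading and playback are just a plain copy and a plain mount against the crypt remote - something along these lines, with your own source path and drive letter:

rclone copy --verbose F:\ gcrypt:
rclone mount --read-only gcrypt: Q: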

I had previously seen some other posts of yours where you talked about using vfs though. Are you not even using that cache to avoid the API bans?

I’ve stated multiple times I don’t use cache and shared my config.

You don’t get banned assuming you are using a version later than mid 2018.

Okay great, let me give it a try. I really appreciate your help. I’ll post back after I test it.

Make sure to make your own Client ID/API key if you haven’t:

https://rclone.org/drive/#making-your-own-client-id

I’ll have to do that. One more question for now… I’ve already copied to a cached, encrypted folder in Google Drive (so in Google Drive there is a folder gdrive, and inside that, crypt). Now that I’m editing my configuration, can I still access those files from rclone?

I used the edited version of gcrypt where it is a child of gdrive instead of gcache now, so the password and hash didn’t change. I tried mounting it using mount gcrypt:gcrypt/ but that’s empty when I browse to it.

Looks like I figured out the answer to my question actually.

Yep. You can just use the same passwords for encryption with the new remote, or just copy the files again. Either way works.
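
In other words, a sketch like this - a new crypt remote pointed at the folder on the drive where the old uploads actually live, with the exact same password and password2, and the existing files will decrypt. Here gcrypt-old is a made-up name, and gdrive:gdrive/crypt is only my guess at the path from your description, so point it at wherever the old encrypted folder really is:

[gcrypt-old]
type = crypt
remote = gdrive:gdrive/crypt
filename_encryption = standard
directory_name_encryption = true
password = password
password2 = password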

Client ID/API key is now taken care of.

I was able to get two encrypted remotes set up, one with cache and one without. I’m uploading to the remote via the non-cached one, and then mounting the cached one. I keep getting the following type of errors about chunks not found; is there a fix for this?

2019/04/07 09:58:19 ERROR : 31nultn4rbp3he8ecr3tafsucc/ukes6dg2ec4v55vv5orr6ndnr6nru5qo2k443nd61lvjvjung3hg: (7145200/168782483) error (chunk not found 5242880) response
2019/04/07 10:08:11 ERROR : 31nultn4rbp3he8ecr3tafsucc/malrjn24b4n5cr8e3rlu8no3d0djn64bnt4e62cm09ndqe0ferng: (5242880/168736851) error (chunk not found 5242880) response
2019/04/07 10:08:12 ERROR : 31nultn4rbp3he8ecr3tafsucc/3quec0o9u8u5pgsqlur4jkklh6f7dpdje2j6t9pdp6c5cc0k9dug: (5242880/184786177) error (chunk not found 5242880) response
2019/04/07 10:08:14 ERROR : 31nultn4rbp3he8ecr3tafsucc/6hpso23ru9e4lmh3b9k37nttdsgom4c2nu6mdma0cb4enqveq2fg: (5242880/210232718) error (chunk not found 5242880) response
2019/04/07 10:08:16 ERROR : skmll53acrv0ovp2202482pnj8/c95chra1not7d2ui3a6do0l14ig3hc99qa976plec3v1dv41vt3os499vf04ib9pfrlkrotknj7pv0oqrc4t8dv0q3rudvn84kiluj87qmq7vim5mp7foj40srgruemr0l9mvpneifep5gmhg2f1qljhghtnm417kk6861n96s4aups95f0g: (10485760/1503728813) error (chunk not found 10485760) response
2019/04/07 10:08:21 ERROR : skmll53acrv0ovp2202482pnj8/1731p4dk6g4dtb9itr0o0qrmk2cc64bh5bhgf3elvn49d69tfuk56pi435jtjlccamjmr34fkn3t13djbgbcsj4di0pb0jsuvghkbjs5b7egngtcf22g5l9p3tmehpqp07p6ghvnlq7s8f5pq5ikpa6m1fbu36q0t9hgareg2pdk6jkn0g30: (10485760/1362155748) error (chunk not found 10485760) response
2019/04/07 10:08:23 ERROR : skmll53acrv0ovp2202482pnj8/d7g6mvdphl6883av1j81i51h2taeh2jem3fparkjliah6k23vo23d201h6cmhrpb1iaismlf2hlbr4u6rq30rcbktgacqdcn4becbg61kq8cc44arnm9kbc4ucgenojkffv7l55v9gghf090tje3ovo1n37nvg2n9ovpd7gphuprn7t6p8hg: (10485760/1884431265) error (chunk not found 10485760) response
2019/04/07 10:08:27 ERROR : skmll53acrv0ovp2202482pnj8/glqlfqk9cg9243fh311t5lfjmb764l0f84iikpkgggek2vbkrp9a1phg2tpr7bsbknoa5nk9l27j2fl6v24blqeo2of8il57gdv4h8t3t6m068e625997bea9les7fhbvnif8ujrrv6k7shmqhbhuqo2ktvsrv5tko35nvirmvcc3c9eqbr0: (10485760/1612183130) error (chunk not found 10485760) response
2019/04/07 10:08:29 ERROR : skmll53acrv0ovp2202482pnj8/9u813e9jjn71rptat7qj1cujbohsd0cqn2nf3ucjkvou97f1qt6l79i34uboah7v8r2021kr6rmbvu310fuj1hq65f7iift4n8fpemgce707ssk02onke2i2eq47perhsp15h0rhu83d6f40265fgcfinuic2mbnfijku50jtikn8n9ups7g: (10485760/1364607457) error (chunk not found 10485760) response

That means at some point you had some issues, or cleaned up the directory where the chunks were stored.

You can either kill -HUP the mount to recreate the cache, or you can run with --cache-db-purge.
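
For example, something like this on the mount of the cached remote (yourcachedcrypt: stands in for whatever that remote is actually called):

rclone mount --cache-db-purge yourcachedcrypt: Q: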

There is really no reason to use the cache mount though as I’d just mount the crypt directly if I were you.

I’m definitely going to give it a shot without the cache - I was still using the cache because it seemed like I was getting faster buffering, but I think that’s just because it was transcoding and I didn’t realize.

One more for you, this file has given me the same error twice now and I’m having issues copying it to the remote.

2019/04/07 11:45:23 ERROR : Sabrina/Chilling.Adventures.of.Sabrina.S01E01.Chapter.One.October.Country.1080p.NF.WEB-DL.DD5.1.x264-NTG.mkv: WriteFileHandle: Truncate: Can't change size without --vfs-cache-mode >= writes
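
From the message itself, my guess is the mount would need --vfs-cache-mode writes for anything that modifies files through the mounted drive, something like the line below - though I haven’t confirmed yet that this is the right fix here:

rclone mount --vfs-cache-mode writes gcrypt: Q: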