Error opening storage cache. Is there another rclone running on the same remote?

So, let me preface this by saying I really wish I had started my rclone journey with a Google team drive instead of a personal drive ("my drive").

I've been running rclone and mergerfs for a while now with awesome results on my hosted box. The time came for a team drive, and since I didn't want to move like 20TB of stuff over, I figured I'd just create a new mount and mergerfs everything together.

However, when the time came to fire everything up and start downloading to the new team drive, I got this error message: "Error opening storage cache. Is there another rclone running on the same remote?". From reading these boards, this is due to having two cache remotes on one box (I basically just duplicated the settings of my drive for the new team drive, cache remote and all), and you apparently cannot do this.

So I'm hoping that someone can help me maintain this juggling act of remotes, caches, and mergerfs mounts so that I don't have to move 20TB of files from one Google drive to another. Can I just delete the cache remote from the my drive remote? Will this still allow Plex to function while keeping the file structures? Or perhaps there's a better way that I am not aware of? Thanks in advance for the help!

Your ideal setup here is probably to Union (or mergerfs) your 2 drives and then have a single cache on top of both. That would be more efficient.
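As a rough sketch of what that would look like in the config (untested - remote names here are placeholders, and note that newer rclone versions rename the union "remotes" key to "upstreams"):

[AllDrives]
type = union
remotes = Gdrive: TD1: #### the two drive remotes to merge into one view

[AllDrivesCache]
type = cache
remote = AllDrives: #### a single shared cache layered on top of the union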

But the quickest, easiest fix would just be to pick another name and folder for your second cache. Then you'd run 2 separate caches - one for each drive. Depending on your use-case this might be perfectly fine.
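In config terms that just means a second cache remote with its own name and its own storage folders - something like this sketch (paths are examples; db_path is the config-file key for the --cache-db-path flag, and chunk_path for --cache-chunk-path):

[TD1Cache]
type = cache
remote = TD1:
db_path = /home/user/.cache/rclone/cache-backend/TD1 #### its own metadata database location...
chunk_path = /home/user/.cache/rclone/cache-backend/TD1 #### ...and its own chunk storage folder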

You just can't ask 2 rclone instances to access the same cache, as each needs an exclusive lock on it.

If you can post your config file (and edit out any sensitive passwords), then I can at least show you the quick & easy fix directly.

Might need to wait until tomorrow for the answer though, unless someone else helps before then - as it's getting very late where I am :slight_smile:

PS: You can server-side transfer files between a personal drive and a team drive, I'm fairly certain. You'd still be limited to 750GB/day uploaded, but you wouldn't have to rely on your own bandwidth to do the transfer. Tell me if this interests you and I will elaborate.

Hi @thestigma thanks much for helping me out! I've posted my config here for your review: https://paste.ee/p/i2I3R

Also I'd love to hear about the server side transfers as well - at least it might give me a chance to decide what's best going forward.

You don't seem to have been far off here, actually - since you already tried to use a separate cache for each. I think the main cause of the error was that you never specified a manual location for the caches, so the default location was used for both, and that probably made their database files overlap - confusing rclone into effectively treating them as one cache.

Here is a modified template with a few minor fixes and some suggestions. I have not tested this config so I may have made minor mistakes or syntax errors. I'll help you solve them if needed.
I have bolded the important bits and stuff that may require your attention or understanding.

Remove my <---- comments before use obviously. #### comments are ok to use in the config permanently as you please.


########### Standalone Gdrive ##########
[Gdrive]
type = drive
scope = drive
token = {"access_token":"ya29.ImGbB7ZOPzMoErFov3_MXQhUpaL1ryFcQsPB8EbL_2EmVenrDyGeU1XJc4Bwd8pxCN4q3zwCMemdKlOnW21uqOTKP7OD-lgz3NbbWQoxz","token_type":"Bearer","refresh_token":"1/goP_nSubc-vZk3jTFjDGmvKCtRlfVwGAmDjj8Vb5iCEa","expiry":"2019-10-10T04:34:58.912721994+02:00"}
client_secret = secret
client_id = client_id
chunk_size = 64M <--------- Not required, but better upload performance for some more RAM use
server_side_across_configs = true <--- enables server-side copying when possible

[GdriveCache]
type = cache
remote = Gdrive: <--- simplest possible for illustration, modify if desired (cache whole drive)
plex_url = https://plex.plexserver.com
plex_username = email@gmail.com
plex_password = password
chunk_size = 64M
info_age = 2d
chunk_total_size = 32G
chunk_path = $HOME/.cache/rclone/cache-backend/Gdrive <--- manual location for cache files

[GdriveCacheCrypt]
type = crypt
remote = GdriveCache:/Encrypted <--- for illustration, here assuming you want your encrypted files in a separate folder. If you are ONLY going to use encrypted files, it can just be GdriveCache:
filename_encryption = standard
directory_name_encryption = true
password = password
password2 = password2


######### Standalone Teamdrive #########
[TD1]
type = drive
client_id = client_id
client_secret = secret
scope = drive
token = {"access_token":"ya29.ImCbB-aHpIwxghAng7Rh0392GziWGnmCPN1tc_CSKyRleuPeudtN_M4Pna9DR6HlF5YQUGlKACe-Bkwt-2U9C8SPYC1wCzC0","token_type":"Bearer","refresh_token":"1//03nQsrf6IfRAAGAMSNwF-L9IruGmnC4B06nO4pm9ECgSwrHYdiU1zN47SYqxHs-YkghyiSmXNiJgGIk","expiry":"2019-10-10T04:38:58.322624269+02:00"}
team_drive = team_drive
chunk_size = 64M
server_side_across_configs = true

[TD1Cache]
type = cache
remote = TD1:
plex_url = https://plex.plexserver.com
plex_username = email@gmail.com
plex_password = password
chunk_size = 10M
info_age = 1d
chunk_total_size = 10G
chunk_path = $HOME/.cache/rclone/cache-backend/TD1

[TD1CacheCrypt]
type = crypt
remote = TD1Cache:/Encrypted
filename_encryption = standard
directory_name_encryption = true
password = password
password2 = password2

This is the simple setup with separate caches. You can then rclone union them together, or use the (currently more fully-featured) mergerFS available on Linux. Union is likely to be updated soon to bring it more in line with mergerFS's most important functionality, but not quite yet. Each solution has some benefits and caveats I can elaborate on if you ask.
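For reference, the mergerfs side of that is a one-liner along these lines (paths and options are illustrative, not a tested recommendation - this assumes you mount each crypt remote first):

mergerfs /mnt/GdriveCacheCrypt:/mnt/TD1CacheCrypt /mnt/media -o rw,use_ino,allow_other,category.create=ff

The category.create=ff ("first found") policy writes new files to the first branch listed, so order the branches according to which drive should receive new files.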

I could propose a shared-cache alternative instead, but aside from it being easier to administrate it won't have much benefit unless you are keeping a significant amount of the same data on both drives. Let me know if you want more info on that.

As far as server-side copying goes, I have enabled the required line in the config for you. This should let you simply copy files/folders from one remote to another without them going through your PC / bandwidth. I have personally only used it teamdrive-to-teamdrive, but since personal drives are based on the exact same underlying system, I expect it will work fine. The reason this option is not enabled by default is that it can't be guaranteed to work between all setups and configs - but at worst it shouldn't do anything bad or destructive, so you should at least test it. You use server-side copy/move/sync via the normal commands, and rclone will try to do it server-side when possible while this is enabled (with --verbose logging it should note these transfers as server-side at the end of the line).

rclone move Gdrive:/SourceFolder TD1:/DestinationFolder -P --fast-list

This example, for simplicity, doesn't reference Crypt or Cache.
You can transfer encrypted files just fine, but if you tried to decrypt them, transfer, and then re-encrypt them, that would force them to go via your PC (as that's where the encryption happens). This can add some complications you need to think about. It would be easy to do...

rclone sync Gdrive:/Encrypted TD1:/Encrypted

But referencing something specific inside the encrypted structure can be tricky, as you'd need to use its encrypted path and name to do that - and half the point of encryption is to obscure :stuck_out_tongue:
Encrypting the contents of files but not the names is an option though... or encrypting filenames but not folder names, which is a reasonable security compromise if you want to be able to do this specific thing more easily.
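As a sketch, the config for that last compromise would look something like this (names and passwords are placeholders):

[CryptReadableDirs]
type = crypt
remote = Gdrive:/Encrypted
filename_encryption = standard #### file names still encrypted
directory_name_encryption = false #### folder names left readable, so they are easy to reference server-side
password = password
password2 = password2

Setting filename_encryption = off (with directory_name_encryption = false) would instead encrypt only the file contents and leave all names readable.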

I will stop my rambling here and leave you to ask followup questions rather than cover every possible eventuality :smiley:

Amazing! Thanks very much for the detailed explanation! I've put your suggestions into play, and I'm now able to get this back to being automated and hands-off!!

Very happy to assist - and if there are any related topics you want more information about later on, just ask :slight_smile:

Ah, spoke too soon:

2019/10/11 22:45:05 ERROR : /home/td00/.cache/rclone/cache-backend/teamdrive1cache.db: Error opening storage cache. Is there another rclone running on the same remote? failed to open a cache connection to "/home/td00/.cache/rclone/cache-backend/teamdrive1cache.db": timeout
2019/10/11 22:45:05 Failed to create file system for "crypt3:/TV": failed to make remote teamdrive1cache:"/encrypted/k0qqeo0us69lbq2rv43mg5ugpg" to wrap: failed to start cache db: failed to open a cache connection to "/home/td00/.cache/rclone/cache-backend/teamdrive1cache.db": timeout

I'm assuming this is because it's using the same database, but I'm not too sure. I added this as a hail-mary attempt:

cache-db-path = /home/td00/.cache/rclone/cache-backend/GDrive.db

In the hope that it would create a new db file for the other cache remote to work from, but no - still getting that error, and still only one db.

I will help you fix this, but you have to tell me what you have added (or what else you are trying to make happen), because here you are using a "crypt3" which was not in my recommendation - so you have to explain how that factors in, and what the config for crypt3 is (redact the keys).

Is your intention to set up multiple folders that use different encryptions? I did notice you had a "crypt2" in the last config you showed me that seemed somewhat redundant - unless the two were using different encryption keys (which you had obviously removed).

If it is not a secret, the best thing may be to tell me the high-level goal you are trying to accomplish, and I can suggest the smartest way to get there.

The gist of it is: each instance of rclone (such as one mount) needs exclusive access to its cache. Any time you start a new rclone instance that uses a cache that is already in use, you will get this error. If you need to use multiple remotes for whatever reason, we might want a slightly more complex solution rather than just adding more and more separate caches, one per remote (although that WOULD work)...

Lastly - it would help a lot if you described the specific reason you are using the cache-backend system. I see a lot of users who use it "just because it's there", thinking it's simply better to use it than not. I did exactly the same thing when I started using rclone, but I eventually moved away from it, as I found the benefits weren't quite good enough to justify the problems and limitations that come with it. The best setup is always the simplest one that still does what you need it to - so if you can elaborate a little bit on this, it helps me understand your needs :slight_smile:

You are correct sir lol... I've been messing with the configs a bit to try to get it working, to no avail. This is currently how I have it: https://paste.ee/p/ZW4Zx

Now my end goal is to have both my drive and the team drive running and merged (TV, Movies) across both drives using mergerfs (which I already have configured and working). All new downloads will be going to the team drive instead of my drive going forward, but because I've already got lots of stuff on my drive and I don't want to move it to the team drive, I figured I'd just mergerfs everything.

In answer to your question: yes, the second crypt is redundant. I remember having/wanting to make that crypt for a reason... I think for some kind of automation I was doing - but now I can't remember lol. I figured I'd just leave it, as it isn't really doing anything. Your help is much appreciated, as I know the problem is me and my screwy configuration :slight_smile:

In terms of the cache-based system: when I originally started doing this on some crummy VMs I had running at home, it was a way for me to not only teach myself rclone and Docker, but to eventually build out something that would work. I found that cache worked better and had less buffering. Now that I'm on a Hetzner box, I kept the same config because it just seemed to work effortlessly. The only reason I'm even altering this is to get onto team drives.

Ok...

I don't know if this is directly the cause of your error, but the first thing we need to fix is your understanding of how to link or layer remotes on top of each other - because this is just wrong :sweat_smile:

I will try to explain very specifically what you did wrong here.

[cache]
**remote = gdrive:/secure**

This is fine. The cache refers to the gdrive remote, and will then use the folder "secure" inside of it. Nothing wrong with that - assuming you only want the cache to cover the "secure" folder; otherwise you'd just use remote = gdrive: (which would cache everything on gdrive).

[crypt]
**remote = gdrive:/secure/crypt**

While this is technically not wrong, it is probably not what you want.
If you use "crypt" like this, you will bypass the cache and go directly to "gdrive".
If you wanted to utilize the cache when using "crypt", you'd have to link it to the cache (which in turn is linked to gdrive). For example:
remote = cache:/crypt
(If you used remote = cache:/secure/crypt, the final result would be gdriveroot:/secure/secure/crypt, because the "secure" folder is already specified in the cache remote. Each layer adds on top of the other.)
That would result in a chain of communication like this:
gdrive <-- cache <-- crypt <-- OS <-- user input
The key point of understanding here is that you don't point at folders - you point at remote names.
So...
Gdriveremotename <-- cacheremotename <-- Cryptremotename
if that makes more sense to you.
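To put the whole chain in one place, a minimal sketch (untested, using your remote names, keys and passwords omitted):

[gdrive]
type = drive
...

[cache]
type = cache
remote = gdrive:/secure #### cache wraps gdrive, covering the "secure" folder

[crypt]
type = crypt
remote = cache:/crypt #### crypt wraps the cache, not gdrive directly
...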

[crypt2]
remote = gdrive:/secure/crypt

Same thing here as in "crypt" - but additionally I want to mention that if you linked them both via the cache as I suggested, you would not be able to use both at the same time. If that is your goal, we might need a different strategy.

[crypt3]
type = crypt
remote = teamdrive1:/encrypted

Same problem as the others (it does not link to the cache), but if you fixed this to link through the (teamdrive) cache, then it could run concurrently with "crypt" or "crypt2", because it uses a separate cache from the others.

I can of course edit this in your config for you if you want - but perhaps it's best you try yourself first and show me the result, so I can see you understood what I tried to teach you :slight_smile:

I would definitely try not using the cache (you can just set up a second remote for testing that bypasses the cache). I don't know if you use Plex with this, but Animosity (the resident Linux/Plex veteran) does not use it. With the right settings it shouldn't be needed for that - nor for streaming in general. On my Windows setup I stream huge 4K videos within a few seconds from VLC with no cache.

In my opinion, the cache-backend should primarily be used if you really need a (large) read-cache, as the VFS cache cannot (currently) provide this - if you have really bad bandwidth but large storage, for example. All other problems can pretty much be worked around, and the result is a cleaner and more efficient system. Also, the VFS is likely to get read-caching in the future, making the cache-backend completely redundant.
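(The VFS side does already have a write cache if you need one - e.g. a mount flag like this, value illustrative:

rclone mount crypt: /mnt/media --vfs-cache-mode writes

It's only the read-caching part that the cache-backend uniquely provides for now.)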

One of the biggest drawbacks of the cache-backend is that it is no longer in active development, so the reasons to use it over the VFS shrink over time, and its bugs are not getting fixed anytime soon (the author disappeared and is no longer responding to contact).

So if I were to redo everything to not use the cache... I suppose the only thing that would happen is that the Plex library would need to rescan itself, right? What would be a solution that doesn't use cache but still works the way I need it to?

Also, thank you for clearing that up for me. The configuration I have here is just me piecing together information I've gathered over time, here and elsewhere, until I got everything working.

To not use the cache, you'd just edit your crypt remotes to bypass the cache (go straight to gdrive:).
This was actually how you were doing it before I corrected you on it - and for NOT using the cache, it would actually be correct.
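Concretely, something like this (a sketch, using the names from your config; passwords are placeholders):

[crypt]
type = crypt
remote = gdrive:/secure #### straight to the drive remote - no cache layer in between
filename_encryption = standard
directory_name_encryption = true
password = password
password2 = password2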

What I would do:
(1) Look at Animosity's "recommended settings" thread (just search for it) and see what settings he suggests for Plex automatic scans.
(2) Optionally set up a pre-caching script, which would save a lot of time when Plex needs to list a lot of folders to do its checking. Since you seem to be on a Linux system, you could just steal Animosity's script for that too.

Honestly, for Linux + Plex that thread is basically a full guide on how to do everything - straight from the mouth of one of the most veteran users here (I'm on Windows, so my scripts won't help you). I'd poke around and read, because there's a lot of gold there that is directly applicable to your setup. You could copy most of it verbatim and just make minor alterations (mostly to account for your 2-drive system).
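To give you an idea, a VFS-only Plex mount in that spirit looks something like this (the flag values here are illustrative - check his thread for the current recommendations):

rclone mount crypt: /mnt/media \
  --allow-other \
  --rc \
  --dir-cache-time 1000h \
  --poll-interval 15s \
  --buffer-size 256M \
  --umask 002 \
  --log-level INFO

The long --dir-cache-time is what makes listings feel instant, --poll-interval lets the mount pick up remote changes, and --rc is what a pre-caching script would talk to.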

Hello @thestigma,

Apologies for the delay here, but I was messing around with the configs and wanted to get something working before running it by you. So I took your (and @Animosity022's) recommendations, bowed to the masters, and removed the cache backend after having another look at his scripts. This has been great, because when I set this all up months ago I didn't know all that much about these configurations, and revisiting my config now that I'm more comfortable, AND with your added explanations, things just fell right into place. So thank you very much. Here is a copy of my new config - it's working just swimmingly at the moment, but I wanted to run it by you to make sure everything looks good and I haven't missed anything: https://paste.ee/p/Yzjoc

You'll notice I removed that redundant "/crypt" as per your suggestion, but did keep one /secure folder for the my drive and an /encrypted folder for the team drive, as I only want to encrypt stuff in those folders. Hoping this should be good at this point, but do have a look and let me know what you think. And again, thanks much for all the help.

I am checking through it quickly now.

Do note that you should always redact your "client secret" when you share your configs, because it could theoretically allow someone to use your authorization to access the drive (if they did it before the token expired, anyway). Not very likely to happen, but you should not risk posting that publicly - so I recommend you remove that link.

EDIT
The config looks fine to me. I have nothing further to add here - unless you wanted to change the setup to something else.

Whoops missed one! Thanks for the good eye! Link has been updated.

Thanks very much for all the help @thestigma you're the man!


When it comes to that precaching script - as I said, I'm sure Animosity has one.

But if not, I just whipped up a really basic one you could use if you wanted (real primitive, but it should work):

#!/bin/bash
# Wait until the rclone mount is up (i.e. the mount folder is non-empty),
# then tell the mount's remote-control API to pre-cache the directory listings.

for (( ; ; ))
do
  if [ -z "$(ls -A /path/to/your/gdrive/mount)" ]; then
    # Mount not ready yet - check again in 2 seconds
    echo "Empty" && sleep 2
  else
    # Mount is up - recursively refresh the dir cache, then exit
    echo "Not Empty" && rclone rc vfs/refresh --fast-list recursive=true && exit
  fi
done

(you have to edit the /path/to/your/gdrive/mount obviously)

This loops and waits until the mount is up and running, then sends the precaching command.
Note that you have to use the --rc flag in the mount command for this to work, because this script sends the command to the RC (remote control) module. Set it to auto-start when your computer boots and you should be golden.

(You may also want to look into setting a password on the RC if security is a concern for you - i.e. if you don't have a firewall on the PC and don't trust your local network; otherwise it won't really be accessible to anyone else. You can find those instructions on the remote control documentation page.)
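If you do set credentials, it would be roughly like this on both ends (user/pass are placeholders - and double-check the flag names against the docs page, as I'm quoting from memory):

rclone mount crypt: /mnt/media --rc --rc-user myuser --rc-pass mypass
rclone rc vfs/refresh recursive=true --user myuser --pass mypass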

Thanks @thestigma I will check this out and mess around with it a bit!
