Error opening storage cache

Hi there, my rclone.conf looks like this:

[sdrive]
type = drive
client_id = myuniqueclientid
client_secret = mysecret
scope = drive
token = {"access_token":"tokenstuffhere"}
root_folder_id = someid

[scache]
type = cache
remote = sdrive:
plex_url = something.plex.direct:32400
plex_username = myplexuser
plex_password = myplexpass
chunk_size = 16M
info_age = 1d
chunk_total_size = 10G
plex_token = myplextoken

[scrypt]
type = crypt
remote = scache:
filename_encryption = standard
directory_name_encryption = true
password = superpass1
password2 = superpass2

I do mount with
rclone mount scrypt: /mnt/sdrive --fast-list --allow-other --buffer-size 0 --dir-cache-time 72h --cache-info-age 96h --timeout 1h &

Mount shows up in /mnt/sdrive, files are accessible and all seems to work. But when I use "rclone lsd scrypt:" I always get the following error:

ERROR : /mnt/myhome/rclone/cache-backend/scache.db: Error opening storage cache. Is there another rclone running on the same remote? failed to open a cache connection to "/mnt/myhome/.cache/rclone/cache-backend/scache.db": timeout

I checked with "ps -ef | grep rclone" and there is no other process for this mount running. I have another remote mounted (another Google Drive, with a different client ID), but that one is named differently (gcrypt2:).

What am I doing wrong? What is needed to get a correctly working cache backend? I do not get an error when using "rclone lsd sdrive:", but I just want to make sure I have a working cloud -> cache -> crypt setup. Thanks in advance.

Regards,
zoo

It's telling you that you already have an rclone process running with the cache backend, as only one process can access it.

Is there a reason you want to use the cache backend? I personally would not and just mount the crypt.
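For example, pointing the crypt straight at the drive remote (reusing the names and passwords from the config above, so this is just an illustration of the shape, not a tested config) would look roughly like:

```ini
[scrypt]
type = crypt
remote = sdrive:
filename_encryption = standard
directory_name_encryption = true
password = superpass1
password2 = superpass2
```

Then you mount scrypt: directly and there is no cache db in the middle to fight over.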

"It's telling you that you already have a rclone process running with the cache backend as only one process can access it."

Well, there is, but for a completely different remote drive. Even when I start rclone with a different user, I get the same error on lsd.

"Is there a reason you want to use the cache backend? I personally would not and just mount the crypt."

I did some tests with vfs and wasn't that happy, so I decided to give the cache backend a try. Why do you think it's not needed? Isn't it faster to do so?

Not sure what that means as the error says you are trying to run a second cache rclone process on the same remote. I can only report what the error is as you'd want to check what you have running.

It's not needed and it's faster without it generally. There are some edge use cases where it may make sense.
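A plain crypt mount with the VFS layer might look like this (the flags here are illustrative suggestions based on the mount command earlier in the thread, not a tuned recommendation):

```shell
# Mount the crypt remote directly, no cache backend in between.
# --vfs-cache-mode writes buffers uploads locally; adjust to taste.
rclone mount scrypt: /mnt/sdrive \
  --allow-other \
  --dir-cache-time 72h \
  --vfs-cache-mode writes \
  --timeout 1h
```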

ps -ef | grep rclone
userAAA 400128 400102 0 May23 pts/6 00:03:32 rclone mount gcrypt: /mnt/gdrive --fast-list --allow-other --buffer-size 0 --dir-cache-time 72h --cache-info-age 96h --timeout 1h
userBBB 3144463 1714822 0 14:33 pts/10 00:00:00 grep --color=auto rclone

userAAA is mounting gcrypt: (config userAAA)
userBBB is mounting scrypt: (config userBBB)

Both are different Googledrives.

But then I will give it a try without cache, even though I'm really interested in testing the cache backend.

You can remove --fast-list as it does nothing on a mount.

Sorry as I was not specific enough.

If you have a cache rclone mount going, only one process can access the cache db, which is the mount you have running.

If you run rclone ls against the mount, it would fail with that error.

Even if they have different names and different users? OK, interesting, thank you.

You have a cache remote mounted called "scrypt".

You cannot run a separate rclone ls scrypt: while it's mounted, as only one process can access it.
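If you do need to browse while the mount is up, a couple of options (the second relies on the cache backend's --cache-db-path flag; the path shown is just an example):

```shell
# List through the kernel mount instead of the remote,
# so the running mount process does all the work:
ls /mnt/sdrive

# Or give a second rclone process its own, separate cache db:
rclone lsd scrypt: --cache-db-path /tmp/rclone-lsd-cache
```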

That's what I was poorly trying to say earlier.

Well, I did not try to mount it more than once.

UserAAA did rclone mount gcrypt:

UserBBB did rclone mount scrypt:

UserBBB did rclone lsd scrypt: and got the error I described. Maybe I am misunderstanding something here, but there is only one mount process running for each of the mentioned remotes.

The first mount is the active cache backend running.

You can't run the second item you have above as you are already running the first.

OK, now I understand. I killed both mounts, retried with userBBB only, same error :frowning:

For the cache backend, there is only one process that ever can be running for a remote at one time. If you get the error, you have a process running still.
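One way to check for a leftover process holding the db (standard Linux tools; the paths are taken from the error message above and may differ on your machine):

```shell
# Any rclone processes still alive, under any user?
pgrep -af rclone

# Lazily unmount a possibly stale FUSE mount:
fusermount -uz /mnt/sdrive

# See whether anything still has the cache db file open:
fuser /mnt/myhome/.cache/rclone/cache-backend/scache.db
```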

MANY THX!!! You were absolutely right :slightly_smiling_face:

Mounted userAAA with vfs, userBBB with the cache backend. All working now :slight_smile:

Me again. I made a mistake earlier (had no cache in the config), but it is still not working. I already deleted everything, wiped the Google Drive and started from scratch, including new credentials/project. Tested on 3 different machines; the same error occurs every time.

My current (new) config:

[sdrive2]
type = drive
scope = drive.file
service_account_file = /path/to/.json
root_folder_id = root

[scache2]
type = cache
remote = sdrive2:
plex_url = plexurl.plex.direct:32400
plex_username = user
plex_password = password
chunk_size = 10M
info_age = 2d
chunk_total_size = 10G

[scrypt2]
type = crypt
remote = scache2:crypt
filename_encryption = standard
directory_name_encryption = true
password =
password2 =

Instead of the service account I tried with client_id and secret, without success. Tested on Ubuntu 20.04 and 18.04.

As Animosity mentioned above, you cannot run a separate rclone ls scrypt: while the mount for scrypt is running as only one rclone process can access the cache db at a time.

I feel stupid. Now I got it, all is good :slight_smile:

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.