Cache_tmp not respected & possible restart issues


I’m trying to keep my setup simple. It’s just a single drive with Plex, SABnzbd, Sonarr, and Radarr. The usability issue I’m having is that nothing is being written to the temp folder; files are uploaded directly, and the tmp_dir settings never make it into rclone. It also seems like rclone stops and starts again.

Distributor ID: Ubuntu
Description:    Ubuntu 18.04.1 LTS
Release:        18.04
Codename:       bionic 

rclone mount -vv --log-file rclonelog.log --allow-other --rc --daemon --fast-list rwteamdrive-cache: rwteamdrive

Config File:

[rwteamdrive]
type = drive
client_id =
client_secret = xxxxx
scope = drive
token = {"access_token":"xxxxx","token_type":"Bearer","refres$
team_drive = xxxx

[rwteamdrive-cache]
type = cache
remote = rwteamdrive:
plex_url =
plex_username =
plex_password = xxxxx
chunk_size = 5M
info_age = 1d
chunk_total_size = 50G
cache-tmp-upload-path = /home/claw/RemoteDrives/temprwteamdrive
cache-tmp-wait-time = 64h
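(If I’m reading the rclone cache docs correctly, config-file keys drop the cache- prefix and use underscores; the dashed, prefixed forms are command-line flags only. So the last two lines would need to be something like this, which is worth verifying against the docs:)

```ini
tmp_upload_path = /home/claw/RemoteDrives/temprwteamdrive
tmp_wait_time = 64h
```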


Pastebin Log

Also, when I put the tmp_dir options directly on the command line, I get this:

rclone mount -vv --log-file rclonelog.log --allow-other --rc --daemon --fast-list --cache-tmp-upload-path /home/claw/RemoteDrives/temprwteamdrive --cache-tmp-wait-time 64h rwteamdrive-cache: rwteamdrive

Pastebin Log w/ inline tmp_dir

So I think there are two issues going on here.

  1. It’s not reading my tmp_dir from the config file.

  2. It’s loading twice; when I put the tmp_dir inline, this causes errors because the cache db is still locked by the previous instance.

You need to stop the other running rclone process before starting another one.

The cache backend can only have one running process as it locks the cache database for the backend.

Thanks; my issue is that it’s loading twice or more. You can see from the logs that multiple rclone instances report in within the same second after I run my mount command from the CLI.

Is there anything you see me doing “wrong”, or did I somehow end up with a funky rclone install?

P.S. I don’t have a startup service invoking it yet; I’d set that up after getting everything working smoothly.
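When you do get to the startup service, a minimal systemd unit is one option. This is only a sketch: the unit name, mount point, and paths here are my assumptions, not from this thread. Note there is no --daemon; systemd keeps the process in the foreground itself.

```ini
# /etc/systemd/system/rclone-mount.service  (hypothetical name and paths)
[Unit]
Description=rclone cache mount
Wants=network-online.target
After=network-online.target

[Service]
Type=simple
User=claw
ExecStart=/usr/bin/rclone mount --allow-other --rc \
    --log-file /home/claw/rclonelog.log \
    rwteamdrive-cache: /home/claw/RemoteDrives/rwteamdrive
ExecStop=/bin/fusermount -u /home/claw/RemoteDrives/rwteamdrive
Restart=on-failure

[Install]
WantedBy=multi-user.target
```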

It only starts if you run the command to start it.

I’d recommend stopping all the processes and making sure you do not have any running before starting it back up.

You can check for processes like:

felix@gemini:~$ ps -ef | grep rclone
felix      595     1  0 Nov12 ?        00:47:29 /usr/bin/rclone mount gcrypt: /GD --allow-other --bind --dir-cache-time 72h --drive-chunk-size 32M --log-level INFO --log-file /home/felix/logs/rclone.log --umask 002 --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off --rc
felix    14911 14892  0 13:03 pts/1    00:00:00 grep rclone

You can kill the process by its ID; in my case it would be 595.

If none are running, you should be able to start it back up and it would use the proper config that you have. With the cache backend, you can only have 1 running at a time.
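One small refinement on the check above: putting brackets around the first letter of the pattern keeps grep from matching its own command line, so empty output really does mean nothing is running. The fallback message is just my addition for clarity:

```shell
# The [r]clone pattern never matches the grep process itself, so this prints
# only rclone processes, or the fallback message when none are running.
ps -ef | grep '[r]clone' || echo "no rclone processes running"

# If one is still up, kill it by PID (595 in the example above) and re-check:
# kill 595
```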

I’m not a total unix n00b so don’t be afraid to get a bit technical with me.

I restarted the computer and ran your command; there was no rclone process running. Then I ran the mount command: same result.

You can see the log has three instances of “rclone: Version "v1.44" starting with parameters…”

If your ps -ef | grep rclone shows no processes running, try running it once without --daemon: do you immediately get the command line back as it exits, or does it continue to run?

Does that full path/file exist?

ls -al /home/claw/.cache/rclone/cache-backend/rwteamdrive-cache.db

and does it have the right permissions?

It’s the daemon parameter.

When I run without it, everything works fine.

I’ve tried deleting the db and leaving it in; both work fine without --daemon. Once I enable it, things go FUBAR.

I’m running it in a detached screen now.

claw@paw:~/.cache/rclone/cache-backend$ ls -lah
total 668K
drwxrwxr-x 3 claw claw 4.0K Nov 16 21:17 .
drwxrwxr-x 3 claw claw 4.0K Sep 20 20:49 ..
drwxrwxr-x 2 claw claw 4.0K Nov 16 21:17 rwteamdrive-cache
-rw-r--r-- 1 claw claw 1.0M Nov 16 21:18 rwteamdrive-cache.db

Here are my current permissions

I tried to replicate this using --daemon with a cache backend, and also without it, and I can’t reproduce your issue.

I can only reproduce it if I have more than one rclone process running, since that locks the cache.db.