Rclone mount overwrites new remotes in config...I think

What is the problem you are having with rclone?

This is a GUESS! I am not sure how to recreate it...

I keep adding a remote, syncing my config file, only to have the new remotes disappear later. I am wondering if it is because I have a few mounts running on the server.

If I have a long-running mount on a remote like OneDrive where the token gets updated occasionally, and I make a change to the rclone config, does rclone re-read the config file before writing the update? Or does it just dump what the config looked like when the mount was started?

If this is the case, I would argue it is a bug. Thoughts?

Or, it could be related to rclone mount: unable to read obscured config key and unable to delete the temp file · Issue #4081 · rclone/rclone · GitHub? (see the log section below)

Run the command 'rclone version' and share the full output of the command.

rclone v1.60.1
- os/version: debian 10.11 (64 bit)
- os/kernel: 4.19.0-11-amd64 (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.19.3
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

OneDrive, though I don't think the issue is unique to it.

The command you were trying to run (eg rclone copy /tmp remote:tmp)

N/A

The rclone config contents with secrets removed.

[onedrive]
type = onedrive
token = {"access_token":"<REDACTED>","token_type":"Bearer","refresh_token":"<REDACTED>","expiry":"2022-11-03T12:32:58.867531472-06:00"}
drive_id = <REDACTED>
drive_type = personal
client_id = <REDACTED>
client_secret = <REDACTED>

A log from the command with the -vv flag

It's 60 MB, but I found the following. I guess this may also answer the question?

2022/11/29 13:33:38 ERROR : Failed to read config file - using previous config: unable to read obscured config key and unable to delete the temp file: open /tmp/rclone108983931: no such file or directory

I am experiencing something that sounds similar, as in the end the result is the same: the remote is not responding. It might be something totally different, but it could be the same bug, just triggered in different ways.

I have a config file with several remotes, and "all is fine": the same conf on 3 different computers (all Linux Mint; two on 21 and one on 20.3). One part is the opposite, though. I have Jottacloud mounted on one of the computers (Mint 21), and there everything is working. On that computer, rclone mount is basically used for reading, not writing.

On the other two, Jottacloud is not mounted, and on one of them there is a high frequency of writes to Jottacloud. Even during a sync, I get an "expired token": "Response: {"error":"invalid_grant","error_description":"Stale token"}"

The strange thing is that none of the other remotes seem to be affected, and I "solve" the issue by copying (only) the Jottacloud section of the config from the working computer to the others, and voilà, it works again. So I solve it without refreshing the token. Even stranger, one of the computers with the "expired token" is the source for the other two, so the source of the stale token is NOT Jottacloud but my computer(s?) or rclone. It started happening recently, so I suspect it only occurs in 1.60.x.

One thing that COULD affect it is that Jottacloud has had issues with their 2FA lately, but again, it doesn't seem to be a Jottacloud issue.

I have seen a detail that might help solve both our issues. On the computer where the Jottacloud token is NOT expiring, the conf file was updated by the system (I was away and know I didn't update it myself), while on the other two it hadn't been updated while I was away (and thus expired). So to me it looks like mounting (with systemctl) helped keep the token fresh (locally).

I'll update if I find more, or create a new case if it looks to be a different issue.

Rclone is supposed to reload the config file if it sees it has changed before writing it out, to avoid this problem.

It is possible (of course!) that this bit of code got broken somehow.

I did a quick test and it appears to be working.

You should see

2022/11/30 10:00:28 DEBUG : Config file has changed externally - reloading

in the log.
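As an illustration of that reload-before-save pattern, here is a hedged sketch. This is not rclone's actual code or config format; the path and the JSON layout are invented purely for the example. The idea is just: before writing, check whether the file changed externally and reload it so another process's new remote isn't clobbered.

```python
import json

CONFIG = "/tmp/demo_rclone.conf"  # hypothetical path, not rclone's real format

def read_raw():
    with open(CONFIG) as f:
        return f.read()

# Seed the file with one remote.
with open(CONFIG, "w") as f:
    json.dump({"onedrive": {"type": "onedrive"}}, f)

loaded_raw = read_raw()
cache = json.loads(loaded_raw)

# Meanwhile, another process adds a new remote to the same file.
external = json.loads(read_raw())
external["gdrive"] = {"type": "drive"}
with open(CONFIG, "w") as f:
    json.dump(external, f)

# Before saving our refreshed token, detect the external change and reload,
# i.e. "Config file has changed externally - reloading".
if read_raw() != loaded_raw:
    cache = json.loads(read_raw())  # reload so "gdrive" is not clobbered

cache["onedrive"]["token"] = "refreshed"
with open(CONFIG, "w") as f:
    json.dump(cache, f)
```

If the reload step is skipped (or fails, as in the "unable to read obscured config key" error above), the final write would silently drop the externally added remote.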

Yes, that looks like the answer - if rclone can't read the config file then it can't know about updates.

Are you using --daemon? Can you switch to systemd or something like that?
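For reference, a minimal systemd unit for a mount might look like the following. Everything here is illustrative: the unit name, remote name, mount point, and flags are assumptions, and you may need to adjust `Type=` depending on whether your rclone build supports systemd readiness notification.

```ini
# /etc/systemd/system/rclone-onedrive.service  (hypothetical name/path)
[Unit]
Description=rclone mount for the onedrive remote
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
# No --daemon: systemd supervises the foreground process itself.
ExecStart=/usr/bin/rclone mount onedrive: /mnt/onedrive -vv
ExecStop=/bin/fusermount -u /mnt/onedrive
Restart=on-failure

[Install]
WantedBy=multi-user.target
```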

I am. Should I not be? Is there something wrong with loading the config file?

I can’t easily switch but I could try to launch it differently.

I think this issue is caused by --daemon not passing the credentials to unlock your rclone.conf properly.

Surprisingly, --daemon is a really complicated feature - Go absolutely hates forking, so there are a lot of workarounds here.

I think this used to work properly. At least it is working well enough to unlock the config once - why not again?

Not using --daemon would be a useful test.

You could use screen or tmux to start rclone and leave it running. I often do this

screen -dmS rclone rclone mount ...

Okay. I’ll try some other options. I wonder if calling it with nohup would work instead. I’m not opposed to screen but would prefer not to have all the mounts there.

Should I file a bug so it’s noted?

I'm pretty sure it is in here: mount: improve implementation of --daemon mode · Issue #5664 · rclone/rclone · GitHub

You could try the workaround from there which effectively does the --daemon in bash before launching rclone.

echo "Starting Mounts ..." && sleep 5 && bash -c 'setsid sudo rclone mount --allow-other --dir-cache-time 24h --vfs-read-chunk-size 124M --vfs-read-chunk-size-limit 4G --buffer-size 512M --vfs-cache-mode writes Cloud\ Storage\ Server\ 1\ Uncrypt:/ /mnt/Cloud\ Storage/Cloud\ Storage\ Server\ 1/ </dev/null &>/dev/null & jobs -p %1' && echo "Done..." && sleep 5

Thanks.

I am launching this from Python with subprocess*, so what I ended up doing was directly using Popen with start_new_session=True (docs). It is doing what I want so far, but I am not sure how to force rclone to update the config file to see if it works. I will just watch the logs for issues.

Thanks for the help!


*I've found that, even if a bit more verbose, I'd much rather use Python and subprocess over Bash/Shell. It is just so much less error-prone, since I pass arguments as a list rather than a single string. It is also way easier to add comments to flags (no `# comment` shenanigans), and I can do much more processing around it. It can be a bit hacky but well worth it! (in my opinion, at least)
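A minimal sketch of that approach, with the caveat that the rclone invocation is a placeholder (a plain `sleep` stands in for the real mount command so the example is self-contained and the remote/mount-point names are invented):

```python
import os
import subprocess

# Build the command as a list: no shell-quoting issues, and each flag
# can carry its own comment.
cmd = [
    "sleep", "60",   # stand-in for something like:
                     #   "rclone", "mount",
                     #   "onedrive:", "/mnt/onedrive",   # hypothetical names
                     #   "--vfs-cache-mode", "writes",
                     #   "-vv",
]

# start_new_session=True makes the child call setsid(), detaching it from
# our session (much like the setsid/bash workaround above).
proc = subprocess.Popen(
    cmd,
    start_new_session=True,
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
)

child_sid = os.getsid(proc.pid)  # the child now leads its own session
proc.terminate()
```

Because the child is its own session leader, it is not sent SIGHUP when the launching process (or its terminal) goes away.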

I didn't know about start_new_session - nice one!

The subprocess module is one of a small number of places I contributed code (in a very minor way) to python so I have a soft spot for it.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.