Why is rclone mount uploading to Google Drive?

What is the problem you are having with rclone?

I am seeing evidence in my rclone mount logs that it is actively uploading to my cloud drive. I use a separate script to handle uploads, and I don't want the mount to do any uploading. For example, this is in my logs:

2023/04/16 11:43:24 INFO : vfs cache: cleaned: objects 7592 (was 7623) in use 14, to upload 3, uploading 4, total size 199.962Gi (was 200.865Gi)

Run the command 'rclone version' and share the full output of the command.

rclone v1.61.1
- os/version: ubuntu 20.04 (64 bit)
- os/kernel: 5.4.0-146-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.19.4
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

Since this is an rclone mount, here is my rclone-vfs-tv.service systemd unit file.

[Unit]
Description=RClone VFS TV Service
Wants=network-online.target
After=network-online.target

[Service]
Type=notify
KillMode=none
User=1000
Group=1000
RestartSec=5

ExecStart=/usr/bin/rclone mount gdrive-mount-tv: /Stuff_tv/Mount \
  --config /home/bink/.config/rclone/rclone.conf \
  --use-mmap \
  --allow-other \
  --dir-cache-time 1000h \
  --log-file /home/bink/logs/gdrive_tv.log \
  --log-level INFO \
  --poll-interval 10s \
  --umask 002 \
  --rc \
  --rc-addr 127.0.0.1:5581 \
  --rc-no-auth \
  --cache-dir=/caches \
  --vfs-cache-mode full \
  --vfs-cache-max-size 200G \
  --vfs-cache-max-age 168h \
  --vfs-read-ahead 1G \
  --tpslimit 10 \
  --tpslimit-burst 10 \
  --allow-non-empty \
  --drive-skip-gdocs \
  --vfs-read-chunk-size=64M \
  --vfs-read-chunk-size-limit=2048M \
  --buffer-size=64M \
  --timeout=10m \
  --drive-chunk-size=64M \
  --drive-pacer-min-sleep=10ms \
  --drive-pacer-burst=1000 \
  --bind=65.21.92.208 \
  --drive-upload-cutoff=1000T

ExecStop=/usr/bin/fusermount -uz /Stuff_tv/Mount
ExecStartPost=/usr/bin/rclone rc vfs/refresh recursive=true --rc-addr 127.0.0.1:5581 _async=true
Restart=on-failure

[Install]
WantedBy=multi-user.target

The rclone config contents with secrets removed.

[gdrive-mount-tv]
type = drive
client_id = xxxxxxxxxxxxxxx.apps.googleusercontent.com
client_secret = xxxxxxxxxxxxxxxx
scope = drive
token = {"access_token":"xxxxxxxxxxxxxxxxx","token_type":"Bearer","refresh_token":"1//0c>
team_drive = xxxxxxxxxxxxxxxxx
root_folder_id =
service_account_file = /home/bink/.config/sarotate/sa-tv/000063.json

Then don't write to the mount.

I'm not, as far as I am aware. I am using MergerFS and I only write to the merged locations. None of my apps even have direct access to the mount, only to the MergerFS locations.

But that might give me some clues as to where to look. Something must be writing directly to the mounted location ... hmm.

Rclone does nothing on its own.

You are writing there.

Ok so I figured out what is causing this, but I still don't know why it's happening.

I am using mkvpropedit to fix episodes that don't have correct language metadata. I'm using the command mkvpropedit --edit track:a1 --set language=eng filename.

The files are accessed through the MergerFS folder, so the command never touches the rclone mount path directly. But the files physically exist on Google Drive, not on the local drive.

I assumed this would happen server side, without the files having to be downloaded and re-uploaded. However, once a file has been processed by mkvpropedit, I start to see entries in the rclone log indicating that rclone is uploading it.

A few minutes after processing this particular file, I saw this in the logs:

2023/04/16 18:55:47 INFO  : TV Shows/The Story of Diana (2017) [imdb-tt7271684] [tvdb-333114]/Season 01/The Story of Diana (2017) - S01E01 - Part One [HULU WEBDL-1080p][AAC 2.0][h264]-NTb.mkv: vfs cache: queuing for upload in 5s
2023/04/16 18:58:48 INFO  : TV Shows/The Story of Diana (2017) [imdb-tt7271684] [tvdb-333114]/Season 01/The Story of Diana (2017) - S01E01 - Part One [HULU WEBDL-1080p][AAC 2.0][h264]-NTb.mkv: Copied (replaced existing)
2023/04/16 18:58:48 INFO  : TV Shows/The Story of Diana (2017) [imdb-tt7271684] [tvdb-333114]/Season 01/The Story of Diana (2017) - S01E01 - Part One [HULU WEBDL-1080p][AAC 2.0][h264]-NTb.mkv: vfs cache: upload succeeded try #1

Again, mkvpropedit is only accessing via MergerFS. I'm not pointing it directly to the mount at all.

One other data point: looking at Google Drive directly, the file shows a "Last modified" time of 6:53 PM, but the log entry shows that rclone uploaded it at 6:58. The earlier time lines up with when I actually altered the file, not with when rclone "uploaded" it.

This sounds quite similar to what I reported here:
Rclone mount repeatedly uploading the same unchanged files

One idea I can suggest is adding --read-only so the mount runs in read-only mode, since you do not want the mount to do any uploading. This doesn't really solve the underlying issue, but it might at least prevent the main symptom for you.
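
As a rough sketch against the unit file you posted, it would just mean adding the flag to your existing ExecStart (everything else stays as it is):

ExecStart=/usr/bin/rclone mount gdrive-mount-tv: /Stuff_tv/Mount \
  --read-only \
  --config /home/bink/.config/rclone/rclone.conf \
  ...rest of your existing flags unchanged...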

I'd suggest that would make it worse rather than better: you would get a ton of write failures and things would really get wonky.

Do you have any suggestions for the underlying problem?

Yes, validate the logs and track down which app is doing it.

Turn stuff off and do one by one and use process of elimination until you figure out the cause.
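
A couple of quick checks might help narrow it down (just a sketch, adjust paths to your setup):

# list the files the mount has queued for upload, straight from your existing log
grep "vfs cache: queuing for upload" /home/bink/logs/gdrive_tv.log

# see which processes currently have files open under the mount point
sudo lsof +D /Stuff_tv/Mount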

That's the thing: I've determined that it happens when I issue the mkvpropedit command to change the metadata on the file. Since the file exists on Google Drive, I think it is just refreshing the file on the mount without copying it to the local drive.

I think that once this operation completes, rclone sees the change to the file on the mount, treats it as a change in the local mount folder, and proceeds to upload it.

I'm not sure if any of this makes sense, or if it is even possible. But that seems like what is happening.

So if you are writing to the file with that tool, it will show up in your rclone log as a write: the file gets written to the local cache area and re-uploaded, because you are changing it.

If you don't want to write to the mount, you'd have to do the editing somewhere else, or copy the file local first. What's the problem with the mount uploading, though? It seems to be working fine.
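
Something along these lines, roughly (the paths are only placeholders; the point is that the edit happens on a local copy and your own upload script pushes it back instead of the mount):

# copy the episode out of the merged path to local scratch space (placeholder paths)
mkdir -p /tmp/mkv-edit
cp "/path/to/mergerfs/episode.mkv" /tmp/mkv-edit/episode.mkv

# fix the audio track language on the local copy
mkvpropedit /tmp/mkv-edit/episode.mkv --edit track:a1 --set language=eng

# drop the edited copy into the local branch your upload script watches,
# so it replaces the cloud copy on the next scheduled upload
mv /tmp/mkv-edit/episode.mkv "/path/to/local-upload-branch/episode.mkv"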

Yeah, I was wondering if it was even worth trying to resolve. The only issue is that it saturates my upload bandwidth and affects Plex streams. I have cloudplow set to monitor Plex streams and throttle its uploads accordingly, but the uploads from the rclone mount are not throttled at all, so while they are running, Plex gets wonky.

Also, just to be clear, I'm using mkvpropedit to write to the MergerFS path, not directly to the rclone mount.

I do traffic shaping on my router, so I never have an issue; it sorts that out.
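
If router-level shaping isn't an option for you, you could also try capping the mount's transfer rate on the fly through the remote control you already have enabled, using the same --rc-addr you use for vfs/refresh. A sketch (the 10M rate is just an example, untested against your setup):

# throttle the mount's transfers while Plex is busy
rclone rc core/bwlimit rate=10M --rc-addr 127.0.0.1:5581

# lift the limit again afterwards
rclone rc core/bwlimit rate=off --rc-addr 127.0.0.1:5581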

So that means the file only exists on the drive path? If it exists in both, perhaps it's a policy issue? I'd imagine it's finding the file on the cloud and opening it for write, which causes the issue.

If you mean in the cloud, yes, it only exists on the cloud. If mkvpropedit opens the file for write (via the MergerFS path), I'm not sure what should happen in that case.

In any event, I think it's probably fine to just keep going the way it's going, and I'll just keep an eye on it and try not to do too many files at once.

Rclone doesn't modify files server side. If you change a single byte of a file in a mount, it will download the file, make the change, and upload it again.

I think your expectations are not correct. I don't know how MergerFS will handle that, though.

MergerFS is somewhat irrelevant here: the file is only on the rclone mount, and if I had to guess, it's being opened read/write to update the metadata with that tool.

The rclone log would definitely validate my theory.

Would this be the rclone log that I set up in my rclone mount service file? Or is there another rclone log somewhere that I can access?

Also, I have the log level set to INFO. Do we need something higher than that to find the answers?
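
For reference, I'm assuming the change would just be swapping the flag in the unit file and restarting the service, something like:

# in rclone-vfs-tv.service, change:
#   --log-level INFO \
# to:
#   --log-level DEBUG \
sudo systemctl daemon-reload
sudo systemctl restart rclone-vfs-tv.service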