Where is rclone vfs --cache-dir by default?

And what happens if you run out of disk space? And what happens if you exceed your upload limits? I'm using Google Drive, btw.

I noticed that when writing files there, Sonarr creates them as .partial files, and it seems the file is uploaded and then moved to the final .ext. Is this safe or expected?

Aug 18 09:50:40 onebox rclone[10071]: file.mp4.partial~: Moved (server side)
Aug 18 09:50:37 onebox rclone[10071]: file.mp4.partial~: Copied (new)

I'm running my mount with --vfs-cache-mode writes but I haven't set --cache-dir. Is it required?

You'd fail the write with an IO error.

By default it would retry 10 times and then fail.

By default it writes to your home directory under .cache (i.e. ~/.cache/rclone).
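If you want it somewhere else, or want to cap how much disk it can eat, you can set it explicitly. Something along these lines (the remote name, mount point and cache path are just placeholders for your own):

rclone mount gdrive: /mnt/rclone \
  --vfs-cache-mode writes \
  --cache-dir /mnt/nvme/rclone-cache \
  --vfs-cache-max-size 100G \
  --vfs-cache-max-age 12h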


Thanks. But I did some speed tests and write speed is very low on the mount: around ~40 MB/s vs my 1.1 GB/s unionfs mount and 2 GB/s local SSD speeds, so I'm not sure I'm going to keep using rclone.

Also I had some strange errors trying to write files with Sickbeard MP4 Automator, even with write cache mode.

It's writing to your local disk, so it's not really an rclone thing. If the local disk only gives 40 MB/s, that's what you'd get.

Not true. I have 2 x 512 GB NVMe SSDs, and with local write speed tests I get 2 GB/s.

I believe the issue is that rclone locks the file until it is fully uploaded, or something like that.

sync; dd if=/dev/zero of=/mnt/unionfs/test345 bs=1M count=1024; sync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.961389 s, 1.1 GB/s

sync; dd if=/dev/zero of=/root/test345 bs=1M count=1024; sync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.443702 s, 2.4 GB/s

sync; dd if=/dev/zero of=/mnt/rclone/test345 bs=1M count=1024; sync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 24.3305 s, 44.1 MB/s

I think the confusion you are seeing is that two things happen:

  • file is copied to the local disk
  • file is immediately uploaded

You'd be looking for this issue if you are expecting a delayed upload:

I don't think you are understanding me.

Software that writes into the mount takes much longer than it does on other FUSE file systems or on local disk, because rclone does not report the write as successful until the file is 100% uploaded. This blocks many tasks and slows down even dd.

But regardless of whether that is the reason, write speeds on an rclone VFS mount are much slower than on local disk or a unionfs/mergerfs mount, and they shouldn't be.

When you write to a mount with --vfs-cache-mode writes, it does two things:

  1. It writes a copy locally to the cache area
  2. It immediately uploads the file to the remote

1 is dependent on disk speed
2 is dependent on the upload speed of the file

When 1 and 2 are complete, you get a prompt back.
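As a rough sanity check against the dd numbers you posted (assuming the synchronous upload, not the NVMe, is what dd ends up waiting on):

# 1 GiB written, 24.3 s until dd returned (figures from the mount test above)
echo "scale=1; 1073741824 / 24.3305 / 1000000" | bc   # ~44.1 MB/s - far below NVMe speed, so the time is dominated by step 2, the upload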

Dude, you can just run speed tests and Sonarr/Radarr imports into the mount to check what I'm saying.

If I were writing at 2 GB/s into my VFS mount I'd know it, and I'm 100% sure I'm not. This is not my disks' fault; it's an issue with the rclone code.

I don't care how long it takes to offload the file from my disk, but I want writes to be as fast as possible for the applications.

If there are many files involved, then once the max number of concurrent transfers is in use, the mount will pause before it lets the next file into the cache. This can explain the apparently lower speed you experience, because your OS may just be showing a simple average speed over time. It also unfortunately means that transfers of very large numbers of files rely on the source OS staying active for a long time for the transfer to complete.

rclone currently does this synchronously, and there needs to be a code change to let you do it asynchronously and let the cache absorb everything all at once. There is already a feature request issue for this.

As for uploading partial files: this is not ideal.
It will probably work, but those partials are going to be uploaded and downloaded lots of times.
rclone has no idea which files are temporary work files. When a file is released from its write lock it is assumed to be done and is queued for upload. If it's not actually done and gets accessed again very soon, it's going to get pulled back down and then re-uploaded (probably many times).

All software that creates unfinished files like this (mostly torrent clients, rendering, etc.) should do its work in a local folder and then upload when the file is actually finished. Often such software supports a setting to use a temp folder and automatically move finished files. If you set the temp folder locally and the finished folder to the cloud, then everything will be smooth and automatic. qBittorrent does this for me, for example.

If your software has no such feature then you have to work around it somehow: either manually upload from local storage when done, or set up some sort of custom script that filters out unfinished files based on the filename. rclone has the functions needed to do this, and you can automate it by scheduling the script on a recurring timer, as in the sketch below.
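For example, something along these lines could be run from cron; the local path, remote name and patterns are just placeholders, so adjust them to whatever your software actually produces:

#!/bin/sh
# Sweep finished files up to the remote, skipping in-progress ones.
# /local/staging and gdrive:media are example paths, not anything rclone expects.
rclone move /local/staging gdrive:media \
  --exclude "*.partial" \
  --exclude "*.partial~" \
  --min-age 2m \
  -v
# e.g. in crontab: */10 * * * * /path/to/this-script.sh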

It's because of #2 that I listed above, and the feature request for delayed uploads addresses your statement.

When I did my tests no other files were being uploaded at all.

But you did give me an idea... I could try to use rclone --exclude to not upload or process .partial files...

But I'm still not sure it's worth using the mount because of the speed loss. My unionfs mount is still much faster :frowning:

And about what you said regarding "open" files: are files opened for reading counted? Say I want to have 200 files open for reading on the mount, should I use --transfers 200?

If you run the mount with -vv, you can see it copy and then upload. Here is an example testfile I did:

2019/08/18 12:06:49 DEBUG : &{testfile (rw)}: >Write: written=131072, err=<nil>
2019/08/18 12:06:49 DEBUG : &{testfile (rw)}: Write: len=131072, offset=352976896
2019/08/18 12:06:49 DEBUG : &{testfile (rw)}: >Write: written=131072, err=<nil>
2019/08/18 12:06:49 DEBUG : &{testfile (rw)}: Write: len=118208, offset=353107968
2019/08/18 12:06:49 DEBUG : &{testfile (rw)}: >Write: written=118208, err=<nil>
2019/08/18 12:06:49 DEBUG : &{testfile (rw)}: Flush:
2019/08/18 12:06:49 DEBUG : testfile(0xc000136b40): close:
2019/08/18 12:06:49 DEBUG : testfile: Couldn't find file - need to transfer
2019/08/18 12:06:49 DEBUG : 6r50j5eru0lfub6f2n39ifq4b8: Sending chunk 0 length 8388608
2019/08/18 12:06:49 DEBUG : 6r50j5eru0lfub6f2n39ifq4b8: Sending chunk 8388608 length 8388608
2019/08/18 12:06:50 DEBUG : 6r50j5eru0lfub6f2n39ifq4b8: Sending chunk 16777216 length 8388608
2019/08/18 12:06:50 DEBUG : 6r50j5eru0lfub6f2n39ifq4b8: Sending chunk 25165824 length 8388608
2019/08/18 12:06:51 DEBUG : 6r50j5eru0lfub6f2n39ifq4b8: Sending chunk 33554432 length 8388608
2019/08/18 12:06:51 DEBUG : 6r50j5eru0lfub6f2n39ifq4b8: Sending chunk 41943040 length 8388608
2019/08/18 12:06:52 DEBUG : 6r50j5eru0lfub6f2n39ifq4b8: Sending chunk 50331648 length 8388608
2019/08/18 12:06:52 DEBUG : 6r50j5eru0lfub6f2n39ifq4b8: Sending chunk 58720256 length 8388608
2019/08/18 12:06:52 DEBUG : 6r50j5eru0lfub6f2n39ifq4b8: Sending chunk 67108864 length 8388608
2019/08/18 12:06:53 DEBUG : 6r50j5eru0lfub6f2n39ifq4b8: Sending chunk 75497472 length 8388608
2019/08/18 12:06:53 DEBUG : 6r50j5eru0lfub6f2n39ifq4b8: Sending chunk 83886080 length 8388608
2019/08/18 12:06:54 DEBUG : 6r50j5eru0lfub6f2n39ifq4b8: Sending chunk 92274688 length 8388608
2019/08/18 12:06:54 DEBUG : 6r50j5eru0lfub6f2n39ifq4b8: Sending chunk 100663296 length 8388608
2019/08/18 12:06:54 DEBUG : Google drive root 'media': Checking for changes on remote
2019/08/18 12:06:54 DEBUG : testfile: updateTime: setting atime to 2019-08-18 12:06:49.186756764 -0400 EDT
2019/08/18 12:06:54 INFO  : Cleaned the cache: objects 3 (was 3), total size 1.329G (was 1G)
2019/08/18 12:06:54 DEBUG : 6r50j5eru0lfub6f2n39ifq4b8: Sending chunk 109051904 length 8388608
2019/08/18 12:06:55 DEBUG : 6r50j5eru0lfub6f2n39ifq4b8: Sending chunk 117440512 length 8388608
2019/08/18 12:06:55 DEBUG : 6r50j5eru0lfub6f2n39ifq4b8: Sending chunk 125829120 length 8388608
2019/08/18 12:06:56 DEBUG : 6r50j5eru0lfub6f2n39ifq4b8: Sending chunk 134217728 length 8388608
2019/08/18 12:06:56 DEBUG : 6r50j5eru0lfub6f2n39ifq4b8: Sending chunk 142606336 length 8388608
2019/08/18 12:06:57 DEBUG : 6r50j5eru0lfub6f2n39ifq4b8: Sending chunk 150994944 length 8388608
2019/08/18 12:06:57 DEBUG : 6r50j5eru0lfub6f2n39ifq4b8: Sending chunk 159383552 length 8388608
2019/08/18 12:06:58 DEBUG : 6r50j5eru0lfub6f2n39ifq4b8: Sending chunk 167772160 length 8388608
2019/08/18 12:06:58 DEBUG : 6r50j5eru0lfub6f2n39ifq4b8: Sending chunk 176160768 length 8388608
2019/08/18 12:06:58 DEBUG : 6r50j5eru0lfub6f2n39ifq4b8: Sending chunk 184549376 length 8388608
2019/08/18 12:06:59 DEBUG : 6r50j5eru0lfub6f2n39ifq4b8: Sending chunk 192937984 length 8388608
2019/08/18 12:06:59 DEBUG : 6r50j5eru0lfub6f2n39ifq4b8: Sending chunk 201326592 length 8388608
2019/08/18 12:07:00 DEBUG : 6r50j5eru0lfub6f2n39ifq4b8: Sending chunk 209715200 length 8388608
2019/08/18 12:07:00 DEBUG : 6r50j5eru0lfub6f2n39ifq4b8: Sending chunk 218103808 length 8388608
2019/08/18 12:07:00 DEBUG : 6r50j5eru0lfub6f2n39ifq4b8: Sending chunk 226492416 length 8388608
2019/08/18 12:07:01 DEBUG : 6r50j5eru0lfub6f2n39ifq4b8: Sending chunk 234881024 length 8388608
2019/08/18 12:07:01 DEBUG : 6r50j5eru0lfub6f2n39ifq4b8: Sending chunk 243269632 length 8388608
2019/08/18 12:07:02 DEBUG : 6r50j5eru0lfub6f2n39ifq4b8: Sending chunk 251658240 length 8388608
2019/08/18 12:07:02 DEBUG : 6r50j5eru0lfub6f2n39ifq4b8: Sending chunk 260046848 length 8388608
2019/08/18 12:07:02 DEBUG : 6r50j5eru0lfub6f2n39ifq4b8: Sending chunk 268435456 length 8388608
2019/08/18 12:07:03 DEBUG : 6r50j5eru0lfub6f2n39ifq4b8: Sending chunk 276824064 length 8388608
2019/08/18 12:07:03 DEBUG : 6r50j5eru0lfub6f2n39ifq4b8: Sending chunk 285212672 length 8388608
2019/08/18 12:07:04 DEBUG : 6r50j5eru0lfub6f2n39ifq4b8: Sending chunk 293601280 length 8388608
2019/08/18 12:07:04 DEBUG : 6r50j5eru0lfub6f2n39ifq4b8: Sending chunk 301989888 length 8388608
2019/08/18 12:07:04 DEBUG : 6r50j5eru0lfub6f2n39ifq4b8: Sending chunk 310378496 length 8388608
2019/08/18 12:07:05 DEBUG : 6r50j5eru0lfub6f2n39ifq4b8: Sending chunk 318767104 length 8388608
2019/08/18 12:07:05 DEBUG : 6r50j5eru0lfub6f2n39ifq4b8: Sending chunk 327155712 length 8388608
2019/08/18 12:07:06 DEBUG : 6r50j5eru0lfub6f2n39ifq4b8: Sending chunk 335544320 length 8388608
2019/08/18 12:07:06 DEBUG : 6r50j5eru0lfub6f2n39ifq4b8: Sending chunk 343932928 length 8388608
2019/08/18 12:07:07 DEBUG : 6r50j5eru0lfub6f2n39ifq4b8: Sending chunk 352321536 length 990912
2019/08/18 12:07:07 INFO  : testfile: Copied (new)
2019/08/18 12:07:07 DEBUG : testfile: transferred to remote
2019/08/18 12:07:07 DEBUG : testfile(0xc000136b40): >close: err=<nil>
2019/08/18 12:07:07 DEBUG : &{testfile (rw)}: >Flush: err=<nil>
2019/08/18 12:07:07 DEBUG : &{testfile (rw)}: Release:
2019/08/18 12:07:07 DEBUG : testfile(0xc000136b40): RWFileHandle.Release nothing to do
2019/08/18 12:07:07 DEBUG : &{testfile (rw)}: >Release: err=<nil>

You can see it finished writing and it uploads it immediately.

The first flush is step #1 completing; the upload completing is step #2 that I posted.
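Doing the arithmetic on the timestamps in that log: close: is logged at 12:06:49 and >close at 12:07:07, so roughly 18 seconds of the operation are spent on the chunked upload before the application gets control back:

# ~353 MB (final offset 353107968 + final write of 118208) uploaded between 12:06:49 and 12:07:07
echo "scale=1; 353226176 / 18 / 1000000" | bc   # ~19.6 MB/s for the synchronous upload step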

If you get this problem even when uploading a single large file then I don't know where your bottleneck is.

All I can say is that I have never experienced the VFS mount being a bottleneck. I have copied to the cache at up to 500 MB/s (ish), which is the fastest my current SSD can handle. In short, I doubt it is rclone internals that are the problem here.

That's not how --vfs-cache-mode writes works, though. Large files that are written have a huge delay because it has to upload them.

This is not the same as using the rclone cache backend.

I know where my bottleneck is...the rclone mount. Perhaps you are speaking of rclone cache? I'm using rclone vfs, with --vfs-cache-mode writes.

And I really doubt you can post screenshots of yourself writing at 500 MB/s to an rclone VFS mount; Google itself limits upload speed per file.

You can start an rclone move/copy with a single file and it'll never max out a 1 Gbps connection, and the mount is no different. The issue is that this slows things down.

As far as I know - no, the uploads won't conflict.
I believe the copy operation that moves files from the cache to the cloud always uses the default 4 transfers. The transfers you set via parameter seem to affect only download transfers (on a mount). If you're not using a mount, they seem to affect both. I'm not aware of a parameter that can currently set the number of transfers used by the cache.

We are speaking of different rclone commands.

Also, reading on the mount is downloading. But I think --transfers doesn't affect opening files for reading; otherwise mounts by default wouldn't be able to handle more than 5 open files for reading, which would be ridiculous...

transfers/checkers have no impact on a mount.