And what happens if you run out of disk space? And what happens if you exceed your upload limits? I'm using Google Drive, by the way.
I noticed that when writing files there, Sonarr creates them as .partial files, and they seem to be uploaded and then renamed to the final extension. Is this safe or expected?
Aug 18 09:50:37 onebox rclone[10071]: file.mp4.partial~: Copied (new)
Aug 18 09:50:40 onebox rclone[10071]: file.mp4.partial~: Moved (server side)
I'm running my mount with --vfs-cache-mode writes, but I haven't set --cache-dir. Is it required?
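For context, the mount is started roughly like this (remote name and mount point are placeholders, not my exact setup):

rclone mount gdrive: /mnt/gdrive \
  --vfs-cache-mode writes
  # no --cache-dir set; would adding e.g. --cache-dir /var/cache/rclone change anything?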
Thanks. But I did some speed tests and write speed on the mount is very low: around 40 MB/s, vs. 1.1 GB/s on my unionfs mount and 2 GB/s on the local SSD, so I'm not sure I'm going to keep using rclone.
I also had some strange errors trying to write files with Sickbeard MP4 Automator, even with write cache mode.
Software that writes into the mount takes much longer than on other FUSE file systems or local disk, because rclone doesn't report the write as successful until the file is 100% uploaded. This blocks many tasks and slows down even dd.
But regardless of whether that's the reason, write speeds on an rclone VFS mount are much slower than on local disk or a unionfs/mergerfs mount, and they shouldn't be.
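For reference, this is roughly the kind of test I ran (mount points are placeholders; conv=fsync makes dd wait until the data is actually flushed):

dd if=/dev/zero of=/mnt/rclone/test.bin bs=1M count=512 conv=fsync status=progress
dd if=/dev/zero of=/mnt/unionfs/test.bin bs=1M count=512 conv=fsync status=progress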
If there are many files involved then once the max number of concurrent transfers is in use, the mount will pause before it lets the next file into the cache. This can explain the apparently lower speed you are seeing, because your OS may just be showing a simple average speed over time. It also unfortunately means that transfers involving a very large number of files will rely on the source OS staying active for a long time for the transfer to complete.
rclone currently does this synchronously, and a code change is needed to make it asynchronous and let the cache absorb everything at once. A feature request issue for this already exists.
As for the partial files being uploaded, this is not ideal.
It will probably work, but those partials are going to be uploaded and downloaded lots of times.
rclone has no idea which files are temporary work files. When a file is released from its write lock, it is assumed to be done and queued for upload. If it's not actually done and gets accessed again very soon, it will get pulled back down and then re-uploaded (probably many times).
All software that creates unfinished files like this (mostly torrent clients, rendering, etc.) should do its work in a local folder and then upload once the file is actually finished. Such software often supports a setting to use a temp folder and automatically move finished files. If you set the temp folder locally and the finished folder to the cloud, then everything will be smooth and automatic. qBittorrent does this for me, for example.
If your software has no such feature then you have to work around it somehow: either manually upload from local storage when done, or set up some sort of custom script that filters out unfinished files based on the filename or similar. rclone has the functions needed to do this, and you can automate it by scheduling the script on a recurring timer, as in the sketch below.
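A minimal sketch of what that script could look like (the remote name, paths, and exclude pattern are just examples for illustration; --min-age skips files modified recently, which may still be being written to):

rclone move /local/staging gdrive:media \
  --exclude "*.partial~" \
  --min-age 2m \
  --progress
# schedule this via cron or a systemd timer, e.g. every 10 minutes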
When I did my tests, no other files were being uploaded at all.
But you did give me an idea... I could try using rclone --exclude so it doesn't upload or process .partial files...
But I'm still not sure it's worth using the mount because of the speed loss; my unionfs mount is still much faster.
And about what you said regarding "open" files: are files opened for reading counted? Say I want to have 200 files open for reading on the mount, should I use --transfers 200?
If you get this problem even when uploading a single large file then I don't know where your bottleneck is.
All I can say is that I have never experienced the VFS mount being a bottleneck. I have copied up to 500 MB/s (ish) to the cache, which is the fastest my current SSD can handle. In short, I doubt it is rclone internals that are the problem here.
I know where my bottleneck is... the rclone mount. Perhaps you are thinking of the rclone cache backend? I'm using the rclone VFS, with --vfs-cache-mode writes.
And I really doubt you can post screenshots of yourself writing at 500 MB/s to an rclone VFS mount; Google itself limits upload speed per file.
You can start an rclone move/copy with a single file and it'll never max out a 1 Gbps connection, and the mount is no different. The issue is that this slows things down.
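It's easy to check with a single large file (path and remote name are placeholders; -P shows the live transfer rate):

rclone copy /local/bigfile.mkv gdrive:speedtest -P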
As far as I know - no, the uploads won't conflict.
I believe the copy operation that moves files from the cache to the cloud always uses the default of 4 transfers. The --transfers you set via parameter seems to only affect downloads (on a mount); when not using a mount, it seems to affect both. I'm not aware of a parameter that can currently set the number of transfers used by the cache.
Also, reading on the mount is downloading, but I don't think --transfers affects opening files for reading. Otherwise mounts by default wouldn't be able to handle more than 5 open files for reading, which would be ridiculous...