Rclone mount faster upload? Is 8192 the limit?

Hi there!

So I've been racking my brain over this forever. I'm using rclone mount + Sonarr.
Rclone mount seems to refuse to upload more than 8192 (8M) per chunk, no matter what values I put in drive-upload-cutoff and drive-chunk-size for a non-cached gdrive.

Is it not possible to edit this variable for Rclone Mount?

Just for clarification, I'm using rclone mount purely for automated uploads with Sonarr for the latest TV series. I'm not doing any Plex streaming.
I don't have root access, as this whole setup is on a seedbox, so no installing other FUSE tools.
Rclone mount is the only practical way to mount my Google Drive account.

I tried to circumvent this by using cache-tmp-upload-path in conjunction with a long cache-tmp-wait-time on a cached gdrive mounted with rclone mount. Evidently, I had the wrong idea that any files downloaded by Sonarr would simply be mv'd over to the cache-tmp-upload-path.
After a file had finished downloading, it still got "uploaded" to the cache-tmp-upload-path in the same painfully slow 8192-byte chunks.

This made my script, which runs a manual rclone copy with a 256M drive-chunk-size once a file is detected in cache-tmp-upload-path, completely useless.

I'm throwing in the towel at this point and hope someone can guide me on how to make uploads for a non-cached gdrive on rclone mount go faster than the dreaded 8192 limit.

These are the commands
TVMount
rclone -vv mount TV: /home/me/downloads/Mount/TV\ Series \
  --drive-upload-cutoff 256M \
  --drive-chunk-size 256M \
  --buffer-size 0

Bonus (My idea of an “alternative” faster upload method)
TVCache
rclone -vv mount TV-Cache: /home/me/downloads/Mount/TV\ Series \
  --cache-db-purge \
  --cache-workers=10 \
  --cache-tmp-upload-path=/home/me/.cache/rclone/tvcache \
  --cache-tmp-wait-time=1h

Auto upload script (Some commands omitted)
if find /home/me/.cache/rclone/tvcache/* -maxdepth 0 -type d -mmin +4 | read
then
    # Move any top-level folder untouched for 4+ minutes into a staging folder
    mkdir -p /home/me/.cache/rclone/tvcache2
    find /home/me/.cache/rclone/tvcache/* -maxdepth 0 -type d -mmin +4 -exec mv {} /home/me/.cache/rclone/tvcache2/ \;
    # Upload the staging folder with a big chunk size, then clean up
    rclone -vv move /home/me/.cache/rclone/tvcache2 TV: --drive-chunk-size=256M
    rm -rf /home/me/.cache/rclone/tvcache2
fi
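
For what it's worth, the script just runs on a schedule. A rough sketch of the cron entry (the script path and log path here are made up):

# Hypothetical cron entry: check for finished folders every 5 minutes
*/5 * * * * /home/me/scripts/tvcache-upload.sh >> /home/me/logs/tvcache-upload.log 2>&1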

Any help would be appreciated!

How are you measuring that?

This means that only files bigger than 256M will be uploaded in chunks.
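
If you want chunked uploads to kick in for smaller files as well, you could keep the cutoff low and only raise the chunk size, for example (the values here are just an illustration):

rclone -vv mount TV: /home/me/downloads/Mount/TV\ Series \
  --drive-upload-cutoff 16M \
  --drive-chunk-size 256M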

This really looks like it should work though…

Quite literally by doing the calculation with the log output.

This is the output when the upload goes through rclone mount, for both the cached and non-cached Google Drive:

rclone -vv mount TV: /home/me/downloads/Mount/TV\ Series
rclone -vv mount TV-Cache: /home/me/downloads/Mount/TV\ Series

2018/06/24 07:41:53 DEBUG : &{Test3/someiso.iso (w)}: Write: len=8192, offset=1206722560
2018/06/24 07:41:53 DEBUG : &{Test3/someiso.iso (w)}: >Write: written=8192, err=<nil>
2018/06/24 07:41:53 DEBUG : &{Test3/someiso.iso (w)}: Write: len=8192, offset=1206730752
2018/06/24 07:41:53 DEBUG : &{Test3/someiso.iso (w)}: >Write: written=8192, err=<nil>
2018/06/24 07:41:53 DEBUG : &{Test3/someiso.iso (w)}: Write: len=8192, offset=1206738944
2018/06/24 07:41:53 DEBUG : &{Test3/someiso.iso (w)}: >Write: written=8192, err=<nil>
2018/06/24 07:41:53 DEBUG : &{Test3/someiso.iso (w)}: Write: len=8192, offset=1206747136

And it takes ages to finish compared to normal rclone copy/move.
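
Roughly, I just sum the written= values from the debug log, something like this (a sketch, assuming the output above is saved to rclone.log):

# Count the writes and total the bytes reported by the ">Write: written=" lines
awk -F'written=' '/>Write: written=/ {
    split($2, parts, ","); bytes += parts[1]; writes++
}
END { printf "%d writes, %.1f MiB total\n", writes, bytes / 1048576 }' rclone.log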

For the following:
rclone -vv mount TV-Cache: /home/me/downloads/Mount/TV\ Series

As I had --cache-tmp-upload-path=/home/me/.cache/rclone/tvcache set for it,
I can see the file size of someiso.iso increasing in real time in the /home/me/.cache/rclone/tvcache directory.
However, this defeats the purpose of my script, as the transfer into the tvcache directory behaves as if it were going to the cloud.
I was hoping that it would just rename or mv Test3/someiso.iso to /home/me/.cache/rclone/tvcache/Test3/someiso.iso.

My usual move/copy speed (rclone copy/move) is as follows:

Transferred:   22.332 GBytes (381.078 MBytes/s)
Errors:                 0
Checks:                 1
Transferred:            1
Elapsed time:        1m0s
Transferring:
 *                             Test/Test1.MP4: 44% /6.365G, 57.693M/s, 1m3s
 *                             Test/Test4.MP4: 47% /5.970G, 50.683M/s, 1m3s
 *                             Test/Test7.MP4: 60% /4.682G, 49.906M/s, 38s
 *                             Test/Test5.MP4: 25% /10.292G, 48.194M/s, 2m42s
 *                            Test/Test2.MP4: 26% /10.669G, 47.231M/s, 2m50s
 *                            Test/Test6.MP4: 15% /5.688G, 41.274M/s, 1m59s
 *                          Test/Test0.MP4: 54% /5.476G, 56.508M/s, 45s
 *                          Test/Test3.MP4: 46% /5.846G, 47.309M/s, 1m7s

2018/06/24 12:19:47 DEBUG : Test/Test0.MP4: Sending chunk 3221225472 length 268435456
2018/06/24 12:19:48 DEBUG : Test/Test3.MP4: Sending chunk 2952790016 length 268435456
2018/06/24 12:19:49 DEBUG : Test/Test5.MP4: Sending chunk 2952790016 length 268435456
2018/06/24 12:19:50 DEBUG : Test/Test4.MP4: Sending chunk 3221225472 length 268435456
2018/06/24 12:19:51 DEBUG : Test/Test2.MP4: Sending chunk 3221225472 length 268435456
2018/06/24 12:19:51 DEBUG : Test/Test1.MP4: Sending chunk 3221225472 length 268435456

I append the following to it: --drive-chunk-size 256M

So I was wondering if I could change it from 8192 to something more substantial.

Ah, my bad there. Yeah, I just noticed that I set the cutoff a little too high. I'll play around with that some more. However, it shouldn't affect me too much, as most of my files are bigger than 1 GB anyway. 🙂

I see.

That isn't the writing-to-Google-Drive layer, that is the mount/vfs layer. So your OS is writing in 8k chunks (not 8M).

If you use a --buffer-size > 0 then that will help a lot I expect.
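
Something along these lines, for example (256M is only an illustrative value):

rclone -vv mount TV: /home/me/downloads/Mount/TV\ Series \
  --drive-upload-cutoff 256M \
  --drive-chunk-size 256M \
  --buffer-size 256M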

Wow that is fast! rclone move and rclone copy will always be faster than copying into an rclone mount as they have more control over reading the file and they don’t have to traverse multiple times through the operating system. It shouldn’t be too much slower though.

Hmm, weird that it is transferring at such a slow rate. The drive I/O was not even close to 10% usage, but it's limiting itself to 8k chunks. And without root access, I can't do any kernel modifications either.

Sadly, that did nothing to improve the performance of rclone mount. I’ve tried up to 512M, but I suspect this value only affects the upload to Google Drive and not the mount layer transfer.

Another problem this brings about is the leftover .mkv.partial~ files from Sonarr. As the transfer is too slow, two files tend to be left behind after the fact: tv_series.mkv.partial~, and a second file that is supposed to be the rename from tv_series.mkv.partial~ to just tv_series.mkv. Not a huge deal, as I can just do a drive-wide search for .partial~ and delete them.
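
Something like this should do for the cleanup (a rough sketch; I'd dry-run it first):

# Preview the leftover partials, then drop --dry-run to actually delete them
rclone -vv delete TV: --include "*.partial~" --dry-run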

Oh, it's slower. By a huge margin. Almost 10 times slower than a standard rclone move/copy. And during that time, Sonarr will be unresponsive, as the very act of uploading the file to Google Drive through rclone mount counts as importing, and until that import is done, Sonarr is basically useless.

However, just having rclone mount is a huge lifesaver for me, as I can't exactly install something like google-drive-ocamlfuse on the seedbox I'm currently using. Can't wait to see how rclone will evolve!

With the cache-tmp-upload, it should write locally first, so the actual upload to the GD doesn't come into play, since everything is written to the cache-tmp-upload path.

Once the partial is written locally, it just moves it and the partial should go away.

The upload to your GD happens without impact, as it won't access the new file in the GD until the upload is finished and the old one is removed.

The cache upload to GD is slow because it only uses 1 worker if you are using the plex integration.

Placeholders:
TV: Google Drive account
TV-Cache: cached version of TV:
CacheFolder: the cache-tmp-upload-path
CacheFolder2: the folder that find /tvcache/* -maxdepth 0 -type d -mmin +4 -exec mv {} tvcache2 \; moves things into
TVSeries: a TV series folder automatically created by Sonarr after an import from the nzbget tmp folder to CacheFolder

(Sonarr automation) -> (NZBGet) -> CacheFolder -> (Sonarr automation) CacheFolder/TVSeries

That's my understanding as well. I set up my TV-Cache: with a cache-tmp-upload-path and a long cache-tmp-wait-time so that I could trick Sonarr into thinking that the files have been downloaded, unpacked and renamed correctly. At this point, the file should have been moved from the nzbget folder to the cache-tmp folder, waiting for upload until the cache-tmp-wait-time is reached.

CacheFolder/TVSeries -> mv -> CacheFolder2/TVSeries

As the upload that cache-tmp-upload performs cannot be configured at the moment, I made a script that scans the top level of CacheFolder for any folder with a modification time older than 4 minutes and executes a mv of that folder (CacheFolder/TVSeries) to CacheFolder2.

rclone move CacheFolder2 TV:

From there, CacheFolder2 is rclone moved to TV: with the TV series name intact.
After the upload is done, another script kills the screen running rclone mount TV-Cache: and starts a new screen with a fresh instance of rclone mount (roughly the sketch further down).

Sonarr isn't affected whatsoever, as the unmount takes only a fraction of a second, and the cached directory is fully refreshed thanks to the cache-db-purge flag.
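
The remount part is roughly this (a sketch; the screen session name is just an example):

# Kill the screen running the old mount, release the mount point, then remount fresh
screen -S tvcache -X quit
fusermount -u "/home/me/downloads/Mount/TV Series"
screen -dmS tvcache rclone -vv mount TV-Cache: "/home/me/downloads/Mount/TV Series" \
  --cache-db-purge \
  --cache-workers=10 \
  --cache-tmp-upload-path=/home/me/.cache/rclone/tvcache \
  --cache-tmp-wait-time=1h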

However, here’s the problem. After the nzb file was downloaded, unpacked and renamed by sonarr, I could still see in the log the usual output you would get from an upload through rclone mount.

2018/06/25 20:49:29 DEBUG : &{Goliath/Goliath - S02E03 - Fresh Flowers - WEBDL-2160p.mkv (w)}: Write: len=8192, offset=5179096080
2018/06/25 20:49:29 DEBUG : &{Goliath/Goliath - S02E03 - Fresh Flowers - WEBDL-2160p.mkv (w)}: >Write: written=8192, err=<nil>

And in TV-Cache:, I can see the file size increase in line with the log as I refresh the directory.
So just like @ncw said, this may mean that the 8192 (8k chunks) is not actually the rclone move that should run after cache-tmp-wait-time, but a cp initiated by the OS from the nzbget temp folder to CacheFolder.

I'm using an SSD, so it's not disk saturation. This could be OS related, with nothing rclone can do to improve the transfer speed.

Oh, and I don't use Plex or run any sort of read operations from Google to my server. My setup is 99% upload.

Sorry for the long-winded explanation. I'm really bad at explaining stuff. =.=

Your setup is confusing me a little bit.

I have my NZBget and Sonarr on the same box.

I have NZBGet download and do its post-processing, and it drops a file on a local drive at /data/NZB/done.
Sonarr picks up the completed file from the local drive /data/NZB/done and copies it to my GD "/gmedia".

The file gets copied to /gmedia/TV/Somehow/SomefileS01E01.mkv, and since it's a cache-tmp-upload, it is written locally first. The cache and all the magic happen, and the final product is the actual .mkv file with the .partial file gone.

My upload wait time on the cache is 6 hours, so after 6 hours it leaves my local file system tmp area and uploads (slowly) to the cloud.

I can’t figure out why you have the multiple caches.

Yup, this happens to me too. But the transfer to my CacheFolder takes ages because it gets transferred in chunks of 8K (8192 bytes). And during this time, Sonarr thinks that it is still importing from my /nzb/Done directory, say.
During that transfer, Sonarr becomes unresponsive, with the purple progress bar showing 100% the whole time until the file is fully transferred to CacheFolder.

During this period, any other files that have finished NZBGet post-processing and are waiting in /nzb/Done have to wait for that transfer to finish before proceeding. That includes series that are pointed to the local drive and not the rclone mount directory.

Oh, I have two directory paths set for Sonarr. TV series that have ended go to the local disk, which I then rclone move to GDrive; ongoing series go to the rclone mount.
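
The move for the ended series is basically just this (the local path here is made up and the flags are only for illustration):

rclone -vv move /home/me/downloads/TV_Ended TV: \
  --drive-chunk-size 256M \
  --transfers 8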

Hmm. I did two tests, copying to the same file system and copying across file systems. The copy to the cache-tmp-upload and to vfs-cache-mode writes are both very slow.

felix@gemini:/data$ time cp test.mkv /gmedia/

real	1m0.050s
user	0m0.001s
sys	0m0.139s
felix@gemini:/data$ time cp test.mkv /tmp/test1.mkv

real	0m0.196s
user	0m0.000s
sys	0m0.196s

The one-minute test copy was configured with cache-dir pointing back to my /data, where I did the sub-second copy:

/usr/bin/rclone mount gcrypt: /gmedia --allow-other --dir-cache-time 96h --cache-dir /data/rclone --vfs-read-chunk-size 10M --vfs-read-chunk-size-limit 512M --vfs-cache-mode writes --vfs-cache-max-age 6h --buffer-size 512M --syslog --umask 002 --rc --log-level INFO

I’d expect the cp into the rclone mount to be almost the same as the local copy since it is writing to the same directory underneath the covers.

@ncw - Is it uploading the file after the local copy is complete even though there is a wait time?

2018/06/25 16:22:56 DEBUG : &{test.mkv (rw)}: >Write: written=1135, err=<nil>
2018/06/25 16:22:56 DEBUG : &{test.mkv (rw)}: Flush:
2018/06/25 16:22:56 DEBUG : test.mkv(0xc42058c540): close:
2018/06/25 16:22:56 DEBUG : test.mkv: Couldn't find file - need to transfer
2018/06/25 16:22:57 DEBUG : n0t3rf73m837rcfpfuefg9fs3o: Sending chunk 0 length 8388608
2018/06/25 16:22:57 DEBUG : : Statfs:
2018/06/25 16:22:57 DEBUG : : >Statfs: stat={Blocks:274877906944 Bfree:263232401030 Bavail:274877906944 Files:1000000000 Ffree:1000000000 Bsize:4096 Namelen:255 Frsize:4096}, err=<nil>
2018/06/25 16:22:58 DEBUG : n0t3rf73m837rcfpfuefg9fs3o: Sending chunk 8388608 length 8388608
2018/06/25 16:22:58 DEBUG : : Statfs:
2018/06/25 16:22:58 DEBUG : : >Statfs: stat={Blocks:274877906944 Bfree:263232401030 Bavail:274877906944 Files:1000000000 Ffree:1000000000 Bsize:4096 Namelen:255 Frsize:4096}, err=<nil>
2018/06/25 16:22:59 DEBUG : n0t3rf73m837rcfpfuefg9fs3o: Sending chunk 16777216 length 8388608

Sorry, I must be feeling old today. I forgot that vfs-cache-mode writes uploads immediately, unlike the cache-tmp-upload, which waits. I'll redo my test with a cache-tmp-upload…

Yeah, my timed copy to the cache-tmp-upload takes 3 seconds, which seems fine:

felix@gemini:/data$ time cp test.mkv /gmedia/

real	0m3.673s
user	0m0.010s
sys	0m0.330s