Slow copy speed to mounted Google Drive with Windows File Explorer

Hi, I started using rclone a while ago. I mostly use rclone copy to upload huge files, mainly around 1TB, to Google Drive with the following command: rclone copy -v -vv --stats 5s --drive-chunk-size 512M --fast-list D:\"file-name" remote:.
With this command I can get around 50MB/s upload speed.
However, when I mount remote: with rclone mount -v -vv --fast-list --drive-chunk-size 512M remote: K: and copy a local file to the mounted Google Drive in Windows File Explorer, the upload speed caps out at 2x MB/s, which is quite slow.

So, is there any way I could make copying files to the mounted drive as fast as copying with rclone?
(I tried raising --buffer-size, but that doesn't help much, as my files are huge.)

What rclone version are you using?

Did you make your own API key?

https://rclone.org/drive/#making-your-own-client-id

--fast-list doesn't do anything on a mount.

A 512M chunk size is kind of huge; the sweet spot is usually around 32M or 64M.
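A mount command along those lines might look like this (a sketch based on the command in the original post, with --fast-list dropped and a 64M chunk):

```shell
# Sketch only: same mount as above, but with --fast-list removed
# (it does nothing on a mount) and a more typical chunk size.
rclone mount -v --drive-chunk-size 64M remote: K:
```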

I think you are hitting the V3 API limits based on how it uploads via the mount. The most I've ever seen on a mount is around that 2x MB/s range.

Using rclone v1.46, if my memory is correct.
And yes, I did make my own API key and client ID to increase copy speed.

However, I don't believe it's the V3 API limits, because I never saw any rate-limit message while running with -v -vv, and rclone copy can achieve 50MB/s, so I think it's something related to rclone mount itself?

I'm quite sure it is, as there are some differences between how the copy/sync commands upload and how the mount does it.

Here is the output of using rclone copy.

While manually copying to the rclone mounted drive, Windows Explorer showed 2x MB/s. I also manually calculated the speed as size/time, so I am sure Windows is reporting the correct speed.

Yeah, no doubt. I can reproduce the same thing on the mount upload.

You can see downloads on the mount work much faster, as it does chunking differently.

It’s related back to the V3 API.

So the upload speed of rclone mount is slower than rclone copy because of the V3 API?
And the V2 API is not working anymore, right?

Yeah, it's back to the chunked uploading that copy uses versus the write method the mount uses. The mount writes in small 128k blocks:

2019/04/08 10:42:41 DEBUG : .1G.out.O469FZ: >Attr: a=valid=1s ino=0 size=0 mode=-rw-r--r--, err=<nil>
2019/04/08 10:42:41 DEBUG : .1G.out.O469FZ: Setattr: a=Setattr [ID=0x14 Node=0x2 Uid=1000 Gid=1000 Pid=7613] mode=-rw------- handle=INVALID-0x0
2019/04/08 10:42:41 DEBUG : .1G.out.O469FZ: >Setattr: err=<nil>
2019/04/08 10:42:41 DEBUG : .1G.out.O469FZ: Attr:
2019/04/08 10:42:41 DEBUG : .1G.out.O469FZ: >Attr: a=valid=1s ino=0 size=0 mode=-rw-r--r--, err=<nil>
2019/04/08 10:42:41 DEBUG : &{.1G.out.O469FZ (w)}: Write: len=131072, offset=0
2019/04/08 10:42:41 DEBUG : &{.1G.out.O469FZ (w)}: >Write: written=131072, err=<nil>
2019/04/08 10:42:41 DEBUG : &{.1G.out.O469FZ (w)}: Write: len=131072, offset=131072
2019/04/08 10:42:41 DEBUG : &{.1G.out.O469FZ (w)}: >Write: written=131072, err=<nil>
2019/04/08 10:42:41 DEBUG : &{.1G.out.O469FZ (w)}: Write: len=131072, offset=262144
2019/04/08 10:42:41 DEBUG : &{.1G.out.O469FZ (w)}: >Write: written=131072, err=<nil>

And the copy sends chunks:

[felix@gemini data]$ rclone copy /data/1G.out GD: -vv
2019/04/08 10:43:24 DEBUG : rclone: Version "v1.46" starting with parameters ["rclone" "copy" "/data/1G.out" "GD:" "-vv"]
2019/04/08 10:43:24 DEBUG : Using config file from "/opt/rclone/rclone.conf"
2019/04/08 10:43:24 DEBUG : 1G.out: Couldn't find file - need to transfer
2019/04/08 10:43:25 DEBUG : 1G.out: Sending chunk 0 length 8388608
2019/04/08 10:43:25 DEBUG : 1G.out: Sending chunk 8388608 length 8388608
2019/04/08 10:43:25 DEBUG : 1G.out: Sending chunk 16777216 length 8388608
2019/04/08 10:43:26 DEBUG : 1G.out: Sending chunk 25165824 length 8388608
2019/04/08 10:43:26 DEBUG : 1G.out: Sending chunk 33554432 length 8388608
2019/04/08 10:43:27 DEBUG : 1G.out: Sending chunk 41943040 length 8388608
2019/04/08 10:43:27 DEBUG : 1G.out: Sending chunk 50331648 length 8388608
2019/04/08 10:43:27 DEBUG : 1G.out: Sending chunk 58720256 length 8388608
2019/04/08 10:43:28 DEBUG : 1G.out: Sending chunk 67108864 length 8388608
2019/04/08 10:43:28 DEBUG : 1G.out: Sending chunk 75497472 length 8388608
2019/04/08 10:43:29 DEBUG : 1G.out: Sending chunk 83886080 length 8388608
2019/04/08 10:43:29 DEBUG : 1G.out: Sending chunk 92274688 length 8388608

Thanks for your reply.

So is there any way I could use the copy method on an rclone mount?

In fact, let me tell you my use case.
Normally I have a bunch of small files generated by a program, and I have two ways to merge them:

  1. Use cmd's copy command (copy xxx+yyy) to merge them locally, then use rclone copy.
  2. Have the program merge the files and write the result directly to the rclone mounted drive.

Since method 2 is more convenient, I would prefer it. However, from your reply, it seems I can't use method 2 and still get decent speed.

Would cache writes help? Although I suspect it would use the same non-chunked method… I haven’t tested it.

I think it would help: with --vfs-cache-mode writes, the file is stored locally first and uploaded immediately once writing finishes.

2019/04/08 15:35:47 DEBUG : &{test.mkv (rw)}: >Write: written=131072, err=<nil>
2019/04/08 15:35:47 DEBUG : &{test.mkv (rw)}: Write: len=130007, offset=736624640
2019/04/08 15:35:47 DEBUG : &{test.mkv (rw)}: >Write: written=130007, err=<nil>
2019/04/08 15:35:47 DEBUG : &{test.mkv (rw)}: Flush:
2019/04/08 15:35:47 DEBUG : test.mkv(0xc000139080): close:
2019/04/08 15:35:47 DEBUG : test.mkv: Couldn't find file - need to transfer
2019/04/08 15:35:48 DEBUG : n0t3rf73m837rcfpfuefg9fs3o: Sending chunk 0 length 8388608
2019/04/08 15:35:48 DEBUG : n0t3rf73m837rcfpfuefg9fs3o: Sending chunk 8388608 length 8388608
2019/04/08 15:35:49 DEBUG : n0t3rf73m837rcfpfuefg9fs3o: Sending chunk 16777216 length 8388608

The first part was the regular copy into the cache, and the second part is a chunked upload. At that point it really doesn't matter, as the upload happens in the background anyway. That method also honors --drive-chunk-size, whereas the normal writes do their own thing.
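For reference, a writes-mode mount that produces that behavior might look like this (a sketch using the flags discussed in this thread):

```shell
# Sketch: mount with a local write cache. Files are written to the
# cache first, then uploaded in the background with chunked uploads
# that honor --drive-chunk-size.
rclone mount -v --vfs-cache-mode writes --drive-chunk-size 64M remote: K:
```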

But then, what should I set the VFS write cache size to?

As I said, I mainly copy large files to Google Drive, and if I need 1TB of local space for the cache in order to upload a 1TB file at full speed, that's a big problem. So is there a formula for cache size per uploaded file size?
Or, can I set the location of the cache to the location of the file I need to upload, so it could start a copy-style upload immediately? (It would detect that the file is already in the cache directory, assume it is cached, and start uploading right away, without copying the file into the cache first.)

There is cache-tmp-upload-path, which I believe you can move the file into, and it will upload on the next wait interval (configurable). Might be an option for you.

https://rclone.org/cache/#write-features
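A sketch of what that might look like, assuming a cache remote named gcache: that wraps remote: (the remote name and the local path here are placeholders):

```shell
# Sketch: mount a cache remote with a tmp upload path. Files moved
# into D:\rclone-tmp-upload are uploaded in the background after the
# configured wait time expires.
rclone mount -v --cache-tmp-upload-path D:\rclone-tmp-upload --cache-tmp-wait-time 15m gcache: K:
```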

Thanks for your reply.
Since I will upload a 5TB file at some point, and I don't have another 5TB free partition on my Windows machine for the cache, is there any way I can use the cache to speed up a 5TB upload?

If you move the files into that cache tmp upload path, it shouldn't take extra space. Ultimately, using sync/copy/move is going to be your best bet, though.
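For example, a minimal sketch using move, so the chunked uploader is used and the local file is removed afterwards (the file name is a placeholder):

```shell
# Sketch: upload via the chunked copy path and delete the local
# file after the transfer, instead of writing through the mount.
rclone move D:\merged-file remote: -v --drive-chunk-size 64M
```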

What if my cache directory is smaller than the file size (i.e. a 1TB cache but a 5TB file)? What will happen then?
And is there any way to merge a bunch of small files into one large file during rclone copy? Like merging 1.abc and 2.abc into a 1+2.abc file on the REMOTE only. Should I use rclone cat? I don't understand how to use rclone cat to merge two files into one on the fly.

It'll try to use the larger size and remove the files from the cache after the transfer.

Not with rclone copy. You'd have to script that: cat them together and upload the result.
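One way to script it is to stream the concatenation straight to the remote with rclone rcat, so no merged copy has to exist locally (file names are from your example):

```shell
# Sketch: merge 1.abc and 2.abc into a single remote file on the fly.
# rcat reads from stdin and uploads, so no local merged file is needed.
cat 1.abc 2.abc | rclone rcat remote:1+2.abc
```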

Thanks for your reply.
So if I copy a 5TB file to a 1TB cache, will it first upload what is in the cache, then copy more of the file into the cache and upload again?

I believe the way it works is that the tmp upload path doesn't have a limit. You move or copy a file there, and it uploads it. Then it moves that file into the regular cache. If the file is too big for your limit, it simply won't keep it and deletes it. If it was small enough, it adds it, and as the cache grows it purges back down to the max size.

It does spell out how it works here.
https://rclone.org/cache/#write-features

Hi, I tried using --vfs-cache-mode writes with --cache-dir pointing to a 1GB drive. While copying a 5TB file to the mounted remote, IO errors always occur, and rclone shows something like "cannot update because the file is still in use" (I don't remember it exactly). So, is there any way for me to use a small cache but still get copy-level speed with mount? Thanks