Can you upload to an rclone mount without caching?

Hi everyone. I'm new to rclone and I tried to mount a gdrive with the following command: rclone mount gdrive: z: --vfs-cache-mode off, so as to avoid writing to disk before uploading and unnecessarily wearing out my disk. However, I cannot get rclone to work with this flag at all: it refuses to upload anything and displays the following error: WriteFileHandle: Truncate: Can't change size without --vfs-cache-mode >= writes. Therefore, it seems to require --vfs-cache-mode writes...

Is it possible to upload to an rclone mount without caching?

I'm on Windows 7 x64 and I tested the above on the latest versions of rclone (both beta and stable).

Thanks for your time and help.

hello and welcome to the forum.

normally, you do not need to use vfs caching for simple file copying.
and even if you need to use --vfs-cache-mode=writes, it will not wear out your disk. the amount of data is almost nothing. just think about how much data the windows operating system writes to your drive with the pagefile, virus scans and so on.

how are you uploading the files, windows explorer, or what?

are you sure the files are not being uploaded correctly?

rclone mount on windows is painfully slow. is there a reason you must use rclone mount instead of rclone copy?
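for example (the source path and remote folder here are just placeholders), a direct copy from the command line looks like:

```
rclone copy "C:\some\folder" gdrive:backup -P
```

the -P flag shows live transfer progress, so you can see the real upload speed.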

for example, i created 50 files of size 1.0MB of random data.
using double commander, i copied those files from local to an rclone-mounted drive x:
and then i did a checksum comparison on those files.


then i used the backup software, secondcopy, to copy those same files to another location on the x:
and secondcopy does checksums and again, no errors at all in the second copy logs.

1 Like

@asdffdsa Thanks for an ultra-fast and helpful reply.

I might not understand how the cache works. For example, if you were to upload 2TB from the D: drive to the cloud with caching enabled, wouldn't that 2TB of data be cached/written to the C: drive (the default cache location) in the process?

I mounted my google drive as a volume in Windows Explorer via the rclone mount command and I was basically trying to drag&drop the files that I want to upload from my physical volumes to the remote volume that I mounted with rclone mount.

I double-checked this after you mentioned it and it does in fact work with the --vfs-cache-mode off flag, as well as without it. What threw me off is that the transfer rate was initially quite fast, but then the progress bar got stuck for some time. I think that might just be my Samsung SSD's RAPID mode at the start of the process. When that happened and I saw the earlier-mentioned rclone errors in the cmd window, I wrongly assumed that it doesn't work...

Unfortunately, you seem to be absolutely right. My upload, as measured by a reliable speed test, is 15MB/s. My transfer rate to gdrive via browser is about 11MB/s. When I used Mountain Duck (with cache disabled), a program that also mounts a cloud volume in File Explorer, I got about 2.6MB/s. With rclone mount I only seem to get 500KB/s - 850KB/s. Do you get similar performance? Is there any way to improve it via additional flags, or do I just have to use rclone copy in cmd?

i did some testing of rclone mount


and using fastcopy for copying
my best run was

TotalRead  = 500 MiB
TotalWrite = 500 MiB
TotalFiles = 50 (2)
TotalTime  = 12.7 sec
TransRate  = 39.4 MiB/s

on average

TotalRead  = 500 MiB
TotalWrite = 500 MiB
TotalFiles = 50 (2)
TotalTime  = 20.7 sec
TransRate  = 24.2 MiB/s
1 Like

Your results look very good to me. I tested this again as follows:

(1) Installed FastCopy
(2) Started rclone with rclone mount gdrive: z: --vfs-cache-mode writes
(3) Set source (SSD) and destination (z:) in FastCopy, without altering other settings
(4) Run FastCopy

First, I copied 9 files and the results are as follows:

TotalRead  = 119 MiB
TotalWrite = 119 MiB
TotalFiles = 9 (1)
TotalTime  = 33.7 sec
TransRate  = 3.53 MiB/s
FileRate   = 0.27 files/s

Then I copied a 2.26 GB (2,427,945,044 bytes) file. FastCopy said that it finished, but explorer.exe crashed when I tried to do anything in Windows Explorer. The file was not copied at all.

Later, I tried copying the 2.26 GB file using rclone copy "B:\file 1" gdrive:\ -P. The time estimate was around 16 minutes and the upload hovered at around 2.3MB/s. Increasing the buffer size to 64MB (rclone copy "B:\file 1" gdrive:\ -P --buffer-size 64) didn't change anything...
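(In hindsight, rclone seems to read size flags without a unit suffix as KiB - the chunk-size error later in this thread shows the same parsing - so --buffer-size 64 probably set a 64 KiB buffer rather than 64 MiB. The intended command would presumably be:

```
rclone copy "B:\file 1" gdrive: -P --buffer-size 64M
```

with an explicit M suffix.)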

Uploading the 2.26 GB file via web browser takes about 2-3min. with upload around 11MB/s.

Do you know why the upload speed with rclone is so slow?

i have not used windows explorer in many years, cannot comment on its behavior.

one good feature of fastcopy is that it can verify.
make sure to disable that when testing.

about that samsung ssd, is that external, usb?
usb is slow. are you using usb 2.x, 3.0, 3.1?

go to a speed test site and report the results.

for me, this is my speedtest

DOWNLOAD = 704.05 Mbps; UPLOAD = 856.59 Mbps; PING = 7 ms

also, gdrive can be slow and has many quotas and limits.
gives me a headache to understand it.

i use wasabi, an s3-compatible remote.

I used default settings in FastCopy and verify was not checked (only nonstop was checked).

The Samsung SSD 850 EVO is an internal SATA II drive on which my OS is installed. I get the same results with another internal SATA II SSD, a Patriot Burst.

I had problems testing my connection with speed test sites in the past. They seemed to report lower speeds than the actual ones (as indicated by downloading multiple files in JDownloader, for example). Perhaps they don't have a good server near me... Anyway, the results now are PING = 11 ms; DOWNLOAD = 224.97 Mbps; UPLOAD = 92.67 Mbps. On the other hand, another speed test reports the following averages: PING = 12 ms; DOWNLOAD = 592.6 Mbps; UPLOAD = 99.42 Mbps.

I don't think it's the quotas or limits, because I have only uploaded several test files and the speeds are okay when uploading through the browser.

gdrive has many quotas and it limits transactions per second.

there is a big difference between uploading one large file versus many small files.
and rclone mount is more like uploading a lot of small files.

you have a somewhat slow upload, approx. 10 times slower than mine, so your performance will be less.

as i said, i normally do not use mount, as it is so slow compared to rclone sync.

my only use for rclone is to backup files, as such no need to mount.
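a backup without a mount can be as simple as this (the source folder and remote folder names are just placeholders):

```
rclone sync "C:\important files" gdrive:backup -P
```

note that sync makes the destination match the source, deleting extra files on the remote; use rclone copy instead if you never want deletions.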

Yes, but even with rclone copy "B:\file 1" gdrive:\ -P the time estimate is around 16 minutes for a 2.26 GB (2,427,945,044 bytes) file and the upload hovers around 2.3MB/s... Clearly, this isn't right, considering that uploading the same file via a web browser takes about 2-3 min (upload around 11MB/s). I even get around 11.6MB/s with FileZilla when uploading about 4-5 files at once.

2020/03/24 16:13:55 INFO :
Transferred: 2G / 2 GBytes, 100%, 83.467 MBytes/s, ETA 0s
Transferred: 1 / 1, 100%
Elapsed time: 24.5s

where in the world are you? what country?

[deleted] I'm basically trying to get the same upload speed with rclone as I do with a regular Chromium web browser. As we've discussed, now I get maybe 20-25% with rclone.

I'm trying to figure out how to create my own client ID and see if that helps...

well, it is well known that europe is having a lot of problems with heavy internet usage.
for example, netflix and youtube are limiting video bandwidth.

do you pay google or are you using it as a free service?
perhaps google is doing the same. intentionally slowing connections.
you get what you pay for...

my suggestion is to get a free trial at wasabi and do some testing.
they have endpoints in europe and the united states.

let me know if you set up wasabi as an endpoint and we can compare by doing some testing.

If what you say is true, then how would it be possible to get 4-5 times the speed when uploading via Chrome? In other words, when I upload through rclone I get 2MB/s; when I upload through Chrome I get 11MB/s... and I'm still in Europe... Therefore, this cannot be due to heavy usage, distance from servers, etc. This is clearly a settings thing. Probably rclone settings. Perhaps PC settings.

Well, thanks for your help so far. If I don't figure it out, I'll create a new topic.

well, let's call a gdrive expert.

calling @thestigma...

1 Like

Yes, if you disable the write-cache this is what it will do.
Do however be aware that functionality of a mount will be very limited like this. It will not be able to support all operations that an OS expects it to do - so it will probably not work well with most applications that try to use that mount directly to write data. Additionally, with no cache you have basically no security against any failed uploads (aside from manually handling it). rclone can't auto-retry data it doesn't have anymore because the whole thing was just streamed.

It will be fine for simple sequential uploads and downloads though. Read-only operations like streaming and such will also be fine. It just has very limited read+write mode handling without a cache and a write-cache is absolutely necessary to provide a fully compatible interface between OS and cloud.

As for why it still gives you errors in simple copies without cache.... well, I'm not entirely sure on this. I suspect it should be a warning at most and not an error. It also happens to local remotes which really should not be happening. @ncw an elaboration on that would be nice. I dropped the ball on this discussion with you last time.

There is no point to compare between the web-interface and the external API. These are totally different things, and the servers you talk to can be far apart.

That said - I think we can solve this problem for you most likely.

Start by adding this to your rclone.conf file (at the bottom of your Gdrive block):
chunk_size = 128M
(64M is also decent if you don't have a lot of memory on the system).
This will drastically increase upload bandwidth utilization compared to the default 8M, which is IMO very insufficient for faster connections. I can explain the technical what and why of this if you are interested - but for now let's just say this can easily add +70% or more upload (it does not affect download).
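A minimal sketch of what the edited config block might look like (the remote name and the other fields are illustrative - keep whatever your existing [gdrive] section already contains and just append the chunk_size line):

```
[gdrive]
type = drive
scope = drive
token = {"access_token":"..."}
chunk_size = 128M
```

rclone reads this file at startup, so restart the mount after editing it.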

If you have a gigabit connection (which it looks like) Gdrive can saturate that whole pipe with the right settings (well, assuming that you aren't transferring only small files at least).

Go do a new test with that setting (test with one or more large files above 100MB please). Report the results. I will suggest further improvements if we can't push this thing way way up :smiley:

EDIT: But it is worth noting we have seen some temporary dips in performance across many major providers in the last few days (including Gdrive) - probably due to increased load related to corona (this has also been in the news) - so don't trust any (poor) results until you have tested a few times, as they can be temporary hiccups these days. Seems to work well for me again, at least over the last 24 hrs.

1 Like

Do you mean edit an rclone.conf file? Where is it located? Please kindly provide more details.


perhaps he can add that to his rclone command instead of the config file.
that would let him tweak it while testing?

Yes, true.

--drive-chunk-size 128M
would accomplish the same thing as a command flag.
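note the unit suffix matters - a bare number is read as KiB, not MiB - so the full command would look something like (same placeholder file path as above):

```
rclone copy "B:\file 1" gdrive: -P --drive-chunk-size 128M
```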

I just did the following: rclone copy "B:\file 1" gdrive:\ -P --drive-chunk-size=128. This gives me the following error: 2020/03/24 22:11:10 Failed to create file system for "gdrive:\\": drive: chunk size: 128k is less than 256k.

Never mind, I see the mistake.