What do you think about these cache mount settings?

What is the problem you are having with rclone?

Error when uploading consecutive files to Nextcloud

What is your rclone version (output from rclone version)


Which OS you are using and how many bits (eg Windows 7, 64 bit)

Linux X64, Slackware 14.2

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone mount --cache-dir $PATH_ROOT \
    --buffer-size 256M \
    --dir-cache-time 72h \
    --poll-interval 1m \
    --drive-chunk-size 512M \
    --vfs-read-chunk-size 128M \
    --vfs-read-chunk-size-limit off \
    --cache-chunk-size=1M \
    --cache-info-age=1M \
    --cache-chunk-total-size=500G \
    --cache-workers=20 \
    --cache-db-path=$PATH_CACHE \
    --cache-chunk-path=$PATH_CACHE \
    --cache-tmp-upload-path=$PATH_UPLOAD \
    --cache-tmp-wait-time=1h0m0s \
    --cache-db-wait-time=1m0s \
    --allow-other --fast-list \
    --uid=1000 --gid=1000 --umask=7 \
    --vfs-cache-mode writes \
    --log-level DEBUG --log-file=/mnt/user/logs/rclone_test.log

Hello, I am aiming to mount Google Drive as a cached mount for Plex and Nextcloud.

I think --cache-tmp-wait-time works very well with the cache feature; the delayed upload never felt slow. But sometimes the upload fails when I drag multiple files into Nextcloud. I wonder which part of my setup I should fix. Or, if you have recommended cache upload settings, please share them.

Thank you to everyone who read this question.

You should make that 0M as the cache does its own memory thing.

To pull a 1G file, that requires 1000 API calls. Are you setting it that small for a reason?
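For scale, the arithmetic behind that claim can be sketched like this (the variable names are mine; real chunk counts round up, and rclone may retry, so this is a lower bound):

```shell
# With --cache-chunk-size=1M the cache backend fetches data in 1 MB chunks,
# roughly one ranged request to the backend per chunk.
FILE_MB=1000     # a 1 GB file, expressed in MB
CHUNK_MB=1       # --cache-chunk-size=1M
echo "$(( FILE_MB / CHUNK_MB )) API calls"   # prints "1000 API calls"
```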


I think you quoted the wrong thing?
I assume you meant

    --cache-chunk-size=1M

But your point is very valid for this, and I agree that is too low for typical use-cases. Unless your bandwidth is painfully slow maybe...



    --cache-info-age=1M

I'm not even sure what 1M indicates here. It's a duration, so assuming it works at all, it presumably means a month? I've never tried that suffix for a duration before, but it seems plausible enough.


Wow, what a great tip!

I set it up because I wanted to store the cache for a long time (1 month). As an rclone newbie, I'd like to ask carefully: is this an upload-related setting?

I have enough bandwidth. Is this the setting associated with the upload? I'll try whatever settings you recommend. :slight_smile:

(As mentioned above)
I set this up to cache for a month. If that's not typical usage, please point me in the right direction.

Thank you :slight_smile:

That is perfectly fine. I assume Ani just quoted the wrong line here.

No, it is associated with the download. What this effectively means is that the cache will be downloading (and caching) data in 1MB chunks. So, as Ani says, if you download 1GB the cache will have to ask Google for 1MB a thousand times. This is very inefficient, both in terms of bandwidth utilization and API usage.

I would say the ideal chunk size is about as much data as you can download in 1-3 seconds. For example, on my 162Mbit connection (around 18MB/sec in practice) I might use something like 20, 30 or 40MB.

  • A higher chunk size is good because it is more efficient (bandwidth utilization and API calls)
  • A lower chunk size is good because caching is more granular, and also it will reduce the minimum time required to be able to start reading a file (because the first chunk must download fully before it can be delivered).

If you have absolutely no idea what I'm talking about, just set 10M :slight_smile:
That is a good middle-ground for most people.
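That 1-3 second rule of thumb can be turned into a quick back-of-envelope range (a sketch; the 18 MB/s figure is the measured rate mentioned above, and the variable name is mine):

```shell
# Rule of thumb: chunk size ~= sustained download rate x 1-3 seconds.
RATE_MB_PER_S=18     # ~162 Mbit/s line, ~18 MB/s in practice
echo "${RATE_MB_PER_S}M to $(( RATE_MB_PER_S * 3 ))M"   # prints "18M to 54M"
```

That range is consistent with the 20-40MB suggestion above; slower connections shrink it toward the 10M middle-ground.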

Since you mentioned upload though...

    --drive-chunk-size 512M

This is a similar setting, but one that relates to upload. Larger is more efficient (effectively faster), but the downside is that each active transfer can use that much memory. Since you are using the default of 4 transfers, that means rclone may use up to 512M x 4 = 2GB just for upload buffers. This may be a bit excessive... but it's not "wrong", as long as you have that much memory to spare.
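The memory math there, spelled out (a sketch; 4 is rclone's default --transfers, and the variable names are mine):

```shell
# Upper bound on upload buffer memory: per-transfer chunk size x transfers.
DRIVE_CHUNK_MB=512    # --drive-chunk-size 512M
TRANSFERS=4           # rclone's default --transfers
echo "$(( DRIVE_CHUNK_MB * TRANSFERS ))M"   # prints "2048M", i.e. 2GB
```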

As a general guide:
  • 64M - IMO the best compromise between speed and memory
  • 128M - close to best performance
  • 256M - the highest value that gave any significant improvement in my testing, but not much faster than 128M
  • anything above 256M - overkill; probably no noticeable difference. Don't you have anything better to use your RAM for? :smiley:

How much this matters does vary somewhat with your maximum bandwidth, though. Someone with a gigabit connection would probably get more out of using 256M or 512M than I do...
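Folding those tiers back into the original command, the change would look like this (a sketch; 128M is just one reasonable pick from the list, not a tested recommendation for this setup):

```shell
# Replace the original upload chunk flag:
#   --drive-chunk-size 512M    # up to 512M x 4 transfers = 2048M of RAM
# with the "close to best performance" tier:
#   --drive-chunk-size 128M    # up to 128M x 4 transfers = 512M of RAM
```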


    --dir-cache-time 72h

One last point - I'd just leave this at its default when using the cache backend, because the cache backend is already caching that data for you (via the info-age flag). There's no point trying to cache the same data twice in two places. If the VFS layer needs to update directory data, it will just ask the cache backend for the answer. Any value less than 1M here would be meaningless (not that I'm suggesting you make it higher... rather the opposite).

Leaving it as-is probably won't blow anything up, but you are just creating another potential point of failure. Let the cache backend handle it - as long as you use it, that is.
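If I read that right, the concrete change to the original command is simply dropping the VFS directory-cache flag and letting the cache backend's own setting stand (a sketch of my interpretation, not a verified configuration):

```shell
# Before: two layers cache directory listings independently.
#   --dir-cache-time 72h      (VFS layer)
#   --cache-info-age=1M       (cache backend)
# After: remove --dir-cache-time (i.e. leave it at its default) and keep only:
#   --cache-info-age=1M
```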


Thank you god :slight_smile:

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.