What is the problem you are having with rclone?
Not a problem, just a usage question.
What is your rclone version (output from rclone version)?
- os/arch: linux/amd64
- go version: go1.13.5
Which OS are you using and how many bits (eg Windows 7, 64 bit)?
CentOS 7 minimal VM running on a VMware ESXi 6.5 U1 hypervisor
Which cloud storage system are you using? (eg Google Drive)
The command you were trying to run (eg rclone copy /tmp remote:tmp)
A log from the command with the -vv flag (eg output from rclone -vv copy /tmp remote:tmp)
Hey all, quick question. My understanding is that --vfs-read-chunk-size sets the size of the first chunk downloaded when a file is requested, and subsequent chunks then grow (doubling) until they reach --vfs-read-chunk-size-limit. The general consensus seems to be that 128M is a good value; that's what mine is set to, and it works well for streaming.
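If I've read the docs right, the ramp-up for a single sequentially-read file looks something like this sketch (plain shell arithmetic; the 128M start and 2G limit are just example values, not anything from my config):

```shell
#!/bin/sh
# Illustrative only: chunk sizes rclone would request while reading one file,
# assuming --vfs-read-chunk-size 128M and --vfs-read-chunk-size-limit 2G.
# After the limit is reached, real rclone keeps reading at the limit size;
# this loop just shows the doubling ramp-up.
size=134217728                     # 128M initial chunk
limit=$((2 * 1024 * 1024 * 1024))  # 2G limit
total=0
while [ "$size" -le "$limit" ]; do
  echo "request chunk of $size bytes"
  total=$((total + size))
  size=$((size * 2))
done
echo "bytes fetched during ramp-up: $total"
```

So even a short read can trigger a large first request if the initial chunk size is large, which is what my question hinges on.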
My question is: if Plex is doing an initial scan of a newly loaded library, does that mean rclone fetches 128M for every imported item, since Plex has to read a portion of each file on import? If the answer is yes, is a lower value better for the initial scan? I'm seeing massive data usage on my cloud host during the scan. If my thinking is correct, does anyone have a recommended value for the scan, one just large enough to pull the data Plex needs in a single chunk rather than a streaming-sized one? And would that improve initial scan performance and/or reduce data consumption?
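For concreteness, here's the kind of scan-only mount I had in mind (hypothetical values and placeholder names: remote:media and /mnt/media are not my actual remote or mount point; the small chunk sizes are guesses, which is exactly what I'm asking about):

```shell
# Hypothetical mount tuned for an initial Plex scan rather than streaming:
# a small initial chunk so each file Plex probes costs little bandwidth.
rclone mount remote:media /mnt/media \
  --vfs-read-chunk-size 1M \
  --vfs-read-chunk-size-limit 16M \
  --read-only \
  --daemon
```

The idea would be to mount like this for the scan, then remount with the 128M streaming settings afterwards.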
I can test various scenarios, but I didn't want to reinvent the wheel if the answers are already known.
Thanks in advance.