After copying files from Google Drive to Box via rclone, the copied files only show up in the rclone mount once I remount the chunker remote after the copy has finished. The files are visible in the Box web interface from the start of the copy.
#### Run the command 'rclone version' and share the full output of the command.
rclone v1.63.1
os/version: debian 11.7 (64 bit)
os/kernel: 5.10.0-22-amd64 (x86_64)
os/type: linux
os/arch: amd64
go/version: go1.20.6
go/linking: static
go/tags: none
#### Which cloud storage system are you using? (eg Google Drive)
#### The command you were trying to run (eg rclone copy /tmp remote:tmp)
/usr/bin/rclone mount chunker: /path/to/mountonlocaldisk --config /path/to/rclone.conf -vv --use-mmap --vfs-cache-mode full --cache-dir=/path/to/cachefolderonlocaldisk --vfs-read-ahead 512M --buffer-size=2G --vfs-cache-max-size=2000G --tpslimit 10 --vfs-fast-fingerprint
rclone copy -P gdrive:/Media/"TEST FOLDER"/ chunker:/Data/Media/"TEST FOLDER"/ --transfers 5 --bwlimit 50M
#### The rclone config contents with secrets removed.
Unless you have a very specific workflow for your mount, your values do not make sense and can be counterproductive and/or even dangerous.
--buffer-size is applied per open file. With your settings, if some program opens 20 files, rclone will need 40GB of RAM. If you don't have that much, the result can even be a system crash :)
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
With your values, every file you open - even to read just one byte - will result in rclone reading 2.5GB from the remote.
I use these flags for streaming. I use them to preload files from the remote into the disk cache, which are then simply uploaded to the player. Sometimes I start streaming from the remote and a file download starts on my server at the same time. Since it's a gigabit server up/down, the download has priority and uses all the available bandwidth, making reads from the remote slower. With some buffer already downloaded, the server uploads that to the player and slowly keeps refilling the buffer from the remote until the simultaneous file download is finished.
When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.
When using this mode it is recommended that --buffer-size is not set too large and --vfs-read-ahead is set large if required.
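As a rough sketch of what that advice looks like in practice (the sizes below are only illustrative starting points, not values tuned for your setup), something like this keeps the per-open-file RAM cost small while the read-ahead lives in the disk cache:
rclone mount chunker: /path/to/mountonlocaldisk --config /path/to/rclone.conf --vfs-cache-mode full --cache-dir=/path/to/cachefolderonlocaldisk --buffer-size 32M --vfs-read-ahead 256M --vfs-cache-max-size 2000G --tpslimit 10 --vfs-fast-fingerprint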
You can also read more about these two flags in this thread:
OK, I'll play around with the mount commands and follow the docs.
Unrelated to the post topic, which you solved (thanks for that - I removed the --dir-cache-time of 1000h and left it at the default 5m): is there a use case for the box flag --box-upload-cutoff?
I am currently uploading and chunking 30GB files. The multipart upload is running with the default part size of 64Mi, and the chunker split size is set to 4.8GB.
I received an error: multipart upload failed to upload part: client connection refused. This was with a bandwidth of 50 MB/s. I am now trying 30 MB/s; so far so good. Maybe my server needed more time to verify the MD5 hash and took longer than rclone was willing to wait?
You do not want to set --vfs-read-ahead massively large. It would work for streaming one video, but when you for example start watching film A, then change to B and finally decide on C, rclone has 3x --vfs-read-ahead to finish. At some stage, depending on your usage pattern, you can choke your whole connection.
What you want is a jitter-free watching experience, so there is always some data read ahead to cover temporary network slowdowns - but not so much that you download things you don't need. When you set it to 2GB, merely clicking on a file makes rclone download that 2GB.
But of course you can experiment and find the values that work best for your specific usage.
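If you want to see what the cache is actually doing while you experiment, one option (assuming you start the mount with the remote control enabled via --rc, which listens on localhost:5572 by default) is to ask the running mount for its VFS stats:
rclone rc vfs/stats
That should report the disk cache usage, so you can compare how different --vfs-read-ahead values behave with your real usage pattern.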
This flag tells rclone when to use multipart upload (for files larger than the cut-off size). The default is 50MB. If you set it too low, uploads of small files can suffer, as the number of simultaneous uploads is limited by the number of transfers - and this value is shared between multipart and individual file uploads.
Box does throttle, but it is based not on bandwidth but on the rate of API calls - even though these two can be related. You can exceed the API call limit on a slow connection as well; it is more about latency than pure speed. It is much better and more precise to use --tpslimit to prevent throttling.
If you still see errors, lower the value; or try increasing it when all is OK.
Unfortunately the precise value can only be found empirically. For example, for Dropbox the consensus is that 12 is optimal. For Box I do not think we know yet :)
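If you ever want to set it explicitly, it is just another flag on the copy. The value below is simply the default written out, shown only to illustrate where the flag goes - with 4.8GB chunker pieces every upload is far above the cutoff anyway, so multipart will be used regardless:
rclone copy -P gdrive:/Media/"TEST FOLDER"/ chunker:/Data/Media/"TEST FOLDER"/ --transfers 5 --bwlimit 30M --box-upload-cutoff 50M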
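As a starting point you could simply add it to your existing copy command and adjust from there; 10 transactions per second below is only a guess, not a known-good Box value. --tpslimit-burst allows short bursts above the steady limit (its default is 1):
rclone copy -P gdrive:/Media/"TEST FOLDER"/ chunker:/Data/Media/"TEST FOLDER"/ --transfers 5 --bwlimit 30M --tpslimit 10 --tpslimit-burst 10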
Great, thanks for the help. I'm monitoring the transfer with a 30MB/s bandwidth limit, no issues so far. I'll test the tpslimit and burst flags with the next transfer, or sooner if the current one messes up.
New and exciting times! Thanks again. Have a lovely day and weekend.