Raspberry Pi 4B 4GB config help

FIRST
It's hard to say whether the setup is ideal without a much clearer idea of your specific use-case, but in general it seems sensible enough.

SECOND
What speeds you will get on Gdrive depends heavily on your use-case. I have been able to max out pretty much every connection I have experimented with, so bandwidth to Google's systems does not seem to be an issue. However, Gdrive is limited in how many file operations it can do per second (roughly 2/sec), so while large files can be very fast, tons of small files can be very slow. This is not rclone's fault but rate limiting on Gdrive's side. Consider zipping up very large collections of tiny files if needed. (A transparent system to do this automatically on the fly may eventually arrive, as work is already well underway on a compression remote.)
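As an illustration of the zip-before-upload idea (the folder, archive name, and the `gdrive:` remote name are placeholders for your own setup):

```shell
# Bundle thousands of tiny files into one archive, then upload the single
# large file - one big transfer instead of thousands of rate-limited ones.
tar czf photos-2023.tar.gz ./photos-2023/
rclone copy photos-2023.tar.gz gdrive:backups/
```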

Also, --drive-chunk-size heavily impacts performance on large files. By default it uses a pitiful 8MB per chunk, which means TCP never really gets to ramp up to full speed on a fast connection. Set this as large as you have memory for (something you will have to be careful about on a Pi). 64MB is decent, 128MB is nearing ideal, and more than 256MB has very little benefit. Going up from 8MB to a reasonable number can very easily double your throughput.
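A sketch of what that looks like on the command line (source path and the `gdrive:` remote name are examples, not your actual config):

```shell
# 128M chunks let TCP ramp up on a fast link. With the default 4 transfers
# this can hold up to 4 x 128M = 512M in RAM, so watch memory on a 4GB Pi.
rclone copy /mnt/usb-hdd/data gdrive:backup \
  --drive-chunk-size 128M \
  --transfers 4
```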

Also, since you are using a write cache, you really need to consider whether that is a bottleneck. All written data has to be written to that cache before it gets transferred, which means you can never transfer faster than the medium the cache sits on can write. You don't specify it unambiguously here, but if the cache is on an SD card, even a fast one, it is unlikely to keep up with a gigabit connection. Consider using some space on the USB HDD for this instead (or any other storage on the network). Any decent HDD should be able to saturate a gigabit connection pretty well.

I am also not certain that using an SD card for large amounts of regular writing is ideal. SD cards generally have nowhere near the write-endurance of an SSD, much less an HDD. Months or years of heavy use could literally burn out your card, so at the very least check what write-endurance your card is rated for, so you know what you are doing and won't get a nasty surprise later.
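Relocating the cache is just a matter of pointing --cache-dir somewhere else; a sketch, assuming a VFS-style mount with write caching (mount point, remote name, and HDD path are placeholders):

```shell
# Keep rclone's cache on the USB HDD instead of the SD card, sparing the
# card's write-endurance and avoiding the SD write-speed bottleneck.
rclone mount gdrive: /mnt/gdrive \
  --vfs-cache-mode writes \
  --cache-dir /mnt/usb-hdd/rclone-cache
```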

THIRD
Using a 1GB buffer (--buffer-size) on a 4GB system is not advisable. Each transfer can use up to that much memory, so this can easily crash rclone even with just the default 4 transfers. Besides, a 1GB buffer is massive overkill anyway; the default 16MB is typically not a major factor in overall performance.

Use a larger --drive-chunk-size as mentioned above, but not so large that you risk running out of memory. You need (number of transfers) x (chunk size) to fit comfortably, so be reasonable.
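The memory budget above can be sketched as a quick back-of-the-envelope check (the values are example settings, not defaults you must use):

```shell
# Rough worst-case rclone memory on upload: each transfer can hold
# one drive chunk plus one read buffer at the same time.
transfers=4      # --transfers
chunk_mb=128     # --drive-chunk-size in MB
buffer_mb=16     # --buffer-size in MB
peak_mb=$(( transfers * (chunk_mb + buffer_mb) ))
echo "Approx. peak rclone memory: ${peak_mb} MB"   # 576 MB - fine on a 4GB Pi
```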

Setting both a cache max age and a max size probably indicates you misunderstand how these work. Max age will dump anything from the cache (once rclone is done using it) that is older than that age.
Max size will dump the oldest files from the cache (once rclone is done using them) when it reaches its max size.
Neither of these will actually hard-limit the size of the cache. If you transfer a 10GB file, the cache WILL balloon to 10GB until rclone is done with the transfer (something to keep in mind when using limited space for the cache). Only afterwards do the limits come into play to clean the cache back down to the target size or age. It just has to be that way for rclone's cache to function...
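For reference, a sketch of a mount combining both cleanup limits (the mount point, remote name, and the specific age/size values are examples):

```shell
# Age- and size-based cache cleanup. Note: an in-flight 10GB transfer will
# still temporarily exceed the 5G cap - these limits only govern cleanup.
rclone mount gdrive: /mnt/gdrive \
  --vfs-cache-mode writes \
  --vfs-cache-max-age 12h \
  --vfs-cache-max-size 5G
```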
