Raspberry Pi 4B 4GB config help

Hi

I am new to Linux and had my fair share of difficulties setting up Raspbian Buster and installing SABnzbd, Sonarr, Radarr and rclone.

Now I would love some assistance with optimizing the (working but slow) rclone mount.

I have a Raspberry Pi 4B with 4 GB of RAM and a 32 GB SD card (Class 10).
Attached to a USB 3.0 port I have a 500 GB HDD.

SABnzbd, Sonarr, Radarr and rclone are all running under the user pi.

The workflow is as follows:

  1. SABnzbd downloads to the USB HDD.
  2. SABnzbd extracts to /mnt/google-drive/upload dump, where /mnt/google-drive is the rclone mount.
  3. Sonarr or Radarr move the files from /mnt/google-drive/upload dump to the appropriate location on the Google Drive.

First, do you think this is a good idea?

Second, if I monitor the ethernet port of the Raspberry Pi on my router, I see it topping out at roughly 250 Mbit/s upload. I do have a gigabit connection. Is this to be expected when connecting to Google Drive?

Third, below is my rclone mount service file, /etc/systemd/system/rclone.service.
What could be optimized for my situation? Is there a way to make sure my SD card doesn't get written to too much?

I tried googling and reading the wiki and the forums, but there is so much conflicting information, and to be honest I still don't understand the difference between cache and vfs-cache.

# /etc/systemd/system/rclone.service
[Unit]
Description=Google Drive (rclone)
AssertPathIsDirectory=/mnt/google-drive

[Service]
Type=simple
ExecStart=/usr/bin/rclone mount google-drive:/ /mnt/google-drive \
	--config=/home/pi/.config/rclone/rclone.conf \
	--allow-other \
	--buffer-size 1G \
	--dir-cache-time 96h \
	--umask 002 \
	--vfs-cache-mode writes \
	--vfs-cache-max-age 1m \
	--vfs-cache-max-size 5G

ExecStop=/bin/fusermount -u /mnt/google-drive
Restart=always
RestartSec=10

[Install]
WantedBy=default.target

Any help is greatly appreciated.

FIRST
It's very hard to say if the setup is ideal unless we have a much clearer idea of your specific use-case, but in general the setup seems sensible enough.

SECOND
What speeds you get on Gdrive is very use-case dependent. I seem to be able to max out pretty much any connection I have been able to experiment on, so bandwidth to the Google systems does not seem to be an issue. However, Gdrive has limits on how many file operations it can do per second (about 2/sec), so the result is that while large files can be very fast, tons of small files can be very slow. This is not rclone's fault but rate limiting on Gdrive. Consider zipping up very large collections of tiny files if needed. (A transparent system to do this automatically on the fly may eventually be developed, as work is already well underway on a compression-remote.)
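
To illustrate the idea, here is roughly what that looks like from the shell (paths and names are made up for the example):

# pack a folder of many tiny files into one archive on local storage first
tar -czf /mnt/usb/photos-2019.tar.gz -C /mnt/usb/downloads photos-2019
# then upload the single large file, which Gdrive handles much faster
rclone copy /mnt/usb/photos-2019.tar.gz google-drive:archive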

Also, --drive-chunk-size will heavily impact performance on large files. By default it uses a pitiful 8MB per chunk, which means TCP never really gets to ramp up to full speed on a fast connection. Set this as large as you have memory for (something you will have to be careful about on a Pi). 64MB is decent, 128MB is nearing ideal, and more than 256MB has very little benefit. Going up from 8MB to a reasonable number can very easily double your throughput.
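
As a sketch of what that looks like on a manual transfer (the paths here are illustrative, not from your setup):

# each transfer buffers up to one chunk in RAM,
# so 4 transfers x 64M means roughly 256M of peak memory use
rclone copy /mnt/usb/downloads google-drive:test \
	--drive-chunk-size 64M \
	--transfers 4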

Also, since you are using a write cache, you really need to consider whether that is a bottleneck. Remember that all written data has to be written to that cache first before it gets transferred, which means you can never transfer faster than the medium the cache lives on can write. You don't really specify it here unambiguously, but if the cache is on an SD card, even a fast one, it is unlikely to keep up with a gigabit connection. Consider whether you could use some space on the USB HDD for this (or any other storage on the network); any decent HDD should be able to saturate a gigabit connection pretty well. I am also not certain that using an SD card for large amounts of regular writing is ideal. I don't think SD cards generally have anywhere near the write-endurance of an SSD, much less an HDD. Months or years of heavy use could literally burn out your card on write-endurance, so at the very least check what write-endurance your card has so you know what you are doing and won't get a nasty surprise later.
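
If the HDD route works for you, relocating the cache is a single flag. A minimal sketch, assuming a mount point for the HDD (the path is a guess at your layout):

# keep the VFS write cache off the SD card by pointing it at the HDD
rclone mount google-drive:/ /mnt/google-drive \
	--vfs-cache-mode writes \
	--cache-dir /media/pi/USBDISK/rclone-cache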

THIRD
Using a 1G buffer on a 4GB system is not advisable. Each transfer can use up to that much memory, so that could easily crash rclone even with just the default 4 transfers. Besides, a 1GB buffer is massive overkill anyway. The default of 16MB is typically not a major factor in overall performance.

Use a larger --drive-chunk-size as mentioned above, but not so large that you risk running out of memory. You need (number of transfers) x (chunk size) to fit, so be reasonable.

Setting both a cache max age and a max size probably indicates you misunderstand how these work. Max age will drop anything from the cache (once rclone is done using it) that is over that age.
Max size will drop the oldest files from the cache (once rclone is done using them) when it reaches its max size.
Neither of these actually hard-limits the size of the cache. If you transfer a 10GB file, that WILL make your cache balloon to 10GB until rclone is done with the transfer (something you have to keep in mind when using limited space for the cache). Only afterwards do the limits come into play to clean the cache back down to the target size or age. It just has to be that way for rclone's cache to function...
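
If you want to see this behavior for yourself, you can simply watch the cache directory on disk while a transfer runs (the path below is rclone's default cache location; yours may differ):

# watch the VFS cache balloon during a transfer and shrink after cleanup
watch du -sh ~/.cache/rclone/vfs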


Thank you very much for your detailed answer.
I did some further digging and it seems I have now found a decently working configuration. I tried to explain my thought process below. I hope I have understood the commands correctly?

# /etc/systemd/system/rclone.service
[Unit]
Description=Google Drive (rclone)
AssertPathIsDirectory=/mnt/google-drive

[Service]
Type=simple
User=pi
Group=pi
ExecStart=/usr/bin/rclone mount google-drive:/ /mnt/google-drive \
#specifies remote and mountpoint
--config=/home/pi/.config/rclone/rclone.conf \
#tells rclone where the config is; not sure if necessary, because it may be the default location anyway
--allow-other \
#allows other users to access remote
--umask 000 \
#allows any user to have full file privileges (again unsure if necessary, but I don't need Linux permissions and in fact I find them rather complicated and annoying)
--vfs-cache-mode writes \
#enables vfs cache mode writes, which seems to be necessary for normal file system operation.
--vfs-cache-max-size=10G \
#defines the maximum size of the vfs cache. However, if a bigger file is written, the cache still grows to that file's size
--cache-tmp-upload-path=/media/pi/USBDISK/rclone/upload \
#directory where rclone will cache files for upload. In my case USBDISK, because I want to limit SD card writes.
--cache-chunk-path=/media/pi/USBDISK/rclone/chunks \
#same as above but for downloads?
--drive-chunk-size=128M \
#as per your recommendation. Will monitor if my RAM (4GB) is sufficient and maybe reduce it to 64MB.
--cache-writes \
#enables caching of writes through the filesystem
--cache-dir=/media/pi/USBDISK/rclone/vfs \
#directory of cache data, again on USB to limit SD card r/w operations
--cache-db-path=/media/pi/USBDISK/rclone/db \
#Directory of cache database, again on USBDISK
--checkers=16 \
#number of checkers to check things? Don't completely understand it, but I guess it runs fine this way.
--size-only \
#limits rclone to checking only the size of files, ignoring modification times. Seems to speed things up for Sonarr/Radarr library scans.
--dir-cache-time=5m
#time the directory structure of Google Drive stays in cache. Reduced this massively, otherwise Sonarr/Radarr would not start importing as quickly.

ExecStop=/bin/fusermount -u /mnt/google-drive
Restart=always
RestartSec=10

[Install]
WantedBy=default.target

This feels like a bad one to set on a low-memory system like a Pi. You really don't get much benefit from this in terms of writing anyway.

This is only used for uploads, and on a small-memory system I'd just remove it, as the bang for the buck here doesn't matter that much.

This does nothing on a mount and should be removed.

This does nothing on a mount and should be removed.

I think there is some confusion here about what this does. It does memory-based caching of the directory and file structure. The polling interval is 1 minute, and that is what picks up changes. You'd want this to be high, as it reduces API calls and makes things overall faster.
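
For what it's worth, the usual pattern on a polled remote like Gdrive looks something like this (the values are just an example of the "set it high" advice, not a prescription):

# directory/file structure is cached in memory for a long time;
# polling (default 1m on a mount) picks up remote changes regardless
--dir-cache-time 1000h
--poll-interval 1m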


Yeah, that's the default on Linux as far as I understand. Doesn't hurt to specify it though.

Ask Animosity, he's the Linux guy :smiley: I use Linux so irregularly that it's the sort of stuff I have to look up every time I need it.

All these only apply if you actually use the cache backend (a separate remote that you stack, not to be confused with the VFS cache). I can't actually confirm unless you share your config (redact sensitive info if you do). --cache-writes AND --cache-tmp-upload-path at the same time is not a good idea, and I would generally recommend you not use these at all unless you know you really need them. I have had far too much buggy behavior with both, and cache-writes probably doesn't even work like you expect. Ask me to elaborate on this if you are interested. If you really need the read-cache, the cache backend can do that OK (but with some drawbacks too). I stopped using the cache backend myself after much testing. Hopefully soon we will see the VFS cache pick up this functionality.
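
For reference, the cache backend is set up as its own remote in rclone.conf, stacked on top of the drive remote, roughly like this (names and values are illustrative):

[google-drive]
type = drive
# ... your existing Google Drive settings ...

[gcache]
type = cache
remote = google-drive:
chunk_size = 10M
info_age = 1d
chunk_total_size = 10G

The --cache-* flags only take effect if you mount gcache: instead of google-drive: directly.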

To the best of my understanding, a checker does the work of listing a directory and then comparing sizes, modtimes, hashes etc. to determine whether rclone needs to update files or not. As Animosity says, the flag currently does not affect a mount, and I think you get the default of 8. Transfers (down) are effectively unlimited. Transfers (up) are effectively 4. We may get settings to specify this in more detail on a mount at some point, but these settings are unlikely to be a problem for you.
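
On a direct copy or sync (as opposed to a mount), these flags do apply. For example (paths illustrative):

# checkers compare listings/sizes to decide what needs updating;
# transfers actually move the data
rclone sync /media/pi/USBDISK/media google-drive:media --checkers 8 --transfers 4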

Aside from that, I agree with Animosity's comments. chunk-size does improve performance (on larger files, upload only), but each doubling gives a reduced benefit, so 128MB is not needed by any means. 32 or 64MB will give good performance too if RAM is ever an issue. The main point is that the 8MB default is inadequate in my opinion.
This chart (at the bottom) gives you some idea of the scaling:
https://www.chpc.utah.edu/documentation/software/rclone.php


To both of you a huge thank you for patiently explaining all the stuff and helping me to build a proper configuration for rclone on my Raspi 4!
I really appreciate it.
I have implemented all your comments but left all the path settings in. Somehow, if I remove them, rclone starts using the SD card.

# /etc/systemd/system/rclone.service
[Unit]
Description=Google Drive (rclone)
AssertPathIsDirectory=/mnt/google-drive

[Service]
Type=simple
User=pi
Group=pi
ExecStart=/usr/bin/rclone mount google-drive:/ /mnt/google-drive \
--config=/home/pi/.config/rclone/rclone.conf \
--allow-other \
--uid=1000 \
--gid=1000 \
--umask 000 \
--vfs-cache-mode writes \
--vfs-cache-max-size=10G \
--cache-tmp-upload-path=/media/pi/USBDISK/rclone/upload \
--cache-chunk-path=/media/pi/USBDISK/rclone/chunks \
--drive-chunk-size=64M \
--cache-dir=/media/pi/USBDISK/rclone/vfs \
--cache-db-path=/media/pi/USBDISK/rclone/db \
--dir-cache-time=1h
ExecStop=/bin/fusermount -u /mnt/google-drive
Restart=always
RestartSec=10

[Install]
WantedBy=default.target

Lowered drive-chunk-size to 64MB. RAM doesn't seem to be an issue; however, SABnzbd unrars roughly 100 MB rar parts and creates the output file on /mnt/google-drive, so it probably doesn't make much sense to go above 100 MB for this setting.

Extended dir-cache-time to one hour. This seems like a sensible middle ground. API calls probably aren't the limiting factor anyway, since I set up my own client-id.
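
For anyone following along: the client-id lives in the remote's entry in rclone.conf, roughly like this (the values are placeholders, not real credentials):

[google-drive]
type = drive
client_id = 123456789-example.apps.googleusercontent.com
client_secret = your-client-secret
scope = drive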

I still kept all the references to the cache directories, even though I don't use a cache-backend remote. However, if they are removed, rclone seems to start using the SD card (the root filesystem) for the VFS cache. As long as those settings are in place I see no SD card usage, and the specified directories grow and vary in size when I monitor them in the file viewer.

I also googled around and couldn't find a "vfs-cache-dir" or something similar.

#--cache-writes
#--checkers=16
#--size-only \

Removed according to Animosity022's recommendation.

Again, thank you very much for your help.
Best from Switzerland.

Try removing only the below:

--cache-tmp-upload-path=/media/pi/USBDISK/rclone/upload \
--cache-chunk-path=/media/pi/USBDISK/rclone/chunks \
--cache-db-path=/media/pi/USBDISK/rclone/db \

The --cache-dir option, contrary to its name, controls the path of the VFS cache dir. If you remove that too, it will pick the default path, i.e. the home directory, for storing the cache.

Yes, that was my bad for listing it.
--cache-dir is needed for the VFS cache unless you want the default location. Keep that.

All the others refer only to the cache backend, which is another module, and should be removed if only to prevent later confusion.


Ah perfect!

Now everything is working perfectly.
So for later reference, and for anyone else looking for an rclone config on a Raspberry Pi 4:

Either go to the terminal and use the following command
sudo chmod 777 -R /etc/systemd/system

and create the rclone.service file with a text editor of your choice, using the content below (adapting the name of the remote and obviously the paths to your configuration), in the above-mentioned directory.

Alternatively, and probably in a more Linux-appropriate fashion:

sudo nano /etc/systemd/system/rclone.service

Now you can edit the file within the terminal, pasting in the content below and saving it by pressing Ctrl+X, then Y and Enter when asked.

#/etc/systemd/system/rclone.service
[Unit]
Description=Google Drive (rclone)
AssertPathIsDirectory=/mnt/google-drive

[Service]
Type=notify
User=pi
Group=pi
ExecStart=/usr/bin/rclone mount google-drive:/ /mnt/google-drive \
	--config=/home/pi/.config/rclone/rclone.conf \
	--uid=1000 \
	--gid=1000 \
	--allow-other \
	--umask=000 \
	--vfs-cache-mode writes \
	--vfs-cache-max-size=20G \
	--drive-chunk-size=64M \
	--cache-dir=/media/pi/USBDISK/rclone/vfs \
	--dir-cache-time=1h 
ExecStop=/bin/fusermount -u /mnt/google-drive
Restart=always
RestartSec=10

[Install]
WantedBy=default.target

Finally, you have to enable the automatic start-up with the following commands:

sudo systemctl daemon-reload
sudo systemctl start rclone.service
sudo systemctl enable rclone.service

rclone should now be up and should autostart on reboot. If it isn't, paste the content of your rclone.service (from rclone mount up to just before ExecStop) into the terminal and check for errors.
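
Alternatively, you can check the service's own logs with the standard systemd tools (nothing rclone-specific here):

# is the mount service running?
sudo systemctl status rclone.service
# follow its log output to catch errors
sudo journalctl -u rclone.service -f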


I might just take you up on that offer because I have a need to set up an rclone service on Linux soon - and my familiarity with Linux is so-so.

I will use your template and see how that goes. If not then I'll ping you to annoy you with questions =P
Thanks for the low-level guide. Sometimes I feel like we should have a subsection for "how-to guides for common problems" so these nuggets of useful info don't just die and disappear over time.


rclone implements notifying systemd when the mount is ready, so you can use Type=notify here.


Alright! Thanks for the addition.
Had to amend the rclone.service file with the --uid=1000 and --gid=1000 arguments.
Linux permissions are a huge and overly complicated annoyance…
Without those arguments, Radarr (which is run by the user pi (uid=1000)) was not able to import files successfully, even though umask was set.

Sigh… I will refrain from using Linux whenever possible just to avoid this permission annoyance. This whole permission business (not only related to rclone) has probably cost me around 5 hours while setting up the RPi.
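
For anyone hitting the same wall: the --uid/--gid values have to match the numeric ids of the user that Sonarr/Radarr run as, which you can check like this:

# show the numeric uid/gid for the pi user; these should match --uid/--gid
id pi
# typically prints something like: uid=1000(pi) gid=1000(pi) groups=1000(pi),...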

Lots of good information in there, thanks to all.
I am thinking of setting up my Raspberry Pi 4 as an rclone NAS, and this is really helpful.
