RPi 4 DIY NAS advice

Hi Guys,

Looking for advice on how to build a DIY NAS with the RPi 4 and Rclone.
I have a USB 3 2TB drive I'm going to use as the storage and Rclone cache.
My hope is to create a "Share" directory on the attached drive (served over SFTP and/or SMB, since both offer encryption over the network), accessible on the RPi, that has three folders: Local, Cloud, and Mirrored.

Then I'm trying to mount gdrive (through an encrypted remote that points at the gdrive remote) onto that Cloud directory. Ideally, Mirrored should exist both locally and on gdrive, whereas Cloud should only exist on gdrive. The point of wrapping them in a single share is so that server-side copy will hopefully work, instead of having to copy files between separate network mounts for individual shares (cloud/local/mirrored). Since I have a slow connection, I assume data will be cached locally for 2-4 hours before an upload starts, in case I need to modify the files locally, and ideally it stays available for editing while uploading if necessary.
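So the layout I'm aiming for on the share is roughly this (paths match my mount command below, names are just what I have in mind):

/mnt/hda/cs/          # the directory shared over SMB/SFTP
    local/            # only on the USB drive
    cloud/            # rclone mount of the encrypted gdrive remote
    mirrored/         # on the USB drive and replicated to gdrive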

Right now I'm testing with a local rclone mount and then sharing the parent directory over SMB, with info-level logging on. I tried it a bit ago and my copy to the mounted FUSE drive seemed to hang.
I'm open to other switches, or to better ways to accomplish my goals. I originally thought about rclone serve webdav instead of mount, but was hoping to accomplish the three-primary-folder goal.
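Something like this is what I had in mind for the serve approach (address and flags are illustrative, I haven't tested this):

rclone serve webdav tenc: --addr :8080 \
  --config /mnt/hda/.rclonedata/rclone.conf \
  --vfs-cache-mode writes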

Mount command looks like:

/usr/bin/rclone mount tenc: /mnt/hda/cs/cloud \
  --allow-other \
  --user-agent='Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.92 Safari/537.36' \
  --config /mnt/hda/.rclonedata/rclone.conf \
  --cache-dir /mnt/hda/.rclonedata/cache/ \
  --cache-db-path /mnt/hda/.rclonedata/cachedb/ \
  --cache-db-purge \
  --cache-writes \
  --cache-tmp-upload-path /mnt/hda/.rclonedata/uploadcache/ \
  --cache-tmp-wait-time 4h \
  --use-mmap \
  --dir-cache-time 168h \
  --timeout 1h \
  --umask 002 \
  --poll-interval=1m \
  --vfs-cache-mode writes \
  --vfs-read-chunk-size 64M \
  --vfs-read-chunk-size-limit 1024M \
  --tpslimit 10 \
  --tpslimit-burst 10 \
  --attr-timeout 1s \
  --log-level INFO

Config looks like:

[t]
type = drive
client_id = XXX
client_secret = XXX
scope = drive
acknowledge_abuse = true
token = XXX
root_folder_id = XXX

[tenc]
type = crypt
remote = t:/encdrive
filename_encryption = standard
directory_name_encryption = true
password = XXX
password2 = XXX

Hi,

I don't think you should share a mounted drive. That's way too many layers on top of each other. If anything, it's going to be extremely slow.

Now, your idea of having a local share that is replicated to the cloud is perfectly fine. SMB shares from a USB disk on the Raspberry Pi work fine. Then you run a cron job to sync the folder with the cloud every hour or so.
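For example, a crontab entry along these lines (paths are illustrative, reusing the config path from your mount command):

0 * * * * /usr/bin/rclone sync /mnt/hda/cs/mirrored tenc:mirrored --config /mnt/hda/.rclonedata/rclone.conf --log-file /mnt/hda/.rclonedata/sync.log --log-level INFO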

If you need a cloud-only folder, you'd better mount it directly on the machine where you need it. Sorry, I don't think there's a usable way to share a FUSE mount.

Just an update and a question.

I went with the gdrive -> crypt -> rclone VFS mount -> mergerfs -> SMB with upload script solution.
Seems to be working well so far, even over SMB.
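Roughly, the pieces look like this (paths are illustrative and options trimmed, not my exact setup):

# merge the local folder and the rclone mount into one tree; new files land on the local branch
mergerfs -o allow_other,category.create=ff /mnt/hda/local:/mnt/hda/cs/cloud /mnt/hda/merged

# upload script run on a schedule: move anything older than a few hours up to the crypt remote
/usr/bin/rclone move /mnt/hda/local tenc: --config /mnt/hda/.rclonedata/rclone.conf --min-age 4h --transfers 2 --log-level INFO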

One thing I'd like to ask: what's the expected behavior if a file gets moved locally by mergerfs for organization purposes while it's mid-upload, with the upload script sending it to the encrypted gdrive?

I'm going to test it after this long sync completes, but I thought I'd get input first, in case there are any suggested config changes before I try.
