ZFS on rclone mount experiment

I don't really have a problem but would like some input on an experiment I want to try with ZFS. This filesystem can use regular files as the devices (called vdevs for those unfamiliar), and I'm thinking it would be neat in a completely experimental way to create a volume on cloud storage. ZFS also allows for faster SSD devices to be added to a pool of vdevs for read and write caching.
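For anyone who hasn't played with file-backed vdevs before, this is roughly what I mean, using local files (pool name and paths are just placeholders):

```
# create two sparse 1GB files and build a mirrored test pool on top of them
truncate -s 1G /tmp/vdev0.img /tmp/vdev1.img
sudo zpool create testpool mirror /tmp/vdev0.img /tmp/vdev1.img
zpool status testpool
```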

Before I start my experiment, I'd like some input on what file size for these vdevs would work best with an rclone mount of Google Drive storage, and whether there are any obvious limitations that could be worked around.

My initial thought is to create 10 folders in Google Drive and fill each with 1000 sparse/empty 1MB files, then mount the Google Drive with rclone mount so it is accessible to ZFS. There are two possible strategies (rough sketch after this list):

  1. each file in a folder becomes a vdev in a raidz group, so writes are striped across all 1000 files in that folder
  2. every file in every folder becomes a vdev in one flat pool, so data is written sequentially or randomly across the files/vdevs
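Very roughly, assuming the mount lives at /mnt/gdrive with folders dir00 through dir09 (pool name, paths and file names are all placeholders, not tested), the two layouts would look something like this:

```
# strategy 1: one raidz group per Google Drive folder
sudo zpool create gpool \
  raidz /mnt/gdrive/dir00/*.img \
  raidz /mnt/gdrive/dir01/*.img
# ...and so on, one raidz line per folder

# strategy 2: every file becomes its own top-level vdev in a flat pool
sudo zpool create gpool /mnt/gdrive/dir*/*.img
```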

I will then attach a 128GB SSD as an L2ARC (read cache) and another 128GB SSD as an SLOG (effectively a write cache). This is in addition to the in-memory caching from 64GB of system RAM.
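Attaching those would just be the usual zpool add commands (device paths are placeholders):

```
# add one SSD as read cache (L2ARC) and the other as a separate intent log (SLOG)
sudo zpool add gpool cache /dev/disk/by-id/ssd-l2arc
sudo zpool add gpool log /dev/disk/by-id/ssd-slog
```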

The goal will be to minimize the data that is stored locally: frequently read files in the ZFS volume will be held in the L2ARC, and writes will only be cached in what is effectively an SSD write queue while the data is flushed to disk (Google Drive files in this case).
I will be avoiding vfs caching unless it's unavoidable, and I've chosen 1MB as a possible starting point so each segment can be downloaded and uploaded in a "reasonable" amount of time. Would smaller files be better?
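For reference, the kind of mount I had in mind is something like this (remote name and values are just my first guesses, not recommendations):

```
# mount Google Drive with as little local caching as possible
rclone mount gdrive: /mnt/gdrive \
  --allow-other \
  --vfs-read-chunk-size 1M \
  --buffer-size 1M \
  --dir-cache-time 1h
```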

It would be wonderful if only the section of a Google Drive file that needed to be read or written was downloaded or uploaded, but I gather that requires full vfs caching. Have I misunderstood this, at least in the Google Drive case? If a single 1GB file can be streamed from bytes 512MB to 524MB, and written to at random, without the vfs downloading the whole file, that would be great to know. Maybe using the chunker remote could be an option...

I'm also curious about any Google Drive service limitations I'm not aware of. I already know that the maximum upload per account is 750GB, which isn't a problem for this experiment, but I can't find any limit on how many times a file can be downloaded and re-uploaded in a 24-hour period.

Are there any rclone mount settings that could improve performance in this wild use case?

hello and welcome to the forum,

most/all cloud providers do not support random access writes.
so rclone mount does not work like that.

if you plan to write to the cloud, you need to use --vfs-cache-mode=writes/full
and rclone will have to cache the entire file locally and upload it as a single complete file.
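for example, something like this (remote name and cache path are placeholders):

```
# cache writes on the SSD, then upload each file as a whole
rclone mount gdrive: /mnt/gdrive \
  --vfs-cache-mode writes \
  --cache-dir /ssd/rclone-cache
```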

I am not sure I'm quite getting what problem you are trying to solve.

I use vfs-cache-mode full and my cache-dir is a 1TB SSD dedicated to it. Anything already cached is served from the SSD.

No documentation exists and Google won't tell you.

Ultimately, the goal of the experiment is to create a ZFS volume that another ZFS volume can be replicated to, as though it were a native SSD/HDD volume. Without that, the only reasonable method is to upload a single 700GB+ snapshot file and then periodic incremental backups that have to be kept around forever, even if the deleted data is no longer relevant 5 years later.
With a real pool on the cloud, freeing up space is as simple as deleting a monthly or yearly snapshot, so uploading a 700GB initial snapshot once a year wouldn't be necessary anymore.

Since random writes aren't supported and at least vfs write caching is required, how can the rclone vfs cache be encouraged to keep the local cache path as small as possible? In my rclone mounts, I often see the cache grow beyond the size limit when files have a "lock" but aren't actively being read/written.
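The knobs I'm aware of are the cache size and age limits, something like this (values are just examples):

```
# keep the local vfs cache as small and short-lived as possible
rclone mount gdrive: /mnt/gdrive \
  --vfs-cache-mode writes \
  --cache-dir /ssd/rclone-cache \
  --vfs-cache-max-size 10G \
  --vfs-cache-max-age 1h \
  --vfs-cache-poll-interval 1m
```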

It already does that. You'd have to provide a log file to diagnose it, so it's best to make a help and support post for that.

Ok, I was hoping that someone with deep knowledge of the Google Drive remote might find the concept interesting and share their thoughts on what could make this work. I don't specifically need support, since I can play around with the remote settings I've become familiar with from using rclone in the supported way.

Google drive doesn't support updating files - you have to upload them completely.

I think you'll need to make the chunks small enough that you don't mind each one being uploaded completely.

Your suggestion of 1MB sounds like a good start.

The first strategy sounds like a bad idea, as all the files will always need uploading.

If I understand it correctly, the second strategy sounds like a better idea.

Rclone can read segments from a file no problem, but files can only be written all at once.
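You can see a ranged read from the command line with something like this (remote and file names are made up):

```
# read 12 MiB starting at a 512 MiB offset without fetching the whole file
rclone cat --offset $((512*1024*1024)) --count $((12*1024*1024)) \
  gdrive:vdevs/dir00/file0001.img > segment.bin
```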

The chunker remote doesn't support this yet.

I did write an experimental VFS mode for doing exactly this a while ago, which presented a large file that could be read and written at random. It was stored in the cloud as lots of small 1MB files, and files that were all zeroes were simply omitted.

I then loop mounted the file, formatted it as ext3 and used it as a disk. It worked, but it was very slow, and bugs in my code kept crashing the kernel, so it was painful to debug!
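The loop mount part is just the usual dance (paths are placeholders):

```
# attach the big file as a block device, format it, and mount it
sudo losetup -f --show /mnt/gdrive/bigdisk.img   # prints the device, e.g. /dev/loop0
sudo mkfs.ext3 /dev/loop0
sudo mount /dev/loop0 /mnt/zloop
```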

Ideally we'd be able to delegate this to the chunker backend now, which would be possible if we added random read/write support to the chunker backend.

That is really cool to hear that the chunker worked even to a small degree in that way. I'll just pause my curiosity for now. I can see how VFS would be useful to absorb writes when the user rate limit is exceeded.


What user rate limit? Are you talking TPS or upload quotas?

Upload quotas. If this just wrote directly to Google Drive, it would crash in a spectacular way.

It would be cool if there was a backend that could write to a series of remotes that may or may not be the same cloud storage but using different tokens based on log output, you know, because certain accounts might not have permission to a subfolder :wink:
