ZFS on rclone mount experiment

Google Drive doesn't support partial updates to files - you have to upload each file in its entirety.

I think you'll need to make the chunks small enough that you don't mind each one being uploaded completely.

Your suggestion of 1MB sounds like a good start.
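To put rough numbers on the trade-off: with 1MB chunks, changing a single 4KB block costs a 1MB re-upload (256x write amplification), whereas without chunking it would mean re-uploading the entire file. Smaller chunks keep that amplification down, at the price of many more objects on the remote.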

This sounds like a bad idea, as all the files would always need uploading in full.

If I understand correctly, this sounds like a better idea.

Rclone can read segments from a file no problem, but files can only be written all at once.
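To illustrate that asymmetry: reads can be done with HTTP Range requests, which cloud providers generally honour, but there is no ranged equivalent for uploads. A minimal Go sketch (the URL is a placeholder, not a real endpoint):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Reading a segment: ask the server for just the bytes we need.
	req, err := http.NewRequest("GET", "https://example.com/bigfile", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Range", "bytes=1048576-2097151") // the second 1MB of the file
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	segment, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Printf("read %d bytes from the middle of the file\n", len(segment))

	// Writing has no equivalent: to change any byte of the object
	// you have to re-upload the entire body.
}
```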

The chunker remote doesn't support this yet.

A while ago I wrote an experimental VFS mode for doing exactly this: it presented a large file which could be read and written at random, stored on the cloud as lots of small 1MB files. Chunks which were all zeroes were simply not stored, so the big file was effectively sparse.
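A minimal sketch of that scheme, using a local directory to stand in for the cloud remote - the chunk naming and layout here are my own illustrative assumptions, not what the experimental code actually did:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

const chunkSize = 1 << 20 // 1MB chunks

// chunkPath names the object holding chunk n of the virtual file,
// e.g. chunk 5 of "disk.img" lives at dir/disk.img/000005.
func chunkPath(dir, name string, n int64) string {
	return filepath.Join(dir, name, fmt.Sprintf("%06d", n))
}

// readAt fills p from offset off. A missing chunk file is treated as
// all zeroes, which is what makes the big file effectively sparse.
func readAt(dir, name string, p []byte, off int64) error {
	for len(p) > 0 {
		n, within := off/chunkSize, off%chunkSize
		data, err := os.ReadFile(chunkPath(dir, name, n))
		if err != nil && !os.IsNotExist(err) {
			return err
		}
		buf := make([]byte, chunkSize) // zero padding for missing or short chunks
		copy(buf, data)
		copied := copy(p, buf[within:])
		p, off = p[copied:], off+int64(copied)
	}
	return nil
}

// writeAt writes p at offset off, rewriting only the chunks it
// touches - a read-modify-write of a few small objects rather than
// a re-upload of the whole big file.
func writeAt(dir, name string, p []byte, off int64) error {
	for len(p) > 0 {
		n, within := off/chunkSize, off%chunkSize
		path := chunkPath(dir, name, n)
		buf := make([]byte, chunkSize)
		if data, err := os.ReadFile(path); err == nil {
			copy(buf, data) // keep the existing bytes of this chunk
		}
		copied := copy(buf[within:], p)
		if err := os.MkdirAll(filepath.Dir(path), 0777); err != nil {
			return err
		}
		if err := os.WriteFile(path, buf, 0666); err != nil {
			return err
		}
		p, off = p[copied:], off+int64(copied)
	}
	return nil
}

func main() {
	dir, _ := os.MkdirTemp("", "chunks")
	// Write 5 bytes into the middle of the second chunk...
	if err := writeAt(dir, "disk.img", []byte("hello"), chunkSize+100); err != nil {
		panic(err)
	}
	// ...and read them back through the zero-filling read path.
	p := make([]byte, 5)
	if err := readAt(dir, "disk.img", p, chunkSize+100); err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", p) // hello
}
```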

I then loop mounted the file, formatted it as ext3 and used it as a disk. It worked, but it was very slow, and bugs in my code kept crashing the kernel, so it was painful to debug!

Ideally we'd now be able to delegate this to the chunker backend, which would be possible if we added random read/write support to it.
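As a sketch of what that might look like - this interface is purely hypothetical, modelled loosely on the optional feature interfaces rclone backends already expose, not an agreed design:

```go
package sketch

import (
	"context"
	"io"
)

// WriterAtCloser is an io.WriterAt which must be closed to flush any
// partially rewritten chunks back to the remote.
type WriterAtCloser interface {
	io.WriterAt
	io.Closer
}

// OpenWriterAter is a hypothetical optional interface a backend could
// implement to advertise random-access writes. The chunker backend
// could satisfy it with a read-modify-write of only the chunks each
// WriteAt touches, and the VFS could then delegate to it directly.
type OpenWriterAter interface {
	OpenWriterAt(ctx context.Context, remote string, size int64) (WriterAtCloser, error)
}
```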