Advice wanted for writing a backend for a content addressable remote

What is the problem you are having with rclone?

I'm looking into writing an rclone backend for a remote which is implemented as a content-addressable file system (CAFS). As far as I can tell there are no existing backends for a CAFS, and having looked at a few of the existing backends I'm not sure how well suited the rclone API is to supporting one.

So I'm looking for advice or relevant resources to help me understand how to write such a backend, or to learn if this use case is really not suited to the rclone APIs.

I have already written a library that lets me represent a directory-based file system on the remote, and have used it to demonstrate storing and serving complex static websites directly from the CAFS.

The remote does not support metadata directly, so metadata for the whole directory tree and every file in it is stored in a single file. That makes it impractical, when implementing this for rclone, to rewrite the metadata on every mutating operation (e.g. on every put of a file).

So I am looking for advice on how to batch metadata updates for the remote file system. For example, saving ten files in a backup session would involve my backend saving each file in turn while recording its metadata, and only once all the files have been written would it write a new metadata file for the directory containing them.
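Roughly the shape of what I have in mind, sketched outside of rclone's interfaces (all the type and function names below are placeholders I made up for illustration, not anything that exists yet):

```go
package main

import (
	"fmt"
	"sync"
)

// fileMeta is the per-file metadata the CAFS itself cannot store,
// so it has to live in the single directory-tree metadata file.
type fileMeta struct {
	Path    string
	Size    int64
	Hash    string // content address returned by the CAFS
	ModTime int64
}

// metaBatcher accumulates metadata for files written during one
// session and writes the combined metadata file exactly once.
type metaBatcher struct {
	mu      sync.Mutex
	pending []fileMeta
}

// Add records the metadata of a file that has just been uploaded.
func (b *metaBatcher) Add(m fileMeta) {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.pending = append(b.pending, m)
}

// Flush writes one new metadata file describing every file uploaded
// in this session, then clears the pending list.
func (b *metaBatcher) Flush() error {
	b.mu.Lock()
	defer b.mu.Unlock()
	if len(b.pending) == 0 {
		return nil
	}
	// In the real backend this would merge b.pending into the existing
	// tree metadata and store the result as one object in the CAFS.
	fmt.Printf("writing metadata file for %d entries\n", len(b.pending))
	b.pending = b.pending[:0]
	return nil
}

func main() {
	b := &metaBatcher{}
	// Pretend we uploaded ten files in one backup session.
	for i := 0; i < 10; i++ {
		b.Add(fileMeta{Path: fmt.Sprintf("file%02d.txt", i), Size: 123, Hash: "deadbeef"})
	}
	// One metadata write at the end instead of ten.
	_ = b.Flush()
}
```

The question is how to hook a pattern like this into rclone's backend API so the final flush happens at the right time.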

If you can make your CAFS look like a file system tree, then I don't see why it shouldn't work.

As for batching updates, take a look at the Dropbox and Google Photos backends and lib/batcher - rclone supports this quite well.
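To give a feel for the pattern lib/batcher supports (a batch is committed when it fills up or after an idle timeout), here's a rough standalone sketch - this is not the actual lib/batcher API, and the names are made up:

```go
package main

import (
	"fmt"
	"time"
)

// runBatcher queues items and commits them either when the batch is
// full or when no new items arrive before the timeout expires.
func runBatcher(items <-chan string, batchSize int, timeout time.Duration, commit func([]string)) {
	var batch []string
	timer := time.NewTimer(timeout)
	defer timer.Stop()
	for {
		select {
		case item, ok := <-items:
			if !ok {
				// Channel closed: commit whatever is left and stop.
				if len(batch) > 0 {
					commit(batch)
				}
				return
			}
			batch = append(batch, item)
			if len(batch) >= batchSize {
				commit(batch)
				batch = nil
			}
			timer.Reset(timeout)
		case <-timer.C:
			// No new items for a while: commit the partial batch.
			if len(batch) > 0 {
				commit(batch)
				batch = nil
			}
			timer.Reset(timeout)
		}
	}
}

func main() {
	items := make(chan string)
	done := make(chan struct{})
	go func() {
		runBatcher(items, 3, 500*time.Millisecond, func(b []string) {
			fmt.Println("committing batch:", b)
		})
		close(done)
	}()
	for i := 0; i < 7; i++ {
		items <- fmt.Sprintf("file%d", i)
	}
	close(items)
	<-done
}
```

The Dropbox backend exposes this to users as batch mode, size and timeout options, so you'd get similar tuning knobs for free.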


Batcher looks perfect, thanks.

