Multiwrite union test beta

It should not throw errors when you use a quota policy which isn't supported

Some of the other policies?

Perhaps someone could translate @Animosity022's mergerfs setup into an equivalent setup for the union backend? That would make a good thing to test and a good cookbook example.
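
I haven't seen the actual mergerfs config posted in this thread, so purely as a hypothetical shape (remote names and paths made up), the translation would presumably be a local upstream pooled with a cloud remote, something like:

[pooled]
type = union
upstreams = /local/media gdrive:media
create_policy = epmfs
action_policy = epall
search_policy = ff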

@Max-Sum reading through the docs again, it would probably be a good idea to provide the policies and names in the configurator, eg like the examples here: https://github.com/rclone/rclone/blob/master/backend/crypt/crypt.go#L37-L47

That example block should be defined once elsewhere so we don't have to repeat it 3 times! Probably at this level of detail - what do you think?

| epmfs  | existing path, most free space |
| eprand | existing path, random          |
...
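
Going back to the configurator point, here is a very rough sketch (not the actual union backend code, help text made up) of what listing the policies in the same style as crypt.go might look like:

// Sketch only - same pattern crypt.go uses, so the policies show up
// with help text when running `rclone config`.
package union

import "github.com/rclone/rclone/fs"

func init() {
	fs.Register(&fs.RegInfo{
		Name:        "union",
		Description: "Union merges the contents of several upstream fs",
		Options: []fs.Option{{
			Name:    "create_policy",
			Help:    "Policy to choose upstream on CREATE category.",
			Default: "epmfs",
			Examples: []fs.OptionExample{{
				Value: "epmfs",
				Help:  "Existing path, most free space.",
			}, {
				Value: "eprand",
				Help:  "Existing path, random.",
			}},
		}},
	})
}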

I'm happy to do that and write some more docs once we've merged the backend.

Looks promising! Hope we can see the backend in the next release.

That is the plan! I wanted to get the user testing going as that is normally more valuable for a backend than code review.

Hi

Looking forward to this! Seems very similar to mergerfs which I'm using which is good. I have a question about cache-time - can you expand a bit on what this does please?

Cache time just controls the time for which the free space of the remote is cached, nothing else. So when you are using a policy which needs to know the free space, instead of reading it for every transaction it reads it from the cache which makes things more efficient at a cost of a bit of accuracy.
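
For reference, in the config it is just a number of seconds (the default is 120 if I remember right), so a free-space based policy like mfs would look something like this (remote names made up):

[unioned]
type = union
upstreams = remote1:path remote2:path
create_policy = mfs
cache_time = 120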

Thank you. So for an ff policy, should I set it to a ridiculously high number, or does 0 disable it?

How does rclone union treat mix of ro/rw? i.e. will there be CoW?

I'm currently using unionfs to pool a local folder (RW) + a rclone remote (RO).
So when any change is done to the RO remote, a copy is saved on the local folder before any modification is applied i.e. unionfs CoW.
This serves as a form of protection against cryptovirus and accidental changes since any change will only be updated to the remote when I manually run an upload / cleanup script.

You can just ignore it if the policy doesn't use it, I think.

This is one of the uses I'd like to have in the cookbook.

I'm pretty sure it should be possible - anyone want to take a crack?
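
A possible starting point (untested, paths and remote names made up) would be tagging the cloud remote as read only in the upstreams list, if the :ro tag on an upstream is supported as I expect - that covers the read-only half, while the CoW-style diversion of changes is the open question:

[protected]
type = union
upstreams = /local/changes gcrypt:media:ro
create_policy = epmfs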

I think this is ok if a change is made by download->change->upload. Just changing the metadata of a file would be problematic though.

I'm not following you @Max-Sum - can you explain more please?

Uploading an edited file to the union should be ok since it would choose a writable upstream. However, if only the metadata is being updated, SetModTime/Update would be used. In this case, no upstream would be chosen unless we want to re-upload the file to a writable upstream.

Just tried to unify 4 onedrive remotes, works perfectly fine for me. :ok_hand: :ok_hand:

@ncw, one question though: if Rclone fails to upload the object to the first remote for some reason, does it automatically try to upload to the second remote?

Which settings did you use?

[unioned]
type = union
upstreams = onedrive1:new onedrive2:new onedrive3:new
create_policy = eplfs

rclone will do the retries it normally does (--low-level-retries if possible) but if those fail then I don't think it will try the next remote. Is that right @Max-Sum?

That's right. The rclone framework can handle it, but the union backend itself doesn't have failover built in right now.
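
So the normal retry flags still apply per transfer, e.g. something like this (remote name made up), but they retry against the same upstream rather than failing over to the next one:

rclone copy /local/files unioned:backup --retries 3 --low-level-retries 10 -v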

I am curious. Is there any purpose in using the lfs policy? I implemented it just because mergerfs has it but I don't really get the point.

I never used mergerfs, but after reading the docs, I assumed that if I use lfs, Rclone will try to fill the storage on the first remote; if the first remote has no storage left or the file is larger than the available storage, it will then try to store the file on the second remote, and so on and so forth. Am I wrong? :confused:

So, basically, I want to fill the first remote as much as possible, then try the second, and so on and so forth.
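
For what it's worth (untested on my side, remote names as in my config above), my reading is that lfs picks the upstream with the least free space, which is the closest match to a fill-one-remote-first setup:

[unioned]
type = union
upstreams = onedrive1:new onedrive2:new onedrive3:new
create_policy = lfs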

Update: I was wrong, Rclone did not try to put the file on the second remote when the first remote was full. :confused: I guess a fail-over function is needed.

What if you want to have just one remote be a local folder that is then mirrored to 2 other remotes?

Is this possible?
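
I haven't tried this myself, but reading the policy list, the all policies look like they cover the mirroring case, roughly (paths and remote names made up):

[mirror]
type = union
upstreams = /local/data gdrive:backup onedrive:backup
create_policy = all
action_policy = all
search_policy = ff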

Tested with gdrive and local folder as upstreams.

Unfortunately I'm unable to access my nzbget folder after some time. I guess it has to do with the continuous writing to the log file.

2020/03/24 08:09:26 INFO  : nzbget/tmp/nzb-90.tmp: Copied (new)
2020/03/24 08:09:31 INFO  : Cleaned the cache: objects 25 (was 25), total size 25.501M (was 23.725M)
2020/03/24 08:10:31 INFO  : Cleaned the cache: objects 25 (was 25), total size 25.501M (was 25.501M)
2020/03/24 08:11:08 INFO  : nzbget/tmp/nzb-91.tmp: Copied (new)
2020/03/24 08:11:31 INFO  : Cleaned the cache: objects 26 (was 26), total size 26.524M (was 25.501M)
2020/03/24 08:12:31 INFO  : nzbget/tmp/nzb-77.tmp: Removed from cache
2020/03/24 08:12:31 INFO  : Cleaned the cache: objects 25 (was 26), total size 26.524M (was 26.524M)
2020/03/24 08:12:48 INFO  : nzbget/tmp/nzb-92.tmp: Copied (new)

My Mount options:
- "--verbose"
- "mount"
- "union:/"
- "/data/mount"
- "--allow-other"
- "--allow-non-empty"
- "--vfs-cache-mode=writes"
- "--attr-timeout=8700h"
- "--dir-cache-time=8760h"
- "--poll-interval=30s"
- "--buffer-size=512M"
- "--uid=1000"
- "--gid=1000"
- "--umask=007"
- "--rc"

Any ideas?