Union storage: how to fill upstreams sequentially?

I am trying to make a union storage (Mega upstreams). My expectation is very simple: suppose I have configured the union with two or three upstreams, I want them to be filled sequentially, so that rclone only starts filling the second upstream once the first one is completely full. In this way, I can keep adding new upstreams when required. This mode is more reasonable for incremental backups, where the data remains organized sequentially across upstreams.

So, I configured the create policy as eplfs (while keeping the action and search policies at their defaults). I was expecting it to keep filling a single upstream until its quota is exhausted and only then start filling the second one. But what actually happened is that, after the first upstream's quota was full, rclone stopped uploading new files with a quota-full error.

Can you please suggest how this should be configured to achieve what I am trying to do?
One thing I could probably do is use the epmfs create policy, wait until rclone throws a quota-full error for all upstreams, halt rclone, add a new upstream, and then restart the copy. But this is not an optimal solution, as I have to halt in between.
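
Roughly, this workaround would look something like the following in the config (the remote names here are only illustrative, not my real setup):

[union_storage]
type = union
upstreams = mega1:backup mega2:backup
create_policy = epmfs

Then, once every upstream reports its quota is full, I would stop rclone, append a new upstream (e.g. mega3:backup) to the upstreams line, and restart the copy. It works, but the manual halt is exactly what I want to avoid.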

What is your rclone version (output from rclone version)

Latest: v1.55.1

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Windows 10

Which cloud storage system are you using? (eg Google Drive)

Union with Mega.nz upstreams

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy src dest:/

The policies are all listed out here:

https://rclone.org/union/#policy-descriptions

How does it tell that the quota is reached? Are you out of space? Or is it just an error message?

I am aware of the policies.

The error message I get when copying anything new to union_storage is

2021-07-07 19:40:19 ERROR : nikon/212NIKON/DSCN9891.JPG: Failed to copy: upload file failed to create session: Request over quota

However, only one of the two upstreams is full at the moment. So, if I query the space for the union storage, it correctly displays this:

rclone about union_storage:
Total:   40G
Used:    20.008G
Free:    19.992G

I guess this is the expected error if the eplfs create policy is not programmed to look for the next upstream (by least free space) when the first one's quota is full.

So, this is my question: how should I configure the union for sequential fill-up of upstreams, as I described above?

That's great; as I can't read minds, I like to share. Happy you saw them!

You missed my question. How does it know the quota is full? Out of space? Error message? What happens when a remote is "bad"?

Unknown as my first question wasn't answered yet either :slight_smile:

I quoted the exact error message in my reply above. I am still unsure what you are trying to ask, so here is the entire command and output again for a single-file copy. It just repeats the error I mentioned above:

rclone copy \nikon\212NIKON\DSCN9890.JPG union_storage:\nikon\212NIKON\
2021/07/07 20:08:07 ERROR : DSCN9890.JPG: Failed to copy: upload file failed to create session: Request over quota
2021/07/07 20:08:07 ERROR : Attempt 1/3 failed with 1 errors and: upload file failed to create session: Request over quota
2021/07/07 20:08:08 ERROR : DSCN9890.JPG: Failed to copy: upload file failed to create session: Request over quota
2021/07/07 20:08:08 ERROR : Attempt 2/3 failed with 1 errors and: upload file failed to create session: Request over quota
2021/07/07 20:08:08 ERROR : DSCN9890.JPG: Failed to copy: upload file failed to create session: Request over quota
2021/07/07 20:08:08 ERROR : Attempt 3/3 failed with 1 errors and: upload file failed to create session: Request over quota
2021/07/07 20:08:08 Failed to copy: upload file failed to create session: Request over quota

These are Mega.nz upstreams, so rclone can check the quota and free space.

I'm trying to ask how the Mega quota works, as I'm not familiar with it, but good luck then, my friend. Hope you get it figured out.

Okay, thanks. I am not aware of how the API calls are made and interpreted internally in rclone. But for a Mega-type storage the 'about' command works fine, and it displays the total, used and free space like this (for both upstreams individually):

$ rclone about mega_ac1:
Total:   20G
Used:    20.008G
Free:    off

$ rclone about mega_ac2:
Total:   20G
Used:    0
Free:    20G

Hi subhash,

Your initial post has some implicit assumptions that are impossible for others to guess (correctly), and you didn't answer some of the questions in the support template. This makes a volunteer like Animosity irritated/frustrated, and then the answers/questions become somewhat short. This in turn makes you irritated/frustrated, and then you become somewhat pushy. And so everybody loses time without any real progress towards solving your issue.

Here is how you can get into a constructive help/support dialogue:

If at all possible, reduce your issue to the copy of a single file with a minimum of parameters, e.g.

    rclone copy mySource: myDest:/ --include="DSCN9891.JPG" -vv

and then post the log from this command and the (relevant) config contents with secrets removed, so we have all relevant facts (no guesswork).

This will make things much easier for all of us, and thus give you a quicker and better solution to your issue :smiley:

Hi Ole,

So, here is the information you asked for.

config

[mega_ac1]
type = mega
user = removed
pass = removed

[mega_ac2]
type = mega
user = removed
pass = removed

[mega_union]
type = union
upstreams = mega_ac1:/union mega_ac2:/union
create_policy = eplfs

command and output

$ rclone copy ..\DigiCams\nikon\212NIKON\DSCN9891.JPG  mega_union:subhash/nikon/ -vv

2021/07/08 17:26:24 DEBUG : Using config file from "rclone.conf"
2021/07/08 17:26:24 DEBUG : rclone: Version "v1.55.1" starting with parameters ["rclone.exe" "--config" "rclone.conf" "copy" "..\\DigiCams\\nikon\\212NIKON\\DSCN9891.JPG" "mega_union:subhash/nikon/" "-vv"]
2021/07/08 17:26:24 DEBUG : Creating backend with remote "..\\DigiCams\\nikon\\212NIKON\\DSCN9891.JPG"
2021/07/08 17:26:24 DEBUG : fs cache: adding new entry for parent of "..\\DigiCams\\nikon\\212NIKON\\DSCN9891.JPG", "//?/G:/tmp/DigiCams/nikon/212NIKON"
2021/07/08 17:26:24 DEBUG : Creating backend with remote "mega_union:subhash/nikon/"
2021/07/08 17:26:24 DEBUG : Creating backend with remote "mega_ac2:/union"
2021/07/08 17:26:24 DEBUG : Creating backend with remote "mega_ac1:/union"
2021/07/08 17:26:29 DEBUG : fs cache: renaming cache item "mega_ac2:/union" to be canonical "mega_ac2:union"
2021/07/08 17:26:29 DEBUG : Creating backend with remote "mega_ac2:/union/subhash/nikon"
2021/07/08 17:26:33 DEBUG : fs cache: renaming cache item "mega_ac1:/union" to be canonical "mega_ac1:union"
2021/07/08 17:26:33 DEBUG : Creating backend with remote "mega_ac1:/union/subhash/nikon"
2021/07/08 17:26:33 DEBUG : fs cache: renaming cache item "mega_ac2:/union/subhash/nikon" to be canonical "mega_ac2:union/subhash/nikon"
2021/07/08 17:26:33 DEBUG : fs cache: renaming cache item "mega_ac1:/union/subhash/nikon" to be canonical "mega_ac1:union/subhash/nikon"
2021/07/08 17:26:33 DEBUG : union root 'subhash/nikon/': actionPolicy = *policy.EpAll, createPolicy = *policy.EpLfs, searchPolicy = *policy.FF
2021/07/08 17:26:33 DEBUG : DSCN9891.JPG: Need to transfer - File not found at Destination
2021/07/08 17:26:34 ERROR : DSCN9891.JPG: Failed to copy: upload file failed to create session: Request over quota
2021/07/08 17:26:34 ERROR : Attempt 1/3 failed with 1 errors and: upload file failed to create session: Request over quota
2021/07/08 17:26:34 DEBUG : DSCN9891.JPG: Need to transfer - File not found at Destination
2021/07/08 17:26:34 ERROR : DSCN9891.JPG: Failed to copy: upload file failed to create session: Request over quota
2021/07/08 17:26:34 ERROR : Attempt 2/3 failed with 1 errors and: upload file failed to create session: Request over quota
2021/07/08 17:26:34 DEBUG : DSCN9891.JPG: Need to transfer - File not found at Destination
2021/07/08 17:26:34 ERROR : DSCN9891.JPG: Failed to copy: upload file failed to create session: Request over quota
2021/07/08 17:26:34 ERROR : Attempt 3/3 failed with 1 errors and: upload file failed to create session: Request over quota
2021/07/08 17:26:34 INFO  :
Transferred:             0 / 0 Bytes, -, 0 Bytes/s, ETA -
Errors:                 1 (retrying may help)
Elapsed time:        10.2s

2021/07/08 17:26:34 DEBUG : 10 go routines active
2021/07/08 17:26:34 Failed to copy: upload file failed to create session: Request over quota

I think maybe this thread is getting too complicated, so I will try once more to put my question very generally:

For the union storage, how should it be configured so that upstreams are only filled up sequentially, given that the upstreams can be any cloud service that supports full quota/free/used space information?
After reading the docs I thought the eplfs create policy could achieve this, but it seems I was wrong.

Thanks, this is exactly the information I was missing. Some of it was already in your prose, but now it is concise and complete in a format we all understand :ok_hand:

In reply to your first edit:

It is my experience that I get better support when I provide all the requested support information in the requested format - even if I find it laborious and irrelevant. I remind myself that I am requesting the help/support to fill a gap in my own knowledge. I really don't know what piece of information I am overlooking or getting wrong at the moment. I therefore cannot use my current (incomplete) knowledge to select which information to provide to the person trying to help me.

I do, however, sometimes fill support templates with responses like: “I have a huge log that requires a lot of redacting. Please let me know if it is needed or whether there is an easy way to extract the information you need.”

I have no experience with the union remote, but sometimes it just takes a fresh set of eyes. So I have just read your log/config and the union docs, and I may have found something that could explain your experience.

In reply to your third edit:

I guess your issue could be caused by the ep (“existing path”) prefix in your create_policy. The docs say:

All policies which start with ep (epff, eplfs, eplus, epmfs, eprand) are path preserving. ep stands for existing path.

A path preserving policy will only consider upstreams where the relative path being accessed already exists.

When using non-path preserving policies paths will be created in target upstreams as necessary.

And more specifically:

Policy: eplfs (existing path, least free space)
Of all the upstreams on which the relative path exists choose the one with the least free space.

I would therefore like to check whether the target folder exists in both Mega accounts. Will you please share the output of these two commands:

rclone lsd mega_ac1:subhash/nikon/
rclone lsd mega_ac2:subhash/nikon/

My guess is that the path exists in mega_ac1 and not in mega_ac2. (Please correct me if the commands/paths are a bit off - I cannot test at my end.)

If my expectation is correct, then try changing your create_policy to “lfs” (without ep) and post the log from retrying this command:

rclone copy ..\DigiCams\nikon\212NIKON\DSCN9891.JPG mega_union:subhash/nikon/ -vv

If this solves your issue, then you may also need to take a look at your action_policy; the log says: actionPolicy = *policy.EpAll
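
To be explicit, the change I am suggesting is just this one line in the [mega_union] section of the config you posted (only a sketch, everything else stays the same):

[mega_union]
type = union
upstreams = mega_ac1:/union mega_ac2:/union
create_policy = lfs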

I really appreciate your response to my first edit, and thanks for not taking it the wrong way.
I understand and agree with what you just said. Actually, my rclone config is encrypted, so I had to decrypt it and also redact secrets, usernames, etc. Also, my original log was full of file/path names which I don't want to disclose without going through it entirely. So I was a bit reluctant to add those two, and thought I was already providing all the necessary information in the original post. Anyway, I am glad you are now satisfied.

You are correct: the second storage path mega_ac2:union/ (I corrected this path to match my configuration) is actually empty, because nothing has been written to it so far.

Here is the output anyway:

$ rclone lsd mega_ac1:union/subhash/nikon/
          -1 2021-07-06 13:55:00        -1 100NIKON
          -1 2021-07-06 11:45:44        -1 101NIKON
          -1 2021-07-06 11:03:03        -1 102P_001
          -1 2021-07-06 11:04:16        -1 103P_002
          -1 2021-07-06 11:04:43        -1 104P_003
          -1 2021-07-06 11:04:00        -1 105P_004
          -1 2021-07-06 11:06:18        -1 106P_005
          -1 2021-07-06 11:09:35        -1 108P_007
          -1 2021-07-06 11:27:53        -1 109P_008
          -1 2021-07-06 13:46:05        -1 110P_009
          -1 2021-07-06 15:21:21        -1 111P_010
          -1 2021-07-06 15:24:54        -1 112P_011
          -1 2021-07-06 15:29:22        -1 114P_013
          -1 2021-07-06 15:32:03        -1 115P_014
          -1 2021-07-06 15:56:47        -1 116P_015
          -1 2021-07-06 18:00:15        -1 117NIKON
          -1 2021-07-06 15:58:08        -1 118P_016
          -1 2021-07-06 20:41:06        -1 119NIKON
          -1 2021-07-06 19:32:22        -1 120NIKON
          -1 2021-07-06 19:01:28        -1 121P_016
          -1 2021-07-06 19:04:00        -1 122P_017
          -1 2021-07-06 19:08:44        -1 123NIKON
          -1 2021-07-06 22:35:52        -1 124P_018
          -1 2021-07-06 21:29:59        -1 125P_019
          -1 2021-07-06 21:33:49        -1 126NIKON
          -1 2021-07-06 21:31:04        -1 127P_020
          -1 2021-07-06 21:32:48        -1 128P_021
          -1 2021-07-06 22:28:35        -1 129P_022
          -1 2021-07-06 22:47:39        -1 130P_023
          -1 2021-07-06 22:46:40        -1 131P_024
          -1 2021-07-06 22:47:45        -1 132P_025
          -1 2021-07-06 22:47:40        -1 133P_026
          -1 2021-07-06 22:51:02        -1 134NIKON
          -1 2021-07-06 23:11:40        -1 135P_027
          -1 2021-07-06 23:47:29        -1 136NIKON
          -1 2021-07-06 23:45:02        -1 137P_028
          -1 2021-07-07 00:48:49        -1 138NIKON
          -1 2021-07-06 23:46:18        -1 139P_029
          -1 2021-07-07 00:46:05        -1 140P_030
          -1 2021-07-07 00:46:10        -1 141P_031
          -1 2021-07-07 00:47:17        -1 142P_032
          -1 2021-07-07 02:11:25        -1 143P_033
          -1 2021-07-07 05:07:56        -1 144P_034
          -1 2021-07-07 07:33:15        -1 145P_035
          -1 2021-07-07 02:12:37        -1 146P_036
          -1 2021-07-07 02:16:46        -1 147P_037
          -1 2021-07-07 02:14:05        -1 148P_038
          -1 2021-07-07 02:19:14        -1 149P_039
          -1 2021-07-07 02:20:04        -1 150P_040
          -1 2021-07-07 02:20:37        -1 151P_041
          -1 2021-07-07 02:28:46        -1 152NIKON
          -1 2021-07-07 02:25:41        -1 153P_029
          -1 2021-07-07 02:27:49        -1 154P_030
          -1 2021-07-07 03:26:46        -1 155P_031
          -1 2021-07-07 03:26:50        -1 156NIKON
          -1 2021-07-07 07:43:01        -1 157NIKON
          -1 2021-07-07 04:55:04        -1 158NIKON
          -1 2021-07-07 07:41:23        -1 159NIKON
          -1 2021-07-07 07:43:32        -1 160NIKON
          -1 2021-07-07 07:33:52        -1 161P_020
          -1 2021-07-07 07:37:50        -1 162NIKON
          -1 2021-07-07 07:36:08        -1 163P_021
          -1 2021-07-07 07:37:08        -1 164P_022
          -1 2021-07-07 07:38:02        -1 165P_023
          -1 2021-07-07 07:38:05        -1 166P_024
          -1 2021-07-07 07:40:30        -1 167P_025
          -1 2021-07-07 07:43:02        -1 168P_026
          -1 2021-07-07 07:46:29        -1 169NIKON
          -1 2021-07-07 07:50:50        -1 170NIKON
          -1 2021-07-07 07:45:10        -1 171P_027
          -1 2021-07-07 07:46:23        -1 172P_028
          -1 2021-07-07 07:50:02        -1 173NIKON
          -1 2021-07-07 07:46:25        -1 174P_029
          -1 2021-07-07 07:46:25        -1 175P_030
          -1 2021-07-07 07:46:28        -1 176P_031
          -1 2021-07-07 07:48:36        -1 177NIKON
          -1 2021-07-07 07:50:49        -1 178NIKON
          -1 2021-07-07 07:50:48        -1 179NIKON
          -1 2021-07-07 07:50:47        -1 180NIKON
          -1 2021-07-07 07:50:01        -1 181P_032
          -1 2021-07-07 07:50:41        -1 182P_033
          -1 2021-07-07 07:50:43        -1 183P_034
          -1 2021-07-07 07:50:44        -1 184P_035
          -1 2021-07-07 07:50:51        -1 185NIKON
          -1 2021-07-07 07:50:46        -1 186P_036
          -1 2021-07-07 07:50:50        -1 187P_037
          -1 2021-07-07 07:50:47        -1 188P_038
          -1 2021-07-07 07:50:56        -1 189P_039
          -1 2021-07-07 07:50:59        -1 190NIKON
          -1 2021-07-07 07:51:24        -1 190NIKON_pan1
          -1 2021-07-07 07:51:39        -1 191NIKON
          -1 2021-07-07 07:56:06        -1 191P_040
          -1 2021-07-07 07:56:15        -1 192NIKON
          -1 2021-07-07 07:56:17        -1 192P_041
          -1 2021-07-07 07:56:21        -1 193P_001
          -1 2021-07-07 07:56:21        -1 193P_042
          -1 2021-07-07 07:56:24        -1 194P_002
          -1 2021-07-07 07:56:28        -1 194P_043
          -1 2021-07-07 07:56:32        -1 195NIKON
          -1 2021-07-07 07:56:34        -1 195P_044
          -1 2021-07-07 07:56:36        -1 196P_003
          -1 2021-07-07 07:56:37        -1 197NIKON
          -1 2021-07-07 07:56:40        -1 198P_001
          -1 2021-07-07 07:56:46        -1 199P_002
          -1 2021-07-07 07:56:47        -1 200P_003
          -1 2021-07-07 07:56:49        -1 201P_004
          -1 2021-07-07 07:56:55        -1 202NIKON
          -1 2021-07-07 07:56:56        -1 203NIKON
          -1 2021-07-07 07:57:04        -1 204NIKON
          -1 2021-07-07 07:57:20        -1 205NIKON
          -1 2021-07-07 07:57:35        -1 206NIKON
          -1 2021-07-07 08:00:06        -1 207P_001
          -1 2021-07-07 08:00:10        -1 208NIKON
          -1 2021-07-07 19:38:48        -1 209P_002
          -1 2021-07-07 19:38:49        -1 210NIKON
          -1 2021-07-07 19:39:19        -1 211NIKON
          -1 2021-07-07 19:39:53        -1 212NIKON
          -1 2021-07-07 19:39:18        -1 213P_001
          -1 2021-07-07 19:39:19        -1 214P_002
          -1 2021-07-07 19:39:52        -1 215P_003
          -1 2021-07-09 06:03:14        -1 216NIKON
          -1 2021-07-09 06:03:19        -1 217NIKON
          -1 2021-07-09 06:03:31        -1 218NIKON
$ rclone lsd mega_ac2:union/
<EMPTY>
$

Changing create_policy to “lfs” also didn't solve it; it throws the same error.

Here is the log

rclone copy ..\DigiCams\nikon\212NIKON\DSCN9891.JPG  mega_union:subhash/nikon/ -vv
2021/07/09 06:34:45 DEBUG : Using config file from "rclone.conf"
2021/07/09 06:34:45 DEBUG : rclone: Version "v1.55.1" starting with parameters ["rclone.exe" "--config" "rclone.conf" "copy" "..\\DigiCams\\nikon\\212NIKON\\DSCN9891.JPG" "mega_union:subhash/nikon/" "-vv"]
2021/07/09 06:34:45 DEBUG : Creating backend with remote "..\\DigiCams\\nikon\\212NIKON\\DSCN9891.JPG"
2021/07/09 06:34:45 DEBUG : fs cache: adding new entry for parent of "..\\DigiCams\\nikon\\212NIKON\\DSCN9891.JPG", "//?/G:/tmp/DigiCams/nikon/212NIKON"
2021/07/09 06:34:45 DEBUG : Creating backend with remote "mega_union:subhash/nikon/"
2021/07/09 06:34:45 DEBUG : Creating backend with remote "mega_ac2:/union"
2021/07/09 06:34:45 DEBUG : Creating backend with remote "mega_ac1:/union"
2021/07/09 06:34:52 DEBUG : fs cache: renaming cache item "mega_ac2:/union" to be canonical "mega_ac2:union"
2021/07/09 06:34:52 DEBUG : Creating backend with remote "mega_ac2:/union/subhash/nikon"
2021/07/09 06:34:56 DEBUG : fs cache: renaming cache item "mega_ac1:/union" to be canonical "mega_ac1:union"
2021/07/09 06:34:56 DEBUG : fs cache: renaming cache item "mega_ac2:/union/subhash/nikon" to be canonical "mega_ac2:union/subhash/nikon"
2021/07/09 06:34:56 DEBUG : Creating backend with remote "mega_ac1:/union/subhash/nikon"
2021/07/09 06:34:56 DEBUG : fs cache: renaming cache item "mega_ac1:/union/subhash/nikon" to be canonical "mega_ac1:union/subhash/nikon"
2021/07/09 06:34:56 DEBUG : union root 'subhash/nikon/': actionPolicy = *policy.EpAll, createPolicy = *policy.Lfs, searchPolicy = *policy.FF
2021/07/09 06:34:56 DEBUG : DSCN9891.JPG: Need to transfer - File not found at Destination
2021/07/09 06:34:56 ERROR : DSCN9891.JPG: Failed to copy: upload file failed to create session: Request over quota
2021/07/09 06:34:56 ERROR : Attempt 1/3 failed with 1 errors and: upload file failed to create session: Request over quota
2021/07/09 06:34:56 DEBUG : DSCN9891.JPG: Need to transfer - File not found at Destination
2021/07/09 06:34:57 ERROR : DSCN9891.JPG: Failed to copy: upload file failed to create session: Request over quota
2021/07/09 06:34:57 ERROR : Attempt 2/3 failed with 1 errors and: upload file failed to create session: Request over quota
2021/07/09 06:34:57 DEBUG : DSCN9891.JPG: Need to transfer - File not found at Destination
2021/07/09 06:34:57 ERROR : DSCN9891.JPG: Failed to copy: upload file failed to create session: Request over quota
2021/07/09 06:34:57 ERROR : Attempt 3/3 failed with 1 errors and: upload file failed to create session: Request over quota
2021/07/09 06:34:57 INFO  :
Transferred:             0 / 0 Bytes, -, 0 Bytes/s, ETA -
Errors:                 1 (retrying may help)
Elapsed time:        11.9s

2021/07/09 06:34:57 DEBUG : 10 go routines active
2021/07/09 06:34:57 Failed to copy: upload file failed to create session: Request over quota

Good information!

Tip: You can use "rclone config show" or "rclone config show myRemote:" to print the (decrypted) config file, or the config of a single remote.
(@Animosity022 : I suggest we add this to the support template and “How to ask for assistance”)
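
For example, in this thread something like

rclone config show mega_union:

would have printed just the decrypted [mega_union] section.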

I have confirmed your above observations with this config:

[myGoogleDriveUnion]
type = union
upstreams = myGoogleDrive1: myGoogleDrive2:
action_policy = lfs
create_policy = lfs

It has the same behaviour: it first fills the remote with the least free space and then errors when it is out of space.

Strictly speaking, this is correct according to the “lfs” policy description: “Create category: Pick the upstream with the least available free space.” Rclone does indeed pick the remote with the least free space (e.g. 0 bytes) and then tries to create.
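
To illustrate with the (rounded) numbers from your earlier about output: mega_ac1 has effectively no free space left and mega_ac2 has about 20G free, so lfs selects mega_ac1 as the upstream with the least free space, and the create then fails with the quota error instead of falling through to mega_ac2.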

The policy you are looking for is: “Create category: Pick the (first) upstream with enough free space to create the file.” But unfortunately, this doesn’t exist.

I therefore see no possibility to fulfil your wish for sequentially filled upstreams, sorry.

The rclone union is inspired by trapexit/mergerfs, which has similar policy descriptions.

Do you know if mergerfs or other similar file systems can do sequential fills?

Hi Ole,

Thanks a lot for reproducing and verifying the behavior, and also thanks for the tip.

It's unfortunate that there is no sequential fill-up option; the least-free-space definition is taken too strictly. In that case I don't see any real use case for the lfs/eplfs create policies; they are basically equivalent to using only a single remote. For file systems on physical hard drives (mergerfs etc.), using a most-free-space (or similar) policy to spread data across all drives is probably the preferred option, to improve I/O efficiency and drive lifetime. However, for cloud storage, I think sequential fill-up would be the more logical or preferred choice in most cases.
So I wonder: would it be possible to implement such an option in the rclone union, even if the policy definition deviates from mergerfs', given that physical and cloud storage have different areas of concern?

Sorry, I have no experience with mergerfs, and I do not have a Linux system with multiple HDDs to try this out. However, I searched online and found one person mentioning it here. So it is likely possible, but I am not totally sure.

While searching more on this issue, I found a piece of software called Air Cluster, a GUI utility focused solely on making a union storage out of cloud remotes. It doesn't have as much customizability as the rclone union; it has only two basic options, one of which is sequential/ordered fill-up of storage, where you can also set the order of the fill-up - similar to what I was looking for. So it seems sequential fill-up is in demand, and I am not sure whether this can serve as motivation for rclone to implement such a policy.
Here is a screenshot of it

Thanks, you're welcome.

I fully agree; the policy doesn't make much sense to me either - even if it skipped drives with too little space for the create.

Most likely, but I doubt it will happen in the foreseeable future. Take a look at the open issues/enhancements. I see many things with higher need/value/priority.

Personally, I prefer to store my backup set under a single account. I find it simpler, faster, more robust and more cost effective (in all the scenarios I can imagine).

Others may see the world differently. I therefore suggest you create a new forum topic (type: Feature) where you propose this as a new feature. This will give it better attention and feedback.

I recommend you repeat your use case and explain the benefits of having a union of multiple small accounts instead of a single large account - and include a link to this thread.
