Problem with uploading to a union when the file is already there on some of the upstreams

What is the problem you are having with rclone?

I want to upload a file to several different cloud storage services. So let's say (just for the sake of example; the real situation will be shown below in the copy-paste sections) that I've got a file blah.txt that I want to upload to Dropbox, B2, and OneDrive. I could run three different rclone copy commands, one for each service, but I'd rather just run one that can upload to all three. So I set up a union backend, with three separate remotes for the three separate services, and run rclone copy blah.txt MyUnion: instead.

I've been doing that sort of thing using "all" policies for action, create, and search, and it had been working fine. But now I've run into a situation where it is not working as I would hope:

For one of the three, let's say B2, the file already existed before I did the rclone copy. I don't know how that happened, but it did, and when I then did the rclone copy to the union, rclone's output said that there was nothing to upload. I could then see that the file was not uploaded to either Dropbox or OneDrive. It was still on B2.

I tried a few more times, thinking I had just screwed the command up or something, but I kept getting the same behavior. Then it occurred to me that rclone might have seen that the file already existed somewhere on the union, and therefore decided that it didn't need to be uploaded.

So, I deleted the file from B2, ran the exact same rclone copy command again, and the file successfully got uploaded to all three services.

I don't know if this is a bug or not; maybe I shouldn't be using "all" for the search policy? But I don't see any other policy that looks like it would make it behave the way I would expect, i.e. one where the fact that the file is already on one service would not cause rclone to refrain from uploading it to the other services.

Is there a way to do what I want?

What is your rclone version (output from rclone version)

rclone v1.53.3
- os/arch: windows/amd64
- go version: go1.15.5

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Windows 10, 64 bit

Which cloud storage system are you using? (eg Google Drive)

Dropbox, Google Drive, OneDrive, PCloud, S3, and Wasabi. The one (and only) that the file had already been uploaded to was PCloud.

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy ToUpload Data-Gorilla: -P

The rclone config contents with secrets removed.

[Raw-B2]
type = b2
account = (...)
key = (...)

[Raw-Dropbox]
type = dropbox
token = (...)

[Raw-GDrive]
type = drive
client_id = (...)
client_secret = (...)
scope = drive
token = (...)

[Raw-GSuite]
type = drive
client_id = (...)
client_secret = (...)
scope = drive
token = (...)
root_folder_id = (...)

[Raw-OneDrive]
type = onedrive
token = (...)
drive_id = (...)
drive_type = personal

[Raw-PCloud]
type = pcloud
hostname = eapi.pcloud.com
token = (...)

[Raw-S3]
type = s3
provider = AWS
env_auth = false
access_key_id = (...)
secret_access_key = (...)
region = us-east-1
acl = private
server_side_encryption = AES256
storage_class = DEEP_ARCHIVE
bucket_acl = private

[Raw-Wasabi]
type = s3
provider = Wasabi
env_auth = false
access_key_id = (...)
secret_access_key = (...)
endpoint = s3.wasabisys.com

[B2]
type = alias
remote = Raw-B2:rwv037

[Dropbox]
type = alias
remote = Raw-Dropbox:

[GDrive]
type = alias
remote = Raw-GDrive:

[GSuite]
type = alias
remote = Raw-GSuite:Amalgamated

[GSuite-OhWell]
type = alias
remote = Raw-GSuite:OhWell

[OneDrive]
type = alias
remote = Raw-OneDrive:

[PCloud]
type = alias
remote = Raw-PCloud:

[S3Ireland]
type = alias
remote = Raw-S3:rwv37-eu-west-1

[S3Ireland-Deprecated]
type = alias
remote = Raw-S3:rwv37-ireland

[S3Sydney]
type = alias
remote = Raw-S3:rwv37-ap-southeast-2

[S3Sydney-Deprecated]
type = alias
remote = Raw-S3:rwv37-sydney

[S3Virginia]
type = alias
remote = Raw-S3:rwv37-us-east-1

[S3Virginia-Deprecated]
type = alias
remote = Raw-S3:rwv37

[Wasabi]
type = alias
remote = Raw-Wasabi:rwv37-us-east-1

[Wasabi-Deprecated]
type = alias
remote = Raw-Wasabi:rwv37-deprecated

[Size-Unlimited]
type = alias
remote = GSuite:

[Size-Big]
type = union
upstreams = S3Ireland: S3Sydney: S3Virginia:
action_policy = all
create_policy = all
search_policy = all

[Size-Medium]
type = alias
remote = Wasabi:

[Size-Small]
type = alias
remote = PCloud:

[Size-Tiny]
type = union
upstreams = B2: Dropbox: GDrive: OneDrive:
action_policy = all
create_policy = all
search_policy = all

[MinSize-Tiny]
type = union
upstreams = Size-Tiny: MinSize-Small:
action_policy = all
create_policy = all
search_policy = all

[MinSize-Small]
type = union
upstreams = Size-Small: MinSize-Medium:
action_policy = all
create_policy = all
search_policy = all

[MinSize-Medium]
type = union
upstreams = Size-Medium: MinSize-Big:
action_policy = all
create_policy = all
search_policy = all

[MinSize-Big]
type = union
upstreams = Size-Big: MinSize-Unlimited:
action_policy = all
create_policy = all
search_policy = all

[MinSize-Unlimited]
type = alias
remote = Size-Unlimited:

[Data-Audio]
type = alias
remote = MinSize-Small:Audio

[Data-Bob]
type = alias
remote = MinSize-Small:Bob

[Data-Calibre]
type = alias
remote = MinSize-Small:Calibre

[Data-DA]
type = alias
remote = MinSize-Small:DA

[Data-DiskImages]
type = alias
remote = MinSize-Big:DiskImages

[Data-EBooks]
type = alias
remote = MinSize-Small:EBooks

[Data-Gorilla]
type = alias
remote = MinSize-Tiny:Gorilla

[Data-LOR]
type = alias
remote = MinSize-Small:LOR

[Data-Mail]
type = alias
remote = MinSize-Small:Mail

[Data-Processing]
type = alias
remote = MinSize-Medium:Processing

[Data-TiddlyWiki]
type = alias
remote = MinSize-Small:TiddlyWiki

[Data-Video]
type = alias
remote = MinSize-Medium:Video

[Temp-S3All]
type = union
upstreams = S3Ireland: S3Sydney: S3Virginia:
action_policy = all
create_policy = all
search_policy = all

A log from the command with the -vv flag

2021/01/09 02:49:56 DEBUG : rclone: Version "v1.53.3" starting with parameters ["C:\\rclone\\rclone.exe" "copy" "ToUpload" "Data-Gorilla:" "-P" "-vv"]
2021/01/09 02:49:56 DEBUG : Creating backend with remote "ToUpload"
2021/01/09 02:49:56 DEBUG : Using config file from "C:\\Users\\bob\\.config\\rclone\\rclone.conf"
2021/01/09 02:49:56 DEBUG : fs cache: renaming cache item "ToUpload" to be canonical "//?/D:/trunk/Gorilla/ToUpload"
2021/01/09 02:49:56 DEBUG : Creating backend with remote "Data-Gorilla:"
2021/01/09 02:49:56 DEBUG : Creating backend with remote "MinSize-Tiny:Gorilla"
2021/01/09 02:49:56 DEBUG : Creating backend with remote "MinSize-Small:"
2021/01/09 02:49:56 DEBUG : Creating backend with remote "Size-Tiny:"
2021/01/09 02:49:56 DEBUG : Creating backend with remote "MinSize-Medium:"
2021/01/09 02:49:56 DEBUG : Creating backend with remote "B2:"
2021/01/09 02:49:56 DEBUG : Creating backend with remote "Raw-B2:rwv037"
2021/01/09 02:49:56 DEBUG : Creating backend with remote "Size-Small:"
2021/01/09 02:49:56 DEBUG : Creating backend with remote "Dropbox:"
2021/01/09 02:49:56 DEBUG : Creating backend with remote "OneDrive:"
2021/01/09 02:49:56 DEBUG : Creating backend with remote "MinSize-Big:"
2021/01/09 02:49:56 DEBUG : Creating backend with remote "Size-Medium:"
2021/01/09 02:49:56 DEBUG : Creating backend with remote "GDrive:"
2021/01/09 02:49:56 DEBUG : Creating backend with remote "PCloud:"
2021/01/09 02:49:56 DEBUG : Creating backend with remote "Raw-Dropbox:"
2021/01/09 02:49:56 DEBUG : Creating backend with remote "Raw-OneDrive:"
2021/01/09 02:49:56 DEBUG : Creating backend with remote "MinSize-Unlimited:"
2021/01/09 02:49:56 DEBUG : Creating backend with remote "Size-Big:"
2021/01/09 02:49:56 DEBUG : Creating backend with remote "Wasabi:"
2021/01/09 02:49:56 DEBUG : Creating backend with remote "Raw-GDrive:"
2021/01/09 02:49:56 DEBUG : Creating backend with remote "S3Sydney:"
2021/01/09 02:49:56 DEBUG : Creating backend with remote "Raw-PCloud:"
2021/01/09 02:49:56 DEBUG : Creating backend with remote "Size-Unlimited:"
2021/01/09 02:49:56 DEBUG : Creating backend with remote "Raw-Wasabi:rwv37-us-east-1"
2021/01/09 02:49:56 DEBUG : Creating backend with remote "S3Ireland:"
2021/01/09 02:49:56 DEBUG : Creating backend with remote "S3Virginia:"
2021/01/09 02:49:56 DEBUG : Creating backend with remote "Raw-S3:rwv37-ap-southeast-2"
2021/01/09 02:49:56 DEBUG : Creating backend with remote "GSuite:"
2021/01/09 02:49:56 DEBUG : fs cache: renaming cache item "PCloud:" to be canonical "Raw-PCloud:"
2021/01/09 02:49:56 DEBUG : fs cache: renaming cache item "Size-Small:" to be canonical "Raw-PCloud:"
2021/01/09 02:49:56 DEBUG : fs cache: switching user supplied name "Size-Small:" for canonical name "Raw-PCloud:"
2021/01/09 02:49:56 DEBUG : Creating backend with remote "Raw-S3:rwv37-eu-west-1"
2021/01/09 02:49:56 DEBUG : Creating backend with remote "Raw-S3:rwv37-us-east-1"
2021/01/09 02:49:56 DEBUG : Creating backend with remote "Raw-GSuite:Amalgamated"
2021/01/09 02:49:56 DEBUG : fs cache: renaming cache item "S3Sydney:" to be canonical "Raw-S3:rwv37-ap-southeast-2"
2021/01/09 02:49:56 DEBUG : fs cache: renaming cache item "Wasabi:" to be canonical "Raw-Wasabi:rwv37-us-east-1"
2021/01/09 02:49:56 DEBUG : fs cache: renaming cache item "Size-Medium:" to be canonical "Raw-Wasabi:rwv37-us-east-1"
2021/01/09 02:49:56 DEBUG : fs cache: switching user supplied name "Size-Medium:" for canonical name "Raw-Wasabi:rwv37-us-east-1"
2021/01/09 02:49:56 DEBUG : fs cache: renaming cache item "S3Ireland:" to be canonical "Raw-S3:rwv37-eu-west-1"
2021/01/09 02:49:56 DEBUG : fs cache: switching user supplied name "S3Ireland:" for canonical name "Raw-S3:rwv37-eu-west-1"
2021/01/09 02:49:56 DEBUG : fs cache: renaming cache item "S3Virginia:" to be canonical "Raw-S3:rwv37-us-east-1"
2021/01/09 02:49:56 DEBUG : fs cache: switching user supplied name "S3Virginia:" for canonical name "Raw-S3:rwv37-us-east-1"
2021/01/09 02:49:56 DEBUG : fs cache: switching user supplied name "S3Sydney:" for canonical name "Raw-S3:rwv37-ap-southeast-2"
2021/01/09 02:49:56 DEBUG : union root '': actionPolicy = *policy.All, createPolicy = *policy.All, searchPolicy = *policy.All
2021/01/09 02:49:57 DEBUG : Google drive root '': root_folder_id = <REDACTED> - save this in the config to speed up startup
2021/01/09 02:49:57 DEBUG : fs cache: renaming cache item "GDrive:" to be canonical "Raw-GDrive:"
2021/01/09 02:49:57 DEBUG : fs cache: switching user supplied name "GDrive:" for canonical name "Raw-GDrive:"
2021/01/09 02:49:57 DEBUG : fs cache: renaming cache item "Dropbox:" to be canonical "Raw-Dropbox:"
2021/01/09 02:49:57 DEBUG : fs cache: switching user supplied name "Dropbox:" for canonical name "Raw-Dropbox:"
2021/01/09 02:49:57 DEBUG : fs cache: renaming cache item "OneDrive:" to be canonical "Raw-OneDrive:"
2021/01/09 02:49:57 DEBUG : fs cache: switching user supplied name "OneDrive:" for canonical name "Raw-OneDrive:"
2021/01/09 02:49:57 DEBUG : fs cache: renaming cache item "GSuite:" to be canonical "Raw-GSuite:Amalgamated"
2021/01/09 02:49:57 DEBUG : fs cache: renaming cache item "Size-Unlimited:" to be canonical "Raw-GSuite:Amalgamated"
2021/01/09 02:49:57 DEBUG : fs cache: renaming cache item "MinSize-Unlimited:" to be canonical "Raw-GSuite:Amalgamated"
2021/01/09 02:49:57 DEBUG : fs cache: switching user supplied name "MinSize-Unlimited:" for canonical name "Raw-GSuite:Amalgamated"
2021/01/09 02:49:57 DEBUG : union root '': actionPolicy = *policy.All, createPolicy = *policy.All, searchPolicy = *policy.All
2021/01/09 02:49:57 DEBUG : union root '': actionPolicy = *policy.All, createPolicy = *policy.All, searchPolicy = *policy.All
2021/01/09 02:49:57 DEBUG : union root '': actionPolicy = *policy.All, createPolicy = *policy.All, searchPolicy = *policy.All
2021/01/09 02:49:57 DEBUG : Creating backend with remote "MinSize-Small:/Gorilla"
2021/01/09 02:49:57 DEBUG : Creating backend with remote "MinSize-Medium:/Gorilla"
2021/01/09 02:49:57 DEBUG : fs cache: switching user supplied name "Size-Small:" for canonical name "Raw-PCloud:"
2021/01/09 02:49:57 DEBUG : Creating backend with remote "Size-Small:/Gorilla"
2021/01/09 02:49:57 DEBUG : Creating backend with remote "PCloud:/Gorilla"
2021/01/09 02:49:57 DEBUG : Creating backend with remote "Raw-PCloud:/Gorilla"
2021/01/09 02:49:57 DEBUG : Creating backend with remote "MinSize-Big:/Gorilla"
2021/01/09 02:49:57 DEBUG : fs cache: switching user supplied name "Size-Medium:" for canonical name "Raw-Wasabi:rwv37-us-east-1"
2021/01/09 02:49:57 DEBUG : fs cache: switching user supplied name "MinSize-Unlimited:" for canonical name "Raw-GSuite:Amalgamated"
2021/01/09 02:49:57 DEBUG : Creating backend with remote "MinSize-Unlimited:/Gorilla"
2021/01/09 02:49:57 DEBUG : Creating backend with remote "Size-Medium:/Gorilla"
2021/01/09 02:49:57 DEBUG : Creating backend with remote "Size-Big:/Gorilla"
2021/01/09 02:49:57 DEBUG : Creating backend with remote "Size-Unlimited:/Gorilla"
2021/01/09 02:49:57 DEBUG : Creating backend with remote "GSuite:/Gorilla"
2021/01/09 02:49:57 DEBUG : fs cache: switching user supplied name "S3Virginia:" for canonical name "Raw-S3:rwv37-us-east-1"
2021/01/09 02:49:57 DEBUG : Creating backend with remote "S3Virginia:/Gorilla"
2021/01/09 02:49:57 DEBUG : Creating backend with remote "Raw-S3:rwv37-us-east-1/Gorilla"
2021/01/09 02:49:57 DEBUG : Creating backend with remote "Wasabi:/Gorilla"
2021/01/09 02:49:57 DEBUG : fs cache: switching user supplied name "S3Ireland:" for canonical name "Raw-S3:rwv37-eu-west-1"
2021/01/09 02:49:57 DEBUG : Creating backend with remote "S3Ireland:/Gorilla"
2021/01/09 02:49:57 DEBUG : fs cache: switching user supplied name "S3Sydney:" for canonical name "Raw-S3:rwv37-ap-southeast-2"
2021/01/09 02:49:57 DEBUG : Creating backend with remote "S3Sydney:/Gorilla"
2021/01/09 02:49:57 DEBUG : Creating backend with remote "Raw-GSuite:Amalgamated/Gorilla"
2021/01/09 02:49:57 DEBUG : Creating backend with remote "Raw-Wasabi:rwv37-us-east-1/Gorilla"
2021/01/09 02:49:57 DEBUG : Creating backend with remote "Raw-S3:rwv37-eu-west-1/Gorilla"
2021/01/09 02:49:57 DEBUG : Creating backend with remote "Raw-S3:rwv37-ap-southeast-2/Gorilla"
2021/01/09 02:49:57 DEBUG : fs cache: renaming cache item "B2:" to be canonical "Raw-B2:rwv037"
2021/01/09 02:49:57 DEBUG : fs cache: switching user supplied name "B2:" for canonical name "Raw-B2:rwv037"
2021/01/09 02:49:57 DEBUG : union root '': actionPolicy = *policy.All, createPolicy = *policy.All, searchPolicy = *policy.All
2021/01/09 02:49:57 DEBUG : Creating backend with remote "Size-Tiny:/Gorilla"
2021/01/09 02:49:57 DEBUG : fs cache: switching user supplied name "OneDrive:" for canonical name "Raw-OneDrive:"
2021/01/09 02:49:57 DEBUG : fs cache: switching user supplied name "Dropbox:" for canonical name "Raw-Dropbox:"
2021/01/09 02:49:57 DEBUG : Creating backend with remote "Dropbox:/Gorilla"
2021/01/09 02:49:57 DEBUG : Creating backend with remote "Raw-Dropbox:/Gorilla"
2021/01/09 02:49:57 DEBUG : Creating backend with remote "OneDrive:/Gorilla"
2021/01/09 02:49:57 DEBUG : Creating backend with remote "Raw-OneDrive:/Gorilla"
2021/01/09 02:49:57 DEBUG : fs cache: switching user supplied name "B2:" for canonical name "Raw-B2:rwv037"
2021/01/09 02:49:57 DEBUG : Creating backend with remote "B2:/Gorilla"
2021/01/09 02:49:57 DEBUG : fs cache: switching user supplied name "GDrive:" for canonical name "Raw-GDrive:"
2021/01/09 02:49:57 DEBUG : Creating backend with remote "GDrive:/Gorilla"
2021/01/09 02:49:57 DEBUG : Creating backend with remote "Raw-B2:rwv037/Gorilla"
2021/01/09 02:49:57 DEBUG : Creating backend with remote "Raw-GDrive:/Gorilla"
2021/01/09 02:49:57 DEBUG : fs cache: renaming cache item "Wasabi:/Gorilla" to be canonical "Raw-Wasabi:rwv37-us-east-1/Gorilla"
2021/01/09 02:49:57 DEBUG : fs cache: renaming cache item "Size-Medium:/Gorilla" to be canonical "Raw-Wasabi:rwv37-us-east-1/Gorilla"
2021/01/09 02:49:57 DEBUG : fs cache: renaming cache item "S3Virginia:/Gorilla" to be canonical "Raw-S3:rwv37-us-east-1/Gorilla"
2021/01/09 02:49:57 NOTICE: S3 bucket rwv37-ap-southeast-2 path Gorilla: Switched region to "ap-southeast-2" from "us-east-1"
2021/01/09 02:49:57 DEBUG : pacer: low level retry 1/10 (error BucketRegionError: incorrect region, the bucket is not in 'us-east-1' region at endpoint ''
        status code: 301, request id: , host id: )
2021/01/09 02:49:57 DEBUG : pacer: Rate limited, increasing sleep to 10ms
2021/01/09 02:49:57 DEBUG : Google drive root 'Gorilla': root_folder_id = <REDACTED> - save this in the config to speed up startup
2021/01/09 02:49:57 DEBUG : fs cache: renaming cache item "Raw-GDrive:/Gorilla" to be canonical "Raw-GDrive:Gorilla"
2021/01/09 02:49:57 DEBUG : fs cache: renaming cache item "GDrive:/Gorilla" to be canonical "Raw-GDrive:Gorilla"
2021/01/09 02:49:57 DEBUG : Dropbox root 'Gorilla': Using root namespace <REDACTED>
2021/01/09 02:49:57 NOTICE: S3 bucket rwv37-eu-west-1 path Gorilla: Switched region to "eu-west-1" from "us-east-1"
2021/01/09 02:49:57 DEBUG : pacer: low level retry 1/10 (error BucketRegionError: incorrect region, the bucket is not in 'us-east-1' region at endpoint ''
        status code: 301, request id: , host id: )
2021/01/09 02:49:57 DEBUG : pacer: Rate limited, increasing sleep to 10ms
2021/01/09 02:49:57 DEBUG : fs cache: renaming cache item "GSuite:/Gorilla" to be canonical "Raw-GSuite:Amalgamated/Gorilla"
2021/01/09 02:49:57 DEBUG : fs cache: renaming cache item "Size-Unlimited:/Gorilla" to be canonical "Raw-GSuite:Amalgamated/Gorilla"
2021/01/09 02:49:57 DEBUG : fs cache: renaming cache item "MinSize-Unlimited:/Gorilla" to be canonical "Raw-GSuite:Amalgamated/Gorilla"
2021/01/09 02:49:57 DEBUG : fs cache: renaming cache item "Raw-PCloud:/Gorilla" to be canonical "Raw-PCloud:Gorilla"
2021/01/09 02:49:57 DEBUG : fs cache: renaming cache item "PCloud:/Gorilla" to be canonical "Raw-PCloud:Gorilla"
2021/01/09 02:49:57 DEBUG : fs cache: renaming cache item "Size-Small:/Gorilla" to be canonical "Raw-PCloud:Gorilla"
2021/01/09 02:49:57 DEBUG : fs cache: renaming cache item "Raw-Dropbox:/Gorilla" to be canonical "Raw-Dropbox:Gorilla"
2021/01/09 02:49:57 DEBUG : fs cache: renaming cache item "Dropbox:/Gorilla" to be canonical "Raw-Dropbox:Gorilla"
2021/01/09 02:49:57 DEBUG : pacer: Reducing sleep to 0s
2021/01/09 02:49:57 DEBUG : fs cache: renaming cache item "S3Ireland:/Gorilla" to be canonical "Raw-S3:rwv37-eu-west-1/Gorilla"
2021/01/09 02:49:58 DEBUG : fs cache: renaming cache item "Raw-OneDrive:/Gorilla" to be canonical "Raw-OneDrive:Gorilla"
2021/01/09 02:49:58 DEBUG : fs cache: renaming cache item "OneDrive:/Gorilla" to be canonical "Raw-OneDrive:Gorilla"
2021/01/09 02:49:58 DEBUG : pacer: Reducing sleep to 0s
2021/01/09 02:49:58 DEBUG : fs cache: renaming cache item "S3Sydney:/Gorilla" to be canonical "Raw-S3:rwv37-ap-southeast-2/Gorilla"
2021/01/09 02:49:58 DEBUG : union root '/Gorilla': actionPolicy = *policy.All, createPolicy = *policy.All, searchPolicy = *policy.All
2021/01/09 02:49:58 DEBUG : union root '/Gorilla': actionPolicy = *policy.All, createPolicy = *policy.All, searchPolicy = *policy.All
2021/01/09 02:49:58 DEBUG : union root '/Gorilla': actionPolicy = *policy.All, createPolicy = *policy.All, searchPolicy = *policy.All
2021/01/09 02:49:58 DEBUG : union root '/Gorilla': actionPolicy = *policy.All, createPolicy = *policy.All, searchPolicy = *policy.All
2021/01/09 02:49:59 DEBUG : fs cache: renaming cache item "B2:/Gorilla" to be canonical "Raw-B2:rwv037/Gorilla"
2021/01/09 02:49:59 DEBUG : union root '/Gorilla': actionPolicy = *policy.All, createPolicy = *policy.All, searchPolicy = *policy.All
2021/01/09 02:49:59 DEBUG : union root 'Gorilla': actionPolicy = *policy.All, createPolicy = *policy.All, searchPolicy = *policy.All
2021/01/09 02:49:59 DEBUG : fs cache: renaming cache item "Data-Gorilla:" to be canonical "MinSize-Tiny:Gorilla"
2021-01-09 02:50:00 DEBUG : union root 'Gorilla': Waiting for checks to finish
2021-01-09 02:50:00 DEBUG : 202x/2021/gorilla-2021-01-09-32285.kdbx: Size and modification time the same (differ by -28.9471ms, within tolerance 1s)
2021-01-09 02:50:00 DEBUG : 202x/2021/gorilla-2021-01-09-32285.kdbx: Unchanged skipping
2021-01-09 02:50:00 DEBUG : union root 'Gorilla': Waiting for transfers to finish
2021-01-09 02:50:00 INFO  : There was nothing to transfer
Transferred:             0 / 0 Bytes, -, 0 Bytes/s, ETA -
Checks:                 1 / 1, 100%
Elapsed time:         3.7s
2021/01/09 02:50:00 INFO  :
Transferred:             0 / 0 Bytes, -, 0 Bytes/s, ETA -
Checks:                 1 / 1, 100%
Elapsed time:         3.7s

2021/01/09 02:50:00 DEBUG : 35 go routines active

I think that is probably working as expected.

What is happening is that rclone looks in the union to see whether the file needs uploading at all, and if it doesn't, it skips it.

However, with the search policy set to all, a single copy of the file anywhere in the union is sufficient to satisfy rclone that the file is available.

I don't think there is a policy which does what you want at the moment. We could imagine an and policy which would require the file to be present on all upstreams before reporting it as existing. This would be a disaster for reading files, but would be just what you need for uploading them.
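To illustrate the difference, here is a hypothetical sketch of the two semantics. This is not rclone's actual implementation, just an illustration of how an all search policy answers "does this file exist?" versus how the proposed and policy would:

```python
# Hypothetical illustration of union search-policy semantics.
# This is NOT rclone's code; upstreams are modelled as sets of paths.

def search_all(upstreams, path):
    """'all' search policy: the file counts as present if ANY
    upstream has it, so one stray copy suppresses the upload."""
    return any(path in upstream for upstream in upstreams)

def search_and(upstreams, path):
    """Proposed 'and' policy: the file only counts as present if
    EVERY upstream has it, so a missing copy triggers an upload."""
    return all(path in upstream for upstream in upstreams)

# One upstream already has the file; the other two do not.
b2 = {"blah.txt"}
dropbox = set()
onedrive = set()
upstreams = [b2, dropbox, onedrive]

print(search_all(upstreams, "blah.txt"))  # True  -> copy skips the file
print(search_and(upstreams, "blah.txt"))  # False -> copy would upload it
```

This is why the stray copy on one upstream was enough to make the copy a no-op under the all policy.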

I can think of a couple of work-arounds.

You could use this flag, which forces an upload regardless of whether the file already exists at the destination:

  --no-check-dest          Don't check the destination, copy regardless.

You don't want to use that for regular syncing, though, as it will re-upload every file every time.

The other thing you could do is sync between the remotes every now and again to make sure they are all complete.

I'd probably use rclone copy, so once a week you could run:

rclone copy Raw-GDrive: Raw-GSuite:
rclone copy Raw-GSuite: Raw-OneDrive:
rclone copy Raw-OneDrive: Raw-GDrive:
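The pattern behind that chain is a ring: each remote is copied to the next, and the last back to the first, so every remote eventually receives every file. As a sketch (the remote names are just the examples from above, not a recommendation), the command list could be generated mechanically:

```python
# Sketch: build a ring of "rclone copy" commands so each remote is
# copied to the next one around, and the last back to the first.
# The commands are only constructed as strings here, not executed.

def ring_copy_commands(remotes):
    """Pair each remote with the next one around the ring and
    return the rclone copy command lines."""
    pairs = [(remotes[i], remotes[(i + 1) % len(remotes)])
             for i in range(len(remotes))]
    return ["rclone copy %s %s" % (src, dst) for src, dst in pairs]

for cmd in ring_copy_commands(["Raw-GDrive:", "Raw-GSuite:", "Raw-OneDrive:"]):
    print(cmd)
# rclone copy Raw-GDrive: Raw-GSuite:
# rclone copy Raw-GSuite: Raw-OneDrive:
# rclone copy Raw-OneDrive: Raw-GDrive:
```

One full pass of the ring propagates a file from any single remote to all the others, at the cost of running as many copies as there are remotes.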

Or maybe you could use rclone check instead to generate a report of missing items and fix them up manually.

OK, thanks. It doesn't really seem to make sense to me, though; "copy it everywhere unless it is somewhere" seems like a much less natural and much less useful concept than "copy it everywhere that it is not". I'm honestly having a hard time even imagining a case where I would ever want to "copy it everywhere unless it is somewhere", really.

At the very least, it seems like this super-counterintuitive behavior should be pointed out in the documentation. I think the root of the confusion is that a union is simultaneously treated as a single atomic thing in one sense but as a list of separate things in another (as opposed to being conceptually one but functionally the other). The current documentation didn't even make me consider the possibility that it would behave this way.

Also, perhaps a warning should be emitted in the output. If I hadn't happened to use -P, which I often don't use for small numbers of small files (such as what I was uploading), I would never have noticed that the file wasn't actually backed up until I tried to retrieve it. By that time, the one place where it did get backed up might have gone out of business years earlier, leaving me with no backup at all, despite having intentionally given rclone a command which, in the vast majority of cases, would have backed the file up to several different companies' servers precisely to protect against that contingency.

But really, it seems to me like it just shouldn't behave this way in the first place. Instead, I think copy with all should first determine which of the individual underlying services already have an up-to-date copy of the file, and then upload it to all of the services where it is missing or out of date.
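The behaviour proposed above could be sketched like this (purely illustrative, not rclone code): instead of asking "is the file anywhere in the union?", ask each upstream individually and upload only to the ones where it is missing:

```python
# Illustrative sketch of per-upstream checking: ask each upstream
# individually and upload only where the file is missing. This is
# NOT rclone's implementation; upstreams are modelled as named sets.

def upstreams_missing_file(upstreams, path):
    """Return the names of the upstreams that do not have the file."""
    return [name for name, files in upstreams.items() if path not in files]

upstreams = {
    "Dropbox": set(),
    "B2": {"blah.txt"},      # stray pre-existing copy
    "OneDrive": set(),
}

# Upload targets: everywhere the file is missing, regardless of B2.
print(sorted(upstreams_missing_file(upstreams, "blah.txt")))
# ['Dropbox', 'OneDrive']
```

Under this model, the stray copy on one upstream would not suppress the uploads to the others.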

Due to the way the union works, it can only report to the higher-level copying layers that a file is present or that it isn't.

At the moment, if one copy of the file is present on any upstream, the union declares it to be present.

With the and policy, a file would only appear if a copy were present on every upstream (and all the copies were the same).

This would be fine for copying to the union, but I suspect the all policy would be preferred when reading from the union.

What do you think, is it worth making an and policy?

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.