RClone S3 Bucket size setting

What is the problem you are having with rclone?

I don't know if this is a problem or if things are working as expected, but I am trying to combine multiple cloud storage accounts from multiple providers using a union remote, with limited success.

The process I am trying to achieve is: compress the file (rclone's gzip is fine for my purposes), chunk it up, encrypt the chunks, and store the encrypted chunks across multiple providers at random.

I have had success with all providers except those that use S3. Basically, the S3 providers do not report their size (via rclone about), so the union just treats each of them as being 1 PiB in size. From my reading and understanding, this is expected behaviour.
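As a sanity check on which upstreams can answer rclone about at all, the backend features command dumps each remote's capabilities; if I am reading the JSON output right, the About entry under Features comes back false for the S3 remotes:

rclone --config="rclone.conf" backend features "Scaleway:"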

I have tried the --vfs-disk-space-total-size flag, but that only affects the reported size of the union as a whole, not the individual remotes within it.
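For reference, this is how I am passing the flag on the mount (the 300G figure is just an example value, not a real quota):

rclone --config="rclone.conf" mount "Compress:" p: --vfs-cache-mode full --vfs-disk-space-total-size 300G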

What I am wondering is whether there is a way to manually set the size of a remote in the config file, configurable on a per-remote basis, with something like "set storage size = 75GB" or similar.

Thanks for your time.

Run the command 'rclone version' and share the full output of the command.

rclone v1.62.2
- os/version: Microsoft Windows 10 Pro 22H2 (64 bit)
- os/kernel: 10.0.19045.2846 Build 19045.2846.2846 (x86_64)
- os/type: windows
- os/arch: amd64
- go/version: go1.20.2
- go/linking: static
- go/tags: cmount

Which cloud storage system are you using? (eg Google Drive)

Mega, OneDrive, Dropbox, pCloud, Scaleway (S3), Storj (S3)

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone --config="rclone.conf" mount "Compress:" p: --vfs-cache-mode full

The rclone config contents with secrets removed.

[Dropbox]
type = dropbox
token = *REMOVED*

[Mega]
type = mega
user = *REMOVED*
pass = *REMOVED*

[OneDrive]
type = onedrive
token = *REMOVED*
drive_id = *REMOVED*
drive_type = personal

[pCloud]
type = pcloud
hostname = eapi.pcloud.com
token = *REMOVED*

[Scaleway]
type = s3
provider = Scaleway
access_key_id = *REMOVED*
secret_access_key = *REMOVED*
region = nl-ams
endpoint = s3.nl-ams.scw.cloud
acl = private
storage_class = STANDARD

[Storj - S3]
type = s3
provider = Storj
access_key_id = *REMOVED*
secret_access_key = *REMOVED*
endpoint = gateway.storjshare.io

[Union]
type = union
upstreams = "Dropbox:/Pool" "Mega:/Pool" "OneDrive:/Pool" "Scaleway:/rclone.pool/Pool" "pCloud:/Pool" "Storj - S3:/rclone.pool/Pool"
action_policy = eprand
create_policy = eprand

[Crypt]
type = crypt
remote = Union:/Secure
password = *REMOVED*

[Chunk]
type = chunker
remote = Crypt:
chunk_size = 50Mi

[Compress]
type = compress
remote = Chunk:

Rclone would then need to count up the files in the remote to work out how much space is used, wouldn't it? That could be quite an overhead.
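You can get a feel for that overhead with rclone size, which does exactly this kind of walk (it has to list every object under the path), e.g. against one of the S3 upstreams from the config above:

rclone --config="rclone.conf" size "Scaleway:rclone.pool"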