Move files between remotes in "rclone union"?

What is the problem you are having with rclone?

After creating an rclone union, how do you (or can you) move files between remotes? Is there something like an rclone switch union-remote:/path/to/file remote1 command? Is it possible at all to choose which remote a particular file will be stored on?

If it's not possible with rclone union, are you aware of any other method on Linux to logically merge directories of two remotes, so that you can explicitly choose where their files will go?

I imagine you could run rclone move remote1:/path/to/file remote2:/path/to/file, but then you need to repeat the path twice each time, which is tedious and prone to human error.

Run the command 'rclone version' and share the full output of the command.

rclone v1.62.2
- os/version: arch "rolling" (64 bit)
- os/kernel: 6.3.9-arch1-1 (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.20.4
- go/linking: dynamic
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

remote1: AWS Glacier
remote2: Google Drive

The rclone config contents with secrets removed.

[remote2]
type = drive
client_id =
client_secret =
scope = drive
token =
root_folder_id =
team_drive = 

[remote1]
type = s3
provider = AWS
access_key_id =
secret_access_key =
region =
location_constraint =
acl = private
storage_class = DEEP_ARCHIVE

What is the actual problem you are trying to solve? If you share your end goal, maybe one of the forum members can see an alternative path to it.

A union remote, as per its name, is about making the underlying remotes look like one.

If you want to explicitly choose where files will go, use the combine remote.

If you want to explicitly choose where files will go, use the combine remote.

I thought that combine can't make two remotes appear as one?


remote1:file1
remote2:file2

under combine becomes:

combined:remote1/file1
combined:remote2/file2

what I was hoping for was:

union-remote:file1
union-remote:file2

If you change file1 it changes on remote1. If you change file2 it changes on remote2. They don't leave their original remote location.

The end goal is to move some of the files off Google Drive due to the recent 5 TB restriction. AWS Glacier doesn't have a maximum storage limit, but it is very slow to retrieve files from, so it would be very useful to be explicit about which file goes to AWS and which stays on Google Drive.

I was hoping that this could be achieved without separating them. Right now all files are stored in one tree (a big monorepo). It would be good if it were possible to keep them in the same location "logically" but move them somewhere else "physically", if that makes sense.

Yes, this is correct - that is how it works.

If the file already exists, you could achieve it with the epff (existing path, first found) action policy.

Yes, this is correct - that is how it works.

That's good to hear. Sorry if it sounded obvious - I might have gotten overwhelmed by this at some point.

I assume that if you want to move files across remotes, rclone union doesn't offer any new interface to do that?

rclone move-same-path ./file1 remote2

You just have to move them between the two remotes "directly"?

rclone move remote1:folder1/file1 remote2:folder1/ 

I suppose this is doable, but it's a lot of typing (for deeply nested directories) for what seems like a really common operation. You could write a shell script for that, but I'd rather not reinvent the wheel if someone has already solved this "issue".

Yes this is correct.

You can control where files are by using path-preserving policies - so you can have folder1 on remote1 and folder2 on remote2. But you can achieve the same with combine, which gives you one remote with access to multiple remotes.

What you can investigate is GitHub - trapexit/mergerfs: a featureful union filesystem - rclone union actually uses some ideas from there. It has many more options than rclone.

Then you mount remote1 and remote2 with rclone and merge the mount points with mergerfs.
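As an illustration, that setup might look like the following (the mount points and the category.create choice are assumptions, not a tested configuration):

```shell
# Mount each remote separately, then merge the two mount points.
rclone mount remote1: /mnt/remote1 --daemon
rclone mount remote2: /mnt/remote2 --daemon
# category.create=ff ("first found") sends new files to the first branch listed
mergerfs /mnt/remote1:/mnt/remote2 /mnt/union -o category.create=ff
```

mergerfs create policies (category.create and friends) are where the "explicitly choose which branch a file goes to" control lives.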

Thanks, I've discussed this and related topics with the mergerfs author recently. I think the biggest problem for me was the lack of demos for this particular purpose. He kindly shared a useful collection of articles, which I hope will explain this a little more. I don't doubt that I'm making some wrong assumptions, but most examples focus on a RAID-like setup, which is a bit different from "offloading a few selected files to an archive", and that makes it hard to crack.

I think you could use both union and combine at the same time.

Union would give you a unified view of both remotes.

And combine can be used to move files between remotes.

As you are talking about a programmatic way:

rclone switch union-remote:/path/to/file remote1

it is the same as doing this operation on a combine remote. So IMHO all the required functionality is already there.

It could be 'packaged' as a bash script applying all the logic, invoked with arguments like /path/to/file ArchiveRemote,

and inside just running rclone move:

rclone move remote1:path/to/file remote2:path/to/file

so actually combine is not even needed.
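Such a script could be sketched like this (the rclone_switch name is made up; remote1/remote2 are the names from this thread, and the command is echoed rather than executed so it can be inspected - drop the echo to move for real):

```shell
# Hypothetical wrapper: move a file to the same path on the other remote,
# so the long path only has to be typed once.
# Usage: rclone_switch <path/to/file> <destination-remote>
rclone_switch() {
    path=$1
    dest=$2
    # assumption: the file lives on whichever of the two remotes is not the destination
    src=remote1
    [ "$dest" = "remote1" ] && src=remote2
    # echo the command instead of running it; remove 'echo' for real moves
    echo rclone moveto "$src:$path" "$dest:$path"
}

rclone_switch very/long/path/to/a/file remote2
```

moveto is used rather than move so that a single file keeps its name at the destination.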

You would like your union to always create new content in the hot remote - so the eplfs (existing path, least free space) policy. As your local drive will always have less space than the S3 remote, you can even mount it and just use it from your OS directly.

You would like your union to always create new content in the hot remote - so the eplfs (existing path, least free space) policy.

Thank you, that was another pressing question - that policy says least free space, but in this case you want to always use the "hot" remote, which confused me. I suppose another way to control that is to make the "cold" remote no-create.

You are right indeed - your idea is actually better, as it explicitly defines the cold one.
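For reference, a sketch of what that union section could look like in rclone.conf (untested; the union name is made up, and the exact ::nc upstream spelling should be checked against the union docs):

```
[union-remote]
type = union
# hot remote first; cold remote tagged :nc ("no create") so new files go to the hot one
upstreams = remote2: remote1::nc
```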


I forgot to mention the biggest hurdle, which also started this thread.

If you have a file on one remote:

remote1:very/long/path/to/a/file

how do you move it to the same location on another remote? You need to recreate the very/long/path/to/a folder structure:

mkdir -p remote2/very/long/path/to/a

and this only works if you have it mounted. I'm not sure if rclone has a corresponding mkdir -p command.

and then you need to move the file:

mv remote1/very/long/path/to/a/file remote2/very/long/path/to/a/file

This is really cumbersome and error-prone. If you cd into that folder you need to reformat the paths to be relative to the remotes. If you miss the cache, the first command may take a long time. It would be great if there was an rclone command that simply does:

rclone migrate ./file remote1 remote2

rclone copyto remote:file remote1:aaa/bbb/ccc/ddd/file

moveto does the same, but moves instead of copying.
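Since the source and destination share the same path, a shell variable keeps the typing down (a sketch with the remote names from this thread; the command is echoed so it can be checked first):

```shell
# Type the long path once and reuse it for both sides.
build_copyto() {
    # remove 'echo' to actually run the copy
    echo rclone copyto "remote1:$1" "remote2:$1"
}

p="very/long/path/to/a/file"
build_copyto "$p"
```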


Thank you, I think that solves it.

A few gotchas: if you are in a mounted directory and need to convert its files to "rclone absolute paths" (e.g. ./file to 'remote:very/long/path/to/a/file'), this can be accomplished with a zle widget (zsh only):

rclone-convert-path () {
    # split buffer using "shell semantics" (quotes will be recognized)
    local words=(${(z)BUFFER})
    local newwords=()
    local word rclone_drive rclone_mount_path file_rel_path
    # (Q) - remove one level of quotes from the resulting words
    for word ( ${(Q)words} ) {
        if [[ -e $word ]] {
            # filesystem source, e.g. "remote:" for an rclone mount
            rclone_drive=$(df "$word" | sed 1d | awk '{print $1}')
            # mount point, e.g. "/mnt/remote"
            rclone_mount_path=$(df "$word" | sed 1d | awk '{print $6}')
            # absolute path with the mount point prefix stripped
            file_rel_path=${"$(readlink -f $word)"#$rclone_mount_path}
            newwords+=("${rclone_drive}${file_rel_path}")
        } else {
            # leave non-path words (the command itself, flags) untouched
            newwords+=($word)
        }
    }
    # join array with " " whitespace character
    BUFFER=${(j: :)newwords}
}
zle -N rclone-convert-path

You can then adjust that path to fit the rclone copyto remote:very/long/path/to/a/file remote:very/long/path/to/a/file format relatively quickly.

If you are moving an encrypted file, use rclone's reverse cryptdecode:

rclone cryptdecode --reverse remote-encrypted: 'very/long/path/to/a/file'

mind the space between remote-encrypted: and 'very/long/path/to/a/file'

gives you:

archive/notes/dataset-source-ideas.txt ebp...00/q6v...q0/8da...t1g

You can then use the second part to fill in the copyto/moveto command. Make sure to prepend the name of the root folder of the encrypted remote.

rclone copyto remote-encrypted:ebp...00/q6v...q0/8da...t1g remote:crypt/ebp...00/q6v...q0/8da...t1g
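Those two steps could be glued together in a small helper (a sketch; the build_crypt_copyto name is made up, and "crypt" is the crypt root from the example above):

```shell
# Build the copyto command from cryptdecode --reverse output:
# $1 = plain-text path (shown for context, unused), $2 = encrypted path.
build_crypt_copyto() {
    enc=$2
    # remove 'echo' to actually run the copy
    echo rclone copyto "remote-encrypted:$enc" "remote:crypt/$enc"
}

build_crypt_copyto archive/notes/dataset-source-ideas.txt ebp...00/q6v...q0/8da...t1g
```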

That's still a few steps that might be automated - zle widgets like rclone-copyto-format or rclone-convert-path-encrypted might be useful.


Thank you for sharing it. Maybe somebody else can use it.

When you finish your setup, maybe you can create a GitHub repo with all the details? It could be really useful, as I think the problem you are solving is not unique.

Sure, I will try to create a GitHub gist with rclone.conf details and other gotchas, and link it here later on.

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.