Multiple --config rclone.conf files? Part 2 (solution follow-up)

The old thread has already been closed, but I just found a way to solve the problem of concatenating multiple rclone config files from separate directories.

Sadly it only works in zsh. After recently dabbling with zsh's slightly lesser-known `=(...)` form of process substitution, which uses temporary files instead of pipes (and which was really useful to me earlier when working with files without extensions), it struck me that it could be used for rclone as well.

Here it is:
rclone copy <REMOTE1>: <REMOTE2>: --config =(cat .config/rclone/<REMOTE1>.conf <(gpg -d <REMOTE2-ENCRYPTED>.conf))

Works like a charm :slight_smile: It's especially useful when you have encrypted .conf files, like <REMOTE2> in the example above. I couldn't, however, decrypt the .conf file on the command line when it was encrypted with rclone's own config encryption, so I went back to standard gpg.

Using regular pipe-based process substitution (`<(...)`) results in an error, which is why this is a zsh exclusive for now:
Failed to load config file "/proc/self/fd/14": seek /proc/self/fd/14: illegal seek
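For shells without zsh's `=(...)`, a rough equivalent is to build the merged config in an explicit temporary file yourself. This is just a sketch under the same assumptions as above — the remote names, paths, and the encrypted file are placeholders:

```shell
#!/bin/sh
# Portable alternative to zsh's =(...): merge the config files into a
# temporary file, run rclone against it, then clean up on exit.
tmpconf=$(mktemp) || exit 1
trap 'rm -f "$tmpconf"' EXIT

cat ~/.config/rclone/REMOTE1.conf > "$tmpconf"
gpg -d REMOTE2-ENCRYPTED.conf >> "$tmpconf"

rclone copy REMOTE1: REMOTE2: --config "$tmpconf"
```

Unlike a pipe, the mktemp file is a regular, seekable file, so rclone's config loader is happy with it.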

I hope it will be useful for others as well. It simplified my rclone usage a lot.

PS Did you know that you can use the cat command to combine files? :wink:
PPS Do you mind if I tag you, @asdffdsa? I thought you might find this interesting — maybe you could unlock the previous thread and update it with this suggestion?


yes, that is interesting.

if only i could wield such power in the forum, oh yeah :bomb:
in reality, i think the title of this topic is enough for other rcloners to find it.

How does this work when you have rclone updating the config? Many remotes refresh tokens often.

Thanks, I hadn't thought about that. The temporary config file gets removed after the command finishes, so the refreshed token is lost.

I am not an expert on OAuth2, but I thought that as long as you have a "bearer" token, you can still regenerate refresh tokens later?

My Google Drive config still works, at least, but I haven't tried it with longer-running commands.

I am not exactly sure why combining conf files is a good idea.

I have not used the rclone.conf file for many years.

Here is my setup of conf files for Gdrive

I have a single file called drives.conf

This single conf file has the information for every TD I have access to. It is meant purely for inspecting drives, so I can see their contents quickly using one conf. But it can also handle moving stuff from drive to drive, as there are no limits on moving data from TD to TD.

I have another 3,000 conf files, one for every TD I have access to.

I have 200,000 SA accounts, which means I can auto-cycle through them via cmd without having to reuse a small pool.

This was done because, even as a Workspace owner, Google did terminate some of my SAs. Now, with so many, I cycle through them all before going back to SA 0, which means it takes a long time to get through the full list.
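That cycling idea can be sketched roughly like this — the file layout, remote names, and state file here are all made up for illustration, using rclone's `--drive-service-account-file` flag to pick the next SA on each run:

```shell
#!/bin/sh
# Round-robin through service-account JSON files, persisting the
# position between runs in a small state file (hypothetical paths).
state=./sa_index
idx=$(cat "$state" 2>/dev/null || echo 0)

# Pick the idx-th file (mod the number of files) from ./sa/*.json.
set -- ./sa/*.json
shift $((idx % $#))

rclone copy src: td: --drive-service-account-file "$1"

echo $((idx + 1)) > "$state"
```

Each invocation advances the counter, so successive runs walk through all SAs before wrapping around to the first one.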

NOTE: I never use tools like gclone etc.; I find they can easily create duplicate folders when there is an error. So I am strict about using rclone commands directly to prevent this from happening.

Since I use SAs for everything, I even used Excel to auto-generate my rclone config files.
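A scriptable alternative to the Excel approach might look like the sketch below — the `./sa/` layout, remote naming scheme, and `<TD-ID>` placeholder are all assumptions, not the poster's actual setup:

```shell
#!/bin/sh
# Emit one rclone config stanza per service-account JSON file found
# in ./sa/ (hypothetical layout), collecting them into drives.conf.
for sa in ./sa/*.json; do
    name=$(basename "$sa" .json)
    cat <<EOF
[td-$name]
type = drive
scope = drive
service_account_file = $sa
team_drive = <TD-ID>

EOF
done > drives.conf
```

Re-running the script regenerates the whole file, so adding or removing SAs is just a matter of changing the JSON files in the directory.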

Some friends have shared some of their TDs with me. To do that, they simply copied their rclone.conf info for that TD and I pasted it into mine. rclone auto-updates the token every time I run it; their token and mine never match, yet I still maintain access to the TD.

I use Excel to generate all of my rclone commands too, for 3 reasons:

  1. It is easier and faster than manually typing them out
  2. I can make a spreadsheet designed for a specific remote upload and assign it as many SA accounts
    as required to upload it
  3. I have a track record of every file/folder/domain etc. I have ever used rclone for

I will modify my Excel spreadsheets and post them in the Howto Guides for others to use.

It depends on the use case; I guess for most people it won't be useful. But in this case one of the config files was encrypted, and ideally its decryption happens only within the "scope" of copying/moving/deleting files between these remotes. It's probably an edge case, though.

I missed the encryption part.

I will have to read your post a little more thoroughly later, since I plan on doing a massive amount of encrypted drives soon.
