Mount all remotes together / rclone serve all remotes together

Hello,

Just like rclone rcd --rc-serve lets me list an object in any remote via http://127.0.0.1:5572/[remote:path]/path/to/object, is it possible to make that access writable rather than read-only? In other words, can we mount all remotes together?

For example, I have remote_1, remote_2, ..., remote_10.
If I could run the rclone serve webdav command without giving a specific remote, rclone would show [remote_1:], [remote_2:], ..., [remote_10:] as subfolders, each with full read, write, list, etc. functionality.

Is there a way to achieve something like this?

It would be a great feature, as it would avoid mounting every remote as a separate process.
Thank you.

Check out the union backend documentation.

That allows you to combine remotes and adjust how they work via policies.
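
For example, a minimal union remote over your existing remotes could look something like this (just a sketch with placeholder remote names; the policies shown are the defaults):

[union]
type = union
upstreams = remote_1: remote_2: remote_3:
action_policy = epall
create_policy = epmfs
search_policy = ff

Note that a union merges all upstreams into a single directory tree rather than showing each remote as its own subfolder.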

I guess you are looking for the combine backend.
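
A minimal sketch (placeholder remote names, untested):

[combine]
type = combine
upstreams = remote_1=remote_1: remote_2=remote_2: remote_3=remote_3:

Each upstream then appears as its own read/write subfolder, so something like

rclone serve webdav combine: --addr 127.0.0.1:8080

should serve all of the remotes together, writable, from a single process.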

Thank you for your kind reply.

I've run a quick test with the combine backend.

From the debug output: I only copied one file into mega_1, but it looks like rclone creates all of the backend remotes first and then makes the transfer to mega_1. Doesn't creating all of the backend remotes together cost memory? Or am I wrong? Thanks again.

$ rclone copy file1 combine:mega_1/test -vv
2022/12/14 18:50:56 DEBUG : rclone: Version "v1.60.1" starting with parameters ["rclone" "copy" "file1" "combine:mega_1/test" "-vv"]
2022/12/14 18:50:56 DEBUG : Creating backend with remote "file1"
2022/12/14 18:50:56 DEBUG : Using config file from "/home/username/.config/rclone/rclone.conf"
2022/12/14 18:50:56 DEBUG : fs cache: adding new entry for parent of "file1", "/home/username/.config/rclone"
2022/12/14 18:50:56 DEBUG : Creating backend with remote "combine:mega_1/test"
2022/12/14 18:50:56 DEBUG : Creating backend with remote "mega_2:data"
2022/12/14 18:50:56 DEBUG : Creating backend with remote "mega_3:data"
2022/12/14 18:50:56 DEBUG : Creating backend with remote "mega_1:data"
2022/12/14 18:51:00 DEBUG : file1: Need to transfer - File not found at Destination
2022/12/14 18:51:04 INFO  : file1: Copied (new)
2022/12/14 18:51:04 INFO  : 
Transferred:   	       10 MiB / 10 MiB, 100%, 3.334 MiB/s, ETA 0s
Transferred:            1 / 1, 100%
Elapsed time:         8.0s

rclone config file:

[mega_1]
type = mega
user = xxxxx
pass = xxxxx

[mega_2]
type = mega
user = xxxxx
pass = xxxxx

[mega_3]
type = mega
user = xxxxx
pass = xxxxx

[combine]
type = combine
upstreams = mega_1=mega_1:data mega_2=mega_2:data mega_3=mega_3:data

Correct, rclone creates all of the backend remotes first and then makes the transfer to mega_1.

Correct, creating all of the backend remotes together costs some memory.

And no, you are not wrong, but your focus may be.

How much memory does this command take:

rclone copy file1 mega_1:data/test --ignore-times -vv

How much memory does this command take:

rclone copy file1 combine:mega_1/test --ignore-times -vv

What is the relative difference?
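
On Linux you can wrap both commands in GNU time to see the peak memory, e.g. (assuming /usr/bin/time is GNU time):

/usr/bin/time -v rclone copy file1 mega_1:data/test --ignore-times -vv
/usr/bin/time -v rclone copy file1 combine:mega_1/test --ignore-times -vv

and then compare the two "Maximum resident set size" lines.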

Is it possible to create only the one backend remote that is actually needed, rather than all of them every time?

rclone rcd --rc-serve creates only the backend that is requested, but access through it is read-only.
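
For example, roughly what I mean (a sketch, with auth disabled via --rc-no-auth just for brevity):

# start the daemon with object serving enabled
rclone rcd --rc-serve --rc-addr 127.0.0.1:5572 --rc-no-auth

# fetching an object creates only the mega_1 backend, on demand,
# but this HTTP access is read-only (-g stops curl interpreting the brackets)
curl -g "http://127.0.0.1:5572/[mega_1:data]/test/file1" -o file1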

Here is the test result:

$ /usr/bin/time -v rclone copy file1 mega_1:data/test --ignore-times -vv
DEBUG : rclone: Version "v1.60.1" starting with parameters ["rclone" "copy" "file1" "mega_1:data/test" "--ignore-times" "-vv"]
DEBUG : Creating backend with remote "file1"
DEBUG : Using config file from "/home/username/.config/rclone/rclone.conf"
DEBUG : fs cache: adding new entry for parent of "file1", "/home/username/.config/rclone"
DEBUG : Creating backend with remote "mega_1:data/test"
DEBUG : file1: Need to transfer - File not found at Destination
INFO  : file1: Copied (new)
2022/12/14 20:14:53 INFO  : 
Transferred:   	       10 MiB / 10 MiB, 100%, 0 B/s, ETA -
Transferred:            1 / 1, 100%
Elapsed time:         2.1s

DEBUG : 12 go routines active
	Command being timed: "rclone copy file1 mega_1:data/test --ignore-times -vv"
	User time (seconds): 0.26
	System time (seconds): 0.05
	Percent of CPU this job got: 14%
	Elapsed (wall clock) time (h:mm:ss or m:ss): 0:02.20
	Average shared text size (kbytes): 0
	Average unshared data size (kbytes): 0
	Average stack size (kbytes): 0
	Average total size (kbytes): 0
	Maximum resident set size (kbytes): 65680
	Average resident set size (kbytes): 0
	Major (requiring I/O) page faults: 0
	Minor (reclaiming a frame) page faults: 9089
	Voluntary context switches: 1895
	Involuntary context switches: 62
	Swaps: 0
	File system inputs: 20480
	File system outputs: 0
	Socket messages sent: 0
	Socket messages received: 0
	Signals delivered: 0
	Page size (bytes): 4096
	Exit status: 0
$ /usr/bin/time -v rclone copy file1 combine:mega_1/test2 --ignore-times -vv
DEBUG : rclone: Version "v1.60.1" starting with parameters ["rclone" "copy" "file1" "combine:mega_1/test2" "--ignore-times" "-vv"]
DEBUG : Creating backend with remote "file1"
DEBUG : Using config file from "/home/username/.config/rclone/rclone.conf"
DEBUG : fs cache: adding new entry for parent of "file1", "/home/username/.config/rclone"
DEBUG : Creating backend with remote "combine:mega_1/test2"
DEBUG : Creating backend with remote "mega_3:data"
DEBUG : Creating backend with remote "mega_1:data"
DEBUG : Creating backend with remote "mega_2:data"
DEBUG : file1: Need to transfer - File not found at Destination
INFO  : file1: Copied (new)
INFO  : 
Transferred:   	       10 MiB / 10 MiB, 100%, 0 B/s, ETA -
Transferred:            1 / 1, 100%
Elapsed time:         4.5s

DEBUG : 22 go routines active
	Command being timed: "rclone copy file1 combine:mega_1/test2 --ignore-times -vv"
	User time (seconds): 0.46
	System time (seconds): 0.05
	Percent of CPU this job got: 11%
	Elapsed (wall clock) time (h:mm:ss or m:ss): 0:04.55
	Average shared text size (kbytes): 0
	Average unshared data size (kbytes): 0
	Average stack size (kbytes): 0
	Average total size (kbytes): 0
	Maximum resident set size (kbytes): 67660
	Average resident set size (kbytes): 0
	Major (requiring I/O) page faults: 0
	Minor (reclaiming a frame) page faults: 9567
	Voluntary context switches: 2383
	Involuntary context switches: 44
	Swaps: 0
	File system inputs: 20480
	File system outputs: 0
	Socket messages sent: 0
	Socket messages received: 0
	Signals delivered: 0
	Page size (bytes): 4096
	Exit status: 0

User time (seconds): 0.26 -> 0.46
Maximum resident set size (kbytes): 65680 -> 67660, which is only a small increase. But I'm only using 3 backend remotes here; if I use 2000 remotes, will the memory usage become significant?

Thanks again.

Perhaps - I don't know. Creating only the backend that is actually needed is probably a lot easier said than done; it would require some kind of on-demand loading of nested remotes, and that could easily get very complex.

Let's make a guesstimate: 65680 KB + (67660 KB - 65680 KB) * 2000 / 3 ≈ 1.3 GB. That is roughly 660 KB of extra resident memory per upstream, scaled linearly to 2000 upstreams on top of the ~65 MB base.

How much memory do you expect to be used by 2000 mounts started individually or using rclone rcd?
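
For a very rough comparison, scaling the standalone figure from your own measurement the same way:

65680 KB * 2000 ≈ 130 GB

(a back-of-the-envelope estimate, assuming each separate process keeps roughly its ~65 MB baseline resident).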
