<< Best remote-2-remote configuration >>

What is the problem you are having with rclone?

I run simulations on a remote cluster with the Slurm queue manager.
I used to upload the files to Google Drive, but I have now started using OpenDrive.

I want to know the best practice/configuration/setup to copy/move from one remote (Google Drive) to another (OpenDrive).

Run the command 'rclone version' and share the full output of the command.

rclone v1.65.2
- os/version: centos 7.9.2009 (64 bit)
- os/kernel: 3.10.0-1160.108.1.el7.x86_64 (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.21.6
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Google Drive & OpenDrive

The command you were trying to run (eg rclone copy /tmp remote:tmp)


Please run 'rclone config redacted' and share the full output.

type = drive
client_id = XXX
client_secret = XXX
scope = drive
token = XXX
team_drive = 

type = opendrive
username = XXX
password = XXX

A log from the command that you were trying to run with the -vv flag


Not sure what your issue is. It is like asking what the best car is without specifying any details... for racing? For moving heavy cargo?

IMO the best practice is to keep things as simple as possible... there is no single "best" set of flags to suit all requirements. If you are not sure, use defaults.

If rclone copy/sync/move src: dst: works, then just use it.
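For example, a minimal remote-to-remote transfer with defaults might look like this (remote names "gdrive" and "opendrive" are assumptions based on the config shared above):

```shell
# Preview what would be transferred, without changing anything:
rclone copy gdrive: opendrive: --dry-run -v

# Then run the actual copy with default settings:
rclone copy gdrive: opendrive: --progress
```

`copy` leaves the source untouched; use `move` to delete source files after transfer, or `sync` to make the destination match the source exactly (which can delete files on the destination).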

Your remotes configuration looks OK. Only look for a more specific solution if you run into specific issues. The default values used by rclone are usually good enough.

One concern is whether the transferred files are buffered in memory or written to the local hard disk when copying from one remote to another. This can affect the performance of the node/computer where I am running the transfer, for example.

Correct. And what particular performance impact are you trying to mitigate? Copying one million 1 KB files vs one 1 GB file is a very different problem.

Again, if you are not sure, use defaults.

For example, if the transfer uses memory, I need to specify the amount of memory available in the job script.
If the transfer uses the local hard disk, and it is an SSD, the extra writes can cause wear/degradation.
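Since the transfer runs under Slurm, the memory reservation goes in the batch script. A hypothetical sketch (remote names and the memory/time values are assumptions, not recommendations):

```shell
#!/bin/bash
#SBATCH --job-name=rclone-transfer
#SBATCH --mem=2G            # reserve headroom for rclone's transfer buffers
#SBATCH --time=24:00:00
#SBATCH --cpus-per-task=1

# Remote-to-remote copy; data streams through this node, it is not
# staged on the local disk by default.
rclone copy gdrive: opendrive: --log-file=transfer.log -v
```

The `--mem` value should be sized to the transfer settings (see the memory estimate discussed below in the thread), with some margin.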

We are going in circles here :) You want a specific answer to a very general question. And the answer is: use defaults - they try to balance various scenarios to provide the best end-user experience.

Ok. Thank you very much for your time.

There is no such setting as --max-memory-usage.

Each storage system is different, but in general:
total memory ≈ number of transfers × chunk size
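As a rough worked example of that estimate (the specific flag values here are illustrative assumptions, not the thread's recommendation):

```shell
# total memory ≈ transfers × chunk size
# e.g. with: rclone copy gdrive: opendrive: --transfers 4 --drive-chunk-size 64M
transfers=4
chunk_mib=64

echo "estimated buffer memory: $(( transfers * chunk_mib )) MiB"
# 4 × 64 MiB = 256 MiB for the upload buffers alone
```

This gives the value to feed into a scheduler memory request (plus margin for rclone's own overhead).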

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.