Can I run multiple jobs for same source and destination path?

What is the problem you are having with rclone?

I need to transfer a folder a few TB in size from Google Drive to Dropbox. I am currently running only one job, and I wonder if I can run a second job on a different machine in descending order to speed up the transfer. Job 1 runs in ascending order and Job 2 in descending order: what will happen when they eventually meet in the middle? If both jobs copy the same file, will it be duplicated on the destination? For the record, running two jobs with different source and destination paths is not possible, because everything is in one folder and I cannot simply add subfolders to segregate the files.

What is your rclone version (output from rclone version)

rclone v1.57.0

  • os/version: Microsoft Windows 7 Professional Service Pack 1
  • os/kernel: 6.1.7601.24544 (i686)
  • os/type: windows
  • os/arch: 386
  • go/version: go1.17.2
  • go/linking: dynamic
  • go/tags: cmount

Which cloud storage system are you using? (eg Google Drive)

Google Drive (source) and Dropbox (destination)

The command you were trying to run (eg rclone copy /tmp remote:tmp)

Job 1: rclone copy -Pv gdrive:Folder dropbox:Folder --order-by name,ascending
Job 2: rclone copy -Pv gdrive:Folder dropbox:Folder --order-by name,descending

The rclone config contents with secrets removed.

[gdrive]
type = drive
client_id = Google_ID
client_secret = Google_Secret
scope = drive
acknowledge_abuse = true
stop_on_upload_limit = true
stop_on_download_limit = true
token = Google_Token
team_drive =

[dropbox]
type = dropbox
client_id = Dropbox_ID
client_secret = Dropbox_Secret
shared_folders = true
token = Dropbox_Token

A log from the command with the -vv flag

I have not run both jobs yet, just wondering if it is possible.

hello and welcome to the forum

  • not sure exactly what would happen, but i would say that given those commands,
    the worst that would happen is rclone trying to copy the same file multiple times.
    you might get some errors and it might take more time to transfer the files.

tho i am not sure you need to run multiple rclone commands at the same time using the same source and dest.

why not increase:

  1. --transfers and --checkers
  2. the chunk size
  • for this particular case, gdrive to dropbox, perhaps @Animosity022 might have a suggestion?
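As a concrete sketch of that suggestion, a single job with the parallelism turned up might look like the command below. The flag values are only examples, not tuned recommendations, and the chunk-size flag matters less for small files:

```shell
# One job with more parallelism; the values here are illustrative.
# --transfers: number of files copied in parallel (default 4)
# --checkers: number of files checked in parallel (default 8)
# --dropbox-chunk-size: Dropbox upload chunk size (default 48Mi; bigger
#   chunks use more memory per transfer)
rclone copy -Pv gdrive:Folder dropbox:Folder \
  --transfers 8 \
  --checkers 16 \
  --dropbox-chunk-size 128M
```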

I think you'd be just fine doing that.

If job A finishes a file while job B is in the middle of copying the same one, job B would get a 404 not found on that file and it would just error out.

If you go to copy a file and it's already there, rclone would skip it, as it would already have been transferred.

All in all, I wouldn't worry much, as I feel pretty good that a collision wouldn't cause any issues in the odd case one did happen.
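If you do want two jobs that are guaranteed never to overlap, another sketch is to split the work by filename using rclone's filter rules instead of --order-by. The character-class patterns below are illustrative and assume the files sit directly in the folder; pick a split point that roughly balances the two halves:

```shell
# Machine 1: copy files whose names start with 0-9, a-m, or A-M.
rclone copy -Pv gdrive:Folder dropbox:Folder --include "[0-9a-mA-M]*"

# Machine 2: copy everything the first pattern does not match.
rclone copy -Pv gdrive:Folder dropbox:Folder --exclude "[0-9a-mA-M]*"
```

Since the two filters are complementary, the jobs never touch the same file, so the collision question goes away entirely.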


  • do you have an optimized command to copy from gdrive to dropbox or are the defaults good enough?
  • is it better to:
    --- run one rclone command and increase --transfers, --checkers and the chunk size, or
    --- run multiple rclone commands using the same source and dest?

I'm not sure what checkers are, but I did try the transfers flag and was only able to get to --transfers 6; anything higher than that and the job runs into an error after some time (my machine is a low-end VM).

Does a bigger chunk size help with small files? The folder has about 300k files and the biggest file is only about 50MB.


  • if the max transfers is 6, not sure there is much value in running two rclone commands.
  • if the vm is low on memory, then increasing the chunk size would not help.

The second job will be on another machine, so I think it would definitely help, although I'm not sure whether my internet bandwidth will bottleneck the cloud-to-cloud copying.

yeah, i missed that in your first post.

you could use a free/cheap vm from google and run the second copy of rclone on that.

Do not use a Google VM as the egress cost will murder you from Drive to Dropbox.

You pay $0.12 per GB (it gets a little cheaper at volume), but to move, say, 10TB, you are in for quite the sticker shock.
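The back-of-the-envelope math, assuming a flat $0.12/GB rate (real pricing is tiered), can be checked with a one-liner:

```shell
# Rough egress cost for 10 TB at a flat $0.12/GB (tiers vary in practice).
awk 'BEGIN { tb = 10; gb = tb * 1024; printf "~$%.0f to egress %d TB\n", gb * 0.12, tb }'
```

which comes out to well over a thousand dollars.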

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.