Questions about moving from gdrive to dropbox

#### What is the problem you are having with rclone?

Hey guys, first of all, thank you so much for the amazing tool you've created; it's really changed the game in a lot of ways. Apologies if these are dumb questions, I am new to rclone.

I am currently moving around 400TB of data from Drive to Dropbox and would like to inquire about a few things:

1- I have 5 active seedboxes; can I utilize them to make the syncing process faster, or would they interfere with each other, e.g. by syncing the same file at the same time?

2- Are there any wrong flags in the command I'm using?

3- I have my data on a shared drive with multiple gmails having access to it. Can I use different gmails (with different API keys) to transfer more than 10TB a day?

#### Run the command `rclone version` and share the full output of the command.

```
- os/version: ubuntu 22.04 (64 bit)
- os/kernel: 5.15.0-73-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.20.2
- go/linking: static
- go/tags: none
```

#### Which cloud storage system are you using? (eg Google Drive)
Google Drive to Dropbox

#### The command you were trying to run (eg `rclone copy /tmp remote:tmp`)  

```
rclone sync gdrive: dropbox: -P --checkers 12 --log-file /home/vault/.config/logsrclone/rclone_gdropbox.log -v --tpslimit 12 --transfers 8 --max-transfer 9000G --drive-chunk-size 32M
```

Thank you in advance

You could benefit from them; create individual Dropbox and Google app IDs for each server.

To avoid sync conflicts, simply do not sync the same data, e.g.:

server1 - sync folders starting with A to K
server2 - sync folders starting with L to Z

Or whatever similar logic works for your data. There is a post here describing how to do this using filters.

That's brilliant, man, thank you!

Just making sure I understood it right.

server 1:

```
rclone sync gdrive: dropbox: -P --checkers 12 --log-file /home/vault/.config/logsrclone/rclone_gdropbox.log -v --tpslimit 12 --transfers 8 --include "/[A-C]**/**" --max-transfer 9000G --drive-chunk-size 32M
```

This server would sync files starting from A to C

server 2:

```
rclone sync gdrive: dropbox: -P --checkers 12 --log-file /home/vault/.config/logsrclone/rclone_gdropbox.log -v --tpslimit 12 --transfers 8 --include "/[D-F]**/**" --max-transfer 9000G --drive-chunk-size 32M
```

This server would sync files starting from D to F

Is that correct? and would you recommend that I change any of the flags I am using?

These are folders, not files (hence the `**`), and they only match capital letters. It is better to use filters, where you can specify more detail; it is all in the linked post, and all the details are in the docs.


```
+ /[a-b]**/**
+ /[A-B]**/**
+ /[0-2]**/**
- **
```

Note that this applies to the root folder only.

You might have to change it based on your data structure, e.g.:

```
+ /FolderWithMyData/[a-b]**/**
+ /FolderWithMyData/[A-B]**/**
+ /FolderWithMyData/[0-2]**/**
- **
```

Experiment; you can test with `--dry-run` to see if you get what you want before you run it for real.
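To illustrate, the rules above can go in a filter file passed via `--filter-from`, and the whole thing can be rehearsed with `--dry-run` (the file name `filters.txt` is just an example; adjust the rules to your own folder names):

```shell
# Write the filter rules to a file (one server's slice of the data).
cat > filters.txt <<'EOF'
+ /[a-b]**/**
+ /[A-B]**/**
+ /[0-2]**/**
- **
EOF

# Dry run: lists what would be transferred without copying anything.
rclone sync gdrive: dropbox: --filter-from filters.txt --dry-run -P
```

Once the dry-run output matches what you expect, drop `--dry-run` and add your usual flags back.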

got it, cheers bro

If you miss something, it is not a big deal. When finished you should run:

```
rclone sync source: dest:
```

to verify that everything is in sync and, if needed, sync any missing bits. If nothing was missed, it will be quick.

Awesome man! I will start running those commands and I should be done transferring all my data in 2-3 weeks thanks to you.

Just wondering: say I lose connection in the middle of uploading a file and it uploads a corrupted version of that file. When I run the sync command again, would it replace the corrupted version with a good one?

With a transfer of this size some corruption can happen, but a subsequent or final sync will catch it and re-upload what is needed. So I would not worry about it.
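For extra peace of mind after the final sync, `rclone check` compares the two remotes without transferring anything (a sketch; the remote names assume the `gdrive:` and `dropbox:` config from earlier in this thread):

```shell
# Verify that everything on the source exists and matches on the destination.
# --one-way only reports files missing/different on the destination.
rclone check gdrive: dropbox: --one-way -P
```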

In your command you should add `--tpslimit-burst 0`.

If it were me, I would also stick to the default checkers and transfers. With your settings you only gain if you have many small files. For bigger files it does not make any difference, and it increases the chance of Dropbox throttling your traffic if you hit it too hard. That can still happen...

I would run the sync with the `-vv` debug output option and check, at least at the beginning, every few hours whether there are any throttle errors. If there are, you will have to lower tpslimit and/or checkers/transfers. In the end, a transfer without throttling will always be much faster. Throttling is the worst thing that can happen; it will bring your transfer speed to a crawl.
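One quick way to scan the log for throttling (the `too_many_requests` pattern is an assumption based on Dropbox's usual rate-limit response; adjust it to whatever error text you actually see in your log):

```shell
# Count lines in the rclone log that look like Dropbox throttle errors.
# Pattern is an assumption; check your log for the exact wording.
grep -ci "too_many_requests" /home/vault/.config/logsrclone/rclone_gdropbox.log
```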

How much space do you get in Dropbox?
