Rclone sync connections


#1

How many connections does rclone sync make when running?
Is there any way to increase that?


#2

What’s the problem you are trying to solve?


#3

Increase sync speed to Google Cloud.


#4

What’s the command you are running? Do you have any logs from the command?


#5

rclone sync XXX gcloud:BUCKET/DESTINATION/XXX --config rclone.conf -v


#6

What’s the speed you are seeing? What are you expecting? Are there any logs? The more information you provide, the better the process goes :slight_smile:


#7

Seeing 2.058 MBytes/s, expecting 10x more.

What kind of log?

Transferred: 283.884G / 283.884 GBytes, 100%, 2.058 MBytes/s, ETA 0s
Errors: 0
Checks: 1675059 / 1675059, 100%
Transferred: 1675060 / 1675060, 100%
Elapsed time: 39h14m9.4s


#8

You can run rclone with logging:

--log-level INFO --log-file /home/felix/logs/rclone.log

or even a little higher to see what’s going on.

You can use --fast-list to help speed it up a bit, but it looks like you are moving a lot of little files, so that’s just not efficient.
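For example, the command from post #5 with the logging flags and --fast-list added might look like this (paths and log location are just the placeholders from earlier in the thread):

```shell
# Same sync as before, but writing an INFO-level log to a file
# and using --fast-list to reduce listing API calls.
rclone sync XXX gcloud:BUCKET/DESTINATION/XXX \
  --config rclone.conf \
  --fast-list \
  --log-level INFO \
  --log-file /home/felix/logs/rclone.log
```

Note that --fast-list trades memory for fewer listing calls, so it helps most on remotes with many objects.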

Are you using your own API key?


#9

Yes, a lot of little files.

Are you using your own API key?
Sorry, I didn’t understand.

Isn’t there a way to increase connections/threads?


#10

You can’t really speed up a lot of little files; that’s just the nature of cloud storage.

If you aren’t using your own API key, you are probably getting errors, but you haven’t included any logs so I can only guess.

Make a key using this:

https://rclone.org/drive/#making-your-own-client-id

Google has a limit on API calls per second and you can’t get around that, so you really can’t do much other than using your own key and sharing the logs to see what’s going on.


#11

I’m using Google Cloud Storage and not Google Drive.


#12

Increase --checkers and --transfers; that should speed things up a lot.
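rclone’s defaults are --transfers 4 and --checkers 8, so raising both adds parallelism, which matters most with lots of small files. Applied to the command from post #5 (the values here are illustrative, not tuned):

```shell
# More parallel checks and uploads; defaults are 8 and 4 respectively.
rclone sync XXX gcloud:BUCKET/DESTINATION/XXX \
  --config rclone.conf \
  --checkers 32 \
  --transfers 32 \
  -v
```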


#13

Not that much, as it has limits on it as well, unfortunately.

https://cloud.google.com/storage/quotas


#14

Looks quite generous though, so I would have thought winding --transfers up to 64 or 128 would work just fine.

  • There is no limit to writes across multiple objects. Buckets initially support roughly 1000 writes per second and then scale as needed.
  • There is no limit to reads of an object. Buckets initially support roughly 5000 reads per second and then scale as needed.
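With buckets initially supporting roughly 1000 object writes per second, 64 parallel transfers stays well inside the quota. A sketch of post #5’s command wound up as suggested, combined with the earlier flags (all values illustrative):

```shell
# 64 parallel uploads is well under the ~1000 writes/sec a GCS bucket
# initially supports; --fast-list and logging as suggested earlier.
rclone sync XXX gcloud:BUCKET/DESTINATION/XXX \
  --config rclone.conf \
  --transfers 64 \
  --checkers 128 \
  --fast-list \
  --log-level INFO \
  --log-file rclone.log
```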