Suggestions for using rclone copy to move terabytes of small files to Dropbox

What is the problem you are having with rclone?

I have about 20TB of small files (each about 50kB) on two separate cloud servers that I'd like to copy to Dropbox. I have been using rclone copy, but I constantly hit "too many requests" errors and my upload speed is only about 100kB/s. Is there any suggestion on how to speed up the process? I know there's a --dropbox-batch-mode flag, but it doesn't seem to work with rclone copy.

Also, is there a way to copy files from two source directories to one destination directory, and to do so in parallel?

What is your rclone version (output from rclone version)

v1.55.1

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Windows 7, 64 bit

Which cloud storage system are you using? (eg Google Drive)

Dropbox

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone -vv --tpslimit 10 --tpslimit-burst 10  copy /source_folder/ dropbox:destination_folder/

The rclone config contents with secrets removed.

type = dropbox

client_id = u6v4v5ur2zsz4sz

token = {"access_token":"REDACTED","token_type":"bearer","refresh_token":"REDACTED","expiry":"2021-05-04T03:35:54.808239-04:00"}

A log from the command with the -vv flag

(this is the last section of the log)

2021/05/04 19:21:44 INFO  : 54065/970700/54065_011644_09E.png: Copied (new)
2021/05/04 19:21:44 NOTICE: too_many_write_operations/.: Too many requests or write operations. Trying again in 1 seconds.
2021/05/04 19:21:44 DEBUG : pacer: low level retry 1/1 (error too_many_write_operations/.)
2021/05/04 19:21:44 DEBUG : pacer: Rate limited, increasing sleep to 1s
2021/05/04 19:21:44 DEBUG : 54065/970700/54065_011634_09S.png: Received error: upload failed: too_many_write_operations/. - low level retry 1/10
2021/05/04 19:21:44 DEBUG : pacer: Reducing sleep to 750ms
2021/05/04 19:21:44 DEBUG : 54065/970700/54065_011644_09N.png: DropboxHash = e8e3af9a61ef62b110ad860ff24e1b13f2b1e700330985483c5752808cf1ec1b OK
2021/05/04 19:21:44 INFO  : 54065/970700/54065_011644_09N.png: Copied (new)
2021/05/04 19:21:45 DEBUG : pacer: Reducing sleep to 562.5ms
2021/05/04 19:21:45 DEBUG : 54065/970700/54065_011644_09S.png: DropboxHash = b13e07fae64e858123da3feeb2ba72fb58a32b3dde9554888334d956eef97a23 OK
2021/05/04 19:21:45 INFO  : 54065/970700/54065_011644_09S.png: Copied (new)
2021/05/04 19:21:45 DEBUG : pacer: Reducing sleep to 421.875ms
2021/05/04 19:21:45 DEBUG : 54065/970700/54065_011644_09W.png: DropboxHash = bfd60c638d3c09cf42d035c4e1056ae8b2d7e1ab8c99d4f0c85b272a2a12f5bc OK
2021/05/04 19:21:45 INFO  : 54065/970700/54065_011644_09W.png: Copied (new)
2021/05/04 19:21:46 DEBUG : pacer: Reducing sleep to 316.40625ms
2021/05/04 19:21:46 DEBUG : 54065/970700/54065_011634_09S.png: DropboxHash = ef06efd347da578551460589f9d54905a9e31e2d7206b79230e8764a1e805608 OK
2021/05/04 19:21:46 INFO  : 54065/970700/54065_011634_09S.png: Copied (new)
2021/05/04 19:21:46 DEBUG : pacer: Reducing sleep to 237.304687ms
2021/05/04 19:21:46 DEBUG : 54065/970700/54065_011645_09E.png: DropboxHash = bde9de231615883c0e2b87b3a1e5a5f34c7b4f12a5d9d6a7c68bb8e1258fd244 OK
2021/05/04 19:21:46 INFO  : 54065/970700/54065_011645_09E.png: Copied (new)
2021/05/04 19:21:47 DEBUG : pacer: Reducing sleep to 177.978515ms
2021/05/04 19:21:47 DEBUG : 54065/970700/54065_011645_09N.png: DropboxHash = 990011e9df4b700af8950d667e7e0f87ac7b5dcc7f6792138b5d50577fb3b987 OK
2021/05/04 19:21:47 INFO  : 54065/970700/54065_011645_09N.png: Copied (new)
2021/05/04 19:21:47 DEBUG : pacer: Reducing sleep to 133.483886ms
2021/05/04 19:21:47 DEBUG : 54065/970700/54065_011645_09S.png: DropboxHash = e2e868ba3209167d3dbd62190b6d46d376d6a5d4c7b86d6a4ec2a72432b1cb60 OK
2021/05/04 19:21:47 INFO  : 54065/970700/54065_011645_09S.png: Copied (new)
2021/05/04 19:21:48 DEBUG : pacer: Reducing sleep to 100.112914ms
2021/05/04 19:21:48 DEBUG : 54065/970700/54065_011645_09W.png: DropboxHash = fdc26ed8222bd9d27c7101660d73ad53eec8450987bceb31967bdd89e1a8fb7f OK
2021/05/04 19:21:48 INFO  : 54065/970700/54065_011645_09W.png: Copied (new)
2021/05/04 19:21:48 DEBUG : pacer: Reducing sleep to 75.084685ms
2021/05/04 19:21:48 DEBUG : 54065/970700/54065_011646_09N.png: DropboxHash = 60e33f627c9e069133368f20d0d4157b44caa71132d82056731ae755499e4f55 OK
2021/05/04 19:21:48 INFO  : 54065/970700/54065_011646_09N.png: Copied (new)
2021/05/04 19:21:49 DEBUG : pacer: Reducing sleep to 56.313513ms
2021/05/04 19:21:49 DEBUG : 54065/970700/54065_011646_09S.png: DropboxHash = 5ae4c00821da92ad448a9e63fd303b7e3549acea3485a8b387f9e899aa45adfe OK
2021/05/04 19:21:49 INFO  : 54065/970700/54065_011646_09S.png: Copied (new)
2021/05/04 19:21:50 DEBUG : pacer: Reducing sleep to 42.235134ms
2021/05/04 19:21:50 DEBUG : 54065/970700/54065_011646_09W.png: DropboxHash = ec3b857055f79953f12b817d36654b2bdd66a3cb119c2665903965587d746e5f OK
2021/05/04 19:21:50 INFO  : 54065/970700/54065_011646_09W.png: Copied (new)
2021/05/04 19:21:50 DEBUG : pacer: Reducing sleep to 31.67635ms
2021/05/04 19:21:50 DEBUG : 54065/970700/54065_011646_09E.png: DropboxHash = 7cc60efb8d15d70bc47ee4326d72e0488b8284c9ee53c795a755b15d3947357e OK
2021/05/04 19:21:50 INFO  : 54065/970700/54065_011646_09E.png: Copied (new)
2021/05/04 19:21:51 DEBUG : pacer: Reducing sleep to 23.757262ms
2021/05/04 19:21:51 DEBUG : 54065/970700/54065_011647_09N.png: DropboxHash = dd97ebebb69ee03c2ae9e2f3c16101adb4a6192456d5ca178b9368ca087d3834 OK
2021/05/04 19:21:51 INFO  : 54065/970700/54065_011647_09N.png: Copied (new)
2021/05/04 19:21:51 NOTICE: too_many_write_operations/...: Too many requests or write operations. Trying again in 1 seconds.
2021/05/04 19:21:51 DEBUG : pacer: low level retry 1/1 (error too_many_write_operations/...)
2021/05/04 19:21:51 DEBUG : pacer: Rate limited, increasing sleep to 1s
2021/05/04 19:21:51 DEBUG : 54065/970700/54065_011647_09E.png: Received error: upload failed: too_many_write_operations/... - low level retry 1/10
2021/05/04 19:21:51 DEBUG : pacer: Reducing sleep to 750ms
2021/05/04 19:21:51 DEBUG : 54065/970700/54065_011647_09W.png: DropboxHash = 22fe803e5fd403b1bcd3c65995bce4adccee87426aaf21a5b51e451e726b1e86 OK
2021/05/04 19:21:51 INFO  : 54065/970700/54065_011647_09W.png: Copied (new)
2021/05/04 19:21:52 DEBUG : pacer: Reducing sleep to 562.5ms
2021/05/04 19:21:52 DEBUG : 54065/970700/54065_011647_09S.png: DropboxHash = da644741c9f4e55fab1dd5343cc0a7e8ab466764bdf163d60625cb57c3dfa8d6 OK
2021/05/04 19:21:52 INFO  : 54065/970700/54065_011647_09S.png: Copied (new)
2021/05/04 19:21:52 DEBUG : pacer: Reducing sleep to 421.875ms
2021/05/04 19:21:52 DEBUG : 54065/970700/54065_011647_09E.png: DropboxHash = 42a7712772a807a7afd3bb6951248bd55f5ea29c0925f848dccefa1b08c7a8e9 OK
2021/05/04 19:21:52 INFO  : 54065/970700/54065_011647_09E.png: Copied (new)
2021/05/04 19:21:52 INFO  :
Transferred:      722.117M / 722.117 MBytes, 100%, 103.027 kBytes/s, ETA 0s
Checks:               891 / 891, 100%
Transferred:        12009 / 12009, 100%
Elapsed time:   1h59m39.0s

hello and welcome to the forum,

it is kind of hard to offer good advice when so much of the template has not been filled out.
i did not see the exact command, the config file, or the top of the debug log.

not sure it will help, as dropbox is throttling rclone, but these are commonly used flags to speed up the process:
--transfers and --checkers

you can run as many instances of rclone as you want at the same time, and each instance can have a different source and dest folder.
given the throttling, though, not sure there is much logic in doing that.
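as a runnable sketch of that pattern (the folder names are made up, and echo stands in for the real rclone binary so the snippet runs anywhere), two instances can be started in the background and waited on like this:

```shell
# placeholder for "rclone copy"; swap echo for the real binary.
run_copy() {
  echo "rclone copy $1 dropbox:$2 --transfers 4 --checkers 4"
}

# start one instance per source folder, both writing to the same dest folder.
run_copy /source_folder_a destination_folder &
pid_a=$!
run_copy /source_folder_b destination_folder &
pid_b=$!

# block until both background instances have exited.
wait "$pid_a"
wait "$pid_b"
echo "both instances finished"
```

keep in mind each instance has its own pacer, so dropbox may throttle the pair harder than a single instance.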

To use the --dropbox-batch-mode flag you need the beta from the fix-dropbox-batch-sync branch which you can find here:
https://beta.rclone.org/branch/fix-dropbox-batch-sync/

Thanks everyone for the suggestions!

I agree with @arajin. Dropbox seems to have been limiting uploads via the API a bit more aggressively for the past month or so, so find a version of rclone that has the --dropbox-batch-mode flag available. I have made very good use of the following options for upload:

--dropbox-batch-mode=async --dropbox-batch-size 1000 --dropbox-batch-timeout 10s --checkers 28 --transfers 28 --tpslimit 12

With your very large number of smaller files, I would probably reduce --checkers and --transfers to something like 4, unless you have a huge amount of memory (32GB or more, preferably much more) in the server you use, as we have observed rclone crashing when we try to be a bit too aggressive. To be honest, though, the async upload is much faster, so even with lower values for --transfers and --checkers you will see a drastic improvement over what you have today.
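For reference, a full command line assembled from those flags might look like the following. The folder names are placeholders, and the batch-mode flags need a build of rclone that includes them (at the time of this thread, the beta branch linked above):

```
rclone copy /source_folder/ dropbox:destination_folder/ -vv --dropbox-batch-mode async --dropbox-batch-size 1000 --dropbox-batch-timeout 10s --transfers 4 --checkers 4 --tpslimit 12
```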

Thanks a lot! I will update rclone to the beta version and try this out.

On another note, I found that if I tar the folder first, the transfer speed is much faster.

I use the command

tar -zcvf /source/directory/a/b/folder | rclone rcat dropbox:folder.tar.gz -vv --s3-chunk-size 100M 

However, I do not want the entire parent directory path stored inside the tar archive. Does anyone know how to avoid that?

I see there's a way to change directory when tarring, but I don't know how that works in combination with rclone.

You are just uploading one file then, which can be streamed nicely.

What you want is this I think

tar -zcvf . -C /source/directory/a/b/folder | rclone rcat ...

Thanks @ncw! When I ran that command, I got an empty tar archive.

I was able to copy the folder if I use

cd /source/directory/a/b/folder
tar -zcvf ./| rclone rcat...

However, the resulting archive ends up with a "." subfolder, and I am trying to work out how to remove it.

I think you have to put the -C at the start; that works when I try it

tar -C /source/directory/a/b/folder -zcvf . | rclone rcat ...

If you don't have any files starting with . then this is the easiest solution

cd /source/directory/a/b/folder
tar -zcvf * | rclone rcat...
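As a quick runnable check of what -C does (with made-up directory and file names), the parent path stays out of the stored member names when tar changes into the folder first:

```shell
# build a throwaway tree that mimics /source/directory/a/b/folder
workdir=$(mktemp -d)
mkdir -p "$workdir/a/b/folder"
echo "hello" > "$workdir/a/b/folder/file1.txt"

# -C changes into the folder before archiving, so members are stored
# relative to it (as "./file1.txt") rather than under the full parent path
tar -C "$workdir/a/b/folder" -czf "$workdir/out.tar.gz" .

# list the member names: no "a/b/folder" prefix appears
names=$(tar -tzf "$workdir/out.tar.gz")
echo "$names"
```

This sketch writes to a file so it is easy to inspect; when piping into rclone rcat you would send the archive to stdout with `-f -` (i.e. `tar -czf - ...`) instead.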

Thanks @ncw! I got it to work with

cd /source/directory/a/b/folder 
tar -zcvf - * | rclone rcat ...

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.