Low upload rate when sending files to Dropbox

What is the problem you are having with rclone?

When uploading files to Dropbox, the upload rate does not exceed 3.0 MBytes/s. Running the same test against Oracle (same server, same command, only the destination changed), the rate exceeds 20 MBytes/s.
There are many small files, and few large files above 5 GB.

Run the command 'rclone version' and share the full output of the command.

rclone v1.54.1

  • os/arch: linux/amd64
  • go version: go1.15.8

Which cloud storage system are you using? (eg Google Drive)

Dropbox

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy /directory0/ UNISISRS-BKP-DROPBOX:directory1/directory2/ --auto-confirm --progress --exclude Thumbs*

The rclone config contents with secrets removed.

[UNISISRS-BKP-DROPBOX]
type = dropbox
client_id = xxxxxxx
token = {"access_token":"xxxxxxx","token_type":"bearer","expiry":"0001-01-01T00:00:00Z"}

A log from the command with the -vv flag

2022/01/24 17:25:35 DEBUG : rclone: Version "v1.54.1" starting with parameters ["rclone" "-vv" "copy" "/directory0/" "UNISISRS-BKP-DROPBOX:directory1/directory2/" "--auto-confirm" "--progress" "--exclude" "Thumbs*"]
2022/01/24 17:25:35 DEBUG : Creating backend with remote "/directory0/"
2022/01/24 17:25:35 DEBUG : Using config file from "/xxx/.config/rclone/rclone.conf"
2022/01/24 17:25:35 DEBUG : Creating backend with remote "UNISISRS-BKP-DROPBOX:directory1/directory2/"
2022/01/24 17:25:36 DEBUG : fs cache: renaming cache item "UNISISRS-BKP-DROPBOX:directory1/directory2/" to be canonical "UNISISRS-BKP-DROPBOX:directory1/directory2"
2022-01-24 17:25:36 DEBUG : tsfhn.zr: Size and modification time the same (differ by 78.855773ms, within tolerance 1s)
2022-01-24 17:25:36 DEBUG : tsfhn.zr: Unchanged skipping
2022-01-24 17:25:36 NOTICE: Local file system at /directory0/: Replacing invalid UTF-8 characters in "compras/~$Mapa de Pre\xe7os - Modelo grande.xlsm"
2022-01-24 17:25:36 NOTICE: Local file system at /directory0/: Replacing invalid UTF-8 characters in "compras/1 - Mapas de pre\xe7os 2021"
.
.
.
Transferred:        3.982M / 5.313 GBytes, 0%, 1.140 MBytes/s, ETA 1h19m30s
Errors:                 5 (retrying may help)
Checks:               614 / 614, 100%
Transferred:            2 / 6072, 0%
Elapsed time:         4.7s
Transferring:
 * xxxxxxxxx

hello again,

thanks for answering all the questions.

as I mentioned in that other post, you need to update to the latest stable v1.57.0 and test again.
there have been significant upgrades to the dropbox backend.

New version was installed, but the error persists.

root@xxx[xxxxx]:~# rclone version
rclone v1.57.0
- os/version: slackware 14.1 (64 bit)
- os/kernel: 3.10.17 (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.17.2
- go/linking: static
- go/tags: none

If you want to share a larger log covering more than 2 seconds, I'll take a look, but there's nothing to figure out from a 2-second log.

A log showing what these errors are would be very helpful.

Log of file uploads and errors

logfile.log (1.5 MB)

You have lots of bad paths/disallowed names in there, which slows things down.

It looks like you have a lot of small files. You'd benefit a lot from increasing transfers to a larger number.

The biggest hit to your speed is the pacer messages, as files seem to be changing while they upload:

2022/01/25 13:47:52 DEBUG : pacer: low level retry 1/10 (error Post "https://content.dropboxapi.com/2/files/upload_session/append_v2": can't copy - source file is being updated (size changed from 60894 to 60979))
2022/01/25 13:47:52 DEBUG : pacer: Rate limited, increasing sleep to 20ms
2022/01/25 13:47:52 DEBUG : pacer: Reducing sleep to 15ms
2022/01/25 13:47:52 DEBUG : pacer: low level retry 2/10 (error Post "https://content.dropboxapi.com/2/files/upload_session/append_v2": can't copy - source file is being updated (size changed from 60894 to 60979))
2022/01/25 13:47:52 DEBUG : pacer: Rate limited, increasing sleep to 30ms
2022/01/25 13:47:52 DEBUG : pacer: low level retry 3/10 (error Post "https://content.dropboxapi.com/2/files/upload_session/append_v2": can't copy - source file is being updated (size changed from 60894 to 60979))
2022/01/25 13:47:52 DEBUG : pacer: Rate limited, increasing sleep to 60ms
2022/01/25 13:47:53 DEBUG : pacer: low level retry 4/10 (error Post "https://content.dropboxapi.com/2/files/upload_session/append_v2": can't copy - source file is being updated (size changed from 60894 to 60979))
2022/01/25 13:47:53 DEBUG : pacer: Rate limited, increasing sleep to 120ms
2022/01/25 13:47:53 DEBUG : pacer: low level retry 5/10 (error Post "https://content.dropboxapi.com/2/files/upload_session/append_v2": can't copy - source file is being updated (size changed from 60894 to 60979))
2022/01/25 13:47:53 DEBUG : pacer: Rate limited, increasing sleep to 240ms
2022/01/25 13:47:54 DEBUG : pacer: Reducing sleep to 180ms
2022/01/25 13:47:54 DEBUG : pacer: low level retry 6/10 (error Post "https://content.dropboxapi.com/2/files/upload_session/append_v2": can't copy - source file is being updated (size changed from 60894 to 60979))
2022/01/25 13:47:54 DEBUG : pacer: Rate limited, increasing sleep to 360ms
2022/01/25 13:47:54 DEBUG : pacer: Reducing sleep to 270ms
2022/01/25 13:47:54 DEBUG : pacer: low level retry 7/10 (error Post "https://content.dropboxapi.com/2/files/upload_session/append_v2": can't copy - source file is being updated (size changed from 60894 to 60979))
2022/01/25 13:47:54 DEBUG : pacer: Rate limited, increasing sleep to 540ms
2022/01/25 13:47:54 DEBUG : pacer: Reducing sleep to 405ms
2022/01/25 13:47:54 DEBUG : pacer: Reducing sleep to 303.75ms
2022/01/25 13:47:54 DEBUG : pacer: low level retry 8/10 (error Post "https://content.dropboxapi.com/2/files/upload_session/append_v2": can't copy - source file is being updated (size changed from 60894 to 60979))
2022/01/25 13:47:54 DEBUG : pacer: Rate limited, increasing sleep to 607.5ms
2022/01/25 13:47:55 DEBUG : pacer: Reducing sleep to 455.625ms
2022/01/25 13:47:56 DEBUG : pacer: Reducing sleep to 341.71875ms
2022/01/25 13:47:56 DEBUG : pacer: low level retry 9/10 (error Post "https://content.dropboxapi.com/2/files/upload_session/append_v2": can't copy - source file is being updated (size changed from 60894 to 60979))
2022/01/25 13:47:56 DEBUG : pacer: Rate limited, increasing sleep to 683.4375ms
2022/01/25 13:47:57 DEBUG : pacer: Reducing sleep to 512.578125ms
2022/01/25 13:47:58 DEBUG : pacer: low level retry 10/10 (error Post "https://content.dropboxapi.com/2/files/upload_session/append_v2": can't copy - source file is being updated (size changed from 60894 to 60979))
2022/01/25 13:47:58 DEBUG : pacer: Rate limited, increasing sleep to 1.02515625s
2022/01/25 13:47:58 DEBUG : pacer: Reducing sleep to 768.867187ms
2022/01/25 13:47:59 DEBUG : pacer: Reducing sleep to 576.65039ms
2022/01/25 13:47:59 DEBUG : pacer: Reducing sleep to 432.487792ms
2022/01/25 13:48:00 DEBUG : pacer: Reducing sleep to 324.365844ms
2022/01/25 13:48:01 DEBUG : pacer: Reducing sleep to 243.274383ms
2022/01/25 13:48:01 DEBUG : pacer: Reducing sleep to 182.455787ms

Dropbox has rate limiting on its API, and 12 seems to be a good sweet spot.

I use:

--tpslimit 12 --tpslimit-burst 12

and I use a specific app registration solely for upload, as API rate limiting is per registered app, not per user.

I'd make those changes, re-test, and share a new log. You'd have to decide what to do about the path issues, as repeated errors make things slower too, so you may want to exclude them.

Log below with the new parameters.

logfile.log (1.9 MB)

Questions:
How would I increase transfers to a higher number?

These files are on a Samba share on Linux, and people might be accessing them while I copy. Could this be the biggest problem with the upload?

Could these path problems be due to files with characters like "~$", or to the character conversions rclone applies during transfer? Is there a way to disable this conversion?

This flag:

      --transfers int                        Number of file transfers to run in parallel (default 4)

Each failing file might be unique, as I didn't check every one. Dropbox has file names that aren't allowed.

2022/01/25 13:45:18 ERROR : lare/Downloads/desktop.ini: Failed to copy: file name "desktop.ini" is disallowed - not uploading

This is a bad name.

2022/01/25 13:45:25 ERROR : lare/Downloads/~$exo-a-it-33 (1).doc: Failed to copy: upload failed: batch upload failed: path/disallowed_name

You'd have to identify why each one fails.
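
One way to triage these before uploading is to scan the source tree for names Dropbox rejects. A minimal shell sketch (the /tmp/scan_demo tree and its file names are made up for illustration; the pattern list covers only the names discussed in this thread):

```shell
# Build a small sample tree containing names Dropbox disallows (hypothetical paths).
mkdir -p /tmp/scan_demo/docs
touch /tmp/scan_demo/docs/desktop.ini \
      /tmp/scan_demo/docs/Thumbs.db \
      '/tmp/scan_demo/docs/~$report.doc' \
      /tmp/scan_demo/docs/ok.txt

# List every file matching a disallowed pattern before running the copy.
find /tmp/scan_demo -type f \
    \( -name 'desktop.ini' -o -name 'Thumbs.db' -o -name '~$*' \) | sort
```

Running the scan first lets you decide whether to delete, rename, or exclude those files instead of paying for repeated failed uploads.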

Dropbox (rclone.org)

Apparently so: the low upload rate is related to files like Thumbs.db, desktop.ini, and temporary files created by applications (such as Microsoft Word, Excel, or PowerPoint) whose names start with ~$.

The problem can also be related to the large number of small files. That's why I added the flags described below.

--transfers 20 --exclude "~$*" --exclude "desktop.ini" --exclude "Thumbs.*"

I don't know if this would be the correct way to use these parameters.
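
One subtlety worth checking: in bash, `$*` is still expanded inside double quotes, so `--exclude "~$*"` can collapse to `--exclude ~` before rclone ever sees the pattern. Single quotes keep it literal. A quick demonstration:

```shell
set --                 # clear positional parameters so $* expands to nothing
double_quoted="~$*"    # double quotes: shell expands $*, leaving just "~"
single_quoted='~$*'    # single quotes: the pattern stays literal
printf '%s\n' "$double_quoted" "$single_quoted"
```

So the safer spelling of that exclude is `--exclude '~$*'`.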

Although the upload rate fluctuates, I managed to increase the sending speed, but at one point the transfer rate dropped to 0 B/s.

If you want feedback, the only way I can help is with log files, as I am not sitting next to you and can't see what you are doing.

Command:

rclone -vv copy /homenas3/ UNISISRS-BKP-DROPBOX:NAS04/homenas3/ --transfers 20 --auto-confirm --progress --exclude "Thumbs.*" --exclude "~$*" --exclude "desktop.ini" --tpslimit 12 --tpslimit-burst 12

Log file:

logfile.log (4.0 MB)

You are running a copy command.
The log is filled with files already there so they won't be transferred.
That impacts transfer speed since it isn't uploading and it's just checking the destination.

What are you expecting to occur that isn't?

That's also a snippet and not a complete log.

How to send log file larger than 4MB?

Anyway you want.

Pastebin
Dropbox link
Google Drive Link
OneDrive link

etc.

Command

rclone -vv copy /homenas3/ UNISISRS-BKP-DROPBOX:NAS04/homenas3/ --transfers 20 --auto-confirm --progress --exclude "Thumbs.*" --exclude "~$*" --exclude "desktop.ini" --tpslimit 12 --tpslimit-burst 12

Complete log file: logfile

So in that logfile, you have:

Transferred:      287.132 GiB / 287.132 GiB, 100%, 7.019 KiB/s, ETA 0s
Errors:              2788 (retrying may help)
Checks:            301092 / 301092, 100%
Transferred:       133501 / 133501, 100%
Elapsed time:   13h7m43.9s

Lots of checking and lots of small files.

You have roughly 4,800 bad path names going on in there:

grep " path/disallowed_name" logfile.log  | wc -l
    4828
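
To see which files those are, not just how many, the same log can be filtered down to the failing paths. A sketch assuming the ERROR line format shown earlier in this thread (the sample log content here is made up):

```shell
# Two sample ERROR lines in the format shown earlier (hypothetical content).
cat > /tmp/sample.log <<'EOF'
2022/01/25 13:45:25 ERROR : docs/~$a.doc: Failed to copy: upload failed: batch upload failed: path/disallowed_name
2022/01/25 13:45:30 ERROR : docs/~$b.doc: Failed to copy: upload failed: batch upload failed: path/disallowed_name
2022/01/25 13:45:31 INFO  : docs/ok.txt: Copied (new)
EOF

# Keep only the disallowed-name lines and strip everything but the path.
grep 'path/disallowed_name' /tmp/sample.log |
    sed 's/.* ERROR : \(.*\): Failed to copy.*/\1/' | sort -u
```

Feeding the resulting list into an exclude file is one way to stop paying for those errors on every run.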

If you are not changing much per day, you can increase checkers to something bigger as well.

      --checkers int                         Number of checkers to run in parallel (default 8)

The files change a lot every day; is there a way to just send without checking?

So you want to make it take longer and reupload files that are already there and the same? Sure, you can use this flag:

      --no-check-dest                        Don't check the destination, copy regardless

That makes it copy all the time.

Summary of things that may affect the upload, the time spent, and the transfer rate:

  • Files not allowed on Dropbox;
  • Several small files;
  • Files updating at the time of transfer;
  • Bad path names;
  • Using the --no-check-dest flag.

Just one more question: can I get it to ignore files that are being updated?