Very low speed in Dropbox

I am testing Rclone connected to Dropbox, but the upload speed is very slow. I would like a suggestion for the best cloud service to store large backups; I have a Dropbox account with 3TB, but I have had issues with upload time.

hello,
i have been using wasabi for backups; they offer fast uploads, and i have verizon fios gigabit internet.
I use veeam backup and replication for the backup files and rclone to upload the files.
right now, the east-us location is slow and they are creating a second east-us endpoint.
you should use the west endpoint.

you can try wasabi for free, and if you do rclone config, choose the s3 compatible option and select wasabi.

also, if you are using microsoft windows and you are uploading large files to the cloud, you might want to enable VSS.
please check out my wiki about it.

I haven't used dropbox so I don't know if it is more limited. Certainly there are plenty of good cloud providers who can max out most normal upload connections. I max my 162'ish Mbit on Google drive easily, and Wasabi as mentioned should be even less restrictive in rate limiting etc.

That said - I always advise that you understand what is limiting you before you try to fix it (by changing your cloud provider in this case). Otherwise you might very well spend a lot of time and money on not solving the right problem.

Be aware that cloud drives generally do behave differently and can perform poorly in some circumstances unless you compensate for it. A very typical example is that uploading many small files is much less efficient than a few large files of the same total size. Try testing this by uploading a single large file of, say, 100MB or more. See if your upload speed is poor for that too. If not, then you may be running into rate limiting rather than a bandwidth problem (which I don't really expect Dropbox to have, honestly). This can sometimes be worked around by using more concurrent connections to help smooth out the inefficiencies and operation latency of many small files.
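
If you want a quick way to run that test, copying one big file with live stats will show you the raw throughput (the file path and remote name here are just placeholders for your own):

rclone copy C:\temp\bigfile.bin dropbox:speedtest -P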

The second big one is often the upload chunk size. Google Drive has a quite low default of 8MB, which really limits the speed of even large files, as TCP spends so much time ramping and re-ramping up rather than staying at full speed.
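
For Google Drive that would be the --drive-chunk-size flag, for example (the remote name is a placeholder; the value must be a power of 2 and uses that much RAM per transfer):

rclone copy C:\backups\big.vbk gdrive:backups --drive-chunk-size 64M -P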

Dropbox does have a setting for this too, but the default is much better (48MB) so there is less to gain. You can set it as high as 149M if needed for a bit more performance, at the cost of some extra RAM per transfer on files of that size or larger.

--dropbox-chunk-size
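
For example (remote name is a placeholder):

rclone copy C:\backups\big.vbk dropbox:backups --dropbox-chunk-size 128M -P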

TLDR: I'd do some quick testing and share the specifics of your results with us (check your bandwidth graph) so we can see what the limiting factor actually is before making any big decisions.

Thanks for the advice, I did not know about Wasabi Cloud Storage.
I noticed that smaller files transfer more slowly than large files. I didn't know about Rclone; I only tested it yesterday after trying several backup programs that make cloud copies.
None of the backup software got a good upload rate.
I will test the chunk size settings.

i thought that dropbox was for sharing files, not backups
here is a snippet from my upload of a file to wasabi.

here is the log

2019/09/20 17:04:11 DEBUG : rclone: Version "v1.49.1" starting with parameters ["C:\data\rclone\scripts\arclone.exe" "sync" "v:\EN07\en07.veaamfull\EN07" "wasabiwest01:vserver03-en07.veaamfull\backup\" "--stats=0" "--progress" "--backup-dir=wasabiwest01:vserver03-en07.veaamfull\archive\20190920.170411\" "--log-level" "DEBUG" "--log-file=C:\data\rclone\logs\en07.veaamfull\20190920.170411\en07.veaamfull_20190920.170411_rclone.log"]
2019/09/20 17:28:01 DEBUG : EN072019-09-14T104415.vbk: MD5 = 892315144346cdfab149475df9e4aafb OK
2019/09/20 17:28:01 INFO : EN072019-09-14T104415.vbk: Copied (new)
2019/09/20 17:28:01 INFO : Waiting for deletions to finish

so 24 minutes to transfer 40.2GB, including rclone overhead.
so that is an average of roughly 1.7GB per minute.

Does Wasabi cloud have data recovery if you have problems with ransomware? The only advantage of Dropbox is that I have 180 days of data recovery history.
I will do a wasabi trial account to test performance.

yes, wasabi itself is an s3 compatible provider and, as such, has ransomware protection features such as immutable storage and versioning.

however, it is rclone itself that can offer you ransomware protection for any cloud provider.
if you use 'rclone sync' and the flag '--backup-dir', you should be good.

if you look at my log, the --backup-dir is set to the current date.time that rclone is executed.
"--backup-dir=wasabiwest01:vserver03-en07.veaamfull\archive\20190920.170411"

so let's say that just now, ransomware damaged some files, and rclone is run.
rclone would copy those files to the cloud, but with --backup-dir, the files already in the cloud are moved to the backup folder BEFORE rclone uploads the damaged files.

rclone sync and --backup-dir is the way to go.
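
as a rough sketch of how the timestamp can be built in a windows batch file (the remote, folders and log path here are just examples, not my real script):

:: minimal sketch: build a date.time stamp like 20190920.170411 and use it for --backup-dir
for /f %%i in ('powershell -NoProfile -Command "Get-Date -Format yyyyMMdd.HHmmss"') do set STAMP=%%i
rclone.exe sync "V:\EN07" "wasabiwest01:en07\backup" --backup-dir="wasabiwest01:en07\archive\%STAMP%" --log-level DEBUG --log-file="C:\data\rclone\logs\en07_%STAMP%_rclone.log"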

also, you mention that dropbox has only 180 days of data recovery history.
for me, that is not acceptable.
any s3 provider, such as amazon or wasabi, is not limited to just 180 days.
s3 storage has a versioning feature, you should look into that.


see
https://wasabi.com/blog/use-immutable-storage/
and
https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html

I created a wasabi account. Can you help me figure out how to create the best script to connect?

sure, but the script i use is written in python and is over 300 lines of code.

to use rclone you need to configure a new provider.
https://rclone.org/commands/rclone_config/
select '4' / Amazon S3 Compliant Storage Provider
then select '9', wasabi.

start with that and let me know where you get stuck.
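
for reference, the section that rclone config writes to rclone.conf ends up looking roughly like this (the remote name, keys and endpoint here are placeholders; i use the west endpoint):

[wasabiwest01]
type = s3
provider = Wasabi
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
endpoint = s3.us-west-1.wasabisys.com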

also, i know that rclone can be very flexible about flaky internet connections, but i know little about that.
i am spoiled with both verizon fios gigabit and cable internet.

perhaps someone can add their advice and experience here.....
planet earth to thestigma
calling thestigma
thestigma come in...


I have already configured the new provider and am testing the connection speed, but I do not know if the settings I am using to mount the drive are the best for the cloud provider.

rclone mount --log-file "C:\rclone\logs\rclone.log" --log-level INFO --allow-non-empty --allow-other --default-permissions --fuse-flag sync_read --tpslimit 10 --tpslimit-burst 10 --dir-cache-time=160h --buffer-size=128M --attr-timeout=1s --vfs-read-chunk-size=140M --vfs-read-chunk-size-limit=2G Wasabi: Z: --config "C:\Users\R.config\rclone\rclone.conf"

again, i am not an expert but:

  1. what endpoint are you using? do not use the default east-us endpoint, use 's3.us-west-1.wasabisys.com'. right now, the east endpoint is overloaded and wasabi is creating a second east endpoint (see the example after this list for switching an existing remote).
    https://wasabi-support.zendesk.com/hc/en-us/articles/360015106031-What-are-the-service-URLs-for-Wasabi-s-different-regions-

  2. if you are uploading files to the cloud, why are you using mount and --vfs flags?
    perhaps i am not understanding something, are you trying to download files or upload files?
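
for point 1, if the remote is already set up you can switch the endpoint without re-running the whole config wizard. a rough sketch, assuming your remote is named 'Wasabi' as in your mount command:

rclone config update Wasabi endpoint s3.us-west-1.wasabisys.com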

I set it to the east-us endpoint because I'm from South America and I believe the latency is lower than to west-us.
I found this script on the internet, because I understand very little of rclone.
I only want to use wasabi to store backups, just in case of failure I will download the content.

you will not be able to upload files quickly using the east-us endpoint. do not use the east endpoint! repeat, do not use the east endpoint!
https://status.wasabi.com/
"Due to recent rapid growth, we are currently experiencing unforeseen data throughput capacity issues in our us-east-1 data center"

you must use the west endpoint, as i do.

latency has nothing at all to do with uploading large files; forget about latency.

that script is terrible for you. for backup, forget about mount.

if you want to backup to cloud, use 'rclone copy' or 'rclone sync' and flag --backup-dir.

let me know

Thanks for the information and patience, I will look into rclone copy and rclone sync.
I will also change the server on wasabi.

check out https://wasabi.com/help/downloads/

and check out my wiki to enable VSS

i'm looking for information on how to script rclone sync; i'm reading through the links you posted.

https://rclone.org/commands/rclone_sync/

here is my rclone sync command

C:\data\rclone\scripts\rclone.exe sync "c:\data\rclone\source\one" wasabiwest01:en07-one\backup\ --stats=0 --progress --backup-dir=wasabiwest01:en07-one\archive\20190920.200530\ --log-level DEBUG --log-file=C:\data\rclone\logs\one\20190920.200530\one_20190920.200530_rclone.log

Tell us what you want to happen and we can provide you a script that does that.

Syncing with rclone is easy. The basic syntax is just this:

rclone sync C:\MyLocalFolder\ MyRemote:\SomeFolder\

(this would make the files in "Somefolder" identical to the ones in "MyLocalFolder" - including deleting extras that don't exist locally anymore)

Here are some options you can consider for performance:

--fast-list (default off)

This will not work through a mount, but in a script/command line it will let you do much more efficient listings. This is important for regular sync jobs, because with --fast-list it may take you 30-40 seconds to list your full archive. Without it you can expect at least several minutes (or more).

--transfers 20 (default 4)

This will be important for high performance on many small files (if you have a lot of those). NCW (the main rclone author) says Wasabi can handle even up to 32 transfers, so you can experiment a little with that number.

--s3-chunk-size 64M (default 5M)

This will greatly impact upload (only) performance on large files (i.e. it benefits anything above 5MB). WARNING! This can use up to this much memory for each transfer, so don't go nuts with it. 64MB x 20 transfers = 1280MB of memory at maximum. Do not run out of memory or rclone will just crash. On the other hand, if you have tons of RAM then you can go up to 128M, but beyond that there is not much benefit unless your bandwidth is huge.

--verbose (or just -v)
--progress (or just -P)

When you are using rclone manually or just testing (as opposed to a robot-script) you will want to use these so you actually get some feedback on what is happening with the transfer...

Putting it all together it would be:

rclone sync C:\MyLocalFolder\ Wasabi:\Somefolder\ --fast-list --transfers 20 --s3-chunk-size 64M -v -P

Just as one final thought: sync can be "dangerous" since it is allowed to delete files. You can use --dry-run to test what a command would do without actually performing it. Also remember that you have "rclone move" and "rclone copy" too. A copy is just a sync minus any deleting of "excess" files.
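
For example, to preview what a sync would do without changing anything:

rclone sync C:\MyLocalFolder\ Wasabi:\Somefolder\ --dry-run -v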