What is the problem you are having with rclone?
My transfer speed to the team drive is really low. I tested directly from my PC, tethered to my phone with no bandwidth restriction (20 Mb/s down and 15 Mb/s up).
Unfortunately I do not reach 700 KB/s when I transfer the files.
What am I doing wrong?
Thank you for your help.
What is your rclone version (output from rclone version)?
rclone v1.50.1
Which OS you are using and how many bits (eg Windows 7, 64 bit)
Windows 7, 64 bit
Which cloud storage system are you using? (eg Google Drive)
The command you were trying to run (eg rclone copy /tmp remote:tmp)
rclone copy "C:\CSV" REMOTE_IDF_TEST:"\TestDebit2" --log-file Testdebit2 --log-level INFO -P --transfers 8 --bwlimit 3000
A log from the command with the -vv flag (eg output from rclone -vv copy /tmp remote:tmp)
Can you share the log file with debug?
Based on the log - it looks like this is mostly a lot of smaller files, correct?
Gdrive does not have great performance on small files because it can only start about 2 new transfers per second. On slightly larger files this is no problem and your bandwidth will be the limiter, but very small files transfer so fast that the sheer number of files and the 2/sec limit effectively decide the speed.
Try transferring a larger file, like 50-100MB, and observe the speed with --progress.
This will be a decent benchmark for what you can achieve speed-wise when not limited by the sheer number of files per second.
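To put rough numbers on that (the average file size below is a hypothetical figure for illustration, not taken from your log):

```shell
# Back-of-envelope: with tiny files, throughput is capped by the ~2 new
# transfers/sec limit rather than by bandwidth. Assuming an average file
# size of 350 KB (hypothetical):
avg_kb=350
transfers_per_sec=2
echo "$((avg_kb * transfers_per_sec)) KB/s"   # -> 700 KB/s, about what you are seeing
```

So a speed around 700 KB/s is roughly what you'd expect from the per-file limit alone, regardless of your connection.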
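For example (remote and folder names taken from your earlier command; the test file name is just a placeholder):

```shell
:: Create a ~100 MB dummy file on Windows, then upload it alone and
:: watch the transfer rate with --progress (-P):
fsutil file createnew C:\CSV\bigtest.bin 104857600
rclone copy "C:\CSV\bigtest.bin" REMOTE_IDF_TEST:"\TestDebit2" -P
```

This is just a command sketch, not something to run blindly - delete the dummy file from the remote afterwards.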
Then you can also add --drive-chunk-size 64M (or 128M if you have lots of RAM to play with).
This will drastically improve the bandwidth utilization in transfers of larger files (above 8MB as that is the default chunk size).
Feel free to re-run the same test again to see the difference and it should be quite noticeable.
Unfortunately it will do nothing to help smaller files.
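For reference, here is your earlier command with the chunk-size flag added (same remote, paths, and flags as before - adjust to taste):

```shell
:: Same transfer as before, but uploading in 64 MB chunks instead of the
:: default 8 MB, which helps bandwidth utilization on larger files:
rclone copy "C:\CSV" REMOTE_IDF_TEST:"\TestDebit2" --log-file Testdebit2 --log-level INFO -P --transfers 8 --bwlimit 3000 --drive-chunk-size 64M
```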
Unfortunately there is very little you can do in terms of rclone settings to improve effective speed on very many / very small files. This is a backend limitation on Google (and many other cloud servers, outside of certain high-performance ones).
The only real "fix" for this problem is to consider archiving collections of small files together so they become one larger file, which can then be transferred much more efficiently. This is well worth it when "archiving" data for long-term storage, but it obviously makes it a little more hassle to retrieve often-used files. At some point down the road I expect a backend remote will come along that can handle grouping and archiving of small files transparently, but that sort of functionality is still at the idea stage.
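As a rough sketch of the archiving idea (folder and file names here are made up; on Windows you could use 7-Zip or the built-in zip support instead of tar):

```shell
# Bundle a folder full of small files into a single archive so the upload
# becomes one large transfer instead of hundreds of tiny ones.
mkdir -p small_files
for i in 1 2 3; do echo "sample data $i" > "small_files/doc$i.csv"; done

# One compressed archive instead of many small files:
tar -czf small_files.tar.gz small_files
ls -lh small_files.tar.gz

# Then upload just the archive (remote name from the command earlier):
# rclone copy small_files.tar.gz REMOTE_IDF_TEST:"\TestDebit2" -P
```

The trade-off is exactly as described above: great for long-term storage, annoying if you need to pull out individual files often.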
Well, you get what you pay for.
Gdrive has "unlimited" storage and no usage charges for transactions, egress, etc. - but you have to deal with some moderate limits. Wasabi is a hot-cloud with far better performance and basically no limits, but you pay for everything that is stored and every transaction on that data. Which is better really depends on what you need and your budget.
First, thank you for your quick answer.
The log I sent is a test sample of what kind of file I need to transfer.
You're right, it's mostly MS Office files, from a few KB to 1 or 2 MB.
That is weird; I already did this kind of transfer a few months ago and could reach 2.5 MB/s.
And I can't archive the files.
Well, as long as there is at least one big file mixed in with the current transfers, that will help a lot - so on mixed big/small files the speed can vary a lot depending on what is transferring at the time.
You should keep an eye on this project, which is a remote that will transparently archive files:
Not quite done yet, but it's still in progress, as I understand.
Unfortunately this does not yet merge small files into bigger ones. But once it is done, adding that function should be a much smaller job, so I think something like that will come eventually. Maybe I'll have to pick up Go and get my hands dirty. It's certainly something I have wanted for a while myself, since this is really the #1 performance-related problem on Gdrive and several other backends - and a system like this could solve it very well.
Please, asdffdsa, that is not the subject; can we stop this discussion, please?
Yeah, no, you are right. Wasabi is one of the few that has free transactions and egress on their unlimited pricing model - I think I was thinking of Backblaze. Wasabi seems to be fairly unique in this when it comes to "hot-clouds"; usually you pay for both egress and transactions. In that sense I like Wasabi a lot, since you only have to think about the storage size and not how you use the data.
What I meant by Gdrive and storage was that on an unlimited drive you don't pay more or less depending on how much you store - it's a flat price. On Wasabi (and pretty much every high-performance cloud I have seen) you pay more if you store more.
EDIT: Sorry, not intending to derail the thread. asdffdsa and I have a bad habit of doing this.
@gorak, again, welcome to the forum, but sorry, this is an open forum, and thestigma and I often discuss such things.
If you have a problem with that, you are welcome to flag my post and the moderator can decide.
And I was trying to be helpful, offering you another option for cloud storage, since you have reached the limits of Gdrive.
I have muted this topic, so I will not be notified about new posts.
I will not post again here; good luck to you.
Thank you for offering me another option; I didn't mention that I need to use Google.
Don't worry, there's no problem, no need to flag your post; I'm a noob on this forum, so I didn't know you have such conversations.
Thank you, I'm going to keep an eye on this project.
It's really annoying to have such problems with small files.
I've just run a test: same total size, fewer files (8 MB per file), and I got 2.8 MB/s.
This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.