Optimize speed for Google Drive

Something seems wrong: why does sending a 1k file take about 1 second, and sending a 10M file also take about 1 second?

The copy doesn't behave like scp, for example. I have been sending 60G to the drive for more than 15 hours and only 14G has gone up, because it contains millions of small files.

Any solution?

Google only allows about 2-3 files per second to be created, so small files are just slow. There isn't any way around that other than zipping them up beforehand.
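As a sketch of the zip-first approach (the directory name and the remote name `gdrive:` here are hypothetical), you bundle the small files into one archive locally and upload a single large object instead of millions of small ones:

```shell
# Bundle a directory of small files into one compressed archive.
mkdir -p demo && touch demo/a.txt demo/b.txt demo/c.txt
tar -czf demo.tar.gz demo

# Then upload the single archive (one object creation instead of thousands):
#   rclone copy demo.tar.gz gdrive:backups/
ls -lh demo.tar.gz
```

The trade-off is that you lose per-file access on the remote until you download and unpack the archive again.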


Could some other transfer protocol be used, something like ssh, where a tunnel is created to send the files in one batch?

I am not sure what you are asking.

Using rclone and sending lots of small files to Google is slow as Google only allows about 2-3 files to be created per second.

I switched to Backblaze (b2). It seems faster, but I'm still finding it slow to copy the files. Is there any optimization to be done?

Description=rClone Mount RondonEmbyMain

ExecStart=/usr/bin/rclone mount \
   --config=/home/tales/.config/rclone/rclone.conf \
   --allow-other \
   --log-file /home/tales/logs/rclone.log \
   -vP \
   NC: /home/tales/b2
ExecStop=/usr/bin/fusermount -u /home/tales/b2


I saw on the b2 website that it has no 2-3 files-per-second limit like Google Drive. B2 is mounted, as you can see. Does it have any flag to speed up the copy?

Yes, it's all written up on the documentation page:
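For reference, a hedged sketch of copy-side flags that often help with b2 (the source path is a placeholder and the remote name `NC:` is taken from the unit file above; these values are starting points, not tuned numbers):

```shell
rclone copy /home/tales/data NC:backup \
   --transfers 16 \
   --b2-chunk-size 96M \
   --fast-list \
   --progress
```

Note that `--fast-list` helps `rclone copy`/`sync` listings but, as mentioned below, does nothing on a mount.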



if you really need to optimize for speed, nothing is faster than wasabi, an s3 clone known for hot storage.
it does not have all those gdrive/b2 limits, quotas and whatnot.


this post compares gdrive to wasabi.

I'm just mounting the b2. The documentation doesn't say which flags work only for a mounted b2 drive. For example, do these flags only work when mounting the drive?

--b2-chunk-size 128
--transfers 30

The b2 is good, and it has pricing where you only pay for what you use, which I found most interesting. Wasabi had already been recommended to me, but this is for personal use.

that would be just 128 kBytes, whereas in your gdrive config you used --drive-chunk-size=128M

and --fast-list does nothing on a mount.
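To make the unit-suffix point concrete (the path and remote name here are placeholders): rclone size flags are read as KiB when no suffix is given, so you need the explicit `M`:

```shell
# --b2-chunk-size 128   → 128 KiB (far too small for large uploads)
# --b2-chunk-size 128M  → 128 MiB
rclone copy ./data NC:backup --b2-chunk-size 128M --transfers 8
```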

I will subscribe to wasabi :slight_smile: It also charges only for what you use.
What is its documentation page?

it is a bit more complicated.
wasabi has a minimum storage charge of 1TB for $5.99.
and it has a minimum retention period of 90 days, so if you upload a file today and delete it tomorrow, you will get charged for 89 days of pro-rated storage fees based on the file size.
but if you upload veeam backup files, you can request the retention period, for any file, to be reduced to 30 days.
but wasabi does not charge egress fees or api fees.
i stream my media from a rclone crypt at wasabi.
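To illustrate the pro-rated retention charge above with hypothetical numbers (a 100 GB file deleted after 1 day, priced at $5.99 per TB per month, counting 30-day months), a quick calculation:

```shell
# 89 remaining retention days ≈ 2.97 months of storage still billed.
awk 'BEGIN {
  gb = 100                # file size in GB
  rate = 5.99 / 1024      # $ per GB per month
  months = 89 / 30        # remaining retention period, in months
  printf "$%.2f\n", gb * rate * months
}'
# → $1.74
```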

as for cost, i find the following best for my use-case:
most recent backups and data go to wasabi and are deleted after 30 days.
older backups go to aws s3 deep glacier for $1.01/TB/month


I'm not sure what a mounted drive means exactly. I use rclone mount:

/usr/bin/rclone mount \
   --config=/home/tales/.config/rclone/rclone.conf \
   --allow-other \
   --drive-chunk-size=128M \
   --transfers=30 \
   --log-file /home/tales/logs/rclone.log \
   -vP \
   NC: /home/tales/b2

I'll stay with the b2. I will only pay for what I use and there is no minimum.

Just one doubt: I saw that the default for --transfers is 4; would 30 be too much?
It is the number of files transferred simultaneously, correct?
Which does it use more of, CPU or RAM? I have 6GB of idle RAM.

that flag only works for gdrive; you need to use --b2-chunk-size


depends on your internet connection and other variables unique to your use case.

cannot compare cpu to ram.

why not do a test for yourself and then you will know.
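One hedged way to run that test (the test directory and the remote name `NC:` are placeholders from earlier in the thread): time the same copy at a few --transfers values and compare:

```shell
for t in 4 8 16 30; do
   echo "--transfers $t:"
   time rclone copy ./testdata NC:bench --transfers $t
done
```

Watch CPU and RAM with `top` while it runs; whichever value stops improving the wall-clock time is your sweet spot.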


This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.