Rclone mount random slow speeds

142.250.76.106 www.googleapis.com

Keep in mind, if this IP gets decommissioned by Google, the stream will stop.

Yep got it, any reason you chose this IP?

Would another way be to block the suspect IPs in my firewall rules? Or would that cause an issue as well?

The problem with only blocking it in the firewall is that if DNS gives a blocked IP, it just stops.

If you use your firewall to block it, it will block the connection. Blocking the connection does not cause another DNS lookup, so basically you will break it until a new DNS lookup is performed and a permitted IP is allowed through the firewall. The hosts file specifies the IP address that shall be used, and you can put multiple IPs in the hosts file. At the moment I am using just one as a proof of concept.

Note: if you use, for example, 5 IPs in your hosts file and Google stops using one of those IPs for the API service, 1 out of 5 requests won't work and therefore it will break. So sticking to just one IP in the hosts file may be a good final solution.

I chose the IP because I was getting 300Mbps-1.2Gbps synchronous, which is all I need. It doesn't mean that tomorrow it will be like that. I did my testing 2 days ago, so it may have already changed.

These IPs also performed similarly:

142.250.66.202 - tested
142.250.76.106 - tested
142.250.204.10 - tested
172.217.167.74 - tested

Most modern systems will allow you to enter multiple IPs for the same hostname in the hosts file, no different to having multiple A records on a proper DNS server. (Older Linuxes apparently only give you the first result, so be sure to test.)

I'm going to go through the IPs and add the "good ones" to my hosts file and see how I go.

UNRAID is Linux, right? You need to edit your /etc/hosts file. Add some extra lines to the bottom like this:

172.217.167.106 www.googleapis.com
142.250.66.202 www.googleapis.com

Test by doing a dig on the A record for the domain:

dig www.googleapis.com a
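
Note that dig goes straight to your DNS server and ignores /etc/hosts, so it's good for finding candidate IPs rather than confirming the override. To check that the hosts entry is actually being picked up, you can query through the normal resolver path with something like:

getent hosts www.googleapis.com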

Yep, just added that IP to my hosts file and tested with a 4K 180Mbps rip, and it struggled to load, so that IP doesn’t seem to work very well for me. How do you guys recommend testing?

I'm adding one of them to the hosts file then doing a

rclone copy -v -P google:/file ./

and watching the download speed.
Swap the IP and try again to see which ones are the good ones.
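
If you want to save some typing, a quick and dirty loop like this does the swapping for you (needs root, uses the IPs from earlier in the thread and the same remote/file as the copy command, and downloads to a scratch directory so each run actually re-downloads):

for ip in 142.250.66.202 142.250.76.106 142.250.204.10 172.217.167.74; do
    sed -i '/www.googleapis.com/d' /etc/hosts      # remove the previous test entry
    echo "$ip www.googleapis.com" >> /etc/hosts
    echo "=== testing $ip ==="
    rclone copy -v -P google:/file /tmp/apitest    # watch the reported speed
    rm -rf /tmp/apitest                            # clean up so the next run re-downloads
done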

Yip, as @Nebarik said, do a file copy from the Google Drive to the server and monitor the speed.

Also just a note, those 4 IPs are from Sydney.

I need to look into the chunk sizes, as this might be a good plan B.

Here are more IPs to test with...

A 142.250.71.74
A 172.217.24.42 (when I tested this was a "bad one")
A 172.217.167.106
A 142.250.66.170
A 142.250.66.234
A 142.250.67.10

Are you located in Adelaide? Your IPs seem to be peering through Adelaide instead of Sydney. I'm getting a different list to you.

Here's my testing so far

142.251.41.10 - Passed - 30MB/S
142.251.40.106 - FAILED - 300KB/s
142.250.64.106 - FAILED - 300KB/s
142.250.65.234 - Passed - 30MB/S
142.250.80.106 - Passed - 30MB/S
142.250.72.106 - Passed - 30MB/S
142.251.32.106
142.250.80.42 - FAILED - 300KB/s
142.251.40.202
142.250.65.170
142.251.35.170
142.250.65.202
142.250.81.234
142.251.40.234
142.250.80.74
142.250.176.202

FYI I'm testing a little differently: I'm running Plex in Docker, so I've spun up a testing container and passed the following flag to the container: --add-host www.googleapis.com:142.251.41.10

Then I'm downloading a 4K Blu-ray rip in Chrome using Plex's download function.
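
If you want to sanity-check that flag, it just injects an extra line into the container's /etc/hosts, which you can see with a throwaway container (alpine here is just an example image):

docker run --rm --add-host www.googleapis.com:142.251.41.10 alpine cat /etc/hosts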

All servers so far can burst at 100MB/s, but the ones failing drop down to 300KB/s within 3-4GB of downloaded data. The ones that are passing can burst at 70-90MB/s for 6-9GB before dropping down to 30MB/s.

My server (1x vCPU, 1GB RAM) is a Vultr VPS in Melbourne.

All my files are under 12GB and direct play.

Maybe continue going through the list of IPs until you get a decent peer. If you don't find a decent one, maybe move your server to Melbourne or Sydney, for testing at least, and you might get better peering.

I just noticed I misread your previous post; if you can download at 30MB/s you should be good, that's 240Mbps. I'm going to have a play with this tonight to see what I can pull down. My VPS can reach about 2Gbps on a speed test, so I should be able to get at least 1Gbps from the Google API.

Have a look at...

--transfers=N (default N=4)

Number of file transfers to be run in parallel. Increasing this may increase the overall speed of a large transfer, as long as the network and remote storage system can handle it (bandwidth and memory).
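
For example, tacking it onto the copy command from earlier (the values here are just the ones tried in the results below, not a recommendation; size flags given without a suffix are read as KiB, so 65536 is 64MB):

rclone copy -v -P --transfers=16 --drive-chunk-size=65536 google:/file ./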

UPDATE:
Here are my results
Default Settings
Transferred: 2.966 GiB / 10.676 GiB, 28%, 1.106 MiB/s, ETA 1h58m58s
#Comment: It looks like shaping takes place once it reaches around 5GB of throughput or 50,000 requests.

--transfers=4 --drive-chunk-size=1024
Transferred: 8.121 GiB / 10.676 GiB, 76%, 369.125 KiB/s, ETA 2h57s
#Comment: Decreasing the chunk size just decreases the performance

--transfers=16 --drive-chunk-size=65536
Transferred: 10.676 GiB / 10.676 GiB, 100%, 39.818 MiB/s, ETA 0s
Transferred: 1 / 1, 100%
Elapsed time: 3m6.5s
#Comment: With 16 streams and a 64MB chunk size, performance was significantly improved and the 10.6GiB file downloaded in about 3 minutes.

--transfers=16 --drive-chunk-size=131072
Transferred: 10.676 GiB / 10.676 GiB, 100%, 84.543 MiB/s, ETA 0s
Transferred: 1 / 1, 100%
Elapsed time: 2m31.9s
#Comment: Increasing the chunk size again improved things further, cutting the download time by about 35 seconds (roughly a sixth).

I've done a thing!

I was worried about the IPs being cycled and it being annoying to manually test and set a new IP, so I automated it.
Here's the shell script I wrote. It finds the fastest endpoint and sets it in your hosts file. Feel free to use or improve my horrible coding skills.

Before running for the first time, make sure your hosts file is clean, without any records for the Google API, as the script will create a backup of it and refer to that backup often.

As for your test file: super small files are no good as they download too fast to report a proper speed. The bigger it is, the higher the reported speed will be due to the ramping, but big files take too long on the slow endpoints. I've found 50MB to be a good middle ground.
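
If you just want the gist of the approach, a simplified sketch looks something like this (not the actual script; the remote name, test file path, and parsing details are placeholders, so adjust for your setup):

#!/bin/bash
# Simplified sketch: try each Google API IP, measure rclone's reported speed,
# then pin the fastest one in /etc/hosts. Assumes root, a remote named
# "google:", and a ~50MB test file at google:/testfile.

testfile='google:/testfile'
backup=/etc/hosts.apibackup

# Keep a clean copy of the hosts file (no googleapis entries) to restore between tests
[ -f "$backup" ] || cp /etc/hosts "$backup"

best_ip=''
best_speed=0

# Candidate IPs: whatever DNS currently returns for the API host (A records only)
for ip in $(dig +short www.googleapis.com a | grep -E '^[0-9.]+$'); do
    cp "$backup" /etc/hosts
    echo "$ip www.googleapis.com" >> /etc/hosts

    # Grab the last stats line rclone logs; the third comma-separated field is the speed
    line=$(rclone copy -v --stats=5s "$testfile" /tmp/apitest 2>&1 | grep 'ETA' | tail -n 1)
    speed=$(echo "$line" | awk -F', ' '{print $3}')                       # e.g. "54.74 MiB/s"
    mib=$(echo "$speed" | awk '{v=$1; if ($0 ~ /KiB/) v/=1024; print v}') # normalise to MiB/s
    rm -rf /tmp/apitest

    echo "$ip -> $speed"
    if awk -v a="$mib" -v b="$best_speed" 'BEGIN{exit !(a>b)}'; then
        best_speed=$mib
        best_ip=$ip
    fi
done

# Restore the clean hosts file and pin the fastest endpoint
cp "$backup" /etc/hosts
echo "$best_ip www.googleapis.com" >> /etc/hosts
echo "Fastest endpoint pinned: $best_ip (~${best_speed} MiB/s)"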

Enjoy.

This is really nice and works a charm.

A couple of questions: are you taking the last MiB/s reading, or the average speed? It looks to me like you are taking the last reading.

~~It would be cool if you could add a blacklist of IPs, so if we know there is a poor IP then don't even bother checking it.~~

EDIT:
Added a function to blacklist known poor servers (this goes above the "Checking each IP" section):

#-------------------------#
# Blacklist Known Bad IPs #
#-------------------------#
NaughtyServer='172.217.24.42'
# Strip the known-bad IP out of the candidate list before the testing loop runs
grep -v "$NaughtyServer" tmpapi/api-ips > tmpapi/tmpfile && mv tmpapi/tmpfile tmpapi/api-ips

Awesome, I'll give this a shot.

This is fantastic, thanks guys!

Kinda both. From what I understand, rclone does its own averaging logic for its transfer summary, and for smaller files like 50MB it's over so quick that we only get one text dump into the rclone log, so tailing for the last mention is kind of pointless on good endpoints.

Where it comes into use is for slower endpoints, or larger files that result in multiple log dumps. The main reason for grabbing the last one in that case is just to narrow it down to a single result, and I figured that's as good as any for my needs.

The blacklist idea is great. I'll see if I can't automate it a bit.

Could you automate the creation of the dummy file? I’m unsure how to create a dummy file using my setup.

I’m using UnionFS to join my local storage with my rclone storage, and it’s mounted at /mnt/user/mount_unionfs/gdrive_vfs/

I've updated my script to automate the blacklisting (plus some other smaller stuff).
The way the blacklisting works is: if an endpoint records a speed in the KiB/s range, its IP gets written to a hidden file nearby called .blacklist-apis. Then, the next time you run the script, it clears those IPs out of the endpoint lookup results so it doesn't bother trying them again.

I'm not sure if those IPs are permanently bad or just temporarily bad. If the latter, it would be wise to delete that .blacklist-apis file every now and then to give it a fresh start.
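
Roughly speaking, the two halves of the idea look like this (using the same file names as the earlier snippet; the actual script may differ a bit, and $ip and $speed come from the testing loop):

blacklist=tmpapi/.blacklist-apis
touch "$blacklist"

# Before testing: drop any previously blacklisted IPs from the lookup results
grep -v -x -F -f "$blacklist" tmpapi/api-ips > tmpapi/tmpfile && mv tmpapi/tmpfile tmpapi/api-ips

# After each test: anything still crawling along in KiB/s gets remembered for next run
if echo "$speed" | grep -q 'KiB/s'; then
    echo "$ip" >> "$blacklist"
fi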

If you're having trouble running that command, I'm not sure what a good alternative would be. No worries, just use an existing file on your Gdrive. Edit the testfile variable at the beginning to a file of your choice. It will need to be in rclone format (not mount format), like my example: "google:/folder/file".

I noticed that when the dummy file was nearly finished downloading (using multiple streams), the speed dropped right back, and in most cases was a lot lower than the average. I also found using a larger file, e.g. 500MB, was better. I think taking the total time to download, and then finding the lowest number, would be more accurate.
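
For example, something like this, then comparing the elapsed times between endpoints (the path is just a placeholder for whatever test file the script is configured with):

time rclone copy google:/testfile /tmp/apitest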

Results:
Please wait, downloading the test file from 142.250.70.138... 54.74 MiB/s
Please wait, downloading the test file from 142.250.70.170... 27.65 MiB/s
Please wait, downloading the test file from 142.250.70.202... 54.92 MiB/s
Please wait, downloading the test file from 142.250.70.234... 59.53 MiB/s

For me, 59.5MiB/s (or ~500Mbps) is pretty good. The max I got was 75MiB/s (~630Mbps).