Rclone mount random slow speeds

Ubuntu doesn't cache DNS entries.

The script uses dig to get all the A records from the DNS lookup and then tests each of them. The fastest one then gets added to the hosts file.
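
For reference, getting the list of A records is something like:

dig +short A www.googleapis.com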

On Ubuntu I haven't figured out how to keep the DNS entry in the hosts file after a reboot, so I added a line in cron to run the script on reboot. This way I always have an entry in the hosts file.
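
For example, a line like this in root's crontab (crontab -e; the script path is just a placeholder):

@reboot /path/to/googleapis.sh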

I am also only keeping one DNS entry in the hosts file at a time.

Keep in mind that MiB/s is different from Mbps, and different again from MB/s: 1 MiB/s is 1,048,576 bytes per second (about 8.39 Mbps), while 1 MB/s is 1,000,000 bytes per second (exactly 8 Mbps).

@Nebarik it might be worth putting the cleanup at the top of the script, so logs are left after the run but are deleted before the next run.

I got @cg0's script working beautifully in Unraid. One thing I also noticed is that the script didn't respect the blacklisted IPs when running it a second time with a 500M test file.

I'm running my script daily at 6 AM.
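
In cron that's a single line along the lines of (script path is a placeholder):

0 6 * * * /path/to/your-script.sh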

Ok, but how do you ensure that the rclone download command goes to each IP? From what I have seen, each call goes to a random IP from the ones available (nslookup on Windows gives me the same eight addresses that came out of my experiment with flushing the DNS cache multiple times).

So I could test them manually but I don't see how to automate it. If I have eight IPs to check I would need to have a batch that tests them iteratively (flushing the DNS cache before each attempt, that's fortunately doable through command line ipconfig /flushdns) until all eight came up. I can't ask rclone to download from a specific IP, but that's how the process of the script is described. I guess I am missing something.

you're right, that was a typo. i meant to say MiB (mebibytes) and KiB (kibibytes) in the message, as reported by rclone in the log file. will fix it (well, apparently I cannot edit that post). thanks for the heads up.

that's because the cleanup function deletes the entire local tmp dir before exiting the script. I'll review the implementation of blacklisted IPs later today. I'm not entirely convinced it's a good idea to blacklist IPs permanently.

the /etc/hosts file takes precedence over external DNS requests (the request never leaves the host if the requested address, such as www.googleapis.com, matches an entry in the hosts file). the script tests each IP that was first obtained via dig and whatever DNS server your host normally uses (e.g., 1.1.1.1, 8.8.8.8). that is, once dig returns a list of IPs, the script edits the hosts file to test the first IP, then cleans it up and tests the second, and so on.
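
as a rough illustration of the idea (this is a simplified sketch, not the actual script: it uses curl --resolve to pin a request to one IP instead of editing the hosts file, and it measures request time rather than rclone download speed):

#!/bin/sh
# simplified sketch: time a request against each A record of www.googleapis.com
# and keep the fastest one in /etc/hosts (run as root)
HOST="www.googleapis.com"
BEST_IP=""
BEST_TIME="999999"
# keep only IPv4 addresses in case dig also prints CNAME targets
for ip in $(dig +short A "$HOST" | grep -E '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$'); do
  # --resolve pins this single request to a specific IP without touching /etc/hosts
  t=$(curl -s -o /dev/null --resolve "${HOST}:443:${ip}" -w '%{time_total}' "https://${HOST}/")
  echo "tested ${ip}: ${t}s"
  if awk -v a="$t" -v b="$BEST_TIME" 'BEGIN{exit !((a+0) < (b+0))}'; then
    BEST_TIME="$t"
    BEST_IP="$ip"
  fi
done
# drop any previous entry for the host and append the fastest IP
sed -i "/${HOST}/d" /etc/hosts
echo "${BEST_IP} ${HOST}" >> /etc/hosts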

Oh, ok. That's how I would have done it but I am unfortunately no coder, so my ability to understand scripts is seriously limited. Got it.

And thanks for explaining. The random nature of this is disconcerting, I have to say. I don't get what Google is doing and what it's trying to achieve. It all smells of a mistake on their side.

Hi, I'm trying to execute the script on my Ubuntu machine. I edited the file and added the correct dir, etc., but when I execute it I get errors like: [GES] [WARNING] Unable to connect with ...ip...
What am I doing wrong? Thanks.

I updated the google endpoint scanner script with an option to add/parse permanent blacklisted IPs (uncomment #USE_PERMANENT_BLACKLIST="true" and optionally edit the related storage location vars) and, by default, the script now appends all whitelisted IPs to the hosts file (to append only the best one, uncomment #USE_ONLY_BEST_ENDPOINT="true").
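
for example, with both options enabled, the relevant lines in the script would be uncommented like this:

USE_PERMANENT_BLACKLIST="true"
USE_ONLY_BEST_ENDPOINT="true"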

also fixed an issue with parsing the speed results (different endpoints with the same speed) and improved the documentation at the top (how to install and use the script). the latter should be useful for anyone trying to use the script itself or to modify it for their own needs.

i'll now keep it running daily to see if it works as a temporary solution to this issue. will report back if I notice anything wrong.

it's either a network issue, or you should double-check all the REMOTE variables. REMOTE itself has to match the exact name of your remote in the rclone config file. if it is called gdrive, then:

REMOTE="gdrive"

REMOTE_TEST_DIR must point to the directory where your test file (REMOTE_TEST_FILE) is stored on the remote. for example, if your test file is on /my_remote_dir/subdir/my_dummy_file, then your variables should be:

REMOTE_TEST_DIR="/my_remote_dir/subdir/"
REMOTE_TEST_FILE="my_dummy_file"

finally, make sure root is accessing the right rclone.conf. by default, the config is at $HOME/.config/rclone/rclone.conf, which means that when running this script as root (required), it will look for the config file at /root/.config/rclone/rclone.conf. if your config is not there, then you can uncomment and edit the CONFIG variable to match the location where your actual rclone.conf is stored. in my case, for example, it is stored under my own user cgomesu, so my CONFIG looks like this:

CONFIG="/home/cgomes/.config/rclone/rclone.conf"

here is a copy of the installation and usage instructions from the new version of the script:

# Installation and usage:
# - install 'dig' and 'git';
# - in a dir of your choice, clone the repo that contains this script:
#   'git clone https://github.com/cgomesu/mediscripts-shared.git'
#   'cd mediscripts-shared/'
# - go over the non-default variables at the top of the script (e.g., REMOTE,
#   REMOTE_TEST_DIR, REMOTE_TEST_FILE, etc.) and edit them to your liking:
#   'nano googleapis.sh'
# - if you have not selected or created a dummy file to test the download
#   speed from your remote, then do so now. a file between 50MB-100MB should
#   be fine;
# - manually run the script at least once to ensure it works. using the shebang:
#   './googleapis.sh' (or 'sudo ./googleapis.sh' if not root)
#   or by calling 'sh' (or bash or whatever POSIX shell) directly:
#   'sh googleapis.sh' (or 'sudo sh googleapis.sh' if not root)
# Noteworthy requirements:
# - rclone;
# - dig: in apt-based distros, install it via 'apt install dnsutils';
# - a dummy file on the remote: you can point to an existing file or create an
#                              empty one via 'fallocate -l 50M dummyfile' and
#                              then copy it to your remote.
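
for example, to create a dummy file and copy it to the remote (the remote name and path reuse the earlier example and are just placeholders):

fallocate -l 50M my_dummy_file
rclone copy my_dummy_file gdrive:/my_remote_dir/subdir/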

Regarding the google endpoint scanner script, when I tried an rclone copy of the 50MB dummy file with --multi-thread-cutoff=32M, some of the endpoints showed low speeds more distinctly. I have little knowledge of what's behind this, so I'm asking whether this could be used as a key factor to filter out bad endpoints.
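
That is, something along the lines of (the remote name and paths are placeholders):

rclone copy gdrive:/path/to/dummy_50M /tmp/ --multi-thread-cutoff 32M -P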

Some more testing results.

From my location, googleapis can be reached through eight different IPs. Five have fast download speeds (close to saturating my gigabit connection), while three are stuck at 20Mbps. Over the course of the past couple of days, the fast IPs have always been fast. One time, one of the slow IPs was fast too, only to go slow again when tested later.

Today, after checking through rclone, I wanted to check through the official Google Drive app (it used to be Google Drive File Stream). Exact same results, but this time I also checked the upload speed of the three slow IPs, and they saturated my 200Mbps upload. Completely saturated. So three IPs out of eight were performing at 20/200 download/upload.

I wonder, at this point, considering this is through the official application, whether it could be worth getting in touch with Google customer service to let them know about this and check whether they have any explanation.

If you've replicated it with the official app, then contacting them is a good idea.

If you have a Google customer support contract or channel, use them. If not, you can try reporting a bug, but that is very hit and miss in my experience.

Yes, I'm gonna do that.

The other thing I am looking for is whether there is an equivalent to dnsmasq (available on Linux) for Windows.
For people using Linux this could solve the problem for now since, if I've understood correctly, you can set it up to ignore specific IP addresses for a given domain name. See here: Can I skip some IPs returned by DNS round robin? - Unix & Linux Stack Exchange
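
For illustration, a dnsmasq rule that pins the domain to a single known-good address (rather than excluding the bad ones; the IP below is just a documentation placeholder) looks like this:

# e.g. in /etc/dnsmasq.d/googleapis.conf
address=/www.googleapis.com/203.0.113.10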

a bit OT - but are you able to get unlimited Dropbox for 1 user? I thought the minimum was 3?

Regardless of whether you have 1, 2, or 3 users, you pay for 3, as the minimum user count is enforced. I don't mind paying for a service to avoid the limits of Google. Some folks share with 2 other people, but I'm a hermit so the cost is fine for me :slight_smile:

Interesting - thank you. Until your post, I didn't know we had options. If google continues to erode, I may jump ship too. Thanks for everything you've posted around here.

Yeah... the problem is having to transfer TBs of data. Since there's no way to go Drive--->Dropbox directly, it becomes a bit of a nightmare.

That's exactly what I did. I rented a cheap unlimited VPS and let it run. Depending on your data size, you have to plan for that, as there's a limit on how much you can move per day before Google throttles you. It was maybe 10TB a day or something; I really don't recall.
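
Something like this is the rough shape of the command, re-run daily (remote names and the transfer cap are just examples):

rclone copy gdrive: dropbox: --max-transfer 9T --cutoff-mode=soft --transfers 8 -P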

Yeah, it's 10TB per day. That's how I remember it, at least. Doesn't Dropbox have a limit on how much you can upload daily? Google has a 750GB daily upload limit, IIRC.

And then there's the little matter of being completely ignorant about renting a VPS, setting it up, etc., etc.

Have to quote myself here, as I've come to the realization that I have been experiencing the same issue as the rest of you. It wasn't immediately clear to me what was happening, because I've been troubleshooting a major peering issue with my ISP at the same time. While the latter problem causes a complete loss of connectivity to my Plex server, the former results in endless buffering when hitting the slow IPs (as you all know). This seems to have been getting gradually worse, as I can't even watch my smaller REMUX episodes anymore. Used to affect only my UHD stuff.

Switching cloud providers is not an option for me either. The hoard is simply too big :sob:

Looks like I'm the only North American user here so far, which is surprising.

I think your best bet is to use nslookup (or a similar tool) to get the list of IPs that www.googleapis.com is using, and then test each of them by modifying your hosts file. Once you find one (or more) that work well, you keep those in the hosts file.
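
For example, a single line like this (the IP is a placeholder for one of your fast ones; on Windows the hosts file is at C:\Windows\System32\drivers\etc\hosts):

203.0.113.10 www.googleapis.com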

In the meantime I have contacted Google to check on their point of view on this, since it affects their official app as well.
