Google Drive Throttling

Ok, I figured out the slow scan issues I was having. I had the Plex media mounted over SMB. I switched it to being on the local machine, and scans now finish almost instantly. Really happy about that, because I had thought it was just a downside of using cloud storage.

In the process of switching from SMB, I created a new instance of rclone for the Plex machine. Since I now had two instances of rclone with the same configuration, I thought I would test the copying again.

Unfortunately, I had the same slow speeds copying with rsync locally, although I noticed something strange. Copying with rsync from one machine resulted in slow speeds, but playing back media on the other machine through a different rclone mount caused the first machine’s copy to speed up.

So there’s some sort of interaction that is going on.

I’ll continue to do more testing, but something makes me feel that changing my IP will result in the issue going away…

Do you ever get duplicate files when using rclone copy?

I have been having this issue occasionally and have been using rclone dedupe to correct it. A quick search shows that it is a Google Drive issue, but I seem to get them pretty frequently.

Also I have another weird issue. I get these messages every so often:

NOTICE: Encrypted drive ‘direct-decrypt:’: ChangeNotify was unable to decrypt “413167835.LZ”: illegal base32 data at input byte 9

That particular file is not within the scope of direct-decrypt: at all.

I have it set up now so GD: has a root folder ID of a folder called “Sync”, and direct-decrypt decrypts the files in the “Sync” directory. That particular file “413167835.LZ” is outside of the “Sync” directory, in a completely different folder.

rclone lsd GD: just shows the encrypted folders.

Why is rclone still able to see those files?

Nope. I’ve never had a single duplicate. I really only upload from a single machine so not quite sure how you’d get one.

If you have duplicates, you can use rclone dedupe as it runs interactively and you can clean it up.
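As a sketch, assuming the remote is named GD: as elsewhere in this thread, an interactive cleanup run looks like this (interactive is also rclone’s default mode):

```shell
# Walk the remote, show each set of duplicates, and prompt for what to keep
rclone dedupe --dedupe-mode interactive GD:

# Non-interactive alternatives exist too, e.g. automatically keep the
# newest copy of each duplicate:
# rclone dedupe --dedupe-mode newest GD:
```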

That error means you have a non-encrypted file in a crypted folder.

You’d need to share your rclone.conf and give a bit more details on the conf/error and recreate it.

I have been running rclone dedupe. I don’t understand what I’m doing that is causing it.

What rclone copy command do you use?

I don’t though. In fact, that particular file doesn’t even exist in my drive.

I also have the root dir set to a different folder. The files that do exist that come up are from completely different folders.

Does --drive-chunk-size do anything if there is no uploading to the mount?

Just a straight rclone move.

Drive chunk size is only for uploading with Google Drive.
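As a sketch (the paths here are placeholders), the flag only matters in the direction that writes to Drive:

```shell
# Affects uploads: larger chunks mean fewer, bigger upload requests
rclone copy /local/media GD:Sync --drive-chunk-size 64M

# Has no effect on a download / read-only direction
rclone copy GD:Sync /local/media
```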

If you want, share your config and some logs, as they are telling you there is an unencrypted file in there.

Here’s what my config looks like. It’s very simple.

[GD]
type = drive
client_id = 
client_secret = 
service_account_file = 
token = 
root_folder_id = ["Sync Folder"]

[direct-decrypt]
type = crypt
remote = GD:
filename_encryption = standard
password = 
password2 = 

I’ve also used this config and had the same problems.

[GD]
type = drive
client_id = 
client_secret = 
service_account_file = 
token = 

[direct-decrypt]
type = crypt
remote = GD:Sync
filename_encryption = standard
password = 
password2 = 

It’s really strange because the error message only comes up occasionally, right after this:

2019/04/14 22:20:34 DEBUG : Google drive root '': Checking for changes on remote

Is there a way to get rclone to show the full directory path when it gives errors for those files? I know they are in my drive somewhere but not inside the rclone folder.

I use this command

rclone copy --config=/config/rclone.conf --buffer-size 512M --checkers 16

Does anything seem off with that command? I’m honestly not sure whether buffer-size or checkers make a difference with copy; I can remove them if they don’t affect performance.

I just noticed that I didn’t have directory name encryption enabled in my config, but the directory names in Drive are still encrypted?

That’s the normal polling message you quoted there.

You can either put the mount in debug mode with -vv or you can run something like

rclone ls -vv direct-decrypt:

I just copied a simple file to create the error:

[felix@gemini ~]$ rclone lsf gcrypt: -vv
2019/04/14 23:39:36 DEBUG : rclone: Version "v1.47.0" starting with parameters ["rclone" "lsf" "gcrypt:" "-vv"]
2019/04/14 23:39:36 DEBUG : Using config file from "/opt/rclone/rclone.conf"
2019/04/14 23:39:37 DEBUG : hosts: Skipping undecryptable file name: not a multiple of blocksize
Movies/
Radarr_Movies/
TV/
TV_Ended/
Test/
mounted
2019/04/14 23:39:37 DEBUG : 4 go routines active
2019/04/14 23:39:37 DEBUG : rclone: Version "v1.47.0" finishing with parameters ["rclone" "lsf" "gcrypt:" "-vv"]

For your copy command:

rclone copy --config=/config/rclone.conf --buffer-size 512M --checkers 16

I am not sure why you are setting any buffer size. Google limits you to about 10 transactions per second, so setting 16 checkers is going to produce a lot of 403 errors, as the default transfers is 4. I’d run with 4 and 4, or something along those lines.
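For example, as a sketch (the source and destination paths are placeholders), something like this keeps the request rate under Drive’s limit:

```shell
# Stick to the defaults and cap the transaction rate explicitly
rclone copy --config=/config/rclone.conf /source GD:dest \
  --transfers 4 --checkers 4 --tpslimit 10
```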

drive-chunk-size is useful on copy commands as 32M or 64M is generally the sweet spot for Google.

My command I use is pretty straight forward with an exclude:

/usr/bin/rclone move /data/local/ gcrypt: -P --checkers 3 --log-file /opt/rclone/logs/upload.log -v --transfers 3 --drive-chunk-size 32M --exclude-from /opt/rclone/scripts/excludes --delete-empty-src-dirs

I have some folders I don’t upload so I filter those out.

Yeah I just ran

rclone ls -vv direct-decrypt:

and got nothing. There were no errors except for some 403 rate errors. Nothing even resembling the error I mentioned earlier.

So do you mean I should just leave checkers at the default of 4? Why did you adjust your checkers and transfers to 3? How did you know whether 32M or 64M is better for chunk size?

rclone copy --config=/config/rclone.conf --drive-chunk-size 32M

Does the directory_name_encryption = true matter? I didn’t have it but have all of my directories encrypted. Not sure what happened.

It’s super hard to follow what you are trying to fix or change when you ask many different questions in the same thread.

If that isn’t your rclone.conf, what is?

Can you share what your actual config is?
Can you run the same command grab the whole log file and share? Is the error gone now?

The directory names don’t encrypt themselves. If you had it set to true, they would be encrypted. If not, they wouldn’t be.

For uploading, give 32M a test and 64M a test, and see what works for your setup. I only wanted to transfer 3 files at a time, so I set it to that.
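A rough way to compare the two, as a sketch (the test file and destination folder are placeholders):

```shell
# Upload the same file with each chunk size and compare elapsed times
for size in 32M 64M; do
  echo "chunk size: $size"
  time rclone copy /tmp/chunk-test.bin GD:benchmark --drive-chunk-size "$size"
  rclone deletefile GD:benchmark/chunk-test.bin
done
```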

That is my rclone.conf file exactly as is. There was no “directory_name_encryption”, yet the directory names were being encrypted anyway. That’s what I was trying to say. I think I also just answered my own question. It appears that now, when editing rclone.conf, it always adds a directory_name_encryption line and sets it to either true or false. Mine didn’t have anything, so it might be because a previous version of rclone didn’t require it.

The error with the files isn’t gone. My mount logs still occasionally show the error. It’s not consistent. I believe it is a bug, because those files aren’t even in the path of the rclone crypt mount.

With uploading, what should I be looking for to know what works better?

If you ran rclone config, it prompts you to pick yes or no when you create an encrypted remote.

If your directories are encrypted, it was true.

If you ran the ls command on the same remote as your mount, it would produce the error and capture it.
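For instance, as a sketch (the log path is a placeholder):

```shell
# List via the same crypt remote the mount uses, keeping full debug output
rclone ls direct-decrypt: -vv --log-file /tmp/direct-decrypt-ls.log

# Then search the log for the decrypt failure
grep "unable to decrypt" /tmp/direct-decrypt-ls.log
```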

For uploading, speed would be the thing I’d look for.

I’ve had time to do more testing on this issue. I connected a laptop directly to my modem from Comcast and received a public IP on the device. No router or anything in between.

I then attempted to download files from

  1. Google Drive Web Interface
    I had similar results to before. Speeds were at a steady 2MB/s.

  2. rsync copy from an rclone mount in an Ubuntu VM
    Same results as the web interface, slightly slower. Speeds started at 10MB/s then dropped to 1.5MB/s.

  3. Speedtest.net and DSLreports
    Received results of ~950mbps down and ~40mbps up.

I had been searching around and saw this issue, but am unsure how to use it.

Should my hostfile look like

172.217.7.234 www.googleapis.com
or
172.217.7.234 googleapis.l.google.com

If it’s the first one, I tried two different IPs and had no luck.

You were right about Google not throttling, I think. I changed my IP several times and still had these slowdowns. Is this a Comcast issue?

Ok, I have been doing some testing with Wireshark to determine the IP that Chrome downloads were coming from, to see what I could add to the hosts file. After adding several entries to the hosts file, I was still unable to get it to use the IP that I entered (I keep seeing different IPs in Wireshark).

Anyway, I noticed that my downloads from Chrome returned to full speed, whereas yesterday they were slow. Copy speeds from the rclone mount have also gone back to full speed. I made no configuration changes, so I have no clue what happened or whether the issue will reappear.

I would guess you are a victim of throttling either at Comcast or at Google, or maybe just a temporarily congested link.

The drive web API is different to the drive v3 API that rclone uses and it uses different endpoints so can be throttled differently.

If you have trouble again, you could try a VPN - that might help (or might not!)