Rclone size performance ACD vs GDRIVE

There is quite a huge difference in performance and I am not sure why, since both drives have the same files, with the exception of only a couple of big backups (Plex metadata daily backups of 40GB each) that I deleted on ACD.

Since I am using copy there is a slight difference, but I will sync the drives and try again.
(But the difference is insane, e.g. 10x.)

If anything, I would assume ACD would take much longer since it needs to decrypt the file names, while gdrive is checking the encrypted ones.

rclone size acdcrypt:
Total objects: 38636
Total size: 39061.384 GBytes (41941842038668 Bytes)
05.02.2017/03:48 ACD SIZE TIME 156 seconds

rclone size gdrive:/crypt
Total objects: 38719
Total size: 39158.661 GBytes (42046292073713 Bytes)
05.02.2017/03:48 GDRIVE SIZE TIME 1620 seconds

UPDATE
Tested the same thing by directly checking the size of the encrypted folder, and performance is basically on par
(i.e. size doesn't really decrypt anything to get the count/size).
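
For reference, acdcrypt: is a crypt remote layered on top of acd:/crypt, so the first test resolves the encrypted names through crypt while the second reads the raw listing directly. A rough sketch of that layering in rclone.conf (remote names match the commands above; every other value is a placeholder) looks like this:

[acd]
type = amazon cloud drive
client_id = <placeholder>
client_secret = <placeholder>
token = {"access_token":"<placeholder>"}

[acdcrypt]
type = crypt
remote = acd:/crypt
filename_encryption = standard
password = <obscured by rclone config>
password2 = <obscured by rclone config>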

rclone size acd:/crypt 
Total objects: 38637
Total size: 39070.922 GBytes (41952083495844 Bytes)
05.02.2017/04:32 ACD SIZE TIME 157 seconds

It was slightly better today:

05.02.2017/12:13 START
ACD FILES & SIZE (rclone size acd:/crypt)
Total objects: 38632
Total size: 39068.844 GBytes (41949851516851 Bytes)
05.02.2017/12:13 ACD: It took 156 seconds

GDRIVE FILES & SIZE (rclone size gdrive:/crypt)
Total objects: 38639
Total size: 39075.611 GBytes (41957117688435 Bytes)
05.02.2017/12:13 GDRIVE: It took 784 seconds

Interesting. I’ve noticed that Google Drive definitely has some tight limits on the number of files created (~3/sec); your data seems to indicate that all their metadata operations are slow (perhaps that’s why they throttle file creation).

@ncw, any chance there’s a way around that (for example, using a different API call, or even the same call but on a different API version) to obtain better Google Drive metadata performance?

Cheers,

Durval.

Speed-wise, for me they are on par; sometimes ACD is slightly faster and sometimes GDRIVE is.

This is the single-file copy test (1GB of data):

06.02.2017 09:37:35 ACD COPY SPEED TEST START
1GiB 0:00:18 [55.3MiB/s] [55.3MiB/s] 100%
ACD Copy ended 19 seconds

06.02.2017 09:37:54 GDRIVE COPY SPEED TEST START
1GiB 0:00:17 [ 57MiB/s] [ 57MiB/s] 100%
GDRIVE Copy ended 18 seconds
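
For what it's worth, 1 GiB in 18-19 seconds works out to roughly 54-57 MiB/s, consistent with the rates shown above. The exact command isn't shown here; a rough way to reproduce a similar single-file upload test (the file path, the gdrivecrypt: remote name and the --stats interval are just placeholders) would be:

# create a 1 GiB test file, then time the upload to each remote
dd if=/dev/urandom of=/tmp/speedtest.bin bs=1M count=1024
time rclone copy /tmp/speedtest.bin acdcrypt:speedtest -v --stats 5s
time rclone copy /tmp/speedtest.bin gdrivecrypt:speedtest -v --stats 5s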

Is that from your server to ACD and gdrive?

How are you able to get those speeds copying just 1 file?
Mine will run at about 5 MB/s on a single-file copy from my server to ACD.

Hi Ajki, St0rm,

From my own experience with both providers (ACD and Google Drive), their single-large-file upload speed was always constrained by my upload speed (i.e., the available from-me-to-the-internet bandwidth). I would be very surprised if you folks are seeing anything different, at least for unconstrained uplinks of up to 100 Mbit/s (full Fast Ethernet)…

Cheers,

Durval.

@Ajki are you using rclone's built-in credentials for drive or your own? If you are using rclone's own credentials then I've just applied for some more quota with Google, which should help.

It is all based on your quota… rclone has a global quota of 500 Queries per second if you are using rclone’s credentials.

If you get your own credentials, then I believe you start with 10 QPS.

These numbers aren’t in the google console, but I’ve had quite a few email chats with the google drive team about quotas so I have a better idea of how it works!
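
For reference, using your own credentials just means creating a project in the Google developer console, enabling the Drive API, and putting the resulting client ID and secret on your drive remote; rclone config can set these interactively. A sketch of the relevant rclone.conf entry (all values are placeholders):

[gdrive]
type = drive
client_id = <your-client-id>.apps.googleusercontent.com
client_secret = <your-client-secret>
token = {"access_token":"<placeholder>","expiry":"<placeholder>"}

With your own client_id/client_secret set, requests count against your project's quota rather than rclone's shared one.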


I am using rclone's credentials.

Let’s see if the new Quota helps. It will probably take a day or two of back and forth to get it agreed.

I'm not even sure we know if the QPS is what causes the ban. I think only once we know precisely what causes the ban do we have a hope of fixing it, or @ncw could implement something like the open issue to cache the directory structure, so that a Plex scan won't pull basic file data from Google every time.

Hi @Stokkes,

I just had an idea: if it's indeed the number of queries per second going over a certain threshold, @ncw could implement a throttle for that inside the specific storage code for GDrive, so rclone would "pace" itself (for example, by sleeping until the QPS is back within limits) and therefore avoid the ban. OTOH, perhaps the easiest way to investigate would be to add some debugging to rclone so that it calculates the current QPS all the time and prints it to the log every time it gets an error from the remote; that way we could check the logs when the ban starts and see how many QPS we were doing. Then @ncw could use that number to implement the throttle code and preemptively avoid the ban.
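
Just to make the throttle idea concrete, here is a rough Go sketch (my own illustration, not rclone's actual code) using the standard golang.org/x/time/rate token-bucket limiter to pace calls so they never exceed a chosen QPS; the 10 QPS figure is only an example, the real threshold would have to come from the investigation above:

package main

import (
    "context"
    "log"
    "time"

    "golang.org/x/time/rate"
)

// driveCall stands in for a single Drive metadata request (hypothetical).
func driveCall(i int) {
    log.Printf("call %d at %s", i, time.Now().Format("15:04:05.000"))
}

func main() {
    // Allow at most 10 queries per second, with a burst of 1.
    limiter := rate.NewLimiter(rate.Limit(10), 1)
    ctx := context.Background()

    for i := 0; i < 30; i++ {
        // Wait sleeps until the next token is available, i.e. it
        // "paces" the client so the QPS stays within the limit.
        if err := limiter.Wait(ctx); err != nil {
            log.Fatal(err)
        }
        driveCall(i)
    }
}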

I'm not so sure about that. I run rclone against the same Google Drive account on a number of machines (both reading and writing), and coordinating the caching between all of them would be, IMHO, unfeasible. I already have caching issues with "rclone mount" when I run parallel "rclone copy", "rclone delete", etc. commands.

Another idea: why not modify Plex so it does its own caching? I just noticed it's an open-source project… and having it cache internally would solve the problem not only for rclone, but would also (I imagine) alleviate any related issues when dealing with slow storage (like CIFS/NFS over a VPN, etc.).

Cheers,

Durval.

@durval

I can't think of a business case to propose to the Plex team to modify their code so that those of us who use cloud services can get better performance. Plex performance is absolutely fine using any storage that offers up the files with low latency. I've used Plex with direct-attached storage and local NAS (CIFS/NFS) and it's fine. I can't imagine they would modify Plex, so we need to find another alternative.

Plex is not open source. I believe one of their clients (Plex Home Theatre) is open source, but the server and “brains” of the operation are completely closed source.

What I suggested to ncw is an implementation similar to the node-gdrive-fuse project that was developed over a year ago now. It's no longer supported, but it uses the official Google Drive Changes API to monitor for file system changes. Rclone doesn't use this currently, but it's very likely that by using the Changes API we would be able to avoid bans.

Many who use node-gdrive-fuse have never been banned. Unfortunately, the product doesn’t work well for me due to my large library and since it’s not being developed anymore, it’s not going to be fixed.
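
To illustrate what that looks like, here is a rough Go sketch (my own illustration, not node-gdrive-fuse's or rclone's code) of polling the Drive v3 Changes API, assuming an already-authenticated srv *drive.Service from google.golang.org/api/drive/v3; the cache update and the 30-second poll interval are placeholders:

package gdrivewatch

import (
    "log"
    "time"

    drive "google.golang.org/api/drive/v3"
)

// pollChanges keeps a local view in sync by asking Drive only for what
// changed since the last page token, instead of re-listing everything.
func pollChanges(srv *drive.Service) error {
    // Get a starting page token representing "now".
    start, err := srv.Changes.GetStartPageToken().Do()
    if err != nil {
        return err
    }
    pageToken := start.StartPageToken

    for {
        for pageToken != "" {
            cl, err := srv.Changes.List(pageToken).Do()
            if err != nil {
                return err
            }
            for _, c := range cl.Changes {
                // A real implementation would update its cache here.
                log.Printf("change: fileId=%s removed=%v", c.FileId, c.Removed)
            }
            if cl.NewStartPageToken != "" {
                // Last page of this round; remember where to resume.
                pageToken = cl.NewStartPageToken
                break
            }
            pageToken = cl.NextPageToken
        }
        time.Sleep(30 * time.Second)
    }
}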

Thanks for the clarifications @Stokkes. So Plex is closed source – one more reason for me not to touch it with a six-foot pole.

And that's exactly the issue with closed-source software – you can't fix it, you have to "convince" the people hogging the source code to do it. If you can't (or they have gone bankrupt, or were bought out by a competitor and the product shelved, etc.) you are really, irremediably, thoroughly SOL.

Fortunately we rclone users don't have that problem – and to boot, we have our great @ncw standing behind it better than any commercial closed-source vendor I've ever seen (and I've seen quite a few of them). Hope you can convince him to work around the Plex issue, but from what I've seen of @ncw so far, it should not be hard :wink:

I just hope he doesn't do it by using a local cache (due to the issues I mentioned), or if he does, that he at least makes it an optional switch so only the Plex (and similar) users run the risk of running into those issues…

Cheers,

Durval.