Future of rclone?


Hi Nick - just curious … do you know what Google's intentions might be for rclone, now or in the future? Like Amazon, which denied access to rclone some time ago. I'm thinking: if all our content is encrypted via rclone and all of a sudden Google denies access to rclone etc., how do we get access to the data? :slight_smile:


It’s a little different, as the majority of folks have their own API keys. Google would have to revoke my key to block me from accessing it via the API.
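For reference, using your own API key is just two extra fields on the drive remote in rclone.conf. A minimal sketch, where the remote name and the client_id/client_secret values are placeholders you'd get from the Google developer console:

```
[gcd1]
type = drive
client_id = 123456789.apps.googleusercontent.com
client_secret = your-client-secret
scope = drive
```

After adding the key, re-run `rclone config` for that remote so the stored token is issued against your own key rather than rclone's shared one.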

If you are concerned about your data, it’s always best to have it in multiple spots :slight_smile:

Thanks for the reply :slight_smile:

Well, I'm a bit concerned about the new EU directive regarding copyrighted material. I'm currently encrypting my data, just to make sure Google won't delete the content.

Are you encrypting your data?

Yep. Best to encrypt everything.


So if I want to make a copy to another account, I can do a server-side copy, right? Do I need two Google API keys?

The thing is, my current setup is: gcd1 is unencrypted, and I made a copy to another remote, gcd2, encrypted, using rclone copy gcd1: gcd2: etc. :slight_smile:
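One thing worth noting: by default, rclone will not do server-side copies between two different remotes, even when both are Google Drive. A sketch, assuming the remotes are named gcd1: and gcd2: as above (and yes, each remote can carry its own client_id in rclone.conf, so two API keys work fine):

```
rclone copy gcd1: gcd2: --drive-server-side-across-configs
```

Without that flag, the data is downloaded and re-uploaded through your own machine instead of being copied inside Google.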

Gcd2 was bought via eBay, and oddly it's still active (approx. 1 year); my main account is my own. But when I wanted to copy the data back from gcd2 to gcd1, I encountered download issues, a 403 error as I recall. I even set the bwlimit to a max of 8M so I wouldn't hit the upload limit of approx. 750 GB per day.
I didn't have any issues from gcd1 to gcd2; that copy ran for approx. 22 days (16 TB).
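As a sanity check on that bwlimit: 750 GB spread over the 86,400 seconds in a day comes out just under 8.9 MB/s, so capping at 8M does keep you below the daily upload quota. The arithmetic:

```shell
# 750 GiB per day spread over 86400 seconds, expressed in MiB/s
awk 'BEGIN { printf "%.2f MiB/s\n", 750 * 1024 / 86400 }'
# prints: 8.89 MiB/s
```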

Any ideas .?

If you try to copy to or start up another entry that is configured as a cache, it will error out saying the cache.db, or something along those lines, is already active.

For my rclone moves and such, I do not use a cache entry in my rclone.conf.

I basically have GD->Cache->Decrypt for my cache mount.
For rclone copy/sync commands, I use GD->AnotherDecryptName with the same passwords as the first.

So if I run a rclone move, I use the second item and if I’m mounting, I use the first.

Anything copied should get picked up on the cache via polling as the default interval is 1 minute.

In your case, you can use sync to go from GD1->GD2, and I'm guessing that on GD2 you want to do the GD2->Cache->Decrypt and mount the decrypt.
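To make that concrete, the remote chain described above might look roughly like this in rclone.conf. This is only a sketch: the remote names are illustrative, the password values are placeholders (rclone stores them obscured), and both crypt remotes must use the same password(s) to see the same data:

```
[GD]
type = drive
scope = drive

[Cache]
type = cache
remote = GD:

[Decrypt]
type = crypt
remote = Cache:crypt
password = OBSCURED_PASSWORD

[AnotherDecryptName]
type = crypt
remote = GD:crypt
password = OBSCURED_PASSWORD
```

You'd mount Decrypt: for Plex and point rclone move/copy at AnotherDecryptName:, so those copies never touch the cache's database.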

Oh, by the way, I don't use cache. I'm using Plexdrive. I tried cache, but no matter how I tried to sort it out, I encountered buffering. Plexdrive just runs without needing to configure all the things rclone does. I find it a bit confusing with all the settings needed to make rclone cache run with Plex :frowning:

Maybe I should give cache a go again, but I see there are tons of threads in here with soooo many settings, which really confuses me. It seems like the right settings depend on the individual user: internet speed, server specs, etc.

It would be awesome if there were optimized settings for best performance for, say, 15 users, 10 users, etc. :slight_smile: but there aren't.

I am in reasonably regular contact with the drive team about rclone’s quotas and I know they are aware it is a popular program for use with google drive.

It is much, much easier to get API keys for google, so even if they did revoke rclone’s keys, it is easy enough to get your own, so I don’t think google would bother.

If by some unlikely catastrophe rclone was no longer able to talk to google drive, you can still download the data with another tool and use rclone to decrypt it, or run an rclone crypt mount overlaying some other program supplying the actual google mount.
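For example, if the worst happened and you pulled the raw encrypted files down with some other tool, you could point a crypt remote at the local folder and decrypt in place. A sketch with a placeholder path and a placeholder obscured password (it must match the original crypt remote):

```
[localcrypt]
type = crypt
remote = /path/to/downloaded-encrypted-files
password = OBSCURED_PASSWORD
```

Then `rclone copy localcrypt: /path/to/decrypted` writes out the plaintext.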

So I think your data is safe in that regard.

Amazon have a closed API key access program and a qualification program where they test your program before they grant you a production set of keys, so they are completely different in that respect.


Thanks for the answer Nick :slight_smile:

Keep up the good work. Love your system .




I don’t think rclone losing access to gdrive is a concern. “Unlimited storage” is probably going the way of the dodo at some point.

Yes, unlimited is a very big number :wink: