Google Photos Backup - Speed & Exclude Issues

Hi rclone community,

Just discovered this program and wish I had found it years ago.

I have felt really hobbled by Google Photos’ clunky/minimalist web interface and found it impossibly tedious to upload/back up my digital photo collection, because I have everything sorted into albums (folders) and there is no bulk/automatic web method to keep that information/structure… until rclone.

My situation:

External hard drive with all my personal digital photos on it “E:”

Top level folder name is “Digital Photos”

Then each EVENT is nested in a folder (e.g. “2021-03-24 rClone Pictures” or similar).

Some of those folders have subfolders (e.g. “2021-03-24 rClone Picture Edits” or “.thumbnails”).
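
So the layout looks roughly like this (names taken from the examples above; exact contents vary per event):

E:\Digital Photos\
  2021-03-24 rClone Pictures\
    (photo files)
    2021-03-24 rClone Picture Edits\
    .thumbnails\
  (…more EVENT folders…)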

I want rclone to create individual albums that match the name of each subfolder of Digital Photos, and (most*) subfolders within each of those folders.

Read this: rclone.___/googlephotos/ (links are blocked?)

And started out with a basic command line:

rclone copy "E:\Digital Photos" remote:album/

with just a few albums in the directory (temporarily, as a test run)

This seemed to work, but the copy rate was painfully slow, there were occasional 409 errors, and I did not want the .thumbnails subfolders to be included. By slow I mean… 5 GB overnight, so <1 GB an hour, which is a fraction of the upload speed I get if I just drag and drop into the Google Photos web interface.

So I have since

- made my own “custom remote” – all default settings, just created my own OAuth client/API credentials through the Google Cloud Console

And tried to upgrade my command line:

rclone copy "E:\Digital Photos" remote:album/ --exclude /.thumbnails/* --log-file=C:*PATH*\logs\rcloneGPhotos.txt --log-level NOTICE

Again, this seems to work (still pretty slow?), but my exclude seems to have been ignored. The log file shows the occasional

“Failed to copy: failed to create media item: The operation was aborted. (409 ABORTED)” (I assume a throttling issue? Averaging 1-2 per hour; not really worried – figured it might get it “next time”)

&

“Failed to copy: couldn't upload file: { "code": 3, "message": "Payload must not be empty" } (400 400 Bad Request)” (for a .db or similar non-media file that I missed)

Requests:

  • Any tips to increase my speed? I’ve read numerous other topics in the forum: some say to adjust the chunk size or tps limits, others say to leave everything at the defaults, and some say it’s just Google’s API limits. (Any way to attach a screenshot of my API dashboard showing the statistics? It starts high but then drops low.)

  • Any thoughts on why my --exclude is failing? Those subfolders were auto-created by a photo manager I use, and I’m loath to delete them or it will take a while for the program to re-process my database. An album named “EVENT/.thumbnails” was created server side and the photos were uploaded into it. The disk path is "E:\Digital Photos\EVENT\.thumbnails\filename.*"

  • Should I be using the “sync” command rather than “copy”? Since I don’t usually (ever) delete local copies of the files, I would expect the functionality to be basically the same. Maybe “copy” is safer, since it removes any server-side delete functionality?

    • Would either of those be affected by Google Photos having resized/compressed the server-side copy of the file? I’m trying not to blow too much of my Google account quota by having Google Photos downgrade files to “High Quality” AFTER uploading.

INFO:

OS:
Windows 10 x64
It’s an older machine I use as a server (2nd Gen i7, 8 GB RAM), so that could be an issue given the large number of folders and files (although, relative to other users here, I think my use is small: dozens of GB, well below 1 TB).

rclone Version:

rclone-v1.54.1-windows-amd64

Cloud Storage:

Google Photos

*I did have to muddle through creating my own OAuth client – the instructions I found (rclone.___/drive/#making-your-own-client-id) did not seem to cover Photos specifically, just Drive, and did not mention which permissions needed to be enabled (I enabled everything listed for Photos).

Current Command:
rclone copy "E:\Digital Photos" remote:album/ --exclude /.thumbnails/* --log-file=C:*PATH*\logs\rcloneGPhotos.txt --log-level NOTICE

Config:
[remote]
type = google photos
read_only = false
token = {"access_token":"XXXX","token_type":"Bearer","refresh_token":"XXXX","expiry":"2021-03-24T16:34:28.7781737-07:00"}
client_id = XXXX.apps.googleusercontent.com
client_secret = XXX

rclone is currently running, so I am a bit stuck on changing the logging level or getting more information until it finishes the current task (which at the current rate will take a couple of days…). Any harm in closing it and restarting? Or, with copy, does that create issues/require a lot of processing time to compare local to server?

I think the best tip is to adjust --tpslimit until you avoid tripping any of Google's rate limiters. That will slow the initial transfer down, but then it won't stop later.
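
For example, as a starting point (the exact value is something to experiment with; raise it if transfers are too slow, lower it if the 409s keep coming):

rclone copy "E:\Digital Photos" remote:album/ --tpslimit 2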

Your --exclude /.thumbnails/* pattern only excludes .thumbnails directories at the root of the transfer.

I expect you want --exclude .thumbnails/** which excludes .thumbnails directories anywhere in the transfer, along with all of their content, recursively.
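
Putting that together with the --tpslimit suggestion, your command would look something like this (same log settings as before; the source path is quoted because it contains a space):

rclone copy "E:\Digital Photos" remote:album/ --exclude ".thumbnails/**" --tpslimit 2 --log-file=C:*PATH*\logs\rcloneGPhotos.txt --log-level NOTICE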

Copy is fine - it is the safe version of sync. Always use copy until you are sure you know what you are doing!
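
If you do want to try sync later, running it with --dry-run first will log what it would copy and delete without actually changing anything:

rclone sync "E:\Digital Photos" remote:album/ --dry-run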

As far as I know they are just image files either way so you'll transfer more or less data. I don't think that has a great effect on API limits - it is the number of operations the photos API seems to really care about.
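
If you want to cut down the number of simultaneous API calls as well, --transfers (4 parallel file transfers by default) is another knob you could try lowering alongside --tpslimit, though whether it helps with Photos specifically is something you'd have to test:

rclone copy "E:\Digital Photos" remote:album/ --transfers 2 --tpslimit 2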

Rclone should use very little RAM - I doubt that will be a problem.

The Google Photos API is a poor effort by Google - I'd give them a C+, could try harder. If you can get it working with rclone then it will stay working, but if you can't, you could use an alternative like gphotos-cdp which drives an embedded Chromium browser and clicks the buttons in the web interface. This has the great advantage that you get unmodified images, unlike using the API.

Thanks for the very thorough reply.

Regarding --tpslimit, any suggestions on where to start?
What's rclone defaulting to, since I didn't include anything?

From reading some other posts, you had suggested people try "--tpslimit 2", but looking at my API & Services dashboard on Google, my average traffic was < 0.5 requests/s, only peaking at the very beginning and the very end (it got up to ~2.3).
Other stats: latency ranged 15-30 continuously; errors averaged ~10%, jumping between 0 and 20%.

Perhaps also noteworthy that I am using a VPN, so maybe I'm being extra throttled by Google.
