Copy Google Photos without list?

Hi - I have a large album of family photos in Google Photos that is automatically generated from photos taken by me and my wife, based on facial and pet recognition.

Currently, goog:album/Gladlees contains 17,185 photos and videos.

I'm trying to pull random photos from this gallery for display on an old iPad 1. I've dumped the list of images into a text file, which I'm shuffling to pick 50 images, but I believe each individual rclone copy does a full pull of the album listing, which of course takes several calls because of pagination and chews through a good amount of time and quota.

When I tested downloading a full album of ~80 photos, it was fast, less than 1 minute.

Sounds like a job for cache, right? So I set up test-cache to point at goog:, and indeed, rclone ls test-cache:album/Gladlees is now wicked fast.

But rclone copy test-cache:album/Gladlees/texas-billyjanedarryljimmy.jpg /Users/dlee/Pictures/gladlees/ still takes over 5 minutes, and it looks like it's still making a list call:

2020/11/26 02:23:29 DEBUG : Google Photos path "album/Gladlees": List: dir=""

Any ideas on how this might be resolved?

Oh, and to top it off, the local copy of texas-billyjanedarryljimmy.jpg is 0 blocks. :-{

It's super helpful to use the help and support template, as it collects all the right information and saves us from having to re-ask for everything that was left out. If you can fill that out, that's really the best way to make progress.

Thanks!

The cache backend probably doesn't like the unknown sizes of the Google Photos files.

Yes, it will be doing a full listing for each copy. Unfortunately there is no API to look up files by file name :(

What you want is something to cache the Google Photos listing. I don't think the cache backend will work here, but you could try using rclone mount.

So mount your gphotos as a local directory. Note that all the files within will appear as 0 length. However, you can still copy them out; you'll just have to use the right copying program.

If I use cp then it works fine. Other programs may copy 0 length files - you'll have to experiment.

rclone mount will cache the albums (and other directory listings) for the duration set by:

  --dir-cache-time duration                Time to cache directory entries for. (default 5m0s)
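
For example, something along these lines should keep the album listing cached for a day between copies (the 24h value is just an illustrative choice, and I've reused the paths from this thread):

rclone mount goog:album/Gladlees /Users/dlee/Pictures/gladmount/ --dir-cache-time 24h

cp /Users/dlee/Pictures/gladmount/texas-billyjanedarryljimmy.jpg /Users/dlee/Pictures/gladlees/

With the mount left running, only the first access pays the cost of the full listing; later copies within the cache window go straight to the download.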

PS I think running rclone rcd and using the rc API to copy files would work too.
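
Untested sketch of what I mean: start the daemon once, then issue copies over the remote control API, so one long-running backend (and its in-memory listing) is shared by all the copies:

rclone rcd --rc-no-auth

rclone rc operations/copyfile srcFs=goog:album/Gladlees srcRemote=texas-billyjanedarryljimmy.jpg dstFs=/Users/dlee/Pictures/gladlees dstRemote=texas-billyjanedarryljimmy.jpg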

Hey thanks Nick! I'm trying rcd, but unfortunately it doesn't appear to perform any faster.

I'll give mount a try.

So I tried mounting just the one album:
rclone cmount goog:album/Gladlees /Users/dlee/Pictures/gladmount/ --vfs-cache-mode full -vv --cache-dir /Users/dlee/Pictures/gladcache

There's a promising start, then it tries to get a directory listing:

2020/11/26 14:52:11 DEBUG : rclone: Version "v1.53.3" starting with parameters ["rclone" "cmount" "goog:album/Gladlees" "/Users/dlee/Pictures/gladmount/" "--vfs-cache-mode" "full" "-vv" "--cache-dir" "/Users/dlee/Pictures/gladcache"]
2020/11/26 14:52:11 DEBUG : Creating backend with remote "goog:album/Gladlees"
2020/11/26 14:52:11 DEBUG : Using config file from "/Users/dlee/.config/rclone/rclone.conf"
2020/11/26 14:52:11 INFO : Google Photos path "album/Gladlees": poll-interval is not supported by this remote
2020/11/26 14:52:11 DEBUG : vfs cache: root is "/Users/dlee/Pictures/gladcache/vfs/goog/album/Gladlees"
2020/11/26 14:52:11 DEBUG : vfs cache: metadata root is "/Users/dlee/Pictures/gladcache/vfs/goog/album/Gladlees"
2020/11/26 14:52:11 DEBUG : Creating backend with remote "/Users/dlee/Pictures/gladcache/vfs/goog/album/Gladlees"
2020/11/26 14:52:11 DEBUG : Google Photos path "album/Gladlees": Mounting on "/Users/dlee/Pictures/gladmount/"
2020/11/26 14:52:11 DEBUG : Google Photos path "album/Gladlees": Mounting with options: ["-o" "fsname=goog:album/Gladlees" "-o" "subtype=rclone" "-o" "max_readahead=131072" "-o" "attr_timeout=1" "-o" "atomic_o_trunc" "-o" "noappledouble" "-o" "volname=goog album Gladlees"]
2020/11/26 14:52:11 INFO : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2020/11/26 14:52:11 DEBUG : Google Photos path "album/Gladlees": Init:
2020/11/26 14:52:11 DEBUG : Google Photos path "album/Gladlees": >Init:
2020/11/26 14:52:11 DEBUG : /: Statfs:
2020/11/26 14:52:11 DEBUG : /: >Statfs: stat={Bsize:4096 Frsize:4096 Blocks:4294967295 Bfree:4294967295 Bavail:4294967295 Files:1000000000 Ffree:1000000000 Favail:0 Fsid:0 Flag:0 Namemax:255}, errc=0
...
2020/11/26 14:52:11 DEBUG : Google Photos path "album/Gladlees": List: dir=""
2020/11/26 14:52:15 DEBUG : /: Statfs:
2020/11/26 14:52:15 DEBUG : /: >Statfs: stat={Bsize:4096 Frsize:4096 Blocks:4294967295 Bfree:4294967295 Bavail:4294967295 Files:1000000000 Ffree:1000000000 Favail:0 Fsid:0 Flag:0 Namemax:255}, errc=0
2020/11/26 14:53:11 INFO : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2020/11/26 14:54:11 INFO : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2020/11/26 14:54:37 DEBUG : Google Photos path "album/Gladlees": >List: err=

I then see all the files, but then, for unclear reasons, the mount gets destroyed:

2020/11/26 14:59:25 DEBUG : z znd lukie.jpg: >Size:
2020/11/26 14:59:25 DEBUG : /.localized: >Getattr: errc=-2
2020/11/26 14:59:25 DEBUG : Google Photos path "album/Gladlees": Destroy:
2020/11/26 14:59:25 DEBUG : Google Photos path "album/Gladlees": >Destroy:
2020/11/26 14:59:25 DEBUG : Calling host.Unmount
2020/11/26 14:59:25 DEBUG : host.Unmount failed
2020/11/26 14:59:25 DEBUG : rclone: Version "v1.53.3" finishing with parameters ["rclone" "cmount" "goog:album/Gladlees" "/Users/dlee/Pictures/gladmount/" "--vfs-cache-mode" "full" "-vv" "--cache-dir" "/Users/dlee/Pictures/gladcache"]

Hmm, I wonder why?

Anything in the system logs?

Ah, I think it might be related to software my company requires: "Webroot SecureAnywhere for Mac".

When I turned that off and tried cmount again I got prompted to approve some security setting.

But then... after about 2-3 minutes I saw the files listed in the DEBUG output, and again the mount ended up getting destroyed. There was nothing in the log files that I could find.

Anyways, I ended up gritting my teeth and hacking together the barest minimum of Python to dump all 17,000+ baseUrls from my album to a text file, which takes about 2-3 minutes. I then used this one-liner (made possible after installing GNU coreutils with brew):

shuf /Users/dlee/workspace/gladphotos/gladurls.txt | head -50 | gxargs -d '\n' -L1 -I{} curl -O -J "{}=w1024"
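
For anyone curious, the guts of the Python are roughly this (a sketch: ACCESS_TOKEN and ALBUM_ID are placeholders you'd get from the Library API's OAuth flow and the albums endpoint):

import requests

ACCESS_TOKEN = "ya29.xxx"  # placeholder: OAuth token with photoslibrary.readonly scope
ALBUM_ID = "AAAxyz"        # placeholder: album ID from the albums endpoint

SEARCH_URL = "https://photoslibrary.googleapis.com/v1/mediaItems:search"
headers = {"Authorization": "Bearer " + ACCESS_TOKEN}
body = {"albumId": ALBUM_ID, "pageSize": 100}  # 100 is the search API's max page size

with open("gladurls.txt", "w") as out:
    while True:
        resp = requests.post(SEARCH_URL, headers=headers, json=body)
        resp.raise_for_status()
        data = resp.json()
        # Each media item carries a short-lived baseUrl for downloading it.
        for item in data.get("mediaItems", []):
            out.write(item["baseUrl"] + "\n")
        if "nextPageToken" not in data:
            break
        body["pageToken"] = data["nextPageToken"]

One caveat: baseUrls expire after roughly 60 minutes, so the curl step has to run soon after the dump.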

I suppose I could save some time by saving the list of mediaItems from my search, grabbing a page at a time until I hit one that I already have, then randomly picking 50 items and doing an individual mediaItems call for each one.
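
Roughly like this, maybe (untested sketch, same placeholder token and album ID as above; it assumes new photos show up at the front of the search results):

import random
import requests

ACCESS_TOKEN = "ya29.xxx"  # placeholder, as above
ALBUM_ID = "AAAxyz"        # placeholder, as above
SEARCH_URL = "https://photoslibrary.googleapis.com/v1/mediaItems:search"
ITEM_URL = "https://photoslibrary.googleapis.com/v1/mediaItems/"
headers = {"Authorization": "Bearer " + ACCESS_TOKEN}

# mediaItem IDs are stable (unlike baseUrls), so they're worth saving between runs.
known = set(line.strip() for line in open("glad_ids.txt"))

# Page through the album until a page contains an item we've already seen.
body = {"albumId": ALBUM_ID, "pageSize": 100}
while True:
    data = requests.post(SEARCH_URL, headers=headers, json=body).json()
    items = data.get("mediaItems", [])
    fresh = [i["id"] for i in items if i["id"] not in known]
    known.update(fresh)
    if len(fresh) < len(items) or "nextPageToken" not in data:
        break
    body["pageToken"] = data["nextPageToken"]

with open("glad_ids.txt", "w") as out:
    out.write("\n".join(sorted(known)) + "\n")

# Refresh the baseUrl of 50 random picks with individual mediaItems calls.
for mid in random.sample(sorted(known), 50):
    item = requests.get(ITEM_URL + mid, headers=headers).json()
    print(item["baseUrl"] + "=w1024")

That way each run costs 50 individual calls plus however many pages of new items, instead of re-listing all 170+ pages every time.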

Anyway, thanks for the ideas on whether rclone would work for this. I am using rclone sync to grab a much smaller, curated album of "Favorites", and it works great!

I'm glad you got it sorted and sorry rclone didn't work for you this time.

I don't know why your mount got killed - that isn't normal!

Thanks so much for helping out, Nick. I wanted to see if I could at least contribute something to this project, so I tried cmount again with a smaller album, and it was successful (it didn't die), but when I tried to copy a file locally, I ended up with a zero-length file. I opened a separate topic for that:

Cannot copy files from Google Photos mount on OS X (Mojave)

(I can't link, apparently.)
