Rclone stuck retrieving some video files from Google Photos

What is the problem you are having with rclone?

I'm trying to use rclone to do a backup of my Google Photos collection on a local directory.

Everything seems to go fine (i.e. it downloaded ~36GB of files, and I can see photos and videos in local directory tree as I expected), but it seems that rclone gets stuck copying some video files.

When I run it with the --progress flag it keeps saying "transferring" for the same files, even after several hours, like this:

Transferred:             0 / 0 Bytes, -, 0 Bytes/s, ETA -
Errors:                 0
Checks:             25916 / 25916, 100%
Transferred:            0 / 13, 0%
Elapsed time:     2h5m47.6s
Transferring:
 * 2019/2019-09/GH014601_…8306105616399_high.MP4: transferring
 * 2019/2019-09/GH014603_…8306105616399_high.MP4: transferring
 * 2019/2019-10/GH014626_…8306105616399_high.MP4: transferring
 * 2019/2019-10/GH014627_…8306105616399_high.MP4: transferring

It always seems to stop on the same files. I identified some of them on Google Photos, and they are mostly short clips (~30 s, often less than 50 MB in size).

When I exclude .mp4 and .mov files with something like --exclude '*.MP4', the job completes nicely.
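
For example, a filter set like this lets the sync finish (patterns quoted so the shell doesn't expand them; as far as I can tell rclone filters are case-sensitive, so I list both cases):

rclone sync GoogleFoto:media/by-month '/srv/storage/Google Photo/media/' --exclude '*.MP4' --exclude '*.mp4' --exclude '*.MOV' --exclude '*.mov'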

Is there a way to solve the problem, or at least to skip the troublesome files and get a list of them, so the job can complete and I can retrieve the rest manually?

What is your rclone version (output from rclone version)

$ rclone version
rclone v1.49.5
- os/arch: linux/amd64
- go version: go1.12.10

Which OS you are using and how many bits (eg Windows 7, 64 bit)

$ hostnamectl status
    ...
   Operating System: Ubuntu 18.04.3 LTS
   Kernel: Linux 4.15.0-65-generic
   Architecture: x86-64

Which cloud storage system are you using? (eg Google Drive)

Google Photos

The command you were trying to run (eg rclone copy /tmp remote:tmp)

flock -n '/srv/storage/Google Photo/media/' rclone sync GoogleFoto:media/by-month '/srv/storage/Google Photo/media/'

(I'm running rclone from cron, and using flock to lock the destination directory so that multiple concurrent processes don't run the same sync)

A log from the command with the -vv flag (eg output from rclone -vv copy /tmp remote:tmp)

Here's an excerpt from the log (I can put the full log on pastebin if needed):

2019/10/24 12:51:24 DEBUG : pacer: low level retry 1/10 (error  (500 500 Internal Server Error))
2019/10/24 12:51:24 DEBUG : pacer: Rate limited, increasing sleep to 16.626225582s
2019/10/24 12:51:42 DEBUG : pacer: low level retry 10/10 (error  (500 500 Internal Server Error))
2019/10/24 12:51:42 DEBUG : pacer: Rate limited, increasing sleep to 16.800405089s
2019/10/24 12:51:42 DEBUG : 2019/2019-09/InShot_20190924_162218079.mp4: >Open:
2019/10/24 12:51:42 DEBUG : 2019/2019-09/InShot_20190924_162218079.mp4: Received error: failed to open source object:  (500 500 Internal Server Error) - low level retry 1/10
2019/10/24 12:51:42 DEBUG : 2019/2019-09/InShot_20190924_162218079.mp4: Size:
2019/10/24 12:51:42 DEBUG : 2019/2019-09/InShot_20190924_162218079.mp4: >Size:
2019/10/24 12:51:42 DEBUG : 2019/2019-09/InShot_20190924_162218079.mp4: Open:
2019/10/24 12:51:57 DEBUG : pacer: low level retry 10/10 (error  (500 500 Internal Server Error))
2019/10/24 12:51:57 DEBUG : pacer: Rate limited, increasing sleep to 16.751940337s
2019/10/24 12:51:57 DEBUG : 2019/2019-09/GH014601_ALTA-1643738306105616399_high.MP4: >Open:
2019/10/24 12:51:57 DEBUG : 2019/2019-09/GH014601_ALTA-1643738306105616399_high.MP4: Received error: failed to open source object:  (500 500 Internal Server Error) - low level retry 1/10
2019/10/24 12:51:57 DEBUG : 2019/2019-09/GH014601_ALTA-1643738306105616399_high.MP4: Size:
2019/10/24 12:51:57 DEBUG : 2019/2019-09/GH014601_ALTA-1643738306105616399_high.MP4: >Size:
2019/10/24 12:51:57 DEBUG : 2019/2019-09/GH014601_ALTA-1643738306105616399_high.MP4: Open:
2019/10/24 12:52:14 DEBUG : pacer: low level retry 10/10 (error  (500 500 Internal Server Error))
2019/10/24 12:52:14 DEBUG : pacer: Rate limited, increasing sleep to 16.391133203s
2019/10/24 12:52:14 DEBUG : 2019/2019-09/GH014603_ALTA-1643738306105616399_high.MP4: >Open:
2019/10/24 12:52:14 DEBUG : 2019/2019-09/GH014603_ALTA-1643738306105616399_high.MP4: Received error: failed to open source object:  (500 500 Internal Server Error) - low level retry 1/10
2019/10/24 12:52:14 DEBUG : 2019/2019-09/GH014603_ALTA-1643738306105616399_high.MP4: Size:
2019/10/24 12:52:14 DEBUG : 2019/2019-09/GH014603_ALTA-1643738306105616399_high.MP4: >Size:
2019/10/24 12:52:14 DEBUG : 2019/2019-09/GH014603_ALTA-1643738306105616399_high.MP4: Open:
2019/10/24 12:52:19 INFO  :
Transferred:       56.043M / 56.043 MBytes, 100%, 191.075 kBytes/s, ETA 0s
Errors:                 0
Checks:             26466 / 26466, 100%
Transferred:          118 / 196, 60%
Elapsed time:      5m0.3s
Transferring:
 *          2019/2019-09/VID_20190920_104807.mp4: transferring
 *    2019/2019-09/InShot_20190924_162218079.mp4: transferring
 * 2019/2019-09/GH014601_…8306105616399_high.MP4: transferring
 * 2019/2019-09/GH014603_…8306105616399_high.MP4: transferring

2019/10/24 12:52:31 DEBUG : pacer: low level retry 2/10 (error  (500 500 Internal Server Error))
2019/10/24 12:52:31 DEBUG : pacer: Rate limited, increasing sleep to 16.049753207s
2019/10/24 12:52:49 DEBUG : pacer: low level retry 1/10 (error  (500 500 Internal Server Error))
2019/10/24 12:52:49 DEBUG : pacer: Rate limited, increasing sleep to 16.205664469s
2019/10/24 12:53:03 DEBUG : pacer: low level retry 1/10 (error  (500 500 Internal Server Error))
2019/10/24 12:53:03 DEBUG : pacer: Rate limited, increasing sleep to 16.4273424s
2019/10/24 12:53:19 DEBUG : pacer: low level retry 1/10 (error  (500 500 Internal Server Error))
2019/10/24 12:53:19 DEBUG : pacer: Rate limited, increasing sleep to 16.333723774s

It looks like this is causing the problem...

Internal server errors are causing retries and the retries are taking a very long time...

You could try setting --low-level-retries 1 to get the bulk of the sync done quickly. Then just run the sync again to pick up the missed transfers.
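
For example, something like this (substituting your own remote and paths):

rclone --low-level-retries 1 sync GoogleFoto:media/by-month '/srv/storage/Google Photo/media/'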

Thanks, this seems to solve my problem with files in the media tree - the job completes nicely.

However, I still have problems when I try to sync the album tree :frowning:

Running this command:

rclone -vv --low-level-retries 1 sync GoogleFoto:album/ '/srv/storage/Google Photo/album/'

I get errors like these:

2019/10/26 00:54:01 DEBUG : La Nostra Famiglia/GH014627_ALTA-1643738306105616399_high~2.mp4: >Open:
2019/10/26 00:54:01 ERROR : La Nostra Famiglia/GH014627_ALTA-1643738306105616399_high~2.mp4: Failed to copy: failed to open source object:  (500 500 Internal Server Error)
2019/10/26 00:54:01 DEBUG : La Nostra Famiglia/GH014646_ALTA-1643738306105616399_high.MP4: Size:
2019/10/26 00:54:01 DEBUG : La Nostra Famiglia/GH014646_ALTA-1643738306105616399_high.MP4: >Size:
2019/10/26 00:54:01 DEBUG : La Nostra Famiglia/GH014646_ALTA-1643738306105616399_high.MP4: Size:
2019/10/26 00:54:01 DEBUG : La Nostra Famiglia/GH014646_ALTA-1643738306105616399_high.MP4: >Size:
2019/10/26 00:54:01 DEBUG : La Nostra Famiglia/GH014646_ALTA-1643738306105616399_high.MP4: Size:
2019/10/26 00:54:01 DEBUG : La Nostra Famiglia/GH014646_ALTA-1643738306105616399_high.MP4: >Size:
2019/10/26 00:54:01 DEBUG : La Nostra Famiglia/GH014646_ALTA-1643738306105616399_high.MP4: Open:
2019/10/26 00:54:17 DEBUG : pacer: low level retry 1/1 (error  (500 500 Internal Server Error))
2019/10/26 00:54:17 DEBUG : pacer: Rate limited, increasing sleep to 16.728347088s
2019/10/26 00:54:17 DEBUG : La Nostra Famiglia/GH014628_ALTA-1643738306105616399_high.MP4: >Open:
2019/10/26 00:54:17 ERROR : La Nostra Famiglia/GH014628_ALTA-1643738306105616399_high.MP4: Failed to copy: failed to open source object:  (500 500 Internal Server Error)
2019/10/26 00:54:17 DEBUG : La Nostra Famiglia/GH014671_ALTA-1643738306105616399_high.MP4: Size:
2019/10/26 00:54:17 DEBUG : La Nostra Famiglia/GH014671_ALTA-1643738306105616399_high.MP4: >Size:
2019/10/26 00:54:17 DEBUG : La Nostra Famiglia/GH014671_ALTA-1643738306105616399_high.MP4: Size:
2019/10/26 00:54:17 DEBUG : La Nostra Famiglia/GH014671_ALTA-1643738306105616399_high.MP4: >Size:
2019/10/26 00:54:17 DEBUG : La Nostra Famiglia/GH014671_ALTA-1643738306105616399_high.MP4: Size:
2019/10/26 00:54:17 DEBUG : La Nostra Famiglia/GH014671_ALTA-1643738306105616399_high.MP4: >Size:
2019/10/26 00:54:17 DEBUG : La Nostra Famiglia/GH014671_ALTA-1643738306105616399_high.MP4: Open:
2019/10/26 00:54:22 INFO  :
Transferred:             0 / 0 Bytes, -, 0 Bytes/s, ETA -
Errors:               166 (retrying may help)
Checks:             16739 / 16739, 100%
Transferred:            0 / 21, 0%
Elapsed time:     3m33.9s
Transferring:
 * La Nostra Famiglia/GH0…8306105616399_high.MP4: transferring
 * La Nostra Famiglia/GH0…8306105616399_high.MP4: transferring
 * La Nostra Famiglia/GH0…8306105616399_high.MP4: transferring
 * La Nostra Famiglia/GH0…8306105616399_high.MP4: transferring

2019/10/26 00:54:34 DEBUG : pacer: low level retry 1/1 (error  (500 500 Internal Server Error))
2019/10/26 00:54:34 DEBUG : pacer: Rate limited, increasing sleep to 16.200953499s
2019/10/26 00:54:34 DEBUG : La Nostra Famiglia/GH014630_ALTA-1643738306105616399_high.MP4: >Open:
2019/10/26 00:54:34 ERROR : La Nostra Famiglia/GH014630_ALTA-1643738306105616399_high.MP4: Failed to copy: failed to open source object:  (500 500 Internal Server Error)
2019/10/26 00:54:34 DEBUG : La Nostra Famiglia/GHAA4645_ALTA-1643738306105616399_high.MP4: Size:
2019/10/26 00:54:34 DEBUG : La Nostra Famiglia/GHAA4645_ALTA-1643738306105616399_high.MP4: >Size:
2019/10/26 00:54:34 DEBUG : La Nostra Famiglia/GHAA4645_ALTA-1643738306105616399_high.MP4: Size:
2019/10/26 00:54:34 DEBUG : La Nostra Famiglia/GHAA4645_ALTA-1643738306105616399_high.MP4: >Size:
2019/10/26 00:54:34 DEBUG : La Nostra Famiglia/GHAA4645_ALTA-1643738306105616399_high.MP4: Size:
2019/10/26 00:54:34 DEBUG : La Nostra Famiglia/GHAA4645_ALTA-1643738306105616399_high.MP4: >Size:
2019/10/26 00:54:34 DEBUG : La Nostra Famiglia/GHAA4645_ALTA-1643738306105616399_high.MP4: Open:
2019/10/26 00:54:50 DEBUG : pacer: low level retry 1/1 (error  (500 500 Internal Server Error))
2019/10/26 00:54:50 DEBUG : pacer: Rate limited, increasing sleep to 16.491398609s
2019/10/26 00:54:50 DEBUG : La Nostra Famiglia/GH014642_ALTA-1643738306105616399_high.MP4: >Open:
2019/10/26 00:54:50 ERROR : La Nostra Famiglia/GH014642_ALTA-1643738306105616399_high.MP4: Failed to copy: failed to open source object:  (500 500 Internal Server Error)
2019/10/26 00:54:50 DEBUG : La Nostra Famiglia/InShot_20190809_224155530.mp4: Size:
2019/10/26 00:54:50 DEBUG : La Nostra Famiglia/InShot_20190809_224155530.mp4: >Size:
2019/10/26 00:54:50 DEBUG : La Nostra Famiglia/InShot_20190809_224155530.mp4: Size:
2019/10/26 00:54:50 DEBUG : La Nostra Famiglia/InShot_20190809_224155530.mp4: >Size:
2019/10/26 00:54:50 DEBUG : La Nostra Famiglia/InShot_20190809_224155530.mp4: Size:
2019/10/26 00:54:50 DEBUG : La Nostra Famiglia/InShot_20190809_224155530.mp4: >Size:
2019/10/26 00:54:50 DEBUG : La Nostra Famiglia/InShot_20190809_224155530.mp4: Open:

Any idea how I can solve this?

This is a Google server problem...

I'm not sure what is going on here. I can only suggest trying again later.

Is the file very big?

Not really - the biggest of those two videos is a 20 s clip that weighs 10.5 MB. :thinking:

Most of my clips are shorter than 1 minute. I can't tell for sure at the moment, but I think the biggest one is no bigger than 100 MB. Not sure if it's relevant: a few of them are MP4s from my smartphone, but most come from my GoPro (downloaded with the GoPro app, then uploaded by the Google Photos app on Android).

At the moment I'm building a list of the files that generate a "500 Internal Server Error" by grepping the output of rclone, so I can try to download them some other way later on.
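
In case it's useful to others, this is roughly what I'm doing (rclone logs to stderr; sync.log and failed-files.txt are just names I picked):

rclone -v --low-level-retries 1 sync GoogleFoto:album/ '/srv/storage/Google Photo/album/' 2> sync.log
grep '500 Internal Server Error' sync.log | sed -n 's/^.* ERROR : \(.*\): Failed to copy.*/\1/p' > failed-files.txt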

In the meantime, do you know if there is a way to prevent those RESOURCE_EXHAUSTED errors? I guess there should be a way to "slow down" rclone's requests somehow - is there?

2019/10/25 22:04:27 INFO  : Local file system at /srv/storage/Google Photo/media: Waiting for transfers to finish
2019/10/25 22:04:27 ERROR : Local file system at /srv/storage/Google Photo/media: not deleting files as there were IO errors
2019/10/25 22:04:27 ERROR : Local file system at /srv/storage/Google Photo/media: not deleting directories as there were IO errors
2019/10/25 22:04:27 ERROR : Attempt 3/3 failed with 17 errors and: march failed with 16 error(s): first error: couldn't list files: Quota exceeded for quota metric 'photoslibrary.googleapis.com/read_requests' and limit 'ReadsPerMinutePerUser' of service 'photoslibrary.googleapis.com' for consumer 'project_number:202264815644'. (429 RESOURCE_EXHAUSTED)
2019/10/25 22:04:27 Failed to sync with 17 errors: last error was: march failed with 16 error(s): first error: couldn't list files: Quota exceeded for quota metric 'photoslibrary.googleapis.com/read_requests' and limit 'ReadsPerMinutePerUser' of service 'photoslibrary.googleapis.com' for consumer 'project_number:202264815644'. (429 RESOURCE_EXHAUSTED)

Thanks a lot for your help, I really appreciate it!!!

Rclone should try them again for you if you re-run the command.

Yes, you can add the --tpslimit flag: --tpslimit 2 would do 2 transactions per second. Doing that will probably fix the problem :crossed_fingers:

Are you using rclone's Google Photos credentials or your own? The default allocation is very low if you are using your own.

OK. I did a few more attempts, and found that the problem always affects the same 115 files when I try to back up albums. I can download them manually from the web interface, but not with the rclone copy command (I get the same 500 Internal Server Error).

Adding the --tpslimit flag helped: it prevented the RESOURCE_EXHAUSTED errors, and the job finished cleanly in less time (under half an hour instead of ~2h).

Sorry, I'm not sure about this point - how can I check?

Thanks again!

It must be something about those files at Google. The "500 Internal Server Error" is indicative of that. So I'd say this is a bug in the Google Photos API.

If your config just looks like this, then you are using rclone's credentials. You most likely are, as you would remember getting your own - it is a bit of a pain!

$ rclone config show gphotos
--------------------
[gphotos]
type = google photos
token = {"access_token":"XXX"}
--------------------
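
Whereas if you had set up your own credentials, I'd expect the output to also contain lines like these (values redacted here):

client_id = XXX.apps.googleusercontent.com
client_secret = XXX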

Just found a Google Photos issue which looks relevant. So it looks like Google are on the case.

https://issuetracker.google.com/issues/141255600

So, I confirm I'm using rclone's default credentials! :sweat_smile:

Thanks for your time! So I guess we just need to wait for them to solve the issue, right?

I think I can wait: after all, I've already got ~60 GB of my photos and videos backed up thanks to rclone, so that "small" number of videos can wait a while :wink:


For future reference and for others, the commands I'm using right now in my crontab to minimise problems are:

flock -n '/srv/storage/Google Photo/album/' rclone --tpslimit 2 --low-level-retries 1 sync GoogleFoto:album/ '/srv/storage/Google Photo/album/' > /dev/null
flock -n '/srv/storage/Google Photo/media/' rclone --tpslimit 2 --low-level-retries 1 sync GoogleFoto:media/by-month '/srv/storage/Google Photo/media/' > /dev/null
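
The actual crontab entries look something like this (the schedule itself is just an example - use whatever suits you):

# m h dom mon dow   command
30 3 * * * flock -n '/srv/storage/Google Photo/media/' rclone --tpslimit 2 --low-level-retries 1 sync GoogleFoto:media/by-month '/srv/storage/Google Photo/media/' > /dev/null
30 4 * * * flock -n '/srv/storage/Google Photo/album/' rclone --tpslimit 2 --low-level-retries 1 sync GoogleFoto:album/ '/srv/storage/Google Photo/album/' > /dev/null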