Not sure if this is universal, but I have just started uploading my photos to Google Photos with rclone. What I have noticed is that, regardless of the number of transfers, my files (only CRW raw files at the moment) hang at 100% for 10-15 seconds, I assume while the backend confirms it has accepted the file.
Is there any way to stagger transfers, or could there be? That way, in these kinds of circumstances, the first few transfers would use the bandwidth and then hang around waiting for a "commit" while the next few transfers start, keeping the pipe a little more filled.
What I find at the moment is that even with 32 transfers, they all sit at 100% for a while, then one or two clear and new transfers start, then more complete, and so on.
This is a cool idea, especially with many identically-sized small files. Unfortunately, I think it would break down over the long term: with files of varying sizes, the staggered batches would gradually drift back into sync, and you could end up with periods where nothing is uploading at all.
Anyway, as of now, I think the answer is no. But if you have subdirectories, you could achieve something similar by copying each directory in a separate rclone run.
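Something like this is what I have in mind (a sketch only; the remote name `gphotos:` and the source path are placeholders for your setup). It prints one `rclone copy` command per subdirectory so you can eyeball the runs before actually executing them:

```shell
# Sketch: one rclone run per subdirectory, each with a small --transfers.
# Prints the commands instead of running them; pipe to sh (or append & to
# each line) once you're happy with what it would do.
stagger_by_dir() {
  src="$1"
  for dir in "$src"/*/; do
    [ -d "$dir" ] || continue        # skip if there are no subdirectories
    name=$(basename "$dir")
    echo rclone copy "$dir" "gphotos:$name" --transfers 4
  done
}
```

Running the copies concurrently (e.g. backgrounding each one) would give you the overlap you're after, since one run's "commit" wait doesn't block the others.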
Or, if this is Google Drive, you could try making a union of several team drives and then doing a server-side move of everything once it's uploaded. I haven't tested this, and make no warranties as to its usefulness.
If it acts like the Google Drive API, you can upload about 2-3 files per second, so having a lot of transfers would actually slow it down.
I would try limiting the number of transfers to something more manageable and see how that works. Try 8 or 4 and see how it goes and whether the delay is still there.
Can you run an upload and share the debug log so we can see the output? Use -vv on the command.
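For example, something along these lines (the remote name `gphotos:album` and the source path are placeholders; `--transfers`, `-vv`, and `--log-file` are standard rclone flags):

```shell
# Sketch of the command to run; printed here rather than executed,
# since the remote and paths need adjusting for your setup.
cmd="rclone copy /photos gphotos:album --transfers 4 -vv --log-file rclone-debug.log"
echo "$cmd"
```

Then attach `rclone-debug.log` (or the relevant part of it) here.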
This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.