What is the problem you are having with rclone?
I'm migrating my photo archive from Google Photos to an S3 service, and I'm using rclone to manage the files. I'm not transferring out of Google Photos with rclone itself; instead I've downloaded a Google Takeout archive containing my originals (or "high quality" media), and I also have many of the originals stored locally.
Now that I have uploaded a lot of files to the S3 service, I want to deduplicate across the two services: any file that already exists on the S3 service can safely be removed from Google Photos. The dedupe command looked promising, but it only appears to delete duplicates within a single remote.
What would be the best way to approach this? Looking through rclone's functionality, I see there are two virtual backends, combine and union. Could I use combine to create a single remote that encompasses both my S3 remote and my Google Photos remote, and then run a dedupe by name on that, so that a file is removed from Google Photos if a file with the same name also exists on the S3 service? I understand that Google Photos can't match by hash, and my originals may differ in size and hash from the files stored on Google Photos anyway.
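For reference, this is roughly the setup I have in mind. The remote names (`s3remote`, `gphotos`) and paths are placeholders for my actual configuration, and I'm not sure the dedupe step would even treat files in the two upstreams as duplicates of each other, which is part of my question:

```shell
# Hypothetical rclone.conf fragment combining my two existing remotes
# (remote names and paths below are placeholders):
#
#   [both]
#   type = combine
#   upstreams = "s3=s3remote:photos" "gphotos=gphotos:media"
#
# Then a dry run first to see what dedupe would consider a duplicate:
rclone dedupe --dry-run both:
```

My worry is that the combine backend puts each upstream under its own subdirectory, so identically named files on the two services might not end up in the same directory and would not be seen as duplicates at all.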
Run the command 'rclone version' and share the full output of the command.
$ rclone version
rclone v1.66.0
- os/version: ubuntu 22.04 (64 bit)
- os/kernel: 5.15.0-101-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.22.1
- go/linking: static
- go/tags: none