Alternative to Sync to Limit Transactions?

This is more of a general question. Most bucket-based storage has limits on, and/or charges for, certain sync transactions. The --fast-list option was introduced by @ncw to reduce these transactions.

My question is about an alternative to this. This potential method would get around the transaction limit. Say you have a folder that you have made changes to. You sync that folder locally to another folder or drive, and during that sync rclone keeps track of all the delete and upload transactions that were performed. Upload and delete transactions are generally not limited in bucket-based systems. Once you have the list of transactions that were undertaken, you replay all of those transactions against your cloud storage.

Is this possible? Or if it is not currently, is it on the roadmap? If this was not clear, please let me know and I will try to elaborate as best I can.
Thanks in advance.

Hmm, interesting idea!

You could potentially do an rclone check between the two directories and parse the output into a set of files which need uploading.
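As a sketch of that parsing step: rclone check has a --combined flag that prints one line per file, prefixed with "=" (identical), "*" (differs), "+" (only on source), "-" (only on destination) or "!" (error). The helper below is an illustration, not part of rclone, and assumes that report format.

```python
# Sketch: split an `rclone check --combined -` report into files that
# need uploading and files that need deleting on the remote.
def parse_combined_report(report: str):
    to_upload, to_delete = [], []
    for line in report.splitlines():
        if not line.strip():
            continue
        flag, _, path = line.partition(" ")
        if flag in ("+", "*"):      # new or changed locally -> upload
            to_upload.append(path)
        elif flag == "-":           # gone locally -> delete remotely
            to_delete.append(path)
    return to_upload, to_delete

if __name__ == "__main__":
    sample = "= unchanged.txt\n+ new.txt\n* changed.txt\n- removed.txt"
    up, rm = parse_combined_report(sample)
    print(up)  # ['new.txt', 'changed.txt']
    print(rm)  # ['removed.txt']
```

You could then feed those lists to rclone copy --files-from and rclone delete --files-from respectively.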

The logical conclusion of this is to use a local database which is what the cache backend does for you.

You can do something a bit similar like this... First, use lsf to find local files that have been modified recently:

rclone lsf /path/to/dir --files-only --max-age 1h > files

then copy those files to the remote:

rclone copy /path/to/dir remote:bucket --files-from files

That will do an incremental copy. It won't delete deleted files so you'll need a periodic sync to pick those up.
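Putting the two commands together, the pattern could be scheduled like this. This is only a sketch: the paths, the schedule, and the 70m overlap are placeholder choices, not a recommendation from the rclone docs.

```shell
# Hourly incremental copy of recently modified files.
# --max-age a bit larger than the interval (70m vs 60m) so files
# modified while a run is in progress are not missed.
0 * * * *  rclone lsf /path/to/dir --files-only --max-age 70m > /tmp/files && \
           rclone copy /path/to/dir remote:bucket --files-from /tmp/files

# Nightly full sync to pick up the deletions the incremental copies miss.
0 3 * * *  rclone sync /path/to/dir remote:bucket
```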


Thanks @ncw for your reply and all your work on rclone!

You understand the idea. We could use a low-maintenance database like SQLite to record the upload and delete transactions, and then confirm in the database once those transactions have been performed on the cloud storage.

Besides reducing restricted transactions, this approach could speed things up by performing the upload and delete transactions in parallel, since they are idempotent. I don't think upload (put) and delete transactions would conflict with each other.
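A minimal sketch of that journal idea, assuming a hypothetical wrapper script around rclone (none of this is an rclone feature): each planned upload or delete is recorded in SQLite first, then marked done once the remote call succeeds, so an interrupted run can resume without re-listing the bucket.

```python
import sqlite3

# Sketch of a transaction journal: record planned operations up front,
# mark each one done only after the remote call succeeds.
def open_journal(path=":memory:"):
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS ops (
        id INTEGER PRIMARY KEY,
        action TEXT NOT NULL CHECK (action IN ('upload', 'delete')),
        file TEXT NOT NULL,
        done INTEGER NOT NULL DEFAULT 0)""")
    return db

def plan(db, action, file):
    db.execute("INSERT INTO ops (action, file) VALUES (?, ?)", (action, file))
    db.commit()

def pending(db):
    return db.execute(
        "SELECT id, action, file FROM ops WHERE done = 0").fetchall()

def mark_done(db, op_id):
    db.execute("UPDATE ops SET done = 1 WHERE id = ?", (op_id,))
    db.commit()

if __name__ == "__main__":
    db = open_journal()
    plan(db, "upload", "new.txt")
    plan(db, "delete", "removed.txt")
    for op_id, action, file in pending(db):
        # ...perform the remote call here (e.g. shell out to rclone)...
        mark_done(db, op_id)
    print(pending(db))  # empty once every operation is confirmed
```

Since uploads and deletes touch disjoint paths, the pending operations could be dispatched to a worker pool in parallel, as suggested above.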

Happy to help bring this feature about. The lsf method seems interesting and I will play with it. But unfortunately, for my use case, I will have to delete regularly.

You can try the cache backend, which should do what you want. It was optimized for video files, so it may not work so well with lots of smaller files.

You can do this already in rclone with --delete-during. The default, --delete-after, is the slow, safe choice.
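For reference, the two deletion modes look like this (the paths are placeholders):

```shell
# Delete remote files while the sync is scanning - faster, less safe:
rclone sync /path/to/dir remote:bucket --delete-during

# Default behaviour - delete only after the copies have finished:
rclone sync /path/to/dir remote:bucket --delete-after
```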

