This is more of a general question. Most bucket-based storage has limits on, and/or charges for, certain sync transactions. The --fast-list option was introduced by @ncw to reduce these transactions.
My question is about an alternative to this, a potential method that would get around the transaction limit. Say you have a folder that you made changes to. You sync that folder locally to another folder or drive, and during that sync rclone keeps track of all the delete and upload transactions that were performed. Local transactions like these are generally not limited or billed the way bucket-based systems' API calls are. Once you have the list of transactions, you replay them all against your cloud storage.
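To make the idea concrete, here is a rough sketch in Python of how a transaction list could be derived from two directory listings. This is purely illustrative, not rclone's actual internals; the `Txn` type, the `snapshot`/`diff_transactions` helpers, and the size+mtime comparison are all my own assumptions.

```python
import os
from dataclasses import dataclass

@dataclass
class Txn:
    op: str    # "upload" or "delete" (hypothetical operation names)
    path: str  # relative path within the synced folder

def snapshot(root):
    """Map each relative file path under root to (size, mtime)."""
    out = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            full = os.path.join(dirpath, name)
            rel = os.path.relpath(full, root)
            st = os.stat(full)
            out[rel] = (st.st_size, int(st.st_mtime))
    return out

def diff_transactions(src_listing, mirror_listing):
    """Compare two listings and return the upload/delete transactions
    that syncing src to mirror would have to perform."""
    txns = []
    for path, meta in src_listing.items():
        if mirror_listing.get(path) != meta:
            txns.append(Txn("upload", path))   # new or changed file
    for path in mirror_listing:
        if path not in src_listing:
            txns.append(Txn("delete", path))   # removed from source
    return txns
```

The resulting list could then be replayed against the remote, one API call per changed object, instead of re-listing the whole bucket to work out what changed.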
Is this possible? Or if it is not currently, is it on the roadmap? If this was not clear, please let me know and I will try to elaborate as best I can.
Thanks in advance.
Thanks @ncw for your reply and all your work on rclone!
You understand the idea. We could use a low-maintenance database like SQLite to record the upload and delete transactions, and mark each transaction confirmed in the database once it has been performed on the cloud storage.
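A minimal sketch of such a journal, using Python's built-in sqlite3 module (the table and column names are mine, purely illustrative):

```python
import sqlite3

def open_journal(path=":memory:"):
    """Open (or create) the transaction journal."""
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS txns (
        id        INTEGER PRIMARY KEY,
        op        TEXT NOT NULL,               -- 'upload' or 'delete'
        path      TEXT NOT NULL,               -- object key on the remote
        confirmed INTEGER NOT NULL DEFAULT 0)""")
    return db

def record(db, op, path):
    """Record a transaction observed during the local sync."""
    db.execute("INSERT INTO txns (op, path) VALUES (?, ?)", (op, path))
    db.commit()

def confirm(db, txn_id):
    """Mark a transaction done once the cloud call has succeeded."""
    db.execute("UPDATE txns SET confirmed = 1 WHERE id = ?", (txn_id,))
    db.commit()

def pending(db):
    """Transactions still to be replayed (survives a crash/restart)."""
    return db.execute(
        "SELECT id, op, path FROM txns WHERE confirmed = 0 ORDER BY id"
    ).fetchall()
```

If the replay is interrupted partway through, only the unconfirmed rows need to be retried on the next run.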
Besides reducing restricted transactions, this approach could speed things up by performing the upload and delete transactions in parallel, since they are idempotent. I see no reason that upload (put) and delete transactions on different objects would conflict with each other.
Happy to help bring this feature about. The lsf method seems interesting and I will play with it. But unfortunately, for my use case, I will have to delete regularly.